Comments (19)
I had the same issue here.
You get this error because the default value of the phoenix.query.timeoutMs property is 10 minutes (https://github.com/forcedotcom/phoenix/wiki/Tuning).
What you can do is edit (or create) hbase-site.xml in /usr/lib/phoenix/bin/ and add the phoenix.query.timeoutMs parameter as follows (1 hour in my config):
<property>
<name>phoenix.query.timeoutMs</name>
<value>3600000</value>
</property>
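If editing hbase-site.xml on the client is not an option, the same property can usually be passed per connection through JDBC instead. A minimal sketch, assuming the Phoenix client jar is on the classpath; the quorum host ("zkhost") and table name are placeholders:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;

public class PhoenixTimeoutExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Raise the Phoenix query timeout to 1 hour for this connection only.
        props.setProperty("phoenix.query.timeoutMs", "3600000");
        // "zkhost" and MY_BIG_TABLE are placeholders; use your own quorum and table.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost", props);
             ResultSet rs = conn.createStatement()
                     .executeQuery("SELECT COUNT(*) FROM MY_BIG_TABLE")) {
            while (rs.next()) {
                System.out.println("row count = " + rs.getLong(1));
            }
        }
    }
}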
OK, thanks. I will try it and get back to you.
Thanks @charlesb, your solution worked for me.
I am facing the same error. I have set phoenix.query.timeoutMs but could not resolve it.
Below is the error. Please help.
Error: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Fri Jan 09 09:21:07 CST 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=62318: row '' on table 'PJM_DATASET' at region=PJM_DATASET,,1420633295836.4394a3aa2721f87f3e6216d20ebeec44., hostname=hadoopm1,60020,1420815633410, seqNum=34326 (state=08000,code=101)
Can you paste your hbase-site.xml?
What version of Hadoop (or which Hadoop stack: CDH, HDP, ...) are you using?
Hi, I am getting the same error. Please help.
OK, below is my HBase configuration:
<configuration>
<property>
<name>dfs.domain.socket.path</name>
<value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
<property>
<name>hbase.client.keyvalue.maxsize</name>
<value>10485760</value>
</property>
<property>
<name>hbase.client.scanner.caching</name>
<value>100</value>
</property>
<property>
<name>hbase.client.scanner.timeout.period</name>
<value>60000000</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>
<property>
<name>hbase.defaults.for.version.skip</name>
<value>true</value>
</property>
<property>
<name>hbase.hregion.majorcompaction</name>
<value>604800000</value>
</property>
<property>
<name>hbase.hregion.max.filesize</name>
<value>10737418240</value>
</property>
<property>
<name>hbase.hregion.memstore.block.multiplier</name>
<value>4</value>
</property>
<property>
<name>hbase.hregion.memstore.flush.size</name>
<value>536870912</value>
</property>
<property>
<name>hbase.hregion.memstore.mslab.enabled</name>
<value>true</value>
</property>
<property>
<name>hbase.hstore.blockingStoreFiles</name>
<value>10</value>
</property>
<property>
<name>hbase.hstore.compactionThreshold</name>
<value>3</value>
</property>
<property>
<name>hbase.local.dir</name>
<value>${hbase.tmp.dir}/local</value>
</property>
<property>
<name>hbase.master.info.bindAddress</name>
<value>0.0.0.0</value>
</property>
<property>
<name>hbase.master.info.port</name>
<value>60010</value>
</property>
<property>
<name>hbase.master.port</name>
<value>60000</value>
</property>
<property>
<name>hbase.regionserver.global.memstore.lowerLimit</name>
<value>0.38</value>
</property>
<property>
<name>hbase.regionserver.global.memstore.upperLimit</name>
<value>0.4</value>
</property>
<property>
<name>hbase.regionserver.handler.count</name>
<value>60</value>
</property>
<property>
<name>hbase.regionserver.info.port</name>
<value>60030</value>
</property>
<property>
<name>hbase.regionserver.wal.codec</name>
<value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoopctrl.dev.oati.local:8020/apps/hbase/data</value>
</property>
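<!-- Note: hbase.rpc.timeout below is 1500000 ms, yet the client error above reports callTimeout=60000, the HBase default, which suggests the failing client is not reading this file. -->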
<property>
<name>hbase.rpc.timeout</name>
<value>1500000</value>
</property>
<property>
<name>hbase.security.authentication</name>
<value>simple</value>
</property>
<property>
<name>hbase.security.authorization</name>
<value>false</value>
</property>
<property>
<name>hbase.superuser</name>
<value>hbase</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/hadoop/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoopm1.dev.oati.local,hadoopm2.dev.oati.local,hadoopm3.dev.oati.local</value>
</property>
<property>
<name>hbase.zookeeper.useMulti</name>
<value>true</value>
</property>
<property>
<name>hfile.block.cache.size</name>
<value>0.40</value>
</property>
<property>
<name>phoenix.client.maxMetaDataCacheSize</name>
<value>10240000</value>
</property>
<property>
<name>phoenix.clock.skew.interval</name>
<value>2000</value>
</property>
<property>
<name>phoenix.connection.autoCommit</name>
<value>false</value>
</property>
<property>
<name>phoenix.coprocessor.maxMetaDataCacheSize</name>
<value>20480000</value>
</property>
<property>
<name>phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs</name>
<value>180000</value>
</property>
<property>
<name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
<value>30000</value>
</property>
<property>
<name>phoenix.distinct.value.compress.threshold</name>
<value>1024000</value>
</property>
<property>
<name>phoenix.groupby.estimatedDistinctValues</name>
<value>1000</value>
</property>
<property>
<name>phoenix.groupby.maxCacheSize</name>
<value>102400000</value>
</property>
<property>
<name>phoenix.groupby.spillable</name>
<value>true</value>
</property>
<property>
<name>phoenix.groupby.spillFiles</name>
<value>2</value>
</property>
<property>
<name>phoenix.index.failure.handling.rebuild</name>
<value>true</value>
</property>
<property>
<name>phoenix.index.failure.handling.rebuild.interval</name>
<value>10000</value>
</property>
<property>
<name>phoenix.index.failure.handling.rebuild.overlap.time</name>
<value>300000</value>
</property>
<property>
<name>phoenix.index.maxDataFileSizePerc</name>
<value>50</value>
</property>
<property>
<name>phoenix.index.mutableBatchSizeThreshold</name>
<value>5</value>
</property>
<property>
<name>phoenix.mutate.batchSize</name>
<value>1000</value>
</property>
<property>
<name>phoenix.mutate.maxSize</name>
<value>500000</value>
</property>
<property>
<name>phoenix.query.dateFormat</name>
<value>yyyy-MM-dd HH:mm:ss</value>
</property>
<property>
<name>phoenix.query.maxGlobalMemoryPercentage</name>
<value>15</value>
</property>
<property>
<name>phoenix.query.maxGlobalMemorySize</name>
<value>2147483648</value>
</property>
<property>
<name>phoenix.query.maxGlobalMemoryWaitMs</name>
<value>10000</value>
</property>
<property>
<name>phoenix.query.maxServerCacheBytes</name>
<value>104857600</value>
</property>
<property>
<name>phoenix.query.maxSpoolToDiskBytes</name>
<value>1024000000</value>
</property>
<property>
<name>phoenix.query.maxTenantMemoryPercentage</name>
<value>100</value>
</property>
<property>
<name>phoenix.query.numberFormat</name>
<value>#,##0.###</value>
</property>
<property>
<name>phoenix.query.rowKeyOrderSaltedTable</name>
<value>true</value>
</property>
<property>
<name>phoenix.query.spoolThresholdBytes</name>
<value>20971520</value>
</property>
<property>
<name>phoenix.query.useIndexes</name>
<value>true</value>
</property>
<property>
<name>phoenix.schema.dropMetaData</name>
<value>true</value>
</property>
<property>
<name>phoenix.sequence.cacheSize</name>
<value>100</value>
</property>
<property>
<name>phoenix.stats.guidepost.per.region</name>
<value>None</value>
</property>
<property>
<name>phoenix.stats.minUpdateFrequency</name>
<value>450000</value>
</property>
<property>
<name>phoenix.stats.updateFrequency</name>
<value>900000</value>
</property>
<property>
<name>phoenix.stats.useCurrentTime</name>
<value>true</value>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>120000000</value>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase</value>
</property>
</configuration>
And I am using HDP 2.2. I have 4,000,000,000 rows in my Phoenix table.
Please help ASAP.
I have set up the phoenix.query.timeoutMs property from the Ambari web UI, but it is not reflected in the hbase-site.xml file.
<configuration>
<property>
<name>dfs.domain.socket.path</name>
<value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
<property>
<name>hbase.client.keyvalue.maxsize</name>
<value>10485760</value>
</property>
<property>
<name>hbase.client.scanner.caching</name>
<value>100</value>
</property>
<property>
<name>hbase.client.scanner.timeout.period</name>
<value>60000000</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>
<property>
<name>hbase.defaults.for.version.skip</name>
<value>true</value>
</property>
<property>
<name>hbase.hregion.majorcompaction</name>
<value>604800000</value>
</property>
<property>
<name>hbase.hregion.majorcompaction.jitter</name>
<value>0.50</value>
</property>
<property>
<name>hbase.hregion.max.filesize</name>
<value>10737418240</value>
</property>
<property>
<name>hbase.hregion.memstore.block.multiplier</name>
<value>4</value>
</property>
<property>
<name>hbase.hregion.memstore.flush.size</name>
<value>536870912</value>
</property>
<property>
<name>hbase.hregion.memstore.mslab.enabled</name>
<value>true</value>
</property>
<property>
<name>hbase.hstore.blockingStoreFiles</name>
<value>10</value>
</property>
<property>
<name>hbase.hstore.compactionThreshold</name>
<value>3</value>
</property>
<property>
<name>hbase.local.dir</name>
<value>${hbase.tmp.dir}/local</value>
</property>
<property>
<name>hbase.master.info.bindAddress</name>
<value>0.0.0.0</value>
</property>
<property>
<name>hbase.master.info.port</name>
<value>60010</value>
</property>
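<!-- The IndexLoadBalancer and PhoenixIndexRpcSchedulerFactory entries below are new in this paste; both classes ship in the Phoenix server jar, which must be on the HBase classpath of every master and region server, or those daemons will fail to start. -->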
<property>
<name>hbase.master.loadbalancer.class</name>
<value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
</property>
<property>
<name>hbase.master.port</name>
<value>60000</value>
</property>
<property>
<name>hbase.region.server.rpc.scheduler.factory.class</name>
<value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
</property>
<property>
<name>hbase.regionserver.global.memstore.lowerLimit</name>
<value>0.38</value>
</property>
<property>
<name>hbase.regionserver.global.memstore.upperLimit</name>
<value>0.4</value>
</property>
<property>
<name>hbase.regionserver.handler.count</name>
<value>60</value>
</property>
<property>
<name>hbase.regionserver.info.port</name>
<value>60030</value>
</property>
<property>
<name>hbase.regionserver.wal.codec</name>
<value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoopctrl.dev.oati.local:8020/apps/hbase/data</value>
</property>
<property>
<name>hbase.rpc.timeout</name>
<value>1500000</value>
</property>
<property>
<name>hbase.security.authentication</name>
<value>simple</value>
</property>
<property>
<name>hbase.security.authorization</name>
<value>false</value>
</property>
<property>
<name>hbase.superuser</name>
<value>hbase</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/hadoop/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoopm1.dev.oati.local,hadoopm2.dev.oati.local,hadoopm3.dev.oati.local</value>
</property>
<property>
<name>hbase.zookeeper.useMulti</name>
<value>true</value>
</property>
<property>
<name>hfile.block.cache.size</name>
<value>0.40</value>
</property>
<property>
<name>phoenix.client.maxMetaDataCacheSize</name>
<value>10240000</value>
</property>
<property>
<name>phoenix.clock.skew.interval</name>
<value>2000</value>
</property>
<property>
<name>phoenix.connection.autoCommit</name>
<value>false</value>
</property>
<property>
<name>phoenix.coprocessor.maxMetaDataCacheSize</name>
<value>20480000</value>
</property>
<property>
<name>phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs</name>
<value>180000</value>
</property>
<property>
<name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
<value>30000</value>
</property>
<property>
<name>phoenix.distinct.value.compress.threshold</name>
<value>1024000</value>
</property>
<property>
<name>phoenix.groupby.estimatedDistinctValues</name>
<value>1000</value>
</property>
<property>
<name>phoenix.groupby.maxCacheSize</name>
<value>102400000</value>
</property>
<property>
<name>phoenix.groupby.spillable</name>
<value>true</value>
</property>
<property>
<name>phoenix.groupby.spillFiles</name>
<value>2</value>
</property>
<property>
<name>phoenix.index.failure.handling.rebuild</name>
<value>true</value>
</property>
<property>
<name>phoenix.index.failure.handling.rebuild.interval</name>
<value>10000</value>
</property>
<property>
<name>phoenix.index.failure.handling.rebuild.overlap.time</name>
<value>300000</value>
</property>
<property>
<name>phoenix.index.maxDataFileSizePerc</name>
<value>50</value>
</property>
<property>
<name>phoenix.index.mutableBatchSizeThreshold</name>
<value>5</value>
</property>
<property>
<name>phoenix.mutate.batchSize</name>
<value>1000</value>
</property>
<property>
<name>phoenix.mutate.maxSize</name>
<value>500000</value>
</property>
<property>
<name>phoenix.query.dateFormat</name>
<value>yyyy-MM-dd HH:mm:ss</value>
</property>
<property>
<name>phoenix.query.maxGlobalMemoryPercentage</name>
<value>15</value>
</property>
<property>
<name>phoenix.query.maxGlobalMemorySize</name>
<value>2147483648</value>
</property>
<property>
<name>phoenix.query.maxGlobalMemoryWaitMs</name>
<value>10000</value>
</property>
<property>
<name>phoenix.query.maxServerCacheBytes</name>
<value>104857600</value>
</property>
<property>
<name>phoenix.query.maxSpoolToDiskBytes</name>
<value>1024000000</value>
</property>
<property>
<name>phoenix.query.maxTenantMemoryPercentage</name>
<value>100</value>
</property>
<property>
<name>phoenix.query.numberFormat</name>
<value>#,##0.###</value>
</property>
<property>
<name>phoenix.query.rowKeyOrderSaltedTable</name>
<value>true</value>
</property>
<property>
<name>phoenix.query.spoolThresholdBytes</name>
<value>20971520</value>
</property>
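<!-- phoenix.query.timeoutMs below was added per the advice above, raised here to 60000000 ms (about 16.7 hours). -->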
<property>
<name>phoenix.query.timeoutMs</name>
<value>60000000</value>
</property>
<property>
<name>phoenix.query.useIndexes</name>
<value>true</value>
</property>
<property>
<name>phoenix.schema.dropMetaData</name>
<value>true</value>
</property>
<property>
<name>phoenix.sequence.cacheSize</name>
<value>100</value>
</property>
<property>
<name>phoenix.stats.guidepost.per.region</name>
<value>None</value>
</property>
<property>
<name>phoenix.stats.minUpdateFrequency</name>
<value>450000</value>
</property>
<property>
<name>phoenix.stats.updateFrequency</name>
<value>900000</value>
</property>
<property>
<name>phoenix.stats.useCurrentTime</name>
<value>true</value>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>120000000</value>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase</value>
</property>
</configuration>
OK, that is the final hbase-site.xml configuration.
jdbc:phoenix:hadoopm1> Select count(*) from PJM_DATASET;
Above is the query; below is the exception.
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Sat Jan 10 05:06:02 CST 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=62320: row '' on table 'PJM_DATASET' at region=PJM_DATASET,,1420633295836.4394a3aa2721f87f3e6216d20ebeec44., hostname=hadoopm1,60020,1420887782278, seqNum=34350
at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2440)
at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2074)
at sqlline.SqlLine.print(SqlLine.java:1735)
at sqlline.SqlLine$Commands.execute(SqlLine.java:3683)
at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
at sqlline.SqlLine.dispatch(SqlLine.java:821)
at sqlline.SqlLine.begin(SqlLine.java:699)
at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
at sqlline.SqlLine.main(SqlLine.java:424)
0: jdbc:phoenix:hadoopm1>
Please help; what am I doing wrong?
It seems like something else is timing out. Have you tried to scan this table from your HBase client (hbase shell)?
Check this: http://hbase.apache.org/book/ch15s15.html (paragraph titled Connection Timeouts).
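Also note: your trace reports callTimeout=60000, the stock HBase client RPC timeout, even though the pasted hbase-site.xml sets hbase.rpc.timeout to 1500000. That suggests the client running the query may not be reading that file at all. One way to rule this out is to force the timeouts on the Phoenix connection itself. A rough sketch, assuming (as Phoenix versions of that era generally did) that JDBC connection properties are copied over the underlying HBase client configuration; the values are illustrative:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;

public class ClientTimeoutOverride {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Illustrative values (1 hour each); they take effect on this connection
        // regardless of which hbase-site.xml, if any, the client picks up.
        props.setProperty("phoenix.query.timeoutMs", "3600000");
        props.setProperty("hbase.rpc.timeout", "3600000");
        props.setProperty("hbase.client.scanner.timeout.period", "3600000");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:hadoopm1", props);
             ResultSet rs = conn.createStatement()
                     .executeQuery("SELECT COUNT(*) FROM PJM_DATASET")) {
            while (rs.next()) {
                System.out.println("count = " + rs.getLong(1));
            }
        }
    }
}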
I have one master and three slaves. I uninstalled and reinstalled HBase and Phoenix on the master, and installed HBase on the other slave machines, but now I am not even able to start the HBase master from the Ambari web UI.
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2486)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:61)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2501)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2484)
... 5 more
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2076)
at org.apache.hadoop.hbase.regionserver.HRegionServer.&lt;init&gt;(HRegionServer.java:617)
... 10 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1982)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2074)
... 11 more
Above is the current error I got from the HBase region server log.
OK, one question: do we need to install Phoenix on the same machines where we have region servers?
Phoenix moved to Apache over a year ago, so this site is no longer active nor maintained. Please post your question on our Apache mailing list and you'll likely get more help:
http://phoenix.apache.org/mailing_list.html
OK, thanks, I will.
2015-01-14 02:27:57,512 WARN [DataStreamer for file /apps/hbase/data/WALs/hadoopm2.dev.oati.local,60020,1421221188209/hadoopm2.dev.oati.local%2C60020%2C1421221188209.1421223957430 block BP-337983189-10.100.227.107-1397418605845:blk_1073948934_216462] hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.100.227.107:50010, 10.100.227.104:50010], original=[10.100.227.107:50010, 10.100.227.104:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1041)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1107)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1254)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1005)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:549)
Wed Jan 14 04:26:19 CST 2015 Terminating regionserver
2015-01-14 04:26:19,509 INFO [Thread-11] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@48a9bc2b
Wed Jan 14 04:30:46 CST 2015 Terminating regionserver
Wed Jan 14 04:34:13 CST 2015 Terminating regionserver
2015-01-14 04:34:32,959 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker: tasks arrived or departed
Wed Jan 14 04:58:39 CST 2015 Terminating regionserver
Wed Jan 14 05:34:36 CST 2015 Terminating regionserver
Now I am getting this error on the region server.