docker-hbase's Issues
hbase port for docker-compose-distributed-local.yml and docker-compose-standalone.yml is wrong
In the yml file:
- 60000:60000
- 60010:60010
- 60020:60020
- 60030:60030
This is not correct: since HBase 1.0 the default ports are 16000, 16010, 16020, and 16030, so the mappings should use the 160xx range.
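A corrected mapping would look like this (a sketch against the HBase 1.x defaults: master RPC/UI on 16000/16010, regionserver RPC/UI on 16020/16030):
- 16000:16000
- 16010:16010
- 16020:16020
- 16030:16030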
How can I add a new datanode or HBase region server to the existing Hadoop cluster?
I use this project to run a Hadoop cluster. Since I want to grow this cluster, what should I do (besides adding a new datanode or HBase region server in the docker-compose file)?
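For the docker-compose route, scaling out is typically just another service entry plus a restart. A minimal sketch modeled on the existing regionserver service; the service name, env-file name, and image tag here are assumptions, not copied from the repo (adding a datanode would be analogous with the corresponding bde2020/hadoop datanode image):
hbase-regionserver-2:
  image: bde2020/hbase-regionserver:1.0.0-hbase1.2.6
  container_name: hbase-regionserver-2
  hostname: hbase-regionserver-2
  env_file:
    - ./hbase-distributed-local.env
  depends_on:
    - hbase-master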
HBase Error Upon Docker Container Restart: Timedout waiting for namespace table to be assigned and enabled
Getting a strange error when attempting to restart HBase in pseudo-distributed mode. The sequence of events is as follows:
- Configure brand new HBase environment pointing to HDFS.
- docker compose up -> Environment starts with no issues. I can create tables using hbase shell
- docker compose down -> container shutdown sequence is hbase-region-1, hbase-master, zoo-1, data-nodes, namenode
- docker compose up -> HBase master times out after 5 mins with error: Timedout 300000ms waiting for namespace table to be assigned and enabled: tableName=hbase:namespace, state=ENABLED
- If I remove the /hbase directory from HDFS and restart the environment it will come back up and will be available until I restart the Docker environment again.
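For reference, the workaround in the last step looks like this as commands (the namenode container name is an assumption based on this repo's compose files):
docker exec -it namenode hadoop fs -rm -r /hbase
docker compose up -d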
java client raises UnknownHostException: can not resolve hbase-master
How should I connect a client using the Java API? I'm using this at the moment:
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "localhost");
config.set("hbase.zookeeper.property.clientPort", "2181");
HBaseAdmin.available(config);
Returns
org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.MasterNotRunningException: java.net.UnknownHostException: can not resolve hbase-master,16000,1592488743309
at org.apache.hadoop.hbase.client.ConnectionImplementation.isMasterRunning(ConnectionImplementation.java:610)
at org.apache.hadoop.hbase.client.HBaseAdmin.available(HBaseAdmin.java:2410)
at Main.main(Main.java:95)
How should I fix this?
I'm using hbase-client 2.2.5, but version 1.6.0 raises the same problem.
<!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-client -->
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>2.2.5</version>
</dependency>
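A likely cause, judging from the stack trace rather than anything confirmed in this thread: the master registers itself in ZooKeeper under its container hostname (hbase-master), which the client JVM on the host cannot resolve. A common workaround, assuming the compose file publishes the relevant ports to localhost, is to map the container hostnames on the client machine:
# /etc/hosts on the client machine (hostnames as used by the compose files)
127.0.0.1   hbase-master
127.0.0.1   hbase-regionserver-1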
Add a license
I'd like to mess around with the files here on my own projects, but without infringing anything :-).
https://choosealicense.com/ is good for selecting whatever is appropriate for you
HBase backup not supported?
When running hbase backup via:
docker exec hbase hbase backup
I get:
Error: Could not find or load main class backup
Caused by: java.lang.ClassNotFoundException: backup
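For what it's worth, my reading (not confirmed in this thread): the backup subcommand only exists in HBase 2.x, and on the 1.2.6 image used by this repo the hbase launcher falls back to treating an unknown subcommand as a Java main class name, which matches the error exactly. Running the launcher with no arguments lists the subcommands the installed version actually supports:
docker exec hbase hbase
# with no arguments the launcher prints its usage text, including every valid subcommand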
Using remote zookeeper instead of embedded
Hi guys. I want to use a remote ZooKeeper instance instead of the embedded one. How can I do it?
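One way to point the containers at an external ensemble is a sketch like this in the compose file, assuming the bde2020 images' HBASE_CONF_* environment-variable convention (single underscores become dots in hbase-site.xml); the hostnames here are placeholders:
environment:
  HBASE_CONF_hbase_zookeeper_quorum: zk1.example.com,zk2.example.com,zk3.example.com
  HBASE_CONF_hbase_zookeeper_property_clientPort: "2181"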
HBase master failed to start
I have installed the ZooKeeper cluster and the Hadoop cluster according to your instructions, and they are running well. When I installed the HBase cluster, I received the following error:
master:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2020-08-27 06:14:59,080 INFO [Thread-63] hdfs.DFSClient: Excluding datanode 10.0.16.6:50010
2020-08-27 06:14:59,101 INFO [Thread-63] hdfs.DFSClient: Exception in createBlockOutputStream
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
2020-08-27 06:14:59,101 INFO [Thread-63] hdfs.DFSClient: Abandoning BP-849875342-10.0.16.27-1598501656095:blk_1073741845_1021
2020-08-27 06:14:59,108 INFO [Thread-63] hdfs.DFSClient: Excluding datanode 10.0.16.9:50010
2020-08-27 06:14:59,113 WARN [Thread-63] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1628)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3121)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3045)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:368)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1449)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
regionserver:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2020-08-27 04:32:52,422 INFO [main] mortbay.log : Started SelectChannelConnector@0.0.0.0:16030
2020-08-27 04:32:52,474 INFO [regionserver/hbase-regionserver-1/10.0.16.36:16020] zookeeper.RecoverableZooKeeper : Process identifier=hconnection-0x522de6a5 connecting to ZooKeeper ensemble=zoo1:2181,zoo2:2181,zoo3:2181
2020-08-27 04:32:52,474 INFO [regionserver/hbase-regionserver-1/10.0.16.36:16020] zookeeper.ZooKeeper : Initiating client connection, connectString=zoo1:2181,zoo2:2181,zoo3:2181 sessionTimeout=120000 watcher=hconnection-0x522de6a50x0, quorum=zoo1:2181,zoo2:2181,zoo3:2181, baseZNode=/hbase
2020-08-27 04:32:52,516 INFO [regionserver/hbase-regionserver-1/10.0.16.36:16020-SendThread(10.0.16.10:2181)] zookeeper.ClientCnxn : Opening socket connection to server 10.0.16.10/10.0.16.10:2181. Will not attempt to authenticate using SASL (unknown error)
2020-08-27 04:32:52,517 INFO [regionserver/hbase-regionserver-1/10.0.16.36:16020-SendThread(10.0.16.10:2181)] zookeeper.ClientCnxn : Socket connection established to 10.0.16.10/10.0.16.10:2181, initiating session
2020-08-27 04:32:52,531 INFO [regionserver/hbase-regionserver-1/10.0.16.36:16020-SendThread(10.0.16.10:2181)] zookeeper.ClientCnxn : Session establishment complete on server 10.0.16.10/10.0.16.10:2181, sessionid = 0x1000567cc4a0001, negotiated timeout = 40000
2020-08-27 04:32:52,532 INFO [regionserver/hbase-regionserver-1/10.0.16.36:16020] client.ZooKeeperRegistry : ClusterId read in ZooKeeper is null
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I see that the datanode IPs in the log do not match the IPs of the current three datanodes. I tried the following operations, but they did not work:
hbase-clean:
docker exec -it namenode hadoop fs -rm -r /hbase/*
docker exec -it zoo bash   # then, inside the container: bin/zkCli.sh and "rmr /hbase"
What shall I do?
Still active?
What's the status of this repo? It hasn't been updated for over two years. Googling suggests that the BDE project has ended, which makes further updates unlikely. However, the docker-hadoop repo gets updated fairly frequently (by @eagafonov most recently), suggesting that perhaps there is a little activity in this area.
I could easily submit a PR to bump the version of HBase, but I'm not clear whether any contributors are likely to merge it, and even then I'm not sure that would actually result in a new image being pushed to Docker Hub.
Change hbase.regionserver.thrift.framed and hbase.regionserver.thrift.compact to True?
Thanks for providing the container! After pulling the image we noticed that hbase.regionserver.thrift.framed is set to false in the configuration file. However, the HBase official documentation and the Cloudera troubleshooting page recommend setting hbase.regionserver.thrift.framed and hbase.regionserver.thrift.compact to true:
- "This is the recommended transport for thrift servers and requires a similar setting on the client side. Changing this to false will select the default transport, vulnerable to DoS when malformed requests are issued due to THRIFT-601."
- "To prevent the possibility of crashes due to buffer overruns, use the framed and compact transport protocols by setting hbase.regionserver.thrift.framed and hbase.regionserver.thrift.compact to true in hbase-site.xml."
Maybe it's better to enable these two parameters? Thank you!
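For reference, the change the quoted documentation describes would look like this in hbase-site.xml (a sketch of the recommended values; whether the image should ship with them is the question here):
<property>
  <name>hbase.regionserver.thrift.framed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.regionserver.thrift.compact</name>
  <value>true</value>
</property>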
typo in docker-compose-distributed-local.yml
I think there is a typo in line 70:
image: bde2020/hbase-hmaster:1.0.0-hbase1.2.6
should be:
image: bde2020/hbase-master:1.0.0-hbase1.2.6
The deployment is the same as in the quickstart HBase documentation
How do I access the hbase shell?
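A minimal way in, assuming the standalone setup where the container is named hbase (as in the docker exec command quoted elsewhere in this tracker); for the distributed setup, substitute the master container's name:
docker exec -it hbase hbase shell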
Change the default value of hbase.rootdir and hbase.tmp.dir?
The official HBase configuration reference recommends changing the default values of hbase.rootdir and hbase.tmp.dir to another location, such as an 'hdfs://...' URL, or else all data will be lost on machine restart.
"Change this setting to point to a location more permanent than '/tmp', the usual resolve for java.io.tmpdir, as the '/tmp' directory is cleared on machine restart."
I wonder if the default setting needs to be changed. Thanks.
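A sketch of the kind of override the docs describe, for hbase-site.xml; the HDFS URL and local path here are placeholders, not values taken from this repo's config:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode:9000/hbase</value>  <!-- placeholder namenode host/port -->
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/data/hbase/tmp</value>  <!-- any path that survives a restart, unlike /tmp -->
</property>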
[Q] Is it OK without Passwordless SSH access?
Thanks for providing the containers!
The HBase manual page (https://hbase.apache.org/book.html#quickstart_fully_distributed) says to configure passwordless SSH access from master servers to region servers, but the containers do not seem to do that.
Probably, it is because the passwordless SSH access is only required to run start-hbase.sh on a single master server and the containers run their daemons by themselves.
Do I guess correctly?
Or do I lose some part of HBase functionality without the SSH access?
namenode exited with code 1
Hi. I'm using docker-compose.yml from master, and I'm getting this error:
namenode | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
namenode | - Setting yarn.resourcemanager.address=resourcemanager:8032
namenode | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
namenode | - Setting yarn.nodemanager.resource.memory-mb=16384
namenode | - Setting yarn.nodemanager.resource.cpu-vcores=8
namenode | Configuring httpfs
namenode | Configuring kms
namenode | Configuring mapred
namenode | - Setting mapreduce.map.java.opts=-Xmx3072m
namenode | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.1/
namenode | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.1/
namenode | - Setting mapred.child.java.opts=-Xmx4096m
namenode | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.1/
namenode | - Setting mapreduce.framework.name=yarn
namenode | - Setting mapreduce.reduce.java.opts=-Xmx6144m
namenode | - Setting mapreduce.reduce.memory.mb=8192
namenode | - Setting mapreduce.map.memory.mb=4096
namenode | Configuring for multihomed network
namenode | WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
namenode | namenode is running as process 359. Stop it first.
namenode exited with code 1
Could you please help me?
'docker-compose-distributed-local.yml' has some spelling mistakes