HBase Cluster Setup
HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable.
This tutorial describes how to set up and run an HBase cluster, without going into much detail about HBase itself; there are a number of articles where HBase is described in detail.
We will build the HBase cluster using three Ubuntu machines in this tutorial.
A distributed HBase depends on a running ZooKeeper cluster, and all participating nodes and clients need to be able to reach it. By default HBase manages a ZooKeeper cluster for you, or you can manage one on your own and point HBase to it. In our case, we are using the default ZooKeeper cluster, which is managed by HBase.
The nodes in our cluster act in the following capacities:
1. HBase Master: The HBase master is responsible for assigning regions to the region servers and monitors the health of each region server.
2. ZooKeeper: For any distributed application, ZooKeeper is a centralized service for maintaining configuration information and naming, and for providing distributed synchronization and group services.
3. HBase Regionserver: The region server is responsible for handling client read and write requests. It communicates with the HBase master to get a list of regions to serve and to tell the master that it is alive.
In our case, one machine in the cluster is designated as the HBase master and ZooKeeper node. The rest of the machines in the cluster act as region servers.
Before we start:
Before we start configuring HBase, you need a running Hadoop cluster, which will be the storage for HBase (HBase stores its data in the Hadoop Distributed File System). Please refer to the Installing Hadoop in the cluster - A complete step by step tutorial post before continuing.
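Before moving on, it is worth confirming that the Hadoop cluster is actually reachable. A quick sketch (assuming the hadoop command from your Hadoop installation is on the PATH):

```shell
# Sanity check: HBase will store its data in HDFS, so HDFS must be up.
# Listing the HDFS root should succeed without connection errors.
hadoop fs -ls /
# Report live datanodes (dfsadmin syntax as used in Hadoop 0.20).
hadoop dfsadmin -report
```

If either command hangs or reports connection refused, fix the Hadoop cluster before continuing with the HBase setup.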
INSTALLING AND CONFIGURING HBASE MASTER
1. Download hbase-0.20.6.tar.gz from http://www.apache.org/dyn/closer.cgi/hbase/ and extract it to some path on your computer. I will refer to the HBase installation root as $HBASE_INSTALL_DIR.
2. Edit the file /etc/hosts on the master machine and add the following lines.
192.168.41.53 hbase-master hadoop-namenode
# HBase master and Hadoop namenode are configured on the same machine
192.168.41.67 hbase-regionserver1
192.168.41.67 hbase-regionserver2
Note: Run the command "ping hbase-master" to check that the hbase-master hostname resolves to the machine's actual IP address, not the localhost address.
3. We need to configure passwordless SSH login from hbase-master to all regionserver machines.
3.1. Execute the following commands on the hbase-master machine.
$ssh-keygen -t rsa
$scp .ssh/id_rsa.pub ilab@hbase-regionserver1:~ilab/.ssh/authorized_keys
$scp .ssh/id_rsa.pub ilab@hbase-regionserver2:~ilab/.ssh/authorized_keys
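Note that copying id_rsa.pub directly over authorized_keys replaces any keys already present on the regionservers. A safer sketch (same ilab user and hostnames as above) appends the key instead and then verifies the login:

```shell
# Append the master's public key instead of overwriting authorized_keys.
cat ~/.ssh/id_rsa.pub | ssh ilab@hbase-regionserver1 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
cat ~/.ssh/id_rsa.pub | ssh ilab@hbase-regionserver2 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
# Verify: these should now log in without prompting for a password.
ssh ilab@hbase-regionserver1 hostname
ssh ilab@hbase-regionserver2 hostname
```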
4. Open the file $HBASE_INSTALL_DIR/conf/hbase-env.sh and set the $JAVA_HOME.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
Note: If you are using OpenJDK, give the path to OpenJDK instead.
5. Open the file $HBASE_INSTALL_DIR/conf/hbase-site.xml and add the following properties.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.master</name>
<value>hbase-master:60000</value>
<description>The host and port that the HBase master runs at.
A value of 'local' runs the master and a regionserver
in a single process.
</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop-namenode:9000/hbase</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed
Zookeeper true: fully-distributed with unmanaged Zookeeper
Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2222</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hbase-master</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
For example,
"host1.mydomain.com,host2.mydomain.com".
By default this is set to localhost for local and
pseudo-distributed modes of operation. For a
fully-distributed setup, this should be set to a full
list of ZooKeeper quorum servers. If
HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop
ZooKeeper on.
</description>
</property>
</configuration>
Note:-
In our case, ZooKeeper and the HBase master are both running on the same machine.
6. Open the file $HBASE_INSTALL_DIR/conf/hbase-env.sh and uncomment the following line:
export HBASE_MANAGES_ZK=true
7. Open the file $HBASE_INSTALL_DIR/conf/regionservers and add all the regionserver machine names.
hbase-regionserver1
hbase-regionserver2
hbase-master
Note: Add hbase-master machine name only if you are running a regionserver on hbase-master machine.
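The regionservers will need the same configuration in the next section. One way to keep hbase-site.xml and the regionservers file identical on every node (a sketch, assuming HBase is installed at the same $HBASE_INSTALL_DIR everywhere) is to push the conf directory from the master:

```shell
# Copy the master's HBase configuration to each regionserver so that
# all nodes share identical hbase-site.xml and regionservers files.
scp -r $HBASE_INSTALL_DIR/conf ilab@hbase-regionserver1:$HBASE_INSTALL_DIR/
scp -r $HBASE_INSTALL_DIR/conf ilab@hbase-regionserver2:$HBASE_INSTALL_DIR/
```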
INSTALLING AND CONFIGURING HBASE REGIONSERVER
1. Download hbase-0.20.6.tar.gz from http://www.apache.org/dyn/closer.cgi/hbase/ and extract it to some path on your computer. I will refer to the HBase installation root as $HBASE_INSTALL_DIR.
2. Edit the file /etc/hosts on the hbase-regionserver machine and add the following lines.
192.168.41.53 hbase-master hadoop-namenode
Note: In my case, the HBase master and Hadoop namenode are running on the same machine.
Note: Run the command "ping hbase-master" to check that the hbase-master hostname resolves to the machine's actual IP address, not the localhost address.
3. We need to configure passwordless SSH login from each regionserver to the hbase-master machine.
3.1. Execute the following commands on the regionserver machine.
$ssh-keygen -t rsa
$scp .ssh/id_rsa.pub ilab@hbase-master:~ilab/.ssh/authorized_keys2
4. Open the file $HBASE_INSTALL_DIR/conf/hbase-env.sh and set the $JAVA_HOME.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
Note: If you are using OpenJDK, give the path to OpenJDK instead.
5. Open the file $HBASE_INSTALL_DIR/conf/hbase-site.xml and add the following properties.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.master</name>
<value>hbase-master:60000</value>
<description>The host and port that the HBase master runs at.
A value of 'local' runs the master and a regionserver
in a single process.
</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop-namenode:9000/hbase</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed
Zookeeper true: fully-distributed with unmanaged Zookeeper
Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2222</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hbase-master</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
For example, "host1.mydomain.com,host2.mydomain.com".
By default this is set to localhost for local and
pseudo-distributed modes of operation. For a fully-distributed
setup, this should be set to a full list of ZooKeeper quorum
servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop ZooKeeper on.
</description>
</property>
</configuration>
6. Open the file $HBASE_INSTALL_DIR/conf/hbase-env.sh and uncomment the following line:
export HBASE_MANAGES_ZK=true
Note:-
The above steps are required on all the datanodes in the Hadoop cluster.
START AND STOP HBASE CLUSTER
1. Starting the HBase cluster:
We need to start the daemons only on the hbase-master machine; it will start the daemons on all regionserver machines. Execute the following command to start the HBase cluster.
$HBASE_INSTALL_DIR/bin/start-hbase.sh
Note:-
At this point, the following Java processes should be running on the hbase-master machine.
ilab@hbase-master:$jps
14143 Jps
14007 HQuorumPeer
14066 HMaster
and the following Java processes should be running on each hbase-regionserver machine.
23026 HRegionServer
23171 Jps
2. Starting the HBase shell:
$HBASE_INSTALL_DIR/bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Version: 0.20.6, r965666, Mon Jul 19 16:54:48 PDT 2010
hbase(main):001:0>
Now, create a table in HBase.
hbase(main):001:0>create 't1','f1'
0 row(s) in 1.2910 seconds
hbase(main):002:0>
Note: If the table is created successfully, then everything is running fine.
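Beyond creating a table, a short put/get round trip exercises a region server end to end. A sketch (the table, row, and column names here are arbitrary examples):

```
hbase(main):003:0> put 't1', 'row1', 'f1:c1', 'value1'
hbase(main):004:0> get 't1', 'row1'
hbase(main):005:0> scan 't1'
hbase(main):006:0> disable 't1'
hbase(main):007:0> drop 't1'
```

If the get and scan return the value you put, reads and writes are flowing through the cluster correctly.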
3. Stopping the HBase cluster:
Execute the following command on the hbase-master machine to stop the HBase cluster.
$HBASE_INSTALL_DIR/bin/stop-hbase.sh
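If you ever need to restart a single daemon rather than the whole cluster, HBase also ships a per-daemon control script. A sketch (run on the machine that hosts the daemon in question):

```shell
# Stop and restart only the local regionserver, leaving the rest of
# the cluster running (execute on the regionserver machine itself).
$HBASE_INSTALL_DIR/bin/hbase-daemon.sh stop regionserver
$HBASE_INSTALL_DIR/bin/hbase-daemon.sh start regionserver
```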
106 comments:
Thanks for this very useful post!
Hi,
I am working with a 2-node HBase cluster as shown below.
On node1 (10.0.1.54): master node, region server, hadoop namenode, hadoop datanode
On node2 (10.0.1.55): region server, hadoop datanode.
When I start Hadoop and then HBase, all daemons run properly on the master node, i.e. node1:
2404 NameNode
3657
3007 TaskTracker
2848 JobTracker
3522 HRegionServer
3848 Main
3292 HQuorumPeer
2769 SecondaryNameNode
3345 HMaster
2575 DataNode
4768 Jps
but on node2, only the TaskTracker and DataNode daemons are running; the HRegionServer daemon is not running.
I am providing some files which may be helpful. My /etc/hosts file on node1 (vamshikrishna-desktop) has:
# /etc/hosts (for master AND slave)
127.0.0.1 localhost
127.0.1.1 vamshikrishna-desktop
10.0.1.54 hbase-master hadoop-namenode
10.0.1.55 hbase-regionserver1 hadoop-datanode1
and file {HBASE_HOME}/conf/regionservers has
hbase-regionserver1
hbase-master
My /etc/hosts file on node2 (vamshikrishna-laptop) has:
127.0.0.1 localhost
127.0.0.1 vamshi-laptop
10.0.1.54 hbase-master hadoop-namenode
10.0.1.55 hbase-regionserver1 hadoop-datanode1
and file {HBASE_HOME}/conf/regionservers has
localhost
Please help me find out why the RegionServer is not running on node2. Moreover, when I run start-hbase.sh, it displays errors like
hbase-regionserver1: bash: line 0: cd: {HBASE_HOME}/bin/..: No such file or directory
hbase-regionserver1: bash: {HBASE_HOME}/bin/hbase-daemon.sh: No such file or directory
but I can clearly find {HBASE_HOME}/bin/hbase-daemon.sh on my node2. I don't know what went wrong!
Hi,
Add the following lines to the regionservers conf file on node2 (vamshikrishna-laptop) and remove the localhost entry.
hbase-regionserver1
hbase-master
I think it will work for you.
Do you have any experience with Sqoop?
Yes, I have also worked on Sqoop.
hi Ankit,
I am having a problem with HBase.
When I start HBase, the following processes run:
HMaster
HQuorumPeer
NameNode
SecondaryNameNode
JobTracker
but when I access the HBase master's web UI it gives an error like "problem accessing master.jsp caused by HRegion was null or empty",
and while creating a table it gives an exception like "HRegion was null or empty in -ROOT-".
Hi Pranay,
Please share the details of your HBase cluster. How many regionservers are running?
1 namenode and HMaster
1 secondary namenode
3 datanodes and regionservers
total: 5 machines
The multi-node cluster runs successfully and MapReduce also runs great, but after configuring HBase, Hadoop is still running fine while I am having problems with HBase.
The first issue is that the web UI is not launching; the other is that when I create a table it gives an error like
"HRegion was null or empty in -ROOT-" and an IOException "retries exhausted exception".
Please help me out.
It may be because of a DNS problem. Look at your master and region server logs.
Are all regionservers running?
Hi Ankit,
Once again I started from scratch with two nodes on virtual machines.
master: namenode, secondary namenode, HMaster, jobtracker
slavea: datanode, HRegionServer, tasktracker
/etc/hosts on master:
127.0.0.1 localhost localhost.localdomain localhost
192.168.10.128 master master
192.168.10.129 slavea
/etc/hosts on slavea:
127.0.0.1 localhost localhost.localdomain localhost
192.168.10.128 master master
192.168.10.129 slavea
Hadoop is running great but I am having the same problem with HBase.
The logs follow in the next comments:
2012-01-10 05:17:09,539 INFO org.mortbay.log: jetty-6.1.26
2012-01-10 05:17:10,127 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:60030
2012-01-10 05:17:10,127 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
2012-01-10 05:17:10,129 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60020: starting
2012-01-10 05:17:10,129 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020: starting
2012-01-10 05:17:10,130 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020: starting
2012-01-10 05:17:10,130 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020: starting
2012-01-10 05:17:10,137 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020: starting
2012-01-10 05:17:10,137 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020: starting
2012-01-10 05:17:10,137 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020: starting
2012-01-10 05:17:10,138 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as slavea,60020,1326201427607, RPC listening on /192.168.10.129:60020, sessionid=0x134c7c3ba8f0002
2012-01-10 05:17:10,143 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020: starting
2012-01-10 05:17:10,143 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Allocating LruBlockCache with maximum size 199.7m
2012-01-10 05:17:11,638 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open region: -ROOT-,,0.70236052
2012-01-10 05:17:11,640 DEBUG org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Processing open of -ROOT-,,0.70236052
2012-01-10 05:17:11,642 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Attempting to transition node 70236052/-ROOT- from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2012-01-10 05:17:11,654 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Successfully transitioned node 70236052 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
continued.....
2012-01-10 05:17:11,664 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Opening region: REGION => {NAME => '-ROOT-,,0', STARTKEY => '', ENDKEY => '', ENCODED => 70236052, TABLE => {{NAME => '-ROOT-', IS_ROOT => 'true', IS_META => 'true', FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '10', COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'}]}}
2012-01-10 05:17:11,670 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Instantiated -ROOT-,,0.70236052
2012-01-10 05:17:11,726 DEBUG org.apache.hadoop.hbase.regionserver.Store: loaded file:/tmp/hbase-hduser/hbase/-ROOT-/70236052/info/2864333833895073083, isReference=false, isBulkLoadResult=false, seqid=3, majorCompaction=false
2012-01-10 05:17:11,783 INFO org.apache.hadoop.hbase.regionserver.HRegion: Onlined -ROOT-,,0.70236052; next sequenceid=4
2012-01-10 05:17:11,783 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Attempting to transition node 70236052/-ROOT- from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENING
2012-01-10 05:17:11,787 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Successfully transitioned node 70236052 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENING
2012-01-10 05:17:11,800 INFO org.apache.hadoop.hbase.catalog.RootLocationEditor: Setting ROOT region location in ZooKeeper as slavea:60020
2012-01-10 05:17:11,806 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Attempting to transition node 70236052/-ROOT- from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED
2012-01-10 05:17:11,810 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Successfully transitioned node 70236052 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED
2012-01-10 05:17:11,810 DEBUG org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opened -ROOT-,,0.70236052
2012-01-10 05:17:11,867 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open region: .META.,,1.1028785192
2012-01-10 05:17:11,868 DEBUG org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Processing open of .META.,,1.1028785192
2012-01-10 05:17:11,868 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Attempting to transition node 1028785192/.META. from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2012-01-10 05:17:11,872 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Successfully transitioned node 1028785192 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
Please remove the line:
127.0.0.1 localhost localhost.localdomain localhost
and restart the HBase cluster. It may solve your problem.
continued......
2012-01-10 05:17:11,872 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Opening region: REGION => {NAME => '.META.,,1', STARTKEY => '', ENDKEY => '', ENCODED => 1028785192, TABLE => {{NAME => '.META.', IS_META => 'true', FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '10', COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'}]}}
2012-01-10 05:17:11,873 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Instantiated .META.,,1.1028785192
2012-01-10 05:17:11,882 INFO org.apache.hadoop.hbase.regionserver.HRegion: Onlined .META.,,1.1028785192; next sequenceid=1
2012-01-10 05:17:11,882 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Attempting to transition node 1028785192/.META. from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENING
2012-01-10 05:17:11,885 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Successfully transitioned node 1028785192 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENING
2012-01-10 05:17:11,906 INFO org.apache.hadoop.hbase.catalog.MetaEditor: Updated row .META.,,1.1028785192 in region -ROOT-,,0 with server=slavea:60020, startcode=1326201427607
2012-01-10 05:17:11,907 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Attempting to transition node 1028785192/.META. from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED
2012-01-10 05:17:11,911 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x134c7c3ba8f0002 Successfully transitioned node 1028785192 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED
2012-01-10 05:17:11,912 DEBUG org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opened .META.,,1.1028785192
2012-01-10 05:17:11,924 INFO org.apache.hadoop.hbase.zookeeper.MetaNodeTracker: Detected completed assignment of META, notifying catalog tracker
-----------------------------------------------
It is still not starting; I am getting the same problem.
When I create a table it gives an error like:
ERROR: org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server null for region , row 't2,,00000000000000', but failed after 7 attempts.
Exceptions:
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326204636902/Put/vlen=12, .META.,,1/info:serverstartcode/1326204636902/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326204636902/Put/vlen=12, .META.,,1/info:serverstartcode/1326204636902/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326204636902/Put/vlen=12, .META.,,1/info:serverstartcode/1326204636902/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326204636902/Put/vlen=12, .META.,,1/info:serverstartcode/1326204636902/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326204636902/Put/vlen=12, .META.,,1/info:serverstartcode/1326204636902/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326204636902/Put/vlen=12, .META.,,1/info:serverstartcode/1326204636902/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326204636902/Put/vlen=12, .META.,,1/info:serverstartcode/1326204636902/Put/vlen=8}
Here is some help for this command:
Create table; pass table name, a dictionary of specifications per
column family, and optionally a dictionary of table configuration.
Dictionaries are described below in the GENERAL NOTES section.
Examples:
hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
hbase> create 't1', 'f1', 'f2', 'f3'
hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
continued....
2012-01-10 05:17:11,953 DEBUG org.apache.hadoop.hbase.master.CatalogJanitor: Scanned 0 catalog row(s) and gc'd 0 unreferenced parent region(s)
2012-01-10 05:17:39,075 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=master:2181 sessionTimeout=180000 watcher=hconnection
2012-01-10 05:17:39,080 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server master/192.168.10.128:2181
2012-01-10 05:17:39,084 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to master/192.168.10.128:2181, initiating session
2012-01-10 05:17:39,087 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server master/192.168.10.128:2181, sessionid = 0x134c7c3ba8f0004, negotiated timeout = 180000
continued....
2012-01-10 05:17:39,105 DEBUG org.apache.hadoop.hbase.client.MetaScanner: Scanning .META. starting at row= for max=2147483647 rows
2012-01-10 05:17:39,106 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@11a0d35; hsa=slavea:60020
2012-01-10 05:17:39,116 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: locateRegionInMeta parentTable=-ROOT-, metaLocation=address: slavea:60020, regioninfo: -ROOT-,,0.70236052, attempt=0 of 10 failed; retrying after sleep of 1000 because: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326201431905/Put/vlen=12, .META.,,1/info:serverstartcode/1326201431905/Put/vlen=8}
2012-01-10 05:17:39,116 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@11a0d35; hsa=slavea:60020
2012-01-10 05:17:40,118 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@11a0d35; hsa=slavea:60020
2012-01-10 05:17:40,122 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: locateRegionInMeta parentTable=-ROOT-, metaLocation=address: slavea:60020, regioninfo: -ROOT-,,0.70236052, attempt=1 of 10 failed; retrying after sleep of 1000 because: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326201431905/Put/vlen=12, .META.,,1/info:serverstartcode/1326201431905/Put/vlen=8}
2012-01-10 05:17:40,122 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@11a0d35; hsa=slavea:60020
2012-01-10 05:17:41,123 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@11a0d35; hsa=slavea:60020
2012-01-10 05:17:41,127 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: locateRegionInMeta parentTable=-ROOT-, metaLocation=address: slavea:60020, regioninfo: -ROOT-,,0.70236052, attempt=2 of 10 failed; retrying after sleep of 1000 because: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326201431905/Put/vlen=12, .META.,,1/info:serverstartcode/1326201431905/Put/vlen=8}
2012-01-10 05:17:41,128 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@11a0d35; hsa=slavea:60020
Run the following commands in the HBase shell:
hbase> flush '.META.'
hbase> major_compact '.META.'
It may solve your problem.
I did as you told me; it gives the following error:
hbase(main):001:0> flush '.META.'
0 row(s) in 1.6460 seconds
hbase(main):002:0> major_compact '.META.'
0 row(s) in 0.0550 seconds
hbase(main):003:0> status
1 servers, 0 dead, 2.0000 average load
hbase(main):004:0> create 't2', 'f2'
ERROR: org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server null for region , row 't2,,00000000000000', but failed after 7 attempts.
Exceptions:
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326264128464/Put/vlen=12, .META.,,1/info:serverstartcode/1326264128464/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326264128464/Put/vlen=12, .META.,,1/info:serverstartcode/1326264128464/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326264128464/Put/vlen=12, .META.,,1/info:serverstartcode/1326264128464/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326264128464/Put/vlen=12, .META.,,1/info:serverstartcode/1326264128464/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326264128464/Put/vlen=12, .META.,,1/info:serverstartcode/1326264128464/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326264128464/Put/vlen=12, .META.,,1/info:serverstartcode/1326264128464/Put/vlen=8}
java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326264128464/Put/vlen=12, .META.,,1/info:serverstartcode/1326264128464/Put/vlen=8}
Run the following command in the HBase shell and paste the output of the command here:
hbase> zk_dump
hbase(main):001:0> zk_dump
HBase is rooted at /hbase
Master address: master:60000
Region server holding ROOT: slavea:60020
Region servers:
slavea:60020
Quorum Server Statistics:
master:2181
Zookeeper version: 3.3.2-1031432, built on 11/05/2010 05:32 GMT
Clients:
/192.168.10.129:46187[1](queued=0,recved=37,sent=38)
/192.168.10.128:54436[1](queued=0,recved=93,sent=111)
/192.168.10.128:54437[1](queued=0,recved=37,sent=38)
/192.168.10.128:55497[0](queued=0,recved=1,sent=0)
/192.168.10.128:55495[1](queued=0,recved=13,sent=13)
/192.168.10.129:46186[1](queued=0,recved=75,sent=87)
/192.168.10.128:54440[1](queued=0,recved=37,sent=37)
Latency min/avg/max: 0/2/121
Received: 342
Sent: 373
Outstanding: 0
Zxid: 0x23
Mode: standalone
Node count: 12
HBase configuration is given as follows:
on master:
hbase-site.xml:
hbase.zookeeper.quorum = master
hbase.master = master:60000
hbase.cluster.distributed = true
regionservers file:
slavea
on slavea:
hbase-site.xml:
hbase.zookeeper.quorum = master
hbase.master = master:60000
hbase.cluster.distributed = true
regionservers file:
localhost
I have not set hbase.rootdir because if I set this property to master:54310/hbase, HMaster doesn't start; if I omit hbase.rootdir, HMaster runs. That's why I didn't set the hbase.rootdir property.
FYI: I have configured Hadoop and HBase inside the /usr/local directory.
Please share the details of the regionservers conf file.
On the regionserver, i.e. slavea:
hbase-site.xml :-
---hbase.cluster.distributed---
---true---
---hbase.master---
---master:60000---
---hbase.zookeeper.quorum---
---master---
regionserver file :-
---localhost---
Use slavea instead of localhost in regionservers conf file and restart the cluster.
I have done that as you said. After that I did the following:
hbase(main):003:0> flush '.META.'
0 row(s) in 0.1350 seconds
hbase(main):004:0> major_compact '.META.'
0 row(s) in 0.1070 seconds
hbase(main):005:0> create table 't3','f3'
NoMethodError: undefined method `table' for #
hbase(main):006:0> create 't3','f3'
ERROR: org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException: org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException: Timed out (10000ms)
Also, when I scan .META., it gives:
hbase(main):007:0> scan '.META.'
ROW COLUMN+CELL ERROR: java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1326275904462/Put/vlen=12, .META.,,1/info:serverstartcode/1326275904462/Put/vlen=8}
Go through this link; it may solve your problem:
http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/18868
I am not getting any help from this link.
I tried giving 127.0.0.1 localhost as they suggested in the link,
but I am still not able to solve the problem;
I am getting the same error.
Hi,
I have done this,
hbase(main):002:0> scan '-ROOT-'
ROW COLUMN+CELL
.META.,,1 column=info:server, timestamp=1326366857720, value=slavea:
60020
.META.,,1 column=info:serverstartcode, timestamp=1326366857720, valu
e=1326366835760
1 row(s) in 0.2230 seconds
hbase(main):003:0> deleteall '-ROOT-','.META.,,1'
0 row(s) in 0.0170 seconds
hbase(main):004:0> scan '-ROOT-'
ROW COLUMN+CELL
0 row(s) in 0.0420 seconds
hbase(main):005:0> unassign '.META.,,1'
ERROR: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hbase.UnknownRegionException: .META.,,1
at org.apache.hadoop.hbase.master.HMaster.unassign(HMaster.java:1039)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:570)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1039)
hbase(main):012:0> create 'tff','ft'
ERROR: org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server null for region , row 'tff,,00000000000000', but failed after 7 attempts.
Exceptions:
org.apache.hadoop.hbase.TableNotFoundException: .META.
org.apache.hadoop.hbase.TableNotFoundException: .META.
org.apache.hadoop.hbase.TableNotFoundException: .META.
org.apache.hadoop.hbase.TableNotFoundException: .META.
org.apache.hadoop.hbase.TableNotFoundException: .META.
org.apache.hadoop.hbase.TableNotFoundException: .META.
org.apache.hadoop.hbase.TableNotFoundException: .META.
Now any help please...?
Hey ,
I am also trying to connect to HBase from a Java application that is hosted on one machine, with HBase on another machine. If both are on the same machine they work perfectly, but on different machines they do not. I have changed the hbase-site.xml in my application as:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:54310/hbase</value>
  <description>The directory shared by region servers. Should be
  fully-qualified to include the filesystem to use. E.g:
  hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR</description>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
  <description>The mode the cluster will be in. Possible values are
  false: standalone and pseudo-distributed setups with managed Zookeeper
  true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)</description>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master</value>
  <description>Comma separated list of servers in the ZooKeeper Quorum.
  If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of
  servers which we will start/stop ZooKeeper on.</description>
</property>
<property>
  <name>hbase.master</name>
  <value>master:60010</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
But the application is throwing an exception, as it is still trying to connect to localhost rather than to master.
Please share your java code.
Hi Devsri,
#Sample Code: Create HBase table
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.HBaseAdmin;
public class HBaseTableOperation {
private static Configuration conf;
/**
* Initialize class Object
*/
public HBaseTableOperation() {
conf = new Configuration();
conf = HBaseConfiguration.create(conf);
final String HBASE_CONFIGURATION_ZOOKEEPER_QUORUM = "hbase.zookeeper.quorum";
final String HBASE_CONFIGURATION_ZOOKEEPER_CLIENTPORT = "hbase.zookeeper.property.clientPort";
conf.set(HBASE_CONFIGURATION_ZOOKEEPER_QUORUM, "IP_OF_ZOOKEEPER_MACHINE");
conf.setInt(HBASE_CONFIGURATION_ZOOKEEPER_CLIENTPORT, 2181);
}
/**
* Logger
*/
private static final Log log = LogFactory.getLog(HBaseTableOperation.class);
/**
* checks whether table exists or not in hbase
*
* @param tableName
* @return
*/
public boolean tableExists(String tableName) {
HBaseAdmin admin = null;
try {
admin = new HBaseAdmin(conf);
if (admin.tableExists(tableName)) {
return true;
}
} catch (MasterNotRunningException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (ZooKeeperConnectionException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return false;
}
/**
* Create the HBase Table
*
* @param tableName
* HBase Table Name
* @param colFamilyNames
* HBase Table column families
*/
public void createTable(final String tableName, final String[] colFamilyNames) {
HBaseAdmin hbase;
try {
hbase = new HBaseAdmin(conf);
HTableDescriptor table = new HTableDescriptor(tableName);
int size = colFamilyNames.length;
for (int i = 0; i < size; i++) {
HColumnDescriptor colFamily = new HColumnDescriptor(colFamilyNames[i].getBytes());
table.addFamily(colFamily);
}
hbase.createTable(table);
} catch (MasterNotRunningException e) {
log.error("HBase master not running :"+e);
} catch (ZooKeeperConnectionException e) {
log.error("Error while connecting to Zookeeper server :"+e);
} catch (IOException e) {
log.error("IOException :"+e);
}
}
}
Replace IP_OF_ZOOKEEPER_MACHINE with the zookeeper machine IP.
Thanks,
Ankit
Hi,
I have successfully started Hadoop and HBase. Now the problem is that when I give hbase.rootdir as hdfs://master:54310/hbase, HMaster doesn't start:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:54310/hbase</value>
  <description>The directory shared by region servers.</description>
</property>
but if I give nothing, i.e. like the following,
<value>hdfs://master:54310/hbase</value>
<description>The directory shared by region servers.</description>
HMaster starts and works fine.
Then it is taking the tmp directory by default. So what should I give for rootdir, and how do I configure it? What is the problem? Please give me a solution for this.
Please share the entries of the /etc/hosts file, and also the entries of the Hadoop core-site.xml file.
On master machine:
/etc/hosts:
master 172.25.20.74
slave 172.25.20.71
core-site.xml:
<property>
  <name>master</name>
  <value>hbase-master:60000</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:54310/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master</value>
</property>
regionserver file:
master
slave
On slave machine:
/etc/hosts :
master 172.25.20.74
slave 172.25.20.71
core-site and regionserver file: same as of master.
In this case the master doesn't start, but if I delete the hbase.rootdir property, then the master starts and tables are also created successfully.
Above I wrote a wrong configuration for hbase.master. It should be like this:
<property>
  <name>hbase.master</name>
  <value>master:60000</value>
</property>
Please give me a solution so that when I give the hbase.rootdir property, the master starts and works properly.
Hi Pranay,
Thanks for your efforts.
But I wanted to have a look at the core-site.xml which you can find in your HADOOP_HOME/conf directory. Please provide the entries for that file,
Regards,
Ankit
Sorry Ankit, I misunderstood and gave you the hbase-site file. Here are the Hadoop conf files:
conf/core-site.xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>
What's the solution?
It seems everything is correct.
Try to use hdfs://172.25.20.74:54310/hbase as value of hbase.rootdir property.
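That is, the hbase.rootdir property would look like this (using the master IP from the /etc/hosts entries shared above):

```xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://172.25.20.74:54310/hbase</value>
</property>
```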
no...still not starting...
share the hbase master logs or mail me on ankitjaincs06@gmail.com
I have mailed you. Please check it out.
I went through the log file you mailed. I saw the below error line in the log file:
org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 42, server = 41)
Your Hadoop and HBase versions are not compatible with each other.
Which versions of Hadoop and HBase are you using?
I am using hadoop 0.20.2 and hbase 0.90.4
As I read in the Apache HBase book, this combination has no durable sync; for that they said to set the append property to true, so I gave that as well. It works, but when I give the hbase.rootdir property the master doesn't start.
I have successfully started hadoop and hbase. Thanks for your support.
Hey, I am using Hadoop 0.20.2 and HBase 0.92.0.
Hadoop works fine, but when I start HBase, ZooKeeper on the master and 2 slaves starts, but the region server does not start, giving the following error:
hostname nor servname provided or not known.
Please help me.
Hi Tahreem,
Did you add the mapping of the regionservers specified in HBase's regionservers conf file into the /etc/hosts file?
For example:
If your regionservers conf file of HBase contains the following entries:
regionserver1
regionserver2
then your /etc/hosts file must have following entries:
IP_of_regionserver1 regionserver1
IP_of_regionserver2 regionserver2
Thanks,
Ankit
I have added all the regionservers to the /etc/hosts file, but nothing seems to work. Is there any other way (other than HBase) of retrieving a database in Hadoop? Please help.
Hey, I have successfully started HBase but don't know how to retrieve tables. Can you please give an example of retrieving data from HBase tables?
Hi Tahreem,
Below is the sample code for the same.
// Get a row of a table. (This fragment assumes the same class and imports as the
// earlier sample, plus org.apache.hadoop.hbase.client.HTable, Get, Result and
// org.apache.hadoop.hbase.util.Bytes.)
private static Configuration conf;
/**
* Initialize class Object
*/
public HBaseTableOperation() {
conf = new Configuration();
conf = HBaseConfiguration.create(conf);
final String HBASE_CONFIGURATION_ZOOKEEPER_QUORUM = "hbase.zookeeper.quorum";
final String HBASE_CONFIGURATION_ZOOKEEPER_CLIENTPORT = "hbase.zookeeper.property.clientPort";
conf.set(HBASE_CONFIGURATION_ZOOKEEPER_QUORUM, "hadoop-namenode");
conf.setInt(HBASE_CONFIGURATION_ZOOKEEPER_CLIENTPORT, 2181);
}
// function to get an row of a table.
public String getRow(String tableName, String rowName,String colFamilyName, String [] colName)
{
String result = colName[0];
try
{
HTable table = new HTable(conf, tableName);
byte[] rowKey = Bytes.toBytes(rowName);
Get getRowData = new Get(rowKey);
Result res = table.get(getRowData);
for(int j=0 ; j < colName.length ; j++)
{
byte[] obtainedRow = res.getValue(Bytes.toBytes(colFamilyName),Bytes.toBytes(colName[j]));
System.out.println(colName[j]);
String s = Bytes.toString(obtainedRow);
if(j==0)
result = colName[j] + "=" + s ;
else
result = result + "&" + colName[j] + "=" + s;
System.out.println(s);
}
} catch (IOException e)
{
System.out.println("Exception occurred in retrieving data");
}
return result;
}
Note: Replace "hadoop-namenode" with the hostname or IP of your ZooKeeper machine.
Hi Ankit,
Followed the steps and set up a cluster with 2 nodes. When starting HBase from the master, the regionserver on the master is up, but I am not able to bring up regionserver2 on the remote machine. HBase is not able to establish a connection to the master, whereas ZooKeeper did.
Seems my HBase master is tied to localhost; netstat gives me this for HMaster:
tcp6 0 0 127.0.0.1:60000 :::* LISTEN 5686/java
Any help appreciated..
********************************************
From Log file of remote region server:
2012-03-09 10:18:02,520 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hadoop-namenode/10.172.41.195:2181, sessionid = 0x135f45cc9070004, negotiated timeout = 180000
......
2012-03-09 10:21:11,912 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect to Master server at hbase-master:60000
2012-03-09 10:22:12,004 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to connect to master. Retrying. Error was:
java.net.ConnectException: Connection refused
Hi Ankit,
A little progress from the above:
As a workaround, if I comment out the 127.0.0.1 localhost entry and map localhost to the IP in /etc/hosts on my master node, my HMaster is tied to the IP and the remote regionserver starts fine.
Any idea how to fix this?
Hi Dreamer,
Please read the following URL: http://wiki.apache.org/hadoop/Hbase/Troubleshooting
Hi Ankit,
Missed this link before.
Mapping the fully-qualified name to the IP fixed the issue.
Thanks.
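For anyone hitting the same issue, the /etc/hosts entry that works is shaped like this (the IP and hostnames below are placeholders; use your own):

```
192.168.1.10   hbase-master.example.com   hbase-master
127.0.0.1      localhost
```

The key point is that the machine's own fully-qualified hostname maps to its real IP, not to 127.0.0.1, so the master binds its RPC port on an address remote regionservers can reach.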
hi ankit,
How to access HBase through Hadoop in Cloudera Manager?
Hi Vaddi,
Could you please explain your problem in detail?
Hi Ankit,
I installed Cloudera Manager successfully.
After that, when I check the status of the HBase master, I get the following error:
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/src/cmf/monitor/master/__init__.py", line 87, in collect
json = simplejson.load(urllib2.urlopen(self._metrics_url))
File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen
return _opener.open(url, data)
File "/usr/lib64/python2.4/urllib2.py", line 358, in open
response = self._open(req, data)
File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
'_open', req)
File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
result = func(*args)
File "/usr/lib64/python2.4/urllib2.py", line 1032, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib64/python2.4/urllib2.py", line 1006, in do_open
raise URLError(err)
URLError:
Hi
Can you share how to install HBase in pseudo-distributed mode?
Thanks in advance.
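A minimal pseudo-distributed hbase-site.xml would look roughly like this (the HDFS URL and port are assumptions and must match the fs.default.name in your core-site.xml):

```xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:54310/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>
```

With hbase.cluster.distributed set to true and everything pointed at localhost, all daemons run on one machine but in separate JVMs.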
Hi, I am new to HBase.
Please help me: after installation, what should I do?
I have to create a POC in HBase, but I am not getting what I should do.
Please share some basic commands and a real example.
Thank you very much.
Hi Ankit, I am getting a FATAL error in the HBase service of one client.
How can I check whether the data in HBase is distributed or not?
Please help me out; thanks in advance.
It is a very useful blog! Thank you!
Hi Ankit, very good post. I need quick help. I followed your post and have implemented a 4-node cluster, but instead of 4 I can only see 1 region server. I have 4 nodes c1,c2,c3,c4 in Hadoop, where c1 is master as well as a datanode, and the same in HBase too. But with the above config I have only c1 listed in my region servers. What might be the problem?
Hi,
Did you mention the machine-name/IP of all four regionservers in the $HBASE_INSTALL_DIR/conf/regionservers file?
Yes: c1, c2, c3, c4. In my Hadoop cluster, c1 is a datanode as well as the namenode.
Here's my Query in detail http://stackoverflow.com/questions/11606675/hbase-site-config-for-distributed-mode-shows-only-one-region-server
Please share the logs of regionservers.
Thanks,
Ankit
Hey Ankit,
I am getting the following errors while starting HBase.
ERROR: org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException: org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException: Timed out (10000ms)
from my master :
root@ctodev:/opt/softwares/hbase-0.90.6/bin# jps
15307 HMaster
15257 HQuorumPeer
8502 NameNode
15509 Jps
12299 Bootstrap
8688 SecondaryNameNode
8762 JobTracker
Slave :
root@wfdp-desktop:/opt/softwares/hbase-0.90.6/conf# jps
8535 HRegionServer
8844 Jps
4124 DataNode
4260 TaskTracker
Thanks,
Vaibhav.
Ankit,
You are doing a great job. Can you please post about MongoDB as well?
Hi Ankit,
When I start HBase, the following errors appear:
hbase-master: starting zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-hduser-zookeeper-HHDPISHU.out
hbase-master: java.io.IOException: Could not find my address: localhost in list of ZooKeeper quorum servers
hbase-master: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.writeMyID(HQuorumPeer.java:134)
hbase-master: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:61)
Hi Aniket, I have tried the steps you described, but got the following error on my slave machine:
FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: Master rejected startup because clock is out of sync
org.apache.hadoop.hbase.ClockOutOfSyncException: org.apache.hadoop.hbase.ClockOutOfSyncException: Server slave,60020,1381986646465 has been rejected; Reported time is too far out of sync with master. Time difference of 59713ms > max allowed of 30000ms
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hbase.ClockOutOfSyncException: Server slave,60020,1381986646465 has been rejected; Reported time is too far out of sync with master. Time difference of 59713ms > max allowed of 30000ms
at org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:235)
at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:169)
2013-10-17 10:40:46,778 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server slave,60020,1381986646465: Unhandled exception: org.apache.hadoop.hbase.ClockOutOfSyncException: Server slave,60020,1381986646465 has been rejected; Reported time is too far out of sync with master. Time difference of 59713ms > max allowed of 30000ms
java.lang.NullPointerException
at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1918)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:799)
at java.lang.Thread.run(Thread.java:722)
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/rs/slave,60020,1381986646465
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
2013-10-17 10:40:46,916 INFO org.apache.zookeeper.ZooKeeper: Session: 0x141c4d398b70001 closed
2013-10-17 10:40:46,916 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2013-10-17 10:40:46,916 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server null; zookeeper connection closed.
2013-10-17 10:40:46,916 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver60020 exiting
2013-10-17 10:40:46,917 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
2013-10-17 10:40:46,917 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
2013-10-17 10:40:46,917 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown hook thread.
2013-10-17 10:40:46,918 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
Hello Sanjay,
It seems the clock time of your slave and master machines is out of sync.
You need to sync the time using an NTP server or another mechanism.
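The check that rejects the regionserver can be sketched in shell. The 30000 ms limit is HBase's default maximum clock skew; master_ms here is only a stand-in for the master's reported time (on a real cluster the master supplies it), so this sketch passes by construction:

```shell
# Sketch of the clock-skew check HBase's master applies at regionserver startup.
# master_ms is a placeholder equal to the local clock; on a real cluster it
# would be the master's clock, and the difference is what gets rejected.
local_ms=$(( $(date +%s) * 1000 ))
master_ms=$local_ms
skew=$(( local_ms - master_ms ))
abs_skew=${skew#-}          # absolute value of the skew
max_skew=30000              # default maximum allowed skew in milliseconds
if [ "$abs_skew" -gt "$max_skew" ]; then
  echo "clock out of sync: ${abs_skew}ms > ${max_skew}ms"
else
  echo "clock ok (skew ${abs_skew}ms)"
fi
```

In practice, running ntpd (or a cron'd ntpdate) on every node keeps the skew well under this limit.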
Hi Aniket, after correcting the time problem, I am now getting the following problem:
2013-10-17 17:24:06,232 [main-SendThread(localhost:2181)] WARN
org.apache.zookeeper.ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
How to solve this problem?
Hi Sanjay,
The above issue is zookeeper connectivity issue.
Are you able to create a table in HBase using the shell?
yes
That means the HBase cluster is working fine.
In which scenario are you getting the above exception?
When I try to insert data into the HBase table using a Pig script.
Greetings,
first of all, thank you for your tutorial, very easy to read and understand. I have installed a 6-node cluster with 1 master and 5 regionservers. However, my setup is not working properly and I cannot understand why. I have installed Hadoop and all of its processes are running. After I set up HBase and run start-hbase.sh, it says that all nodes have started, but when I run "jps" I only have Hadoop processes running and nothing from HBase. Then, when I try to use the shell, I get: "ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times".
Also, when I try stop-hbase.sh I get "localhost: no zookeeper to stop because kill -0 of pid 22876 failed with status 1". Any ideas?
Hi ,
I installed HBase in fully-distributed mode. When I try to create a table it hangs, but I checked all the conf files and they are all correct.
I am getting this error:
ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times
Here is some help for this command:
Create table; pass table name, a dictionary of specifications per
column family, and optionally a dictionary of table configuration.
Dictionaries are described below in the GENERAL NOTES section.
Examples:
hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
hbase> create 't1', 'f1', 'f2', 'f3'
hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
hbase> create 't1', 'f1', {SPLITS => ['10', '20', '30', '40']}
hbase> create 't1', 'f1', {SPLITS_FILE => 'splits.txt'}
hbase> # Optionally pre-split the table into NUMREGIONS, using
hbase> # SPLITALGO ("HexStringSplit", "UniformSplit" or classname)
hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
Could you please help me?
Thanks.
Hi Ankit,
I am new to HBase.
My question is: when I execute "bin/stop-hbase.sh" on the master, should it stop all region servers?
In my setup I have two region servers, one on my local machine and the other on another system.
Right now I am facing a problem: when I stop the HBase master, it does not stop the region servers. Is this the default behavior of HBase, or am I missing some configuration?
Hi Ankit,
I am new to HBase.
When I start HBase I get errors like:
[root@Root hbase-0.98.5-hadoop2]# bin/start-hbase.sh
Error: Could not find or load main class org.apache.hadoop.hbase.util.HBaseConfTool
Error: Could not find or load main class org.apache.hadoop.hbase.zookeeper.ZKServerTool
starting master, logging to /usr/lib/hbase/hbase-0.98.5-hadoop2/logs/hbase-root-master-Root.out
Error: Could not find or load main class org.apache.hadoop.hbase.master.HMaster
root@hadoop-slave-1's password: hadoop-master: starting regionserver, logging to /usr/lib/hbase/hbase-0.98.5-hadoop2/logs/hbase-root-regionserver-Root.out
hadoop-master: Error: Could not find or load main class org.apache.hadoop.hbase.regionserver.HRegionServer
My hbase-site.xml is:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoop-master:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>10.0.0.126</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/hduser/hbase/zookeeper</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>hbase.master</name>
  <value>hdfs://hadoop-master:60000</value>
</property>
and my regionservers file is:
hadoop-slave-1
hadoop-master
Please help me; I have spent a lot of time on it, but the issue remains the same.
Check if you have all the jars in the hbase/lib directory. You might not be using the right binaries. (Remember, don't use the source tarball.) :)
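A quick way to check is to count the jars under lib/ (the install path below is a placeholder; point it at your own $HBASE_INSTALL_DIR):

```shell
# If this prints 0, you likely unpacked the source tarball, which ships no
# prebuilt jars in lib/ - download the binary release instead.
HBASE_INSTALL_DIR=${HBASE_INSTALL_DIR:-/usr/lib/hbase}   # placeholder path
jar_count=$(ls "$HBASE_INSTALL_DIR"/lib/*.jar 2>/dev/null | wc -l)
echo "jars in lib/: $jar_count"
```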
ZooKeeper is a great tool for coordinating clusters that may be distributed across different networks and locations. Managing and implementing highly distributed applications is always a complicated task, which is where ZooKeeper comes in: it allows for easier coordination and management by HBase and other distributed components of Hadoop.
HRegionServer is not starting on the master node, but it starts on all slave nodes. Any help is appreciated.
I'm using HBase 0.94.8. I'm getting an error in the log file, "failed to start master", and a problem loading the webpage for the HBase web interface.
Hello Ankit Sir,
I need your help. Do you have any idea about Trafodion? Please help me.
Thanks
Asheesh Jain