Wednesday, February 19, 2014

Fun with HBase shell



The HBase shell is great, especially while getting yourself familiar with HBase. It provides lots of useful commands with which you can perform trivial tasks like creating tables, putting some test data into them, scanning a whole table, fetching data from a specific row, and so on. Executing help in the HBase shell will give you the list of all the shell commands. If you need help on a specific command, type help "command". For example, help "get" will give you a detailed explanation of the get command.

But this post is not about that. We will try to do something fun here, something which is available but less well known. So get ready: start your HBase daemons, open the HBase shell and get your hands dirty.

For those of us who are unaware, the HBase shell is based on JRuby, the Java Virtual Machine-based implementation of Ruby. More specifically, it uses the Interactive Ruby Shell (IRB), which lets you enter Ruby commands and get an immediate response. HBase ships with Ruby scripts that extend IRB with commands wrapping the Java-based APIs, and the shell inherits IRB's built-in support for command history and completion, as well as all regular Ruby commands.

We will start with my favorite: shell commands that provide JRuby-style object-oriented references for tables. What does that mean? Previously, all of the HBase shell commands that act upon a table had a procedural style and always took the name of the table as an argument. Now it is possible to assign a table to a JRuby variable, so there is no more unnecessary typing of table names.

The table reference can then be used to perform data read/write operations such as puts, scans, and gets, along with admin functionality such as disabling, dropping, and describing tables.

For example, previously we always had to specify the table name while performing operations like put, scan, disable, etc.:

hbase(main):000:0> create 'demo', 'cf'
0 row(s) in 1.0970 seconds

hbase(main):001:0> put 'demo', 'row1', 'cf:c1', 'val1'
0 row(s) in 0.0080 seconds

hbase(main):002:0> scan 'demo' 
ROW                                COLUMN+CELL
 row1                              column=cf:c1, timestamp=1378473207660, value=val1                                                      
1 row(s) in 0.0130 seconds

But now you can assign the table to a variable and use it in JRuby shell code:

hbase(main):007 > demo = create 'demo', 'cf'
0 row(s) in 1.0970 seconds

=> Hbase::Table - demo
hbase(main):008 > demo.put 'row1', 'cf:c1', 'val1'
0 row(s) in 0.0640 seconds

hbase(main):009 > demo.scan
ROW                           COLUMN+CELL                                                                        
 row1                            column=cf:c1, timestamp=1331865816290, value=val1                                        
1 row(s) in 0.0110 seconds

You can also assign an existing table to a variable by using the get_table method:

hbase(main):012:0> demo = get_table 'demo'
0 row(s) in 0.0010 seconds

=> Hbase::Table - demo
hbase(main):013:0> demo.put 'row1', 'cf:c1', 'val1'
0 row(s) in 0.0100 seconds
hbase(main):014:0> demo.scan
ROW                                COLUMN+CELL                                                                                      
 row1                                column=cf:c1, timestamp=1378473876949, value=val1
1 row(s) in 0.0240 seconds
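
The same reference also gives you the admin functionality mentioned earlier. For instance (I have omitted the output here, and note that the last two commands really will disable and then delete the table):

hbase(main):015:0> demo.describe
hbase(main):016:0> demo.disable
hbase(main):017:0> demo.drop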

Isn't it handy?

NOTE: You need HBase 0.95 or later for this.

Moving further, have you ever wished for the ability to clear the HBase shell? Quite often you will find the shell completely filled with the results of previously executed queries, but there is no clear command, like the one in your OS shell, to wipe it so that you can concentrate properly on the result of the next query. To overcome this we can again take advantage of the fact that the HBase shell is based on JRuby: all we have to do is create a file named .irbrc in our home directory and add the desired customization code to it.

For our clear-screen example, we could do this:

vi ~/.irbrc

#Clear HBase shell
def cls
  system('clear')
end

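#Run any at-exit hooks registered with IRB when the shell quits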
Kernel.at_exit do
  IRB.conf[:AT_EXIT].each do |i|
    i.call
  end
end

Save the file and open the HBase shell. Execute cls and, if everything goes fine, you will find your shell all clear. Another trick is to enable persistent command history for the HBase shell, so that you can just use the up arrow key to recall a previously executed command. By default HBase maintains command history only for the current session; once you come out of the shell, the history is gone. But with the piece of code shown below you can keep using the history feature even after you restart the HBase shell. To do that, reopen your ~/.irbrc file and append the code shown below, so that your ~/.irbrc looks like this:

vi ~/.irbrc

#Clear HBase shell
def cls
  system('clear')
end

#Enable history
require "irb/ext/save-history"
#No. of commands to be saved. 100 here
IRB.conf[:SAVE_HISTORY] = 100
# The location to save the history file
IRB.conf[:HISTORY_FILE] = "#{ENV['HOME']}/.irb-save-history"

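#Run any at-exit hooks registered with IRB when the shell quits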
Kernel.at_exit do
  IRB.conf[:AT_EXIT].each do |i|
    i.call
  end
end

Save the file and exit. To cross-check, open the HBase shell and start pressing the up arrow key. You should be able to see the commands executed in previous sessions.

Another good feature to have is the ability to list HDFS directories and files from the HBase shell, just like we can from Pig's grunt shell or the Hive shell. You will have to add these lines to your ~/.irbrc file for that:

vi ~/.irbrc

#Clear HBase shell
def cls
  system('clear')
end

#Enable history
require "irb/ext/save-history"
#No. of commands to be saved. 100 here
IRB.conf[:SAVE_HISTORY] = 100
# The location to save the history file
IRB.conf[:HISTORY_FILE] = "#{ENV['HOME']}/.irb-save-history"

#List given HDFS path
def ls(path)
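  #The argument is treated as a path relative to the HDFS root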
  directory="/"+path
  system("$HADOOP_HOME/bin/hadoop fs -ls #{directory}")
end

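#Run any at-exit hooks registered with IRB when the shell quits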
Kernel.at_exit do
  IRB.conf[:AT_EXIT].each do |i|
    i.call
  end
end

Save the file and exit. Open your HBase shell and type:

hbase(main):012:0> ls 'directory_name'

This will list all the directories and files present inside the directory called directory_name.

NOTE: Mind the quotes (' ') in the command shown above.
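
By the way, nothing stops you from going beyond ls. As a rough sketch (fs is just a name I made up here, not something HBase ships with), you could add a generic passthrough to hadoop fs in the same ~/.irbrc:

#Pass the given arguments straight to 'hadoop fs'
def fs(*args)
  system("$HADOOP_HOME/bin/hadoop fs #{args.join(' ')}")
end

and then run any filesystem command from the shell, for example:

hbase(main):013:0> fs '-cat', '/path/to/some/file'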

Another shell feature which I really like is the ability to use HBase filters. For example, if I want to get all the rows from a table called users where the value of the column cf:name is abc, I can do this:

hbase(main):001:0> import org.apache.hadoop.hbase.util.Bytes

hbase(main):002:0> import org.apache.hadoop.hbase.filter.SingleColumnValueFilter

hbase(main):003:0> import org.apache.hadoop.hbase.filter.BinaryComparator

hbase(main):004:0> import org.apache.hadoop.hbase.filter.CompareFilter

hbase(main):005:0> scan 'users', { FILTER => SingleColumnValueFilter.new(Bytes.toBytes('cf'), Bytes.toBytes('name'), CompareFilter::CompareOp.valueOf('EQUAL'), BinaryComparator.new(Bytes.toBytes('abc')))}

This comes in pretty handy when you want to perform some quick checks on your data.
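
Depending on your HBase version, the shell also accepts filters written as plain strings in the HBase filter language, which saves you the imports. If your version supports it, the same scan can be written something like this:

hbase(main):006:0> scan 'users', { FILTER => "SingleColumnValueFilter('cf', 'name', =, 'binary:abc')" }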

That was it for today. I will try to cover a few more things some other day. As always, your comments and suggestions are welcome. Do let me know if there is any scope to make the post better.


