Friday, April 26, 2013
Hadoop Herd : When to use What...
Eight years ago, not even Doug Cutting would have thought that the tool he named after his kid's soft toy would so soon become a rage and change the way people and organizations look at their data. Today, Hadoop and BigData have almost become synonyms. But Hadoop is not just Hadoop anymore. Over time it has evolved into one big herd of tools, each meant to serve a different purpose, but glued together they give you a power-packed combo.
Having said that, one must be careful while choosing these tools for a specific use case, as one size doesn't fit all. What works for someone else might not be that productive for you. So, here I am trying to show you which tool should be picked in which scenario. It's not a big comparative study, just a short intro to some very useful tools. And I am really not an expert or an authority, so there is always scope for suggestions. Please feel free to comment or suggest if you have any. I would love to hear from you. Let's get started :
1- Hadoop : Hadoop is basically 2 things: a distributed file system (HDFS), which constitutes Hadoop's storage layer, and a distributed computation framework (MapReduce), which constitutes its processing layer. You should go for Hadoop if your data is very huge and you have offline, batch-processing kind of needs. Hadoop is not suitable for real time stuff. You set up a Hadoop cluster on a group of commodity machines connected together over a network. You then store huge amounts of data into HDFS and process this data by writing MapReduce programs (or jobs). Being distributed, HDFS is spread across all the machines in the cluster, and MapReduce processes this scattered data locally by going to each machine, so that you don't have to relocate this gigantic amount of data.
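To give you a feel of the workflow, here is a minimal sketch (the HDFS paths and the examples jar name are illustrative and will differ depending on your distribution):

    hadoop fs -mkdir /user/me/input                  # create an input directory in HDFS
    hadoop fs -put access.log /user/me/input         # copy a local file into HDFS
    hadoop jar hadoop-examples.jar wordcount /user/me/input /user/me/output   # run the bundled WordCount MapReduce job
    hadoop fs -cat /user/me/output/part-*            # read the result back from HDFS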
2- HBase : HBase is a distributed, scalable big data store, modelled after Google's BigTable. It stores data as key/value pairs. It's basically a database, a NoSQL database, and like any other database its biggest advantage is that it gives you random read/write capability. As I mentioned earlier, Hadoop is not very good for your real time needs, so you can use HBase to serve that purpose. If you have some data which you want to access in real time, you could store it in HBase. HBase has its own set of very good APIs which can be used to push/pull the data. Not only this, HBase can be seamlessly integrated with MapReduce so that you can do bulk operations like indexing, analytics etc.
Tip : You could use Hadoop as the repository for your static data and HBase as the datastore that will hold data which is probably going to change over time after some processing.
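Just to show how the Java client API feels, here is a minimal sketch assuming an HBase 0.9x-style client and an already existing 'users' table with an 'info' column family (both names are made up for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseHello {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();   // picks up hbase-site.xml from the classpath
            HTable table = new HTable(conf, "users");            // the 'users' table is assumed to exist

            Put put = new Put(Bytes.toBytes("row1"));             // write one cell
            put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
            table.put(put);

            Get get = new Get(Bytes.toBytes("row1"));             // read it back, randomly, in real time
            Result result = table.get(get);
            System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));

            table.close();
        }
    }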
3- Hive : Originally developed by Facebook, Hive is basically a data warehouse. It sits on top of your Hadoop cluster and provides you an SQL-like interface to the data stored in it. You can then write SQL-ish queries using Hive's query language, called HiveQL, and perform operations like store, select, join and much more. It makes processing a lot easier as you don't have to do lengthy, tedious coding. Write simple Hive queries and get the results. Isn't that cool? RDBMS folks will definitely love it. Simply map HDFS files to Hive tables and start querying the data. Not only this, you could map HBase tables as well and operate on that data.
Tip : Use Hive when you have warehousing needs, you are good at SQL and you don't want to write MapReduce jobs. One important point though: Hive queries get converted into a corresponding MapReduce job under the hood, which runs on your cluster and gives you the result. Hive does the trick for you. But each and every problem cannot be solved using HiveQL. Sometimes, if you need really fine-grained and complex processing, you might have to fall back to MapReduce.
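To give you a feel of HiveQL, here is a tiny sketch (the table, its columns and the HDFS path are all made up for illustration):

    -- map a tab-separated file already sitting in HDFS to a Hive table
    CREATE EXTERNAL TABLE page_views (user_id STRING, url STRING, view_time STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    LOCATION '/user/me/page_views';

    -- a plain SQL-style aggregation; Hive turns this into a MapReduce job under the hood
    SELECT url, COUNT(*) AS hits
    FROM page_views
    GROUP BY url
    ORDER BY hits DESC
    LIMIT 10;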
4- Pig : Pig is a dataflow language that allows you to process enormous amounts of data very easily and quickly by repeatedly transforming it in steps. It basically has 2 parts: the Pig interpreter and the language, Pig Latin. Pig was originally developed at Yahoo, and they use it extensively. Like Hive queries, Pig Latin scripts also get converted into MapReduce jobs to give you the result. You can use Pig for data stored both in HDFS and HBase very conveniently. Just like Hive, Pig is also really efficient at what it is meant to do. It saves a lot of effort and time by allowing you to skip writing MapReduce programs and do the operation through straightforward Pig queries.
Tip : Use Pig when you want to do a lot of transformations on your data and don't want to take the pain of writing MapReduce jobs.
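Here is what the same top-URLs computation from the Hive example might look like in Pig Latin (again, the path and fields are made up):

    -- load the raw tab-separated data from HDFS
    views  = LOAD '/user/me/page_views' USING PigStorage('\t')
             AS (user_id:chararray, url:chararray, view_time:chararray);
    -- transform it step by step: group, count, sort
    by_url = GROUP views BY url;
    hits   = FOREACH by_url GENERATE group AS url, COUNT(views) AS cnt;
    top    = ORDER hits BY cnt DESC;
    -- write the result back to HDFS
    STORE top INTO '/user/me/top_urls';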
5- Sqoop : Sqoop is a tool that allows you to transfer data between relational databases and Hadoop. It supports incremental loads of a single table or a free-form SQL query, as well as saved jobs which can be run multiple times to import updates made to a database since the last import. Not only this, imports can also be used to populate tables in Hive or HBase. Sqoop also allows you to export the data back into the relational database from the cluster.
Tip : Use Sqoop when you have lots of legacy data and you want it to be stored and processed over your Hadoop cluster or when you want to incrementally add the data to your existing storage.
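A minimal sketch of an incremental import (the JDBC URL, credentials, table and column names are placeholders):

    sqoop import \
      --connect jdbc:mysql://dbhost/sales \
      --username dbuser -P \
      --table orders \
      --target-dir /user/me/orders \
      --incremental append \
      --check-column order_id \
      --last-value 0    # on the next run, pass the highest order_id already imported (or let a saved job track it)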
6- Oozie : Now you have everything in place and want to do the processing, but you find it crazy to start the jobs and manage the workflow manually all the time, especially when you need to chain multiple MapReduce jobs together to achieve a goal. You would like to have some way to automate all this. No worries, Oozie comes to the rescue. It is a scalable, reliable and extensible workflow scheduler system. You just define your workflows (which are Directed Acyclic Graphs) once and the rest is taken care of by Oozie. You can schedule MapReduce jobs, Pig jobs, Hive jobs, Sqoop imports and even your Java programs using Oozie.
Tip : Use Oozie when you have a lot of jobs to run and want an efficient way to automate everything based on some time (frequency) and data availability.
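A workflow is just an XML file. Here is a minimal sketch of a workflow.xml with a single Hive action (the workflow name, the script name and the ${...} properties are illustrative):

    <workflow-app name="daily-report" xmlns="uri:oozie:workflow:0.2">
        <start to="report"/>
        <action name="report">
            <hive xmlns="uri:oozie:hive-action:0.2">
                <job-tracker>${jobTracker}</job-tracker>
                <name-node>${nameNode}</name-node>
                <script>report.q</script>   <!-- a HiveQL script shipped alongside the workflow -->
            </hive>
            <ok to="end"/>
            <error to="fail"/>
        </action>
        <kill name="fail">
            <message>Hive action failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
        </kill>
        <end name="end"/>
    </workflow-app>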
7- Flume/Chukwa : Both Flume and Chukwa are data aggregation tools that allow you to aggregate data in an efficient, reliable and distributed manner. You can pick data from some place and dump it into your cluster. Since you are handling BigData, it makes more sense to do it in a distributed and parallel fashion, which both these tools are very good at. You just have to define your flows and feed them to these tools, and the rest is done automatically by them.
Tip : Go for Flume/Chukwa when you have to aggregate huge amounts of data into your Hadoop environment in a distributed and parallel manner.
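In Flume (NG), for example, a flow is just a properties file describing a source, a channel and a sink. A minimal sketch, with the agent name, log file and HDFS path made up:

    # tail an application log and push the events into HDFS
    agent.sources  = tail-src
    agent.channels = mem-ch
    agent.sinks    = hdfs-sink

    agent.sources.tail-src.type = exec
    agent.sources.tail-src.command = tail -F /var/log/app/app.log
    agent.sources.tail-src.channels = mem-ch

    agent.channels.mem-ch.type = memory

    agent.sinks.hdfs-sink.type = hdfs
    agent.sinks.hdfs-sink.hdfs.path = /flume/app-logs
    agent.sinks.hdfs-sink.channel = mem-ch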
8- Avro : Avro is a data serialization system. It provides functionality similar to systems like Protocol Buffers and Thrift. In addition, it offers some other significant features like rich data structures; a compact, fast, binary data format; a container file to store persistent data; an RPC mechanism; and pretty simple integration with dynamic languages. And the best part is that Avro can easily be used with MapReduce, Hive and Pig. Avro uses JSON for defining data types.
Tip : Use Avro when you want to serialize your BigData with good flexibility.
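Since Avro schemas are just JSON, a record type might be defined like this (the record name and fields are purely illustrative):

    {
      "type": "record",
      "name": "User",
      "namespace": "com.example.avro",
      "fields": [
        {"name": "id",    "type": "long"},
        {"name": "name",  "type": "string"},
        {"name": "email", "type": ["null", "string"], "default": null}
      ]
    }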
The list is actually pretty big, but I have covered only the most significant tools. Over time, if I feel something else should be mentioned here, I will definitely add it. Comments and suggestions are welcome.
Sunday, April 21, 2013
Hadoop+Ubuntu : The Big Fat Wedding.
Now, here is a treat for all you Hadoop and Ubuntu lovers. Last month, Canonical, the organization behind the Ubuntu operating system, partnered with MapR, one of the Hadoop heavyweights, in an effort to make Hadoop available as an integrated part of Ubuntu through its repositories. The partnership announced that MapR's M3 Edition for Apache Hadoop will be packaged and made available for download as an integrated part of the Ubuntu operating system. Canonical and MapR are also working to develop a Juju Charm that can be used by OpenStack and other customers to easily deploy MapR into their environments.
The free MapR M3 Edition includes HBase, Pig, Hive, Mahout, Cascading, Sqoop, Flume and other Hadoop-related components for unlimited production use. MapR M3 will be bundled with Ubuntu 12.04 LTS and 12.10 via the Ubuntu Partner Archive. MapR also announced that the source code for the component packages of the MapR Distribution for Apache Hadoop is now publicly available on GitHub.
MapR is the only distribution that enables Linux applications and commands to access data directly in the cluster via the NFS interface that is available with all MapR Editions. The MapR M5 and M7 Editions for Apache Hadoop, which provide enterprise-grade features for HBase and Hadoop such as mirroring, snapshots, NFS HA and data placement control, will also be certified for Ubuntu.
Now, as you get Hadoop integrated natively with Ubuntu, it's a lot easier to install it and get going. No more unnecessary downloads and wacky configuration steps. And the best part is the NFS interface available with MapR's distribution, which enables other Linux commands and applications to access the cluster data directly. The Ubuntu/MapR package will be available through the Ubuntu Partner Archive for the 12.04 LTS and 12.10 releases of Ubuntu on the official website starting from April 25, 2013.
For more info you can get the Ubuntu and Hadoop: the perfect match white paper from here.
Monday, April 15, 2013
Is your data really Big(Data)??
The advent of so many notable tools and technologies for handling BigData problems has made the lives of a lot of people and organizations easier. A lot of these are open source, have good support and active communities, and are developed quite actively. But there is another aspect to it. When things become easy, free, well supported and abundant, we often start to over-utilize them. Having said that, I would like to share one incident.
We organize Hadoop meetups here in Bangalore (India). In one of the initial meetings we just decided to exchange views with each other on how we are using Hadoop and other related projects. There I noticed that a lot of folks were either using or planning to use Hadoop for problems which could easily be solved using traditional systems. In fact, they could be solved in a much better and more efficient way. There was absolutely no need to use Hadoop for these kinds of problems. So it raised a question in my mind: are we really getting the 'point'? To me it seemed like those folks were trying to stitch a piece of cloth using a sword.
From my experience, I have learned one thing. Even if we have the strongest of weapons, we can't win a battle if we are not using it at the right spot at the right time. The same holds true for the industry. Normally we tend to use a particular 'thing' for all our needs if we find that it has worked for us in the past. There is no harm in that. It is human tendency to try to make things swift. But this doesn't always work, and the same is the case when it comes to BigData.
First of all, BigData is not an absolute term. It is rather relative: relative to the resources that we have. For example, 1PB might be big enough for me, but for an internet giant, say Google, it is still not that big. So how do you decide whether the data you are going to handle qualifies as BigData or not? The rule of thumb is that once you cross the threshold beyond which you are no longer able to handle your data with the resources and systems you already have, you can assume that your data has grown into BigData. But in the process we should always keep one thing in mind: are we really able to exploit the resources we already have? Not to offend anyone, but I have seen it a couple of times that folks are not using their systems to the fullest and are turning towards new systems, meant for completely different problems, to solve their issues.
For instance, if somebody wants to run real time ad-hoc queries over his or her 1TB data set, he or she could do it pretty efficiently using MySQL. Planning to use Hadoop or HBase in such a situation makes no sense. Moreover, it would be a waste of systems and resources, at least in my view.
Long story short, 'think well before you act'. Analyze your data and the requirements properly and then conclude whether you are really gonna face BigData issues. Because, 'with BigData, comes big responsibilities'.
Tuesday, April 2, 2013
Happy Birthday Hadoop
Although I am a bit late, it is still worth wishing the most significant 'Computer Science Thing' I have known since I got my computer science senses. You might find me biased towards Hadoop, but I am actually helpless when it comes to Hadoop. I started my career as a Hadoop developer, so I'll always have that 'first love' kind of feeling for Hadoop.
Back in 2004, when he started working on a platform for distributed storage and processing, inspired by those two great papers from Google on GFS (the Google File System) and MapReduce, not even Doug Cutting would have thought that the platform, which he later named 'Hadoop' after his kid's toy elephant, would so quickly grow into one of the most powerful computing platforms. And here we are today.
It was mid 2006 when I heard about Hadoop for the first time, at an open source conference held here in Bangalore (India). But I never knew at that time that this was the piece of technology that was going to fire a revolution in the field of computing. After that I almost forgot about all of this. But destiny had tied Hadoop to me by then.
One fine evening in early 2007, I went to see my sister, who was working on something related to distributed computing at that time. I had actually gone there to get some guidance for my final year engineering project. That was the incident that changed everything. While asking about something for myself, I ended up with some insights into Hadoop. Since then I have just been in love with it and am still trying to learn everything about it.
I am sorry if you were expecting this post to be a technical one, like my other posts. This one is just about Hadoop in a totally non-technical way. I remember that thread from Doug Cutting which says "Release 0.1.0 of Hadoop is now available". It was April 2nd, 2006. Who would have imagined that this 0.1.0 would so quickly turn into 2.0.0? Many thanks to the great community, all the contributors, committers, QAs, QCs and everybody else who has helped Hadoop grow so fast and thus helped people like me.