Although I am a bit late, it is still worth wishing a happy birthday to the most significant 'Computer Science Thing' I have known since I developed my computer science senses. You might find me biased towards Hadoop, but I am honestly helpless when it comes to it. I started my career as a Hadoop developer, so I'll always have that 'first love' kind of feeling for Hadoop.
Back in 2004, when Doug Cutting started working on a platform for distributed storage and processing, inspired by those two great papers from Google on GFS (the Google File System) and MapReduce, not even he would have thought that it would grow so quickly into one of the most powerful computing platforms. He later named it 'Hadoop' after his kid's toy elephant. And here we are today.
It was mid-2006 when I heard about Hadoop for the first time, at an open source conference held here in Bangalore (India). I had no idea at the time that this was the piece of technology that would spark a revolution in the field of computing. After that I almost forgot all about it. But destiny had already tied Hadoop to me by then.
One fine evening in early 2007, I went to see my sister, who was working on something related to distributed computing at the time. I had actually gone there to get some guidance for my final-year engineering project. That was the incident that changed everything: while asking about something for myself, I ended up with some insights on Hadoop. I have been in love with it ever since, and I am still trying to learn everything about it.
I am sorry if you were expecting this post to be a technical one, like my other posts. This one is just about Hadoop in a totally non-technical way. I still remember that thread from Doug Cutting which said, "Release 0.1.0 of Hadoop is now available". That was April 2nd, 2006. Who would have imagined that 0.1.0 would so quickly turn into 2.0.0? Many thanks to the great community, all the contributors, committers, QAs, QCs and everybody else who has helped Hadoop grow so fast, and in doing so has helped people like me.