Wednesday, December 19, 2012

India to Open Up Its Data

Here comes a moment to be proud of. India has joined a select group of over 20 countries whose governments have launched open data portals. With a view to improving government transparency and efficiency, www.data.gov.in (currently in beta) will provide access to a valuable repository of datasets from government departments, ministries, agencies, and autonomous bodies.


Data Portal India is a platform supporting the Open Data initiative of the Government of India. The portal is intended to be used by Ministries/Departments/Organizations of the Government of India to publish datasets and applications for public use. It aims to increase transparency in the functioning of the Government and also opens avenues for many more innovative uses of government data to offer different perspectives.

The entire product is available for download on GitHub, the open-source code-sharing platform.

Open data will be made up of “non-personally identifiable data” collected, compiled, or produced during the normal course of governing. It will be released under an unrestricted license, meaning it is freely available for everyone to use, reuse, or distribute, though citations will be required.
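To give a feel for how a published dataset could be consumed, here is a minimal Java sketch that downloads a CSV file and prints its first few rows. The dataset URL is a hypothetical placeholder, not an actual resource on the portal.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Minimal sketch: fetch a CSV dataset over HTTP and print a small preview.
// The URL below is a hypothetical placeholder, not a real data.gov.in resource.
public class FetchDataset {
    public static void main(String[] args) throws Exception {
        URL datasetUrl = new URL("https://data.gov.in/path/to/some-dataset.csv"); // placeholder
        try (BufferedReader in = new BufferedReader(new InputStreamReader(datasetUrl.openStream()))) {
            String line;
            int shown = 0;
            // Print only the first few lines of the file
            while ((line = in.readLine()) != null && shown++ < 5) {
                System.out.println(line);
            }
        }
    }
}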

For detailed information and all the terms and conditions, you can visit the official website.

Exception in thread "main" java.lang.NoSuchFieldError: type at org.apache.hadoop.hive.ql.parse.HiveLexer.mKW_CREATE(HiveLexer.java:1602)

So you have successfully configured Hadoop, everything is running perfectly fine, and you have decided to give Hive a try. But, oops... as soon as you try to create your very first table, you find yourself staring at something like this:


Exception in thread "main" java.lang.NoSuchFieldError: type
        at org.apache.hadoop.hive.ql.parse.HiveLexer.mKW_CREATE(HiveLexer.java:1602)
        at org.apache.hadoop.hive.ql.parse.HiveLexer.mTokens(HiveLexer.java:6380)
        at org.antlr.runtime.Lexer.nextToken(Lexer.java:89)
        at org.antlr.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:133)
        at org.antlr.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:127)
        at org.antlr.runtime.CommonTokenStream.setup(CommonTokenStream.java:132)
        at org.antlr.runtime.CommonTokenStream.LT(CommonTokenStream.java:91)
        at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:547)
        at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:438)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:416)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

No need to worry. The problem lies with the antlr-*.jar present inside your HIVE_HOME/lib directory. Just make sure you don't have any other antlr-*.jar on your classpath. If it still doesn't work, download the latest version from the ANTLR website and put it inside your HIVE_HOME/lib. Restart Hive and you are good to go...
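If you want to confirm which antlr jar is actually being picked up, a tiny Java check like the one below can help: it prints the location the ANTLR runtime Lexer class was loaded from when compiled and run with the same classpath Hive uses. This is just a diagnostic sketch, not something shipped with Hive.

// Diagnostic sketch: print which jar the ANTLR runtime Lexer class is loaded from,
// so a conflicting antlr-*.jar on the classpath becomes visible.
public class AntlrWhich {
    public static void main(String[] args) {
        Class<?> lexerClass = org.antlr.runtime.Lexer.class;
        System.out.println(lexerClass.getProtectionDomain().getCodeSource().getLocation());
    }
}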

NOTE: If you want to see how to configure Hadoop, you can go here.

How to work with Avro data using Apache Spark (Spark SQL API)

We all know how cool Spark is when it comes to fast, general-purpose cluster computing. Apart from the core APIs Spark also provides a rich ...