Train to be a Hadoop Jedi Master for $50 at Data Day Austin

At Infochimps, we believe increasing the number of people familiar with handling and making sense of big data is good for the web community as a whole.  That’s why we are happy to contribute our expertise to Data Day Austin, an event put together by Lynn Bender at GeekAustin and our friends at Riptano.

Data Day Austin includes both basic and advanced training in Hadoop as well as Cassandra.  It takes place on Saturday, January 29, 2011, at the Norris Conference Center.  The speaker list is as follows:

Introduction to Cassandra for Java Developers
Nate McCall – Software Developer, Riptano

I Know Where You Are: an introduction to working with location data
Sandeep Parikh – Principal, Robotten Labs
Shaun Dubuque – Co-founder, Argia, Inc
Thinking of developing location-based apps? Sandeep and Shaun show you sources for location data and strategies for managing it.

Additional presentations and workshops to be announced shortly.

Hadoop Deep Dive includes:

A day of Hadoop training commonly costs a few thousand dollars. As part of Data Day Austin, Austin’s top Hadoop talent is teaming up to give you a full day of instruction. These are not mere presentations: if you put in the effort, you can leave Data Day Austin with a working knowledge of Hadoop.

Hadoop Introduction and Tutorial
Steve Watt – IBM Big Data Lead, IBM Software Strategy
This introduction includes MapReduce, the Hadoop Distributed File System, and the Hadoop ecosystem.
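The core MapReduce pattern the introduction covers can be sketched in a few lines of plain Ruby (an illustration of the idea only, not Hadoop code — the method names and sample data here are made up):

```ruby
# Toy word count illustrating the MapReduce pattern:
# the map phase emits (word, 1) pairs, the framework groups
# pairs by key, and the reduce phase sums each word's counts.

def map_phase(lines)
  lines.flat_map { |line| line.split.map { |word| [word.downcase, 1] } }
end

def reduce_phase(pairs)
  pairs.group_by(&:first)
       .map { |word, group| [word, group.map(&:last).sum] }
       .to_h
end

lines  = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(map_phase(lines))
counts["the"]  # => 2
```

Hadoop runs the same two phases across many machines, shuffling the grouped pairs between the map and reduce steps; the Hadoop Distributed File System keeps the input and output close to the workers.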

Higher Order Languages for Hadoop I – Wukong
Flip Kromer – Founder and CTO, Infochimps
Wukong allows you to treat your dataset like:
* a stream of lines when it’s efficient to process by lines
* a stream of field arrays when it’s efficient to deal directly with fields
* a stream of lightweight objects when it’s efficient to deal with objects
No one knows more about Wukong than Flip Kromer.
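Those three views of the same record can be sketched in plain Ruby (this is an illustration of the idea, not the actual Wukong API, and the field names are made up):

```ruby
# One tab-separated record, seen three ways.
record = "austin\ttx\t790000"

# 1. As a raw line, when line-level processing is enough:
line = record

# 2. As an array of fields, when you work with fields directly:
fields = record.split("\t")   # ["austin", "tx", "790000"]

# 3. As a lightweight object, when named accessors are clearer:
City = Struct.new(:name, :state, :population)
city = City.new(*fields)
city.state                    # "tx"
```

Wukong lets you pick whichever view is most efficient for a given step and streams records through it under Hadoop.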

Higher Order Languages for Hadoop II – Pig
Jacob Perkins – Hadoop Engineer, Infochimps
Pig is a Hadoop extension that simplifies Hadoop programming by giving you a high-level data processing language while keeping Hadoop’s simple scalability and reliability.

Web Crawling and Data Gathering with Apache Nutch
Steve Watt – IBM Big Data Lead, IBM Software Strategy
The first phase of any analytics pipeline is finding and loading the data. Apache Nutch is a Hadoop-based web crawler and an excellent tool for pulling content down from the web and loading it into HDFS, where it becomes available for Hadoop analytics. This session will teach you how to install and configure Nutch, how to use it to crawl and gather targeted content from the web, and how to fine-tune your crawls through the Nutch API.

Hadoop Analytics for the Business Professional
(BigSheets demonstration with multiple analytic scenarios)
Instructor to be announced shortly

Additional workshops/presentations to be announced…

Be sure to register soon, as Early Bird pricing is currently in effect.  For comments, questions, or sponsorship opportunities, contact lynnbender@geekaustin.org
