What's Next: Infinite Monkeywrench starting to take form.

We’re starting beta testing of infochimps.org v1.0 — see the following post. In order to start really populating infochimps.org with dataset payloads, the Infinite Monkeywrench is about to get some major love. The following syntax is still evolving, but we’re already using it to do some really fun stuff: here’s a preview.

One of the datasets we’re proud to be liberating is the National Climatic Data Center’s global weather data. To use that data, you need the file describing each of the NCDC weather stations. (I’ll just describe the stations metadata file — the extraction cartoon for the main dataset is basically the same, but like 10 feet wide.)

The weather station metadata lives at ftp://ftp.ncdc.noaa.gov/pub/data/gsod/ish-history.txt. It’s a flat file with a 17-line header, it carries fields describing each station’s latitude, longitude, call sign and all that, and its lines look like

# USAF   WBAN  STATION NAME                  CTRY  ST CALL  LAT    LON     ELEV(.1M)
# 010014 99999 SOERSTOKKEN                   NO NO    ENSO  +59783 +005350 +00500

Here’s what a complete Infinite Monkeywrench script to download that file, spin each line into a table row, and export as CSV, YAML, and marked-up XML would look like:

    #!/usr/bin/env ruby
    require 'imw'; include IMW
    imw_components :datamapper, :flat_file_parser

    # Stage as an in-memory Sqlite3 connection:
    DataMapper.setup(:staging_db, 'sqlite3::memory:')

    # Load the infochimps schema -- this has table and field names including type info
    ncdc_station_schema = ICSSchema.load('ncdc_station_schema.icss.yaml')

    # Create the tables from the schema
    ncdc_station_schema.auto_migrate!

    # Parse the station info file
    stations = FlatFileParser.new({
      :database  => :staging_db,
      :schema    => ncdc_station_schema,
      :each_line => :station,
      :filepaths => [:ripd, ['ftp://ftp.ncdc.noaa.gov/pub/data/gsod/ish-history.txt']],
      :skip_head => 17,
      :cartoon   => %q{
        # USAF   WBAN  STATION NAME                  CTRY  ST CALL  LAT    LON     ELEV(.1M)
          s6    .s5   .s30                           s2.s2.s2.s4  ..ci5   .ci6    .ci5
      },
    })

    # Dump as CSV, YAML and XML
    stations.dump_all :out_file => [:fixd, "weather_station_info"], :formats => [:csv, :xml, :yaml]

Almost all of that is setup and teardown. Since the infochimps schema already carries the field names and types, the only part you really have to figure out is the cartoon:

      s6    .s5   .s30                           s2.s2.s2.s4  ..ci5   .ci6    .ci5

If you’ve used Perl’s unpack(), you’ll get the syntax — this says ‘take the USAF station ID from the initial 6-character string; ignore one junk character; … take one character as the latitude sign, and an integer of up to 5 digits as the scaled latitude, ….’
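
If you’d rather see the same thing in plain Ruby, here’s roughly what that cartoon expands to, hand-rolled with String#unpack. This is an illustration of the fixed-width idea, not IMW code; the field widths are read off the cartoon and the sample line above, and the ×1000 scaling is my reading of ‘scaled latitude’:

    # Hand-rolled equivalent of the cartoon, for illustration only -- not IMW code.
    # 'A<n>' grabs an n-character string (trailing spaces stripped); 'x' skips a junk character.
    line = "010014 99999 SOERSTOKKEN                   NO NO    ENSO  +59783 +005350 +00500"
    usaf, wban, name, ctry_a, ctry_b, st, call, lat, lon, elev =
      line.unpack("A6xA5xA30A2xA2xA2xA4x2A6xA7xA6")
    lat  = lat.to_i    # "+59783"  -> 59783, i.e. 59.783 degrees once you undo the scaling
    lon  = lon.to_i    # "+005350" -> 5350
    elev = elev.to_i   # "+00500"  -> 500, in tenths of a meter per the header's ELEV(.1M)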

Rather load it into a database? Leave the last line out, and stage right into your DB. (Any of MySQL 4.x+, Postgres 8.2+, or SQLite3 will work.)

    # Load parsed files to the 'ncdc_weather' database in a remote MySQL DB store
    DataMapper.setup(:master_weather_db, 'mysql://remotedb.mycompany.com/ncdc_weather')
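
From there, the rest is the same FlatFileParser call as above, just aimed at the new repository. This assumes — my assumption, since the syntax is still evolving — that the :database option takes any DataMapper repository you’ve set up:

    # Same parse as before, staged straight into the remote MySQL store;
    # everything except :database is identical to the script above.
    stations = FlatFileParser.new({
      :database  => :master_weather_db,   # instead of :staging_db
      :schema    => ncdc_station_schema,
      :each_line => :station,
      :filepaths => [:ripd, ['ftp://ftp.ncdc.noaa.gov/pub/data/gsod/ish-history.txt']],
      :skip_head => 17,
      :cartoon   => '...'                 # the same cartoon as above
    })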

Surely a hand-tuned script will do this more thoroughly (and more quickly), but you can write this in a few minutes, set it loose on the gigabytes of data, and do all the rest from the comfort of your DB, your Hadoop cluster, or a script that starts from the populated data structures a YAML file gives you.

Another example. The US National Institute of Standards and Technology (NIST) publishes an authoritative guide to conversion factors for units of measurement. It is, unhelpfully, only available as an HTML table or a PDF file.

If we feed the Infinite Monkeywrench:

  • The schema
      fields:
        - { name: unit_from,            type: str }
        - { name: unit_to,              type: str }
        - { name: conversion_mantissa,  type: float }
        - { name: conversion_exponent,  type: float }
        - { name: is_exact,             type: boolean }
        - { name: footnotes,
            type: seq,
            sequence: str }
  • The cartoon
      { :each    => '//table.texttable/tr[@valign="top"]:not(:first-child)',
        :makes   => :unit_conversion, # a UnitConversion struct
        :mapping => [
          '/td'   => [:unit_from, :unit_to, :conversion_mantissa, :conversion_exponent],
          '/td/b' => :is_exact,
          '/td/a' => :footnotes
        ]
      }

We’d get back something like

    - unit_from:            'dyne centimeter (dyn · cm)'
      unit_to:              ' newton meter (N · m)'
      conversion_mantissa:  1.0
      conversion_exponent:  -7

    - unit_from:            'carat, metric'
      unit_to:              'gram (g)'
      conversion_mantissa:  2.0
      conversion_exponent:  -1
      is_exact:             true

    - unit_from:            'centimeter of mercury (0 °C) <a href="http://physics.nist.gov/Pubs/SP811/footnotes.html#f13">13</a>'
      unit_to:              ' pascal (Pa)'
      conversion_mantissa:  1.33322
      conversion_exponent:  3
      footnotes:            [ '<a href="http://physics.nist.gov/Pubs/SP811/footnotes.html#f13">13</a>' ]
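
For comparison, here’s roughly the scraper you’d otherwise write by hand — plain Ruby with Nokogiri, where the URL and the selectors are my guesses at the layout of the NIST appendix page, not anything IMW emits:

    # Hand-rolled version of the schema + cartoon above, for comparison only.
    # The URL and selectors are assumptions about the NIST page, not IMW output.
    require 'open-uri'
    require 'nokogiri'
    require 'yaml'

    doc  = Nokogiri::HTML(URI.open('http://physics.nist.gov/Pubs/SP811/appenB9.html'))
    rows = doc.css('table.texttable tr[valign="top"]').drop(1)   # drop the header row

    conversions = rows.map do |tr|
      cells = tr.css('td').map { |td| td.text.strip }
      next if cells.size < 4
      { 'unit_from'           => cells[0],
        'unit_to'             => cells[1],
        'conversion_mantissa' => cells[2].to_f,
        'conversion_exponent' => cells[3].to_f,
        'is_exact'            => !tr.css('td b').empty?,
        'footnotes'           => tr.css('td a').map(&:to_html) }
    end.compact

    puts conversions.to_yaml

Everything that script hard-codes — which columns map to which fields, what marks a value as exact, where the footnotes live — is exactly what the schema and cartoon carry declaratively.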

Now with some tweaking, you could do even more (and you’ll find you need to hand-correct a couple rows), but note:

  • Once one person’s done it nobody else has to.
  • This snippet gets you most of the way to a semantic dataset in your choice of universal formats.
  • In fact, there’s so little actual code left over that we can eventually just take the schema + URL + cartoon as entered on the website, crawl the relevant pages, and provide each such dataset as CSV, XML, YAML, JSON, zip’d sqlite3 file … you get the idea — and we can do that without having to run code from strangers on our server.
  • Most importantly, for an end user this isn’t like trusting some random dude’s CSV file uploaded to a site named after a chimpanzee. The transformation from NIST’s data to something useful is so simple you can verify it by inspection. Of course, you can run the scripts yourself to check; or you can trace the Monkeywrench code itself; and once we have digital fingerprinting set up on infochimps.org anyone willing to stake their reputation on the veracity of a file can sign it — but it’s pretty easy to accept something this terse but expressive as valid. Our goal is to give transparent provenance of infochimps.org data to any desired degree.