Monthly Archives: April 2008

The gems of our collection — The best of what's to come

Hooray! The infochimps have been waxy’ed.  Let’s see how the server bonobos stand up.

It’s been suggested that I highlight some of the “gems” of our collection, which we’re going to spend the whole weekend shoveling into the pile. These first few are really deep, and somewhat hard to get / not widely known:

  • Full game state for every play of every baseball game in 2007, majors and minors.  Additionally, for about half of the major league games, *pitch by pitch* trajectory and game state information.  (MLB Gameday)
  • Word frequencies in written text for ~800,000 word tokens (British National Corpus)
  • All the Wikipedia infoboxes, turned on their side and put into a table for each infobox type.
  • 250,000+ Material Safety Data Sheets – the chemical and safety information required by OSHA
  • 100 years of hourly weather data; from 1973 on there are about 10,000 stations all taking hourly readings … put another way, it’s 475,000+ station-years of hourly readings and weighs in at ~15 GB compressed.  (A quick back-of-envelope check is sketched just after this list.)
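
Those weather figures are easy to sanity-check. Here's a quick back-of-envelope sketch in Python, using only the round numbers quoted above (the pre-1973 stations account for the gap between ~350,000 and 475,000+ station-years; the bytes-per-reading number just divides out the quoted 15 GB):

```python
# Rough sanity check of the weather-data figures quoted above.
# All numbers are the round figures from the post, not exact counts.
stations_since_1973 = 10_000           # "about 10,000 stations" from 1973 on
years_since_1973 = 2008 - 1973         # post written in spring 2008
station_years_recent = stations_since_1973 * years_since_1973   # ~350,000

hours_per_year = 24 * 365
total_readings = 475_000 * hours_per_year   # using the 475,000+ station-year figure

print(f"station-years since 1973: ~{station_years_recent:,}")
print(f"total hourly readings:    ~{total_readings:,}")
print(f"compressed bytes/reading: ~{15e9 / total_readings:.1f}")
```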

(Incidentally, many of those datasets sell for inexcusable and malicious prices.  For those with a commercial bent, something tells me there’s room in the market if you’re willing to accept a markup of less than 10,000 times).

These are a bit silly but interesting for their ridiculous depth:
  • A variety of mathematical constants (pi, e, Catalan’s constant, the Golden Ratio, others) calculated, in some cases, to a preposterous 100 billion decimal places (I’ll probably chop them off at a still-ludicrous 500 million).
  • 5000 years of solar eclipse times, 6000 years of precise lunar phases, 6000 years of Venus transits.
  • Odds of dying for every cause of death listed in the US in a given year.

There are also, of course, the well-known collections: IMDb.com, MusicBrainz, DBpedia, the CIA World Factbook, GeoNames, CiteSeer, the Census, the Statistical Abstract, and the like.  So let’s see how much of the low-hanging fruit we can toss up there this weekend.  (The hard parts are adding metadata and getting the non-copyrightable data out of the copyrighted screenscrapes, so what you’ll see are minimal metadata and the non-screenscraped datasets — still beats paying $1200+/GB though.)

[edit: dates for holidays by country, year-by-year odds of dying for all causes of death from the most recent 8 years, NIST values for physical and chemical constants, mechanical properties of common engineering materials, and the spoken and written word-frequency datasets for ~800,000 word tokens should be up later today — if the site is down briefly, we’re pushing that update to the server.  (If the site is down not-briefly, we’ve been del.waxyslashdiggdotted.)  Thanks to my friend Ned for helping do some drudge work to get those out.]

All of Wikipedia's infoboxes & templates, in individual tables for each kind

FINALLY — got the Wikipedia infoboxen posted to the site, along with some tiny fixes.

This is 3000+ tables on everything from ABA Teams through Simpsons Episodes to Zodiac Signs.  There’s a fair amount of cruft in these, but until I have live metadata editing going I’m not going to worry about it: it takes about 8 hours start to finish to process this dataset, and while the tables aren’t perfect, they are perfectly usable.
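
I’m not posting the actual processing code here, but the core move is simple: pull each {{Infobox …}} template out of the article wikitext, group the rows by infobox type, and flatten each group into one table. Here’s a toy sketch of that idea; the regexes and file names are my own illustration, and they choke on nested templates and multi-line values, which the real run has to handle.

```python
import csv
import re
from collections import defaultdict

# Toy sketch: pull {{Infobox ...}} templates out of article wikitext and write
# one flat table per infobox type.  Real infoboxes nest templates and split
# values across lines, so a production parser needs far more care than this.
INFOBOX_RE = re.compile(r"\{\{Infobox[ _]+(?P<type>[^|}]+)(?P<body>.*?)\}\}",
                        re.DOTALL | re.IGNORECASE)
FIELD_RE = re.compile(r"\|\s*(?P<key>[\w ]+?)\s*=\s*(?P<val>[^|\n]*)")

def extract_infoboxes(title, wikitext):
    """Yield (infobox_type, row_dict) for each infobox found in one article."""
    for m in INFOBOX_RE.finditer(wikitext):
        row = {"article": title}
        for f in FIELD_RE.finditer(m.group("body")):
            row[f.group("key").strip().lower()] = f.group("val").strip()
        yield m.group("type").strip().lower().replace(" ", "_"), row

def write_tables(articles):
    """articles: iterable of (title, wikitext) pairs.  Writes one CSV per infobox type."""
    tables = defaultdict(list)
    for title, text in articles:
        for kind, row in extract_infoboxes(title, text):
            tables[kind].append(row)
    for kind, rows in tables.items():
        fields = sorted({k for r in rows for k in r})
        with open(f"infobox_{kind}.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            writer.writeheader()
            writer.writerows(rows)
```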

I have the weather dataset and baseball datasets almost ready to go (along with a whole buncha others), but I’m going to take some time to get the site running better first.  Here’s a rough TODO list:

  1. live, versioned metadata editing
  2. uploading
  3. Allow grouping of datasets by collection and add category tags
  4. Make it so fields & contributors tie together.  (For complicated reasons, each dataset creates a new personal version of the field, so you can’t actually walk from one “stock price” field to other datasets with that tag; a rough sketch of the fix I have in mind is just below.)
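
For the curious, item 4 is really a data-modeling fix. Here is a rough sketch of the kind of schema I mean; the table and column names are made up for illustration, not the site’s actual schema. The point is that a field like “stock price” becomes one shared row that datasets join onto, so you can walk from the field to every dataset carrying it.

```python
import sqlite3

# Illustrative schema only -- not the site's actual tables.  A field is a single
# shared row; datasets attach to it through a join table, so a query can walk
# from the field to every dataset that uses it.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE fields   (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE datasets (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE dataset_fields (
        dataset_id INTEGER REFERENCES datasets(id),
        field_id   INTEGER REFERENCES fields(id),
        PRIMARY KEY (dataset_id, field_id)
    );
""")

def tag_field(dataset_id, field_name):
    """Attach a dataset to a shared field row, creating the field only once."""
    db.execute("INSERT OR IGNORE INTO fields(name) VALUES (?)", (field_name,))
    (field_id,) = db.execute("SELECT id FROM fields WHERE name = ?", (field_name,)).fetchone()
    db.execute("INSERT OR IGNORE INTO dataset_fields VALUES (?, ?)", (dataset_id, field_id))

def datasets_with_field(field_name):
    """Walk from a field to every dataset that carries it."""
    return [title for (title,) in db.execute("""
        SELECT d.title FROM datasets d
        JOIN dataset_fields df ON df.dataset_id = d.id
        JOIN fields f ON f.id = df.field_id
        WHERE f.name = ?""", (field_name,))]

# Hypothetical example data, purely for illustration.
db.executemany("INSERT INTO datasets(id, title) VALUES (?, ?)",
               [(1, "NYSE daily quotes"), (2, "NASDAQ daily quotes")])
tag_field(1, "stock price")
tag_field(2, "stock price")
print(datasets_with_field("stock price"))   # ['NYSE daily quotes', 'NASDAQ daily quotes']
```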

Then I’ll finally turn some intensive attention to the InfiniteMonkeywrench code.  We need better tools to wrangle these huge datasets into shape.

Good Neighbors and Open Grazing: Datasets, Creative Works and Copyright

Many people don’t know how broad our rights to factual data actually are.  Unlike the mishegaas that reigns in copyright land, the world of data is largely open (and rightfully so).  To arrive at the age of ubiquitous information with a sound policy, however, we have to exercise those rights assertively, respectfully and prudently.

Let me start with the traditional IANAL and point out that if you take legal advice from a chimpanzee you deserve what you get. Instead, read iusmentis on database law and bitlaw on compilations and databases. (In which case you can probably skip the rest of this post.) (Also, the following applies only to the US, where database law is actually more liberal than elsewhere; I have no idea what the situation is outside the US.)

In general, a comprehensive assemblage of facts cannot be copyrighted. Copyright only applies where there is creative content. A comprehensive list of cars and retail prices cannot be copyrighted; a comprehensive collection of reviews of those cars can be. A list of all the musical albums released each year is data; the lyrics and music within them are creative. A list of songs sorted by artist, genre, release date and song length is data, and a list of the top-100 selling albums by year is data. The key case here is Feist Publications v. Rural Telephone Service:

“Facts, whether alone or as part of a compilation, are not original and therefore may not be copyrighted. A factual compilation is eligible for copyright if it features an original selection or arrangement of facts, but the copyright is limited to the particular selection or arrangement. In no event may copyright extend to the facts themselves.” — Sandra Day O’Connor for the Supreme Court

“A collection of facts is not copyrightable per se … A compilation, like any other work, is copyrightable only if it satisfies the originality requirement (“an original work of authorship”). Facts are never original, so the compilation author can claim originality, if at all, only in the way the facts are presented. The facts must be selected, coordinated, or arranged “in such a way” as to render the work as a whole original.” — Sandra Day O’Connor for the Supreme Court

A presentation of data can be creative — you can’t xerox the Blue Book and hand that out. However, converting otherwise unrestricted data into your own creative presentation satisfies this restriction. So does a presentation (original or converted) that did not arise from a creative act — nobody can claim copyright on a plain .CSV file of some dataset.

Besides “presentation” and a couple of edge cases (“hot news”, “selection and arrangement”), the main one to be aware of is “Terms of Service”. If you have to agree to terms of service that restrict the data, but you take it anyway, you can be guilty of trespass. My understanding is that if you can a) access the site by robot (no person clicks anything) AND b) there is no robots.txt, they shouldn’t be able to sustain a claim that it’s a restricted resource.
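
On the practical side, a robot can at least check robots.txt before fetching anything. Here’s a minimal sketch using Python’s standard-library robot parser; the bot name and target URL are placeholders. Good manners, not legal cover.

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def allowed_to_fetch(url, user_agent="infochimps-bot"):
    """Courtesy check before scraping: does the site's robots.txt forbid this URL?"""
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))
    rp = RobotFileParser(root + "/robots.txt")
    rp.read()   # fetch and parse robots.txt; a missing file means everything is allowed
    return rp.can_fetch(user_agent, url)

# Hypothetical target URL, purely for illustration.
if allowed_to_fetch("http://example.com/data/prices.csv"):
    print("robots.txt does not forbid it; fetch politely and rate-limit.")
else:
    print("robots.txt forbids it; leave the dataset alone.")
```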

I personally go by balancing two principles:

  1. It’s our world, and we deserve access to the information that describes it.  Besides our legal rights, we have an even stronger moral claim to the chronicle of our collective story.  And we all stand to benefit: there have to be incentives to gather and organize data, but the modest benefits of making a data provider a lot richer don’t stand against the much larger marginal benefit of making the world a tiny bit smarter.
  2. Be a good neighbor.  A lot of work goes into gathering, processing, verifying, and distributing an interesting dataset.  If we infochimps run around ignoring people’s requests for modest usage conditions, we’ll end up with a bit more open data and a lot more pissed-off ex-kindred souls who feel like we stole their cake.  Inevitably, this will mean that people won’t put data online for public access at all.

The best approach is to:

  • Scrupulously credit contributors, make clear that their efforts are recognized, and link back to them for their ultimate benefit.
  • Clearly state the usage restrictions requested by the contributor, adhere to them, and ask that recipients of the data do the same.
  • Make clear the benefits to the world for making this data available.
  • Make clear the benefits to the contributor — this data will, for free, be enhanced with metadata, converted for use by diverse tools, interlinked with other rich datasets, and put to work powering interesting projects.  If your mission statement is “build reliable and exciting cars” or “make powerful music”, then your mission statement isn’t “explore and explain unexpected correlations among disparate rich information pools”.  Let someone else do it for you, and let them build the tools to do so around your data.  Consider how much baseball has benefited from its statistical revolution — fed by its incredibly rich ecosystem of open data.
  • Finally, as for scientific or government-prepared data that’s otherwise rights-free: gloves off, we’re taking that data.  If you’re a researcher and you’re not openly sharing your data, you’re not only a bad scientist but also a bad person.  Ditto for data collected at taxpayer expense.