Pop Data

A Guide to the Internet of Things [Infographic]

Every so often there comes along a fact or prediction about the future of technology that blows my mind. This infographic, published by BusinessIntelligence.com, takes us on a tour of the Internet of Things landscape, exploring its current state and its future possibilities. This fact in particular is the one that got me this week: “Our Internet of Things world is growing at a breathtaking pace — from 2 billion objects in 2006 to a projected 200 billion objects by 2020.” Not only that but by 2025, the total global worth of IoT technology could reach up to $6.2 trillion, with healthcare and manufacturing making up the bulk.

This infographic goes into a variety of different industry examples, but I’m especially happy to see that robotic machinery got a shout out. (If you missed it, check out one of my previous blog posts on the quantified cow, where I talk about robotic milking machines.)

View the full infographic below and let us know what you think. Did anything in particular blow you away?



10 Social Media Facts You Should Know in 2014

Did you know 500 million tweets are sent every day? Or that more than 55 million photos are shared every day? The online social landscape is impressive — both Facebook and YouTube have more than 1 billion users, Twitter has 232 million active users (and 651 million accounts), and Pinterest has more than 70 million users, just to name a few. There’s no doubt about it — that’s a staggering number of people connecting through the Internet. While the popularity of these sites may be something we’ve known for a long time, the amount of data shared and transmitted on each social network just may surprise you.

Check out the video below (created by 24motiondesign) and let us know what you think in the comments. Did these stats surprise you?

Explore more visuals like this one on the web’s largest information design community – Visually.


Science Tells Us How to Have a Happy Relationship

The secret to a happy relationship has finally been cracked thanks to Happify and countless hours of scientific research. Now I know some of you have probably seen this infographic before (probably a couple months ago around Valentine’s Day), but I thought it was so good I had to share in case you missed it. Also who doesn’t want a few tips every now and then about how to dominate your romantic life?

The infographic includes some insightful pointers, but here’s a little tip of my own to help send you down the path to happiness with your partner — send it to your significant other for brownie points because it’s going to start a dialogue. Not only that but it’s going to start a positive dialogue (I’m 95% sure on that but don’t quote me), which just so happens to be the number one takeaway: Have a good positive to negative interaction ratio. This is probably the most important message, and the rest is all about how to reach that ratio.

Getting a good positive to negative interaction ratio comes in many shapes and sizes, but we’ll let the data speak for itself. Here’s how you and your partner can make it happen:


Movies + Charts = Nerdy Creativity

I love movies. I love charts. I have to say that FlowingData did it again – this is brilliant:

[Chart: AFI movie quotes, visualized by FlowingData]


In celebration of its 100th anniversary, the American Film Institute selected the 100 most memorable quotes from American cinema. FlowingData took those quotes and recreated them in chart form.

See the chart in greater detail here. >>

As always, thank you FlowingData for providing interesting posts for us data nerds.


More Complex in Asia: Mapping the Most Visited Website by Country

Engulfed as I am in the online world, this FlowingData article about the most visited website by country caught my attention. Mark Graham and Stefano De Sabbata from Information Geographies mapped the most visited site based on Alexa data. Countries are sized by Internet population.

The striking visual accompanying the post didn’t draw my attention to the red and blue (the obvious Google and Facebook takeovers in the Americas and Europe), but instead to the massive screaming green.

[Map: top site per country, with countries sized by Internet population]

Mark Graham and Stefano De Sabbata’s findings suggest “the situation is more complex in Asia, as local competitors have been able to resist the two large American empires. Baidu is well known as the most used search engine in China, which is currently home to the world’s largest Internet population at over half a billion users. At the same time, we see a puzzling fact that Baidu is also listed as the most visited website in South Korea (ahead of the popular South Korean search engine, Naver). We speculate that the raw data that we are using here are skewed. However, we may also be seeing the Baidu empire in the process of expanding beyond its traditional home territory. The remaining territories that have escaped being subsumed into the two big empires include Yahoo! Japan in Japan (in joint venture with SoftBank) and Yahoo! in Taiwan (after the acquisition of Wretch). The Al-Watan Voice newspaper is the most visited website in the Palestinian Territories, the e-mail service Mail.ru is the most visited in Kazakhstan, the social network VK the most visited in Belarus, and the search engine Yandex the most visited in Russia.”

Read the full post here. >>



Thank you FlowingData for providing interesting findings for us data nerds.



3 Tiers: What Infochimps and Netflix Have in Common

A recent article on Gigaom, “3 shades of latency: How Netflix built a data architecture around timeliness”, shines some light on how a best-in-class Big Data architecture has three different levels, separated by the dimension of “timeliness”.

“Netflix knows that processing and serving up lots of data — some to customers, some for use on the backend — doesn’t have to happen either right away or never. It’s more like a gray area, and Netflix detailed the uses for three shades of gray — online, offline and nearline processing.”

Just as Netflix defined its “three shades of gray”, Infochimps defines them through our three cloud services: Cloud::Streams (real-time processing / online), Cloud::Queries (near real-time processing / nearline), and Cloud::Hadoop (batch processing / offline). By satisfying all aspects along the time dimension, companies unlock the ability to handle virtually any use case. Collect data in real-time, or import it in batch. Process data and generate insights as it flows, or do it in large-scale historical jobs. Choose your Big Data analysis adventure by mixing and matching approaches.
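To make the division of labor concrete, here is a minimal sketch of the three tiers in Python. The class names and the fan-out dispatcher are illustrative inventions for this post, not the Infochimps or Netflix APIs; the point is only that each event reaches every tier, and each tier consumes it at its own pace.

```python
import time

class OnlineTier:
    """Cloud::Streams analog: per-event processing within milliseconds."""
    def handle(self, event):
        return f"processed {event['id']} in-flight"

class NearlineTier:
    """Cloud::Queries analog: indexed within seconds for interactive queries."""
    def __init__(self):
        self.buffer = []
    def handle(self, event):
        self.buffer.append(event)

class OfflineTier:
    """Cloud::Hadoop analog: archived for later large-scale batch jobs."""
    def __init__(self):
        self.archive = []
    def handle(self, event):
        self.archive.append(event)

def dispatch(event, tiers):
    """Fan each event out to every tier along the timeliness dimension."""
    for tier in tiers:
        tier.handle(event)

tiers = [OnlineTier(), NearlineTier(), OfflineTier()]
dispatch({"id": 1, "ts": time.time()}, tiers)
```

In practice the "dispatch" step is a message bus rather than a loop, but the architecture is the same: one stream, three latency contracts.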

The article highlights how this approach “is fairly common among web companies that understand that different applications can tolerate different latencies”. LinkedIn and Facebook are mentioned as sharing the same general theory; likewise, working with Infochimps provides you the benefits of a similar architecture, delivering this superior “3 tier approach” to Big Data.


Big Data’s Evolution: 5 Things That Might Surprise You

Over the past several years, Big Data has gone from being a somewhat obscure concept to a genuine business buzzword. As is often the case with buzzwords, when you dig a little deeper you find that many people have substantial misconceptions about what Big Data is, where it came from and where it is going.

Here are a few things that might surprise you about the evolution of Big Data:

  1. There are more “failures” out there than you’d think. We’re bombarded with the hype, but the reality is that this is still an early technology. As people are unfamiliar with the tech components of Big Data, they’re often prone to thinking that they can jump in and do everything themselves. However, the task of streaming and analyzing batch, near-real-time and real-time data in a comprehensible form is beyond the capabilities of most in-house IT departments, and will require outside expertise.

  2. It is an evolution, not a revolution. The topic of Big Data has exploded so quickly onto the media landscape that it’s easy to get the impression that it appeared from nowhere and is transforming business in a whirlwind. While the ultimate impact Big Data will have on business should not be underestimated, its ascension has been much more incremental than media coverage might lead you to believe. Its earliest stages began more than a decade ago with the increasing focus on unstructured data, and since then companies have been steadily experimenting with and building capabilities and best practices. It’s important to make the distinction between evolution and revolution because viewing Big Data as revolutionary may lead to the temptation to dive in headlong without a real plan. The smart course of action involves identifying a very specific business challenge that you’d like to address with Big Data, and then expanding and iterating your program step-by-step.

  3. Big Business doesn’t yet ‘get’ Big Data. You’d think big enterprises would have captured a 360-degree view of their customers by now. But they haven’t, and evidence of this abounds in the sloppy marketing outreach efforts that everyone experiences on a daily basis. Two essential changes need to happen in order for enterprises to truly get a handle on Big Data:  1) corporations need to break down departmental silos and freely share customer insights organization-wide; and 2) they must start bringing the apps to the data, rather than bringing their data to the apps. Companies have been reluctant to embrace the cloud for sensitive, proprietary data due to security and other reasons. However, we now have the ability to build apps in virtual private clouds that reside in tier-4 data centers, eliminating the need for the expensive, risk-laden migrations that have stood in the way of enterprises’ ability to adopt effective Big Data strategies.

  4. Housing your own data is cost-prohibitive. The old ways of doing things simply won’t work for Big Data — it’s just too big. While storing 10TB on legacy infrastructure costs in excess of $1M, the data warehouse for any significant company is going to be way past 20 TB. The math isn’t difficult — housing your own data is super expensive. There’s no way that companies like Facebook and LinkedIn, for whom customer data is lifeblood, could have done it without leveraging the cloud. More and more, enterprises are discovering that they can achieve analytic insights from Big Data by deploying in the cloud.

  5. Hadoop alone won’t do it. Although Hadoop gets 80% of the Big Data attention, it’s only 20% of the solution. Predicting customer behavior is kind of like shooting a bullet with another bullet, and is going to require much more than a historical data perspective. Sure, Hadoop gets most of the press these days, but in order for enterprises to gain a truly customer-centric view they’ll need to tie together historical, real-time and near real-time data through a single, user-friendly interface that enables them to analyze and make decisions in-flight.
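The cost arithmetic in point 4, spelled out (the per-terabyte figure is the post’s own illustrative number, not a current market price):

```python
# Back-of-envelope storage cost using the figures from point 4 above.
cost_per_tb = 1_000_000 / 10   # legacy infrastructure: >$1M per 10TB
warehouse_tb = 20              # "way past 20 TB" for any significant company

total = cost_per_tb * warehouse_tb
print(f"${total:,.0f}+ before the warehouse even grows")
```

Twenty terabytes at that rate is already $2M+, and the warehouse only grows from there.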

Dave Spenhoff is the VP of Marketing at Infochimps. He has helped emerging companies, including Inxight and MarkLogic, define and launch successful go-to-market strategies and has provided strategic marketing guidance to a number of technology companies.


There’s an app for that: Visualizing the Internet

“There’s an app for that.”

We’ve heard it many times, the spoken certainty that the necessities of the world are satisfied by an app.

We love cool apps as much as anyone, so FlowingData caught our attention again with this blog post: “App shows what the Internet looks like”.

[Screenshot: the Map of the Internet app]

“In a collaboration between PEER 1 Hosting, Steamclock Software, and Jeff Johnston, the Map of the Internet app provides a picture of what the physical Internet looks like. Users can view Internet service providers (ISPs), Internet exchange points, universities and other organizations through two view options — Globe and Network. The app also allows users to generate a trace route between where they are located to a destination node, search for where popular companies and domains are, as well as identify their current location on the map.”

Now that’s a cool app.

Read more details here >> and download the app for free on iTunes.

Thank you FlowingData for providing interesting posts for us data nerds.


Image source: FlowingData.com

Streaming Data, Aggregated Queries, Real-Time Dashboards

Some customers have a large volume of historical data that needs to be processed in our Cloud::Hadoop. Others are trying to power data-driven web or mobile applications with our Cloud::Queries powered by a scalable, NoSQL database such as HBase or Elasticsearch.

But there’s one use case that keeps popping up across our customers and industries, and in nearly all deployments: streaming aggregation of high-throughput data, used to power dynamic customer-facing applications and dashboards for internal business users.

Why is Streaming Aggregation Such a Challenge?

Here are a couple of example use cases that demand streaming aggregation:

  • Retail: You have 10s of millions of customers and millions of products. Your daily transaction volumes are enormous (e.g. up to 10M events per second for some of our bigger online retailers), but they’re also at a very fine level of detail. When reporting, you want to see data aggregated by product or by customer so you can do trending, correlation, market basket, and similar kinds of analyses.
  • Ad Tech: You generate 100s of millions of ads, pixels, impressions, clicks, and conversions each day. It’s uninteresting to track each event separately; you care about how a particular advertiser or campaign is doing. You need to provide real-time dashboards that show performance and the value of your service to your advertisers, over a dataset that can be queried ad hoc or interactively.

Sound familiar? Do you:

  • Have 1M+ new records per day delivered continuously (roughly 10 new records per second)? This is when things begin to get interesting.
  • Aggregate the input data on a subset of its dimensions? Say 100 dimensions?
  • Store the aggregated inputs for several months or years? So that you can analyze trends over time?
  • And, demand the ability to create dashboards or use business intelligence tools to slice and dice this data in a variety of ways?
  • Would you like to have the original input records available when you need them? Just in case your questions change later?

If you answered yes to some or all of these questions, then you need to investigate the stream aggregation services offered by Infochimps Cloud for Big Data.

But before we get into the benefits of using Infochimps’ cloud services to solve this problem, let me first describe some other approaches and why they ultimately can fail (see our recent survey here >>).

The Traditional Approach

[Diagram: the traditional approach]

The traditional approach to solving a streaming aggregation problem leverages only the traditional 3-tier web application stack of web client (browser), web/application server, and SQL database.

Many organizations start out with this technology when their applications are still new. Their initial success leads to growth, which leads to more input data, which leads to users and partners demanding more transparency and insight into their product, and so BI dashboards become an important aspect of managing the business effectively.

The traditional web stack provides enough data processing power during the early days, but as data volumes grow, the process of dumping raw records into your SQL database and aggregating them once nightly no longer scales.

The previous day’s 24 hours of data starts taking 3-4 hours to process, then 7-8, then 12-13. Ever experience this? Over 300 IT professionals told us about this problem in a recent survey: a nightly aggregation step that often leads to days of frustrating downtime or critical delays in the business, or, in the worst case, a scaling issue you can simply never fix. We call that point the “horizon of futility” — the moment when the amount of time taken to aggregate a given amount of data is equal to the amount of time taken to generate that data.
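A toy model makes the horizon of futility concrete. The growth numbers below are illustrative assumptions, not measurements; they just capture a nightly job whose runtime grows faster than the fixed 24 hours of data each day produces:

```python
def nightly_aggregation_hours(months_of_growth, base_hours=3.0, growth=1.25):
    """Toy model: the nightly batch job's runtime compounds 25% per month
    as the accumulated data volume grows (assumed rates, for illustration)."""
    return base_hours * growth ** months_of_growth

for month in range(12):
    hours = nightly_aggregation_hours(month)
    if hours >= 24:
        # The job now needs more than a day to aggregate a day of data:
        # the horizon of futility. No schedule can ever catch up.
        print(f"month {month}: {hours:.1f}h per nightly run -- futile")
        break
    print(f"month {month}: {hours:.1f}h per nightly run")
```

Under these assumed rates the job drifts from 3 hours through the 7-8 and 12-13 hour range before crossing 24 hours within the year, at which point the backlog can only grow.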

Are you challenged by this scenario? Do you:

  • Rely on an SQL database like Oracle, SQL Server, MySQL, or PostgreSQL to store all your data?
  • Use this same database to calculate your aggregates in a (nightly?) batch process?
  • Grudgingly tolerate the slowdown in application performance during periods when your database is executing your batch jobs?
  • Have data losses or long delays between input data and output graphs?
  • Feel overburdened by the operations workload of propping up the 3-tier technology stack?

If so, maybe you’ve already taken some of the evolutionary steps using new webscale technologies such as Hadoop…

Half-Hearted Solution

[Diagram: half a solution]

Or should we say that your heart is in the right place, but the solution still falls short of expectations. Organizations confronted with streaming aggregation problems usually identify, correctly, one of the symptoms of the lack of scalability in their infrastructure. Unfortunately, they often choose a scaling approach that is already known to them or easy to hire for: scale up your webservers and your existing SQL database(s), and then add Hadoop!

They make this choice because it is easy and incremental. Adding “just a little more RAM” to an SQL database may sound like the right approach, and may often work just fine in the truly early days, but it soon becomes unmanageable: the figure of merit — speedup in the batch job per dollar spent on RAM for the database (aka price-performance) — gets lower and lower as data volumes increase. This becomes even more costly as the organization needs to scale up resources just to “keep the lights on” with such an infrastructure.

Scaling of web services is often handled by spawning additional web servers (also referred to as ‘horizontally scaling’), which is a fine solution for the shared-nothing architecture of a web application. This approach, when applied to critical analytic data infrastructure, leads to the “SQL database master-slave replication and sharding” scenario that is supported by so many DBAs in the enterprise today.

What About Hadoop?

Confronted with some of these problems, organizations will often start attending Big Data conferences and learn about Hadoop, a batch processing technology at the very tip of the Big Data spear. This leads either to a search for talent, where organizations quickly realize that Hadoop engineers and sysadmins are incredibly rare resources, or to internal teams being pulled from existing projects to build the “Hadoop cluster”. These are exciting times for internal staff. Eventually the organization has a functioning Hadoop cluster, albeit at a great internal operations cost and after many months of critical business delay. This Hadoop cluster may even work, happily calculating aggregate metrics from data collected in streams the prior day, and even orders of magnitude faster than with the Traditional Approach above.

Organizations who arrive at this point in their adoption of Big Data infrastructure then uneasily settle into believing they’ve solved their streaming aggregation problem with a newfangled Hadoop-based batch-processing system. But many folks in the organization will then realize that:

  • They are spending too much time on operations and not enough time on product or business needs as engineering struggles with educating the organization on how to use these new technologies it doesn’t understand.
  • They are still stuck solving a fundamentally real-time problem with a batch-solution.
  • Their sharded approach is only delaying the inevitable.

How Do Facebook, Twitter, LinkedIn, etc. Do It?

[Diagram: multiple applications on a streaming architecture]

It’s not surprising that Hadoop is the first Big Data technology brought in by many organizations. Google and then Yahoo! set the stage. But what they didn’t tell you is that that is “yesterday’s approach”. So how do webscale companies like Facebook do things today? Yes, Hadoop is powerful, it’s been around longer than many other Big Data technologies, and it has great PR behind it. But Hadoop isn’t necessarily the (complete) answer to every Big Data problem.

The streaming aggregation problem is by its nature real-time.  An aggregation framework that works in real-time is the ideal solution.

Infochimps Cloud::Streams provides this real-time aggregation because it is built on top of the leading stream processing frameworks used by today’s leaders. Records can be ingested, processed, cleaned, joined, and — most importantly for all use cases — aggregated into time- and field-based bins in real-time: the bin for “this hour” or “this day” contains data from this second.
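As an illustration, here is a minimal in-memory sketch of time- and field-based binning — not Cloud::Streams itself, and the record fields (`campaign`, `spend`) are invented for the example. A production stream processor would partition and persist these counters, but the core idea is the same: every record updates its bins the moment it arrives, so “this hour” already includes data from this second.

```python
from collections import defaultdict
from datetime import datetime, timezone

# (granularity, time bucket, field value) -> running sum
bins = defaultdict(float)

def ingest(record):
    """Update the hourly and daily bins for this record's campaign."""
    ts = datetime.fromtimestamp(record["ts"], tz=timezone.utc)
    for granularity, fmt in (("hour", "%Y-%m-%dT%H"), ("day", "%Y-%m-%d")):
        bucket = ts.strftime(fmt)
        bins[(granularity, bucket, record["campaign"])] += record["spend"]

# Two ad events for the same campaign arriving a minute apart:
ingest({"ts": 1700000000, "campaign": "acme-q4", "spend": 0.25})
ingest({"ts": 1700000060, "campaign": "acme-q4", "spend": 0.75})

hourly = {k: v for k, v in bins.items() if k[0] == "hour"}
print(hourly)
```

A dashboard query then reads the current hour’s bin directly instead of scanning raw events, which is what keeps the serving path fast regardless of input volume.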

This approach is extremely powerful for solving the use cases defined above because:

  • Aggregated data is immediately available in downstream data stores and analyses (would you rather act on data now, or hours or days later?).
  • Raw data can be written to the same or a number of data stores for different kinds of processing to occur later. Not every data store is equal; you may need several to accommodate the organization’s needs.
  • Not waiting for a batch job to complete means that data pipeline or analytics errors are detected immediately as they occur — and immediately recovered from — instead of potentially adding days of delay due to the failure of long-running batch jobs.
  • Ingestion and aggregation are decoupled from storing and serving historical data so applications are more robust.

Infochimps Cloud and its streaming services are more than just a point product: they’re a suite of data analytics services addressing your streaming, ad-hoc/interactive query, and batch analytics needs in one integrated solution that you can take advantage of within 30 days. It is also offered as a private cloud service managed by dedicated support and operations engineers who are experts at Big Data. This means you get all the benefits of Big Data technologies without having to bear the tremendous operations burden they incur.

What Comes Next?

We covered how Hadoop isn’t a good solution to the streaming aggregation problem but that doesn’t mean it isn’t useful.  On the contrary, long-term historical analysis of raw data collected by a streaming aggregation service is crucial to developing deeper insights than are available in real-time.

That’s why the Infochimps Cloud for Big Data also includes Hadoop.  Collect and aggregate data in real-time and then spin up a dynamic Hadoop cluster every weekend to process weekly trends.  The combination of real-time responsiveness and insight from long-time-scale analysis creates a powerful approach to harnessing a high throughput stream of information for business value.

Dhruv Bansal is the Chief Science Officer and Co-Founder of Infochimps. He holds a B.A. in Math and Physics from Columbia University in New York and attended graduate school in Physics at the University of Texas at Austin. For more information, email Dhruv at dhruv@infochimps.com or follow him on Twitter at @dhruvbansal.


ZDNet Article Asks The Same Question: Why Wouldn’t You?

Toby Wolpe, senior reporter at ZDNet, recently wrote an article entitled “Big data: Why most businesses just don’t get it” highlighting findings from Gartner vice president and analyst Debra Logan.

Her quote about how acquiring big-data services from a third party could make sense caught our eye:

  • “If it is cheap, if big data turns out to be something you can get from someone else, you can rent the infrastructure, you can ship a bunch of your data and you can just see what happens, then why not? Why wouldn’t you do that?”

Why wouldn’t you do that? Why wouldn’t you want to benefit from the fastest way to develop and deploy Big Data environments with Infochimps?

Wolpe also states, “one of the main barriers to asking the right questions of big data is a lack of expertise and a shortage of data scientists”. The Infochimps team is made of data scientists and cloud computing experts, available to help you effectively leverage Big Data, resulting in better data-driven decisions.
