Pop Data

Science Tells Us How to Have a Happy Relationship

The secret to a happy relationship has finally been cracked, thanks to Happify and countless hours of scientific research. Now, some of you have probably seen this infographic before (likely a couple of months ago around Valentine’s Day), but I thought it was so good I had to share it in case you missed it. Besides, who doesn’t want a few tips every now and then about how to dominate your romantic life?

The infographic includes some insightful pointers, but here’s a little tip of my own to help send you down the path to happiness with your partner: send it to your significant other for brownie points, because it’s going to start a dialogue. Not only that, it’s going to start a positive dialogue (I’m 95% sure on that, but don’t quote me), which just so happens to be the number one takeaway: keep a good positive-to-negative interaction ratio. That’s probably the most important message; the rest is all about how to reach that ratio.

A good positive-to-negative interaction ratio comes in many shapes and sizes, but we’ll let the data speak for itself. Here’s how you and your partner can make it happen:

[Infographic: Science Tells Us How to Have a Happy Relationship]

Movies + Charts = Nerdy Creativity

I love movies. I love charts. I have to say that FlowingData did it again – this is brilliant:

[Chart: the AFI’s 100 most memorable movie quotes, visualized]

 

In celebration of the 100-year anniversary of American cinema, the American Film Institute selected the 100 most memorable quotes from American film. FlowingData took those quotes and recreated the list in chart form.

See the chart in more detail here >>

As always, thank you FlowingData for providing interesting posts for us data nerds.








More Complex in Asia: Mapping the Most Visited Website by Country

Being engulfed in the online world, I was drawn to this FlowingData article about the most visited website by country. Mark Graham and Stefano De Sabbata of Information Geographies mapped the most visited site in each country based on Alexa data, with countries sized by Internet population.

The striking graphic accompanying the post drew my attention not to the red and blue (the obvious Google and Facebook takeovers in the Americas and Europe), but to the massive, screaming green.

[Map: the most visited website by country, with countries sized by Internet population]

Mark Graham and Stefano De Sabbata’s findings suggest “the situation is more complex in Asia, as local competitors have been able to resist the two large American empires. Baidu is well known as the most used search engine in China, which is currently home to the world’s largest Internet population at over half a billion users. At the same time, we see a puzzling fact that Baidu is also listed as the most visited website in South Korea (ahead of the popular South Korean search engine, Naver). We speculate that the raw data that we are using here are skewed. However, we may also be seeing the Baidu empire in the process of expanding beyond its traditional home territory. The remaining territories that have escaped being subsumed into the two big empires include Yahoo! Japan in Japan (in joint venture with SoftBank) and Yahoo! in Taiwan (after the acquisition of Wretch). The Al-Watan Voice newspaper is the most visited website in the Palestinian Territories, the e-mail service Mail.ru is the most visited in Kazakhstan, the social network VK the most visited in Belarus, and the search engine Yandex the most visited in Russia.”

Read the full post >>

Thank you FlowingData for providing interesting findings for us data nerds.

 








3 Tiers: What Infochimps and Netflix Have in Common

A recent article on Gigaom, “3 shades of latency: How Netflix built a data architecture around timeliness”, sheds some light on how a best-in-class Big Data architecture has three different tiers, separated by the dimension of “timeliness”.

“Netflix knows that processing and serving up lots of data — some to customers, some for use on the backend — doesn’t have to happen either right away or never. It’s more like a gray area, and Netflix detailed the uses for three shades of gray — online, offline and nearline processing.”

Just as Netflix defined its “three shades of gray”, Infochimps defines them through our three cloud services: Cloud::Streams (real-time processing / online), Cloud::Queries (near real-time processing / nearline), and Cloud::Hadoop (batch processing / offline). By covering the entire time dimension, companies unlock the ability to handle virtually any use case: collect data in real-time or import it in batch; process data and generate insights as it flows, or do it in large-scale historical jobs. Choose your Big Data analysis adventure by mixing and matching approaches.
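To make the three tiers concrete, here is a minimal sketch of how an application might route work by the latency it can tolerate. The tier names mirror the Cloud::Streams / Cloud::Queries / Cloud::Hadoop split above, but the dispatch logic and thresholds are hypothetical illustrations, not Infochimps APIs:

```python
# Hypothetical sketch: route work to online, nearline, or offline
# processing based on how fresh the answer needs to be.
from enum import Enum

class Tier(Enum):
    ONLINE = "Cloud::Streams"    # real-time: act on each event as it arrives
    NEARLINE = "Cloud::Queries"  # near real-time: queries over recent data
    OFFLINE = "Cloud::Hadoop"    # batch: large-scale historical jobs

def choose_tier(max_staleness_seconds: float) -> Tier:
    """Pick a processing tier from the staleness an application can tolerate."""
    if max_staleness_seconds < 1:
        return Tier.ONLINE
    if max_staleness_seconds < 3600:
        return Tier.NEARLINE
    return Tier.OFFLINE

# A fraud alert cannot wait; a weekly trend report can.
print(choose_tier(0.1))    # Tier.ONLINE
print(choose_tier(300))    # Tier.NEARLINE
print(choose_tier(86400))  # Tier.OFFLINE
```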

The article highlights how this approach “is fairly common among web companies that understand that different applications can tolerate different latencies”, and notes that LinkedIn and Facebook share the same general philosophy. Working with Infochimps gives you the benefits of a similar architecture, delivering this “3 tier approach” to Big Data.







Big Data’s Evolution: 5 Things That Might Surprise You

Over the past several years, Big Data has gone from being a somewhat obscure concept to a genuine business buzzword. As is often the case with buzzwords, when you dig a little deeper you find that many people have substantial misconceptions about what Big Data is, where it came from and where it is going.

Here are a few things that might surprise you about the evolution of Big Data:

  1. There are more “failures” out there than you’d think. We’re bombarded with the hype, but the reality is that this is still an early technology. Because people are unfamiliar with the technical components of Big Data, they’re often prone to thinking they can jump in and do everything themselves. However, the task of streaming and analyzing batch, near-real-time and real-time data in a comprehensible form is beyond the capabilities of most in-house IT departments, and will require outside expertise.

  2. It is an evolution, not a revolution. The topic of Big Data has exploded so quickly onto the media landscape that it’s easy to get the impression that it appeared from nowhere and is transforming business in a whirlwind. While the ultimate impact Big Data will have on business should not be underestimated, its ascension has been much more incremental than media coverage might lead you to believe. Its earliest stages began more than a decade ago with the increasing focus on unstructured data, and since then companies have been steadily experimenting with and building capabilities and best practices. The distinction between evolution and revolution matters because viewing Big Data as revolutionary may tempt you to dive in headlong without a real plan. The smart course of action involves identifying a very specific business challenge that you’d like to address with Big Data, and then expanding and iterating your program step by step.

  3. Big Business doesn’t yet ‘get’ Big Data. You’d think big enterprises would have captured a 360-degree view of their customers by now. But they haven’t, and evidence of this abounds in the sloppy marketing outreach efforts that everyone experiences on a daily basis. Two essential changes need to happen in order for enterprises to truly get a handle on Big Data:  1) corporations need to break down departmental silos and freely share customer insights organization-wide; and 2) they must start bringing the apps to the data, rather than bringing their data to the apps. Companies have been reluctant to embrace the cloud for sensitive, proprietary data due to security and other reasons. However, we now have the ability to build apps in virtual private clouds that reside in tier-4 data centers, eliminating the need for the expensive, risk-laden migrations that have stood in the way of enterprises’ ability to adopt effective Big Data strategies.

  4. Housing your own data is cost-prohibitive. The old ways of doing things simply won’t work for Big Data; it’s just too big. While 10TB of legacy infrastructure costs in excess of $1M to store, the data warehouse for any significant company is going to be way past 20TB. The math isn’t difficult: at $100K+ per terabyte, a 20TB warehouse runs past $2M, so housing your own data is super expensive. There’s no way that companies like Facebook and LinkedIn, for whom customer data is lifeblood, could have done it without leveraging the cloud. More and more, enterprises are discovering that they can achieve analytic insights from Big Data by deploying in the cloud.

  5. Hadoop alone won’t do it. Although Hadoop gets 80% of the Big Data attention, it’s only 20% of the solution. Predicting customer behavior is kind of like shooting a bullet with another bullet, and is going to require much more than a historical data perspective. Sure, Hadoop gets most of the press these days, but in order for enterprises to gain a truly customer-centric view they’ll need to tie together historical, real-time and near real-time data through a single, user-friendly interface that enables them to analyze and make decisions in-flight.

Dave Spenhoff is the VP of Marketing at Infochimps. He has helped emerging companies, including Inxight and MarkLogic, define and launch successful go-to-market strategies and has provided strategic marketing guidance to a number of technology companies.







There’s an app for that: Visualizing the Internet

“There’s an app for that.”

We’ve heard it many times, the spoken certainty that the necessities of the world are satisfied by an app.

We love cool apps as much as anyone, so FlowingData caught our attention again with this blog post: “App shows what the Internet looks like.”

[Screenshot: the Map of the Internet app]

“In a collaboration between PEER 1 Hosting, Steamclock Software, and Jeff Johnston, the Map of the Internet app provides a picture of what the physical Internet looks like. Users can view Internet service providers (ISPs), Internet exchange points, universities and other organizations through two view options — Globe and Network. The app also allows users to generate a trace route between where they are located to a destination node, search for where popular companies and domains are, as well as identify their current location on the map.”

Now that’s a cool app.

Read more details here >> and download the app for free on iTunes.

Thank you FlowingData for providing interesting posts for us data nerds.








Image source: FlowingData.com

Streaming Data, Aggregated Queries, Real-Time Dashboards

Some customers have a large volume of historical data that needs to be processed in our Cloud::Hadoop. Others are trying to power data-driven web or mobile applications with our Cloud::Queries, backed by a scalable NoSQL database such as HBase or Elasticsearch.

But there’s one use case that keeps popping up across our customers and industries: streaming aggregation of high-throughput data, used to power dynamic customer-facing applications and dashboards for internal business users.

Why is Streaming Aggregation Such a Challenge?

Here are a couple of example use cases that demand streaming aggregation:

  • Retail: You have tens of millions of customers and millions of products. Your daily transaction volumes are enormous (e.g. up to 10M events per day for some of our bigger online retailers), but they’re also at a very fine level of detail. When reporting, you want to see data aggregated by product or by customer so you can do trending, correlation, market-basket, and similar analyses.
  • Ad Tech: You generate hundreds of millions of ads, pixels, impressions, clicks, and conversions each day. It’s uninteresting to track each event separately; you care about how a particular advertiser or campaign is doing. You need to provide real-time dashboards that show your advertisers performance and the value of your service, over a dataset that can be queried ad hoc or interactively.

Sound familiar? Do you:

  • Have 1M+ new records per day delivered continuously (ranging up to ~10K new records/sec; see the quick math after this list)? This is when things begin to get interesting.
  • Aggregate the input data on a subset of its dimensions? Say 100 dimensions?
  • Store the aggregated inputs for several months or years? So that you can analyze trends over time?
  • And, demand the ability to create dashboards or use business intelligence tools to slice and dice this data in a variety of ways?
  • Would you like to have the original input records available when you need them? Just in case your questions change later?
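For a feel of the scale that range covers, here is the arithmetic (mine, not the post’s): a million records per day is a gentle dozen per second, while a sustained ~10K records per second approaches a billion records per day.

```python
# Quick conversions between records/day and records/sec,
# assuming a uniformly paced stream. Illustrative arithmetic only.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

print(f"{1_000_000 / SECONDS_PER_DAY:.1f} records/sec")  # 1M/day  -> ~11.6/sec
print(f"{10_000 * SECONDS_PER_DAY:,} records/day")       # 10K/sec -> 864,000,000/day
```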

If you answered yes to some or all of these questions, then you need to investigate the stream aggregation services offered by Infochimps Cloud for Big Data.

But before we get into the benefits of using Infochimps’ cloud services to solve this problem, let me first describe some other approaches and why they ultimately fail (see our recent survey here >>).

The Traditional Approach

[Diagram: the traditional approach, a 3-tier web application stack]

The traditional approach to solving a streaming aggregation problem leverages only the traditional 3-tier web application stack of web client (browser), web/application server, and SQL database.

Many organizations start out with this technology while their applications are still new. Initial success leads to growth, which leads to more input data, which leads to users and partners demanding more transparency and insight into the product: BI dashboards become an important part of managing the business effectively.

The traditional web stack provides enough data processing power during the early days, but as data volumes grow, the process of dumping raw records into your SQL database and aggregating them once nightly no longer scales.

Processing 24 hours of data from the previous day starts taking 3-4 hours, then 7-8, then 12-13. Ever experience this? Over 300 IT professionals told us about this problem in a recent survey: a nightly aggregation step that often leads to days of frustrating downtime or critical delays in the business, or, in the worst case, a scaling issue you can never fix. We call that point the “horizon of futility”: the moment when the time taken to aggregate a given amount of data equals the time taken to generate it.
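To see why that trend is fatal, here is a small sketch of the “horizon of futility” as defined above. The numbers are invented for illustration; the point is that a fixed-capacity nightly job facing growing volume must eventually take longer than the day that produced the data:

```python
# Illustrative only: nightly aggregation time catching up with the
# 24-hour window that generates the data.
HOURS_PER_DAY = 24
agg_throughput_gb_per_hour = 50.0  # fixed capacity of the nightly batch job

daily_volume_gb = 100.0            # starting daily data volume
growth_per_month = 1.5             # volume grows 50% per month

for month in range(1, 13):
    hours_to_aggregate = daily_volume_gb / agg_throughput_gb_per_hour
    print(f"month {month:2d}: {daily_volume_gb:7.0f} GB/day -> "
          f"{hours_to_aggregate:5.1f} h nightly job")
    if hours_to_aggregate >= HOURS_PER_DAY:
        print("horizon of futility: aggregating a day of data now takes "
              "longer than the day itself")
        break
    daily_volume_gb *= growth_per_month
```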

Are you challenged by this scenario? Do you:

  • Rely on an SQL database like Oracle, SQL Server, MySQL, or PostgreSQL to store all your data?
  • Use this same database to calculate your aggregates in a (nightly?) batch process?
  • Grudgingly tolerate the slowdown in application performance during periods when your database is executing batch jobs?
  • Have data losses or long delays between input data and output graphs?
  • Feel overburdened by the operations workload of running the 3-tier technology stack?

If so, maybe you’ve already taken some evolutionary steps using new webscale technologies such as Hadoop…

Half-Hearted Solution

[Diagram: a half-hearted solution, scaled-up web servers and SQL databases plus Hadoop]

Or should we say that your heart is in the right place, but the solution still falls short of expectations. Organizations confronted with streaming aggregation problems usually identify the symptom correctly: a lack of scalability in their infrastructure. Unfortunately, they often choose an approach to scaling that is already familiar to them or easy to hire for: scale up the web servers and the existing SQL database(s), and then add Hadoop!

They make this choice because it is easy and incremental. Adding “just a little more RAM” to an SQL database may sound like the right approach, and it may work just fine in the truly early days, but it soon becomes unmanageable: the figure of merit, speedup in batch jobs per dollar spent on RAM for the database (a.k.a. price-performance), gets lower and lower as data volumes increase. It becomes even more costly as the organization scales up resources just to “keep the lights on” with such an infrastructure.

Scaling of web services is often handled by spawning additional web servers (also referred to as “horizontal scaling”), which is a fine solution for the shared-nothing architecture of a web application. Applied to critical analytic data infrastructure, however, this approach leads to the “SQL database master-slave replication and sharding” scenario that so many enterprise DBAs support today.

What About Hadoop?

Confronted with some of these problems, organizations will often start attending Big Data conferences and learn about Hadoop, a batch processing technology at the very tip of the Big Data spear. This leads either to a search for talent, where organizations quickly realize that Hadoop engineers and sysadmins are incredibly rare resources, or to internal teams getting pulled from existing projects to build the “Hadoop cluster”. These are exciting times for internal staff, and eventually the organization has a functioning Hadoop cluster, albeit at great internal operations cost and after many months of critical business delay. This Hadoop cluster may even work, happily calculating aggregate metrics from data collected in streams the prior day, orders of magnitude faster than with the Traditional Approach above.

Organizations who arrive at this point in their adoption of Big Data infrastructure then uneasily settle into believing they’ve solved their streaming aggregation problem with a newfangled batch-processing system. But many folks in the organization will then realize that:

  • They are spending too much time on operations and not enough time on product or business needs as engineering struggles with educating the organization on how to use these new technologies it doesn’t understand.
  • They are still stuck solving a fundamentally real-time problem with a batch-solution.
  • Their sharded approach is only delaying the inevitable.

How Do Facebook, Twitter, LinkedIn, etc. Do It?

[Diagram: multiple applications powered by combined streaming and batch processing]

It’s not surprising that Hadoop is the first Big Data technology brought in by many organizations: Google and then Yahoo! set the stage. But what they didn’t tell you is that this is yesterday’s approach. So how do webscale companies like Facebook do things today? Yes, Hadoop is powerful, it’s been around longer than many other Big Data technologies, and it has great PR behind it. But Hadoop isn’t necessarily the (complete) answer to every Big Data problem.

The streaming aggregation problem is by its nature real-time.  An aggregation framework that works in real-time is the ideal solution.

Infochimps Cloud::Streams provides this real-time aggregation because it is built on top of the leading stream-processing frameworks used by these leaders today. Records can be ingested, processed, cleaned, joined, and, most importantly for all use cases, aggregated into time- and field-based bins in real-time: the bin for “this hour” or “this day” contains data from this second.
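Here is a minimal sketch of what “aggregated into time- and field-based bins in real-time” means in practice. It is a toy, in-memory stand-in for a real stream-processing framework, assuming a generic stream of event dicts rather than the actual Cloud::Streams API:

```python
# Toy streaming aggregator: each incoming event is folded into its
# hourly, per-campaign bin the moment it arrives, so the bin for
# "this hour" always includes data from this second.
from collections import defaultdict
from datetime import datetime, timezone

bins = defaultdict(lambda: {"events": 0, "clicks": 0})

def ingest(event: dict) -> None:
    """Fold one event into its (hour, campaign) bin as it arrives."""
    hour = datetime.fromtimestamp(event["ts"], tz=timezone.utc).strftime("%Y-%m-%dT%H:00")
    key = (hour, event["campaign"])
    bins[key]["events"] += 1
    bins[key]["clicks"] += event.get("clicked", 0)

# Simulated stream; a dashboard could read `bins` at any moment.
stream = [
    {"ts": 1672531200, "campaign": "spring_sale", "clicked": 1},
    {"ts": 1672531260, "campaign": "spring_sale", "clicked": 0},
    {"ts": 1672534800, "campaign": "brand_promo", "clicked": 1},
]
for event in stream:
    ingest(event)

for (hour, campaign), agg in sorted(bins.items()):
    print(hour, campaign, agg)
```

In a production system the bins would live in a scalable data store rather than process memory, but the shape of the aggregation is the same.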

This approach is extremely powerful for solving the use cases defined above because:

  • Aggregated data is immediately available to downstream data stores and analyses (do you care to act on data now, or hours or days later?).
  • Raw data can be written to the same store, or to several data stores, for different kinds of processing later. Not every data store is equal; you may need several to accommodate the organization’s needs.
  • NOT waiting for a batch job to complete means that data pipeline or analytics errors are IMMEDIATELY detected as they occur, and immediately recovered from, instead of adding days of delay due to the failure of long-running batch jobs.
  • Ingestion and aggregation are decoupled from storing and serving historical data so applications are more robust.

Infochimps Cloud and its streaming services are more than a point product: together they form a suite of data analytics services addressing your streaming, ad-hoc/interactive query, and batch analytics needs in one integrated solution that you can take advantage of within 30 days. It is also offered as a private cloud service managed by dedicated support and operations engineers who are experts at Big Data. This means you get all the benefits of Big Data technologies without having to bear the tremendous operations burden they incur.

What Comes Next?

We covered how Hadoop isn’t a good solution to the streaming aggregation problem, but that doesn’t mean it isn’t useful. On the contrary, long-term historical analysis of the raw data collected by a streaming aggregation service is crucial to developing deeper insights than are available in real-time.

That’s why the Infochimps Cloud for Big Data also includes Hadoop. Collect and aggregate data in real-time, then spin up a dynamic Hadoop cluster every weekend to process weekly trends. The combination of real-time responsiveness and insight from long-time-scale analysis creates a powerful approach to harnessing a high-throughput stream of information for business value.
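As a companion to the streaming sketch above, that weekend batch pass over the raw records has a map/reduce shape like the following. A real deployment would run this logic as a Hadoop job over files in distributed storage; this plain-Python version (my illustration, not Infochimps code) just shows the shape:

```python
# Weekly trend rollup in map/reduce shape: map raw events to
# (ISO week, campaign) pairs, then reduce by summing the counts.
from collections import Counter
from datetime import datetime, timezone
from itertools import chain

def map_event(event: dict):
    week = datetime.fromtimestamp(event["ts"], tz=timezone.utc).strftime("%G-W%V")
    yield ((week, event["campaign"]), 1)

def reduce_counts(pairs) -> Counter:
    totals = Counter()
    for key, count in pairs:
        totals[key] += count
    return totals

raw_events = [
    {"ts": 1672531200, "campaign": "spring_sale"},
    {"ts": 1672704000, "campaign": "spring_sale"},
    {"ts": 1673136000, "campaign": "brand_promo"},
]
weekly_trends = reduce_counts(chain.from_iterable(map_event(e) for e in raw_events))
print(weekly_trends)
```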

Dhruv Bansal is the Chief Science Officer and Co-Founder of Infochimps. He holds a B.A. in Math and Physics from Columbia University in New York and attended graduate school in Physics at the University of Texas at Austin. For more information, email Dhruv at dhruv@infochimps.com or follow him on Twitter at @dhruvbansal.







ZDNet Article Asks The Same Question: Why Wouldn’t You?

Toby Wolpe, senior reporter at ZDNet, recently wrote an article entitled “Big data: Why most businesses just don’t get it” highlighting findings from Gartner vice president and analyst Debra Logan.

Her quote about how acquiring Big Data services from a third party could make sense caught our eye:

  • “If it is cheap, if big data turns out to be something you can get from someone else, you can rent the infrastructure, you can ship a bunch of your data and you can just see what happens, then why not? Why wouldn’t you do that?”

Why wouldn’t you do that? Why wouldn’t you want to benefit from the fastest way to develop and deploy Big Data environments with Infochimps?

Wolpe also states that “one of the main barriers to asking the right questions of big data is a lack of expertise and a shortage of data scientists”. The Infochimps team is made up of data scientists and cloud computing experts, available to help you effectively leverage Big Data and make better data-driven decisions.







Customized, Intelligent, Vertical Applications – The Future of Big Data?


The Ideal Big Data Application Development Environment

Let’s assume that your entire organization had access to the following building blocks:

  • Data: All sources of data from the enterprise (at rest and in motion)
  • Analytics: Any/All Queries, Algorithms, Machine Learning Models
  • Application Business Logic: Domain specific use-cases / business problems
  • Actionable Insights: Knowledge of how to apply analytics against data through the use of application business logic to produce a positive impact to the business
  • Infrastructure Configuration: Highly scalable, distributed, enterprise-class infrastructure capable of combining data and analytics with app logic to produce actionable insights

Imagine if your entire organization were empowered to produce data-driven applications tailored specifically to your vertical use-cases.

Data-Driven Vertical Apps

[Image: banking]

You are a regional bank under heavier regulation, focused on risk management, and expanding your mobile offerings. You are seeking ways to get ahead of your competition by using Big Data to optimize financial decisions and yields.

What if there were an easy and automated way to define new data sources, create new algorithms, apply them to gain better insight into your risk position, and ultimately operationalize all this by improving your ability to accept and reject loans?

[Image: retail]

You are a retailer affected by the economic downturn, demographic shifts, and new competition from online sources. Your customers are empowered by mobile and social media, and you are seeking ways to leverage that fact by transforming the shopping experience through Big Data.

What if there were an easy and automated way to capture all customer touch points, create new segmentation and customer-experience analytics, and apply them to build a customized cross-channel solution that integrates online shopping with social media, personalized promotions, and relevant content?

[Image: telecommunications]

You are a fixed-line operator, wireless network provider, or fixed broadband provider in the middle of a convergence of services and networks, and feeling price pressure on existing services. You are seeking ways to leverage cloud and Big Data to create smarter networks (autonomous and self-analyzing), smarter operations (improving the efficiency and capacity of day-to-day operations), and new data products and services for partners built on subscriber demographic data.

What if there were an easy and automated way to start consuming additional data from across the organization and deploy segmentation analytics to better target customers and increase ARPU?

It Starts With The “Infrastructure Recipe”

OK. You are a member of the application development team. All you have to do is create a data-driven application “deploy package”: your recipe of all the data sources, analytics, and application logic to insert into this magical cloud service, which produces your industry- and use-case-specific application. You don’t need to be an analytics expert. You don’t need to be a DBA, an ETL expert, or even a Big Data technologist. All you need is a clear understanding of your business problem, and you can assemble the parts through a simple-to-use “recipe” that is abstracted from the details of the infrastructure used to execute it.
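What might such a “deploy package” look like? The post doesn’t show one, so the structure below is purely hypothetical: every field name is invented for illustration. The idea is a declarative description of data sources, analytics, and app logic that the platform turns into running infrastructure:

```python
# Hypothetical "deploy package" recipe: the developer declares what the
# application needs; the platform decides how to provision and wire it.
deploy_package = {
    "name": "loan-risk-scoring",
    "data_sources": [
        {"noun": "loan_applications", "kind": "relational", "mode": "batch"},
        {"noun": "clickstream", "kind": "events", "mode": "streaming"},
    ],
    "analytics": [
        {"algorithm": "logistic_regression", "target": "default_risk"},
    ],
    "app_logic": {
        "language": "python",
        "entry_point": "score_application",  # the accept/reject decision hook
    },
    "insights": {"serve_via": "dashboard", "latency": "near-real-time"},
}

def submit(package: dict) -> None:
    """Stand-in for handing the recipe to the cloud service."""
    print(f"deploying {package['name']} with "
          f"{len(package['data_sources'])} data sources")

submit(deploy_package)
```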

Any Data Source

Imagine an environment where your enterprise data is at your fingertips: no heavy ETL tools, no database exports, no Hadoop Flume or Sqoop jobs. Access to data is as simple as defining “nouns” in a sentence. Where your data lives is not a worry; you simply declare what the data source is and where it lives, and access is automated. Nor do you care whether the data is a large historical volume living in a relational database or real-time streaming event data.

Analytics Made Easy

Imagine a world where you can pick from literally thousands of algorithms and apply them to any of the above data sources, individually or in combination. You create one algorithm and can apply it to years of historical data and/or a stream of live real-time data. Imagine, too, that getting your data into a format your algorithms can consume is seamless. Lastly, your algorithms execute on infrastructure in a parallel, distributed, highly scalable way. Getting excited yet?

Focus on Applications With Actionable Insights


Now let’s embody this combination of analytics and data in a form that can actually be consumed and acted upon. Imagine a world where you can produce your insights and report on them with the BI tool of your choice. That’s kind of exciting.

But what’s even more exciting is the ability to deploy your insights operationally through an application that leverages your domain expertise and your understanding of the business logic behind the use-case you are solving. Translation: you can code up a Java, Python, PHP, or Ruby application that is light, simple, and easy to build and maintain. Why? Because the underlying logic normally embedded in ETL tools, separate analytics software, MapReduce code, NoSQL queries and stream-processing logic is pushed up into the hands of application developers. Drooling yet? Wait, it gets better.

Big Data, Cloud and The Enterprise


Let’s take this entire application paradigm and automate it within an elastic cloud service purpose-built for the organization. You can submit your application “deploy packages” for instant processing without having to understand the compute infrastructure and, better yet, without having to understand the underlying data analytics services required to process your various data sources in real-time, near real-time or batch modes.

OK… if we had such an environment, we’d all be producing a ton of next-generation applications: data-driven, highly intelligent and specific to our industries and use-cases.

I’m ready…are you?

Jim Kaskade serves as CEO of Austin-based Infochimps, the leading Big Data Platform-as-a-Service provider. Jim is a visionary leader in both large and small company environments, with over 25 years of experience building hi-tech businesses and leading startups in cloud computing, enterprise software, software as a service (SaaS), online and mobile digital media, online and mobile advertising, and semiconductors, from founding to acquisition.







5 Questions Framing Data-Driven Decisions

While data-driven decision-making is nothing new (remember the rise of “decision support systems” and “business intelligence”?), enterprises do seem to have a new urgency these days: those that make data-driven decisions are gaining benefits ranging from better customer insights and higher sales to more efficient operations and lower costs. What’s not to like?

Today, the “volume, velocity and variety” of data that enterprises have at their disposal is mind-bendingly greater than just a few years ago. And, enterprises are embracing the kind of real-time decision making that does not just run the business, it runs the business smarter. Whether driving better customer engagement (and sales) or enabling more efficient operations, big data has become an essential asset for the modern enterprise.

Earlier in my career I was an operations research analyst – kind of an early-day data scientist. There were 5 questions I always made sure to answer regarding any project I undertook. These questions frame an analytic process that underlies making effective data-driven decisions, and I think they are as applicable today as ever.

  1. Do I understand the decision to be made, especially the business factors that make this decision important?
  2. Do I have a model that captures the decision process? That is, do I have an analytic framework, mathematical description, appropriate algorithms, etc., that describe the decision in an appropriate degree of detail? Part of this is picking the right algorithms.
  3. Do I have the data? This is pretty obvious: if data is going to drive a decision, you need to have the data. Even in today’s environment of an overabundance of data, it’s still important to make sure you have data appropriate for the model and the decision.
  4. Do I have the necessary computational infrastructure? This used to mean: can I run this on my PC, or do I need to get time in the data center? Today it means: how can I get a cluster of Hadoop servers pumping data into a NoSQL database to drive my analytics? Today’s infrastructure is much harder to master.
  5. Am I producing results that are driving the decision? If so, great. If not, maybe I got something wrong in #’s 1-4. Repeat 1-4 until satisfied.

Questions 1 and 5 are about the business. Since you know your business better than anyone, you’re pretty much on your own for these. Questions 2, 3 and 4 are about the data, analytics and computational infrastructure to get you the answers you need. There are plenty of companies that can help you here, in whole or in part. The important thing is to not get bogged down in the infrastructure. That’s where the Infochimps platform really shines. As quoted from a recent TechCrunch article, “Infochimps is one of a growing ecosystem of companies that are programming the knowledge of data scientists, statisticians and programmers into applications that businesspeople can use.”








Image Source: beafields.com