---
title: "How Big Data and Analytics Will Change Society"
slug: "how-big-data-and-analytics-will-change-society"
description: "The societal impact of data analytics — predictions from 2013 and where we are now."
datePublished: "2013-01-21"
dateModified: "2026-03-15"
category: "Data Strategy"
tags: ["society", "big data", "analytics", "impact"]
tier: 3
originalUrl: "http://www.applieddatalabs.com/content/how-big-data-and-analytics-will-change-society"
waybackUrl: "https://web.archive.org/web/20130121064503/http://www.applieddatalabs.com:80/content/how-big-data-and-analytics-will-change-society"
---
# How Big Data and Analytics Will Change Society
In January 2013, I sat down and wrote a piece about how big data and analytics were going to change society. I made predictions. Some of them were right. Some were embarrassingly wrong. And a few came true in ways I never would have guessed.
Let me score my own report card.
## What We Predicted in 2013
The original article laid out a vision of data as a democratizing force. We talked about personalized education, data-driven healthcare, smarter cities, and the ability for organizations to understand their customers at a level that had never been possible. We argued that as data collection scaled up and analytics tools got cheaper, the benefits would spread beyond Fortune 500 companies to small businesses, nonprofits, and government agencies.
We also flagged the risks. Privacy was a concern even then. We worried about surveillance, about data being used to discriminate, about the concentration of data power in a few large companies. But we framed these as problems to be solved, not reasons to slow down.
The overall tone was optimistic. Data was going to fix things.
We predicted data would change society. We just didn't predict that "change" would cut both ways so sharply.
## The Scorecard
**Personalized everything: right.** Netflix recommendations, Spotify Discover Weekly, TikTok's algorithm, Amazon's "customers also bought." Personalization won. It won so thoroughly that most people now find it creepy rather than delightful.

**Data-driven healthcare: partially right.** Electronic health records spread. Wearables produce floods of health data. But the healthcare system still runs on fax machines in many places, and data interoperability remains a mess in 2026.

**Smarter cities: mixed.** Some cities deployed sensors and predictive analytics for traffic, water, and energy management. But the "smart city" vision we imagined, where data optimized everything from parking to policing, ran into serious equity and privacy concerns. Predictive policing, in particular, turned out to reinforce existing biases rather than eliminate them.

**Data solving inequality: wrong.** I was too optimistic here. Data access didn't equalize. It concentrated. The companies with the most data got more powerful. The digital divide widened. Communities without broadband, without technical literacy, without representation in training data sets, got left behind or, worse, got misrepresented.

**Surveillance as a concern: correct, but we underestimated the scale.** Edward Snowden's revelations came just months after this article was published. Cambridge Analytica was still five years away. Today, facial recognition in public spaces, location tracking by data brokers, and AI-generated deepfakes represent threats we didn't even have vocabulary for in 2013.
## What We Didn't See Coming
The biggest miss was AI itself. In 2013, we were still talking about "analytics" and "data mining." Deep learning had just started winning image recognition competitions. Nobody in the mainstream was predicting that by 2023, a chatbot would pass the bar exam, or that AI would generate photorealistic images from text descriptions, or that companies would need entire operational frameworks just to manage their AI deployments.
We also didn't predict the attention economy. Data didn't just help companies understand customers. It helped platforms engineer addiction. The societal impact of algorithmically optimized content feeds (teen mental health effects, political polarization, information silos) is arguably the biggest way data changed society. And it wasn't on anyone's radar in 2013.
## Where We Are in 2026
AI is now actually changing society in the ways we dimly sensed but couldn't articulate. Generative AI is automating knowledge work. Workforce planning looks completely different when AI can draft legal briefs, write code, and generate marketing copy. The societal implications are real: job displacement, questions about authorship and creativity, the blurring line between human and machine output.
But here's what gives me hope. The conversation has matured. In 2013, we talked about data as though it were inherently good and just needed to be unleashed. In 2026, we understand that data and AI are tools, and tools can build or destroy depending on who wields them and how they're governed.
The organizations getting this right are the ones treating AI governance as a first-class concern, not an afterthought. They're asking not just "can we do this?" but "should we, and for whom?"
I got some predictions right in 2013. I got others wrong. But the prediction I'm most confident about now is this: the organizations that figure out how to use AI responsibly will outperform those that don't. Not because responsibility is a nice thing to have, but because trust is a competitive advantage that compounds over time.