The data economy or digital economy refers to the production, distribution and consumption of digital data. This global system collects huge amounts of behavioral, social, location, and biometric data. Analysts use that data to produce insights, and businesses buy and sell those insights for profit.
In the data economy, companies and institutions collect the data we generate online. This data covers a range of online activity, including clicks, searches, purchases, and location. They analyze this data to guide business decisions, target ads, and develop new products. The more information companies can gather, the better they can turn everyday online activity into economic value.
Personalized advertising is a key part of the data economy.
The data economy is often described as surveillance capitalism, a term popularized by Harvard scholar Shoshana Zuboff. It describes how companies track what people do online, collect their data, and use it to make money, often through targeted advertising or data-driven products. “Surveillance” describes how companies constantly track people, while “capitalism” explains how they turn that data into profit.
While surveillance capitalism focuses on the business model of behavioral extraction and prediction, the data economy is broader, covering the full cycle of collecting, analyzing, distributing, and trading data across industries.
For many people, the data economy and surveillance capitalism make services more convenient and relevant. However, this convenience also comes with real risks. These include loss of privacy, reduced autonomy, security breaches, discrimination, and growing power imbalances between individuals and large technology platforms.
The roots of the data economy lie in the explosive growth of the internet, social media, and mobile computing. As digital platforms became part of everyday life, companies began collecting not only what users knowingly shared, such as search terms, purchases, and posts, but also the less visible traces of online behavior.
This background data, sometimes called digital exhaust, includes every click, scroll, pause, and GPS movement. Zuboff refers to this as behavioral surplus – the extra information collected beyond what’s necessary to provide a service. This extra data lets companies build detailed user profiles. They use these profiles to predict behavior and make money through targeted ads and other data-driven markets.
The rapid development of machine learning, artificial intelligence, and cloud storage has quickly scaled and expanded the data economy.
After companies collect data, algorithms analyze it to predict future behavior. These predictions include what people might buy, where they might go, or how they might vote. These predictions are valuable to businesses that want to influence decisions and maximize engagement or sales.
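To make this concrete, here is a minimal, hypothetical sketch of a behavioral “propensity” score. Real systems use large machine-learning models trained on millions of profiles; the function name, signal names, and weights below are all invented for illustration only.

```python
# Hypothetical sketch: a toy "propensity" score built from collected
# behavioral signals. Real platforms use far more complex ML models;
# this only illustrates how past behavior feeds a prediction.
WEIGHTS = {"viewed_product": 0.4, "added_to_cart": 0.5, "prior_purchases": 0.1}

def purchase_propensity(signals: dict) -> float:
    """Weighted combination of behavioral signals, clamped to [0, 1]."""
    score = sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return max(0.0, min(1.0, score))

likely_buyer = purchase_propensity(
    {"viewed_product": 1.0, "added_to_cart": 1.0, "prior_purchases": 0.5}
)  # a high score might trigger a targeted ad
```

The point is not the arithmetic but the pipeline: every recorded action becomes an input, and the output is a prediction that can be sold or used to steer what the user sees next.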
The global market for predictive analytics is growing quickly. Analysts expect the market to grow from about USD 19 billion in 2024 to over USD 80 billion by 2030. This growth shows how much commercial value companies place on predicting and shaping human behavior.
Datafication is the process of turning everyday human activities into quantifiable data. Every digital action, like tapping a fitness tracker or liking a post, creates a data point that analysts can record and study.
The transformation extends to the Internet of Things (IoT). Devices like smart appliances, wearables, vehicles, and even smart cities collect data nonstop. While datafication supports innovation and efficiency, it also creates challenges. Continuous monitoring, loss of privacy, and few ways to opt out are major concerns.
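A small sketch shows what datafication looks like in practice. Everything here is hypothetical: the record structure, field names, and sample events are invented to illustrate how ordinary actions become structured, timestamped records that can be aggregated into a profile.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical illustration: each ordinary action becomes a structured,
# timestamped record that can be stored, aggregated, and analyzed.
@dataclass
class DataPoint:
    user_id: str
    action: str   # e.g. "like", "search", "step_count"
    value: str
    timestamp: str

def datafy(user_id: str, action: str, value: str) -> DataPoint:
    """Convert a single everyday activity into a measurable data point."""
    return DataPoint(user_id, action, value,
                     datetime.now(timezone.utc).isoformat())

# A few harmless-looking events...
events = [
    datafy("u123", "like", "post:running-shoes"),
    datafy("u123", "search", "knee pain remedies"),
    datafy("u123", "step_count", "11204"),
]

# ...combine into the beginnings of a behavioral profile.
profile = [asdict(e) for e in events]
```

Individually, each record seems trivial; together, a like, a search, and a step count already hint at health, habits, and interests.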
Behind the scenes, platforms, data brokers, ad-tech systems, analytics companies, and cloud providers work together. They help drive the data economy.
Platforms such as search engines, social media networks, and e-commerce sites act as major data collectors.
Data brokers collect information from public records, purchases, and online activity. They use this data to build detailed consumer profiles, which they sell or share with third parties.
These operations are often opaque. Users rarely know which organizations hold their data or how those organizations use it.
In surveillance capitalism, the traditional relationship between user and service provider changes. The service itself is no longer the main product; the user’s behavior is. As the saying goes, “If you’re not paying for the product, you are the product.”
Platforms profit by transforming human experience into data that analysts can examine, package, and sell. The more detailed and predictive the data, the more valuable it becomes.
Zuboff uses the term instrumentarian power to describe a new type of control. This control happens when companies monitor, predict, and influence human behavior using data-driven systems. Unlike totalitarian power, which demands obedience, instrumentarian power aims to guide behavior quietly and automatically, through nudges, notifications, and algorithmic suggestions.
This power operates invisibly, embedding itself in digital tools that seem neutral or helpful but aim to influence choices. Over time, it can shift control away from individuals and towards the systems that anticipate and steer their actions.
The overlap between the data economy and surveillance capitalism creates a range of risks for individuals and society:
Constant and large-scale data collection erodes privacy. Many people do not realize how much information companies collect about them.
This data can reveal personal details like health, feelings, or political views. Even small, seemingly harmless data points can combine to produce detailed personal insights. Making matters worse, privacy policies are often long and complex, and opting out can be difficult and unclear.
When algorithms predict and influence behavior, users may feel they are making free choices when, in reality, their experiences are carefully shaped. Zuboff warns that this “architecture of behavioral change” can weaken independent thinking. It can also affect democracy, as hidden feedback loops and targeted messages influence people’s choices.
The more data that is collected and stored, the larger the target for hackers and cybercriminals. Massive data breaches have exposed hundreds of millions of individuals to identity theft and fraud.
Companies often reuse data in new ways. For example, data from a health app could be analyzed for insurance or hiring decisions. These secondary uses can occur without consent and increase the risk of re-identification or misuse.
Algorithms that profile users can unintentionally reinforce biases or discrimination. People may receive different prices, job ads, or opportunities based on hidden factors like age, race, or location.
This creates power asymmetries: large corporations and governments hold vast amounts of knowledge about individuals, while individuals have little visibility or recourse. The balance of control shifts away from the user.
Beyond personal harm, widespread data monitoring can have chilling effects on free expression. When people feel watched, they may censor themselves or avoid controversial topics.
In politics, companies and campaigns use data-driven targeting to shape public opinion and voters’ choices. Micro-targeting, propaganda, and algorithmic governance challenge democratic transparency and trust.
Most digital systems operate on “consent by default,” with users agreeing to complex terms they rarely read. Once data reaches companies or brokers, users have little insight into how it will be used.
A single app or smart device can connect to multiple third-party systems. This lets companies combine data to create profiles that go far beyond what the user expects.
The European Union’s General Data Protection Regulation (GDPR), which took effect in 2018, remains one of the most comprehensive privacy laws in the world. It gives people the right to access, correct, and delete their data, and requires companies to have a lawful basis for using it.
The GDPR is widely regarded as a major achievement. However, it does not fully address the business model of behavioral prediction, and enforcement has been inconsistent.
The United States takes a more fragmented approach, with privacy laws enacted state by state. California, for example, has the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA). Enforcement of these laws depends heavily on consumer complaints or agency action.
Other regions, including Australia, Canada, and parts of Asia, have introduced or updated data protection laws. However, the level of protection and enforcement varies widely.
Some big tech companies have reacted to rules and consumer demands by adding privacy controls. For example, Apple has “opt-in” tracking permissions.
Google has announced plans to phase out third-party cookies, though this has been repeatedly delayed and the company continues to weigh alternative approaches. Critics argue these steps often serve as public-relations measures rather than substantive changes to surveillance-based business models.
Innovation based on data isn’t always negative. It supports services like medical research, navigation, and communication, and it drives progress in AI, healthcare, logistics, and regulation. But the exchange of data for access to services must be fair and transparent, and choice and competition must exist.
Reforming business models so that they respect human dignity and autonomy could help protect consumers. New models could limit the amount of data companies collect, share value with users, or offer subscription services that don’t rely on behavioral tracking.
Decentralized identity and digital identity technologies give people more control over their personal information. Users can share only what’s necessary, and the approach avoids central databases that attackers might breach or operators might misuse.
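The core idea of “sharing only what’s necessary” is often called selective disclosure. The sketch below is a simplified, hypothetical illustration (real decentralized-identity systems use cryptographic proofs, not plain function calls): the credential holder answers a yes/no question instead of handing over the underlying personal data.

```python
from datetime import date

# Hypothetical illustration of selective disclosure: the credential
# holder proves a yes/no "predicate" (here, being over 18) without
# revealing the underlying personal data (the actual birth date).
CREDENTIAL = {"name": "Alice Example", "birth_date": date(1990, 5, 1)}

def prove_over_18(credential: dict, today: date) -> bool:
    """Reveal only whether the holder is over 18, not the birth date."""
    b = credential["birth_date"]
    age = today.year - b.year - ((today.month, today.day) < (b.month, b.day))
    return age >= 18

# The verifier learns a single bit of information, nothing more.
result = prove_over_18(CREDENTIAL, date(2025, 1, 1))
```

In a real system, the verifier would receive a cryptographically verifiable proof of this one fact, so the birth date never leaves the user’s device.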
These privacy tools shift power back from corporations to consumers and create a safer, more transparent way to participate in the data economy. Check out Anonyome Labs’ solutions for both individuals and businesses.