The practice, known as “surveillance pricing,” relies on AI-driven analysis of personal data to tailor prices to each consumer, often in real time.
What once seemed like a distant possibility — paying more than a neighbor for the same carton of milk or airline ticket — is already taking shape across industries. Airlines, e-commerce platforms, and retailers are exploring pricing systems that track digital footprints to determine a buyer’s likely “pain point,” the highest amount they’re willing to pay before walking away.
The U.S. Federal Trade Commission has begun examining how companies use consumer data to drive such pricing decisions. Its January 2025 report outlined how retailers and digital platforms collect vast amounts of behavioral information — including search patterns, device type, location, and even how long a cursor lingers over a button — to predict spending limits.
These insights feed into machine learning systems that adjust prices in fractions of a second, creating a personalized market where no two customers necessarily see the same cost for the same item. Surveillance pricing represents a major shift from the traditional model of dynamic pricing, which adjusts costs broadly based on supply and demand.
Instead of charging more when demand rises or stock runs low, this new approach applies data analytics to infer personal willingness to pay. Every online interaction, from scrolling speed to shopping cart abandonment, becomes a data point that algorithms translate into a financial profile.
Analysts say this evolution transforms commerce from a competitive marketplace into a behavioral experiment. Instead of discovering a fair price through competition, consumers face a mirror reflecting their own data. Someone browsing on a high-end device may be classified as affluent, while another with a nearly drained phone battery might be flagged as desperate to complete a transaction quickly. Each variable can influence how much an algorithm thinks they can be pushed to pay.
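The mechanics can be illustrated with a deliberately simplified sketch. The Python snippet below shows how a handful of behavioral signals might be folded into a per-shopper price multiplier; every feature name, weight, and threshold is invented for illustration and is not drawn from any actual retailer’s system.

```python
# Illustrative sketch only: a toy model of how behavioral signals might be
# folded into a per-shopper price multiplier. All feature names, weights,
# and thresholds are hypothetical, not taken from any real system.

BASE_PRICE = 49.99

def price_multiplier(signals: dict) -> float:
    """Map behavioral signals to a hypothetical willingness-to-pay score."""
    m = 1.0
    if signals.get("device_tier") == "premium":   # high-end device -> assumed affluent
        m += 0.08
    if signals.get("battery_pct", 100) < 15:      # low battery -> assumed urgency
        m += 0.05
    if signals.get("cart_abandons", 0) > 2:       # habitual abandoner -> price-sensitive
        m -= 0.10
    return m

shopper = {"device_tier": "premium", "battery_pct": 12, "cart_abandons": 0}
print(f"quoted price: ${BASE_PRICE * price_multiplier(shopper):.2f}")
```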
Concerns about the ethics of this practice intensified after disclosures by major U.S. corporations that artificial intelligence already plays a role in pricing. One airline recently revealed that AI shapes a small but growing portion of its domestic fares, a figure expected to expand significantly by year’s end.
The admission triggered questions from lawmakers about whether such systems exploit private information or discriminate among customers. Regulators have asked for transparency, seeking to understand whether pricing models rely on personal data or simply market analytics.
The line between legitimate business strategy and digital manipulation is increasingly thin. Companies argue that algorithmic pricing helps balance supply and demand, improves efficiency, and delivers competitive offers to savvy shoppers.
Consumer advocates counter that when AI has access to intimate behavioral data — especially without consent — the playing field is no longer fair. If one shopper’s profile marks them as less price-sensitive, they could be quietly charged more than another person for the same product, in the same place, at the same time.
The FTC’s findings showed how detailed these systems can become. Companies collect data through account sign-ups, loyalty programs, and embedded trackers across websites and apps. These “pixels” capture granular signals like how far a user scrolls down a page, which videos they watch, and how fast they navigate between screens.
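What such a tracker transmits might look, in simplified form, like the payload below. The field names are hypothetical, but they mirror the categories of signals the FTC report describes.

```python
# Hypothetical example of the kind of event payload an embedded tracking
# "pixel" might emit; all field names are invented for illustration.
import json, time

event = {
    "user_id": "anon-7f3a",            # pseudonymous ID tied to cookies or an account
    "page": "/product/widget-pro",
    "scroll_depth_pct": 85,            # how far down the page the user scrolled
    "hover_ms": {"buy_button": 4200},  # cursor dwell time over the buy button
    "video_watched_s": 37,             # seconds of product video watched
    "nav_latency_ms": 900,             # how fast the user moves between screens
    "ts": int(time.time()),
}
print(json.dumps(event, indent=2))     # in practice, POSTed to an analytics endpoint
```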
Combined with demographic data and purchase history, such patterns enable retailers to infer emotions, urgency, or economic status — indicators that can shape what price is presented next. Surveillance pricing also extends beyond online shopping.
Physical retailers equipped with digital loyalty systems or app-based coupons can merge in-store behavior with online records, creating an integrated picture of individual spending habits. Cameras, sensors, and point-of-sale software can link purchase decisions to digital identities, deepening the loop of predictive analytics.
Academic research suggests that this method allows sellers to divide consumers into increasingly fine-grained categories. AI tools can map out thousands of micro-segments, each aligned with a different price curve. In its most advanced form, the system no longer seeks a general price that suits a crowd — it seeks the maximum revenue extractable from each specific person.
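In formal terms, that per-person optimization amounts to searching for the price that maximizes expected revenue, price multiplied by the probability of purchase at that price. The sketch below assumes a logistic demand curve, with invented parameters for two hypothetical micro-segments; real systems would estimate such curves from behavioral data.

```python
# A minimal sketch of per-segment price optimization, assuming a logistic
# purchase-probability curve. Parameters per micro-segment are invented.
import math

def purchase_prob(price: float, wtp: float, sensitivity: float) -> float:
    """Logistic demand: probability of buying falls as price exceeds wtp."""
    return 1.0 / (1.0 + math.exp(sensitivity * (price - wtp)))

def optimal_price(wtp: float, sensitivity: float) -> float:
    """Grid-search the price that maximizes expected revenue p * P(buy|p)."""
    candidates = [p / 100 for p in range(1000, 15001)]  # $10.00 .. $150.00
    return max(candidates, key=lambda p: p * purchase_prob(p, wtp, sensitivity))

# Two hypothetical micro-segments with different inferred "pain points":
for label, wtp, sens in [("price-sensitive", 40.0, 0.25), ("affluent", 90.0, 0.10)]:
    print(f"{label}: charge ${optimal_price(wtp, sens):.2f}")
```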
While some businesses describe the technology as a natural extension of market evolution, regulators are beginning to view it as a potential threat to consumer fairness. Critics argue that without oversight, AI could learn to exploit psychological vulnerabilities.
Someone who hesitates on a product page may trigger a timed discount, while another who clicks quickly might be offered a higher “premium” price. Such techniques turn shopping into a form of behavioral gaming, rewarding some users while penalizing others for traits they never knew were being tracked.
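A toy version of that behavior-triggered logic might look like the following, with thresholds and adjustments that are purely illustrative:

```python
# Toy illustration of behavior-triggered price nudges; the thresholds and
# percentage adjustments are hypothetical.

def adjust_offer(base: float, dwell_seconds: float, clicks_per_min: float) -> float:
    if dwell_seconds > 60:        # lingering on the page -> timed discount
        return round(base * 0.90, 2)
    if clicks_per_min > 30:       # rapid, decisive clicking -> "premium" price
        return round(base * 1.05, 2)
    return base

print(adjust_offer(100.0, dwell_seconds=75, clicks_per_min=5))   # 90.0
print(adjust_offer(100.0, dwell_seconds=10, clicks_per_min=40))  # 105.0
```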
The issue also exposes how much personal information ordinary consumers reveal during routine activity. From filling out surveys to accepting cookies on websites, millions of users unknowingly grant companies permission to collect the data that can later influence how much they pay. Even location settings or typing speed can become elements in the algorithmic calculus of pricing power.
The implications of surveillance pricing stretch far beyond retail. In housing, travel, healthcare, and even digital entertainment, algorithms can quietly adjust rates in ways that favor corporate profits over transparency. Lawmakers across the United States are now moving to address the practice, introducing dozens of bills to regulate how personal data informs price-setting.
By early 2025, legislators in 24 states had proposed measures targeting algorithmic pricing and rent-setting software. Several of these bills focus on disclosure, requiring companies to inform consumers when prices are generated or influenced by automated systems.
New York enacted one of the first such laws, banning undisclosed personalized pricing based on individual data. In Ohio, pending bills would compel businesses earning more than $5 million annually to disclose whether a price was determined by an algorithm.
California’s attempts to ban the use of personal data for surveillance-based pricing met resistance from corporate lobbies, leaving only limited protections in place. However, the debate is far from over. Federal regulators are also considering broader frameworks that could treat undisclosed individualized pricing as a deceptive practice under consumer protection law.
Similar efforts are emerging internationally. The United Kingdom’s Digital Markets, Competition and Consumers Act, whose consumer protection provisions took effect in April 2025, grants regulators authority to fine companies up to 10 percent of global revenue for unfair or opaque digital practices, including biased pricing. The law signals a growing consensus that AI-driven commerce requires new forms of accountability.
For consumers, the challenge lies in recognizing when they are being targeted. Surveillance pricing operates invisibly. Two people can browse the same site at the same time and never realize they are seeing different offers. In some cases, algorithms may even suppress discounts for “loyal” customers on the assumption they will buy regardless, while offering promotions to new users seen as flight risks.
The Federal Trade Commission’s study detailed how such differentiation could appear in subtle ways: online stores adjusting prices when a shopper hesitates, travel platforms altering fares based on search history, or pharmacies excluding frequent buyers from coupons. Each change may seem minor, but collectively they represent a transformation in how markets perceive and value individual consumers.
Experts warn that surveillance pricing blurs ethical boundaries because it monetizes personal psychology. The same data used to enhance user experience — faster recommendations, customized ads — can just as easily be repurposed to extract higher payments. As artificial intelligence improves, the gap between personalization and manipulation narrows.
For now, most companies defend algorithmic pricing as a legitimate tool of competition. They argue that data helps them respond to demand more efficiently, reduce waste, and tailor promotions. But transparency remains scarce. Few firms publicly explain what data sources feed their systems or how they determine when one customer should pay more than another.
Consumer advocates urge individuals to take limited steps to protect themselves. Private browsing modes, virtual private networks, and regularly cleared cookies can reduce tracking, though these methods offer imperfect shields. Device fingerprinting — the technique of identifying users through unique hardware and software configurations — can still allow companies to follow shoppers even when traditional tracking is blocked.
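A simplified sketch shows why fingerprinting is hard to evade: a handful of stable attributes, hashed together, yields an identifier that survives cleared cookies. The attribute set here is illustrative; real fingerprints draw on many more signals.

```python
# Simplified sketch of device fingerprinting: combining stable browser and
# hardware attributes into one identifier that persists after cookies are
# deleted. The attribute set is illustrative only.
import hashlib

def fingerprint(attrs: dict) -> str:
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "screen": "2560x1600",
    "timezone": "America/New_York",
    "fonts": "Arial,Helvetica,Times",
    "gpu": "Apple M1",
}
print(fingerprint(device))  # same device -> same ID, even with cookies cleared
```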
Watchdog groups also recommend minimizing personal data shared with retailers, declining optional surveys, and avoiding unnecessary account registrations. Yet even those precautions may not stop the practice entirely. As long as predictive algorithms can infer behavior from subtle signals like scrolling speed or response time, total privacy remains elusive.
The emergence of surveillance pricing underscores a larger shift in the relationship between technology and trust. What began as a promise of convenience — smarter systems that adapt to individual needs — has evolved into a mechanism capable of exploiting those same traits. The boundary between service and surveillance is increasingly blurred.
For regulators, the question is how to preserve innovation while preventing exploitation. For consumers, it is about how to participate in the digital marketplace without surrendering control over what they pay. The answers will shape not just the cost of a flight or a bottle of milk, but the meaning of fairness in the age of artificial intelligence.