It's Not the Pricing, It's the Opacity
"The New York Times reported this week that Maryland became the first US state to ban AI-driven price increases at grocery stores."
The New York Times reported this week that Maryland became the first US state to ban AI-driven price increases at grocery stores. The law takes effect in October. It prohibits grocery stores and third-party delivery services from using consumer data to set personalized prices. Two shoppers, same store, same item, same time: under Maryland law, they now pay the same price or somebody is breaking the rules.
Twenty days ago, NDP leader Avi Lewis called on the federal Liberal government to do the same thing here, and to go broader. Not just groceries. A Canada-wide ban on algorithmic personalized pricing across the board.
I'm going to be the unpopular opinion in the room.
This is a market problem, and the market should solve it. Disclosure should be mandatory. The activity itself shouldn't be banned.
What they're actually objecting to
For those not familiar, algorithmic personalized pricing is when a retailer uses data about you (your postal code, your browsing history, your battery level, your loyalty profile, the device you're shopping from) to estimate how much you would pay for a thing, then sets your price accordingly. Two shoppers in the same app, same milk, two different totals.
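Mechanically, there is nothing exotic here: it is a scoring function over observable signals. A toy sketch in Python makes the shape of it concrete (every signal name, weight, and price below is invented for illustration, not taken from any real retailer):

```python
# Illustrative only: a toy willingness-to-pay estimator.
# All signal names and weights are invented for this sketch.
BASE_PRICE = 4.99  # the "list" price for a litre of milk

def personalized_price(signals: dict) -> float:
    """Nudge the base price by crude proxies for willingness to pay."""
    multiplier = 1.0
    if signals.get("device") == "flagship_phone":
        multiplier += 0.05   # pricier device, assumed deeper pockets
    if signals.get("battery_pct", 100) < 15:
        multiplier += 0.08   # low battery, assumed urgency
    if signals.get("postal_code_income_decile", 5) >= 8:
        multiplier += 0.04   # wealthier neighbourhood
    if signals.get("loyalty_tier") == "deal_hunter":
        multiplier -= 0.06   # price-sensitive shopper, discounted to retain
    return round(BASE_PRICE * multiplier, 2)

# Same store, same item, same time -- two different totals.
shopper_a = {"device": "flagship_phone", "battery_pct": 9,
             "postal_code_income_decile": 9}
shopper_b = {"loyalty_tier": "deal_hunter"}
print(personalized_price(shopper_a), personalized_price(shopper_b))
```

The real systems are statistical models rather than hand-written rules, but the inputs and the objective are exactly what this sketch shows.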
The Conversation put it cleanly: "Two shoppers, same store, same item: two different prices, generated by data neither of them can see."
That feels offensive. It feels like getting ripped off. I'll acknowledge that up front, because it is the legitimate underlying concern.
But there are two problems with the response.
We have been doing this for as long as commerce has existed
Personalized pricing is not a new invention from Silicon Valley. It is the oldest behaviour in commerce. The mechanism changed. The practice did not.
Walk into any bazaar from the last four thousand years and the merchant is sizing you up. Your clothes. Your accent. Whether you sound local. Whether you look like you know what the thing is worth. Whether you seem rushed. The opening price you get is a function of every signal the seller can read off you in three seconds. Tourists pay more. People in nice shoes pay more. Confident hagglers pay less.
Walk onto a car lot in 1995 and the same thing happens. Yale's Ian Ayres documented exactly this in Pervasive Prejudice?: salespeople routinely opened with different numbers based on perceived wealth, gender, race, urgency, and financing dependence. That is not algorithmic. That is a human running an algorithm in his head, in real time, with worse data and worse outcomes than the modern version.
Insurance underwriters have been pricing you on postal code, occupation, marital status, and age since long before machine learning had a name. Real estate agents adjust their negotiating tactics the second a buyer signals emotional attachment. Uber's surge pricing is personalized pricing, and most of us accept it because the mechanism is at least visible and we understand why the price moves.
And then there's the 2012 Orbitz incident: Mac users were shown more expensive hotels than Windows users on the assumption that Mac households had higher incomes. Same hotels. Same dates. Different listings. Fourteen years ago. AI did not do that. A user-agent header and a marketing analyst did.
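To underline how little "AI" that required: the whole Orbitz-style behaviour reduces to one conditional on the browser's user-agent string. A deliberately crude sketch (the hotel data and sort rule are invented to mirror the reported behaviour, not Orbitz's actual code):

```python
# Illustrative only: OS-based result steering, no model required.
# Hotel data and the ranking rule are invented for this sketch.
hotels = [
    {"name": "Budget Inn", "nightly": 89},
    {"name": "Harbour Grand", "nightly": 289},
]

def rank_results(user_agent: str) -> list:
    """Mac shoppers see pricier options first; everyone sees the same hotels."""
    pricey_first = "Macintosh" in user_agent
    return sorted(hotels, key=lambda h: h["nightly"], reverse=pricey_first)

mac = rank_results("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15)")
win = rank_results("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
```

Same inventory, different ordering, and not a neural network in sight.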
This is the second time in a week I've watched a policy conversation get hijacked by "AI did it" framing when the actual practice predates AI by centuries. I wrote about this on Monday in the context of Cursor deleting a production database, and the shape of the argument is identical here. AI does something humans have been doing for as long as there have been humans, gets blamed as the cause when it is in fact the medium, and a regulatory panic kicks off that misses the actual problem.
The mechanism changed. That's it. The bazaar merchant sizing you up by your watch became a recommendation engine sizing you up by your postal code, your phone model, and your purchase history. The observation layer went from human intuition to industrialized inference. The thing being observed, your willingness to pay, is exactly the same.
It's also worth noting that Maryland's own consumer protection alliance pointed out that the kind of price discrimination this law targets was already prohibited under the existing Maryland Consumer Protection Act, and that "the notion that widespread, individualized price gouging could occur in such a market is inconsistent with the economic realities of the industry." Translation: this law is doing a victory lap on a practice the existing framework already covered, with "AI" added for political flavour.
Where the critics are right
Here is the part I'll cede.
Surveillance pricing feels uniquely creepy not because personalized pricing is new, but because the observation layer became invisible. In a bazaar you know the merchant is reading you. You can read him back. You can walk away. The negotiation is symmetric, more or less. In an algorithmic system, you don't know what is being inferred about you, you don't know what price the person beside you was offered, and you have no mechanism to push back.
That asymmetry is the legitimate complaint, and it is fair.
But the answer to information asymmetry is information disclosure, not a ban.
Consumers already accept personalized pricing all over the place when they understand it and sometimes benefit from it. Student discounts. Senior discounts. Happy hour. Loyalty programs. First-time user promos on Uber and DoorDash. Coupons, which are, mathematically, personalized pricing for people willing to clip them. Financial aid is one of the most aggressive personalized pricing systems in our society and we built the entire post-secondary system around it.
People object to personalized pricing when they don't understand why it exists, when they can't opt in or out, when it feels arbitrary, and when it never benefits them. Every one of those problems is solved by disclosure and competition. None of them is solved by a ban.
What we should actually do
So what can we do about this?
Mandate disclosure. If a retailer is using algorithmic personalized pricing, require them to say so, prominently, with a plain-language explanation of what categories of data feed the model. Make it as visible as a nutrition label. Let consumers see what is happening.
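One concrete shape such a label could take is a small machine-readable record published alongside the plain-language notice. A hypothetical sketch (the field names are my own invention; no such disclosure standard currently exists):

```python
# Illustrative only: a hypothetical "pricing nutrition label".
# Field names are invented; no such standard currently exists.
pricing_disclosure = {
    "personalized_pricing": True,
    "plain_language": ("Prices in this app may vary by shopper. We set them "
                       "using the data categories listed below."),
    "data_categories": [
        "postal code", "purchase history", "device type", "loyalty profile",
    ],
    "opt_out_available": True,
}

def disclosure_is_complete(d: dict) -> bool:
    """Complete only if it names its inputs and states whether you can opt out."""
    required = {"personalized_pricing", "plain_language",
                "data_categories", "opt_out_available"}
    return required <= d.keys() and bool(d["data_categories"])
```

The point is not this particular schema; it is that "what data feeds the price" is a short, checkable list, which is exactly what a label regime would require.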
Then let the market do its job.
If consumers find Loblaws-style algorithmic pricing offensive, they shop at the grocer advertising "one price for everyone." That grocer now has a competitive advantage. Other stores either match it or lose customers. The practice gets shaped by demand, not legislated out of existence by a regulator solving a problem that may not exist at the scale claimed, and that, in Maryland's case, was already illegal anyway.
What we do not need is another nanny-state intervention where Ottawa decides on our behalf which forms of pricing we are emotionally mature enough to handle. We have been negotiating with merchants for as long as we have had merchants. We can handle being told the merchant is now using software.
The bottom line is this. Ban opacity, not personalization. Disclose the practice and let people choose. The bazaar is not the problem. The bazaar with the lights off is the problem, and you do not fix that by banning bazaars.