Snapchat must have known what it was doing when it called its photo tool “memories”. Really, it’s just a back-up tool like any other, allowing users to store their pictures. But calling them memories was a recognition that those pictures are not just files but reminders of people’s most precious moments.
So when the company recently announced its decision to start charging for “memories”, it was met with a backlash that it really should have seen coming. How can we run out of storage space for memories? How dare anyone ask us to pay a recurring fee to keep our memories safe?
The recent march of technology suggests that this kind of conflict is going to keep happening. Companies are increasingly treating our data as something to own, to fight about and to charge for. And the advancement of AI is only going to make this worse.
Around the same time as Snapchat’s announcement, the fitness social media platform Strava announced that it was suing the smartwatch maker Garmin. The details of the case are fairly niche, relating to the segments and heat maps on the platforms. But at the heart of the argument is the question of who gets to display data, and how. Garmin wants Strava to show that a workout was done by someone using one of its devices, but Strava refuses.
Notably, in one of Strava’s many statements about the falling-out, it recognised the emotional and moral pull of owning one’s own data, suggesting that it shouldn’t be required to show Garmin’s information because “We consider this to be YOUR data”. “If you recorded an activity on your watch, we think that is your data,” a company representative wrote.
The fitness world is a useful test case for arguments about who owns what data, because it relies on the idea of sharing it around. Runners record their run on a Garmin smartwatch, say, but that data gets sent up to Strava so that their friends can see it, and to training platforms that recommend future runs, only for those plans to be sent back down to the Garmin watch. Until now, there has been something slightly retro about the relatively good relationship between all of those technology companies. But even the utopia of fitness tech appears to be getting muddied, as the rush to own as much data as possible continues.
There is something deeply frustrating about generating that data – especially when it requires doing a long workout – and not feeling like it is your own. But that is happening more and more, especially as AI platforms turn data into the central and most in-demand resource on the internet, required not only to power today’s offerings but to train the models of tomorrow as well.
Reddit and Wikipedia, for instance, remain two of the most useful troves of quality training data for AI. But companies that use them to teach their AI systems about language don’t actually need to worry about the people who created that data in the first place, since it was contributed voluntarily, and often anonymously.
All of this matters because we have no real way of sharing our data outside of this system. The dream of a web powered by a host of decentralised servers all talking to each other is over, even if some people try to revive it. If you’re going to share photos with your friends, you’re probably going to have to do so on Instagram, at which point they sort of stop being your photos at all.
This might be part of the reason why people have been retreating from public social media and into the safety of the group chat. Many of the most popular messaging platforms – iMessage, for instance, and even Meta’s WhatsApp – make big privacy promises that they actually keep: end-to-end encryption means even the companies themselves cannot read the content of messages. Chats are safe from prying eyes – or AIs.
Privacy has long felt like something of an abstract concern, in part because it has always seemed so impossible to achieve on the internet. But as the web becomes a training ground for AIs, we might finally want to take proper control of our data. Until we do so, we won’t just be paying for our memories, but having them used to train whole new machines, too.