It was only a matter of time. Having invaded the software world, AI has now set its sights on once-benign household objects and desk fodder.
Researchers at Carnegie Mellon University (CMU) in the US city of Pittsburgh, Pennsylvania, have built a computer vision system that gives everyday objects the ability to predict what you’ll do next and to roll themselves into place before you ask. Think a stapler scooting across the desk to meet a waiting hand, or a kitchen knife politely sliding out of the way before you plant a hand on it at the counter.

The team calls it “unobtrusive physical AI,” though “mildly unsettling office poltergeist” wouldn’t be far off.
The system combines ceiling-mounted cameras with computer vision and large language models (LLMs) to monitor what humans are doing and decide when to lend a hand. When a person begins an action – say, reaching for papers or chopping vegetables – the AI generates a short text description of the scene and uses it to infer what might happen next. Movement commands are then sent to small, wheeled robotic bases under the objects, which the researchers call “proactive assistants.”
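To make that pipeline concrete, here is a minimal sketch of the sense-describe-predict-actuate loop in Python. Every name in it (Detection, describe_scene, predict_next_action, RobotBase) is a hypothetical stand-in rather than CMU's actual code, and the LLM call is stubbed out with a hard-coded rule – it shows the shape of the loop, nothing more.

# Minimal sketch of the sense -> describe -> predict -> actuate loop.
# All names below are hypothetical stand-ins, not CMU's actual API, and
# predict_next_action() stubs out the LLM call with a hard-coded rule.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # e.g. "hand", "stapler"
    x: float     # position on the desk plane, in metres
    y: float

def describe_scene(detections: list[Detection]) -> str:
    """Turn raw detections into the short text description fed to the model."""
    return "; ".join(f"{d.label} at ({d.x:.2f}, {d.y:.2f})" for d in detections)

def predict_next_action(scene: str) -> str:
    """Stand-in for the LLM call that infers what the user will do next.
    A real system would prompt a hosted model with the scene description."""
    if "hand" in scene and "stapler" in scene:
        return "move stapler to hand"
    return "wait"

@dataclass
class RobotBase:
    """Wheeled base under an object – one of the 'proactive assistants'."""
    object_label: str

    def move_to(self, x: float, y: float) -> None:
        print(f"{self.object_label}: driving to ({x:.2f}, {y:.2f})")

def control_step(detections: list[Detection], bases: dict[str, RobotBase]) -> None:
    """One tick of the loop: describe the scene, predict, then actuate."""
    if predict_next_action(describe_scene(detections)) == "move stapler to hand":
        hand = next(d for d in detections if d.label == "hand")
        bases["stapler"].move_to(hand.x, hand.y)

# A hand reaches across the desk; the stapler's base drives to meet it.
control_step(
    [Detection("hand", 0.40, 0.25), Detection("stapler", 0.10, 0.60)],
    {"stapler": RobotBase("stapler")},
)

The notable design choice is the middle step: rendering the scene as a short text description gives the model a compact, language-shaped view of the world, which is presumably what lets an off-the-shelf LLM handle the “what happens next” inference.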
Project lead Alexandra Ion, from CMU’s Human-Computer Interaction Institute, said the goal is to study what happens when AI and motion merge into familiar, everyday objects.
“We classify this work as unobtrusive because the user does not ask the objects to perform any tasks,” said Ion. “Instead, the objects sense what the user needs and perform the tasks themselves.”
That idea may sound outlandish, but there’s thoughtful engineering behind it. The team argues that people already trust simple, physical tools far more than voice assistants or smart speakers. If those tools could adapt to context – for example, desk items shuffling themselves back into place at the end of the workday, or kitchen utensils repositioning themselves as you cook – it could pave the way for more useful forms of domestic or industrial automation.
“We have a lot of assistance from AI in the digital realm, but we want to focus on AI assistance in the physical domain,” said Violet Han, a Ph.D. student working on the project alongside Ion. “We chose to enhance everyday objects because users already trust them. By advancing the objects’ capabilities, we hope to increase that trust.”
The team showed off their prototype at the recent ACM Symposium on User Interface Software and Technology (UIST) in Busan, South Korea, pitching it as a glimpse of how proactive assistance might work in the real world.
A hospital tray that repositions itself, or a shelf that folds out when you walk into the house with groceries, might not be so far-fetched. On the other hand, the sight of a self-aware stapler whizzing across a desk might not be something humans are mentally equipped to deal with – especially when we’re still arguing over whether anyone really needs an internet-enabled fridge. ®