Berkeley engineer Negar Mehr envisions a future where robot assistants are commonplace, from our homes to deep space

March 3, 2026 by Marni Ellery

As robots move beyond assembly lines and into homes, can we rely on them to always make safe, intelligent decisions?

UC Berkeley’s Negar Mehr (Ph.D.’19 ME), assistant professor of mechanical engineering, thinks we might be getting one step closer. At the Berkeley Intelligent Control (ICON) Lab, she and her research team are studying ways to control robot behavior by developing algorithms that will enable autonomous systems to interact with humans and other robots safely and efficiently.

Mehr, who recently won a Young Researcher Award from the Office of Naval Research, spoke with Berkeley Engineering about her work and the ways autonomous robots could someday become more integrated into the fabric of human life, from space exploration to elder care.

You’re currently working to develop algorithms for controlling robots, specifically their social interactions. What problems are you hoping to solve with your research?

Today, we can see a lot of the progress that we’ve made in robotics, including many promising examples of robots automating tasks on their own. But if robots are truly going to transform our lives, we need to feel safe and confident enough to share our workspaces with them. For example, I want to reach the point where robots that work in factories don’t have to be put in cages.

And if someday I want to have an assistive robot at home that can help with loading and unloading the dishwasher or doing the laundry, the robot must do those tasks reliably on its own. But it also needs to safely share my home with me, family members and any pets. Or if I want to have robots that help the elderly with their care, they need to be able to directly interface with humans.

So we want them to be capable of these social interactions. And I would argue that these social interactions are going to matter down the line, even when we want multiple robots to work together on a task. I think that's going to be a key enabling technology for realizing the potential of robotic systems.

In what ways are we trying to get robots to learn and infer more like humans so that they can better tackle complex tasks and situations?

There has been a lot of work in psychology and cognitive science to model and understand how humans make decisions and how humans interact with one another. We draw a lot from that literature. We frequently use something called theory of mind, where the idea is that humans can interact effectively with one another because they can predict the decision-making process of others. They can predict: if I do this, here is what the other person will probably want to do in response.

For example, when you are trying to merge onto the freeway, you kind of do that mental modeling that if I try to merge, the other vehicle will probably slow down. Because of all the repeated interactions that we’ve had with other humans, we can have a mental model of how they’re going to behave, how they’re going to react to us.

To make these models useful for a robot, we need to have an algorithmic characterization of them. We need these algorithms to be aware that the embodiment of the robot is not the same as a human's.

And on top of that, the mathematical counterparts of these models we develop and draw from cognitive science should not be too complex, so that we can run them on a resource-constrained robot.
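The freeway-merging prediction described above can be sketched as a tiny "level-1" best-response model. This is an illustrative toy, not the ICON Lab's actual algorithms; the action sets and cost values below are all hypothetical:

```python
# Toy theory-of-mind decision for a merging robot: predict the other
# driver's best response to each of our candidate actions, then pick
# the action whose predicted joint outcome costs us least.
# All actions and costs are hypothetical, chosen only for illustration.

ROBOT_ACTIONS = ["merge", "wait"]
HUMAN_ACTIONS = ["slow_down", "keep_speed"]

# Hypothetical cost for the human driver (lower is better for them).
HUMAN_COST = {
    ("merge", "slow_down"): 1.0,    # mild delay
    ("merge", "keep_speed"): 10.0,  # near-collision risk
    ("wait",  "slow_down"): 2.0,    # unnecessary braking
    ("wait",  "keep_speed"): 0.0,   # no interaction
}

# Hypothetical cost for the robot.
ROBOT_COST = {
    ("merge", "slow_down"): 0.0,
    ("merge", "keep_speed"): 10.0,
    ("wait",  "slow_down"): 3.0,
    ("wait",  "keep_speed"): 3.0,
}

def predict_human(robot_action):
    """Level-1 reasoning: assume the human best-responds to our action."""
    return min(HUMAN_ACTIONS, key=lambda h: HUMAN_COST[(robot_action, h)])

def choose_robot_action():
    """Pick the action whose predicted joint outcome costs the robot least."""
    return min(ROBOT_ACTIONS,
               key=lambda r: ROBOT_COST[(r, predict_human(r))])

print(choose_robot_action())  # the model predicts the human slows, so: merge
```

Because the lookup tables are tiny and the reasoning is a single nested minimization, a model like this runs easily on constrained hardware; the research challenge is scaling this style of nested prediction to continuous dynamics and many interacting agents.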

Do you think achieving this will ensure that robots will interact safely with humans and other robots in the wild?

I’m pretty confident that it is one of the missing pieces. While controls play a big role, this is a multifaceted research problem. Other aspects like the hardware and sensing capabilities go hand in hand with controls. So the more advanced those are, the better we can do with proper control design.

Let’s talk about your lab. What are some of your key research areas?

At a high level, we have two themes in the lab. One is what I call safe control. Let’s say I just have one robot, and I wanted to automate a task. What does it mean to have the robot reliably do that task? How can I make sure that the robot — if it’s in a never-before-seen environment — can succeed in that task in a very reliable way? Basically, at what point can I be assured that the robot is going to do the task on its own in a safe manner?

Another area we’re exploring is interactive autonomy. If we have a robot that can automate a task safely on its own, how is this robot going to function and operate if it is coexisting with other agents? There is this added layer of complexity when I want that robot to interact either with humans around it or with other robots around it. Maybe that robot can do a task on its own, but how can I essentially build on that knowledge so that a robot can team up with another robot or help a human trying to cook food at home?

How is your research applicable to aerospace?

The algorithms that we’ve developed around multi-agent decision-making can potentially be used to enable autonomous in-space assembly and manufacturing. For example, we envision someday being able to send a team of autonomous robots to deep space to repair, maintain and assemble objects or to build infrastructure. This could go a long way toward advancing space exploration.

In addition, the algorithms that we've developed to help robots safely navigate around humans also have applications in the aerospace domain. It turns out they are very useful in low Earth orbit for figuring out how a satellite should safely maneuver around space debris, a problem that, surprisingly, resembles some aspects of navigating around moving humans.

Have you had any social interactions with an autonomous system? How would you improve it?

I’ve taken Waymo before, and I thought my first ride was amazing. But I’ve also realized that people’s perceptions of autonomous systems can vary. For example, my mother tends to interpret the Waymo’s actions at an intersection as much riskier than my father does. So I’m looking into algorithms that can help robots understand a human’s perception and, possibly, tailor their actions to make the person feel more comfortable.

If we can get autonomous robots to work reliably and safely, what are some other applications where you think they’ll become commonplace?

Though I don’t think we are even close yet, I’m personally fascinated by the idea of enabling robots to be altruistic partners that benefit human beings. And one of the application domains that I always think about is having robots provide care for the elderly.

I hope to see a future where, in the same way that every person has a smartphone these days, every person has an assistive robot that could help them. I really want to see a day when I go back home after work and my robot has folded the laundry, loaded the dishwasher, unloaded the dishwasher. Wouldn’t that just be amazing? Wouldn’t that save so much time for everyone?

I also really want to see the day when there are a lot of autonomous cars on the road, and they are reducing traffic congestion and helping people with disabilities have better modes of transport.

For me, the question is: Can we reach the point where we have enough robots out there that are helping humans and collectively helping society, as opposed to having robots help just a few people make a ton of money?