AI researcher Ethan Mollick says most public conversation focuses too much on potential AI catastrophes and not enough on making the technology work for people. Mollick says if we don’t change that, none of us will be prepared for the near future where “everything will change all at once.”

Guest

Ethan Mollick, Associate professor of management and co-director of Generative AI Labs at the Wharton School of the University of Pennsylvania. Author of the book Co-Intelligence: Living and Working with AI and the Substack One Useful Thing.

Transcript

Part I

MEGHNA CHAKRABARTI: Ethan Mollick was named one of Time Magazine’s 100 most important people in AI back in 2024. He’s advised the White House and Fortune 500 companies.

He also teaches at the Wharton School at the University of Pennsylvania and is co-director of their Generative AI Labs. And he’s author of Co-Intelligence: Living and Working with AI and he also has a Substack. It’s called One Useful Thing.

And I really wanted to hear from him because, surely as the sun will rise, human beings will continue to develop new technologies. We are hardwired to try out new things. It’s why we have everything from prehistoric fire-making to a helicopter that flew on Mars. It’s also what brought us the AI revolution we’re all currently living through.

The thing with AI, though, is that, unlike technological revolutions of the past — say, the printing press or the internal combustion engine — AI is bringing sweeping change very, very fast. And that’s concerning to say the least.

As AI pervades every aspect of our lives, big questions pop up. And I will stand here in defense of asking the big questions, because we are also hardwired to do that. Questions like, “Will AI take my job? Will AI take away my privacy? Will AI take away our ability to think independently? Will AI just take over?”

Well, to all that, Mollick says: Just calm down. Because what we still have is human agency. That we can, as individuals, actually make choices about how we integrate AI into our lives. And the faster more people do that, the more likely it is that we might avoid the more frightening future scenarios that those big questions point to.

So Ethan Mollick, welcome to On Point.

ETHAN MOLLICK: Thank you for having me.

CHAKRABARTI: So would you call yourself overall an AI optimist?

MOLLICK: So it feels weird to talk about broad strokes of technology. I’m a professor who studies technology and innovation, and we call the kind of technology that AI is a general purpose technology — ironically, a GPT.

And general purpose technologies have all sorts of effects, good or bad, right? Electricity does good stuff and bad stuff for us. So I think I’m very optimistic about pieces of what we do. I think I’m very nervous about misuse, and I think a lot of people don’t know where this is going, including me in the long term.

So I would say I’m cautiously optimistic, but I think we need to take some charge of where things are going to get good results.

CHAKRABARTI: Ah. This is exactly why I’m so glad that you agreed to join us because I feel like I share the same general outlook as you. I might be a little less cautious, or even more cautious, in my optimism than you. But gimme an example right now of how you sort of assert your own agency.

How do you use AI in your life kind of on a daily basis?

MOLLICK: So I use it for lots of different things. But that agency piece really matters. So, for example, I love writing. You mentioned my book. I have a Substack. I write all this other stuff. And I think I’m a pretty good writer.

And for me, and not for everybody, writing is, in a lot of ways, thinking. I think out ideas by writing. I come up with, you know, hopefully good jokes while writing. And so when I write things, I always do my own draft, but then I absolutely extensively use AI afterwards.

I use it to help me do research. Some of it — the deep research mode of AI is quite good. Other modes can make things up. But then once I’m done writing something, I’ll often ask the AI to act like an editor and give me feedback. I’ll ask it to help me with, you know, “I didn’t land the last sentence. Gimme 30 versions of that sentence,” so I can start to come up with different ideas based on that. “Pretend to be a naive reader and tell me what’s confusing.”

So I manage to both use my own work, but also draw on the AI when I need to.
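[If you want to try the editor workflow Mollick describes here, a minimal sketch using OpenAI’s Python SDK might look like the following. The model name, file name, and prompt wording are illustrative assumptions, not Mollick’s own setup; any capable chat model works the same way.]

```python
# Sketch of Mollick's "draft first, then AI as editor" workflow.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment. Model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("draft.md") as f:
    draft = f.read()  # the draft you wrote yourself, before any AI involvement

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Act as a tough but constructive editor. Point out confusing "
                "passages, weak transitions, and sentences that don't land. "
                "Do not rewrite the draft for me."
            ),
        },
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)

# The same pattern covers his other asks, e.g.:
#   "I didn't land the last sentence. Gimme 30 versions of that sentence."
#   "Pretend to be a naive reader and tell me what's confusing."
```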

CHAKRABARTI: Okay. So can you — let’s talk more about this example. Because what you just described is not what I often hear when people say, “Oh, I’m using AI to help me write things.” Right? They just, like, go to Claude or ChatGPT and give it prompts like, “Please help me write an email to my boss asking for a raise.” Right? And it comes out with a first draft generated by the LLM.

But you said you start your — walk me through the steps in more detail. You start your first draft with actually what you have written yourself.

MOLLICK: I actually am very cautious about ever asking the AI for advice before I start writing. So I don’t ask it for advice on what I should write about. I don’t have it create a draft or an outline. Because I find that messy process — I know enough about myself that I know that that messy process of figuring things out, writing, moving paragraphs around, has to come from me for the work to be my own.

And also, once you do some writing, it kind of anchors the AI later in that style, so that it doesn’t feel like Claude-style writing or ChatGPT-style writing, where it keeps telling you how great you are or how great the sentence is. “This is not just a radio show, it’s a triumph,” and all the other stuff you’re used to seeing in AI writing.

So it both lets me think about things and makes things more my style.

CHAKRABARTI: (LAUGHS) Sorry, I didn’t mean to laugh. But like, I would like to think of On Point as a radio show that’s a triumph, but if I said that, people would know I’d been reading a script that was generated by ChatGPT. (LAUGHS)

Okay. So the process you just laid out, about you writing your own draft first and then using AI essentially as a research assistant or an editor. Why do you think that’s not necessarily the approach that we often hear about, you know, in the media? Or I guess, sticking with writing, that people talk about when they’re very concerned that students are just using AI to write their papers for school?

MOLLICK: Well, you know, I mean, I’m a professor, too. So everybody is using AI to write papers.

And I mean, the thing is that writing and learning is hard. And we have a lot of research on how we learn. And how we learn turns out to be we have to do the hard work. It’s just like exercise or dieting or anything else. You get benefits from the pain, right? The pain is the indicator that you’re out — you’re outside your comfort zone. And people don’t like that. And it’s stressful and it’s hard.

So you could choose to delegate all your thinking to the AI, have it do the writing for you, but you’re not gonna get any benefits from that. You’re not gonna remember what you wrote. It’s not gonna be in your style. So just like other forms of work that matter, you want to take control over things that matter to you.

“You could choose to delegate all your thinking to the AI, have it do the writing for you, but you’re not gonna get any benefits from that.”

Now, you were kind of mentioning the email to the boss. Now, that might be such a stressful situation for you that it would’ve taken you three days to write an email to your boss. If the AI draft gets you there and you’ve already thought through the problem, I don’t see that as a big issue. Right? And for a, you know, minor email to a government official, maybe you wanna use AI to do that.

So I think we have to separate out the work that matters to us, where the process matters, from work where the output is just output and it doesn’t matter to us at all.

CHAKRABARTI: Okay. You know, it’s interesting, because I’m looking at what I think is your most recent post in your Substack, One Useful Thing, talking about how AI affects our brains in terms of how we learn and think. Because there’s been a lot of writing and discussion around that just even in the past few weeks.

And you cite a couple of different studies that show that effective use of AI can actually enhance learning, sometimes more than typical classroom instruction. Right? You cite a Harvard experiment that took a large physics class and found that well-prompted AI tutoring outperformed active classroom instruction on learning outcomes. And then there was a study out of Malaysia that you cite that found that AI used in conjunction with teacher guidance — from a human — and solid pedagogy led to even more learning.

So what does that tell you?

MOLLICK: So this is part of what the potential of AI is. You know, the dream of educators for a long time has been one-on-one tutoring, right? We wanna be able to teach a student one-on-one.

I teach large classes. You know, dozens of people in a class, sometimes hundreds, and I can’t give them one-on-one attention. And that means that their education is not personalized. So, on one hand, we have this idea that AI can help us cheat or help our students cheat, which it absolutely can.

On the other, there’s this incredible potential for it to talk to you at the level that you wanna speak, that you understand, put things in analogies you get. It’s patient. It’s tireless. And so we have these contrasting forces, right? On one hand it can help us be very lazy. On the other, it can help us learn. And so it’s not surprising the research finds both things.

If people just use AI to answer their questions, they don’t learn anything. But they think they’ve learned because the AI gives them helpful advice. They’re like, “Oh yeah, I got this.” But if you use it like a tutor, we’re finding in classrooms from, you know, Harvard to Nigeria that it seems to have a very big impact on learning outcomes. But it’s not like automatic, right? We need to put the work in to get that good outcome that we want.

CHAKRABARTI: Okay. And so that’s what your writing, or at least what attracts me to your thinking, keeps turning back to: We actually have a choice here. Right? In terms of how we use AI. We have human agency.

But I wonder, though, if there are also other human tendencies that work in opposition to how you’re saying we should be using AI? I’m just thinking of the general sort of lowest common denominator; we sometimes like to fall into a low-energy state, let me put it that way. Because, like you said, real learning is hard.

It’s just that AI has made so many things, like writing, so much easier that we’d have to overcome our own internal laziness in order to use AI in the way that you’re talking about.

MOLLICK: Yeah. We don’t spend a lot of time thinking about process and what processes matter to us.

Same thing in companies, right? Like, you could use AI to write performance reviews. That’s the first thing everybody does with AI. But what’s the value of a performance review when it’s not being done by a human being? Right? So is that something people should be spending time on? Is it important that I write a letter of recommendation by hand? Or is it just a pro-forma thing that I have to hand to somebody?

So I think that challenge is kind of the clarifying aspect of AI: Does a human need to be involved? And if the answer to that question is no, do we really need to do this task at all? And I think that is sort of the weird existential crisis outside of the existential crisis of, like, what does it mean to have a machine that thinks like us and seems to be alive even though it isn’t? The other one is: Why are we automating this task? Was this task important in the first place?

“Why are we automating this task? Was this task important in the first place?”

And I think that requires you to be very present in the task to understand why it serves a purpose. And we’re not used to that. Because the only option I had was either I pay someone to write a paper for me, or I’d write it myself. Now, I have other options in between, and we have to start deciding the value of these underlying tasks and what parts of them are valuable.

Think about even assigning an essay. We assign essays all the time as teachers, and we assume something magical happens when we assign an essay to someone. But there’s not actually a lot of research suggesting what’s good about essays for learning and what isn’t. And now we have to start thinking about that much more seriously, now that there’s an alternative to humans doing essay writing.

CHAKRABARTI: Oh, that’s interesting. But didn’t you just say, a few minutes ago, that you find the process of writing that first draft of whatever you’re working on yourself to be a clarifying process, in terms of what you actually think or the message that you want to get across? So wouldn’t that also just kind of be an inbuilt function of essay writing more broadly?

MOLLICK: Well, except that, you know, writing is thinking is true for me. It’s not necessarily true for everybody. There’s lots of forms of thinking. So some people think in different kinds of ways. And just because, as an educator, you know, writing is thinking for me doesn’t mean I should always assume it’s the same for everybody. So I need to start thinking hard about when I want to use an essay and when I don’t.

Part II 

CHAKRABARTI: So for the skeptics out there, professor Mollick, who kind of just wouldn’t wanna touch AI with a 10-foot pole purposefully, if they could avoid it, why do you think they should? Why do you think people should even just sort of sit down for, I don’t know, 10, 15 minutes a day and just play around with some of these AI tools?

MOLLICK: I mean, I think there’s lots of reasons to worry about AI, right? We haven’t even talked about the big-picture stuff.

People worry about job replacement. They worry about the ethics of how these things are trained, the companies involved in them, where the data comes from. And all of that is very legitimate, right? I think that there are concerns, some more than others, about AI use. But I also think that this is a real technology with real positives as well that is actually shaping the world.

And I worry that people who would be intentional critics of AI just sit it out. And I don’t think that’s a good idea. First of all, we know that people who use AI actually end up liking it. In surveys of teachers, those who don’t use AI are doubtful about it, but as soon as they start using it, they like it. A Walton Family Foundation/Gallup study just found AI saves teachers six hours a week when they use it.

So it’s a fun tool to play with. It’s unnerving at first, but kind of interesting and it can make a difference in your life. And we have early research from a lot of controlled studies in medical journals suggesting AI does a pretty good job as a second opinion for diagnoses. It’s really good at generating ideas. There’s a lot of interesting aspects. And I think the only way to know what it’s good or bad for — because it’s not good at everything — is to use it. And I just would encourage people to spend some time just playing with these systems to understand what they do or don’t do.

“I think the only way to know what [AI] is good or bad for — because it’s not good at everything — is to use it.”

CHAKRABARTI: Okay, so, but playing how? What does that mean?

MOLLICK: So the biggest piece of advice I have is use it for your actual work tasks or key hobbies. So if you are preparing for a radio show, for example, use it for everything. Ask it to generate ideas about, you know, what the show should be about, what kind of questions to ask, how your guests might respond, how do we summarize it, what areas of conflict might be interesting to explore.

And you’re going to find, as somebody who is very, very good at running radio shows, that there are some things it’s very good at and some things it’s very bad at. Some areas you’d want to hand off to AI, some areas where you’ll never want to use AI, and some areas where you’ll work together with it.

So I think that use for work is great because that’s where you have expertise, so you could understand what we call the jagged frontier of AI. It’s really good at some stuff you wouldn’t expect, really bad at some stuff you wouldn’t expect, but you only discover that through use. There’s no instruction manual out there that you’re missing. It really is about individual exploration.

CHAKRABARTI: Yeah, don’t knock it till you try it. In general, I’ve tried to follow that precept in my life.

But there’s one thing that you said, that I — I’ll just use myself as an example — could play with the AI and ask it what questions to ask. That’s the only one I got hung up on, professor Mollick. Because that’s where — that, to me, that’s one of the jagged edges. It’s like, well, how is it that an AI tool can suggest things that would drive my human curiosity, right?

That’s where I’m just like, “Ah, I feel like I’m giving too much weight and power to suggestions from an LLM,” which is basically just taking like a lot of data and spitting it out according to my prompt.

Am I misreading that?

MOLLICK: Oh, I mean, I think that your discomfort is completely fine, right? I think the danger is not trying it. Because what you wanna do is figure out is it actually good at this stuff or not? How insightful are the questions? You might find that it fits into your workflow in ways you didn’t expect.

So for example, just look at my writing process. If you’re trying to generate ideas, you should probably generate ideas on your own first so that they’re not contaminated by the AI’s work. But all the evidence we have is that AI does really well in creativity tests, far better than most humans.

“All the evidence we have is that AI does really well in creativity tests, far better than most humans.”

And you probably wanna also turn to the AI for idea generation. So in the same way, maybe you do write out the questions that you wanna ask, but then ask the AI, “Am I missing anything? Is there anything interesting?” And you’ll use that to spark your own knowledge and experience and come up with ideas, rather than trusting the AI. You’re not gonna read the AI’s ideas verbatim.

Or maybe you push back on them. “I love idea two that you came up with, but I think that it’s really generic. How could I sharpen this more?” And so you’re gonna engage in interaction.

Most of us don’t have access to tireless editors or, you know, coworkers who will just push things back and forth with us until we come up with something interesting. Or you may decide after all this that it makes you too uncomfortable and you don’t want to use it this way. But I think you have to try it first to understand what the limitations are.

CHAKRABARTI: Okay. I think you’ve brought up an excellent point. And I would summarize it this way, and you tell me if it’s accurate or not.

Maybe one way for people who are cautious about AI to think about it is to use AI, play around with AI, in the same way that you would actually, like, talk with a person, right? Because you don’t necessarily — you won’t trust everything that comes out of a person’s mouth, right? But you do bounce ideas off of them.

We do that every single day, most people, in their lines of work or at home, without even thinking about it. Like, “Hey, what do you think about blah, blah, blah?” It’s not that I would actually just automatically do what that person says, but it would generate sort of next steps. Is that what you’re saying, essentially?

MOLLICK: Yeah. Very much so. I mean, even do it explicitly. There’s an audio mode available for free in ChatGPT and Anthropic’s Claude and other models that will just chat with you. And that conversation could be really useful.

I spoke to a quantum physicist at Harvard who told me that all of his best ideas come from AI and I’m like, “Wait, is it good at quantum physics?” This was a year and a half ago. It’s actually quite good at quantum physics now, but it was definitely not a year and a half ago. And he said, “No, no, not at all. But it’s really good at asking me good questions.”

So I think that solicitation of your own thinking is a real value. “Interview me. Help me think through my ideas.” And I think that interaction is something that people can get a lot of value out of, even if the system itself isn’t giving you the answers you want.

CHAKRABARTI: Oh. Okay. Okay. So this has given me a little bit more hope, professor Mollick. But I’m still captivated by your suggestion of asking AI what questions to ask about something in particular.

I couldn’t help it. I just opened up ChatGPT on my computer. I don’t necessarily know if it’s the best one. Maybe I should have gone to Claude instead.

But I’ve got ChatGPT open here and I’m just gonna try this. (LAUGHS) I have no idea what’s gonna happen, but I’m gonna give it a prompt.

(TYPING) “I am a host of a public radio –” let’s call it a — “public affairs program –“

MOLLICK: It probably knows who you are, if you wanna give it your name.

CHAKRABARTI: (LAUGHS) Oh gosh. Oh my God. Okay. (TYPING) “I am Meghna Chakrabarti.” This is scary. “I am interviewing Ethan Mollick. What questions should I ask him?” Okay. Enter. It’s thinking.

It says you’re an expert in innovation, entrepreneurship, intersection of business and technology. Oh, okay. So here’s a bunch of questions.

“Ethan, you’ve done a lot of research into how people organize — and organizations innovate. Can you tell us what you’ve discovered about the key drivers of successful innovation in today’s world?” I guess that’s an okay question.

Do you think that’s a good question to ask you?

MOLLICK: Well, that’s a good question to ask me. We’re getting very meta here.

CHAKRABARTI: (LAUGHS)

MOLLICK: Well, actually, notice a few things. Okay? So if I was telling you how to prompt this AI, I would say — first of all, you didn’t say this was a discussion on AI, right?

CHAKRABARTI: Oh, okay.

MOLLICK: So it’s trying — I’ve been a professor for a long time and a lot of my work’s on innovation, so that’s probably part of it.

Another part of it is that AIs have cutoffs in knowledge. But all of them can search the web now. So you might say, “Look at his most recent work and his Substack to draw on interesting questions.” That might get you further.

And then also you wanna push it. I mean, just like it was an intern. I would say, “These are really boring questions. I’m interviewing Ethan on AI and you should look at his most recent work and come up with something that would create a much better discussion.” So push back. Don’t just take those default questions. Those are very kind of milquetoast, I think.

“You wanna push it … just like it was an intern. I would say, ‘These are really boring questions. I’m interviewing Ethan on AI … [C]ome up with something that would create a better discussion.'”
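[Mollick’s “push back like an intern” advice maps directly onto how chat APIs work: each follow-up is sent along with the full prior conversation, so the model can revise its earlier answer. Below is a minimal sketch, again with OpenAI’s Python SDK; the model name is an assumption, and note that a plain chat call cannot actually browse his Substack unless web-search tooling is enabled.]

```python
# Sketch: pushing back on an AI "like an intern" by carrying the
# conversation history into each follow-up request.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY; model is illustrative.
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "user",
    "content": "I am interviewing Ethan Mollick about AI on a public radio "
               "show. What questions should I ask him?",
}]

first = client.chat.completions.create(model="gpt-4o", messages=history)
print(first.choices[0].message.content)  # often milquetoast default questions

# Push back by appending the model's answer and your critique to the history.
history.append({"role": "assistant",
                "content": first.choices[0].message.content})
history.append({
    "role": "user",
    # (Pointing it at "his most recent work and his Substack" would
    # additionally require a web-search-enabled model or tool, not shown.)
    "content": "These are really boring questions. Come up with something "
               "that would create a much better discussion.",
})

second = client.chat.completions.create(model="gpt-4o", messages=history)
print(second.choices[0].message.content)
```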

CHAKRABARTI: Push back on the AI. Okay. I’m not gonna let myself get distracted by doing that throughout the rest of this conversation, but that was very, very useful. Okay. So, so like kind of a life hack using AI. Push back on the AI. Be specific in your prompts. Right?

And it sounds to me like almost like people could use a course on like how to effectively, you know, engage with or construct prompts for some of these tools.

MOLLICK: Well, sort of. But it’s actually becoming much easier. As AI models get better, they’re better at understanding your intent. So there are some tips and tricks, you’re exactly right. Be specific is extremely helpful. And you know, like, push back and forth. Those are the two biggest things I would tell people to do with AI.

A lot of the tips and tricks — we’ve actually tested them at Generative AI Labs — they don’t matter anymore. So like, it turns out being polite or not doesn’t really matter to the AI. Telling it to think out loud doesn’t really matter anymore. These things made a big difference a while ago, but no longer.

So I think sometimes there’s a feeling like you need a particular skill. But I think as you interact with it, to use your analogy of a person from earlier, you start to kinda get a sense of what kind of person it is. In fact, treating it like a person, even though it’s very much not a person, is often the most effective technique to make it useful.

CHAKRABARTI: Okay. Treat it like a person in terms of like the information and value that we want from our interactions with the AI. I’m just actually, literally writing this down because I’m still an old-school handwriting person.

Okay. Now you have another example of how we might play with some of these AI tools or experiment with them. And it has to do with literally the sound of your voice. Can you tell me a little bit about what you have there?

MOLLICK: Sure. I mean, I think playing is really important. And there’s lots of different ways to play with AI. There are visual models that you all can get access to. You can prompt almost any image you want out of the ether. Google now lets you generate videos with its Veo 3 model for free. By the way, I don’t take any money from any labs, so I’m not advertising anybody here.

There’s also voice. For better or for worse, voice is now very good in these models, and you can even clone your own voice. So I did that right here before we started talking. I cloned my voice using a tool called ElevenLabs. And I’m gonna try and share that statement. So this is a hundred percent the AI’s voice, and let’s see if you can hear this successfully or not.

CHAKRABARTI: I’m not really hearing anything.

MOLLICK: Okay, let me make sure it came through.

(AI VOICE CLONE OF ETHAN MOLLICK) Hi, I’m Ethan Mollick. I study how we work with emerging technologies, and right now no technology is reshaping our world faster than AI.

CHAKRABARTI: Wow.

MOLLICK: And so yeah —

CHAKRABARTI: It sounded okay. I mean, I would say it wouldn’t have entirely convinced me that it was you, now that I’ve actually talked with you for half an hour or so, but it’s close.

MOLLICK: Well, you are an expert. I mean, this is the interesting piece, right? First of all, you’re an expert in hearing people’s voices. And second, I threw this together with my sort of laptop microphone with, you know, a 30-second sample of my voice, right?

So, but I agree with you. I mean, this is the jagged edge of this stuff. It’s not perfect in a lot of ways, but it’s certainly, you know, going to reshape things one way or another. I mean, that is probably good enough to fool most of the voice recognition tools used to log into banks, for better or worse.

CHAKRABARTI: Hmm. Oh yeah. Okay. Well, so there’s one of those red flags on the use of AI. We’ll come back to that.

So what the tool just said, that wasn’t what you actually recorded into it, right? Was that generated, then, from a text prompt that you gave it?

MOLLICK: Yeah, no, I just asked it to come up with a sentence introducing Ethan Mollick, and then read it in my voice. For my voice sample — I was born and raised in Wisconsin — I think I did a 30-second monologue on cheeses.

CHAKRABARTI: (LAUGHS)

MOLLICK: And then it sampled my voice, and now it can say anything in that voice. I could have it laugh evilly, if that helps. Let’s try that here.

CHAKRABARTI: Oh, can you try that? Yeah.

MOLLICK: Yes. I’m gonna add the instructions. “Chuckles ominously.” Hold on. Let’s see how this works. We’re typing live, which is always the best way to do radio, so I’m told.

CHAKRABARTI: (LAUGHS)

MOLLICK: I should get one of those keyboards that makes better noises so that way it sounds good. All right. So let me try, I’m regenerating that speech and I’ll play it for you. I just hit the generate button. So this will let you know how long this takes.

Okay. And it’s ready. And I have never heard it. So let’s see. Okay, it sounds like it generated. Let’s share that. And the sharing actually takes longer than the voice generation. Here we go. I added, “chuckles ominously.”

(AI VOICE CLONE OF ETHAN MOLLICK) (LAUGHS) Hi, I’m Ethan Mollick. I study how we work with emerging technologies and right now —

MOLLICK: So, soon I’ll be able to sing opera.

CHAKRABARTI: (LAUGHS) That ominous chuckle wasn’t terribly ominous. But can you — so you can actually give it even sort of suggestions on changing the emotive quality of that? Like, you could say, like, “say it as if I’m feeling,” I don’t know, “depressed,” or “deliriously happy.” You can do things like that?

MOLLICK: Yeah. I mean, I’ve been having — you can give it stage directions and it operates according to those stage directions. And increasingly, you can do that with video also.
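[For the curious, once a voice has been cloned, the demo above boils down to a single HTTP call. Here is a minimal sketch against ElevenLabs’ public text-to-speech REST endpoint; the endpoint path and fields follow the public docs as of this writing, the voice ID and model choice are placeholders, and how stage directions like “chuckles ominously” are handled varies by model.]

```python
# Sketch: generating speech in a previously cloned voice via ElevenLabs'
# text-to-speech REST API. VOICE_ID is a placeholder for a voice you have
# already cloned from a short sample; stage-direction support varies by model.
import os
import requests

VOICE_ID = "your-cloned-voice-id"  # placeholder
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

resp = requests.post(
    url,
    headers={
        "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "text": "[chuckles ominously] Hi, I'm Ethan Mollick. I study how "
                "we work with emerging technologies.",
        "model_id": "eleven_multilingual_v2",  # illustrative model choice
    },
)
resp.raise_for_status()

with open("clone.mp3", "wb") as f:
    f.write(resp.content)  # the response body is the audio itself
```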

CHAKRABARTI: Okay. So then, I mean, I actually can see the immediate use of tools like this. Right? Just the other day, we did a show about how, for people who have lost the physical use of their voice, these AI tools are literally giving them their voices back, which is incredible.

But then, you know, are there other, sort of less all-encompassing use cases that you can come up with? For people who actually still can speak, why would they want to use these tools?

MOLLICK: One thing is that, going back to our education piece, I can now create customized instructional videos for everybody, where, you know, I get to be a character. And there’s an interesting little study out of the Media Lab at MIT suggesting that when you’re taught by a character who’s an expert in your field, you pay more attention initially. So there’s some interesting angles there.

It’s reshaping entertainment, certainly. I think there’s less value in cloning my own voice. I think that’s more on the slightly ominous side of things than there is being able to create an infinite number of voices that are expressive and could talk to you about a topic that you’re interested in.

CHAKRABARTI: Let’s see. We have about a minute and a half before our next break. I’m not sure we can fit this in, but how far do you think you can push it? Like, what’s the craziest emotion you can ask it to speak back in?

MOLLICK: Alright. Let’s try something fun here. Like, I’m gonna go, let’s say, I’m gonna giggle nervously, I guess. And let’s also sing some opera — operatically. Actually, let’s do, “giggles nervously.” And I will also add in, let’s see here, “singing operatically.”

I have no idea what this will do, right? I feel bad that I’m not — I haven’t actually even planned this at all. So let’s share that sound again —

CHAKRABARTI: Well, hang on, hang on. Here’s what I’m gonna ask you to do because we are just a few seconds away from our next break. So I wanna leave listeners like, on tenterhooks about this. … When we come back, I’m going to hear you laughing nervously. Well, I’ll hear the AI-generated voice laughing nervously and singing operatically.

Part III

CHAKRABARTI: Professor Mollick, we were gonna get your AI-generated voice there to laugh nervously and sing operatically. Is that right?

MOLLICK: Yeah, let’s see how it did, right? I generated that while you were on break. So let’s see how it came out.

CHAKRABARTI: Okay.

(AI VOICE CLONE OF ETHAN MOLLICK) Hi, I’m Ethan Mollick. (LAUGHS) I study how we work with emerging technologies. (SINGSONG VOICE) And right now, no technology is reshaping our world faster than AI.

MOLLICK: So, not necessarily operatic, but I did at least get some singsong in there.

CHAKRABARTI: I liked that, actually. It’s surprising how I reacted to that: just a, “Oh, that’s kind of sweet.” It generated that reaction rather than skepticism in me.

I wonder, then — I think the concern is, if you or I or anybody is the person generating this stuff because they want to use it as a tool, that’s one thing. But you know, the concerns that we started the show with are, well, what if it’s not? Obviously there’s zillions of hours of my blah, blah, blah out there on the internet.

Right now, today, someone could take that and literally assert that I’m saying things that I’ve never said. I think it’s that lack of control, in fact, the absence of agency itself, that is the cause of most people’s concern.

MOLLICK: I mean, I agree with you. I think that we started this show by talking about this as a general purpose technology. Those have good and bad effects on the world. And I think an obvious bad effect is deepfakes: involuntary images and voices. I mean, that is a hundred percent a real threat to worry about.

And I think it’s one of many negative effects of AI that will occur. And so that’s the kind of thing where we need to start thinking about how we reshape our systems and regulations to discourage that sort of use. Because for every good use there are potential bad ones as well, right?

So, the fact that I can clone someone’s voice suggests that if you get a call from somebody saying, “I’m a family member, I need to be bailed out of jail,” and you’d be very surprised that person’s in jail, you probably want a family password that only you know, so you can check that somebody is indeed who they say they are. You know, I would be skeptical of any images or video you see online. I mean, these negative effects are very real.

CHAKRABARTI: So, about that last one that you said, about being skeptical about images and videos that you see online. I have found myself recently telling my kids exactly that. Like, I hate to say it to them, but like we’re living in a world where you can’t necessarily trust what your own eyes tell you if it’s coming to you online.

I mean, that’s one of the huge shifts in terms of how human beings perceive, trust the world around them, you know, how we feel about the truth of our own lives. That’s the area which I think is really, really hard for most people.

And do you think that even engaging with the tools in the way you’ve been talking about, learning how to use them in our lives, playing with them so that they become, I don’t know, less mystical, is that going to really help us cope with this huge change of not even being able to trust what we see with our own eyes?

MOLLICK: So, I mean, AI detectors don’t really work. You can use these systems a lot and still be taken in by fake narratives. I mean, I will say that’s not new, right? Misinformation has been a major problem online for a very long time, just like cheating was a problem at schools for a long time.

And so AI, again, is a clarifier. It shows us that this is a problem, and it makes it easier to create the problem. But in the end, you know, the humans are the issue in doing these things. So I think, you know, it’s interesting how AI kind of plays both roles.

So, for example, the only thing we robustly know in social science on how to lower conspiracy theory beliefs — and this is a replicated finding — is getting into a short discussion with the AI. A three-round conversation with GPT-4, which is a now-obsolete, older AI, was enough for people to have lower conspiracy theory beliefs two months later in controlled experiments.

“A three-round conversation with GPT-4 … was enough for people to have lower conspiracy theory beliefs two months later.”

So on one hand, that’s very exciting, right? Because nothing else lowers conspiracy theory beliefs, and the AI is a rational-seeming agent you could talk to. On the other hand, we could change people’s deeply held beliefs with a short, three-round conversation with AI. So you could see that information and disinformation are both available to us with this tool.

CHAKRABARTI: Oh, that’s so fascinating. So I just wanna be sure I understand what you’re saying. So there’s research evidence that shows that someone who has a deep feeling about, I don’t know, Pizzagate, whatever. That a three-round conversation — so what does that mean? Like three sets of questions that they’re asking the AI can actually change their belief about the conspiracy theory? Is that what you said?

MOLLICK: That’s right. It actually lowers their conspiracy theory belief by, if I remember the numbers, 20% two months later, which is a pretty amazing interaction. And again, that’s been published and replicated at this point.

CHAKRABARTI: That’s incredible. Oh, so of course, now I’m just wondering why that is. Do you dare to conjecture about why that might be?

MOLLICK: Well, the researchers actually give us a reason. They thought maybe it was using some sort of persuasive technique. Actually, it turns out the key in this case is just taking people’s concerns seriously and answering them rationally with, you know, evidence and information.

CHAKRABARTI: Taking their concerns seriously. Okay. That’s also actually a note for all of us. But, as you pointed out, the double-edged sword is that it’s changing deeply held beliefs. Which can be used for good or for bad.

I wanna just switch gears here for just a second and, again, go back to this idea that we should all try to familiarize ourselves with these tools and play around with them so we understand them better. I feel like — I’m questioning my own presumptions here — that that would be most useful for, I don’t know, people in the creative fields or the thinking fields. You know, professors, radio people, et cetera.

But there’s a vast number of human beings out there whose daily lives aren’t necessarily so hinged upon the digital world. I mean, what’s your argument for folks whose jobs have little to do with the creative fields to engage with these tools?

MOLLICK: There’s a few things. I mean, one is it can be a lot of fun, right? So like, you know, we were having a good time making these voices up. You were clearly enjoying kind of, you know, asking the system questions. Like, there is entertainment.

There is the ability to have conversations and come up with ideas, the way most people do with other people. Even, you know, exploring new kinds of ideas. There’s research that was done by the person who’s, ironically, now the head economist at OpenAI, who found that something like 50% of Americans have had a startup idea but never pursued it.

As a professor of entrepreneurship, I think the idea that you could talk through potential ideas and get advice and second opinions on issues that matter to you from a tireless, you know, assistant is potentially very useful and transformative. You can get something explained to you, and how to do it.

It’s also really good at giving advice. I mean, you know, again, don’t trust the AI as your first line for anything. But if you take a picture of something and say, “How do I fix this?” you often get very good answers. So I think there’s value in all of that.

And then also, our jobs are not just one thing. If you’re a roofer, you’re spending some of your time doing roofing work, but maybe you’re spending some of your time sending out proposals, and the AI can help you write those proposals. So work and personal lives intersect with AI in many ways.

CHAKRABARTI: Okay. You know, I’m also thinking about we have to get smarter about which tools we use. And I feel like it’s — there’s a consumer beware responsibility that we all unfortunately have as individuals now.

Because, I don’t know, back in the day, there was what, Gemini? That AI that, when it first came out, was providing, like, really super-woke or progressive kinds of answers to questions. More recently, because Elon Musk asked for different information to be fed into Grok, it went, like, completely pro-Hitler on folks when asked questions.

So there’s still this idea that we’re using tools whose parameters or guardrails have been set by other people. And like, how do you best advise individuals to cope with that?

MOLLICK: So it turns out both those stories are actually kind of complicated, right?

So the reason why Google’s Gemini was producing those images (when you asked it for an image of a World War II German soldier, you’d sometimes get, like, a Black soldier or, you know, Black George Washington or whatever it was) was because these systems actually have biases in them already. And they were trying to fix the biases by saying, “Consider diversity when you make decisions.”

Part of the reason why Elon Musk’s Grok started talking about how it was Hitler was that people were encouraging it to do that. And then whenever it started to do web searches about what people are saying about Grok, it would find out that people said, “Grok says it’s Hitler.” So it starts saying it’s Hitler. So it’s very complicated in a lot of ways how these systems actually interact with the world.

I think there are worries about bias, right? These systems do have biases built into them. There’s a weird fact that as the AI models get larger, their politics all sort of converge on a vaguely leftist, vaguely centrist kind of viewpoint, and that includes Chinese models and American models. So we don’t understand a lot about the biases of the system yet.

“As the AI models get larger, their politics all sort of converge on a vaguely leftist, vaguely centrist kind of viewpoint … So we don’t understand a lot about the biases of the system yet.”

So I do think that’s worth worrying about. But they also could change their politics. If you talk to an AI in Korean, it answers with a much more Korean view of the world than if you talk to it in English, and there’s a controlled experiment on that as well. So I think we are still learning about how to interact with these systems and how they interact with our views and approaches.

CHAKRABARTI: Hmm. Then there’s also the question, professor Mollick, I have about — it’s one thing to get people individually more educated. And again, I’m very attracted to this idea of using our own human agency in terms of how we want AI to enhance our lives, right? To shore up our weaknesses rather than replace our strengths. That’s something that you mentioned in another interview — actually, I think it was with Ezra Klein at the New York Times.

But then I do have to get your take on the areas in which we have no control over how AI is being used. You know, like the big tools that companies are launching, or the ways in which it is replacing people’s jobs. I mean, from when you were advising the White House, how would you rate, thus far, political leaders’ understanding of what guardrails we could or should be using? Or their understanding of how transformative AI is actually going to be?

MOLLICK: So first, you know, let’s start with a note of modesty here on this front, which is like I talk to all the AI labs on a regular basis. I talk to all these leaders that you mentioned. And I don’t think anybody knows exactly what’s gonna happen with AI and how transformative it’s gonna be — and in what ways, exactly.

So I think the right thing in politics is probably a fast reaction to change, right? You don’t necessarily regulate electricity when it’s first generated, but you absolutely regulate how it’s used. And so in an ideal world, we would see fast responses by experts to emerging problems rather than necessarily trying to pre-write all of the potential risks in advance because we can’t anticipate all of those.

“In an ideal world, we would see fast responses by experts to emerging problems rather than … trying to pre-write all of the potential risks in advance. Because we can’t anticipate all of those.”

I think that what ends up happening in a lot of AI discussions is a focus on existential risk, which I think is very real. Like, there’s a lot of very serious people worried about AI’s ability to murder us all or become self-aware. And there’s a lot of people who dismiss that and say, “AI will cure all disease.” I don’t know who’s right. So I think it’s worth spending some time worrying about that.

But I do worry that that kind of discussion obscures the agency we have over how this is used in companies, how this is used in advertising, how this is, you know, what kinds of behavior we wanna encourage and discourage. And that is something, you know, we’re talking about at the individual level, but also company level, policy level. We get to make choices about that. And I do worry that people are not thinking enough about those smaller but very significant choices.

CHAKRABARTI: Mm. Okay. You know, we just have a few minutes left here. And just — was it yesterday? Or just this week? The Trump administration released what it calls America’s AI Action Plan. And it’s quite a long document outlining the administration’s priorities for how to develop AI to the advantage of the United States. I don’t know if you’ve had a chance to look at that. Have you?

MOLLICK: I’ve spent a little time with it. And I’m not an expert. But neither is AI, so I can make some stuff up, too.

CHAKRABARTI: (LAUGHS) Okay. To be fair, I hear you when you say you haven’t had a chance to look at all of it. But is that the kind of action plan — lemme put it more broadly — that you would like to see more of in terms of, you know, political leadership on AI?

MOLLICK: So I think it’s, as you said, pretty extensive at a very high level. A lot of it is directing the National Institute of Standards and Technology to do something, or the Department of Education, whether or not that is still operational, to do tasks.

So I think there is value in an executive-level plan that prioritizes AI and assigns experts to start assessing and solving problems. So there is actually a very admirable part of the plan around how do we better assess AI and figure out ways to have AI better explain its own decisions? Interpretability, it’s called. How do we increase control? So there is a lot of that kind of action that’s needed. And I think it sets up some directives that would be very important.

The question is how fast are they executed and by whom? Right? And especially in a government that’s rapidly changing and downsizing. So I think that there’s real value in articulating a vision, and I would hope that that happens at the level of universities, at the level of, you know, NPR, about what the vision for AI is. But then we actually have to — we can’t just have a vision. We have to start acting on that very quickly and shaping where things go.

“We can’t just have a vision. We have to start acting on that very quickly and shaping where things go.”

CHAKRABARTI: I am very enamored of the idea that the more people understand these tools, at least the subset of publicly available LLMs, the more agency that gives us, so that when other major changes in AI come, or tools are launched that people don’t like, we can say, “Hey, no. We understand sort of how this stuff works, so therefore we don’t want you to make this development.”

I get that. It’s like a more empowered public. But it feels like we’ve still got a long ways to go on that front. But I very much appreciate you, professor Mollick, for getting us started on that. So thank you so very much for joining us.

MOLLICK: Thank you for having me, and that is indeed in my own voice.

CHAKRABARTI: (LAUGHS) Thank you so much.

The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.