Your Next CEO (or Manager) Might Be an AI
Or, if not an AI, your actual human boss will probably be augmented by one
Here’s something that might not surprise you: When the weather is nice, CEOs play golf instead of working.
Ben-Rephael et al. (2024) actually quantified this in their paper on executive effort. Researchers obtained Bloomberg terminal usage data for C-suite executives at publicly traded firms and matched it with SEC filings. The findings? A one-standard-deviation increase in good weather corresponded to roughly 20 fewer hours of effort per quarter for CEOs. Effort also declined when annual targets became unreachable—why work hard in the second half if it won’t affect your bonus?
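For readers who like the mechanics, here is a minimal sketch, on simulated data, of how an effect like that is typically estimated: regress quarterly effort on a standardized weather index and read off the slope. The variable names and numbers below are hypothetical, not the authors' actual specification.

```python
# Minimal sketch: estimating an effort-weather slope on simulated data.
# This is NOT the authors' specification; all numbers are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5_000  # hypothetical CEO-quarters

weather = rng.normal(size=n)  # good-weather index, standardized (mean 0, SD 1)
effort = 300 - 20 * weather + rng.normal(scale=40, size=n)  # hours per quarter

fit = sm.OLS(effort, sm.add_constant(weather)).fit()
print(fit.params)  # slope recovers roughly -20 hours per 1 SD of good weather
```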
Research also indicates that the acquisition of power reduces perspective-taking and empathy—which may explain why the C-suite plays golf on Friday, while demanding longer work hours from everyone else.
Human CEOs have issues. They may not care enough about society or their workers, but they’re also not singularly focused on shareholder value. They’ve got plenty of room left for status, leisure, reputation, and control. Hence the usual greatest hits of agency problems: Perks (yes, the corporate jet), the quiet life, headline-friendly philanthropy, and a little empire-building on the side.
There’s another problem with human CEOs: They have specific types and competencies that may not match what a firm needs. Dahlstrand et al. (2024) found that firms led by the “wrong” CEO type lose up to 20% of revenue per employee. Eliminating CEO mismatch across their sample would raise aggregate productivity by approximately 9%.
By contrast, an AI “CEO” won’t crave applause, corner offices, or a weekend place in Aspen. It doesn’t want $1 billion per year in compensation and won’t skip out to the golf course after a bad first half. The AI CEO will have available the sum of all human knowledge, real-time access to all data inside the company, and the ability to be present at multiple meetings simultaneously. And it can shift type (e.g., operations-focused or visionary leader) on request. In the near future, it’s likely to be able to decide, on its own, what type of CEO is needed for a given situation.
Of course, there may be other downsides to consider.

An AI CEO that’s available 24/7, calm, rational, and strategic sounds ideal—unless, of course, it’s biding its time while planning to eliminate humanity. (I’m not being entirely flippant; for more details, see the Wikipedia article on existential risk from AI.)
And the agency problem doesn’t disappear—it just moves upstream. Who programs the AI CEO’s goals? What guardrails does it have? Who audits what it’s doing? And the humans around it may treat it like a very expensive ventriloquist’s dummy: “Don’t blame me—the algorithm wanted the jet.”
But aside from the above: How technologically feasible is it for an AI to do the work of a CEO?
Keep in mind that all the examples I’m going to discuss are about older versions of Large Language Models (LLMs). As Mollick (2024) put it, “Whatever AI you are using right now is going to be the worst AI you will ever use.” The capabilities will continue to improve.
AI Can (Mostly) Outperform Human CEOs
Mudassir et al. (2024) found that AI can mostly outperform human CEOs even now.
University of Cambridge researchers put 344 participants (students and senior executives) through a simulated CEO role in a digital twin of the U.S. automotive industry, pitting them against GPT-4o—a model that’s already obsolete.
GPT-4o consistently beat the human players on growth, product design, market share, and profitability. When it failed, it got fired faster by virtual boards, largely because it tended to over-optimize for short-term gains. It also couldn’t handle black swan events (such as pandemic-style demand collapses)—human players adapted better to those.
But the bottom line: AI is probably already useful for augmenting a human CEO.
Where AI Hits Hardest: The White-Collar Surprise
Unlike earlier waves of automation, which displaced factory workers and routine clerical staff, generative AI is penetrating the upper echelons of the labor market.
Felten et al. (2023) established that exposure to generative AI is highest for highly paid, highly educated, white-collar occupations—lawyers, PR specialists, and instructors—not assembly-line workers. Eloundou et al. (2023) found that roughly 80% of U.S. workers have at least 10% of their tasks exposed to LLMs, with higher-wage jobs showing greater exposure than lower-wage positions.
A field experiment with 758 BCG consultants by Dell’Acqua et al. (2023) demonstrated that this isn’t merely theoretical: On creative and strategic tasks (such as ideation, market segmentation, and persuasive writing)—the bread and butter of elite professional services—AI users finished 12% more tasks, 25% faster, and at 40% higher quality.
A more recent study with 776 P&G professionals by Dell’Acqua et al. (2025) offers even more striking results. Individuals working with AI matched the performance of two-person teams without AI. Teams using AI were significantly more likely to produce exceptional (top 10%) solutions.
Perhaps most surprisingly, AI use increased positive emotions (excitement, energy) and decreased negative ones (anxiety, frustration) relative to a human working alone—the AI provided emotional benefits similar to those of working with another human.
Working in a team with an AI looks increasingly plausible. And that brings us a step closer to the AI leading the team.
AI as Master Persuader
One of the CEO’s essential roles is being “persuader-in-chief”—the person who must convince boards, employees, investors, and customers to follow a particular direction.
LLMs have proven to be remarkably effective persuaders, often outperforming humans by significant margins.
Salvi et al. (2024) found that GPT-4 with access to personal information was 82% more persuasive than human debaters. A field experiment on Reddit’s r/ChangeMyView (2025) found AI persuasion rates three to six times higher than those of humans. In head-to-head comparisons, LLM-generated arguments have an 80–90% probability of being judged more persuasive than arguments written by incentivized human persuaders.
The Wade Test: Can AI Be the CEO’s Voice?
Choudhury et al. (2024) looked at a core question: Can generative AI effectively automate CEO communications?
Their field experiment centered on a specific CEO, referred to as Wade. An AI was trained on Wade’s prior communications; 105 employees were then randomly shown either Wade’s own answers or AI-generated answers to ten questions.
The result: Employees couldn’t tell the difference.
But here’s where it gets interesting. When employees believed a response was AI-generated—regardless of the actual source—they rated it as less helpful. Labeling answers as AI-generated reduced perceived helpfulness even when the answers came from the actual human CEO.
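To see how sharp that labeling result is, here is a toy simulation of the design, with invented effect sizes: the label shown to employees is randomized independently of the true source, and only the label moves the ratings.

```python
# Toy simulation of the Wade Test labeling effect (invented numbers, not the
# study's data): ratings respond to the "AI-generated" label, not the source.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000  # simulated answer ratings

source_is_ai = rng.random(n) < 0.5  # true author of the answer
labeled_ai = rng.random(n) < 0.5    # label shown, randomized independently

# Assumed model: the label costs 0.6 points of perceived helpfulness;
# the true source costs nothing.
rating = 4.0 - 0.6 * labeled_ai + rng.normal(scale=1.0, size=n)

print(stats.ttest_ind(rating[labeled_ai], rating[~labeled_ai]))       # big gap
print(stats.ttest_ind(rating[source_is_ai], rating[~source_is_ai]))   # no gap
```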
The good news: We’ve got an AI that can communicate like the CEO.
The bad news: Its success currently depends on people not realizing they’re dealing with the AI.
But with wider use and social acceptance, that might change.
What This Means for You
Many of the tasks that defined “knowledge work” as protected from automation—synthesis, persuasion, creative problem-solving—are now within AI’s expanding capability boundary. As Dell’Acqua et al. (2023) noted, we still face a jagged frontier in AI capabilities; it cannot do everything a human knowledge worker can do, but that frontier is constantly shifting, and the tasks that are human-only for knowledge workers are shrinking.
And being a CEO is the ultimate knowledge work.
Even with current technology, AIs can do many of the things a CEO does. But jokes and snark aside, I think we’re still a considerable way off from a company truly having an AI CEO, for all sorts of legal, socio-political, and practical reasons.
What’s more likely: CEOs will begin to use AI to augment themselves, delegating more as they and their boards become comfortable with AI’s abilities. This is already happening. Eric Yuan of Zoom, Reid Hoffman, and other high-profile leaders are experimenting with AI clones—sophisticated digital twins that use their voice, likeness, or knowledge bases to scale their presence.
If you are an early career knowledge worker—a starting management consultant, a new hire in a marketing department, or similar—look for opportunities to manage AI “team members” now. This is what you’ll be doing a lot of over your career.
But there’s another reason: Weidmann et al. (2025) found that leadership effectiveness with AI agents strongly predicts leadership effectiveness with human teams (ρ = 0.81). Successful leaders of both AI agents and humans share the same behaviors: more questioning and more conversational turn-taking. Learning how to lead AI well now will quite possibly make you a better leader of humans later.
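To make the ρ = 0.81 concrete, here is a minimal sketch on simulated leadership scores. The data and the noise level are invented, tuned only so the rank correlation lands near the reported value.

```python
# Simulated illustration of a rank correlation near rho = 0.8 between
# leading-AI-agents scores and leading-humans scores. Not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 200  # hypothetical leaders

ai_team_score = rng.normal(size=n)
# Noise chosen so the correlation comes out around 0.8.
human_team_score = 0.8 * ai_team_score + 0.6 * rng.normal(size=n)

rho, p = stats.spearmanr(ai_team_score, human_team_score)
print(f"Spearman rho = {rho:.2f} (p = {p:.1e})")
```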
If you’re in any kind of knowledge work role (and particularly if you are in a leadership role), it’s time to start thinking about how you can augment yourself with AI.
If you are a board member, it’s time to start thinking about how your C-suite is engaging with AI—and whether the organization should monitor it. Should your C-suite be experimenting with digital clones of themselves? What guardrails do you want in place? Should the ability to collaborate successfully with AI be a competency you explicitly evaluate for your C-suite?
Across all these roles, one pattern is clear: Wage gains from AI are accruing disproportionately to workers who can leverage these tools effectively. The premium isn’t for human judgment and problem-solving alone—it’s for the capacity to put AI judgment to work. And as Riedl & Weidmann (2025) showed, using a 667-person interactive benchmark across math, physics, and moral reasoning tasks, the ability to collaborate with AI is a separate, measurable skill—not merely a byproduct of being good at the underlying task. It is trainable, and it is distinct from solo problem-solving ability.
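In statistical terms, the “separate skill” claim is a variance decomposition. Here is a toy version with simulated scores (Riedl & Weidmann's actual measurement uses an interactive benchmark, not this model): solo skill explains only part of with-AI performance, and the rest is a distinct collaboration component.

```python
# Toy decomposition: with-AI performance = solo skill + a separate
# collaboration skill + noise. All scores are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 667  # matches the benchmark's sample size; the scores are invented

solo_skill = rng.normal(size=n)
collab_skill = rng.normal(size=n)  # assumed independent of solo skill
with_ai_score = 0.5 * solo_skill + 0.7 * collab_skill + 0.3 * rng.normal(size=n)

r = np.corrcoef(solo_skill, with_ai_score)[0, 1]
print(f"Solo skill explains only {r**2:.0%} of with-AI performance;")
print("the remainder reflects a separate, measurable collaboration skill.")
```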
The Bottom Line
Your next boss probably won’t be an AI. But it will likely be a human augmented by AI capabilities. And, in the long run, true AI CEOs might be coming.
“Shame on me if OpenAI is not the first big company run by an AI CEO.” —Sam Altman, Conversations with Tyler podcast, November 2025.
And aside from automation bias (our tendency to over-rely on technology) and several studies showing that human-AI teams often underperform AI alone—what could possibly go wrong?
“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” —Sam Altman, Airbnb Open Air Conference, June 2015
Skynet, Daleks, and Colossus: The Forbin Project—here we come!
What’s your experience working with AI? Are you already augmenting yourself—or your team? If you are a board member, or an advisor to a board or CEO, are you thinking about creating AI clones of your C-suite? What are your policies for C-suite use of AI? I’d love to hear in the comments.
Thanks for reading! Subscribe for free to receive new posts.
About Steven Strauss: From 2014 to 2025, Strauss was the John L. Weinberg/Goldman Sachs Visiting Professor at Princeton University. Immediately prior to Princeton, he was on the faculty of the Harvard Kennedy School and was a 2012 Harvard University Advanced Leadership Fellow. Before Harvard, he served in the Bloomberg administration in New York City and as a management consultant with McKinsey’s London office. He holds a Ph.D. in Management from Yale University.
Bibliography
Altman, S. (2015). Remarks at Airbnb Open Air Conference. June 2015.
Altman, S. (2025). Interview on Conversations with Tyler podcast, hosted by Tyler Cowen. November 2025.
Ben-Rephael, A., Carlin, B., Da, Z., & Israelsen, R. (2024). “Uncovering the Hidden Effort Problem.” Working Paper.
Bohren, J. A., et al. (2024). “AI-Generated Ideas and Human Creativity.” Working Paper.
Carrasco-Farré, X. (2024). “The Linguistic Foundations of AI Persuasion: Moral-Emotional Language and Cognitive Load.” Working Paper.
Choudhury, P., Vanneste, B. S., & Zohrehvand, A. (2024). “The Wade Test: Generative AI and CEO Communication.” Harvard Business School Working Paper 25-008.
Dahlstrand, A., László, D., Schweiger, H., Bandiera, O., Prat, A., & Sadun, R. (2024). “CEO-Firm Matches and Productivity in 42 Countries.” Harvard Business School Working Paper 25-033.
Dell’Acqua, F., Ayoubi, C., Lakhani, K., Lifshitz, H., Sadun, R., Mollick, L., Mollick, E., Han, Y., Goldman, J., Nair, H., & Taub, S. (2025). “The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise.” Harvard Business School Working Paper 25-043.
Dell’Acqua, F., McFowland III, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality.” Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013.
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” arXiv preprint.
Felten, E., Raj, M., & Seamans, R. (2023). “Occupational Heterogeneity in Exposure to Generative AI.” Working Paper.
Hackenburg, K., et al. (2025). “Information Density and Persuasion in Large Language Models.” Working Paper.
Johnston, H., & Makridis, C. (2025). “AI Augmentation and Wage Gains.” Working Paper.
Mollick, E. (2024). Co-Intelligence: Living and Working with AI. Portfolio/Penguin.
Mudassir, H., et al. (2024). “AI Can (Mostly) Outperform Human CEOs.” Harvard Business Review, September 2024.
Reddit Field Study. (2025). “AI Persuasion in r/ChangeMyView: A Field Experiment.” Working Paper.
Riedl, C., & Weidmann, N. B. (2025). “Quantifying Human-AI Synergy.” Northeastern University/UCL Working Paper.
Salvi, F., et al. (2024). “On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial.” arXiv preprint.
Weidmann, B., Xu, Y., & Deming, D. J. (2025). “Measuring Human Leadership Skills with AI Agents.” NBER Working Paper No. 33662.