
Artificial intelligence analysts like to talk about “the jagged edge” or the “jagged technological frontier.” The term refers to the uneven capabilities of AI, which, like a mountain range, has tall peaks but also valleys and modest hills. It is a reality managers confront as they grapple with how to use the technology effectively in their workplace.

Ethan Mollick, the Wharton School professor and artificial intelligence researcher who was among the analysts who came up with the term, says it illuminates a key feature of AI and a source of endless confusion. “How can an AI be superhuman at differential medical diagnosis or good at very hard math … and yet still be bad at relatively simple visual puzzles or running a vending machine? The exact abilities of AI are often a mystery, so it is no wonder AI is harder to use than it seems,” he writes on his blog. Harder to manage as well, of course.

Cal Newport, a Georgetown University computer science professor, observed in a recent blog post that at the start of 2025, OpenAI chief executive officer Sam Altman predicted it would be the year when AI agents would join the workforce, handling real tasks and responsibilities just like regular workers.

But that didn’t happen, and Prof. Newport argues “the products that were released, such as ChatGPT Agent, fell laughably short of being ready to take over major parts of our jobs.” In one example, he notes, a ChatGPT agent spent 14 minutes futilely trying to select a value from a drop-down menu on a real estate website. “We actually don’t know how to build the digital employees that we were told would start arriving in 2025,” he says.

It’s hype versus reality, and managers are caught in the middle. They have to live in the real world. But they also need to know where we are heading – where the jagged frontier lies – so they arrive in time. Prof. Mollick included the word “Bottlenecks” in his blog post headline, something managers are highly familiar with.

He says it’s important to understand that the frontier is jagged, and that because of this jaggedness we may get supersmart AIs that never quite fully overlap with human tasks. A major source of jaggedness is that while large language models (LLMs) are making giant strides in reading, math, general knowledge and reasoning, they do not remember new tasks or learn from them in a permanent way.

“A lot of AI companies are pursuing solutions to this issue, but it may be that this problem is harder to solve than researchers expect. Without memory, AIs will struggle to do many tasks humans can do, even while being superhuman in other areas,” he says.

Since a system is only as functional as its worst components, the bottlenecks are crucial. “Some bottlenecks are because the AI is stubbornly subhuman at some tasks. LLM vision systems aren’t good enough at reading medical imaging so they can’t yet replace doctors; LLMs are too helpful when they should push back so they can’t yet replace therapists; hallucinations persist even if they have become rarer, which means they can’t yet do tasks where 100-per-cent accuracy is required,” he says.

Some bottlenecks arise from associated processes that have nothing to do with AI’s current abilities. He notes that while AI can now identify promising drug candidates dramatically faster than traditional methods, clinical trials still require actual human patients, who take actual time to recruit, dose and monitor for results.

“This is the pattern: Jaggedness creates bottlenecks, and bottlenecks mean that even very smart AI cannot easily substitute for humans. At least not yet,” he says. At the same time, if AI learns to handle a bottleneck, the subsequent advances can be quick and huge.

There is much your organization – and your team – can do with AI, keeping that in mind. As for AI agents, a team of academics and consultants involved in such projects recently advised that leaders shouldn’t try to guess what is going to happen in 10 years but instead should ask what they can realistically achieve in the next two.

“Based on the projects we have done since late 2024, agentic AI is proving to be the real game changer (at least on the short term), providing real value to companies. The reality is also that the financial gains per project are good, but none of them are eye-popping,” write Nathan Furr, a professor of strategy at INSEAD, Jur Gaarlandt, a partner at Artefact consulting, Sid Mohan, director of data science and AI for Artefact Northern Europe and the U.S., and Andrew Shipilov, a professor of international management at INSEAD, in Harvard Business Review.

They argue that many of the leading AI proponents are overhyping when they make bold statements that entire elements of the economy will shortly be replaced by AI.

“That’s because real, functional AI in established companies is hard work: It takes relatively clean data, process mapping and deep experimentation – and even then often requires a human in the loop,” they say.

While it can be tempting to use agentic AI for customer-facing applications, they argue such efforts are messy and unpredictable. Inputs tend to be unstructured, tone and context shift constantly and regulators and consumers have little tolerance for hallucinations or errors. Back-end operations are a better fit because they are structured and repetitive, a jagged peak you are more likely to reach.

Cannonballs

Serial entrepreneur Christian Schroeder argues the more managerial responsibility you have in an organization, the faster you should be at answering your emails.

To bring people together in executing a unified strategy, have executives shadow one another for half a day and later reflect on and discuss what they have seen. Ina Toegel, a professor of leadership at IMD, and Ivy Buche, an associate director of the business transformation initiative at that business school, say the shadowing itself should be done in silence: attending meetings, observing normal workflows, participating in training sessions or sitting in on vendor negotiations.

Technology strategist Geoffrey Moore, author of Crossing the Chasm, says it is not your job to make the people on your team happy. That is their job. Your job is to make their work important. But as a bonus, there is a strong correlation between meaningful work and worker happiness, so a two-birds-for-one-stone principle is in operation.

Harvey Schachter is a Kingston-based writer specializing in management issues. He and Sheelagh Whittaker, former CEO of both EDS Canada and Cancom, are the authors of When Harvey Didn’t Meet Sheelagh: Emails on Leadership.