{"id":419980,"date":"2026-01-19T20:48:08","date_gmt":"2026-01-19T20:48:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/419980\/"},"modified":"2026-01-19T20:48:08","modified_gmt":"2026-01-19T20:48:08","slug":"jagged-edges-and-bottlenecks-the-confusing-uneven-capabilities-of-ai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/419980\/","title":{"rendered":"Jagged edges and bottlenecks: The confusing uneven capabilities of AI"},"content":{"rendered":"<p class=\"c-article-body__text text-pr-5\">Interested in more careers-related content? Check out our new weekly <a href=\"https:\/\/www.theglobeandmail.com\/newsletters\/#worklife\" rel=\"nofollow noopener\" title=\"https:\/\/www.theglobeandmail.com\/newsletters\/#worklife\" target=\"_blank\">Work Life newsletter<\/a>. Sent every Monday afternoon.<\/p>\n<p class=\"c-article-body__text text-pr-5\">Artificial intelligence analysts like to talk about \u201cthe jagged edge\u201d or \u201cjagged technological frontier.\u201d It is a reality managers confront as they grapple with how to use the technology effectively in their workplace. It refers to the uneven capabilities of AI, which, like a mountain range, can have tall peaks but also valleys and modest hills.<\/p>\n<p class=\"c-article-body__text text-pr-5\">Ethan Mollick, the Wharton School professor and artificial intelligence researcher who was among the analysts who came up with the term, says it illuminates a key feature of AI and a source of endless confusion. \u201cHow can an AI be superhuman at differential medical diagnosis or good at very hard math \u2026 and yet still be bad at relatively simple visual puzzles or running a vending machine? The exact abilities of AI are often a mystery, so it is no wonder AI is harder to use than it seems,\u201d he <a href=\"https:\/\/www.oneusefulthing.org\/p\/the-shape-of-ai-jaggedness-bottlenecks\" rel=\"nofollow noopener\" target=\"_blank\">writes<\/a> on his blog. 
Harder to manage as well, of course. <\/p>\n<p class=\"c-article-body__text text-pr-5\">Cal Newport, a Georgetown University computer science professor, observed in a recent blog post that at the start of 2025, OpenAI chief executive officer Sam Altman predicted it would be the year when we could see AI agents join the workforce, handling real tasks and responsibilities just like regular workers. <\/p>\n<p class=\"c-article-body__text text-pr-5\">But that didn\u2019t happen, and Prof. Newport <a href=\"https:\/\/calnewport.com\/why-didnt-ai-join-the-workforce-in-2025\/\" rel=\"nofollow noopener\" target=\"_blank\">argues<\/a> \u201cthe products that were released, such as ChatGPT Agent, fell laughably short of being ready to take over major parts of our jobs.\u201d In one example, he notes, a ChatGPT agent spent 14 minutes futilely trying to select a value from a drop-down menu on a real estate website. \u201cWe actually don\u2019t know how to build the digital employees that we were told would start arriving in 2025,\u201d he says. <\/p>\n<p class=\"c-article-body__text text-pr-5\">It\u2019s hype versus reality, and managers are caught in the middle. They have to live in the real world. But they also need to know where we are going \u2013 where the jagged frontiers are \u2013 so they arrive in time. Prof. Mollick included in his blog post headline the word \u201cBottlenecks,\u201d something managers are highly familiar with.<\/p>\n<p class=\"c-article-body__text text-pr-5\">He says it\u2019s important to understand the frontier is jagged, and it might be that because of this jaggedness we get supersmart AIs that never quite fully overlap with human tasks. A major source of jaggedness is that while large language models (LLMs) are making giant strides in reading, math, general knowledge and reasoning, they do not remember new tasks or learn from them in a permanent way. 
<\/p>\n<p class=\"c-article-body__text text-pr-5\">\u201cA lot of AI companies are pursuing solutions to this issue, but it may be that this problem is harder to solve than researchers expect. Without memory, AIs will struggle to do many tasks humans can do, even while being superhuman in other areas,\u201d he says. <\/p>\n<p class=\"c-article-body__text text-pr-5\">Since a system is only as functional as its worst components, the bottlenecks are crucial. \u201cSome bottlenecks are because the AI is stubbornly subhuman at some tasks. LLM vision systems aren\u2019t good enough at reading medical imaging so they can\u2019t yet replace doctors; LLMs are too helpful when they should push back so they can\u2019t yet replace therapists; hallucinations persist even if they have become rarer, which means they can\u2019t yet do tasks where 100-per-cent accuracy is required,\u201d he says.<\/p>\n<p class=\"c-article-body__text text-pr-5\">Some bottlenecks arise from associated processes that have nothing to do with AI\u2019s current ability. He notes that while AI can now identify promising drug candidates dramatically faster than traditional methods, clinical trials still need actual human patients who take actual time to recruit, be given a dose and monitor for results. <\/p>\n<p class=\"c-article-body__text text-pr-5\">\u201cThis is the pattern: Jaggedness creates bottlenecks, and bottlenecks mean that even very smart AI cannot easily substitute for humans. At least not yet,\u201d he says. At the same time, if AI learns to handle a bottleneck, the subsequent advances can be quick and huge. <\/p>\n<p class=\"c-article-body__text text-pr-5\">There is much your organization \u2013 and your team \u2013 can do with AI, keeping that in mind. 
As for AI agents, a team of academics and consultants involved in such projects recently advised that leaders shouldn\u2019t try to guess what is going to happen in 10 years but instead should ask what they can realistically achieve in the next two. <\/p>\n<p class=\"c-article-body__text text-pr-5\">\u201cBased on the projects we have done since late 2024, agentic AI is proving to be the real game changer (at least on the short term), providing real value to companies. The reality is also that the financial gains per project are good, but none of them are eye-popping,\u201d Nathan Furr, a professor of strategy at INSEAD, Jur Gaarlandt, a partner at Artefact consulting, Sid Mohan, director of data science and AI for Artefact Northern Europe and the U.S., and Andrew Shipilov, a professor of international management at INSEAD, <a href=\"https:\/\/hbr.org\/2025\/11\/ai-agents-arent-ready-for-consumer-facing-work-but-they-can-excel-at-internal-processes\" rel=\"nofollow noopener\" target=\"_blank\">write<\/a> in Harvard Business Review. <\/p>\n<p class=\"c-article-body__text text-pr-5\">They argue many of the leading AI proponents are overhyping when they make bold statements that entire elements of the economy will shortly be replaced by AI. <\/p>\n<p class=\"c-article-body__text text-pr-5\">\u201cThat\u2019s because real, functional AI in established companies is hard work: It takes relatively clean data, process mapping and deep experimentation \u2013 and even then often requires a human in the loop,\u201d they say. <\/p>\n<p class=\"c-article-body__text text-pr-5\">While it can be tempting to use agentic AI for customer-facing applications, they argue such efforts are messy and unpredictable. Inputs tend to be unstructured, tone and context shift constantly, and regulators and consumers have little tolerance for hallucinations or errors. Back-end operations are a better fit because they are structured and repetitive, a jagged peak you are more likely to reach. 
<\/p>\n<p class=\"c-article-body__text text-pr-5\">Cannonballs<\/p>\n<p>Serial entrepreneur Christian Schroeder <a href=\"https:\/\/christianschroeder.substack.com\/p\/10x-productivity-eight-hacks-that\" rel=\"nofollow noopener\" target=\"_blank\">argues<\/a> the more managerial responsibility you have in an organization, the faster you should be at answering your emails.To bring people together in executing a unified strategy, have executives <a href=\"https:\/\/hbr.org\/2026\/01\/to-execute-a-unified-strategy-leaders-need-to-shadow-each-other\" rel=\"nofollow noopener\" target=\"_blank\">shadow<\/a> one another for a half day and later reflect and discuss what they have seen. Ina Toegel, a professor of leadership at IMD, and Ivy Buche, an associate director of the business transformation initiative at that business school, say the shadowing itself should be in silence, attending meetings, observing normal workflows, participating in training sessions or sitting in on vendor negotiationsTechnology strategist Geoffrey Moore, author of Crossing the Chasm, <a href=\"https:\/\/bradenkelley.com\/2025\/12\/bringing-energy-back-to-work\/\" rel=\"nofollow noopener\" target=\"_blank\">says<\/a> it is not your job to make the people on your team happy. That is their job. Your job is to make their work important. But as a bonus, there is a strong correlation between meaningful work and worker happiness, so a two-birds-for-one-stone principle is in operation. <\/p>\n<p class=\"c-article-body__text text-pr-5\">Harvey Schachter is a Kingston-based writer specializing in management issues. He, along with Sheelagh Whittaker, former CEO of both EDS Canada and Cancom, are the authors of When Harvey Didn\u2019t Meet Sheelagh: Emails on Leadership.<\/p>\n","protected":false},"excerpt":{"rendered":"Interested in more careers-related content? Check out our new weekly Work Life newsletter. Sent every Monday afternoon. 
Artificial&hellip;\n","protected":false},"author":2,"featured_media":325209,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,4081,61],"class_list":{"0":"post-419980","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-ordid20000","14":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/419980","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=419980"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/419980\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/325209"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=419980"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=419980"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=419980"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}