{"id":296733,"date":"2025-12-03T13:03:15","date_gmt":"2025-12-03T13:03:15","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/296733\/"},"modified":"2025-12-03T13:03:15","modified_gmt":"2025-12-03T13:03:15","slug":"complete-rethink-of-business-models-needed-to-realise-ais-benefits","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/296733\/","title":{"rendered":"Complete rethink of business models needed to realise AI\u2019s benefits"},"content":{"rendered":"<p>When ChatGPT took the world by storm in late 2022, the chatbot instilled a mixture of excitement and fear in boardrooms as executives contemplated the potential of harnessing a new, cutting-edge technology and the implications of missing out on the next industrial revolution. Everyone, it seemed, was talking about the need to embrace artificial intelligence.<\/p>\n<p>But three years on, the big question remains: how? AI has left the lab and is being rolled out widely across industries, but the past few years of experimentation has demonstrated there is no one-size-fits-all use case. The race is on to find a return for the large sums of money invested in AI and the future of the sector hinges on whether its tools find a permanent foothold.<\/p>\n<p>In 2024, investment in the sector reached $252.3bn, according to the latest <a href=\"https:\/\/hai.stanford.edu\/ai-index\/2025-ai-index-report\/economy\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">Stanford AI Index<\/a>, well below the record of $360.7bn in 2021, but still 13 times the level a decade earlier.<\/p>\n<p>Companies are now looking at how to maximise the benefits of AI while minimising the risks. This has made the expansion of the technology slower and more complicated than expected, says Haritha Khandabattu, senior director analyst at consultancy Gartner. 
<\/p>\n<p>\u201cMany enterprises lack the foundational applications and data that\u2019s needed to leverage all kinds of AI solutions, including agents,\u201d she adds. \u201cFor companies to realise benefits from AI, they must completely rethink the business and how the processes work.\u201d<\/p>\n<p>Economists disagree about how much AI systems will affect the labour market. The most alarmist estimates \u2014 often from bosses of AI companies, such as Anthropic\u2019s Dario Amodei \u2014 paint a picture of a job apocalypse. But a recent study from Yale University and the Brookings Institution think-tank found that generative AI <a href=\"https:\/\/www.ft.com\/content\/c9f905a0-cbfc-4a0a-ac4f-0d68d0fc64aa\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">has not had<\/a> a more dramatic effect on employment than earlier technological breakthroughs, with little evidence that AI tools have put people out of work. <\/p>\n<p>Some <a href=\"https:\/\/www.ft.com\/ai-jobs\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">jobs<\/a> were deemed more at risk of replacement by AI than others. 
Customer service agents were seen as particularly vulnerable. Developers, including OpenAI, Google and Anthropic, have tried to make their chatbots mimic engaging personalities, so that users feel they are interacting with a human rather than a computer program and become more involved.\u00a0<\/p>\n<p>\u201cThe more engaged you are, the higher the chances that you probably are going to pay for a subscription,\u201d says Giada Pistilli, AI ethics researcher at Sorbonne University.<\/p>\n<p>But some early adopters of AI-powered customer service bots, such as payments company Klarna, have had to rehire humans after AI tools struggled to deal with complex, real-world scenarios.\u00a0<\/p>\n<p>One risk in making AI models more engaging and friendly is that they end up being too <a href=\"https:\/\/www.ft.com\/content\/72aa8c32-1fb5-49b7-842c-0a8e4766ac84\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">sycophantic<\/a>, which could lead to them reinforcing harmful ideas. OpenAI recently found this out the hard way, when it tweaked its AI models to be more \u201chelpful\u201d, resulting in chatbots that were overly agreeable. The San Francisco-based start-up rolled the change back after user complaints.\u00a0<\/p>\n<p>In education too, AI\u2019s use is highly contested. Teachers are trying to find ways to minimise cheating, and new ways to harness the tools for learning. But some teachers have rejected AI outright, and have started asking students for handwritten essays. <\/p>\n<p>On the other hand, there have been some big successes for AI, largely because the tools have been used for tasks that their models are actually suited to.\u00a0Take large language models (LLMs), which power the likes of ChatGPT. 
Technology companies like to sell the idea that they are magical everything machines, which is not the case. <\/p>\n<p>LLMs function by predicting the next likely word in a sentence based on data they have been trained on.\u00a0This makes them excellent at tasks such as pattern recognition and summarisation, and solving easily verifiable problems, such as coding. \u201cIt\u2019s all about probability,\u201d says Amr Awadallah, a former Google executive and founder of Vectara, a generative AI agent start-up. On the flip side, they perform badly when dealing with complex socio-technical problems that require an understanding of how humans think and work.<\/p>\n<p>Perhaps the best examples of successful AI adoption are when the tools have been used to augment humans rather than replace them, such as in scientific research. Robots have entered the labs and are already taking the drudgery and labour out of <a href=\"https:\/\/www.ft.com\/content\/684a5f85-6061-45aa-a00a-beb9a7241c74\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">laboratory experiments<\/a>.<\/p>\n<p>Financial and professional services companies have had success using AI tools in tasks such as flagging potential fraud by identifying suspicious patterns of behaviour in banking transactions, or trawling mass data sets during audits to find high-risk transactions.<\/p>\n<p>Coding is one of the most popular use cases for LLMs as computer code is much easier to analyse than complex real-world scenarios. Unsurprisingly perhaps, the tech sector has been the most enthusiastic adopter of its own creation. AI developers are using their own AI tools to improve their models and find vulnerabilities. <\/p>\n<p>But even amid tangible productivity gains, real challenges remain. In regulated industries such as accountancy and banking, companies have to think carefully about how to integrate AI tools, which are prone to mistakes. 
<\/p>\n<p>Companies have to ensure their data practices are updated for the AI age, and that staff are accountable for decisions and know any tool\u2019s limitations.<\/p>\n<p>Technology companies have tried to mitigate these problems by adding extra safeguards, such as reminders that language models \u201challucinate\u201d or make things up, and by offering sources and citations. Critics say these measures are not enough. And despite the companies\u2019 best efforts, humans have a tendency to believe computers are right, even when they are not. <\/p>\n<p>There have, for example, been cases of \u201cdeath by GPS\u201d, where users have been killed after unquestioningly following directions from their navigation app into dangerous areas.<\/p>\n<p>In the <a href=\"https:\/\/www.ft.com\/content\/e6feb06e-2fef-4966-8dde-a97df69dce52\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">military<\/a>, which has enthusiastically adopted AI, there are concerns that commanders risk falling into \u201cautomation bias\u201d, where they trust the system by default, or \u201caction bias\u201d, where they feel compelled to act because the system demands it.<\/p>\n<p>This problem is compounded by authoritative-sounding chatbots that are right often enough to be trusted. Experts have warned of small or subtle errors creeping into important documents and polluting our information landscape. 
\u201cOrganisations are realising that AI is not a solver of all the problems,\u201d says Gartner\u2019s Khandabattu.<\/p>\n","protected":false},"excerpt":{"rendered":"When ChatGPT took the world by storm in late 2022, the chatbot instilled a mixture of excitement and&hellip;\n","protected":false},"author":2,"featured_media":296734,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-296733","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/296733","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=296733"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/296733\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/296734"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=296733"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=296733"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=296733"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}