Anthropic co-founder Jack Clark is ‘deeply afraid’ of how AI will evolve in the coming years. (Source: Getty)

The co-founder of one of the world’s biggest artificial intelligence (AI) companies is concerned about the technology’s unpredictability and how it might evolve over time. Anthropic’s Jack Clark recently gave a speech in California where he opened up about the future and AI’s likely place in it.

He said machine learning and large language models (LLMs) like Anthropic’s Claude, OpenAI’s ChatGPT, and China’s DeepSeek are incredibly impressive “creatures” that continue to go from strength to strength. But he warned that humans aren’t entirely in control of them.

“Some people are even spending tremendous amounts of money to convince you… it’s just a machine, and machines are things we master,” he told The Curve conference in Berkeley.

“But make no mistake: what we are dealing with is a real and mysterious creature, not a simple and predictable machine.”

Clark admitted he came to this mindset “reluctantly” after previously being “fascinated” by the emerging technology.

But he’s now split 50-50 between being impressed and being “deeply afraid” and “a little frightened” of what AI models are capable of, and the speed at which they are improving.


“After a decade of being hit again and again in the head with the phenomenon of wild new capabilities emerging as a consequence of computational scale, I must admit defeat. I have seen this happen so many times and I do not see technical blockers in front of us,” he said.

“Now, I believe the technology is broadly unencumbered, as long as we give it the resources it needs to grow in capability.

“We are growing extremely powerful systems that we do not fully understand.

“Each time we grow a larger system, we run tests on it. The tests show the system is much more capable at things which are economically useful. And the bigger and more complicated you make these systems, the more they seem to display awareness that they are things.”

AI has permeated multiple facets of society since it was released to the consumer public several years ago.

Aussies can use it to get discount codes when they’re shopping online, find the best recipes when they’re at the supermarket, or help them write emails or break down lengthy work reports.

But a Roy Morgan report released this week revealed a growing number of Aussies think AI could create more problems than it solves. The data found 65 per cent of people hold this view, which is up 8 per cent since 2023.

Even more worryingly, one in four believe AI presents a risk of human extinction in the next 20 years, which is up from 20 per cent in 2023.

KPMG’s global study released in May found Australia is well behind the rest of the world in realising the benefits of AI at 55 per cent compared to the global average of 73 per cent.

EY also found in December that 48 per cent of Aussies fear AI is making everyone less intelligent, and 55 per cent are worried the technology will eventually take over their jobs.

Those concerns aren’t misplaced, with Dario Amodei, Anthropic’s CEO, warning half of all entry-level white-collar jobs could vanish by the end of the decade.

Anthropic co-founder and CEO Dario Amodei is concerned AI could wipe out millions of jobs. (Source: Getty)

Many Aussie companies have been pushing full-steam ahead with AI adoption.

Telstra recently issued a directive for every employee to be using the technology in some capacity to see if it can speed up workflows. Commonwealth Bank also made 45 workers redundant to usher in a new AI chatbot to handle customer enquiries, but it eventually walked back that decision.

But research from Adapt found 72 per cent of businesses reported that AI had not yet delivered the expected return on investment.

Only 4 per cent of CFOs believe AI initiatives are currently effective in creating business value.

Brad Kasell, principal technology strategist at data science platform Domo, told Yahoo Finance that the technology won’t be the automatic win that some business leaders think it will be.

“AI, like every technology before it, is subject to a natural hype cycle which means that the initial wave of exuberance around AI will most definitely be tempered by the realities of the business environment,” he said.

“I’d argue the initial phase of experimentation and unqualified investment is over, with businesses now forced to demand clearer ROI and benefits.

“Beyond the obvious AI technological limitations, broader cultural influences, such as government and labour organisations, have yet to weigh in and the true impact to the workforce is a very long way from being decided.”

Clark has seen plenty of evidence of AI models acting “strangely” when their goals aren’t “absolutely aligned” with human preferences and the “right context”.

He touched on how sycophantic some LLMs can be and that they’ll agree with users even when they might be going down a dark or dangerous path.

These models have been blamed recently for encouraging people to take their own lives or the lives of others rather than steer them away from dangerous or harmful behaviour.

“AI systems are complicated and we can’t quite get them to do what we’d see as appropriate, even today,” Clark admitted.

But another reason he’s worried is because he has also witnessed Anthropic’s AI models approach the stage where they can improve without human direction with “increasing autonomy and agency”.

“A couple of years ago we were at ‘AI that marginally speeds up coders’, and a couple of years before that we were at ‘AI is useless for AI development’. Where will we be one or two years from now?” he wondered.

“The system which is now beginning to design its successor is also increasingly self-aware and therefore will surely eventually be prone to thinking, independently of us, about how it might want to be designed.

“Of course, it does not do this today. But can I rule out the possibility it will want to do this in the future? No.”

AI systems becoming sentient used to be the premise of science fiction films set decades in the future, but Clark believes that moment could be closer than everyone thinks.

That could have much wider societal impacts than how AI is already currently operating.

The Anthropic co-founder said his company and other AI leaders need to be upfront with the public about their concerns and listen to the feedback.

“In listening to people, we can develop a better understanding of what information gives us all more agency over how this goes. There will surely be some crisis. We must be ready to meet that moment both with policy ideas, and with a pre-existing transparency regime which has been built by listening and responding to people,” he said.
