Although he is an expert in artificial intelligence and its applications in companies and organizations, Hunter Phoenix Van Wagoner is also a humanist at heart.

So, when he studies and analyzes the implementation and effects of AI in the workplace, his concerns go beyond the x’s and o’s (or the ones and zeros) of efficiency to the human costs and benefits.

“I want to understand the employee side,” said Van Wagoner, assistant professor of management in the College of Business and Economics at CSUF. “I worry about the employee side.”

There is no doubt that the world, and businesses and organizations in particular, are hurtling toward an AI-dominated future.

According to a report by Stanford’s Human-Centered Artificial Intelligence research institute, global private investment in AI hit a record $252.3 billion in 2024, a 26% increase overall. And when the stats for 2025 and 2026 are tallied, they are projected to continue the trend.

In the shadow of these seismic shifts, there is a very human element. And Van Wagoner doesn’t want humans to be forgotten in the rush to automate and turn decision-making over to machines.

Van Wagoner examined both the mechanisms and the implications of human-AI collaboration in organizations in a 2025 research article for the “Journal of Organizational Behavior.”

In “Navigating AI Convergence in Human–Artificial Intelligence Teams: A Signaling Theory Approach,” Van Wagoner and a German research team conducted a study with about 1,100 participants looking at how humans and AI worked together when analyzing facial recognition and hiring.

Facial recognition and hiring were chosen as categories of study because both are “high-uncertainty” tasks, in business parlance.

The researchers interpreted their findings through signaling theory, which explains how individuals interpret, respond to and make decisions based on AI signals — recommendations, predictions and classifications — under conditions of uncertainty.

Researchers sought to understand “when and why do humans align their decisions with AI recommendations. This phenomenon — known as AI convergence — is critical for unlocking the full potential of human–AI teamwork.”

In other words, can humans and AI “play nice in the sandbox?”

Many employees and companies find some form of AI useful in their work. These tools range from chatbots, which answer specific questions using existing data; to generative AI, which creates new, original content by analyzing patterns; to AI agents, which act autonomously to achieve goals without constant human oversight.

According to McKinsey & Company in its State of AI 2025 report, nearly 90% of businesses use some kind of AI, up 10% in just a year. While the report says “most organizations are still in the experimentation or piloting phase,” most are interested in expanding AI use and “62% of survey respondents say their organizations are at least experimenting with AI agents.”

As with most developments in AI, Van Wagoner says, this one is evolving rapidly.

As theitsource.asia put it in an analysis of business trends for 2026, “If 2024 marked the initial adoption phase, 2026 is shaping up to be the year when AI trends are translated into enterprise-scale applications.”

As employers integrate AI into their systems and employee workflows, the question is when, and to what degree, the technology will “take over,” and how much autonomy and value humans will retain.

As corporations rush to monetize their AI investments by increasing efficiency and reducing workforce costs, restraining a wholesale AI takeover is a real concern.

Van Wagoner’s study finds that employees work best when they see AI as an option — a tool that can make suggestions and recommendations — rather than something they are forced to use or abide by.

The professor said the worry is that if humans feel compelled to go by AI recommendations, “perhaps they won’t use it.”

Because of egos, psychology and other factors including so-called “inclusion behavior,” humans can be the wildcard in teaming with AI.

“We’re messy,” Van Wagoner joked, adding many of us fear being “fired by algorithm.”

As a teacher, Van Wagoner says much of his work with AI in the classroom involves navigating the tricky passage of teaching students how to use AI as a collaborative tool, while simultaneously maintaining uniquely human critical thinking.

Thus, Van Wagoner says, it is likewise important for management to implement AI in a way that helps humans align with AI suggestions while retaining their autonomy and critical thinking.

“If we can get employers to really invest in training employees how to use AI effectively, that’s the sweet spot,” he said.

How long humans and organizations will be able to maintain this delicate balance is anyone’s guess.