Navio Kwok, PhD, is a leadership adviser specializing in organizational psychology at Russell Reynolds Associates, a leadership advisory firm. Fawad Bajwa leads the firm's global AI, analytics and data practice.
Our growing reliance on artificial intelligence tools means we can now outsource any question – along with the critical thinking that used to come with it.
In a global survey, Russell Reynolds Associates (RRA) finds that nearly one in two leaders use AI in their day-to-day workflow or have piloted a generative AI program.
In the general population, Pew Research Center reports one in three U.S. adults have used ChatGPT, almost double that of 2023. A survey of Canadian students by KPMG Canada finds that more than seven in 10 rely on generative AI for their schoolwork.
Usage alone isn’t the issue – the real risk is substitution. Each time we outsource the work of thinking, we reinforce the habit of not thinking.
In a study by MIT's Media Lab, researchers asked participants to write SAT essays under one of three conditions: using ChatGPT, using Google Search or using no tools at all. When the researchers measured participants' neural activity, ChatGPT users engaged their brains the least, struggled to remember what they wrote and reported the lowest sense of ownership over their work.
Over time, this erodes the mental muscles and processes that help us learn and make sound decisions.
To develop critical thinking, we must go through the painstaking process of memorizing facts – interspersed with rest periods – so our brains have enough knowledge stored in long-term memory to form new connections and insights. AI may seem to put all the answers at our fingertips, but relying on it atrophies the cognitive capabilities we need to judiciously evaluate the information we receive.
Unfortunately, humans are incredibly easy to deceive.
A decade before ChatGPT entered the zeitgeist, researchers from the University of Waterloo found that people readily mistake well-phrased nonsense – sentences that have absolutely no meaning but follow the rules of grammar and syntax – for something deeply meaningful. And those who spread such nonsense are themselves more likely to fall prey to it. This is especially troubling in the context of AI, given its propensity to hallucinate and generate answers that have style but no substance.
For organizational leaders, there are implications for incoming and existing talent.
Interview for real expertise
Whether we call it critical thinking, reasoning or judgment, this is a uniquely human capability that AI cannot replicate. However, as job candidates increasingly use generative AI to prepare for interviews, they may give the impression of expertise they don't actually have.
During the interview, hiring managers should ask follow-up questions that reveal the thought processes behind candidates' decisions and actions. Each question below assesses an underlying indicator of genuine expertise.
Drawing on insight published in MIT Sloan Management Review, have candidates:
- Show their work, by asking them to explain in greater detail what they did to achieve the outcome.
- Explain the why behind what they did, by asking them about the underlying principles that guided their decisions.
- Adapt to the context, by asking them in which situations (for example, organizations or industries) their approach would not be as effective.
- Weigh their options, by asking them what other approaches they considered.
- Challenge what they did, by asking them for the strongest reason against their approach and how they would respond.
(Re)establish expectations of value over volume
Among leaders embracing AI, some focus heavily on seizing every opportunity to apply it to work tasks. In RRA's global survey, nearly four in five leaders say they are excited about AI's potential to improve productivity, accelerate decision-making and free people up for higher-value work. Yet in the rush to adopt AI at scale, leaders are asking their teams to prioritize volume over value.
The result is "workslop" – AI-generated content that appears high-quality but lacks the substance to meaningfully advance a task. A study published in Harvard Business Review finds that workers spend an average of almost two hours addressing each instance of workslop; in a company of 10,000 employees, this translates to more than $9-million annually in lost productivity.
To mitigate this risk, leaders must be clear about expectations. This means slowing teams down when depth matters and explicitly training employees to interrogate AI-generated content rather than accepting it at face value. Or, consider the "radical" option of not using AI at all.
It also requires modelling the right behaviours at the top. Leaders cannot ask for rigour and discernment while simultaneously pushing people to do more with less and rewarding those who do. The lesson is clear: if you sacrifice quality for speed, you will have neither.
Protect the experiences that create expertise
When economic conditions get tough, organizations often scale back the very experiences that help people grow – stretch assignments, mentorship and opportunities to wrestle with ambiguity. Yet these moments are the raw material from which expertise takes shape. They force people to clarify their thinking, test assumptions and sharpen the judgment that no AI tool can provide.
Leaders must therefore protect – not postpone – these developmental experiences. Give people the room and permission to think out loud, make sense of complexity and generate their own insights. These aren’t perks; they are the conditions under which expertise is formed and sustained.
This column is part of Globe Careers’ Leadership Lab series, where executives and experts share their views and advice about the world of work. Find all Leadership Lab stories at tgam.ca/leadershiplab and guidelines for how to contribute to the column here.