The editors of NRC select the best articles from The Economist for a broader perspective on international politics and economics.
How much should managers let bots do the thinking?
This article is from The Economist
Calculators didn’t make everyone innumerate. GPS navigation systems made driving easier. In any conversation about the cognitive effects of artificial intelligence, these two earlier technologies are reasonably likely to come up. Each is a useful entry-point into two big questions. How might AI change the way people think, and should managers do anything in response?
Calculators and GPS devices are examples of "cognitive offloading"—a deliberate decision to delegate a specific task to technology. In both cases, it has been worth it. Calculators improve students' mathematical performance, helping to build problem-solving skills and self-confidence. GPS means drivers no longer have to pull over and faff about with maps. It's harder to get completely lost; it's easier to avoid terrible traffic.
But there are costs, too. In a 2019 paper, Mark LaCour of the University of Louisiana at Lafayette and his co-authors deliberately programmed calculators to give a group of undergraduates the wrong answers to certain problems. In general they found that there was very little suspicion of slightly inaccurate calculations. Even when answers were patently absurd, some people seemed to accept them without question.
The use of GPS navigation devices can also sap people’s ability to think for themselves. A study conducted by Louisa Dahmani of Harvard Medical School and Véronique Bohbot of McGill University found that greater lifetime use of GPS by drivers was associated with worse spatial memory. Other research shows that pedestrians who navigate with their phones take longer routes and make more stops than physical-map users.
A similar pattern is also visible in online search. Using the internet to look up information is clearly efficient, but there are trade-offs. The "Google effect" refers to a research finding that people have worse recall of information they expect to be able to find online.
AI supercharges these trade-offs. Handing specific tasks to models will often make sense: they are much better than humans at many things. But AI's range of capabilities, allied to a convenient conversational interface and a seductively confident persona, raises the prospect less of delegation than of wholesale capitulation. Hence "cognitive surrender", a term coined by Steven Shaw of the Wharton School of the University of Pennsylvania in a recent paper written with his colleague, Gideon Nave.
Messrs Shaw and Nave asked volunteers to answer demanding questions with the assistance of AI and, a little like Mr LaCour's calculator experiment, randomly introduced errors into the machine's answers. When the model gave accurate responses, the people using it outperformed a control group of people relying on their own brainpower. When the AI gave the wrong answers, the people using it did much worse than the control group. In other words, people stopped thinking for themselves.
At the moment bosses are more focused on getting employees to use AI than fussing about its effects on how they think. But most employers also value critical thinking: models are still prone to embarrassing errors, for one thing, and novel situations require skilled humans to step in. So it is worth asking what managers can do to encourage cognitive resistance.
They can deliberately hire workers who enjoy thinking. People with high "need for cognition" (yes, dear reader, that means you) are somewhat, though not entirely, protected against the risk of cognitive surrender, says Mr Shaw. Incentives and feedback can help, too. One of the experiments in his paper introduced monetary rewards for getting things right, and also notified participants during the test whether an item had been answered correctly or not. These techniques encouraged AI users who were being fed the wrong answers to override the model more (though they still did worse than people who relied on their own judgment).
Engineering AI-free periods may have value, too. Another recent study, by Stefanos Poulidis of INSEAD and his co-authors, recruited over 200 chess-club students to train on an AI-assisted platform. Some of the students were automatically given AI tips at a limited number of specific moments; others could click a button at any time to get advice. The students who had on-demand access achieved less than half the performance gains of those who had no say over when they got help. Offloading is fine. Giving up is another matter.
© 2026 The Economist Newspaper Limited. All rights reserved.