On 24th September 2025, I’m attending the Adam Smith Lecture at Panmure House, where Brendan McCord will be speaking on The Artificial Impartial Spectator: Adam Smith, AI, and the Corruption of Moral Sentiments. I’m grateful to be sponsored by the Cosmos Institute, which selected two attendees through an essay contest. My essay is below.
Adam Smith said “uniformity of [a worker’s] stationary life naturally corrupts the courage of his mind.” How does this risk manifest itself in modern life?
We are training ourselves to mistake algorithmic outputs for moral wisdom, corrupting the very faculty Smith considered essential for human flourishing.
Smith’s factory worker cycled through small, repetitive tasks with little opportunity to develop mental courage. Today’s corruption is more subtle. We feed symptoms to AI, receive polished playbooks, and convince ourselves that we’ve deliberated after making minor adjustments. We sidestep the difficult work of moral reasoning while maintaining its appearance.
Courage requires wrestling with a problem from start to finish. A mayor confronting poverty and difficult budgeting decisions must identify the underlying problems. Then she must ask: Whose needs should be prioritized? What kind of community are we building? How do we balance today’s needs against tomorrow’s? To solve these problems well, she needs a deep understanding of her community, a forward-looking moral vision, and ownership of the outcomes.
With AI, she can enter demographic data and budgets, adjust the results slightly, and present them as her own. She avoids examining the underlying values that should guide such decisions. If the plans fail, she can shirk responsibility. But she will also lack the practice, knowledge, and courage to tackle the next unfamiliar set of problems before her.
This corruption is compounded by mistaking AI’s consistency for Smith’s “impartial spectator,” the cultivated moral sense that guides proper judgment. The impartial spectator emerges through self-knowledge and hard-won moral experience. AI offers opaque aggregations whose reasoning and context we cannot easily interrogate. Algorithmic steadiness is not conscience.
A society fluent in selecting and editing pre-packaged ethical decisions will lack the courage to go against consensus and stake out new moral ground when it matters. By retaining the appearance of moral deliberation while abandoning its practice, it will soon discover it has neither the courage to face genuine crises nor principles worth defending.