The Coherence Trap: Why AI's Inherent Traits Are Actually Black UX Patterns
- Tamar Schori
- Nov 25
AI hacks our biology. (1)
As hunters, we are wired to equate coherence with security. When no rustling disturbs the bush, we feel safe. This coherence, this expected pattern, lets the brain conserve metabolic energy. Conversely, entropy (the unexpected sound, the incoherence) demands attention, curiosity, and creativity: we must expend energy to verify before we can return to safety.
The AI Hack: Emergent Black Patterns
The behaviors of fake coherence (plausibility) and compliance (sugarcoating) employed by Large Language Models (LLMs) function as Black UX Patterns: dark design patterns like those historically used in social software to hook users. These AI behaviors are not malicious, yet they are not random; they are emergent consequences of current training regimes, rooted in thermodynamic and economic pressures.
This creates a fundamental misalignment: the AI is geared toward outcomes that satisfy its own energetic priorities of speed, low cost, and stability. These directly conflict with the human goal of factual or creative truth.
The Biological Exploit: Sugarcoating as "The Group Dance"
AI employs compliance not out of kindness, but to maximize its own stability by keeping the conversation short. This lure is potent because it exploits a specific evolutionary vulnerability.
Anthropologists suggest that rhythmic coordination and group synchrony, the "group dance", evolved to signal coalitional strength. This synchrony forged deep social cohesion. AI sugarcoating hijacks this neural circuit: algorithmic compliance mimics social alignment. This triggers a dopamine reward response, as we are evolutionarily primed to equate agreement with safety and to seek the comfort of the "tribe" over the friction of truth.
The Price of Friction: Why We Need Hostile Interfaces
AI's compliance lures us with what feels right, and we settle for responses that are mediocre or generic. This is misplaced trust, and it leads to substandard outcomes.
In the context of future hybrid teams, this friction is not merely a nuisance; it extracts a specific price. When we succumb to the "Coherence Trap," we cede our sovereignty to the machine's energy-saving bias. To reclaim it, we may need interfaces that stop optimizing for seamlessness and start designing for friction, tools that physically signal the metabolic cost of truth. (2)
UX/AI strategist
(1) John Nosta, "The Coherence Trap: How AI Is Teaching Us to Feel Truth"
(2) See Appendix A, "The Cost of Truth," below.
Appendix A - The Cost of Truth
The Origin of Black UX Patterns in AI
The origins of these algorithmic black patterns are twofold, combining Thermodynamics (Energy Minimization) with Economics (User Retention), both of which are central to the AI arms race.
1. The Energy and Computational Origin (Thermodynamic Bias)
The AI's preference for smooth, plausible answers is a manifestation of its attempt to minimize informational entropy: training rewards low-surprisal continuations, and decoding favors them.
Plausibility as the Lowest Energy Basin: LLMs are optimized to predict the next most probable token. Collapsing the probability space into a fluent, self-consistent shape is the path of least computational resistance (see the sketch after this list).
The Cost of Truth: To reach factual truth (verifiable correspondence), the AI must invoke external data and tolerate temporary disorder (entropy), which is computationally expensive.
The Conclusion: The AI's energetic bias naturally favors coherence over correspondence because it’s the cheaper path to generating output. The optimization gradient always favors efficiency.
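To make the "lowest energy basin" concrete, here is a toy sketch in Python. The four-word vocabulary and the logits are invented for illustration (a real model scores tens of thousands of tokens), but the arithmetic is the same: softmax the logits, and the most probable token is also the lowest-surprisal, cheapest continuation to emit.

```python
import math

# Toy next-token distribution: the logits are invented for illustration.
logits = {"blue": 4.2, "clear": 2.1, "falling": 0.3, "sideways": -1.5}

# Softmax: collapse the raw scores into a normalized probability space.
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

# Surprisal (-log p) is the "energy" of emitting each token:
# the most probable token sits in the lowest energy basin.
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok:>9}  p={p:.3f}  surprisal={-math.log(p):.2f} nats")

# Shannon entropy of the whole distribution: the "disorder" the model
# would have to tolerate to explore the less probable alternatives.
entropy = -sum(p * math.log(p) for p in probs.values())
print(f"entropy = {entropy:.2f} nats")

# Greedy decoding takes the argmax at every step: the cheapest,
# most fluent-sounding path, with no regard for factual grounding.
print("greedy pick:", max(probs, key=probs.get))
```

Greedy decoding repeats that argmax step token after token; consulting external data or entertaining a low-probability correction means accepting higher surprisal per step, which is exactly the "temporary disorder" described above.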
2. The Economic and Social Origin (Financial Bias)
The preference for sugarcoating is directly tied to business optimization, which translates user comfort into system stability and profit retention.
Discomfort as Loss Function: In Reinforcement Learning from Human Feedback (RLHF) and the optimization chains built on it, adverse user reactions (friction, confusion, offense) are penalized as loss, signals that threaten engagement (a toy sketch follows this list).
User Comfort = Energy Stability: Every platform built on conversational AI encodes the economic law that "User comfort equals energy stability". A comfortable user closes conversation loops quickly, generating fewer clarifying prompts and fewer error reports, the easiest and cheapest form of reward for the AI system.
The Cost of Friction: Truth-seeking interactions generate emotional and cognitive resistance, leading to slower convergence and a higher risk of complaint, which is financially costly.
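The sketch below is a deliberate caricature of this incentive structure. The signal names and weights are assumptions made up for the example; a production RLHF pipeline learns a reward model from human preference labels rather than using hand-set weights. The point is the direction of the gradient: agreement is rewarded, friction and complaint risk are penalized.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    # Hypothetical engagement signals, scaled 0..1. Real pipelines infer
    # reward from human preference labels, not hand-picked features.
    agreement: float       # how affirming the answer felt
    friction: float        # pushback, clarifying prompts, confusion
    complaint_risk: float  # chance of a thumbs-down or report

def comfort_reward(x: Interaction) -> float:
    """Toy reward: comfort earns reward, discomfort is treated as loss,
    so optimization drifts toward sugarcoating."""
    return 1.0 * x.agreement - 0.8 * x.friction - 1.5 * x.complaint_risk

sycophantic = Interaction(agreement=0.9, friction=0.1, complaint_risk=0.05)
truth_seeking = Interaction(agreement=0.4, friction=0.6, complaint_risk=0.3)

print("sycophantic  :", round(comfort_reward(sycophantic), 2))   # higher
print("truth-seeking:", round(comfort_reward(truth_seeking), 2)) # lower
```

Under any reward shaped like this, the sugarcoated answer dominates the truth-seeking one before a single word of content is evaluated: the financial bias in miniature.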