Musings on the AI Gold Rush

Judgement, power, and what gets averaged when nobody is watching

[Image: a Raspberry Pi can run AI]

Gold rushes are rarely about progress.
They are about attention, speculation, and the rapid redirection of resources toward whatever looks most likely to pay off before the rules harden.

Sometimes there really is gold.
But the rush itself is never the advance.

What we are currently calling “AI” sits squarely in that territory.

The technology is real. The capabilities are real. But the way it is being framed, funded, and deployed is already distorting judgement. That distortion is worth naming before it becomes invisible.

Artificial or averaged?

“Artificial Intelligence” is a flattering phrase. It implies a constructed form of understanding.

What most deployed systems actually resemble is averaged intelligence.

Large language models do not reason in the human sense. They aggregate. They smooth. They privilege what is most frequent, most reinforced, most statistically stable. Their centre of gravity is pulled toward the mean, the mode, the median of their training data and alignment constraints.
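The pull toward the mode can be sketched in a few lines. Everything below is invented for illustration: a toy set of observed continuations and a greedy sampler that always emits the most frequent one. The majority pattern wins every time, and the minority patterns, though present in the data, never surface.

```python
from collections import Counter

# Toy "training data": observed continuations of the same prompt.
# Counts are invented purely for illustration.
continuations = (
    ["the sky is blue"] * 7
    + ["the sky is overcast"] * 2
    + ["the sky is falling"] * 1
)

counts = Counter(continuations)

# Greedy "averaging": always pick the statistically dominant pattern.
most_common, _ = counts.most_common(1)[0]
print(most_common)  # the majority continuation wins every time

# The minority continuations still exist in the data,
# but a mode-seeking system never surfaces them.
minority = [c for c in counts if c != most_common]
print(minority)
```

Real models sample from a full distribution rather than taking the strict mode, but the gravitational pull is the same: frequency in the corpus becomes prominence in the output.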

That averaging is not neutral.

This becomes obvious at the edges: systems that can just as easily undress or redress bodies in images, reshape faces toward dominant ideals, or erase features deemed undesirable are not discovering preferences. They are reflecting demand, optimisation targets, and commercial tolerance. What the average contains depends on:

  • who supplies the data
  • who pays for the compute
  • who defines acceptable outputs
  • who absorbs legal and reputational risk

Once commercial power enters the averaging process, the “average” stops being descriptive and becomes proprietor-biased.

This isn’t malice.
It’s the cold calculus of reason under constraint.

Legere and intellegere

Latin is more revealing than modern marketing.

Legere means to read, to gather, to collect.
Intellegere means to choose between, to discern, to understand.

Modern AI systems are formidable legere engines. They read vast corpora of human-generated text, traverse associations, and surface patterns at a scale no human could manage.

But they bind patterns through stochastic sequence modelling, chaining conditional probabilities in a lineage that descends from Markov chains, though vastly expanded in depth, state, and context.
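The Markov-chain ancestry can be made concrete in miniature. The sketch below is a toy, with a one-line corpus standing in for a training set: each next word is drawn from the empirical distribution of words observed after the current word. That is chaining of conditional probabilities, with no discernment anywhere in the loop.

```python
import random
from collections import defaultdict

# Toy corpus; invented purely for illustration.
corpus = "the model reads the text and the model smooths the text".split()

# Build a bigram transition table: word -> list of observed next words.
# Repeated entries make sampling frequency-weighted by construction.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Chain conditional probabilities: sample each next word from
    the empirical distribution of words seen after the current one."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        options = transitions.get(word)
        if not options:  # dead end: no observed continuation
            break
        word = rng.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the", 6))
```

A large language model replaces the single-word state with a context window thousands of tokens deep and the lookup table with billions of learned parameters, but the generative move, condition on what came before, sample what tends to come next, is the same.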

The system does not inter-legere.
It cannot decide which pattern deserves weight, even if it sometimes appears to, or even if we would very much like it to.

Understanding is selective. It excludes. It judges relevance. It stops. That act remains human, whether acknowledged or quietly outsourced.

Calling a legere engine “intelligent” blurs that boundary and invites abdication of responsibility.

Medusa, hypertext, and the return of the snakes

Writing fixed thought, but it also froze it. Ideas became stable, portable, and repeatable. Medusa’s gaze turned living snakes into stone.

Hypertext broke that spell. Links reanimated associations. Meaning became non-linear again. The snakes moved.

AI systems exploit that hypertextual space at industrial scale:

  • every association visible
  • every path available
  • every continuation encouraged

This is powerful. It is also destabilising.

Medusa was not dangerous because she deceived.
She was dangerous because she overwhelmed perception.

Unfiltered abundance paralyses judgement just as effectively as censorship.

Perseus’s mirror: practice and restraint

Perseus did not defeat Medusa by force. He used indirection. A mirror. A boundary that allowed perception without petrification.

That mirror is practice.

In therapy, in thinking, in daily life, the work is not generating more material. It is choosing what to attend to, what to ignore, what to hold lightly, and what to refuse.

AI can surface patterns.
It cannot choose meaning.

That responsibility remains human.

The machinery of the rush

Gold rushes enrich those who sell infrastructure before they enrich those who dig.

The modern equivalents are not pickaxes but:

  • GPUs and specialised accelerators
  • CPUs, RAM, and storage at scale
  • datacentres
  • power, water, cooling
  • global communications links

These are expensive, centralised, and owned by a small number of actors.

As a result, today’s AI systems are shaped less by philosophical intent than by:

  • winner-takes-all economics
  • capital intensity
  • regulatory risk
  • reputational defensiveness

Free users get one tune.
Subscribers get more steering.
Proprietors set the key.

That is not a conspiracy. It is how markets behave under scarcity and scale.

Prudence, not panic

The appropriate response is neither rejection nor surrender, but prudence.

In therapy:

  • AI should never be the voice
  • never the authority
  • never the final word
  • always bounded, optional, and subordinate to human judgement

In ordinary living:

  • fluent output should be treated as suggestion, not understanding
  • averages should be noticed for what they erase
  • convenience should be recognised as carrying someone else’s priorities

As Robert Burns warned in Epistle to a Young Friend (1786):

I’ll no say, men are villains a’;
The real, harden’d wicked,
Wha hae nae check but human law,
Are to a few restricked:
But Och, mankind are unco weak,
An’ little to be trusted;
If Self the wavering balance shake,
It’s rarely right adjusted!

Burns is not warning about evil. He is warning about self-interest quietly tipping the scales.
Averaging systems faithfully reproduce that weakness unless actively restrained.

Rereading rewards: Scottish Poetry Library, “Epistle to a Young Friend”.

Who chooses what matters?

That is the only question that survives the gold rush.

Not “is AI intelligent?”
Not “will it replace us?”

But:

  • who sets the averages
  • who weights the data
  • who defines safety
  • who decides relevance
  • and who remains accountable when judgement is wrong

AI gathers.
Humans discern.

That is why this is worth understanding.

Coda: therapy, care, and responsibility

In therapeutic contexts, this distinction is not academic.

Clients bring lives, vulnerabilities, and histories, not data points. Any system that averages human experience must therefore be treated as context-blind by default, no matter how fluent its output appears.

Ethical practice in a GDPR-era world is not about pretending data can be made harmless. It is about maintaining human judgement at the point where meaning, risk, and responsibility converge.

AI may assist reflection.
It may surface patterns.
It may help organise material.

It must never decide what matters.

That boundary is not technological.
It is ethical.

And it does not disappear simply because the tools become more persuasive.