Humans, Animals, Machines: Notes from Paul Thagard's Cognitive Science Lens
A synthesis of Waterloo-era cognitive science questions: where consciousness might begin in biology, what computation captures, and what the Chinese Room argument still implies about AI's limits.
- AI Philosophy
- Cognitive Science
- Paul Thagard
- Consciousness
- Chinese Room
One of the most useful frames I got from Waterloo came from Paul Thagard’s cognitive science teaching, especially the course theme: intelligence in humans, animals, and machines.
At the time, it felt mostly theoretical. Now it feels operational.
Thagard’s broader work treats mind as natural and mechanistic while still multi-level: neural activity, mental representations, social context, and explanatory coherence all matter together, not in isolation.
For builders in 2026, that matters a lot.
Quick definitions (plain English)
- Computational theory of mind: the view that thinking itself is a kind of information processing, so at least some aspects of thought can be modeled computationally.
- Phenomenological consciousness: subjective felt experience, the “what it is like” part of mind.
- Syntax vs semantics: syntax is rule-based symbol handling; semantics is meaning.
- Ontology: what a system is in reality, not just what it appears to do.
Start from biology: where does consciousness begin?
If humans are continuous with animal evolution, then intelligence is not a binary switch that appears out of nowhere.
We can trace increasing capacities across life:
- simple sensing,
- pain/pleasure gradients,
- memory and prediction,
- social signaling,
- planning,
- metacognition.
Then the hard question appears:
If humans are conscious, what about chimps? If chimps, what about monkeys? If monkeys, what about dogs or rats? What about lobsters, worms, insects, tardigrades, amoebas? What about plants?
The boundary is not obvious.
The deeper we go down the evolutionary stack, the more “self” starts to look like an adaptive control function: maintain integrity, secure resources, avoid terminal damage, reproduce.
Even single-cell life exhibits basic self-maintenance dynamics.
That does not immediately prove phenomenological consciousness, but it does complicate simplistic stories that reserve all interiority for humans alone.
Now start from computation
From the other end, we can model and reproduce some high-level cognitive functions without biology:
- language modeling,
- memory retrieval,
- reasoning traces,
- planning scaffolds,
- multimodal synthesis.
LLMs can now approximate formal rational behavior in many contexts where humans historically had a monopoly.
So the puzzle sharpens:
If form and behavior can be approximated computationally, does that imply inner experience? Or are we seeing function without feeling?
Searle, revisited with one extra step
Searle’s Chinese Room remains clarifying.
A person in a room manipulates Chinese symbols using a rulebook and produces fluent outputs. Outside observers think the room “understands Chinese.” Searle says no: syntax is not semantics.
Now push the thought experiment one step further.
Remove the person entirely. Keep only the mechanism: inputs, symbol operations, outputs.
What remains is plainly a machine process.
It computes. It maps. It does not obviously understand in the human experiential sense.
That does not make it useless. It makes the ontology explicit.
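To make that ontology concrete, here is a toy sketch of the person-free room: a hypothetical rulebook implemented as a lookup table. The symbols and rules are invented for illustration; the point is that the program emits fluent-looking output while attaching no meaning to any symbol it handles.

```python
# Toy "Chinese Room" mechanism: inputs, symbol operations, outputs.
# The rulebook below is a made-up lookup table; to the program,
# the Chinese strings are opaque tokens, not meanings.

RULEBOOK = {
    "你好吗": "我很好，谢谢",      # rule: if input is this symbol string, emit that one
    "你叫什么名字": "我叫房间",
}

def room(symbols: str) -> str:
    """Map input symbols to output symbols by rule lookup alone."""
    return RULEBOOK.get(symbols, "请再说一遍")  # fallback rule for unknown input

print(room("你好吗"))  # fluent output; zero understanding inside
```

The mapping is pure syntax: swap every string for a numeric ID and the program behaves identically, which is exactly why fluency alone cannot settle the question of understanding.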
Machine intelligence vs biological intelligence
My current view is strict on this distinction:
- Machine intelligence is real in the functional sense.
- Biological intelligence is historically evolved, embodied, and affectively grounded.
- We do not yet have strong evidence that machine intelligence entails biological-style consciousness.
The two can overlap behaviorally while diverging ontologically.
That is uncomfortable, but it is the cleanest frame I know.
Why this matters for product builders
If you confuse functional competence with human-like interiority, product decisions drift toward anthropomorphic theater.
If you dismiss machine competence because you cannot prove consciousness, you miss genuine capability.
So the practical discipline is:
- measure behavior honestly,
- describe ontology conservatively,
- design accountability around human responsibility.
The meaning question
Machines are products of human civilization. They are extensions of our representational and mechanical imagination.
In that sense, they remain tools, even extremely advanced tools.
They may one day exceed us in many performance domains, but the assignment of purpose is still a human act.
Machine supports existence. It does not author existence.
QED, at least for now.
References
- Paul Thagard’s work and publications: paulthagard.com
- John Searle, Chinese Room argument background: Stanford Encyclopedia of Philosophy
Related notes:
- Revisiting Searle’s Chinese Room
- Imperfect Humans, Perfect Simulacra
- Running a Factory for Agent Teams
Your curious human friend,
Oli
March 3, 2026