
When we communicate with an Artificial Intelligence like ChatGPT or Siri, something strange happens: part of us knows it’s just code and circuits, yet another part reacts as if we’re speaking to something alive. We might crack jokes, say thank you, even get frustrated when it doesn’t “understand” us. Why is that? Following my “conversations with Vigil”, I’ve become increasingly interested in the ability of Large Language Models (LLMs) to solve ever more complex human problems.
At the heart of this topic lies a deeper question, one that has fascinated me for a while: Can machines truly think, or are they simply simulating intelligence so convincingly that we can’t tell the difference? And if they can think, what does that say about the nature of our own minds?
Drawing inspiration from The Psychology of AI by Tony Prescott, this post explores the biological machinery of human thought, the rise of artificial neural networks, and the philosophical puzzle of whether a simulated brain could ever be truly conscious. As educators and lifelong learners, understanding this evolving relationship between humans and intelligent machines isn’t just a curiosity; it’s essential for preparing the next generation to think critically about the future they’re inheriting.
1. The Brain as a Biological Network
The brain is composed of billions of cells, most crucially neurons, which are specialized for communication and behaviour. While neurons share many characteristics with other cells (e.g., having a nucleus and a membrane), their defining feature is the ability to extend projections (axons and dendrites), somewhat like the roots of a plant, that let them communicate across the body.
The human brain has around 100 billion neurons, each forming thousands of synaptic connections, resulting in a connectome with up to 100 trillion connections.
2. How Neurons Communicate
Neurons communicate through:
- Electrical signals (spikes or action potentials), which are all-or-nothing pulses.
- Chemical signals via neurotransmitters and neuromodulators (like dopamine, serotonin, oxytocin, and norepinephrine).
Synapses play a critical role. When one neuron fires, it increases (excitatory) or decreases (inhibitory) the chance that the next neuron will fire. This leads to cascading effects throughout neural networks, enabling everything from movement to thought.
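This excitatory/inhibitory cascade can be caricatured with simple threshold units. This is a toy sketch, not a biophysical model: positive weights stand in for excitatory synapses, negative weights for inhibitory ones, and one unit’s firing feeds the next.

```python
def fires(inputs, weights, threshold=1.0):
    """All-or-nothing: fire (True) only if the summed synaptic drive reaches threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

# Stage 1: two upstream neurons, one excited, one inhibited.
a = fires([1], [1.2])           # strong excitatory input -> a fires
b = fires([1], [-0.4])          # inhibitory input -> b stays silent
# Stage 2: a downstream neuron that needs both a and b to fire.
c = fires([a, b], [0.7, 0.7])   # cascade: b's silence keeps c below threshold
print(a, b, c)  # True False False
```

Silencing one upstream neuron changes what every neuron downstream of it does, which is the cascading effect described above.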
3. Oscillations and Neural Synchrony
The brain’s electrical activity occurs in rhythmic waves or oscillations:
- During sleep, these oscillations slow to ~1 Hz.
- When awake, they speed up to 100 Hz or more.
- Frequencies in the 13–30 Hz (beta) range become dominant during focused problem-solving.
These waves arise from the synchronisation of neural firing, contributing to coherence of thought and possibly the emergence of consciousness.
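The effect of synchronisation can be illustrated numerically. In this toy example (assuming, for simplicity, that each neuron contributes a 20 Hz sinusoid), phase-aligned neurons sum to a large population signal, while randomly phased ones largely cancel out:

```python
import math
import random

def population_amplitude(n_neurons, synchronized, freq_hz=20.0, t=0.0123):
    """Sum n_neurons sinusoidal contributions at time t (seconds)."""
    random.seed(42)  # fixed seed so the desynchronised case is reproducible
    total = 0.0
    for _ in range(n_neurons):
        # Synchronised neurons share a phase; desynchronised ones are random.
        phase = 0.0 if synchronized else random.uniform(0, 2 * math.pi)
        total += math.sin(2 * math.pi * freq_hz * t + phase)
    return abs(total)

print(population_amplitude(1000, synchronized=True))   # large coherent signal
print(population_amplitude(1000, synchronized=False))  # mostly cancels out
```

This is why synchronous firing shows up as strong rhythmic waves in recordings such as EEG, whereas the same neurons firing out of phase would look like noise.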
4. Are Neurons Like Transistors?
A popular analogy is that neurons behave like digital transistors (on/off). This inspired Warren McCulloch and Walter Pitts (1943) to suggest that neurons can perform logical operations, like simple computers.
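The McCulloch–Pitts idea is easy to sketch: a binary “neuron” outputs 1 if and only if the weighted sum of its inputs reaches a threshold, and with the right weights such units compute logic gates. The weights below are illustrative choices, not unique ones.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: binary output from a thresholded weighted sum."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)   # needs both inputs
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)   # needs at least one
def NOT(a):    return mp_neuron([a], [-1], threshold=0)        # inhibitory weight

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

Since AND, OR and NOT suffice to build any Boolean circuit, networks of such units can in principle compute anything a digital computer can, which is what made the 1943 paper so influential.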
However, neurons are more than binary switches. They show:
- Temporal patterns of firing
- Responses influenced by neurochemistry
- Behaviour that’s often analogue, not just digital (graded, continuous responses)
Hence, neurons are hybrid devices, part digital and part analogue.
5. Comparing Brains and Computers
Prescott’s book raises the philosophical and scientific question:
“Can a simulated brain be thought of in the same way as a biological brain?”
We can model neurons mathematically (computational neuroscience), simulate them, and even replicate firing patterns. But simulations lack brain chemistry, just like a simulated rainstorm doesn’t make you wet.
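A classic starting point in computational neuroscience is the leaky integrate-and-fire model: membrane voltage integrates input current, leaks back toward rest, and emits an all-or-nothing spike on crossing a threshold. The parameters below are illustrative, and this minimal sketch omits the chemistry the paragraph above mentions:

```python
def simulate_lif(input_current, dt=1.0, tau=10.0, threshold=1.0, reset=0.0):
    """Return spike times (ms) for a leaky integrate-and-fire neuron
    driven by a list of input-current samples, one per time step."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leak toward rest, integrate input
        if v >= threshold:            # all-or-nothing spike
            spikes.append(step * dt)
            v = reset                 # voltage resets after firing
    return spikes

print(simulate_lif([0.15] * 100))  # 100 ms of constant drive -> regular spiking
```

Even this crude model reproduces a recognisable firing pattern (regular spiking under constant drive), which is the sense in which simulations can “replicate” neural activity without replicating its substrate.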
Still, many cognitive scientists argue that the brain is fundamentally a computational system, which processes data according to rules—much like a computer.
Do Machines Understand?
The Turing Test vs. The Chinese Room
In 1950, Alan Turing proposed a deceptively simple question: Can machines think? Unable to define “thinking” precisely, Turing reframed the problem. Instead of asking whether machines are conscious, he asked whether their behaviour could convince a human being that they are intelligent.
This led to the now-famous Turing Test: if a computer can carry on a text-based conversation so convincingly that a human cannot reliably tell whether they are speaking to a machine or a person, the machine could be said to exhibit “intelligence.” Turing predicted that by the year 2000, machines would be able to fool 30% of human interlocutors after five minutes of conversation. While that prediction didn’t fully materialise by 2000, today’s large language models (like ChatGPT) have come remarkably close.
But does passing the test mean a machine truly understands anything?
Enter philosopher John Searle, who challenged this idea with his 1980 thought experiment known as the Chinese Room.
The Chinese Room: Simulating vs. Understanding
Imagine someone (Searle himself, in the example) locked in a room. This person doesn’t understand Chinese at all. However, they’re given:
- A huge rulebook in English
- Boxes of Chinese symbols (input and output)
- Instructions on how to manipulate the symbols
When Chinese questions (written in characters) are slid under the door, the person uses the rulebook to match and respond with appropriate Chinese symbols—even though they have no idea what any of it means. To someone outside the room, it would appear as if the person inside understands Chinese. But internally, it’s just rule-following.
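The room’s rule-following can be caricatured as pure pattern lookup. In this toy version (the rulebook entries are invented examples), the program matches input symbols to output symbols with no access to their meaning:

```python
# A toy "Chinese Room": answers come from symbol matching alone.
# The entries are illustrative; the program never represents what they mean.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> canned reply
    "你叫什么名字？": "我叫王小明。",    # "What's your name?" -> canned reply
}

def room(question):
    # Syntax only: look up symbols, emit symbols. No semantics involved.
    return RULEBOOK.get(question, "对不起，我不明白。")  # fallback symbols

print(room("你好吗？"))
```

From outside, the replies look fluent; inside, there is only table lookup. Real systems are vastly more sophisticated than a dictionary, but Searle’s point is that greater sophistication in symbol manipulation is still symbol manipulation.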
This, Searle argued, is how computers operate: they manipulate symbols based on syntax but have no grasp of meaning (semantics). Therefore, even if an AI passes the Turing Test, it does not necessarily understand anything—it merely simulates understanding.
6. Functionalism and the Role of Structure
According to functionalism, it doesn’t matter what brains are made of. What matters is what they do. If a digital system behaves like a brain, it might be considered a brain functionally.
Alan Turing supported this with his insight that any universal computing machine can, in principle, emulate any other. So a sufficiently advanced computer could, in theory, emulate the brain.
7. From Neurons to Minds
While individual neurons aren’t intelligent, intelligence may emerge from the interconnected networks they form. As Marvin Minsky put it:
“The mind is made up of things that are mindless.”
Thus, architecture is key: just as buildings are shaped by how parts are assembled, intelligence emerges from how neurons and brain areas are structured and interconnected.
8. Replacing Brain Parts with Electronics
Modern technology is already replacing parts of the nervous system with electronics:
- Cochlear implants replace sensory functions of the inner ear.
- Retinal implants aim to restore vision.
- Parkinson’s implants stimulate malfunctioning circuits.
- Bluetooth spinal implants have reconnected severed nerve pathways.
These are early examples of electronic replacements for parts of the brain and nervous system.
9. Towards Artificial Brains
Researchers are now using microchips to mimic parts of the brain (e.g., cerebellar circuits in rats). These neuromorphic systems use spikes that resemble real neural activity.
This leads to a profound thought experiment:
If we replace neurons one by one with artificial equivalents, at what point do we stop being biological and become artificial?
Would the resulting system still think, feel, or be conscious?
Concluding thoughts…
We began with a deceptively simple question: Can machines think like us, or are they just pretending? As we’ve seen, the answer isn’t straightforward.
The brain, with its 100 billion neurons firing in synchrony, is not merely a biological computer. It is a living, dynamic system shaped by emotion, memory, chemistry, and meaning. While artificial systems can now mimic aspects of this behaviour (e.g. processing language, generating creative output, even replicating brain wave patterns), they remain fundamentally different in one key respect: they lack subjective experience.
AI can be functional. It can be persuasive. It can even pass the Turing Test, making it appear that it understands. But as Searle’s Chinese Room reminds us, simulation is not the same as comprehension. The ability to manipulate symbols is not the same as knowing what they mean.
Yet, this does not render AI meaningless. On the contrary, its development challenges us to better understand our own minds. In building machines that simulate thought, we’re forced to ask: What is real thought? What is understanding? What does it mean to be conscious?
As technology continues to blur the boundary between human and machine, through implants and neural networks, the question becomes less about what AI is and more about what it reveals about us.
If the mind is more than the sum of its parts, then so too must be our approach to intelligence, biological or artificial. It must consider not only what a system can do, but what it is like to be that system.
So, can machines think like us?
Not yet.
Perhaps never.
But in trying to answer that question, we may come closer to understanding the mystery of our own minds, and what it truly means to be human.