author: niplav, created: 2024-03-23, modified: 2024-04-27, language: english, status: finished, importance: 3, confidence: unlikely
I don't think that humans are (equivalent to) Turing machines,
but people who defend the view are barking up the right tree.
I often hear the sentiment that humans are Turing machines, and that this sets humans apart from other pieces of matter. Examples are this thread and a short section in an interview with David Deutsch:
> So all hardware limitations on us boil down to speed and memory capacity. And both of those can be augmented to the level of any other entity that is in the universe. Because if somebody builds a computer that can think faster than the brain, then we can use that very computer or that very technology to make our thinking go just as fast as that. So that's the hardware.
>
> […]
>
> So if we take the hardware, we know that our brains are Turing-complete bits of hardware, and therefore can exhibit the functionality of running any computable program and function.
and:
> So the more memory and time you give it, the more closely it could simulate the whole universe. But it couldn't ever simulate the whole universe or anything near the whole universe because it is hard for it to simulate itself. Also, the sheer size of the universe is large.
I've always found those statements a bit strange and confusing, so it seems worth teasing apart what they could mean.
The question "is a human a Turing machine" is probably meant to
convey "can a human mind execute arbitrary programs?", that is
"are the languages the human brain emit at least recursively
enumerable?",
as opposed to e.g.
context-free
languages.
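To make the difference between those levels concrete, here is a minimal sketch (in Python; the code is mine, not from any of the sources above) of a recognizer for the Dyck language of balanced parentheses, a context-free language. It needs an unbounded counter, a degenerate stack, which is exactly the resource a finite-state automaton lacks:

```python
def is_dyck(s: str) -> bool:
    """Recognize the Dyck language of balanced parentheses."""
    depth = 0  # an unbounded counter, i.e. a stack over a single symbol
    for c in s:
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
            if depth < 0:  # closing bracket without a matching opener
                return False
    return depth == 0

assert is_dyck("(()())")
assert not is_dyck("())(")
assert not is_dyck("((")
```

A DFA with finitely many states must reuse a state somewhere on the inputs `(`, `((`, `(((`, …, and from that point on it can no longer tell how many closing brackets are still owed, so it misclassifies some string.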
- My first reaction is that humans are definitely not Turing machines, because we lack the infinite amount of memory the Turing machine has in the form of an (idealized) tape. Indeed, in the Chomsky hierarchy humans aren't even at the level of pushdown automata (since we lack an infinitely deep stack); instead we are nothing more than finite-state automata. (I remember a professor pointing out to us that all physical instantiations of computers are merely finite-state automata.)
- Depending on one's interpretation of quantum mechanics, one might instead argue that we're at least nondeterministic finite automata or even Markov chains. However, every nondeterministic finite automaton can be transformed into a deterministic finite automaton, albeit at a worst-case exponential increase in the number of states (see the determinization sketch after this list), and Markov chains aren't more computationally powerful either (e.g. they can't recognize Dyck languages, just as DFAs can't).
    - It might be that quantum finite automata are of interest here, but I don't know enough about quantum physics to make a judgment call.
- The above argument only applies if we regard humans as closed systems with clearly defined inputs and outputs. When probed, many proponents of the statement "humans are Turing machines" indeed fall back to the motte that in principle a human could execute any algorithm, given enough time, pen and paper.
    - This seems true to me, assuming that the matter in the universe does not have a limited amount of computation it can perform.
        - In a finite universe we are logically isolated from almost all computable strings (there are infinitely many computable strings, but a finite universe can only ever instantiate finitely many of them), which seems pretty relevant.
- Another constraint comes from computational complexity: should we treat things that are not polynomial-time computable as basically unknowable? Humans certainly can't solve NP-complete problems efficiently.
    - I'm not sure this is a very useful notion.
        - On the one hand, I'd argue that, by orchestrating exactly the right circumstances, a tulip could receive specific stimuli to grow in the right directions, knock the correct things over, lift other things up with its roots, create offspring that perform subcomputations &c, and thereby execute arbitrary programs. Conway's Game of Life certainly manages to (see the Game of Life sketch after this list)! One might object that this is setting the tulip up to succeed, but we also put the human in a room with unlimited pens and paper.
        - On the other hand, those circumstances would have to be very exact, much more so than with humans. But that again is a difference in degree, not in kind.
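To illustrate the exponential blowup mentioned above, here is a sketch (in Python; the code is mine) of the textbook subset construction, run on the standard witness language "the $n$-th symbol from the end is an a": an NFA with $n+1$ states suffices, but determinizing it yields $2^n$ reachable DFA states, since the DFA has to remember the last $n$ symbols it has read.

```python
def nfa_nth_from_end(n: int):
    """NFA over {'a', 'b'} accepting strings whose n-th symbol from the
    end is 'a': state 0 loops over everything and, on reading an 'a',
    guesses that the final n symbols have begun; states 1..n then just
    count the remaining input, with state n accepting."""
    def delta(q, c):
        if q == 0:
            return {0, 1} if c == "a" else {0}
        return {q + 1} if q < n else set()
    return delta, frozenset({0})

def determinize(delta, start):
    """Textbook subset construction: returns the set of reachable DFA
    states, each of which is a set of NFA states."""
    states, todo = {start}, [start]
    while todo:
        current = todo.pop()
        for c in "ab":
            successor = frozenset(r for q in current for r in delta(q, c))
            if successor not in states:
                states.add(successor)
                todo.append(successor)
    return states

for n in range(1, 8):
    delta, start = nfa_nth_from_end(n)
    print(n, len(determinize(delta, start)))  # prints n, 2**n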
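And to back up the aside about the tulip: Conway's Game of Life is indeed Turing-complete, even though each cell is a trivial finite-state machine that reads only its eight neighbours. A minimal sketch of the update rule (code mine), together with the glider, the simplest of the mobile patterns out of which logic gates and eventually whole computers have been assembled inside the Game of Life:

```python
from collections import Counter

def life_step(live: set) -> set:
    """One generation of Conway's Game of Life on an unbounded grid,
    where `live` is the set of coordinates of live cells."""
    # count, for every cell, how many live neighbours it has
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # a cell lives next generation iff it has 3 live neighbours,
    # or 2 live neighbours and is itself alive
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live)
    }

# a glider translates itself by (1, 1) every 4 generations
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```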
All in all, I come down to the following belief: humans are certainly not Turing machines, but there might be a (much weaker) notion of generality that humans fulfill and other physical systems don't (or don't to the same degree). This notion of generality would be stronger than that of life:
$$\text{Turing-completeness} \Rightarrow \text{Context-sensitive} \Rightarrow \text{Context-free} \overset{?}{\Rightarrow} \text{Proposed-generality} \overset{?}{\Rightarrow} \text{Life} \overset{?}{\Rightarrow} \text{Finite-state automata}$$
I don't know of any formulation of such a criterion of generality,
but would be interested in seeing it fleshed out.