So all hardware limitations on us boil down to speed and memory
capacity. And both of those can be augmented to the level of any other
entity that is in the universe. Because if somebody builds a computer
that can think faster than the brain, then we can use that very computer
or that very technology to make our thinking go just as fast as that. So
that's the hardware.
[…]
So if we take the hardware, we know that our brains are Turing-complete
bits of hardware, and therefore can exhibit the functionality of
running any computable program and function.
So the more memory and time you give it, the more closely it could
simulate the whole universe. But it couldn't ever simulate the whole
universe or anything near the whole universe because it is hard for it
to simulate itself. Also, the sheer size of the universe is large.
I've always found those statements a bit strange and confusing, so it
seems worthwhile to tease apart what they could mean.
The question "is a human a Turing machine" is probably meant to
convey "can a human mind execute arbitrary programs?", that is
"are the languages the human brain emit at least recursively
enumerable?",
as opposed to e.g.
context-free
languages.
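To make that framing concrete, here is a small sketch (my own
illustration, not from the quoted statements): a context-free language
has a recognizer that always halts and needs only one stack-like
counter, whereas "recursively enumerable" merely demands a recognizer
that halts on members and may in principle run forever otherwise.

```python
# Illustrative recognizers for two levels of the Chomsky hierarchy.
# My own sketch; Python's unbounded integers stand in for idealized memory.

def in_anbn(s: str) -> bool:
    """a^n b^n is context-free: one counter (equivalently, a pushdown
    stack) decides it, and the recognizer always halts."""
    n = s.count("a")
    return s == "a" * n + "b" * n

def collatz_reaches_one(n: int) -> bool:
    """The shape of a recognizer for a recursively enumerable language:
    it halts and accepts on members, but may in principle loop forever
    on non-members. (Whether this particular loop halts for every
    positive n is the open Collatz conjecture.)"""
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return True

assert in_anbn("aabb") and not in_anbn("aab")
assert collatz_reaches_one(27)  # halts (after 111 steps)
```

The question is then whether a brain's behaviour requires the second
kind of recognizer, or only weaker ones.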
My first reaction is that humans are definitely not Turing
machines, because we lack the unbounded memory
a Turing machine has in the form of
its (idealized) tape. Indeed, in the Chomsky
hierarchy
humans aren't even at the level of pushdown
automata
(since we lack an infinitely deep stack);
instead, we are nothing more than finite state
automata.
(I remember a professor pointing out to us that all physical
instantiations of computers are merely finite-state automata).
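That observation can be made concrete by capping the counter of the
a^n b^n recognizer above at some fixed width; the cap below is an
arbitrary toy value of my choosing:

```python
# The a^n b^n check again, but with the counter saturating at a fixed
# value -- a stand-in for any physically bounded memory. With only
# finitely many counter values this is a finite state automaton, and no
# finite state automaton recognizes a^n b^n.

CAP = 7  # toy value; a k-bit counter would give CAP = 2**k - 1

def in_anbn_bounded(s: str) -> bool:
    count, i = 0, 0
    while i < len(s) and s[i] == "a":
        count = min(count + 1, CAP)  # saturates: information is lost here
        i += 1
    while i < len(s) and s[i] == "b":
        count -= 1
        i += 1
    return i == len(s) and count == 0

assert in_anbn_bounded("a" * 7 + "b" * 7)  # still correct below the cap
assert in_anbn_bounded("a" * 9 + "b" * 7)  # wrongly accepts above it
```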
It might be that Quantum finite
automata
are of interest, but I don't know enough about quantum physics
to make a judgment call.
The above argument only applies if we regard humans as closed systems
with clearly defined inputs and outputs. When probed, many proponents
of the statement "humans are Turing machines" indeed fall back to
the motte that, in
principle, a human could execute any algorithm, given enough time,
pen and paper.
This seems true to me, assuming that the matter in the universe
is not limited in the amount of computation it can perform.
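That motte can itself be made concrete in a few lines: the human plays
the part of the finite rule-following controller, and the pen and paper
play the part of a tape that grows on demand. A minimal sketch (the
interpreter and the toy rule table are my own, assuming the standard
single-tape Turing machine formulation):

```python
# A minimal Turing machine interpreter: the finite controller is a small
# rule table, and the "pen and paper" is a dict-backed tape that extends
# on demand. The example machine below is my own toy: it appends a '1'
# to a unary string, i.e. computes n -> n + 1.

def run_tm(rules, tape_input, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape_input))  # sparse, unbounded in both directions
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": +1}[move]
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Rule table: (state, read symbol) -> (write symbol, move, next state).
successor = {
    ("start", "1"): ("1", "R", "start"),  # walk right over the 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}

print(run_tm(successor, "111"))  # -> "1111"
```

The rule table stays finite no matter what is being computed; all the
unboundedness lives in the tape.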
On the one hand, I'd argue that, by orchestrating
exactly the right circumstances, a tulip could
receive specific stimuli to grow in the right
directions, knock the correct things over, lift
other things up with its roots, create offspring
that perform subcomputations &c, so as to execute arbitrary
programs. Conway's Game of Life certainly manages
to!
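For comparison, the entire "physics" underlying the Game of Life's
Turing completeness fits in a few lines; here is one update step over a
sparse set of live cells (my own minimal implementation):

```python
# One update step of Conway's Game of Life on a sparse set of live cells.
# These few lines are the whole rule set; Turing completeness is then a
# matter of arranging the right initial pattern (gliders, glider guns,
# eaters as logic gates, ...).

from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    # Count, for every cell, how many live neighbours it has.
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step iff it has 3 live neighbours,
    # or has 2 live neighbours and is already alive.
    return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in live)}

# A glider: after 4 steps the pattern reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(x + 1, y + 1) for x, y in glider}
```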
One might object that this scenario is set up for the tulip
to succeed, but then we also put the human in a room with
unlimited pen and paper.
On the other hand, those circumstances would have to
be very exact, much more so than with humans. But that
again is a difference in degree, not in kind.
All in all, I come down to the following belief: humans are
certainly not Turing machines; however, there might be a (much weaker)
notion of generality that humans fulfill and other physical systems don't
(or do so to a lesser degree). But this notion of generality is purported
to be stronger than the notion of life.
I don't know of any formulation of such a criterion of generality,
but would be interested in seeing it fleshed out.