Computers can be thought of as generating a computational *space*. Imagine a boundless 3D space inside which abstract machines can be built and run. These machines perform all the operations and manipulations on data that computer programs do: they do mathematical operations, they parse and process data, they send data to peripherals, and so on.

When the computer is off, this space is empty. When the computer starts up, a relatively small machine is summoned into the space to boot up the rest of the computer’s machinery. Once the computer has finished booting, the compute space has a good deal of machinery in it running constantly. But there is still plenty of space left for new machinery to be summoned into existence and run in arbitrary ways.

When we want to run a new machine, we communicate with some always-running daemon inside the computer which can fetch new machinery for us. We tell it which thing we want to summon, and it goes to fetch it from disk, or perhaps from the Internet. In either case, what comes back into the compute space is a serialized bit-string representation of the machine, which must then be interpreted and assembled into a proper machine that can be fired up.

The compute space, of course, always has machines at the ready which can help assemble and animate these bit-string blueprints into proper machines, and these little machines work to build up the new one. The new machine has now been brought out of the cold storage of disk, or in from some far-off place on the Internet, into the dynamic and *living* compute space which is the memory space.
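A minimal sketch of this summoning, in Python. The serialized "machine" here is hypothetical: a bit string holding Python source, which the live compute space must interpret and assemble before it can run.

```python
# The serialized blueprint: a bit-string representation of a machine,
# as it might come back from disk or the network.
serialized = b"def add(a, b):\n    return a + b\n"

# "Assemble" the blueprint into a runnable machine.
namespace = {}
code = compile(serialized, "<summoned>", "exec")  # interpret the bit string
exec(code, namespace)                             # animate it in memory

# The new machine is now alive in the compute space and can be fired up.
machine = namespace["add"]
print(machine(2, 3))  # prints 5
```

The same shape appears everywhere: an OS loader mapping an executable into memory, a browser compiling fetched JavaScript, a JVM loading a class file.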

We can imagine the compute space has some fixed central volume wherein machinery can run very fast and many machines can keep track of their work all at once. The size of this central volume is determined by the amount of *memory* the computer has. We can imagine the compute space extending out beyond the memory space, but out there things are static: they don’t really move, and the machinery out there doesn’t run until we pull it into the living memory space and start it up. But of course the cold and static disk space is very, very large, so we move all kinds of unused machinery and data out there for storage.

Every (non-trivial) compute space is essentially the same, except that they have bigger or smaller memory and disk spaces, and they may be able to run their machinery faster or slower…

Quantum Turing machines and “normal” Turing machines have access to, or generate, the same computational space, but a qTM can operate *fundamentally* faster in that space on certain problems. So there is nothing that quantum computers do which classical computers cannot do, given enough time. That is to say, the qTM is no more powerful than the traditional Turing machine in what it can compute, but it *can* operate faster.
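A toy illustration of why the classical machine can always keep up, given enough time: a quantum state can be simulated classically as a vector of amplitudes, with gates applied by plain matrix multiplication. Here a single qubit gets a Hadamard gate; scaling this up costs time and space exponential in the number of qubits, which is exactly where the speed gap lives.

```python
import math

h = 1 / math.sqrt(2)
H = [[h, h],
     [h, -h]]          # the Hadamard gate as an ordinary matrix

state = [1.0, 0.0]     # a qubit in the |0> state, as two amplitudes

# Apply the gate classically: new_i = sum_j H[i][j] * state[j]
state = [sum(H[i][j] * state[j] for j in range(2)) for i in range(2)]

# Measurement probabilities are the squared amplitudes.
probs = [a * a for a in state]
print(probs)  # roughly [0.5, 0.5]: an equal superposition
```

Nothing quantum happened in the computer running this; it simply paid, in classical arithmetic, the cost the qTM gets for free.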

Finite automata and other weaker machines have access to a *strictly smaller* space, so there are things that Turing machines can compute which they cannot compute, *ever*.
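The classic example of that strict gap: a finite automaton has only a fixed, finite set of states, so it can check a property like "an even number of a's", but it can never recognize the language aⁿbⁿ, which requires counting arbitrarily high. A sketch of both sides:

```python
def dfa_even_as(s):
    """A two-state DFA: accepts strings with an even number of 'a's."""
    state = "even"
    for ch in s:
        if ch == "a":
            state = "odd" if state == "even" else "even"
    return state == "even"

def anbn(s):
    """Recognizes a^n b^n (n >= 1) by counting, which needs unbounded
    memory -- a Turing machine can do this, no DFA can."""
    n = s.count("a")
    return n > 0 and s == "a" * n + "b" * n

print(dfa_even_as("aabaa"))  # True: four a's
print(anbn("aaabbb"))        # True
print(anbn("aabbb"))         # False
```

No matter how many states you give the DFA, some n exceeds its capacity to count, and it must confuse aⁿbⁿ with a string it should reject.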