
10 Important Differences Between Brains and Computers

“A good metaphor is something even the police should keep an eye on.” – G.C. Lichtenberg

Although the brain-computer metaphor has served cognitive psychology well, research in cognitive neuroscience has revealed many important differences between brains and computers. Appreciating these differences may be crucial to understanding the mechanisms of neural information processing, and ultimately for the creation of artificial intelligence. Below, I review the most important of these differences (and the consequences to cognitive psychology of failing to recognize them); similar ground is covered in this excellent (though lengthy) lecture.

Difference # 1: Brains are analogue; computers are digital
It’s easy to think that neurons are essentially binary, given that they fire an action potential if they reach a certain threshold, and otherwise do not fire. This superficial similarity to digital “1’s and 0’s” belies a wide variety of continuous and non-linear processes that directly influence neuronal processing.

For example, one of the primary mechanisms of information transmission appears to be the rate at which neurons fire, an essentially continuous variable. Similarly, networks of neurons can fire in relative synchrony or in relative disarray; this coherence affects the strength of the signals received by downstream neurons. Finally, inside each and every neuron is a leaky integrator circuit, composed of a variety of ion channels and continuously fluctuating membrane potentials.
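The leaky-integrator idea can be sketched in a few lines of code. This is a minimal, illustrative leaky integrate-and-fire model, not a biophysically calibrated one; the function name `simulate_lif` and all parameter values are invented for the example. Note that the membrane dynamics are continuous, only the threshold crossing is discrete, and stronger input yields a higher (continuous-valued) firing rate.

```python
# Minimal leaky integrate-and-fire sketch. All parameter values are
# illustrative defaults, not measured biological constants, and the
# function name simulate_lif is invented for this example.
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the time steps at which the model neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Continuous dynamics: the membrane leaks toward rest while
        # the input current pushes it up.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_threshold:  # the only discrete event: a threshold crossing
            spikes.append(t)
            v = v_reset
    return spikes

# Stronger input produces a higher firing rate, the essentially
# continuous variable described above.
assert len(simulate_lif([0.4] * 100)) > len(simulate_lif([0.15] * 100))
```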

Failure to recognize these important subtleties may have contributed to Minsky & Papert’s infamous mischaracterization of perceptrons, a neural network without an intermediate layer between input and output. In linear networks, any function computed by a 3-layer network can also be computed by a suitably rearranged 2-layer network. In other words, combinations of multiple linear functions can be modeled precisely by just a single linear function. Since their simple 2-layer networks could not solve many important problems, Minsky & Papert reasoned that larger networks also could not. In contrast, the computations performed by more realistic (i.e., nonlinear) networks are highly dependent on the number of layers; thus, “perceptrons” grossly underestimate the computational power of neural networks.
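The linearity argument can be made concrete in a short sketch. With hand-picked weights (the helper names `matvec`, `matmul`, and `xor_net` are invented for this example), two stacked linear layers are exactly equivalent to one linear layer applied once, while a single nonlinear hidden layer already suffices for XOR, the classic function a 2-layer perceptron cannot compute.

```python
# Two stacked *linear* layers collapse into one: applying W2 after W1 is
# identical to applying their matrix product once. A single nonlinear
# hidden layer, by contrast, already computes XOR. All weights and helper
# names are invented for illustration.

def matvec(W, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[1.0, 2.0], [3.0, 4.0]]   # first linear layer
W2 = [[0.5, -1.0], [2.0, 0.0]]  # second linear layer
x = [1.0, -1.0]

# Depth adds nothing in the linear case:
assert matvec(W2, matvec(W1, x)) == matvec(matmul(W2, W1), x)

# With a step nonlinearity, two hidden units (OR and AND) give XOR:
step = lambda z: 1 if z > 0 else 0
def xor_net(a, b):
    h_or = step(a + b - 0.5)
    h_and = step(a + b - 1.5)
    return step(h_or - h_and - 0.5)  # OR but not AND

assert [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```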

Difference # 2: The brain uses content-addressable memory

In computers, information in memory is accessed by polling its precise memory address. This is known as byte-addressable memory. In contrast, the brain uses content-addressable memory, such that information can be accessed in memory through “spreading activation” from closely related concepts. For example, thinking of the word “fox” may automatically spread activation to memories related to other clever animals, fox-hunting horseback riders, or attractive members of the opposite sex.

The end result is that your brain has a kind of “built-in Google,” in which just a few cues (key words) are enough to cause a full memory to be retrieved. Of course, similar things can be done in computers, mostly by building massive indices of stored data, which then also need to be stored and searched through for the relevant information (incidentally, this is pretty much what Google does, with a few twists).
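To illustrate, here is a toy sketch of content-addressable retrieval, in which memories are ranked by feature overlap with the cue rather than looked up by address. The store, the feature sets, and the `recall` function are all invented for this example.

```python
# Toy content-addressable retrieval: a memory is recalled by how strongly
# its features overlap the cue, not by any address. The store, feature
# sets, and recall function are invented for illustration.
memories = {
    "fox hunt":   {"fox", "horse", "rider", "countryside"},
    "fairy tale": {"fox", "clever", "trickster", "story"},
    "zoo visit":  {"lion", "zebra", "elephant", "cage"},
}

def recall(cues):
    """Return the stored memory with the largest feature overlap with the cues."""
    return max(memories, key=lambda name: len(memories[name] & cues))

assert recall({"fox", "horse"}) == "fox hunt"   # partial cues retrieve a whole memory
assert recall({"lion", "cage"}) == "zoo visit"
```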

Although this may seem like a rather minor difference between computers and brains, it has profound effects on neural computation. For example, a lasting debate in cognitive psychology concerned whether information is lost from memory simply because of decay or because of interference from other information. In retrospect, this debate is partially based on the false assumption that these two possibilities are dissociable, as they can be in computers. Many are now realizing that this debate represents a false dichotomy.

Difference # 3: The brain is a massively parallel machine; computers are modular and serial

An unfortunate legacy of the brain-computer metaphor is the tendency for cognitive psychologists to seek out modularity in the brain. For example, the idea that computers require memory has led some to search for the “memory area,” when in fact these distinctions are far more messy. One consequence of this over-simplification is that we are only now learning that “memory” regions (such as the hippocampus) are also important for imagination, the representation of novel goals, spatial navigation, and other diverse functions.

Similarly, one could imagine there being a “language module” in the brain, as there might be in computers with natural language processing programs. Cognitive psychologists even claimed to have found this module, based on patients with damage to a region of the brain known as Broca’s area. More recent evidence has shown that language too is computed by widely distributed and domain-general neural circuits, and Broca’s area may also be involved in other computations (see here for more on this).

Difference # 4: Processing speed is not fixed in the brain; there is no system clock

The speed of neural information processing is subject to a variety of constraints, including the time for electrochemical signals to traverse axons and dendrites, axonal myelination, the diffusion time of neurotransmitters across the synaptic cleft, differences in synaptic efficacy, the coherence of neural firing, the current availability of neurotransmitters, and the prior history of neuronal firing. Although there are individual differences in something psychometricians call “processing speed,” this does not reflect a monolithic or unitary construct, and certainly nothing as concrete as the speed of a microprocessor. Instead, psychometric “processing speed” probably indexes a heterogeneous combination of all the speed constraints mentioned above.

Similarly, there does not appear to be any central clock in the brain, and there is debate as to how clock-like the brain’s time-keeping devices actually are. To use just one example, the cerebellum is often thought to calculate information involving precise timing, as required for delicate motor movements; however, recent evidence suggests that time-keeping in the brain bears more similarity to ripples on a pond than to a standard digital clock.

Difference # 5: Short-term memory is not like RAM

Although the apparent similarities between RAM and short-term or “working” memory emboldened many early cognitive psychologists, a closer examination reveals strikingly important differences. Although RAM and short-term memory both seem to require power (sustained neuronal firing in the case of short-term memory, and electricity in the case of RAM), short-term memory seems to hold only “pointers” to long-term memory, whereas RAM holds data that is isomorphic to that being held on the hard disk. (See here for more about “attentional pointers” in short-term memory.)

Unlike RAM, the capacity limit of short-term memory is not fixed; the capacity of short-term memory seems to fluctuate with differences in “processing speed” (see Difference #4) as well as with expertise and familiarity.
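The pointer-versus-copy contrast can be sketched as follows. This is only an analogy in code, with invented names and contents: the RAM-style store copies the data outright, while the pointer-style store keeps only a reference, so what it retrieves always tracks the current state of the long-term trace.

```python
# Toy contrast between RAM-style copying and pointer-style working memory.
# All names and contents are invented for illustration.
long_term = {"grandmother": {"face": "features", "stories": "original"}}

# RAM-style: the working store holds its own copy of the data.
ram_style = dict(long_term["grandmother"])

# Pointer-style: working memory holds only a cue/reference into
# long-term memory, so retrieval always reflects the current trace.
working_memory = ["grandmother"]

long_term["grandmother"]["stories"] = "reconsolidated"

assert ram_style["stories"] == "original"  # the copy went stale
assert long_term[working_memory[0]]["stories"] == "reconsolidated"
```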

Difference # 6: No hardware/software distinction can be made with respect to the brain or mind

For years it was tempting to imagine that the brain was the hardware on which a “mind program” or “mind software” was executing. This gave rise to a variety of abstract program-like models of cognition, in which the details of how the brain actually executed those programs were considered irrelevant, in the same way that a Java program can accomplish the same function as a C++ program. Unfortunately, this appealing hardware/software distinction obscures an important fact: the mind emerges directly from the brain, and changes in the mind are always accompanied by changes in the brain. Any abstract information processing account of cognition will always need to specify how neuronal architecture can implement those processes; otherwise, cognitive modeling is grossly underconstrained. Some blame this misunderstanding for the infamous failure of symbolic AI.

Difference # 7: Synapses are far more complex than electrical logic gates

Another pernicious feature of the brain-computer metaphor is that it seems to suggest that brains might also operate on the basis of electrical signals (action potentials) traveling along individual logic gates. Unfortunately, this is only half true. The signals which are propagated along axons are actually electrochemical in nature, meaning that they travel much more slowly than electrical signals in a computer, and that they can be modulated in myriad ways. For example, signal transmission is dependent not only on the putative “logic gates” of synaptic architecture but also on the presence of a variety of chemicals in the synaptic cleft, the relative distance between synapse and dendrites, and many other factors. This adds to the complexity of the processing taking place at each synapse; it is therefore profoundly wrong to think that neurons function merely as transistors.

Difference # 8: Unlike in computers, processing and memory are performed by the same components in the brain

Computers process information from memory using CPUs, and then write the results of that processing back to memory. No such distinction exists in the brain. As neurons process information they are also modifying their synapses, which are themselves the substrate of memory. As a result, retrieval from memory always slightly alters those memories (usually making them stronger, but sometimes making them less accurate; see here for more on this).
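A toy sketch of this point: in the Hebbian-style network below, propagating activity through the weights also modifies those same weights, so the “processor” and the “memory” are literally the same numbers. The `process` function, learning rate, and activity patterns are all invented for this example.

```python
# Toy network in which processing and storage share the same substrate:
# propagating activity through the weights also modifies those weights
# (a Hebbian-style update). The function, learning rate, and patterns
# are invented for illustration.
def process(weights, pre, post, lr=0.1):
    """Compute the output activity and, as a side effect, update the synapses."""
    output = [sum(wij * xj for wij, xj in zip(row, pre)) for row in weights]
    for i in range(len(weights)):
        for j in range(len(pre)):
            # Co-active pre/post pairs strengthen their shared synapse.
            weights[i][j] += lr * post[i] * pre[j]
    return output

w = [[0.0, 0.0], [0.0, 0.0]]
process(w, pre=[1.0, 0.0], post=[1.0, 0.0])

# The same weights that just carried the computation now hold a trace of it:
assert w[0][0] > 0.0 and w[1][1] == 0.0
```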

Difference # 9: The brain is a self-organizing system

This point follows naturally from the previous one: experience profoundly and directly shapes the nature of neural information processing in a way that simply does not happen in traditional microprocessors. For example, the brain is a self-repairing circuit; something known as “trauma-induced plasticity” kicks in after injury. This can lead to a variety of interesting changes, including some that seem to unlock unused potential in the brain (known as acquired savantism), and others that can result in profound cognitive dysfunction (as is unfortunately far more typical in traumatic brain injury and developmental disorders).

One consequence of failing to recognize this difference has been in the field of neuropsychology, where the cognitive performance of brain-damaged patients is examined to determine the computational function of the damaged region. Unfortunately, because of the poorly-understood nature of trauma-induced plasticity, the logic cannot be so straightforward. Similar problems underlie work on developmental disorders and the emerging field of “cognitive genetics,” in which the consequences of neural self-organization are frequently neglected.

Difference # 10: Brains have bodies

This is not as trivial as it might seem: it turns out that the brain takes surprising advantage of the fact that it has a body at its disposal. For example, despite your intuitive feeling that you could close your eyes and know the locations of objects around you, a series of experiments in the field of change blindness has shown that our visual memories are actually quite sparse. In this case, the brain is “offloading” its memory requirements to the environment in which it exists: why bother remembering the location of objects when a quick glance will suffice? A surprising set of experiments by Jeremy Wolfe has shown that even after being asked hundreds of times which simple geometrical shapes are displayed on a computer screen, human subjects continue to answer those questions by gaze rather than rote memory. A wide variety of evidence from other domains suggests that we are only beginning to understand the importance of embodiment in information processing.

Bonus Difference: The brain is much, much bigger than any [current] computer

Accurate biological models of the brain would have to include some 225,000,000,000,000,000 (225 million billion) interactions between cell types, neurotransmitters, neuromodulators, axonal branches, and dendritic spines, and that doesn’t include the influences of dendritic geometry, or the approximately 1 trillion glial cells which may or may not be important for neural information processing. Because the brain is nonlinear, and because it is so much larger than all current computers, it seems likely that it functions in a completely different fashion. (See here for more on this.) The brain-computer metaphor obscures this important, though perhaps obvious, difference in raw computational power.

I have to disagree with you on a number of these points (namely, the first half). While they are all generally true regarding the computers we work with during our daily blogging activities, I don’t think they are entirely true when taking into account a more sophisticated understanding of what a “computer” is.

Difference # 1: Brains are analogue; computers are digital

Calling something a computer doesn’t necessarily mean that it must have digital circuits, though analog circuits clearly are rare. So this is good to point out, but there is interest in analog circuits, especially in building artificial neural networks.

Also, I fail to see how rate of firing is significant in digital versus analog. If the importance of the continuity of firing is the average signal strength over a given time period, then a signal sent digitally in intervals should be equivalent in its ability to cause the next set of neurons to fire. There might be something I’m missing here, but as you stated it I don’t see how this necessitates analog circuitry.

Also, I think the Minsky & Papert reference is off the mark. It certainly was a sad day for AI, but again I don’t see what this has to do with analog versus digital. What you mentioned demonstrates their fallacious thinking about how one can structure neural networks and what they can compute, but what does that have to do with whether they are analog or digital?

Alan Turing firmly believed it was possible to simulate the continuous nature of the brain on a digital machine; however, I think this is still an open question. At any rate, I think the more important question is, given that one can build a machine using analog circuits and call it a computer, whether the entire brain can be described using math equations.

Difference # 2: The brain uses content-addressable memory

I think this is really only a superficial distinction. If you implement a neural network on a computer, then all
of the inner workings of the CPU and its methods of memory allocation become irrelevant. If a neural
network can be simulated digitally, then where or how it is implemented is a non-issue, as any
implementation will be equivalent. (So the question is then analog versus digital.)

On the other hand, I can see how this architectural difference can be important to point out to people who
have no idea how the brain or computers are constructed.

Difference # 3: The brain is a massively parallel machine; computers are modular and serial

In the same way that implementation wasn’t an issue before, it isn’t here either. Parallel processing can
be implemented equivalently on a serial machine. Also, this isn’t even true anymore, as most if not all
supercomputers used for research have dozens, hundreds, or even thousands of processors. And even
consumer level machines are becoming parallel, with dual-core CPUs coming out in the past few years.

The modularity issue I’m intrigued by. Clearly the areas of the brain are not as discrete as those in our
computers, but don’t we still refer to experience occurring in the neocortex? Although I really don’t know
enough about this (and I want to know more!) there must be some level of modularity occurring in the
brain. My gut instinct is telling me here that a brain based completely on spaghetti wiring just wouldn’t
work very well…

Difference # 4: Processing speed is not fixed in the brain; there is no system clock

You might call me out for nitpicking here, but CPUs don’t require system clocks. Asynchronous processors are being actively researched and produced.

A key advantage is that clockless CPUs don’t consume energy when they aren’t active. Machines based on synchronous processors, on the other hand, constantly have the “pulse” of the clock traveling through the system (and the frequency at which it “beats” determines the speed of the CPU). The pulse of the clock continuously coursing through all of the circuits also results in “wasted cycles,” meaning power is being used when the CPU isn’t doing anything, and heat is being dissipated for no reason.

Difference # 5: Short-term memory is not like RAM

Again, this seems to be a superficial architectural difference. I think that if your intent is to simulate the brain using artificial neural networks, then how the RAM or hard drive works is inconsequential.

I’ll admit that it is something worth pointing out to someone who does take the brain/computer analogy too far (which is I guess exactly who you’re targeting here) or doesn’t know much about computers or brains.

Difference # 6: No hardware/software distinction can be made with respect to the brain or mind

This one I completely agree with. I always get the feeling when I read philosophy of AI papers that some
of the philosophers take the sentiment “the mind is the program being executed on the machine that is
the brain” too far. Consequently, and I feel this is actually a central problem with philosophy of AI, they
pay too little attention to how the brain actually operates and try to think about how to implement
consciousness on a computer without considering how the activities of the brain relate to the mind.

Anyway, I think it would be fair to describe the brain as an asynchronous, analog, and massively parallel
computer where the hardware itself is inherently mutable and self-organizing.

