Wednesday, September 29, 2010



BRAIN POWER




How does the brain create an uninterrupted view of the world?

If you've ever used a camcorder, you've probably noticed that the picture can be pretty shaky as you move from one image to the next. But for most of us, our eyes -- the video cameras of our brain, if you will -- produce no such unstable transitions as they move quickly over a scene.

Scientists have understood this phenomenon for decades. To achieve a stable view despite quick eye movements, the eyes do an amazing thing: They take before-and-after shots of every focused image and compare them in order to confirm stability. In essence, before your eyes actually sense an object, your brain takes its own picture of it for comparison purposes. It knows where your eyes are going to move next, and it forms an image of the object that precedes your conscious, visual perception of it and lays the framework for a smooth visual transition.

So the process is in the books. But scientists have spent at least 50 years trying to find out how the brain manages this feat. Read More at HowStuffWorks...

How can someone stay awake for 11 days?

Have you ever pulled an all-nighter to study for a test or get a project done for work? How about doing it 11 days in a row?

On May 24, 2007, Tony Wright, a 42-year-old horticulturalist, claimed to have beaten the record of 264 hours (exactly 11 days) set in 1964. Wright had some practice: He had already been through more than 100 sleep-deprivation experiments, the longest lasting eight days. He also employed a unique diet consisting only of raw foods. Of course, long-term sleep deprivation can cause vision problems, hallucinations, paranoia, mood swings, difficulty communicating or understanding others, a compromised immune system, and depression, and is not recommended by doctors. But stunts like this have raised questions about humans' need for sleep. Read more about Wright and take a peek into his diary at HowStuffWorks.

Brain-Computer Symbiosis

Gerwin Schalk

Brain-Computer Interface Research and Development Program, Wadsworth Center, New York State Department of Health, Albany, NY. E-mail: schalk@wadsworth.org
Published in final edited form as: J Neural Eng. 2008 March; 5(1): P1–P15. doi:10.1088/1741-2560/5/1/P01.

Abstract

The theoretical groundwork of the 1930s and 1940s and the technical advances of computers in the following decades provided the basis for dramatic increases in human efficiency. While computers continue to evolve, and we can still expect increasing benefits from their use, the interface between humans and computers has begun to present a serious impediment to full realization of the potential payoff. This article is about the theoretical and practical possibility that direct communication between the brain and the computer can be used to overcome this impediment by improving or augmenting conventional forms of human communication. It is about the opportunity to overcome the limitations of our body's input and output capacities through direct interaction with the brain, and it discusses the assumptions, possible limitations, and implications of a technology that I anticipate will be a major source of pervasive changes in the coming decades.

1. Introduction
1.1. The Communication Problem

In their seminal articles Man-Computer Symbiosis [1] and Augmenting Human Intellect [2], J.C.R. Licklider and Doug Engelbart highlighted the potential of a symbiotic relationship between humans and computers. Realizing that people spend most of their time on what essentially are clerical or mechanical tasks (i.e., the fundamental information-processing bottleneck at the time they wrote their articles), they envisioned a future in which humans dynamically interact with computers such that the human devises the mechanical task to be performed, and the computer executes that task and presents the human with the results.

This vision capitalizes on the fundamental differences between the brain and the computer. The brain uses billions of cells in a massively parallel organization. Each cell represents a computing element that operates at low speeds. In contrast, a computer is comprised of billions of transistors that are mainly organized for sequential processing. Each transistor represents a computing element that operates at speeds millions of times faster than a computing element in the brain. One could thus say that the brain has a wealth of computational breadth (i.e., using parallel processing it can convert many inputs into many outputs) but little computational depth (i.e., it cannot process a long sequence of commands of a given algorithm). In contrast, a computer typically executes only a few algorithms at a time (i.e., it has little computational breadth), but can execute any particular algorithm at extremely high speed (i.e., large computational depth) (see Figure 1 for an illustration of this issue). Each of these two approaches to computation naturally lends itself to different problems. For example, even two-year-old toddlers are highly adept in spatial navigation, object recognition, motor planning, and motor execution, and typically outperform advanced computers on these tasks. At the same
time, computers are extremely efficient in computing the most complex functions with razor-sharp precision in very little time. This duality, and perhaps trade-off, between computational
breadth and computational depth constitutes maybe not the theoretical, but certainly the practical, difference between the brain and the computer. This difference creates a mismatch between the two systems, which in the end hinders effective interaction. In the absence of ways to modify brain function to make it more similar to computers, and of methods to make computers operate like human brains, this difference can still be useful. Donald Norman acknowledges the opportunities of these complementary approaches [3]:
Machines tend to operate by quite different principles than the human brain, so the
powers and weaknesses of machines are very different from those of people. As a
result, the two together – the powers of the machine and the powers of the person –
complement one another, leading to the possibility that the combination will be more
fruitful and powerful than either alone.
Forty-five years after Licklider and Engelbart articulated their visions, most of the impediments
to a fruitful relationship with the machine that they described (i.e., largely technical or
economic hurdles) have vanished. In the age of Internet search engines, vast digital libraries,
and large-scale mathematical simulations, we routinely work with computers in a highly
interactive fashion – we devise the task, and the computer executes it and presents us with the
results. Donald Norman calls this People propose … and Technology conforms [3].
Consequently, we have overcome this information processing bottleneck, that is, computers
now perform many of humans’ clerical tasks. However, this reveals the next source of
inefficiency, i.e., a communication bottleneck: While the brain is fantastic at distilling input
and concepts into plans and the computer’s ability to execute these plans continues to improve,
we are confronted with the increasing difficulty of communicating these plans at the low speed supported by our nervous system.‡
Based mainly on classic methods developed by Shannon [5] and Fitts [6], numerous studies
have evaluated the communication rates between humans and humans ([7], for review) and
between humans and computers ([8], for review). These studies indicate that the external
information transfer rates supported by the nervous system (i.e., the rates between humans and
humans, or humans and computers) are very low: for common communication methods (e.g., reading, speaking, Morse code, eye trackers, mouse or joystick movements), they range from around 1 bit per second to no more than 50 bits per second (see Figure 2). In addition, many people with certain
neurological conditions (such as Amyotrophic Lateral Sclerosis, Muscular Dystrophy,
Cerebral Palsy, or brain stem stroke) are confined to communication rates that can be even
lower. In contrast, computers can not only communicate, but also store and process information at a rate exceeding 1 terabit per second [9]. In other words, even discounting the two orders
of magnitude improvement in computing technology that is predicted by Moore’s law for the
next decade, there already is a 12 orders of magnitude difference between the external
communication capacity of the nervous system and the external and internal communication
and processing capacity of the computer. Moreover, while our motor system is highly adept at
controlling movement of our limbs, those limbs have been optimized to address the challenges
experienced by our ancestors, but not necessarily to address the complex challenges of today.
For example, our hands and fingers are adequate for the manipulation of tools, but not
necessarily optimal for communication.
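To make the scale of this mismatch concrete, here is a quick back-of-the-envelope computation (a sketch in Python; the rates are the round figures quoted above, not new measurements):

import math

# Figures quoted in the text above (illustrative, order-of-magnitude only).
human_rate_bps = 50        # fastest external rate supported by the nervous system
computer_rate_bps = 1e12   # ~1 terabit per second

gap = math.log10(computer_rate_bps / human_rate_bps)
print(f"gap at the human maximum: ~{gap:.1f} orders of magnitude")  # ~10.3

# At the low end of human communication (~1 bit/s), the gap reaches the
# 12 orders of magnitude mentioned in the text.
print(f"gap at the human minimum: ~{math.log10(computer_rate_bps / 1):.0f} orders of magnitude")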
The context-independent nature of this communication further impedes communication. Our
brain has at its disposal highly complex semantic relationships that put the input to the brain
into context. However, we need to use syntactic commands void of any semantics when we
communicate to a computer, which makes communication less efficient [10].
‡This idea is similar to the Theory of Constraints (e.g., [4]) that postulates that, for example in a manufacturing plant, total system output
is limited by the slowest operation in the process.
The low communication rate between the brain and the computer, the constraints of our motor
system, and the communication’s highly syntactic and thus context-independent nature,
constitute the most fundamental inefficiencies as well as the biggest potential for improvements
in human efficiency on tasks that are constrained by this low speed and the physical limits of
our bodily movements. For example, a jet pilot might have to execute a number of syntactic
commands in sequence (e.g., turn left and then accelerate), when it would be more efficient to
communicate a semantic command (e.g., follow a particular target). Human-Computer
Interaction, an area within Computer Science, has been aware of these issues and has engaged
in many efforts (such as context-aware software or the Semantic Web) that attempt to address
them. Because the capacity to represent and relate information constitutes a major advantage
of the brain over the computer (and we thus cannot easily reproduce these capacities in a
computer), and because these efforts cannot address the low communication rate of our sensory and motor systems, all current corresponding efforts are restricted to merely alleviating the symptoms of this fundamental communication problem.
This paper lays out a proposed solution to this problem that I expect to be realized in the coming decades. The expectation is that direct communication between the brain and the computer can overcome the low rate, context independence, and/or physical constraints imposed on current means of communication. While this possibility has been
contemplated in science fiction for some time (e.g., [11,12,13,14,15,16,17,18]), many studies
over the past two decades have already demonstrated that non-muscular communication is
possible and can, despite its early stage of development, already serve useful functions [19].
Thus, this article is not science fiction. It is about realistic improvements to existing technology
that will lead to a close and highly interactive relationship between the brain and the computer,
and about the major implications of these developments.
1.2. Feasibility
As bold as the assertion of direct brain-computer communication may sound, its implementation, and all the powerful implications derived from it, rest on just two assumptions. First, a direct interaction with the brain requires understanding of the language
of the communication. The promise of this notion was most eloquently described by Ramón
y Cajal about 100 years ago:
To know the brain is the same thing as knowing the material course of thought and
will, the same thing as discovering the intimate history of life in its perpetual duel
with eternal forces, a history summarized and literally engraved in the defensive
nervous coordination of the reflex, the instinct, and the association of ideas.
Second, it also requires a physical interface that can communicate the symbols of this language
with the requisite clarity to and from the brain so that they can be understood the same way as
if those symbols originated from within the brain.
1.2.1. Assumption 1: Understanding the Language—Many studies over the past
decades have demonstrated that it is feasible to understand the language of the brain. (For the
purpose of this article, the term language refers to the set of brain signals that communicate
information. A metaphor for these brain signals is the term symbols where each symbol is
represented by an electrical, chemical, or metabolic signature, and is produced by
communication primitives such as action potentials.) With these studies, it has become
increasingly clear that mental faculties can be decomposed into a multitude of information-processing systems (which Minsky called agencies [20]) and that brain activity in these systems
can be analyzed or modified to detect and change function in the associated mental faculties.
For example, studies have shown that it is possible to stimulate motor or sensory areas to induce
particular motor function or sensory perception (i.e., to communicate from the computer to the
brain), and that it is also possible to analyze brain signals to decode motor function and sensory
perception (i.e., to communicate from the brain to the computer).
Based solely on the language of the brain and its individual symbols, it thus appears feasible
to interact with the brain on the basis of the mental faculties realized by these areas, even with
the sensing and decoding technologies in use today. In other words, this suggests that it should
be possible to decode, or produce, a clear and complete representation of the actually
experienced or imagined visual, auditory, movement, language, olfactory, tactile, or taste
sensations encoded by the symbols communicated within the brain. Because our plans can also
be described in terms of such features [21,22,23,24,25], it should be possible to replace or
augment the inadequate communication of an intent from the brain to the computer by an
interpretation of this information, and to replace or augment the communication of these results
back to the brain. (For the purpose of this article, intent corresponds to the state of the brain areas that activate the brain areas actually producing a particular behavior (e.g., executive functions in the parietal lobe).)
The language problem can be stated as the task of determining the symbols (i.e., brain signal
features) that accompany actual or imagined actions or sensations, or intended plans for action.
One should not be distracted by the dramatic problems that we face in understanding how brain functions encode semantic relationships and use them to produce intent. For the purpose of
removing the current communication bottleneck in many tasks, it is sufficient to understand
the brain’s intent and not necessary to understand the ways in which the brain produces this
intent. At the same time, this limited understanding of brain function will ultimately limit the
possible interactions between the brain and the computer. These limitations are discussed later
in this paper in Section 2.3.
1.2.2. Assumption 2: An Adequate Interface—An efficient physical interface between
the brain and the computer would effectively measure and influence the electrical or chemical
properties of the brain cells in proximity to the interface to measure or induce action potential
or neurotransmitter activity (Section 3 later in this paper describes several possible device
technologies). Studies indicate that these different types of activity have different functions in
the nervous system. Electrical activity in the brain (i.e., action potentials that are produced by
the cell body and communicated from the cell’s axon to other adjacent neurons) is mainly
responsible for communication and information processing. Chemical properties typically
communicate the results of past information processing so as to produce changes in the brain
that optimize future processing. For example, increased neurotransmitter production triggered
by increased electrical activity may start chemical signal cascades that eventually modify gene
function that modify future cell behavior.
The interface problem can be stated as the task of designing a physical structure that can interact with the brain with the requisite speed, safety, and sensitivity using electrical and/or chemical means. While a complex issue that will require considerable attention, this is ultimately an engineering problem with clearly defined mechanical, electrical, and chemical specifications, and it can be expected to be solved.
1.3. Breaking the Bottleneck
Breaking the communication bottleneck by adding additional communication channels from
the brain to the computer could have profound implications on the way we interact with and
benefit from the computer. Additional information may increase the overall communication
rate and thus could provide a mechanism to increase human efficiency. Alternatively,
augmented awareness about the current state of the brain could make interaction with
computers a more natural experience that in the end may not differ from the way we interact
with and experience our own body. For example, we might simply focus attention on an Internet
link to follow it rather than producing complicated motor commands to move and click a mouse,
or we might merely feel that a particular menu selection is not appropriate, rather than having to learn this by reading text on a screen. In summary, the processes that transform our
intent into the actions necessary to achieve it could become simpler if we had better access to
the current state of the brain.
The concept of the perfect interface that allows humans to interact with computers without
performing complex arbitrary procedures has long been a matter of discussion in the usability
community. Donald Norman described this concept as follows:
A device that knows about its own environment and that of its user could transparently
adapt to the situation, leading to the idea of the invisible computer as discussed by
Weiser [26]. This in turn is a step towards a disappearing interface as demanded by
Norman [27].
The following sections review the current state of the two requirements that are necessary to
realize this vision, i.e., understanding the language of the brain and the physical interface.
2. The Language of the Brain
Given a suitable physical interface, one may use two different languages to communicate with
the brain. First, one may communicate using the same symbols that the brain uses during its
normal function (i.e., decoding information from or inducing information into the brain). Using
this approach, the communication process between the brain and the computer could be faster
and more efficient (because the brain’s intent does not have to be translated into motor
commands); it could also augment conventional communication with the context defined by
information derived from the brain (see Figure 3). Second, one may communicate with the
brain by establishing a new mutual language, i.e., by defining a set of symbols that is not
normally used by the brain to communicate information, or by associating a set of existing
symbols with a new meaning (e.g., associating the amplitude of the mu rhythm in the
electroencephalogram with velocity of cursor movement). This procedure creates a new
communication channel that does not rely on the brain’s normal output pathways of peripheral
nerves and muscles [19]. Because this option does not involve our body’s sensory and/or motor
systems, it renders the communication process between the brain and the computer independent
of the constraints of the replaced conventional system(s), and could thus be useful to people
with motor disabilities, or to people who are otherwise limited by their body’s communicative
abilities (such as surgeons whose eyes provide them with an inadequate picture and whose
hands do not have the accuracy and degrees of freedom that would allow them to perform as
desired).
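As a toy illustration of this second option, the sketch below (in Python; all numbers are hypothetical) estimates the amplitude of the 8–12 Hz mu band from one channel of simulated EEG and maps it linearly onto a cursor velocity. In a real BCI, the gain and offset of such a mapping would be fit to each user rather than fixed as they are here.

import numpy as np

def mu_band_power(eeg_window, fs=256.0, band=(8.0, 12.0)):
    """Estimate mu-rhythm band power of a single-channel EEG window via FFT."""
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def cursor_velocity(power, gain=-0.5, offset=1.0):
    """Linear mapping from mu power to cursor velocity.
    gain/offset are hypothetical; in practice they are calibrated per user."""
    return gain * power + offset

# Example: 1 second of simulated EEG (a 10 Hz "mu rhythm" plus noise).
fs = 256.0
t = np.arange(0, 1, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
v = cursor_velocity(mu_band_power(eeg, fs))
print(f"cursor velocity: {v:.3f} (arbitrary units)")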
2.1. Using the Brain’s Existing Language
2.1.1. Decoding Information from the Brain—Many studies over the past decades have
demonstrated that information from sensory or motor systems in the brain can be decoded to
retrieve details about currently perceived sensations and executed movements, or even about
the currently imagined sensations or movements. Examples have been described in the
somatosensory system, visual system, auditory system, olfactory system, motor system, and
language system.
Somatosensory cortex represents a somatotopic map of particular sensory modalities such as
temperature or touch. This map is commonly referred to as the homunculus and was first described by Penfield (see [28]). Stereotypical stimuli, such as touch of a particular finger,
result in specific activity changes (measured as changes in the frequency of discharge of neural
action potentials) in the corresponding area of the cortex. Decoding information from these
areas in the brain would give the computer a detailed picture of the brain’s current actual or
imagined sensory experiences.
In the visual system, different areas of cortex represent the luminosity and color of visual input
(i.e., a retinotopic map). In addition, starting with the work of Hubel and Wiesel (see [29]),
neuronal assemblies have been found to be responsive to, and thereby encode, complex visual
stimuli such as lines at particular orientations, certain shapes, or even faces (see [30] and
[31]). Decoding such information would afford the computer a comprehensive understanding
of perceived or imagined visual images (e.g., [32]) and their higher-level semantic properties.
In auditory cortex, areas appear to be mapped to tones of different frequencies (i.e., a
tonotopic map). In addition, Knudsen and Konishi identified an area in the midbrain of owls that contains cells (i.e., space-specific neurons) that encode the particular spatial location of a
sound (see [33]). Decoding information from auditory cortex could thus communicate to the
computer actual or imagined pitch and location information.
The olfactory system is able to discriminate different odors. It contains receptors that are
preferentially responsive to particular smells. By measuring responses from assemblies of cells,
many odors can be clearly distinguished [34,35].
The motor system is very similar to other systems in the sense that features that are adjacent
in a particular feature domain (e.g., position, direction, or velocity of hand movements)
have representations that are spatially adjacent to each other in cortex. Since at least the late
1960s it has been known that the firing of motor cortical neurons is correlated with muscular
force and other movement parameters (e.g., [36,37,38,39,40,41,42,43]). Furthermore,
subsequent studies showed that appropriate decoding algorithms can accurately predict the
position and velocity of limbs [44,45,46,47,48] or eyes [49,50] in non-human primates. These
studies confirmed a directional sensitivity of motor cortical neurons that is known as cosine
tuning: cells fire fastest if the direction of limb movement equals their preferred direction, and
fire slowest if the limb is moved in the opposite direction. Interestingly, a similar relation has
been described between the direction of eye movements and cell discharge in the paramedian
pontine reticular formation (see [51]), the mesencephalic reticular formation (see [52]), and
the internal medullary lamina of the thalamus [53]. Moreover, increasing evidence strongly
supports the hypothesis that, as in the systems described previously, imagined movements have
brain signal signatures that are similar to those associated with actual movements [54,55,56,
57,32,58]. Furthermore, studies also indicate that particular parts of cortex not only encode
particular aspects of actual or imagined movements (such as the particular direction of an actual
or imagined hand movement), but also more general aspects of movement planning (such as
the brain’s intent to move a hand to a particular location prior to translation of this plan into
actual motor commands (e.g., [24])). Using this information, a computer could execute
commands based on specific or abstract movement plans.
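The cosine tuning described above can be written as rate = b0 + m * cos(theta - theta_pref). The following is a minimal simulation of such a population, decoded with a Georgopoulos-style population vector; the cell count, baseline and modulation rates, and noise level are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Cosine tuning: rate = b0 + m * cos(theta - theta_pref)
n_cells = 50
theta_pref = rng.uniform(0, 2 * np.pi, n_cells)   # preferred directions
b0, m = 20.0, 10.0                                # baseline and modulation (spikes/s)

def firing_rates(theta):
    """Noisy firing rates of the population for movement direction theta."""
    clean = b0 + m * np.cos(theta - theta_pref)
    return clean + rng.normal(0, 2.0, n_cells)

def population_vector(rates):
    """Decode direction: sum each cell's preferred-direction unit vector,
    weighted by its baseline-subtracted rate."""
    w = rates - b0
    x = np.sum(w * np.cos(theta_pref))
    y = np.sum(w * np.sin(theta_pref))
    return np.arctan2(y, x) % (2 * np.pi)

true_theta = np.deg2rad(135)
decoded = population_vector(firing_rates(true_theta))
print(f"true: {np.rad2deg(true_theta):.0f} deg, decoded: {np.rad2deg(decoded):.0f} deg")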
The language system also consists of a number of different areas with different functional
characteristics. For example, Broca’s area is responsible for the production of spoken language
(i.e., motor programs for controlling speech sounds); Wernicke’s area is responsible for the
comprehension of language (i.e., the interpretation of spoken and written words); visual cortex
is involved in processing written language; and motor cortex is responsible for the production
of speech sounds (i.e., for controlling vocal muscles). In addition, there is recent evidence that
even the representation of syllables and phonemes is encoded in brain signals (see [59,60]). A
computer could use this information to learn about the spoken or imagined words produced by
the brain.
Finally, the capacity to decode information from the brain has even been extended into personal
experiences such as sympathy and empathy (e.g., [61] and [62], respectively). Moreover, as
mentioned above, it is becoming increasingly evident that the brain signals that accompany
imagined movements, sensations, and feelings are, while smaller, similar in characteristics to
signals that accompany actual movements, sensations, and feelings. This opens the possibility
that one could not only decode or produce actual, but also imagined, experiences.
In summary, many studies have shown that the activity in particular mental faculties can be
decoded to determine the nature of actual or imagined movements and sensations. These studies
have typically analyzed only one mental faculty in isolation, and many have been conducted in animals for practicality or safety reasons. This currently prohibits realization of the promise set forth
in this paper. Thus, the challenge at hand is to remove these practicality and safety issues so
that comprehensive study of a number of faculties simultaneously becomes possible.
2.1.2. Inducing Information Into the Brain—In the same way that information from the
brain can be used to determine the state of many different agencies in the brain, similar
information could be induced into the brain using the same understanding of the symbols and
the language of the brain’s internal communication. Even sixty years ago, the eminent scientist
Vannevar Bush, then Director of the Office of Scientific Research and Development,
hypothesized in As We May Think [63] about such a possibility:
By bone conduction we already introduce sounds into the nerve channels of the deaf
in order that they may hear. Is it not possible that we may learn to introduce them
without the present cumbersomeness of first transforming electrical vibrations to
mechanical ones, which the human mechanism promptly transforms back to the
electrical form?
It took until recently for this vision to become reality, but auditory prostheses are already in
widespread use (e.g., [64]). These prostheses work by introducing into the auditory nerve ([65] for review) or auditory cortex (e.g., [66]) electrical impulses that encode pitch information similarly to, albeit currently more crudely than, the impulses produced by a healthy cochlea. Advances are also being made toward interfacing with more complex systems, such as the visual system via the retinal implant [67,68,69,70].
In summary, there is no reason to believe that such systems could not eventually decode or produce sounds or visual images rivaling in clarity those produced by our own sensory apparatus.
This could at least be partially achieved simply by engineering sensor and stimulator devices
with an appropriately large number of electrodes. Given this development, it also appears
feasible and practical to extend these systems to interact with all movements, sensations, and
emotions using a single device. While this possibility opens up many new avenues for
restoration or augmentation of motor and sensory function, it also raises several ethical issues,
which are outlined in Section 6 later in this paper.
2.2. Establishing a New Language: Brain-Computer Interfaces
The second option for interacting with the brain is by establishing a new mutual language, i.e.,
essentially creating a new communication channel that does not rely on the brain’s normal
output pathways of peripheral nerves and muscles [19]. While this new language is based on
the same neural communication primitives (such as action or field potentials) used by the brain
in its internal communication, the symbols or function of this language may be different. Over
the past two decades, a variety of studies have evaluated this possibility. They assessed whether
brain signals recorded from the scalp, from the surface of the brain, or from within the brain
could provide new augmentative technology that does not require muscle control (e.g., [71,
72,73,74,75,76,77,78,79,80,46,81,82])(see [19] for a comprehensive review). These braincomputer
interface (BCI) systems measure specific features of brain activity (i.e., the symbols
of this communication that are typically mutually established between the brain and the
computer) and translate them into device control signals.
These studies show that direct communication with the brain is possible and that, despite its
early stage of development, simple language, and consequently relatively modest
communication rates (i.e., no more than 25 bits/min or 0.41 bits/sec [83]), it might already
serve useful purposes for paralyzed individuals who cannot use conventional technologies. To
people who are locked-in (e.g., by end-stage amyotrophic lateral sclerosis, brainstem stroke,
or severe polyneuropathy) or lack any useful muscle control (e.g., due to severe cerebral palsy),
a current BCI system could give the ability to answer simple questions quickly, control the
environment, perform slow word-processing, or even operate a neuroprosthesis or orthosis
[84,85,86]. While the communication rate of present BCI systems is modest, it is almost as
fast as certain conventional communication methods (such as Morse code) and only two orders of
magnitude lower than the fastest external communication rate supported by our nervous system
(see Figure 2).
2.3. Issues and Limitations
Direct communication with the brain will eventually be limited by five issues that relate to the
difficulty of establishing the language of communication.
The first issue relates to the calibration procedure that determines the symbols of this language.
This procedure utilizes an understanding of the mental faculties to be decoded (i.e., a reference
task such as actual or imagined motor movements, speech, etc.) to establish the relationship
between the reference task and signals from the brain. For example, current techniques may
use linear regression to determine the linear relationship between particular brain signal
features (such as amplitudes in certain frequency bands at relevant locations) and a particular
output parameter (such as the direction of hand movement). Thus, this calibration procedure
can only be performed if such a reference exists, and therefore will be impossible for mental
faculties that do not correspond with measurable actions. This issue has been recognized for a
long time in philosophy where it is known as the reference problem [87].
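To make this calibration step concrete, here is a minimal sketch of the kind of linear-regression fit described above, with synthetic data standing in for real recordings; the feature count, noise level, and ridge penalty are arbitrary choices, not values from the literature.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: each trial pairs brain-signal features
# (e.g., band-power amplitudes at several locations) with a known
# reference-task output (here, a 2D hand-movement direction vector).
n_trials, n_features = 200, 16
true_W = rng.normal(0, 1, (n_features, 2))          # unknown linear relationship
X = rng.normal(0, 1, (n_trials, n_features))        # observed features
Y = X @ true_W + rng.normal(0, 0.5, (n_trials, 2))  # noisy reference outputs

# Calibration: fit the linear map by regularized (ridge) least squares.
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Decoding a new trial: predict the movement vector from fresh features.
x_new = rng.normal(0, 1, n_features)
y_pred = x_new @ W_hat
print("predicted movement vector:", np.round(y_pred, 2))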
The second issue relates to the stability of the brain’s existing language. The brain is not a static
processing unit but rather undergoes continuous adaptations in response to external and internal
influences. In other words, the particular symbols that the brain uses to represent and
communicate information can be expected to change over time. This will require adaptations in the computer and/or continual recalibration procedures.
The third issue is what could be called the language identification paradox. Because a strong
theoretical basis (and thus, a mathematical model) for the brain’s internal communication
currently does not exist, any calibration procedure needs to rely solely on mathematical
techniques (e.g., machine learning) to establish the relationship between a reference action and
all possible symbols (i.e., brain signal features). As the number of possible symbols in the
language and the number of reference tasks increase with better sensor technologies, the
determination of this relationship will, paradoxically, also become more difficult. Thus, with
current mathematical techniques and current understanding of the brain’s internal
communication, this issue will result in an increasing demand for more data from an increasing
number of reference actions, and hence soon become impractical. Thus, advances in sensor
fidelity will eventually also demand advances in mathematical techniques (e.g., [88]) and/or
better understanding of the brain’s internal communication.
The fourth issue is that the communication system may be falsely activated by existing tasks
(e.g., actual movements) that also produce symbols of this language. As an example, a
communication system that is controlled by imagined hand movements may also be activated
by actual hand movements (which typically produce similar neural signatures). This issue may
limit the utility of this type of communication for control tasks that would augment (rather than
replace) bodily actions.
The fifth and final issue relates to communication that relies on a new language. Establishing
a new language requires, by definition, mutual adaptation of the brain and the computer. The
more complex the syntax and taxonomy of the new language, the longer this training process
will become. Practical considerations will eventually limit this time and thus the complexity
of the language.
3. The Interface
Efficient communication between the brain and the computer requires a physical interface that
supports rapid bidirectional communication with a large number of sites in the brain, that is
clinically safe, and that can communicate symbols that are indistinguishable from the brain’s
internal communication. While there is currently no technique that can satisfy all of these
requirements, several promising avenues for further research exist. The following three
sections describe available techniques, future developments, and potential issues.
3.1. Currently Available Technologies
A variety of methods for monitoring brain activity currently exist, and could in principle
provide the basis for direct communication between the brain and the computer. These include,
besides electrophysiological methods (i.e., electroencephalography (EEG),
electrocorticography (ECoG), or recordings from individual neurons within the brain),
magnetoencephalography (MEG), positron emission tomography (PET), functional magnetic
resonance imaging (fMRI), and functional near infrared imaging (fNIR). However, MEG, PET,
fMRI, and fNIR are currently technically demanding and expensive. Furthermore and more
problematically, PET, fMRI, and fNIR, which give a measure of brain activity based on
metabolic activity, have limited temporal resolution and are thus less amenable to rapid
communication. Non-invasive and invasive electrophysiological methods (i.e., EEG, ECoG,
and single-neuron recordings) are at present the only methods that can measure and in part
alter brain activity with the requisite speed.
Non-invasive electrophysiological methods use electrodes on the scalp to record the
electroencephalogram (EEG) [89,73,90,71,72,73,74,75,76,77,19,91,92,93,94]. EEG is
convenient, safe, and inexpensive, but has low spatial resolution [95,96] and is susceptible to
artifacts from sources outside the brain. Furthermore, non-invasive methods can typically only
be used to measure brain function (i.e., communicate from the brain) but not directly alter brain
activity (i.e., communicate to the brain)§. In summary, EEG signals can provide the basis for
safe but uni-directional communication of limited resolution. At present, the degree of potential improvement in fidelity, in particular what can be achieved in relatively uncontrolled situations, is unclear.
§One exception is Transcranial Magnetic Stimulation (TMS), which has low spatial specificity and can be uncomfortable in its use. Another exception is biofeedback of brain activity, which can be used to alter behavior.
Invasive methods use microelectrodes implanted within the cortex to record single-neuron
activity [97,45,98,46,99,80,81,100]. While intracortical microelectrodes can detect or alter
communication between individual brain cells, their widespread implementation is currently
impeded mainly by the difficulties in maintaining stable long-term recordings [101,102], by
the substantial technical requirements of single-neuron recordings, and by the need for
intensive continual expert oversight. In summary, intracortical microelectrodes combine good
signal fidelity with limited practicality. At present, the degree of potential improvement in practicality is unclear.
While it may eventually be feasible to use implanted microelectrodes to record from a large
number of individual neurons practically and safely over long periods, this is currently (and
probably for the foreseeable future) not possible. This appears to be problematic, because many
scientists have assumed that only action potential or field potential recordings from small
groups of neurons can accurately reflect detailed aspects of actions (e.g., such as the direction
and speed of hand movements, the position of individual fingers, or different phonemes in
speech). Recent studies have provided strong evidence that this notion is not justified; that, in
fact, decoding of detailed aspects of motor or speech function is possible, in humans, using
electrocorticographic (ECoG) signals recorded from the surface of the brain.
ECoG has higher spatial resolution than EEG (i.e., tenths of millimeters vs. centimeters [95]),
broader bandwidth (i.e., 0–500 Hz [103] vs. 0–50 Hz), higher characteristic amplitude (i.e.,
50–100 μV vs. 10–20 μV), and far less vulnerability to artifacts such as EMG [95] or ambient
noise. At the same time, because ECoG does not require penetration of the cortex, it is likely
to have greater long-term stability [104,105,106,107] and to produce less tissue damage and
reaction than intracortical recordings.
Using ECoG, a recently published report [108] demonstrated that it is possible to decode the
position and velocity of hand movements in humans. More importantly, it showed that the
accuracy of that decoding was comparable to what has previously been demonstrated only by
studies using intracortical microelectrodes in monkeys. This finding, i.e., that field potential
activity recorded from the surface of the brain can be as informative for relevant questions as
single-unit activity recorded from within cortex, is further supported by ongoing work [108,
109] that extends these encouraging findings to finger movements and speech.
In sum, traditional non-invasive and invasive techniques currently have, and likely for the
foreseeable future will continue to have, issues with robustness, fidelity, and/or practicality.
At the same time, it is reasonable to anticipate that, with appropriate engineering improvements,
ECoG recordings could combine robustness and fidelity with clinical practicality.
3.2. Future Development
As described above, at present only traditional electrophysiological methods (i.e., EEG, ECoG,
and single-neuron recordings) have the characteristics and maturity necessary for
comprehensive investigations in this area. However, a number of novel sensor technologies
that could complement or replace these existing techniques are emerging. These emerging
technologies include devices that can measure neurotransmitter release with very high spatial
resolution (i.e., 200 μm) and reasonable temporal resolution (i.e., about one second) [110]; fine
wires that are placed in the brain’s vasculature [111]; stimulation devices that are based on
ultrasound or microwaves [112,113,114,115]; neuronal axons that have been stretched up to
several centimeters, retaining their function [116]; biocompatible polymers with penetrating
carbon nanotubes [117]; electro-chemical biosensors using nanotubes [118,119]; actuated
neurotransmitter-based stimulation [120,121]; optical stimulation of targeted genetically
modified cell types with millisecond resolution [122]; or harnessed biologically grown brain
cells [123].
It appears entirely plausible that further development and integration of these techniques may result in a device that can receive and generate electrical signals and neurotransmitters, such that, in terms of its functional properties, it cannot be distinguished from structures in the brain.
3.3. Issues and Limitations
In addition to the problems of current devices listed above, further development of the physical interface will also face additional issues. The first issue is that, as increasing the number of sensors becomes progressively more practical, meaningful, and economical, the number of wires needed to connect the devices to processing units will also increase. In large numbers, wires may become too voluminous to be practical. Fortunately, this problem is similar to those in other technical domains, such as voice or data networks. The usual solution is to multiplex individual signals (e.g., in the time or frequency domain), so that multiple signals can be transmitted over a single wire. Because the
bandwidth of brain signals is low, solving this problem should only require appropriate
adaptation of existing technology or modest additional development.
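A toy illustration of the time-division variant of this idea, assuming perfectly synchronized, lossless transmission:

import numpy as np

# Toy time-division multiplexing: interleave samples from many sensor
# channels onto one serial stream, then recover them on the other end.
n_channels, n_samples = 8, 4
frames = np.arange(n_channels * n_samples).reshape(n_channels, n_samples)

# Transmit: one wire carries ch0, ch1, ..., ch7, ch0, ch1, ... in turn.
wire = frames.T.reshape(-1)

# Receive: de-interleave by channel index.
recovered = wire.reshape(n_samples, n_channels).T
assert np.array_equal(recovered, frames)
print("recovered all", n_channels, "channels from a single stream")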
The second issue concerns resolution. While an ideal sensor would derive an accurate electrical
and chemical sample from every cell in the brain, this will most likely remain impractical. This
restriction may ultimately limit the types of interaction, in particular because it is known that
different types of cortical representation can be interleaved in neuronal populations within
close distances. At the same time, recent studies have shown that relevant information is also
spatially widely distributed in the cortex (e.g., see [124,108] for examples in motor cortex).
Other studies have demonstrated that field potentials, i.e., the spatial summation of large
numbers of neurons, hold information that in relevant aspects is comparable to that derived by
single-unit recordings [125,108]. These results indicate that it is possible to acquire substantial information from the brain without recording individual action potentials.
The third issue relates to the practicality of invasive procedures. As described above and in
Section 3.1, there is strong evidence that detailed information can be acquired from the brain
without penetrating it. Furthermore, sensors and implantation procedures can likely be further optimized, such that the implant could be placed in a relatively minor surgery and provide stable long-term recordings. However, any surgery will limit potential users to those who can derive a substantial benefit from this technology. Because practically all of the established and novel techniques listed in Section 3.2 also require an invasive procedure, this issue may continue to impede widespread dissemination.
4. As We May Think
The previous sections described the communication bottleneck as the fundamental impediment
to exploiting the mutual advantages of the brain and the computer, and illustrated the two
requirements that have to be met in order to break this bottleneck, i.e., an adequate language
and interface. Subsequent sections will discuss the expected development and the profound
implications of the expected possibilities of this brain-computer interfacing technology.
4.1. Towards the Limit
To elucidate the limits and the likely further development of this novel way to communicate, we may first review what is possible today and then ask how we might increase the modest
capacities of current brain-computer interfacing technologies. Current devices have been
demonstrated in many studies to support simple communication. These capacities can be used
by people with or without disabilities to communicate their wishes to their environment. At
the same time, the rate of this communication is rather low, i.e., typically not more than 25 bits
per minute.
To examine how this current modest performance could be improved, it is illustrative to consult
some mathematics: In Mathematical Theory of Communication [126], Claude Shannon showed
that any noisy communication channel has a channel capacity measured in bits per second.
Consider a communication channel of bandwidth B Hz and a signal-to-noise ratio S/N. The channel capacity C in bits per second is then defined as C = B log2(1 + S/N). Because the properties
of any communication channel, including the electrical, chemical, or metabolic ones that are
relevant to brain-computer communication, can be expressed in this form, this formula can be
used to calculate the capacity of any communication channel between the brain and the
computer. In lay terms, the total information rate thus depends on the clarity of the transmitted
information (i.e., the sensing/stimulation resolution in a particular domain (e.g., spatial,
temporal, frequency, chemical, etc.) and on the amount of noise incurred at the sensor/
stimulator or during transmission) and on the number of such communication channels.
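Evaluating the formula directly (the 500 Hz bandwidth echoes the ECoG figure from Section 3.1; the 10 dB signal-to-noise ratio, i.e. S/N = 10, is an assumed value):

import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative figures: an ECoG-like channel with ~500 Hz of usable
# bandwidth and an assumed 10 dB signal-to-noise ratio (S/N = 10).
print(f"{channel_capacity(500, 10):.0f} bits/s per channel")  # ~1730 bits/s

# Capacity scales with the number of independent channels:
print(f"{1000 * channel_capacity(500, 10):.2e} bits/s for 1000 channels")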
Hence, I postulate that the communication rate between the brain and the computer will increase
with the number of mental faculties that can be interacted with and with the clarity of that
interaction. This concept strongly suggests that, as technologies improve to interact with more
areas of the brain with higher fidelity, the communication rate between the brain and the
computer will also increase. At the same time, it is not clear which factors will eventually limit
this improvement. The brain contains about 100 billion neurons (e.g., [127,128,129]) and the
theoretical upper bound for the information rate was estimated at 300 bits per second per neuron
[130,131]. It was actually measured, in a number of different brain systems, at about 80 bits
per second per neuron [132]. These considerations and measurements suggest a high upper
bound for the information rate. Whatever the true limit, there is no reason to believe that we
should not be able to substantially increase the communication rate from the current maximum
of 25 bits/min.
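Multiplying the cited figures gives a rough sense of that bound (back-of-the-envelope only; it ignores redundancy across neurons and the impossibility of recording from every one of them):

# Figures cited above.
neurons = 100e9                  # ~100 billion neurons
measured_bps_per_neuron = 80     # measured per-neuron information rate (bits/s)

aggregate_bps = neurons * measured_bps_per_neuron
current_bci_bps = 25 / 60        # 25 bits/min expressed in bits/s

print(f"aggregate bound: {aggregate_bps:.1e} bits/s")                        # 8.0e+12
print(f"headroom over current BCIs: {aggregate_bps / current_bci_bps:.1e}x")  # ~1.9e+13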
4.2. Expected Performance and Price Development
The radical promise of these novel communication capacities will stay elusive if they remain a theoretical possibility rather than a practical reality, and practical reality is determined by at
least two important factors: performance and price.
Many examples in technical history, including ones in sensor and communication technologies, have exhibited radical and sustained improvements (i.e., 40–60% performance increase per year, often called Moore’s Law) resulting from adequate research activities. Two of these examples are illustrated in Figure 4, and others in [133]. In addition,
many examples show that the unit cost of a product typically declines by 20–30% each time the cumulative output of that product doubles (this is often called the Law of Experience).
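Compounding these two empirical laws shows how quickly they act; the growth rate, time horizon, and number of output doublings below are illustrative choices, not predictions:

# Moore's-Law-style growth: 40-60% per year (midpoint used here).
perf_growth_per_year = 0.50
years = 10
print(f"performance after {years} years: {(1 + perf_growth_per_year) ** years:.0f}x")  # ~58x

# Law of Experience: unit cost falls 20-30% per doubling of cumulative output.
cost_decline_per_doubling = 0.25
doublings = 6  # cumulative output grows 2**6 = 64-fold
remaining = (1 - cost_decline_per_doubling) ** doublings
print(f"unit cost after {doublings} doublings: {remaining:.2f} of the original")  # ~0.18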
These observations strongly indicate that the performance (i.e., number and sensitivity) and
price of sensors/stimulators should increase and decrease, respectively (Figure 5), assuming
that research activities in this area will continue. Fortunately, Brain-Computer Interface research has recently seen large and accelerating activity (see Figure 6).‖
‖One practical caveat is that the developments in these other areas were accompanied by or even required large up-front investments that drove the price per item (e.g., per transistor, copy of a software program, etc.) down to almost zero, and these large investments are typically only made if the primary target market is large and accessible within a few years.
To further examine the possibilities of even today’s technologies, we may consider a hypothetical device that can detect one thousand signals with high fidelity. Such a device
could use sensors and electronics patterned on thin films (which allows economical high
channel counts) and could be placed on the surface of the brain (where they can detect high-fidelity signals at modest clinical risk [79]). Such patterned CMOS electronics have recently
been described and used in a number of studies (e.g., [135,136,137,138,139]). A small thin
film could contain electronics to realize amplification, analog-to-digital conversion, and
extraction and wireless transmission of signal features. These signal features could be received
by an external computer and converted into device commands (as the many examples of current brain-computer interfacing technology illustrate). Because even a full-fledged
microprocessor with dramatically more transistors can be designed to use only about 1 Watt
of power (e.g., [140]), we may use this figure as an upper bound for the necessary power
consumption. Rechargeable and implantable Lithium-Ion batteries already exist that could
support almost one full day of operation for such a device away from a charging station (e.g.,
[141]). The features that are extracted by the electronics may be transmitted to an external
computer over a Bluetooth-based wireless link. Class 2 Bluetooth devices consume about 2.5
mW, have a range of about 10 meters, and can transmit up to 125 KBytes per second (see
[142,143]). (The recently announced Bluetooth 2.0 standard already provides 3–10 times that
bandwidth.) At 1000 channels and 2 bytes per sample, this device could transmit 60 signal
samples or signal features per channel per second (without any data compression), which is
sufficient to support rapid communication.
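Checking the arithmetic of this link budget (all figures are the ones given in the text):

channels = 1000
bytes_per_sample = 2
samples_per_second = 60

required_bytes_per_sec = channels * bytes_per_sample * samples_per_second
available_bytes_per_sec = 125_000   # Class 2 Bluetooth: ~125 KBytes/s

print(f"required:  {required_bytes_per_sec / 1000:.0f} KB/s")   # 120 KB/s
print(f"available: {available_bytes_per_sec / 1000:.0f} KB/s")  # 125 KB/s
assert required_bytes_per_sec <= available_bytes_per_sec        # fits without compression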
In summary, a device that can detect large numbers of brain signals with high fidelity could
be created using current technology given adequate funds; and clearly, the performance and
price of this hypothetical device can be expected to dramatically improve over time. In
consequence, there is every reason to believe that rapid communication between the brain and
the computer is not only a theoretical possibility, but will also become technically possible and
practical. Given this expectation, we may begin to elucidate the expected impact of this new
technology.
4.3. Expected Adoption and Impact of Brain-Computer Interfacing Technology
Like any other innovation, brain-computer interfacing technology will begin to be adopted once its value to an individual exceeds the cost to that individual. Like the improvements in performance and price described above, this adoption or technology-diffusion process has been observed and described for many different innovations [144]. Typically, it only takes a modest
amount of time until 50% of the market has adopted the new technology, and complete market
penetration is achieved after twice that time [144]. For example, using data from radio,
television, VHS recorders, cable and satellite TV, DVD players, the Internet, and wireless
phones, a recent article [145] calculated that it only took an average of 13 years to achieve 50%
market penetration. These examples suggest that Brain-Computer Interfacing technology
might also be adopted, at least by particular user groups, over a relatively short period of time.
I anticipate that this process will proceed mainly in three groups of users (Figure 5). Each of these user groups will begin to benefit from this new communication capacity as its price and performance improve past a certain point.
With relatively modest improvements, brain-computer interfacing technology will become a
practical and safe, albeit simple and slow, communication aid. It will thus soon become of
interest to the first group of adopters: handicapped individuals who are currently constrained in essentially all tasks by their limited communication capacity. For these people, even the modest
rates of communication that will initially be achieved should dramatically improve quality of
life.
With further development, the technology will improve such that it rivals or exceeds some conventional human capacities. The second group that I expect to benefit from improved
communication abilities are thus healthy individuals for whom communication is currently a
pressing and limiting issue in many of their tasks. For example, limited communication input
and output capacity is a serious issue for soldiers in combat. (In the absence of the ability to increase
these capacities of the brain, the military is currently trying to optimize this communication
given our body’s constraints.) In consequence, as soon as communication rates between the
brain and the computer start to rival those that can currently be achieved with our sensory and
motor systems, I expect that this group of users will begin to adopt this new technology.
If it becomes possible to design an (ideally non-invasive) interface (see Section 3.3) that can
support high performance at an affordable price, brain-computer interfacing technologies will
become of interest to the third group of users – most other members of society – that could use
these technologies for a wide variety of purposes. At the same time, this new communication
capacity will constitute a radical and disruptive innovation that will not be immediately
compatible with existing practice and that will evoke change in many complementary
processes. It will thus take some time, perhaps a few decades, until this technology has been
fully integrated in human societies [146,147,144].
In summary, I expect that, as performance increases and price decreases, brain-computer
interfacing technology will become beneficial to an increasing number of individuals, that the
direct and indirect effects of its use will become increasingly pervasive, and that the implications for individuals and society will grow in parallel. I thus anticipate that this
development of brain-computer interfacing technology will in many ways mirror the
development of computers (that addressed the previous bottleneck in human productivity) and
of other General-Purpose Technologies (GPTs) [148]. GPTs have been found to have a wide
variety of major effects on private and social performance [149]. For example, Information
Technology and the Internet have wide applications and productivity-enhancing effects in
numerous downstream sectors with high social rates of return that often exceed private rates
of return [150,151], and their dissemination is having a sustained, long-lasting impact on
productivity and economic growth. Brain-computer interfacing technology can thus be
expected to have a similarly profound impact, not only on individual but also on societal
performance.
5. Brain-Computer Symbiosis
To illustrate the anticipated impact of brain-computer interfacing technology, let us consider
examples of its application to the three user groups listed above.
The physically handicapped will benefit primarily from restoration of function. I anticipate
that this restoration will initially concern mainly simple communication and control functions
and eventually extend to full restoration of movement capacities using existing or artificial
limbs. Because there are about 225,000–290,000 individuals with spinal cord injury in the US
alone who would benefit tremendously from restored capacities, I anticipate that the
commercial application of brain-computer interfacing technology will become a significant
driver of progress once system performance improves to the point at which it becomes
interesting to this large group of individuals.
As system performance increases further, individuals who are often limited by their
communication capacity could benefit from this technology in a number of ways. First, direct
communication from the brain could entirely eliminate the roughly 100 ms delay that is
currently introduced by our nerves and muscles. Second, direct communication from the brain
could practically eliminate the constraints imposed by the movement capacities supported by
our limbs. Specifically, rather than optimizing interfaces to the static capacities of our body,
we could optimize the whole system, human and computer. For example, imagine a jet pilot
who currently has to deal with many controls for the many degrees of freedom the airplane
supports. Because the number of degrees of freedom of the airplane exceeds the degrees of
freedom of our motor system (or at least is very inadequately matched to it), the jet pilot might
have to operate specific functions in sequence rather than in parallel. Using direct
communication from the brain, the degrees of freedom that the pilot can support could be
matched to the degrees of freedom of the airplane, which would transform the airplane from
an external tool to a direct extension of the pilot’s nervous system, in which different areas of
the pilot’s motor system would be responsible for controlling movements of the airplane rather
than movements of the pilot’s limbs. In addition, sensors in the plane could be connected to
the brain’s sensory areas such that these measurements can provide the pilot with information
about the current state of the plane, much in the same way that our bodily sensors provide us
with comprehensive information about the state of our body.
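A minimal sketch of this degree-of-freedom matching idea follows. It is entirely hypothetical: the channel count, the control names, and the calibration matrix are my assumptions for illustration, not a system described in this paper.

    # Hypothetical mapping of decoded motor-cortex signals to airplane controls.
    import numpy as np

    CONTROLS = ["aileron", "elevator", "rudder", "throttle",
                "flaps", "trim", "gear", "spoilers"]
    N_CHANNELS = 16                      # decoded neural command channels (assumed)

    # In a real system this mapping would be calibrated per pilot; random here.
    W = np.random.randn(len(CONTROLS), N_CHANNELS) * 0.1

    def control_step(decoded_intent):
        """Update every control surface in parallel from one decoded sample."""
        return dict(zip(CONTROLS, W @ decoded_intent))

    print(control_step(np.random.randn(N_CHANNELS)))

The point of the sketch is that all eight controls are driven concurrently, rather than one at a time through the pilot's limbs.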
In summary, I anticipate that for these first two groups of users there will be many applications
that will prove beneficial and thus will be commercially attractive. At the same time, the full
potential of direct brain-to-computer communication will only be realized when this
technology can benefit most members of society. As soon as interfaces can be built that can
interface safely, economically, and concurrently with most of the major systems in the brain,
many applications will emerge that will augment our senses and our communication capacities
with others and with computers. Only then will enhanced communication capacities pervade
the fabric of society, with a multitude of side effects on many other technologies and
processes.
6. Ethical Issues
The previous sections outlined the potential benefits of brain-computer interfacing technology.
As with any other technology, these potential benefits come with inherent ethical
concerns (see [152] and [153]), chiefly issues of privacy and liability. These
two concerns are described in the following paragraphs.
The first concern relates to privacy. The state of our brain normally expresses itself almost
exclusively through our actions, and it avails itself for modification only through our
senses. As described in Section 2.1.1, direct assessment of the state of different systems in our
brain could be used to add context to existing communication, and thus be beneficial. At the
same time, this assessment necessarily has to be processed by a computer. This raises privacy
concerns, because this information may not be securely stored and could thus become accessible to third
parties. Furthermore, as described in Section 2.1.2, the capacity to induce information into the
brain may provide us with the ability to base our actions on a better assessment of the
environment. Because this information is provided by a computer, it could be accessed and
modified by third parties, which may allow them to influence our actions. As alarming as this
may sound, several existing techniques (e.g., subliminal advertising, brainwashing strategies,
etc.) are specifically designed to effectively modify behavior. Just as society has responded to
these techniques (e.g., by banning subliminal advertising) or to other issues of privacy (e.g.,
by creating privacy regulations (such as HIPAA in the United States)), society will have to
establish necessary guidelines for responsible use of this new technology.
The second concern is liability. Most people would agree that, under normal circumstances,
we are fully responsible for our actions. However, if our intent is effected through a brain-computer
interface, incorrect actions may be produced simply by incorrect detection of a correct intent. In
this case, who would be liable for potential damages: the provider of the detection algorithm
or the individual? How would one even determine that our intent was incorrectly detected?
Alternatively, a brain-computer interface could be configured to utilize commands from
executive functions before they are screened by the brain’s validation processes. Thus,
an intent that under normal circumstances would not have been acted on may be effected when
using a brain-computer interface. In this case, detection of the momentary intent could be correct,
but the action would still be undesirable. In both of these scenarios, the problem progressively
increases with increasing communication speed. For example, when a user utilizes a word
processor by controlling a cursor towards the desired letter, incorrect movements could be
detected using visual feedback and thus corrected. This ability to correct errors decreases
with increasing selection speed.
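A toy model of this speed/correction trade-off (my illustration; the 400 ms reaction time is an assumed figure, not one from this paper): an erroneous selection can be vetoed only if the user's visual-feedback reaction time fits within the interval between selections.

    # Toy model: can a user veto an error before the next selection occurs?
    REACTION_TIME_S = 0.4                # assumed time to notice and react

    def correctable(selections_per_second):
        return (1.0 / selections_per_second) > REACTION_TIME_S

    for rate in (0.5, 1.0, 2.0, 5.0):
        print(f"{rate} selections/s -> correctable: {correctable(rate)}")

At five selections per second, the 200 ms window is shorter than the assumed reaction time, so errors can no longer be caught.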
7. Conclusions and Recommendations
This article discussed the promise that interactions with the brain could improve or augment
conventional forms of human communication. To many, the vision presented here will seem as
utopian as J.C.R. Licklider’s and Doug Engelbart’s predictions about the significant utility of
computers almost 50 years ago. However, the foundations of technical innovation and
economics that drove this development have not changed. Because the present vision depends
largely on technological improvements rather than on hopeful speculation, and because its
realization is subject to the same forces that have governed the course of many previous
technical developments, it is, in the end, a logical step in our own evolution. The hope is that
the resulting partnership of the brain and the computer will be able to think, act, and feel in
ways that humans have never thought, acted, and felt before.
While this paper (in Sections 2.3 and 3.3) discusses several issues that need to be overcome,
the biggest current limitations are the fidelity, practicality, and safety of available sensors.
Thus, many of the promises described in this article could be realized with better
sensors. The design of such an improved sensor will require full appreciation of the problem
at hand, which is to design a system that can accurately, practically, and safely interact with
the brain over extended periods, and that can use this capacity to communicate beneficial
information between the brain and the computer. This demands an integrated approach
dedicated to providing people with improved brain-based communication and control options
as opposed to isolated efforts in neuroscience, engineering, or signal processing.
Acknowledgments
The author would like to acknowledge the helpful discussions with Dr. Stern, Dr. Bringsjord and Mr. Deutschmann,
as well as the critical reviews of draft versions of this paper by Drs. Carp, Gerhardt, Shain, Temple, Turner, and
Wolpaw. I am also indebted to Dr. Greg Hughes for many invaluable discussions and for critical reviews of this paper.
Without him, this paper would not exist.
References
1. Licklider JCR. Man-computer symbiosis. IRE Transactions on Human Factors in Electronics 1960;1:4–
11.
2. Engelbart, Douglas C. Augmenting human intellect: A conceptual framework. AFOSR-3233. 1962
3. Norman, DA. Things That Make Us Smart: Defending Human Attributes in the Age of the Machine.
Reading, MA: Addison-Wesley; 1993.
4. Goldratt, Eliyahu M.; Cox, Jeff. The Goal. North River Press; 2004.
5. Shannon, Claude E. Prediction and entropy of printed English. The Bell System Technical Journal
1951;30(1):50–64.
6. Fitts, Paul M. The information capacity of the human motor system in controlling the amplitude of
movement. Journal of Experimental Psychology 1954;47(6):381–391. [PubMed: 13174710]
7. Reed, Charlotte M.; Durlach, Nathaniel I. Note on information transfer rates in human communication.
Presence 1998;7(5):509–518.
8. MacKenzie, I Scott. Fitts’ law as a research and design tool in human-computer interaction. Human-
Computer Interaction 1992;7:91–139.
9. NetworkWorld. HPs latest switch hits the high end. 2005.
http://www.networkworld.com/newsletters/lans/2005/0606lan2.html
10. Sapir, Edward. Communication. Encyclopaedia of the Social Sciences (New York) 1931;4:78–81.
11. Thomas, C. Firefox. New York: Holt, Rinehart, and Winston; 1977.
12. Gibson, William. Neuromancer (Remembering Tomorrow). Ace Books; 1995.
13. Anno, Hideaki. Neon Genesis Evangelion (Japanese: Shin Seiki Evangerion) Anime Television
Series. Gainax; 1995.
14. Clarke, Arthur C. 3001 - The Final Odyssey. Del Rey.; 1997.
15. Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence.
Penguin; 2000.
16. Morgan, Richard. Altered Carbon. Del Rey; 2003.
17. Asher, Neal. Gridlinked. Tor Books; 2003.
18. David, Peter. Spider-Man 2. Del Rey; 2004.
19. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain-computer interfaces
for communication and control. Electroenceph Clin Neurophysiol 2002 June;113(6):767–791.
20. Minsky, ML. The Society of Mind. New York: Simon and Schuster; 1988.
21. Bracewell RM, Mazzoni P, Barash S, Andersen RA. Motor intention activity in the macaque’s lateral
intraparietal area ii. changes of motor plan. J Neurophysiol 1996;76(3):1457–1464. [PubMed:
8890266]
22. Mazzoni P, Bracewell RM, Barash S, Andersen RA. Motor intention activity in the macaque’s lateral
intraparietal area. i. dissociation of motor plan from sensory memory. J Neurophysiol 1996;76(3):
1439–1456. [PubMed: 8890265]
23. Snyder LH, Batista AP, Andersen RA. Coding of intention in the posterior parietal cortex. Nature
1997;386(6621):167–170. [PubMed: 9062187]
24. Cohen YE, Andersen RA. A common reference frame for movement plans in the posterior parietal
cortex. Nature Reviews Neuroscience 2002;3:553–562.
25. Shenoy KV, Meeker D, Cao S, Kureshi SA, Pesaran B, Buneo CA, Batista AP, Mitra PP, Burdick
JW, Andersen RA. Neural prosthetic control signals from plan activity. Neuroreport 2003;14(4):591–
596. [PubMed: 12657892]
26. Weiser M. Some computer science problems in ubiquitous computing. Communications of the ACM.
1993
27. Norman, Donald. Why interfaces don’t work. In: Laurel, Brenda, editor. The Art of Human-Computer
Interface Design. Amsterdam: Addison-Wesley Professional; 1992.
28. Penfield, W.; Rasmussen, T., editors. The Cerebral Cortex of Man. New York: MacMillan; 1950.
29. Hubel DH, Wiesel TN. Receptive fields, binocular interaction and functional architecture in the cat’s
visual cortex. J. Physiol 1962;160:106–154. [PubMed: 14449617]
30. Kanwisher, Nancy; McDermott, Josh; Chun, Marvin M. The fusiform face area: A module in human
extrastriate cortex specialized for face perception. J Neurosci 1997;17(11):4302–4311. [PubMed:
9151747]
31. Haxby, James V.; Gobbini, M Ida; Furey, Maura L.; Ishai, Alumit; Schouten, Jennifer L.; Pietrini,
Pietro. Distributed and overlapping representations of faces and objects in ventral temporal cortex.
Science 2001;293:2425–2430. [PubMed: 11577229]
32. Decety, Jean; Jeannerod, Marc. Mentally simulated movements in virtual reality: does Fitts’ law hold
in motor imagery? Behav Brain Res 1996;72:127–134. [PubMed: 8788865]
33. Knudsen E, Konishi M. Mechanisms of sound localization in the barn owl (tyto alba). Journal of
Comparative Physiology 1979;133:13–21.
34. Schild D. Principles of odor coding and a neural network for odor discrimination. Biophys J 1988
Dec;54(6):1001–1011. [PubMed: 3233263]
35. van Duuren, Esther; Nieto Escamez, Francisco A.; Joosten, Ruud NJMA.; Visser, Rein; Mulder,
Antonius B.; Pennartz, Cyriel MA. Neural coding of reward magnitude in the orbitofrontal cortex of
the rat during a five-odor olfactory discrimination task. Learn. Mem 2007;14(6):446–456. [PubMed:
17562896]
36. Evarts EV. Relation of pyramidal tract activity to force exerted during voluntary movement. J
Neurophysiol 1968;31:14–27. [PubMed: 4966614]
37. Evarts EV. Activity of pyramidal tract neurons during postural fixation. J Neurophysiol 1969;32:375–
385. [PubMed: 4977837]
38. Humphrey DR, Schmidt EM, Thompson WD. Predicting measures of motor performance from
multiple cortical spike trains. Science 1970;179:758–762. [PubMed: 4991377]
39. Schmidt EM, Jost RG, Davis KK. Reexamination of the force relationship of cortical cell discharge
patterns with conditioned wrist movements. Brain Res 1975;83:213–223. [PubMed: 1109294]
40. Thach WT. Correlation of neural discharge with pattern and force of muscular activity, joint position,
and direction of intended next movement in motor cortex and cerebellum. J Neurophysiol
1978;41:654–676. [PubMed: 96223]
41. Hepp-Reymond MC, Wyss UR, Anner R. Neuronal coding of static force in the primate motor cortex.
J Physiol 1978;74:287–291.
42. Hamada I, Kubota K. Monkey pyramidal tract neurons and changes of movement parameters in visual
tracking. Brain Res. Bull 1979;4:249–257. [PubMed: 111780]
43. Cheney PD, Fetz EE. Functional classes or primate corticomotoneuronal cells and their relation to
active force. J Neurophysiol 1980;44:773–791. [PubMed: 6253605]
44. Fetz EE, Finocchio DV, Baker MA, Soso MJ. Sensory and motor responses of precentral cortex cells
during compatible passive and active joint movements. J Neurophysiol 1971;43:1070–1089.
[PubMed: 6766994]
45. Georgopoulos AP, Schwartz AB, Kettner RE. Neuronal population coding of movement direction.
Science 1986;233:1416–1419. [PubMed: 3749885]
46. Laubach M, Wessberg J. Cortical ensemble activity increasingly predicts behavior outcomes during
learning of a motor task. Nature 2000;405:567–571. [PubMed: 10850715]
47. Reina GA, Moran DW, Schwartz AB. On the relationship between joint angular velocity and motor
cortical discharge during reaching. J Neurophysiol 2001 Jun;85(6):2576–2589. [PubMed: 11387402]
48. Schwartz AB, Moran DW. Arm trajectory and representation of movement processing in motor
cortical activity. Eur J Neurosci 2000 Jun;12(6):1851–1856. [PubMed: 10886326]
49. Gnadt JW, Mays LE. Neurons in monkey parietal area LIP are tuned for eye-movement parameters
in three-dimensional space. J Neurophysiol 1995;73(1):280–297. [PubMed: 7714572]
50. Pesaran B, Nelson MJ, Andersen RA. Dorsal premotor neurons encode the relative position of the
hand, eye, and goal during reach planning. Neuron 2006 Jul;51(1):125–134. [PubMed: 16815337]
51. Henn V, Cohen B. Coding of information about rapid eye movements in the pontine reticular
formation of alert monkeys. Brain Res 1976;108:307–325. [PubMed: 819098]
52. Buettner, U.; Hepp, K.; Henn, V. Neurons in the rostral mesencephalic and paramedian pontine
reticular formation generating fast eye movements. In: Baker, R.; Berthoz, A., editors. Control of
Gaze by Brain Stem Neurons. Amsterdam: Elsevier; 1977. p. 309-318.
53. Schlag, J.; Schlag-Rey, M. Visuomotor properties of cells in cat thalamic internal medullary lamina.
In: Baker, R.; Berthoz, A., editors. Control of Gaze by Brain Stem Neurons. Amsterdam: Elsevier;
1977. p. 453-462.
54. Georgopoulos AP, Massey JT. Cognitive spatial-motor processes. Exp Brain Res 1987;65:361–370.
[PubMed: 3556464]
55. Kosslyn, Stephen M. Aspects of a cognitive neuroscience of mental imagery. Science 1988;240:1621–
1626. [PubMed: 3289115]
56. Georgopoulos AP, Lurito JT, Petrides M, Schwartz AB, Massey JT. Mental rotation of the neuronal
population vector. Science 1989;243:234–236. [PubMed: 2911737]
57. Decety, Jean. Do imagined and executed actions share the same neural substrate? Cog Brain Res
1996;3:87–93.
58. McFarland DJ, Miner LA, Vaughan TM, Wolpaw JR. Mu and beta rhythm topographies during motor
imagery and actual movements. Brain Topogr 2000;12:177–186. [PubMed: 10791681]
59. Siok, Wai Ting; Jin, Zhen; Fletcher, Paul; Tan, Li Hai. Distinct brain regions associated with syllable
and phoneme. Human Brain Mapping 2003;18:201–207. [PubMed: 12599278]
60. Schalk, G.; Anderson, N.; Wisneski, K.; Kim, W.; Smyth, MD.; Wolpaw, JR.; Barbour, DL.;
Leuthardt, EC. Program No. 414.11. 2007 Abstract Viewer/Itinerary Planner. Washington, DC:
Society for Neuroscience; 2007. Toward brain-computer interfacing using phonemes decoded from
electrocorticography activity (ECoG) in humans. Online
61. Decety, Jean; Chaminade, Thierry. Neural correlates of feeling sympathy. Neuropsychologia
2003;41:127–138. [PubMed: 12459211]
62. Jackson, Philip L.; Meltzoff, Andrew N.; Decety, Jean. How do we perceive the pain of others? A
window into the neural processes involved in empathy. Neuroimage 2005;24:771–779. [PubMed:
15652312]
63. Bush, Vannevar. As we may think. The Atlantic Monthly. 1945
64. National Institute on Deafness and Other Communication Disorders. Neural prosthesis development
– current progress reports. 2005.
http://www.nidcd.nih.gov/funding/programs/npp/neuralprostheses_reports.asp
65. Zeng FG. Trends in cochlear implants. Trends Amplif 2004;8(1):1–34. [PubMed: 15247993]
66. Liu X, McPhee G, Seldon HL, Clark GM. Histological and physiological effects of the central auditory
prosthesis: surface versus penetrating electrodes. Hearing Research 1997;114(1–2):264–274.
[PubMed: 9447940]
67. Hetling JR, Baig-Silva MS. Neural prostheses for vision: designing a functional interface with retinal
neurons. Neurol Res 2004;26(1):21–34. [PubMed: 14977054]
68. Hallum LE, Suaning GJ, Lovell NH. Contribution to the theory of prosthetic vision. ASAIO J 2004;50
(4):392–396. [PubMed: 15307555]
69. Lakhanpal RR, Yanai D, Weiland JD, Fujii GY, Caffey S, Greenberg RJ, de Juan E Jr, Humayun MS.
Advances in the development of visual prostheses. Curr Opin Ophthalmol 2003;14(3):122–127.
[PubMed: 12777929]
70. Besch D, Zrenner E. Prevention and therapy in hereditary retinal degenerations. Doc Ophthalmol
2003;106(1):31–35. [PubMed: 12675483]
71. Farwell LA, Donchin E. Talking off the top of your head: toward a mental prosthesis utilizing event-related
brain potentials. Electroenceph Clin Neurophysiol 1988 December;70(6):510–523. [PubMed:
2461285]
72. Wolpaw JR, McFarland DJ, Neat GW, Forneris CA. An EEG-based brain-computer interface for
cursor control. Electroenceph Clin Neurophysiol 1991;78:252–259. [PubMed: 1707798]
73. Sutter EE. The brain response interface: Communication through visually-induced electrical brain
responses. J Microcomp App 1992;15:31–45.
74. McFarland, Dennis J.; Neat, GW.; Wolpaw, JR. An EEG-based method for graded cursor control.
Psychobiology 1993;21:77–81.
75. Pfurtscheller G, Flotzinger D, Kalcher J. Brain-computer interface – a new communication device
for handicapped persons. J Microcomp App 1993;16:293–299.
76. Birbaumer N, Ghanayim N, Hinterberger T, Iversen I, Kotchoubey B, Kubler A, Perelmouter J, Taub
E, Flor H. A spelling device for the paralysed. Nature 1999 March;398(6725):297–298. [PubMed:
10192330]
77. Kubler A, Kotchoubey B, Hinterberger T, Ghanayim N, Perelmouter J, Schauer M, Fritsch C, Taub
E, Birbaumer N. The Thought Translation Device: a neurophysiological approach to communication
in total motor paralysis. Exp Brain Res 1999 January;124(2):223–232. [PubMed: 9928845]
78. Kennedy PR, Bakay RA, Moore MM, Goldwaithe J. Direct control of a computer from the human
central nervous system. IEEE Trans Rehabil Eng 2000 June;8(2):198–202. [PubMed: 10896186]
79. Leuthardt EC, Schalk G, Wolpaw JR, Ojemann JG, Moran DW. A brain-computer interface using
electrocorticographic signals in humans. J Neural Eng 2004;1(2):63–71. [PubMed: 15876624]
80. Taylor DM, Tillery SI, Schwartz AB. Direct cortical control of 3D neuroprosthetic devices. Science
2002;296:1829–1832. [PubMed: 12052948]
81. Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP. Instant neural control of a
movement signal. Nature 2002;416(6877):141–142. [PubMed: 11894084]
82. Carmena JM, Lebedev MA, Crist RE, O’Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez
CS, Nicolelis MA. Learning to control a brain-machine interface for reaching and grasping by
primates. PLoS Biology 2003;1(2):193–208.
83. Wolpaw JR, Birbaumer N, Heetderks WJ, McFarland DJ, Peckham PH, Schalk G, Donchin E,
Quatrano LA, Robinson CJ, Vaughan TM. Brain-computer interface technology: a review of the first
international meeting. IEEE Trans Rehabil Eng 2000 June;8(2):164–173. [PubMed: 10896178]
84. Wolpaw JR, McFarland DJ, Vaughan TM. Brain-computer interface research at the Wadsworth
Center. IEEE Trans Rehabil Eng 2000 June;8(2):222–226. [PubMed: 10896194]
85. Pfurtscheller G, Guger C, Muller G, Krausz G, Neuper C. Brain oscillations control hand orthosis in
a tetraplegic. Neurosci Lett 2000 October;292(3):211–214. [PubMed: 11018314]
86. Kubler A, Kotchoubey B, Kaiser J, Wolpaw JR, Birbaumer N. Brain-computer communication:
unlocking the locked in. Psychol Bull 2001 May;127(3):358–375. [PubMed: 11393301]
87. Van Orman Quine, Willard. Word and Object (Studies in Communication). MIT Press; 1964.
88. Schalk G, Brunner P, Gerhardt LA, Bischof H, Wolpaw JR. Brain-computer interfaces (BCIs):
Detection instead of classification. J Neurosci Meth 2008;167:51–62.
89. Vidal JJ. Real-time detection of brain events in EEG. Proceedings of the IEEE 1977;65:633–
641.
90. Elbert T, Rockstroh B, Lutzenberger W, Birbaumer N. Biofeedback of slow cortical potentials. i.
Electroencephalogr Clin Neurophysiol 1980 Mar;48(3):293–301. [PubMed: 6153348]
91. Wolpaw JR, McFarland DJ. Control of a two-dimensional movement signal by a noninvasive brain-computer
interface in humans. Proc Nat Acad Sciences 2004;101(51):17849–17854.
92. Kübler A, Nijboer F, Mellinger J, Vaughan TM, Pawelzik H, Schalk G, McFarland DJ, Birbaumer
N, Wolpaw JR. Patients with ALS can use sensorimotor rhythms to operate a brain-computer
interface. Neurology 2005 May;64(10):1775–1777. [PubMed: 15911809]
93. Vaughan TM, McFarland DJ, Schalk G, Sarnacki WA, Krusienski DJ, Sellers EW, Wolpaw JR. The
Wadsworth BCI research and development program: at home with BCI. IEEE Trans Neur Syst
Rehabil Eng 2006 Jun;14(2):229–233.
94. Müller KR, Blankertz B. Toward noninvasive brain-computer interfaces. IEEE Signal Processing
Magazine 2006;23(5):126–128.
95. Freeman WJ, Holmes MD, Burke BC, Vanhatalo S. Spatial spectra of scalp EEG and EMG from
awake humans. Clin Neurophysiol 2003;114:1053–1068. [PubMed: 12804674]
96. Srinivasan R, Nunez PL, Silberstein RB. Spatial filtering and neocortical dynamics: Estimates of
EEG coherence. IEEE Trans. Biomed. Eng 1998;45:814–826. [PubMed: 9644890]
97. Fetz EE, Finocchio DV. Operant conditioning of specific patterns of neural and muscular activity.
Science 1971 Oct;174(7):431–435. [PubMed: 5000088]
98. Kennedy PR, Bakay RA. Restoration of neural output from a paralyzed patient by a direct brain
connection. Neuroreport 1998;9:1707–1711.
99. Wessberg J, Stambaugh CR, Kralik JD, Beck PD, Laubach M, Chapin JK, Kim J, Biggs SJ, Srinivasan
MA, Nicolelis MA. Real-time prediction of hand trajectory by ensembles of cortical neurons in
primates. Nature 2000;408:361–365. [PubMed: 11099043]
100. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D,
Penn RD, Donoghue JP. Neuronal ensemble control of prosthetic devices by a human with
tetraplegia. Nature 2006 Jul;442(7099):164–171. [PubMed: 16838014]
101. Shain W, Spataro L, Dilgen J, Haverstick K, Retterer S, Isaacson M, Satzman M, Turner JN.
Controlling cellular reactive responses around neural prosthetic devices using peripheral and local
intervention strategies. IEEE Trans Neural Syst Rehabil Eng 2003;11:186–188. [PubMed:
12899270]
102. Donoghue JP, Nurmikko A, Friehs G, Black M. Development of neuromotor prostheses for humans.
Suppl Clin Neurophysiol 2004;57:592–606. [PubMed: 16106661]
103. Staba RJ, Wilson CL, Bragin A, Fried I, Engel J. Quantitative analysis of high-frequency oscillations
(80–500 hz) recorded in human epileptic hippocampus and entorhinal cortex. J Neurophysiol 2002
Oct;88(4):1743–1752. [PubMed: 12364503]
104. Loeb GE, Walker AE, Uematsu S, Konigsmark BW. Histological reaction to various conductive
and dielectric films chronically implanted in the subdural space. J Biomed Mater Res 1977 Mar;11
(2):195–210. [PubMed: 323263]
105. Bullara LA, Agnew WF, Yuen TG, Jacques S, Pudenz RH. Evaluation of electrode array material
for neural prostheses. Neurosurgery 1979 Dec;5(6):681–686. [PubMed: 160513]
106. Yuen TG, Agnew WF, Bullara LA. Tissue response to potential neuroprosthetic materials implanted
subdurally. Biomaterials 1987 Mar;8(2):138–141. [PubMed: 3555632]
107. Margalit E, Weiland JD, Clatterbuck RE, Fujii GY, Maia M, Tameesh M, Torres G, D’Anna SA,
Desai S, Piyathaisere DV, Olivi A, de Juan E, Humayun MS. Visual and electrical evoked response
recorded from subdural electrodes implanted above the visual cortex in normal dogs under two
methods of anesthesia. J Neurosci Methods 2003 Mar;123(2):129–137. [PubMed: 12606062]
108. Schalk G, Kubanek J, Miller KJ, Anderson NR, Leuthardt EC, Ojemann JG, Limbrick D, Moran
DW, Gerhardt LA, Wolpaw JR. Decoding two-dimensional movement trajectories using
electrocorticographic signals in humans. J Neural Eng 2007;4:264–275. [PubMed: 17873429]
109. Kubanek, J.; Miller, KJ.; Ojemann, JG.; Wolpaw, JR.; Schalk, G. Program No. 414.10. 2007 Abstract
Viewer/Itinerary Planner. Washington, DC: Society for Neuroscience; 2007. Decoding finger
movements from electrocorticographic signals (ECoG) in humans. Online
110. Burmeister, Jason J.; Pomerleau, Francois; Palmer, Michael; Day, Brian K.; Huettl, Peter; Gerhardt,
Greg A. Improved ceramic-based multisite microelectrode for rapid measurements of l-glutamate
in the CNS. J Neurosci Meth 2002;119:163–171.
111. Jones, Willie D. Fiber to the brain – polymer nanowires threaded through the bloodstream may be
a practical way to enter the cranium. 2005. http://www.spectrum.ieee.org/oct05/1910
112. Edrich J, Zhang T. Ultrasonically focused neuromagnetic stimulation. Proceeding of the Annual
Conference on Engineering in Medicine and Biology 1993;15(3):1253–1254.
113. Field AS, Ginsburg K, Lin JC. The effect of pulsed microwaves on passive electrical properties and
interspike intervals of snail neurons. Bioelectromagnetics 1993;14(6):503–520. [PubMed:
8297395]
114. Dalecki D, Child SZ, Raeman CH, Carstensen EL. Tactile perception of ultrasound. J Acoust Soc
Am 1995 May;97(5 Pt 1):3165–3170. [PubMed: 7759656]
115. Gavrilov LR, Tsirulnikov EM, Davies IA. Application of focused ultrasound for the stimulation of
neural structures. Ultrasound Med Biol 1996;22(2):179–192. [PubMed: 8735528]
116. Pfister BJ, Iwata A, Meaney DF, Smith DH. Extreme stretch growth of integrated axons. J Neurosci
2004 Sep;24(36):7978–7983. [PubMed: 15356212]
117. Loftus, David J.; Leng, Theodore; Fishman, Harvey. Retinal light processing using carbon
nanotubes. US patent. 6,755,530. 2004.
118. Kong, Jing; Franklin, Nathan R.; Zhou, Chongwu; Chapline, Michael G.; Peng, Shu; Cho,
Kyeongjae; Dai, Hongjie. Nanotube molecular wires as chemical sensors. Science 2000;287:622–
625. [PubMed: 10649989]
119. Mendoza E, Borowiak-Palen E, Sharpe K, de Silva SGM. Multiwalled carbon nanotubes as platforms
for the design of biosensors. NSTI-Nanotech 2005;1:426–429.
120. Peterman, Mark C.; Noolandi, Jaan; Blumenkranz, Mark S.; Fishman, Harvey A. Localized chemical
release from an artificial synapse chip. Proc Natl Acad Sci USA 2004;101(27):9951–9954.
[PubMed: 15218102]
121. Peterman MC, Mehenti NZ, Bilbao KV, Lee CJ, Leng T, Noolandi J, Bent SF, Blumenkranz MS,
Fishman HA. The artificial synapse chip: a flexible retinal interface based on directed retinal cell
growth and neurotransmitter stimulation. Artificial Organs 2003;27:975–985. [PubMed: 14616516]
122. Aravanis AM, Wang LP, Zhang F, Meltzer LA, Mogri MZ, Schneider MB, Deisseroth K. An optical
neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic
technology. J Neural Eng 2007;4:143–156.
123. James CD, Spence AJ, Dowell-Mesfin NM, Hussain RJ, Smith KL, Craighead HG, Isaacson MS,
Shain W, Turner JN. Extracellular recordings from patterned neuronal networks using planar
microelectrode arrays. IEEE Trans BioMed Eng 2004;51(9):1640–1648. [PubMed: 15376512]
124. Davidson PR, Wolpert DM. Widespread access to predictive models in the motor system: a short
review. J Neural Eng 2005 Sep;2(3):313–319.
125. Pesaran B, Pezaris JS, Sahani M, Mitra PP, Andersen RA. Temporal structure in neuronal activity
during working memory in macaque parietal cortex. Nat Neurosci 2002 Aug;5(8):805–811.
[PubMed: 12134152]
126. Shannon, CE.; Weaver, W. The Mathematical Theory of Communication. Urbana: University of
Illinois Press; 1964.
127. The Editors of Scientific American. The Scientific American Book of the Brain. Vol. 3. New York:
Scientific American; 1999.
128. Wade, Nicholas, editor. The Science Times Book of the Brain. Vol. 150. New York: The Lyons
Press; 1998.
129. Katz, Laurence M.; Chang, Anne, editors. Magill’s Medical Guide. Salem Press; 2005.
130. Rieke, Fred; Warland, David; de Ruyter van Steveninck, Rob; Bialek, William. Spikes: Exploring
the Neural Code. Cambridge, MA: MIT Press; 1999.
131. Eliasmith C. Is the brain analog or digital? Cognitive Science Quarterly 2000;1(2)
132. Borst, Alexander; Theunissen, Frederic E. Information theory and neural coding. Nature Neurosci
1999;2(11):947–957. [PubMed: 10526332]
133. Kurzweil, Ray. The Singularity Is Near : When Humans Transcend Biology. Viking Adult. 2005
134. Gray, Jim; Szalay, Alexander S. Where the rubber meets the sky, giving access to science data. 2005.
http://research.microsoft.com/~Gray
135. Gleskova, H.; Wagner, S. Fabrication of thin-film transistors on polyimide foils. In: Mittal, KL.,
editor. Polyimides and Other High Temperature Polymers: Synthesis, Characterization and
Applications. Vol. 2. Utrecht, The Netherlands; 2003. p. 459-465.
136. Lacour, Stephanie Perichon; Huang, Zhenyu; Suo, Zhigang; Wagner, Sigurd. Stretchable gold
conductors on elastomeric substrates. Applied Physics Letters 2003;82:2404–2406.
137. Wagner, Sigurd; Gleskova, Helena. Digest of Technical Papers, Korean Information and Display
Society. Seoul; 2002. Silicon thin-film transistors on flexible foil substrates; p. 263-267.
138. Sturm, JC.; Hsu, PI.; Huang, M.; Gleskova, H.; Miller, S.; Darhuber, A.; Wagner, S.; Suo, Z.; Troian,
S. Technologies for large-area electronics on deformable substrates. In: Claeys, CL.; Gonzales, F.;
Murota, J.; Saraswat, K., editors. ULSI Process Integration II. Proc. Electrochemical Soc. 2001-2;
2001. p. 506-517.
139. Wagner, Sigurd; Gleskova, Helena; Cheng, I-Chun; Wu, Ming. Thin-film transistors and flexible
electronics. In: Bergmann, Ralf B., editor. Growth, Characterization and Electronic Applications
of S-based Thin Films. Trivandrum: Research Signpost; 2002. p. 1-14.
140. Transmeta, Inc. Crusoe processor model tm5800 features. 2001.
http://www.charmed.com/PDF/TM5800.pdf
141. Honda H, Shiba K, Shu E, Koshiji K, Murai T, Yana J, Masuzawa T, Tatsumi E, Taenaka Y, Takano
H. Study on lithium-ion secondary battery for implantable artificial heart. Proceedings of the IEEE/
EMBS 1997:2315–2317.
142. Forret, Peter. Bandwidth chart. 2007. http://web.forret.com/tools/bandwidth_chart.asp
143. Wikipedia. List of device bandwidths. 2007.
http://en.wikipedia.org/wiki/List_of_device_bandwidths
144. Rogers, Everett M. Diffusion of Innovations. Free Press; 2003.
145. Lawrence, Stacy. Digital media make their mark. Technology Review. 2005
146. Griliches Z. Hybrid corn: An exploration in the economics of technological change. Econometrica
1957;25:501–522.
147. Mansfield, E. Industrial Research and Technological Innovation. New York: Norton; 1968.
148. Helpman, Elhanan, editor. General Purpose Technologies and Economic Growth. Cambridge,
Massachusetts: MIT Press; 1998.
149. Indjikian, Rouben; Siegel, Donald. The impact of investment in IT on economic performance:
Implications for developing countries. World Development 2005;33(5):681–700.
150. Mansfield, Edwin; Rapoport, John; Romeo, Anthony; Wagner, Samuel; Beardsley, George. Social
and private rates of return from industrial innovations. The Quarterly Journal of Economics 1977;91
(2):221–240.
151. Tewksbury JG, Crandall MS, Crane WE. Measuring the societal benefits of innovation. Science
1980;209(4457):658–662. [PubMed: 17821174]
152. Clausen J. Ethical aspects of brain-computer interfacing in neuronal motor prostheses. International
Review of Information Ethics 2006;5:26–32.
153. Merkel, R.; Boer, G.; Fegert, J.; Galert, T.; Hartmann, D.; Nuttin, B.; Rosahl, S. Intervening in the
brain: Changing psyche and society. Ethics of Science and Technology Assessment. Vol. 29.
Heidelberg: Springer; 2007.



Figure 1. The systems problem

The brain can process information from many different sources in parallel (much computational
breadth; horizontal arrows), but it is fairly slow in executing any particular algorithm (little
computational depth). In contrast, the computer typically processes information from only a few
sources (little computational breadth), but it is extremely fast at executing any particular algorithm
(much computational depth; vertical arrow). In addition, the communication speed between
the brain and the external world (indicated by the thin red communication pipe) is slow.



Figure 2.

Comparison of communication rates between humans and the external world (sources: [7,8]).
(a) Speech received auditorily; (b) speech received visually using lip reading supplemented
by cues; (c) Morse code received auditorily; (d) Morse code received through vibrotactile
stimulation.


Figure 3. The communication process

Semantically rich representations in the brain are translated into syntactic keywords that are void
of any semantics and encoded into motor actions that are transmitted to and detected by a computer.
The reverse process takes place in the computer without restoration of the original semantic
relationships.

Figure 4. Example performance increases representative of many technical developments
These examples illustrate the exponential growth in the number of transistors in Intel
microprocessors (source: http://www.intel.com) and in the total number of CCD elements in the
world’s best telescopes (source: Jim Gray and Alexander S. Szalay [134]).

Figure 5. Expected performance/price development and associated technology diffusion
Based on historical examples, the performance and price of brain-computer interfacing
technologies can be expected to improve (A). These devices will begin to be adopted by
different user groups as their price and performance make them attractive to each group (B).

Figure 6. Increasing research activity in Brain-Computer Interface (BCI) research
This figure illustrates the rapid increase in research activity (number of peer-reviewed
papers) over the past 15 years. Results were collected from relevant databases and represent the
subset of research activity that studies communication using a new language (Section 2.2). (Values
for 2007 are extrapolated.)
Researchers at Caltech have developed a mobile, four-wheeled robot that could help refine artificial retinas and other prostheses used by the visually impaired.
At first glance, Cyclops resembles a bot you might find on the battlefield, and it's hard to imagine what connection it could have to restoring sight. But dig a little deeper and it starts to make sense that a remote-controlled robot with an onboard camera could deliver some very useful data.
The digital camera can emulate left-to-right and up-and-down head movements. The idea is that as artificial vision prostheses increasingly become a reality, scientists could use the mobile robotic platform to mimic those devices--and more importantly, to get a better sense of how well they work for people who wear them.
The researchers might do that by asking the robot outfitted with an artificial vision aid to navigate obstacles in a corridor or follow a black line down a white-tiled hallway to see if it can find--and enter--a darkened doorway. All the while, they could try out different pixel arrays (say 50 pixels vs. 16 pixels), as well as image filters (for factors such as contrast, brightness enhancement, and grayscale equalization) to venture an educated guess as to what settings maximize a subject's sight.
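A simple sketch of what such a simulation might look like (my illustration, not the Caltech team's software; the grid sizes and frame source are assumptions): block-averaging a camera frame down to a 16-pixel "implant view."

    # Downsample a grayscale frame to a low-pixel 'implant view' (4x4 = 16 px).
    import numpy as np

    def implant_view(frame, grid=(4, 4)):
        h, w = frame.shape
        gh, gw = grid
        cropped = frame[:h - h % gh, :w - w % gw]   # trim so blocks divide evenly
        return cropped.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))

    frame = np.random.rand(480, 640)     # stand-in for one camera frame
    print(implant_view(frame))           # the 16 brightness values 'seen'

Swapping in a different grid, say (5, 10), would give the 50-pixel view mentioned in the comparison above.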

Wolfgang Fink
(Credit: Caltech)
But "we're not quite at that stage yet," researcher Wolfgang Fink says of such independent maneuvering. Fink is a visiting associate in physics at Caltech in Pasadena, Calif., and founder of the school's Visual and Autonomous Exploration Systems Research Laboratory, where where he and Caltech visiting scientist Mark Tarbell are collaborating on Cyclops with the support of a grant from the National Science Foundation.
The pair designed and built the body of the battery-operated rover using off-the-shelf parts, then furnished it with an onboard computing platform that allows for processing and manipulating images in real time using software they created called "Artificial Vision Support System."
Cyclops, so named because it's monocular, is about 12 inches wide by 12 inches long and 10 inches tall (the camera can be mounted on a mast to make Cyclops the height of an average person). It weighs about 15 pounds, Fink estimates, and can move at an "expedited walking speed" of about 2 to 3 feet per second.
For now, the platform itself is controlled remotely, via a joystick, and can be operated through a wireless Internet connection. "We have the image-processing algorithms running locally on the robot's platform," Fink says, "but we have to get it to the point where it has complete control of its own responses."
Once that's done, he adds, "we can run many, many tests without bothering the blind prosthesis carriers."
No fancy camera needed
The Cyclops camera is basic--an inexpensive consumer FireWire model. And that does the job just fine.
"Current retinal implants have anywhere from 16 to 50-plus pixels, whereas any cheap camera has a quarter million or more," explains Fink, who in addition to his work at Caltech is a professor of microelectronics at the University of Arizona. "Any camera will by far surpass the resolution of an implant." The only thing that's really important is that the camera produces images at a good clip--say, 30 frames per second.
Scientists worldwide--including Fink and Tarbell, who participated in the U.S. Department of Energy's Artificial Retina Project--are working on electronic eye implants and other systems that let people with retinitis pigmentosa and age-related macular degeneration recognize objects and navigate through their environments unassisted.

MIT's prototype implant has a flexible substrate, power and data receiving coils, an electrode array, and a stimulator microchip. Cyclops could help scientists refine such devices.
(Credit: Shawn Kelly/MIT)
Retinal implants use miniature cameras to capture images, which are then processed and passed along to an electrode array in an implanted silicon chip.
In a prototype under development at MIT, users would wear special glasses fitted with a small camera that relays image data to a titanium-encased chip mounted on the outside surface of the eyeball. The chip would then fire an electrode array under the retina to stimulate the optic nerve. The glasses would also wirelessly transmit power to coils surrounding the eyeball.
The DOE estimates that fewer than 40 people around the world have been implanted with artificial retinas. They include a 50-year-old New York woman with a progressive blinding disease who in June was implanted with an experimental device made by Sylmar, Calif.-based Second Sight. The surgery, which was conducted by a team from NewYork-Presbyterian Hospital/Columbia University Medical Center, has partially restored the woman's vision, according to the hospitals.
But designing implants and other visual enhancements poses unique design challenges. Chief among them: how can you measure the enhancements if you can't see what the person wearing them sees?
Next best thing
The Cyclops system offers an alternative to repeatedly testing the few people implanted with artificial retinas, or to having subjects with healthy retinas gauge low-resolution images on a computer monitor or head-mounted display (an approach that produces a less realistic picture, according to Fink).
"A sighted person's objectivity is impaired," he says. "They may not be able to get to the level of what a blind person truly experiences...The next best thing to actually using a blind person is having a machine where you can dictate what the visual input is for navigation."
Fink and Tarbell--who detail their work in an upcoming issue of the journal Computer Methods and Programs in Biomedicine--have filed a provisional patent on the Cyclops technology on behalf of Caltech. The pair has not yet used Cyclops to get feedback from someone with a real implant, but hope to do so in the near future.


Read more: http://news.cnet.com/8301-17938_105-10378593-1.html

Science Now, Electronics That Obey Hand Gestures
By ASHLEE VANCE
LAS VEGAS — The technology industry is going retro — moving away from remote controls, mice and joysticks to something that arrives without batteries, wires or a user manual.
It’s called a hand.
In the coming months, the likes of Microsoft, Hitachi and major PC makers will begin selling devices that will allow people to flip channels on the TV or move documents on a computer monitor with simple hand gestures. The technology, one of the most significant changes to human-device interfaces since the mouse appeared next to computers in the early 1980s, was being shown in private sessions during the immense Consumer Electronics Show here last week. Past attempts at similar technology have proved clunky and disappointing. In contrast, the latest crop of gesture-powered devices arrives with a refreshing surprise: they actually work.
“Everything is finally moving in the right direction,” said Vincent John Vincent, the co-founder of GestureTek, a company that makes software for gesture devices.
Manipulating the screen with the flick of the wrist will remind many people of the 2002 film “Minority Report” in which Tom Cruise moves images and documents around on futuristic computer screens with a few sweeping gestures. The real-life technology will call for similar flair and some subtlety. Stand in front of a TV armed with a gesture technology camera, and you can turn on the set with a soft punch into the air. Flipping through channels requires a twist of the hand, and raising the volume occurs with an upward pat. If there is a photo on the screen, you can enlarge it by holding your hands in the air and spreading them apart and shrink it by bringing your hands back together as you would do with your fingers on a cellphone touch screen.
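As a toy example of the spread-to-zoom gesture just described (my sketch, not GestureTek's or Microsoft's code; the coordinates stand in for 3-D hand positions from a depth camera): the zoom factor tracks how far the two hands have moved apart since the gesture began.

    # Zoom factor = current hand separation / separation when the gesture began.
    import math

    def zoom_factor(start_left, start_right, left, right):
        return math.dist(left, right) / math.dist(start_left, start_right)

    # Hands spread from 0.3 m apart to 0.6 m apart -> the image doubles in size.
    print(zoom_factor((0.0, 0, 0), (0.3, 0, 0), (-0.15, 0, 0), (0.45, 0, 0)))

The depth axis supplied by the 3-D camera is what lets the same arithmetic distinguish a deliberate spread from, say, one hand simply drifting toward the sensor.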
The gesture revolution will go mainstream later this year when Microsoft releases a new video game system known at this time as Project Natal. The gaming system is Microsoft’s attempt to one-up Nintendo’s Wii.
Where the Wii requires hypersensitive hand-held controllers to translate body motions into on-screen action, Microsoft’s Natal will require nothing more than the human body. Microsoft has demonstrated games like dodge ball where people can jump, hurl balls at opponents and dart out of the way of incoming balls using natural motions. Other games have people contorting to fit through different shapes and performing skateboard tricks.
Just as Microsoft’s gaming system hits the market, so should TVs from Hitachi in Japan that will let people turn on their screens, scan through channels and change the volume on their sets with simple hand motions. Laptops and other computers should also arrive later this year with built-in cameras that can pick up similar gestures. Such technology could make today’s touch-screen tools obsolete as people use gestures to control, for instance, the playback or fast-forward of a DVD.
To bring these gesture functions to life, device makers needed to conquer what amounts to one of computer science’s grand challenges. Electronics had to see the world around them in fine detail through tiny digital cameras. Such a task meant giving a TV, for example, a way to identify people sitting on a couch and to recognize a certain hand wave as a command and not a scratching of the nose.
Little things like the sun, room lights and people’s annoying habit of doing the unexpected stood as just some of the obstacles companies had to overcome.
GestureTek, with offices in Silicon Valley and Ottawa, has spent a quarter-century trying to perfect its technology and has enjoyed some success. It helps TV weather people, museums and hotels create huge interactive displays.
This past work, however, has relied on limited, standard cameras that perceive the world in two dimensions. The major breakthrough with the latest gesture technology comes through the use of cameras that see the world in three dimensions, adding that crucial layer of depth perception that helps a computer or TV recognize when someone tilts their hand forward or nods their head.
Canesta, based in Sunnyvale, Calif., has spent 11 years developing chips to power these types of 3-D cameras. In the early days, its products were much larger than an entire desktop computer. Today, the chip takes up less space than a fingernail. “We always had this grand vision of being able to control electronics devices from a distance,” said Cyrus Bamji, the chief technology officer at Canesta. Competition in the gesture field has turned fierce as a result of the sudden interest in the technology. In particular, Canesta and PrimeSense, a Tel Aviv start-up, have fought to supply the 3-D chips in Microsoft’s Natal gaming system.
At last week’s Consumer Electronics Show in Las Vegas, executives and engineers from Canesta and GestureTek were encamped in suites at the Hilton near the main conference show floor as they shuttled executives from Asian electronics makers in and out of their rooms for secretive meetings.
Similarly, PrimeSense held invitation-only sessions at its tiny, walled-off booth and forbade any photos or videos of its products.
In one demonstration, a camera using the PrimeSense chip could distinguish among multiple people sitting on a couch and even tell the difference between a person’s jacket, shirt and under-shirt. And with such technology it’s impossible, try as you might, to lose your remote control.
The symbionic mind will be used to channel wireless, virtual-reality information directly to the cortex, bypassing conventional sensory channels.
It will eventually be possible to build sophisticated intelligence amplifiers that will be internal extensions of our brains, significantly more powerful than present-day computers, and which may even be directly wired to the brain for both input and output. The symbionic mind will be used to channel wireless, virtual-reality information directly to the cortex, bypassing existing sensory channels. The result would be virtual reality experiences in cyberspace creating seamless alternate realities indistinguishable from reality. It is also probable that, by the time this technology is readily available, such implants will be mandatory for all world citizens.
SECOND DECADE SYMBIONICS AND BEYOND
Journal of Evolution and Technology Vol. 8 - March 2002 - PDF Version
http://jetpress.org/volume8/symbionics.html
Glenn F. Cartwright
glenn.cartwright@mcgill.ca
Adam B. A. Finkelstein
adam.finklestein@mcgill.ca
Department of Educational and Counselling Psychology
McGill University
Montreal, Canada
Based on a paper presented at the Ninth General Assembly of the World Future Society,
Washington DC, July 31, 1999 ©2002 - Glenn F. Cartwright
ABSTRACT
Reviewing progress in the last decade towards the symbionic mind -- a sophisticated, direct, neural interface between the brain and the environment -- we speculate that in the future the symbionic mind will be used to channel wireless, virtual reality information directly to the cortex, bypassing conventional sensory channels. The result will be participation in virtual reality experiences in cyberspace creating seamless, alternate realities indistinguishable from reality. Such eventualities will inevitably lead to innovative altered states, fresh conscious perceptions, new experiences of the sublime, and the possible merging of human realities into a single consciousness, necessitating a redefinition of individuality. More exciting is the possibility of real-time feedback from the cortex through the symbionic mind to constantly tailor virtual reality experiences. Might the functions of our existing nervous system eventually be superseded by the symbionic mind, changing what it means to be human and creating a virtual "guardian angel" to guide us through the new millennium?
SECOND DECADE SYMBIONICS AND BEYOND
At the First Global Conference on the Future, held in Toronto in July 1980, the idea of symbionic minds was first presented. In the original paper (Cartwright, 1980a) and in subsequent papers (Cartwright, 1980b; 1983a; 1988; 1989), intelligence amplifiers were envisioned that would be connected to human brains, capable of independent, intelligent action, and existing symbiotically with us.
Such sophisticated devices would be significantly more powerful than present-day computers and would be wired directly or indirectly to the cortex for both input and output. These brain prostheses would amplify and strengthen all the intellectual abilities we now take for granted as comprising intelligent human activity. They would be called "symbionic" minds (from the words symbiotic + bionic) because of the close, interdependent relationships that would almost certainly exist between them and us, and because they would make us, to some degree, bionic.

It is the design and development of such brain-computer interfaces that comprises the new science of "symbionics". Originally conceived as comprising four independent research areas, the concept now embraces the following seven:
1. emgors,
2. brain pacemakers or cerebellar stimulators,
3. biocybernetic communication,
4. neurometrics,
5. artificial intelligence,
6. biotechnology, and
7. virtual reality.


Figure 1 - The Puzzle of Symbionics

1. EMGORS
The first of these is the development of "emgors" (electromyogram sensors), which are now used to enable amputees to control artificial limbs in an almost natural manner. The aim of this research is to create artificial limbs that respond to the will of the patient by finding in the stump of the severed limb the brain's own natural impulse, called the myoelectric signal or electromyogram (EMG), improving it through amplification or other means, and using it to control electromechanical devices in the prosthetic appliance. An obvious use would be to have it control an artificial limb called a myoelectric arm (Glass, 1986).
Remarkable progress in engineering has transformed the crude prosthetic arm into a fully functional artificial replacement.
The Leverhulme Oxford Southampton Hand has been developed at the Oxford Orthopaedic Engineering Centre as a myoelectric replacement arm for amputees. It is designed to allow the patient adaptive control over hand functions in a prosthesis that resembles the natural model. The Southampton hand can perform many independent movements with a small amount of user input (Kyberd & Chappell, 1994).
Commercial companies are distributing myoelectric arms such as the Utah Arm from Motion Control Inc. This myoelectric arm has a near-natural look, feel, and use. The Utah Arm can pronate, supinate, be exchanged for other terminal devices and can operate on a standard, 9-volt battery. Muscular control of artificial devices is a current reality (Motion Control Inc, 1999).
In the future, the same principles may be used to benefit everyone by allowing us to control mentally an extensive assortment of useful devices.
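A minimal sketch of the emgor signal chain described above (assumptions throughout: the threshold, window length, and synthetic EMG trace are illustrative, not values from the cited work): the raw EMG is rectified, smoothed into an envelope, and thresholded to drive the prosthetic hand.

    # Rectify and smooth raw EMG, then threshold the envelope to drive the hand.
    import numpy as np

    THRESHOLD = 0.2                              # assumed activation level

    def envelope(raw_emg, window=50):
        rectified = np.abs(raw_emg)              # full-wave rectification
        return np.convolve(rectified, np.ones(window) / window, mode="same")

    def hand_state(raw_emg):
        return ["close" if v > THRESHOLD else "open" for v in envelope(raw_emg)]

    emg = np.random.randn(1000) * 0.1            # rest-level noise (synthetic)
    emg[400:600] += np.random.randn(200) * 0.8   # a burst of muscle activity
    print(hand_state(emg)[::100])                # mostly 'open'; 'close' mid-trace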

2. BRAIN PACEMAKERS
The second area of development is in brain pacemakers, or chronic cerebellar stimulators. These followed the creation of cardiac pacemakers and were based on research involving the electrical stimulation of the brain. Chronic cerebellar stimulation (CCS) has been used with children with spastic movements to help them achieve some measure of control over their muscle functions. Such mental pacemakers are now being used to prevent patients from falling into deep depressions, to avoid epileptic seizures, and to reduce intractable pain. Patients who suffer from psychosis and for whom chemotherapy has failed can be treated with CCS to help them on the path to normal behavior. The technique has been used with neurotics, schizophrenics, and others who have experienced the feelings of extreme anger often associated with psychosis or violent behavior (Heath, 1977). Other cerebellar stimulators have been implanted to minimize the spasticity and athetosis associated with cerebral palsy (Cooper et al., 1976). In the patients treated for cerebral palsy, significant improvements were noted in both cognition and memory (Cooper & Goldman, 1987). In addition, it has been suggested that other forms of brain stimulation (CSAT – chronic stimulation of the anterior nucleus of the thalamus) might be employed to reduce other syndromes such as Alzheimer's disease, autism, Huntington's chorea (Cooper & Upton, 1985), and obsessive-compulsive behavior (Cooper et al., 1985).
Partly related to cortical stimulation is the experimental work on electrical muscle stimulation, which permits electrical impulses to be fed directly to inactive muscles paralyzed by injured spinal cords (Petrofsky, Phillips, & Heaton, 1984; Petrofsky, Phillips & Stafford, 1984; Phillips & Petrofsky, 1984). (A 1985 TV-movie called "First Steps", starring Judd Hirsch and Amy Steel, popularized the research of bioengineer Dr. Jerrold Petrofsky of Wright State University, Dayton, Ohio, and his attempt to make student Nan Davis walk again.)
Deep brain stimulators have been used successfully for the treatment of Parkinson's disease. Patients with Parkinson's exhibit tremors in many areas of the body, associated with overactive cells deep inside the thalamus of the brain. One of the most effective recent treatments for these tremors is a deep brain implant: an electrode implanted in the thalamus constantly stimulates the overactive cells and inhibits them from firing. Instead of destroying cell tissue, these stimulators allow patients to function normally by reducing or eliminating the tremors caused by these abnormally overactive cells.
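At its core, such a stimulator simply delivers a continuous pulse train at a fixed rate. The Python sketch below is a conceptual illustration only: the 130 Hz default is a commonly cited stimulation frequency, but actual devices use clinician-tuned rates, pulse widths, and amplitudes.

    def stimulation_times(rate_hz=130.0, duration_s=1.0):
        """Timestamps (in seconds) of a constant-frequency pulse train,
        the basic output pattern of a deep brain stimulator. Delivered
        continuously, such high-frequency pulses override the irregular
        firing of the overactive cells."""
        period = 1.0 / rate_hz
        return [i * period for i in range(int(duration_s * rate_hz))]

    pulses = stimulation_times()
    print(len(pulses), "pulses in one second")  # 130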
Deep brain stimulation has been recommended as a viable treatment for Parkinson's disease, reducing tremors in nearly 80% of patients and yielding marked benefits without the adverse side effects common with medication (Kumar et al., 1998; Arle & Alterman, 1999).
The mere existence today of simple versions of such devices as brain and muscle stimulators to help alleviate specific medical conditions points the way to a potentially bright future for the more complex models of tomorrow.
3. BIOCYBERNETIC COMMUNICATION
In the third area of development, biocybernetic communication, experimental work is underway in an attempt to interpret brain wave patterns and link them to specific thoughts. In early work at the Stanford Research Institute, researchers were able to have a subject move a white dot around a computer screen merely by thinking about it (Pinneo et al., 1975). The subject's cortical activity was picked up by surface electrodes on the scalp, interpreted by a computer, and translated into corresponding actions on the screen. An obvious goal of biocybernetic communication would be to use thought to control a wide variety of appliances. It is now possible to harness thought to facilitate a broad variety of human activities, from controlling simple video game actions to controlling computers.
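In outline, such a system repeatedly estimates the power of a chosen EEG rhythm and maps deviations from a resting baseline onto screen movement. The Python sketch below is a minimal illustration under invented parameters; it uses a single crude DFT bin where a real system would use proper spectral estimation and per-subject calibration.

    import math

    def band_power(window, freq_hz, rate_hz):
        """Crude power estimate of one frequency component of an EEG
        window (a list of voltage samples) via a single DFT bin."""
        re = sum(s * math.cos(2 * math.pi * freq_hz * n / rate_hz)
                 for n, s in enumerate(window))
        im = sum(s * math.sin(2 * math.pi * freq_hz * n / rate_hz)
                 for n, s in enumerate(window))
        return (re * re + im * im) / len(window)

    def cursor_step(power, baseline, gain=5.0):
        """Translate deviation from the resting baseline into a one-
        dimensional cursor displacement, positive or negative."""
        return gain * (power - baseline)

    # Example: one second of a 10 Hz rhythm sampled at 128 Hz.
    window = [math.sin(2 * math.pi * 10 * n / 128) for n in range(128)]
    print(band_power(window, 10, 128))  # large compared with off-band bins

Calibration consists of recording the baseline power while the subject rests; thereafter, deliberately modulating the rhythm (for example, by imagining movement) drives the dot in one direction or the other.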
In your body:
Kevin Warwick of the Department of Cybernetics at the University of Reading in England claimed to be the world's first cyborg. In August of 1998, Professor Warwick underwent surgery to have a small glass-encased transponder (23 mm long and 3 mm in diameter) inserted under the skin of his arm.
This implant emitted radio signals to external devices, allowing Warwick, who hoped to become part machine himself, to interact with the machines around him. The silicon chip communicated with various computer receivers, identifying Warwick automatically. When he entered his home, he was personally greeted; room lights would turn on in his presence and off in his absence, along with other individualized effects (Cuen, 1998; McClimans, 1998; Witt, 1999). Warwick became a cyborg, part man and part machine, allowing for automatic, ubiquitous communication between the two.
Although interesting, Warwick’s implant did not directly relate to the development of the symbionic mind. The implant he received was merely an electronic beacon without intrinsically intelligent behavior. He may as well have carried an external ID card. Warwick could not assert control over this chip, nor did he have any direct impact on its operation.
On your body:
Although WearCam designer Steve Mann (originally at the Wearable Computing Project, MIT Media Laboratory (http://www.media.mit.edu/wearables/) and now at the University of Toronto Humanistic Intelligence Lab) developed methods of exporting his field of vision, this does not strictly constitute a symbionic mind. The extension of this work from wearable computing to its control by the human cortex, however, would constitute a definite step towards the creation of the symbionic mind. Already, Wearable Computing Project members at the MIT Media Laboratory are investigating the transmission of computer signals through the human body. The modification of these signals by the human brain would constitute a further step towards the symbionic mind.
On your head:
Any device which now exists would be intrinsically more useful were it under the direct control of the human brain (cf. Birch, 1989). This is the aim of Erich Sutter's Brain Response Interface (BRI) unit at the Smith-Kettlewell Institute of Visual Sciences in San Francisco (Sutter, 1990; 1992). The prototype device used four electrodes implanted in a patient's brain to determine which computer command the patient wanted executed. One configuration made available some 2,048 user-programmable control options (Rosenfeld, 1989; http://www.csun.edu/cod/94virt/wec~1.html). Success in this endeavor, of course, depends ultimately on deciphering the nerve code of mental activity.
Commercially, IBVA Technologies (http://www.ibva.com) has developed a method for harnessing signals from the brain and using them to control computer technology. The Interactive Brainwave Visual Analyzer (IBVA) is an interactive biofeedback system for brainwave control. The IBVA picks up electrical brain activity through a scalp monitor and can translate brainwave signals into any electronic signal, controlling mouse movements, game joysticks, buttons, and other electronic devices. Many recording artists have used the IBVA system to control MIDI synthesizers and digital audio mixers in order to create music with their minds. Others have used the IBVA system to control CD players in their homes (DeVito, 1999). By giving ordinary individuals direct brain control of computer devices, IBVA may have taken us one step closer toward the symbionic mind.
In your head:
Dr. Roy Bakay and Dr. Phillip Kennedy of Emory University have gone a step further. In the fall of 1998, Bakay successfully implanted a chip inside the head of a paralyzed patient. The patient, known as J.R., had suffered a stroke and, completely paralyzed, was unable to speak or move even though he retained his cognitive abilities. Bakay hypothesized that he could intercept J.R.'s brain signals and train him to use these redirected signals to control a computer. Bakay was successful not once, but twice. Using a high-resolution brain scan (MRI), Bakay identified a highly active area of J.R.'s motor cortex (Wiechman, 1998; Herberman, 1999). Bakay implanted two small cones that transformed chemical neural signals into radio transmissions, which were picked up by the computer. Each cone controlled one axis of movement in two dimensions (up-down and left-right), and the radio signal was used to control a cursor. Without the ability to move or speak, J.R. could type on an on-screen keyboard to communicate (Wiechman, 1998; Herberman, 1999). J.R. could thus communicate with others, albeit slowly, a feat not previously possible due to his paralysis.
Bakay's contribution to technologies advancing the symbionic mind demonstrates direct computer control from patterns of thought. J.R. exhibited a seemingly telekinetic ability to control a computer and use it to communicate with others. Brain control of computers is no longer limited to the realm of science fiction.
It is the extension of this kind of biocybernetic research which may result in mental communication between individuals and machines, and even between individuals, in a manner similar to telepathy but based on proven scientific principles and sophisticated technology.

4. NEUROMETRICS
In the associated area of neurometrics, the study of evoked-response potentials (EPs) in the cortex has produced interesting results. These are achieved by measuring minute voltage changes that are produced in response to a specific stimulus like a light, a bell, or a shock, but which are of such small amplitude as to not show up on a conventional electroencephalogram (EEG). An averaging computer sums the responses over time to make them stand out against the background noise. Since the background noise is random, it tends to be cancelled out. Through the use of this technique, it has now been established that the long-latency response known as the P300 wave (positive potential, 300 millisecond latency) is usually associated with decision-making activity (Lerner, 1984). Though the wave appears after each decision, it is often delayed when a wrong decision is made. Theoretically then, it should be possible to construct a device to warn us when we have made a bad decision, to alert us when we are not paying attention (a boon to air traffic controllers), or to monitor general states of awareness. It is also possible using EPs to distinguish motor responses from cognitive processes, and decision-making processes from action components (Taylor, 1979). As its objectivity (patient cooperation is not needed) and non-invasiveness come to be appreciated, more and more clinical applications of EPs are beginning to appear (Ziporyn, 1981a; 1981b; 1981c), and non-clinical applications are likely to arise as well.
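The averaging technique itself is simple enough to sketch in a few lines. The Python fragment below is illustrative only: it assumes stimulus-locked trials of equal length and an invented sampling rate, and it detects the P300 simply as the largest positive deflection in a plausible latency window.

    def average_evoked_response(trials):
        """Average time-locked EEG trials sample by sample. The evoked
        potential adds coherently across trials while the random
        background EEG averages toward zero, so the EP emerges from
        the noise as the trial count grows."""
        n = len(trials)
        return [sum(trial[t] for trial in trials) / n
                for t in range(len(trials[0]))]

    def p300_latency_ms(avg, rate_hz, start_ms=250, end_ms=500):
        """Find the positive peak in the window where the P300 is
        expected, returning its latency in milliseconds."""
        lo = int(start_ms * rate_hz / 1000)
        hi = int(end_ms * rate_hz / 1000)
        peak = max(range(lo, hi), key=lambda t: avg[t])
        return 1000.0 * peak / rate_hz

A delayed peak latency, in the spirit of the passage above, could then serve as the trigger for a bad-decision or inattention warning.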

5. ARTIFICIAL INTELLIGENCE
The fifth area is that of artificial intelligence, which includes the study of pattern recognition, problem solving, and speech comprehension with a view to reproducing these abilities in computers (Crevier, 1993). During the last decade, there has been a renewed interest in the study of neural nets to model cortical functions on computers (Pagels, 1988).
The field of artificial intelligence is pushing the boundaries of what science considers intelligence and can have a great impact on the development of the symbionic mind. Scientists such as Rodney Brooks of the MIT Artificial Intelligence Laboratory have been pioneers in the development of a more holistic, global AI. In classical AI, much research was devoted to building complex systems in very specific, non-realistic worlds. Expert systems were created that could not function outside their own domains of application: a chess-playing program, for example, could not converse about the weather. Classical AI had researched itself into a corner, no longer able to apply its "intelligent" creations to the real world. Brooks and other researchers recognized the severe limitation that classical AI had placed upon itself and brought forth a new, alternative view of artificial intelligence. This "Nouvelle AI" is based on the grounding hypothesis that "...to build a system that is intelligent, it is necessary to have its representations grounded in the real world" (Brooks, 1990).

In Nouvelle AI, simple creatures are constructed using real-world models. Instead of reducing intelligence to simple computer functions, Nouvelle AI assumes that intelligence is a combination of many behaviors, not a simple list of computer functions. If robots can perform simple, realistic, applicable behaviors, they are emulating the simple intelligence that exists in the real world. More and more AI systems and computer chips are using neural nets and fuzzy logic in order to control complex processes (Gould, 1995). Fuzzy logic systems are able to approach the world more holistically, dealing with real-world problems and ambiguities without reducing them to simple, non-realistic computer functions. Such applications of AI will add intelligent functions to the symbionic mind, embodying cognitive science research in the interface with the human mind.
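Fuzzy logic's tolerance of ambiguity can be shown in miniature. The Python sketch below is a toy example with invented membership functions and rule outputs: rather than switching a fan abruptly at a crisp temperature threshold, it blends overlapping "warm" and "hot" judgments into a graded response.

    def triangular(x, a, b, c):
        """Degree (0 to 1) to which x belongs to a fuzzy set that peaks
        at b and vanishes outside the interval (a, c)."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fan_speed(temperature_c):
        """Tiny fuzzy controller: weight each rule's recommended speed
        (0-100%) by how strongly its condition holds."""
        warm = triangular(temperature_c, 18, 25, 32)
        hot = triangular(temperature_c, 28, 40, 52)
        total = warm + hot
        if total == 0:
            return 0.0
        return (warm * 40 + hot * 90) / total

    print(fan_speed(30))  # roughly 58%: partly warm, partly hot

At 30 degrees the controller is neither fully "warm" nor fully "hot", and its output reflects that mixture instead of forcing a binary choice.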

6. BIOTECHNOLOGY
Of increasing importance is the work in the sixth area, biotechnology, sometimes referred to as genetic engineering. In small laboratories around the world, scientists are at work attempting to use genetic engineering principles to construct tiny biological microprocessors of protein, or "biochips" (Futuristic computer biochips..., 1981; McAuliffe, 1981; Posa, 1981; Whatever happened to molecular electronics?, 1981; Milch, n.d.; Schick et al., 1988). The advantage is that by using the techniques of recombinant DNA, very small devices (VSDs) can be assembled with great precision. As unbelievable as it sounds, such biochips may even be designed to assemble themselves, perhaps even in three-dimensional forms in the microgravity of outer space (McAlear, n.d.). If such biochips can be successfully constructed, it is likely they will have higher density and higher speed, and will consume less power, than conventional chips (Drexler, 1986). This in itself will be no mean achievement, continuing the reduction in circuit size below that of a living cell.
Successful though the silicon chip is, new circuits the size of molecules and smaller are already being developed which could significantly disrupt the silicon chip industry and ultimately lead to the creation of a molecular computer. Biochips would have a greater probability of successful implantation in the cortex due to their higher degree of biocompatibility. One company in America has received a grant from the National Science Foundation for a feasibility study of the creation of a direct interface between the central nervous system and an integrated circuit. Their initial plan called for increasing the number of effective electrodes from the 8 x 8 platinum array currently used in clinical trials to an array with 100,000 electrodes. The development of such technology will depend heavily on the use of an implanted integrated circuit and state-of-the-art microfabrication or nanotechnological techniques. The actual device is expected to consist of electrodes connected to an interface of cultured embryonic nerve cells which can grow three-dimensionally and attach themselves to mature nerve cells in the brain (EMV Associates, 1981; The next generation..., 1981). Ultimately, the provision of the appropriate set of genes could enable such a chip to repair itself; DNA codes could be used to program it, and enzymes to control it (Biotech..., 1981; Drexler, 1986). Already under development as a first step is a device called an "optrode", consisting of a polymer waveguide with a photovoltaic tip capable of photon-electron conversion. Research has been undertaken to study the feasibility of using such a tiny, photoconducting microelectrode to record the firing of a single neuron, or perhaps even to cause it to fire (McAlear & Wehrung, n.d.). Beyond recording the firing of a single neuron, the firing patterns of whole neuron cultures can now be monitored (Gross et al., 1985; Droge et al., 1986).
At the cellular level, researchers at the Max Planck Institute of Biochemistry have succeeded in creating bio-electronic circuits, a combination of living organic and inorganic materials (Zeck & Fromherz, 2001). The researchers interfaced snail neurons with small electronic chips and demonstrated that they could send signals from chips to neurons and back. Such work paves the way for the development of a successful interface between living human cells and electronic circuits.

7. VIRTUAL REALITY
Virtual Reality (VR) has received a lot of attention in the last decade. The term "Virtual Reality" was coined by Jaron Lanier, founder of VPL Research, the first company to build and produce products specifically designed for Virtual Reality systems (Lanier, 2001). Lanier envisioned VR as a virtual space where multiple users could share an experience. Other authors find this definition too simplistic. Cartwright (1994, p. 22) defines VR as "...the complete computer control of the senses. VR becomes a way of sensing / feeling / thinking." VR allows a computer to alter the human experience. Other authors such as Heim (1993) critique the use of the term Virtual Reality because both of its constituent terms are difficult (at best) to define. Many researchers use terms such as Artificial Reality, Augmented Reality, and Virtual Environment, all of which point to the computer mediation of the senses. The goal of virtual reality is to create alternate realities by manipulating sensory inputs and tricking the brain into believing them. Each of these sensory manipulations, though designed to contribute to the virtual reality experience, teaches us how better to manage sensory input to the cortex.
This mediation of the senses is one of the core elements of the symbionic mind. If the senses are mediated by the symbionic mind, any number of augmentations can take place. X-ray vision could be overlaid on the field of vision of engineers to discover structural problems. Enhanced auditory input could augment the sensory input of musicians and sound engineers. Improved gustatory input could serve wine tasters (and detect otherwise undetectable poisons), while improved olfactory input could help in detecting gas leaks, enhancing the appreciation of floral displays, and detecting bombs. Other enhancements could assist in people detection and recognition, food appreciation, and kinaesthetic augmentation.
The Birth of Symbionics
These seven areas have much in common. For the most part, they deal with the brain directly, with perceptual and thought processes individually, and with intellectual activity primarily. Like other media, they are steadily converging (Brand, 1988). Eventually, a merger will be effected culminating in a routine way of interfacing with the brain either directly using implanted (or grown in place) electrodes, or indirectly by picking up brain waves with external sensors (biocybernetic communication and neurometrics). When that happens, the symbionic mind will have been born.
The symbionic mind may be defined as any apparatus consisting of some useful device, interfaced with the human brain, capable of intelligent action. The most difficult task in its creation will be the design and construction of the interface required to link these devices to the human cortex. Such a complex interface will no doubt represent the major component of the symbionic mind, and the creation of a wide range of standard and optional accessories to attach to it will probably prove to be a comparatively easy task. Such auxiliary brain prostheses or symbionic minds are beginning to be used for appliance control (IBVA), computation, monitoring of particular body functions, problem-solving, data retrieval, general intelligence amplification, and inter- and intra-individual communication. The ultimate revolutionary advance may even be the direct, electronic transmission of human thought!
Symbionic Functions
The most obvious use for a symbionic mind would be to improve human memory. It is easy to see how people with failing memories might benefit from supplementary aids - in this case tiny mind prostheses or "add-on" brains with extra memory storage and better factual retrieval as well as improved procedural processing. Like a memory crutch for the brain, the symbionic mind could be invaluable, not only for patients with Alzheimer's disease but also for everyone else. The benefits in education would be enormous, not only for below average and average students but for the gifted as well (Cartwright, 1982; 1983b; 1983c).
Symbionic minds will do more than just improve memory, but as yet one can only speculate about their full range of uses. Because the symbionic mind will be able to interpret our thoughts, our very wishes will become its commands. Thus it will be able to take dictation directly from our thoughts, improve them through editing, and, like the voice-processors of today, rearrange whole paragraphs, perform spelling checks, and supervise the typing of final documents. To some degree, the human brain may be limited by its small number of input senses. But a symbionic mind connected to the brain to amplify its abilities, improve its skills, and complement its intelligence could be used to handle additional sensory inputs, to make low-level decisions about them, discarding irrelevant data, and to pass on more important information to the brain itself. In the future, it may be possible to build into the symbionic mind totally artificial senses and connect them directly to the brain. These artificial senses would simulate most of our existing senses but would bypass currently available receptor organs. Some of these might include components of our existing senses; others will be totally new, and the line distinguishing one sense from another may become increasingly blurred.
Exactly what these new senses will be and the uses to which we shall put them must remain, for the moment, in the realm of speculation. However, examples might include senses to detect currently invisible hazards like harmful levels of radiation or pollution in our immediate environment, or to detect television transmissions or Internet data and relay them directly to our brains without the aid of conventional monitors. TV sets and video monitors are merely converters: they convert signals we are unable to receive in our natural state into visual signals on the screen which can be input through our eyes. From the eyes, the signals are converted to electrochemical impulses and sent to the visual cortex for analysis. Imagine a small device which could receive signals but instead of displaying them on a video screen, could channel them directly to the human cortex. The sensation of "seeing" the pictures would still exist but one's eyes would be freed for watching other things. Such devices would not be limited to television and computers but might include radio and telephone reception as well. In all these instances, the normal sensory inputs of eyes and ears would be bypassed.
Preliminary work in this direction was undertaken some years ago at the University of Florida to find ways of implanting up to 100,000 miniature photovoltaic cells to stimulate previously unused parts of the retina in cases of retinal blindness. The Dobelle Institute (http://www.dobelle.com) has developed a visual device that uses neuro-stimulation to create artificial vision for the blind. Early developments of the technology are crude, allowing only differentiation between light and darkness; however, the implications of this development are far-reaching. It may soon be possible for science to bypass the eyes entirely and feed visual information (from a camera mounted on eyeglasses) directly to the cortex (Dobelle, 2000). Though the immediate medical goal is to produce a more effective visual prosthesis, the perfection of such a technology has much wider implications for everyone.
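The first stage of such a prosthesis, reducing a camera image to a coarse pattern of stimulation sites, can be sketched simply. The Python fragment below is illustrative only: it assumes a grayscale frame whose dimensions divide evenly by the grid size, and an invented brightness threshold, standing in for whatever mapping a real implant would use.

    def to_phosphene_grid(frame, grid=4, threshold=128):
        """Downsample a grayscale frame (a list of rows of 0-255 pixel
        values) to a coarse grid of on/off 'phosphenes', mirroring the
        light/dark discrimination of early visual prostheses."""
        h, w = len(frame), len(frame[0])
        cell_h, cell_w = h // grid, w // grid  # assumes even division
        result = []
        for gy in range(grid):
            row = []
            for gx in range(grid):
                block = [frame[y][x]
                         for y in range(gy * cell_h, (gy + 1) * cell_h)
                         for x in range(gx * cell_w, (gx + 1) * cell_w)]
                row.append(1 if sum(block) / len(block) > threshold else 0)
            result.append(row)
        return result

    # A fabricated 8x8 frame, bright only on its right edge.
    frame = [[255 if x > 4 else 0 for x in range(8)] for _ in range(8)]
    print(to_phosphene_grid(frame))

Each 1 in the output grid would correspond to a driven electrode, and hence to a perceived point of light in the user's visual field.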
In the auditory domain, patients at the Los Angeles Ear Research Institute have been fitted with electronic ear stimulators that stimulate the auditory nerves in an attempt to improve hearing. Called cochlear implants, the technology has been proven to help the profoundly deaf hear, and many recipients report that they are glad they had the implants and would not be without them.
On a more elementary level, the symbionic brain will provide a sophisticated interface between ourselves and a wide variety of household gadgets. The symbionic mind will provide a "thought switch" to enable us to control appliances merely by thinking about them, like the commercial products demonstrated by IBVA.
The symbionic brain will turn lights on and off for us, activate television devices and switch channels (feeding the signal directly to the brain), answer telephone calls and initiate them, and keep household inventories. It will guard us from a number of dangers and protect us in a wide variety of situations. At a party it will monitor our blood alcohol level and warn us when we have had too much to drink. It will keep an eye on other bodily functions including digestion and blood sugar levels, and warn us of impending illness, undue stress, or possible heart attacks. It will guard us while we sleep, listening for prowlers, and sensing the air for smoke. It will attend to all household functions and perhaps ultimately will direct the activities of less intelligent household robots which are sure to come into existence. It will share with us its vast memory store and its ability to recall information virtually instantly - information we thought we had forgotten. It will put us in touch automatically and wirelessly with huge data banks containing information it does not possess itself. It will do math calculations, household budgets, business accounts, and even make monthly payments for us automatically. It will update its own information daily by scanning a number of information sources, perhaps listening to its own information channel, perhaps digesting local newspapers, sifting for information which it should bring to our attention, helping us make sense of the world around us. It will provide a whole new dimension of living to quadriplegics allowing them to perform many of the routine daily tasks essential to life, and restoring to them some measure of control over their lives. It will change the entire realm of communications as we know it today. Merely thinking of someone you wish to talk with by telephone will initiate a search by the symbionic mind to locate that person anywhere in the world and establish a direct link. Though physical telephones will be avoided, the two symbionic minds will be in direct contact over the communications network and thoughts will flow between beings in seemingly telepathic fashion; indeed this may be the closest we will ever come to true telepathy. How ironic that even if telepathy does not exist, we may nevertheless be able to simulate it.
The Future of Symbionics
Feedback from Virtual Reality (VR) to control the body
In the future, the symbionic mind will use input from VR to influence the body. It can be readily seen how the bombardment of the body's senses by VR-generated information can have a direct effect on the systems of the body. Heart rate may increase, respiration quicken, and palms perspire. Improved VR experiences may be tailored to effect specific changes in other bodily senses like smell or balance.
VR input could be used to bypass the usual human senses and be fed directly to the symbionic mind for direct input to the brain. Visual, tactile, auditory, olfactory, and gustatory stimuli could be transmitted directly to the cortex. An example might be infrared information transmitted directly to the symbionic brain and overlaid on the visual system, giving the user the sensation of infrared vision. Conventional VR equipment such as data-gloves and head-mounted displays would be made obsolete.
Feedback from the Body to control VR
Similarly, but in the reverse direction, it should be possible to use feedback from the cortex to control the inputs to VR, enabling the technology to tailor or individualize perceptual experiences. For example, a person in a state of fright because of some VR-related phenomenon would exhibit characteristic galvanic skin responses (GSR), EEG readings, increased heart rate, higher adrenaline levels, and orienting responses, which the symbionic mind could output to the VR apparatus to change the environment and help stabilize the user. In this way, VR could be used adaptively to protect the user from harm.
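Such an adaptive loop can be expressed very compactly. The Python sketch below is a conceptual illustration with invented comfort limits: it reads two physiological signals and nudges a 0-to-1 scene "intensity" down when the user appears over-aroused, and gently back up otherwise.

    def adapt_vr_intensity(heart_rate_bpm, gsr_microsiemens, intensity,
                           hr_limit=110, gsr_limit=12.0, step=0.1):
        """Close the loop from body to VR: ease the environment off when
        physiological arousal exceeds the comfort limits, and let it
        re-intensify gradually once the user has calmed down."""
        if heart_rate_bpm > hr_limit or gsr_microsiemens > gsr_limit:
            return max(0.0, intensity - step)   # calm the scene
        return min(1.0, intensity + step / 2)   # gently re-engage

    # One simulated control tick: a frightened user, scene at 80%.
    print(adapt_vr_intensity(125, 14.0, 0.8))  # 0.7: scene eased off

Run at every frame or sensor update, this kind of rule keeps the experience inside the user's tolerance without any conscious intervention.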
Transmission Methods
Symbionic minds using wireless full duplex (two-way) transmission could be used to receive broadcast or narrowcast VR. Broadcast VR would transmit a single experience to multiple recipients; narrowcast VR would transmit multiple experiences to a single user.
The provision of wireless, full duplex symbionic technology will also facilitate the unique addressing of every individual, perhaps with addresses like the Internet Protocol (IP) addresses used today for the electronic identification of computers on the Internet. In the past, people telephoned a location to find a person; today, with digital cellular telephony, we phone a person to find their location, a paradigmatic shift from dialing a place to dialing a person. Currently, IP addresses denote physical locations. In the future, they will represent personal, symbionic contacts with specific individuals, permitting the creation of Personal Area Networks or PANs (IEEE, 2001; Zimmerman, 1996).
A new era…
The symbionic mind will not be a truly separate brain but will be an extension of us, of our very being. It will not seem foreign to us in any way, nor will it pose any kind of threat of taking us over, any more than our own brain would. The symbionic mind will be as much a part of us as a hand or an eye, and it will seem to us simply our own brain doing the thinking. It will be transparent to us. We will not be aware of any separate entity, nor of any other change except an increased ability to perform those intellectual tasks we have always performed, and a new capability to accomplish those which were previously impossible.
The new symbionic mind will act purposefully and wilfully but always on our behalf and at our direction. It will be our constant companion and friend, conscience, and alter-ego. The science of symbionics culminating in the development of the symbionic mind may well mark the next significant step in our evolution to a higher plane of existence, and the dawn of a new era.
References
Arle, J.E. & Alterman, R.L. (1999). Surgical options in Parkinson's disease. Medical Clinics of North America, 83(2), pp. 483-498, vii.
Brooks, R. (1990). Elephants Don't Play Chess. Robotics and Autonomous Systems. 6, 3-15.
Biotech breathes life into microchips. (1981, November). Engineering Today, 11.
Birch, G. (1989). Direct brain interfaces to technical aids. Vancouver: Neil Squire Foundation, Spring 1989, 5-8.
Brand, S. (1988). The Media Lab. New York: Penguin.
Cartwright, Glenn F. (1980a, July). Symbionic minds: the advent of intelligence amplifiers. Paper presented at the First Global Conference on the Future, Toronto, Canada.
Cartwright, Glenn F. (1980b, October). And now for something completely different: symbionic minds. Technology Review, 83(1), 68, 70.
Cartwright, Glenn F. (1981, October). Toward a new level of awareness: symbionic consciousness. Paper presented at the annual meeting of the American Association for Social Psychiatry, New York City.
Cartwright, Glenn F. (1982). The impact of symbionic technology on education. Toronto, ON: Research and Evaluation Branch, Department of Education, Government of Ontario.
Cartwright, Glenn F. (1983a). The symbionic mind. McGill Journal of Education, 18(1), 5-37.
Cartwright, Glenn F. (1983b, April). Symbionic technology and education. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada.
Cartwright, Glenn F. (1983c). Symbionic minds for the gifted. In Shore, B. M., Gagné, F., Larivée, S., Tali, R. H., & Tremblay, R. E. (Eds.) Face to Face with Giftedness. New York: Trillium Press. Chapter 10, pp. 130-137.
Cartwright, Glenn F. (1988). Symbionics. In Unwin, D., & McAleese, R., (Eds.) Encyclopaedia of Educational Media Communications and Technology (second edition). New Haven, Conn: Greenwood Press. pp. 495-499.
Cartwright, Glenn F. (1989). Symbionics: The First Decade. Paper presented at the Sixth General Assembly of the World Future Society, Washington DC.
Cartwright, Glenn F. (1994). Virtual or real? The mind in cyberspace. The Futurist, 28(2), 22-26.
Cooper, I. S., & Goldman, H. W. (1987). Positive effects of deep brain stimulation (DBS) on cognition and memory - correlation with metabolic and physiologic parameters. International Journal of Neuroscience, 32(1-2), 832.
Cooper, I.S., Riklan, M., Amin, I., Waltz, J.M. & Cullinan, T. (1976). Chronic cerebellar stimulation in cerebral palsy. Neurology, 26, 744-753.
Cooper, I. S., & Upton, A. R. M. (1985). Therapeutic implications of modulation of metabolism and functional activity of cerebral cortex by chronic stimulation of cerebellum and thalamus. Biological Psychiatry, 20, 809-811.
Cooper, I. S., Upton, A. R. M., Garnett, S., Amin, I., & Springman, M. (1985). Normalization of abnormal glucose metabolism of cerebral cortex in limbic system epilepsy by chronic stimulation of anterior nucleus of thalamus. Acta Neurochirurgica, 78, 174-175.
Crevier, D. (1993). AI: The tumultous history of the search for artificial intelligence. New York: Basic Books.
Cuen, L. (Sept 23, 1998). Chipping at the Future. ABCNEWS.com. Available On-Line: http://more.abcnews.go.com/sections/world/DailyNews/cyborgman.html
DeVito, D. (1999). BrainWave Control. IBVA Technologies, Inc. Available On-Line: http://www.ibva.com
Dobelle, W. H. (2000). Artificial Vision for the Blind by Connecting a Television Camera to the Visual Cortex. American Society of Artificial Internal Organs Journal, 46, 3-9.
Drexler, K. E. (1986). Engines of creation: the coming era of nanotechnology. New York: Anchor Press.
Drexler, K. E., Peterson, C. & Pergamit, G. (1991). Unbounding the future: the nanotechnology revolution. New York: William Morrow and Company, Inc.
Drexler, K. E. (1992). Nanosystems: molecular machinery, manufacturing, and computation. New York: John Wiley & Sons Inc.
Droge, M. H., Gross, G. W., Hightower, M. H., & Czisny, L. E. (1986). Multielectrode analysis of coordinated, multisite, rhythmic bursting in cultured CNS monolayer networks. Journal of Neuroscience, 6(6), 1583-1592.
EMV Associates. (1981, November 1). Brain/computer direct link subject of NSF grant. Press release. Rockville, Maryland.
Futuristic computer biochips: new market for synthesized proteins. (1981, October). Genetic Technology News.
IEEE (2001). 802.15 Working Group for WPANs. Available on line: http://grouper.ieee.org/groups/802/15/
Glass, D. D. (1986, October 17). Physical medicine and rehabilitation. Journal of the American Medical Association, 256(15), 2106-2107.
Gould, L. (Dec 1995). If AI Ran the Zoo. BYTE. pp. 79-83.
Gross, G. W., Wen, W. Y., & Lin, J. W. (1985). Transparent indium-tin oxide electrode patterns for extracellular, multisite recording in neuronal cultures. Journal of Neuroscience Methods, 15, 243-252.
Heath, R. (1977). Modulation of emotion with a brain pacemaker. Journal of Nervous and Mental Disease, 165(5), 300-317.
Herberman, E. (Mar 5, 1999). Mind over Matter: Controlling Computers with Thoughts. ALS News. Weekly Reader Corporation. UMI Company. Available On-Line: http://www.rideforlife.com/n_thought030899.htm
Kaplunovsky, A. (1982, December). Deciphering the nerve code of human mental activity: soviet research. PSI Research, 23-26.
Kumar, R., Lozano, A.M., Kim, Y.J., Hutchison, W.D., Sime, E., Halket, E., & Lang, A.E. (1998). Double-blind evaluation of subthalamic nucleus deep brain stimulation in advanced Parkinson's disease. Neurology, 51(3), 850-855.
Kyberd, P.J. & Chappell, P.H. (1994). The Southampton hand: an intelligent myoelectric prosthesis. Journal of Rehabilitation Research & Development, 31 (4), pp. 326-334.
Lanier, J. (April, 2001). Virtually There: Three-dimensional tele-immersion may eventually bring the world to your desk. Scientific American. Available On-line: http://www.sciam.com/2001/0401issue/0401lanier.html
Lerner, E. J. (1984, August). Why can't a computer be more like a brain? High Technology, 34-41.
McAlear, J. H. (n.d.) 3D Integrated circuits fabrication in microgravity. Texas A & M University: Centre for Advanced Research in Molecular Electronics.
McAlear, J.H. & Wehrung, J.M. (n.d.). Photoconducting electrode prosthesis. Rockville, Maryland: Gentronix Laboratories, Inc.
McAuliffe, K. (1981, December). Biochip revolution. Omni, 54-58.
McClimans, F. (Sept 2, 1998) Is that a chip in your shoulder, or are you just happy to see me? CNN Interactive. Available On-Line: http://cnn.com/TECH/computing/9809/02/chippotent.idg/index.html
Milch, J. R. (n.d.) Computers based on molecular implementations of cellular automata. Rochester, NY: Eastman Kodak Company.
Motion Control Inc. (1999). Motion Control Utah Arm. Salt Lake City. Available On-line: http://www.utaharm.com
Pagels, H. R. (1988). The Dreams of Reason. New York: Bantam.
Petrofsky, J.S, Phillips, C.A., & Heaton, H.H. (1984). Feedback-control system for walking in man. Computers in Biology and Medicine, 14, 135-149.
Petrofsky, J.S., Phillips, C.A., & Stafford, D.E. (1984). Closed-loop control for restoration of movement in paralyzed muscle. Orthopedics, 7, 1289-1302.
Phillips, C.A., & Petrofsky, J.S. (1984). Computer-controlled movement of paralyzed muscle - the medical perspective. Artificial Organs, 8, 390.
Pinneo, L. R., Johnson, P., Herron, J., & Rebert, C.S. (1975, August). Feasibility study for design of a biocybernetic communication system. Menlo Park, California: Stanford Research Institute.
Posa, J. G. (1981, September 8). Bioelectronics to spawn circuits. Electronics, 54(18), 48-50.
Rosenfeld, E. (Ed.) (1989, March). New technologies highlight an explosion of activity in future computer-human interfaces. Intelligence, 1-5.
Schick, G. A., Lawrence, A. F., & Birge, R. R. (1988). Biotechnology and molecular computing. Trends in Biotechnology, 6(7), 159-163.
The next generation of microprocessors: molecular 'biochips' instead of silicon wafers? (1981, September 21). Biotechnology Newswatch, p. 7.
Taylor, G. R. (1979). The Natural History of the Mind. London: Secker and Warburg.
Waldrop, M. M. (1987). Man-Made Minds. New York: Walker and Company.
Whatever happened to molecular electronics? (1981, December). IEEE Spectrum, p. 17.
Wiechman, L. (Oct 20, 1998). Implant Lets Paralyzed Man Compute. Associated Press. ABCNEWS.com. Available On-Line: http://more.abcnews.go.com/sections/science/DailyNews/brain_implant981020.html
Witt, S. (Jan 14, 1999). Is human chip implant wave of the future? CNN Interactive. Available On-Line: http://cnn.com/TECH/computing/9901/14/chipman.idg/
Zeck, G. & Fromherz, P. (2001). Noninvasive neuroelectronic interfacing with synaptically connected snail neurons on a semiconductor chip. Proceedings of the National Academy of Sciences, 98, 10457 - 10462. (Available on line: http://www.pnas.org/cgi/content/abstract/98/18/10457)
Zimmerman, T.G. (1996). Personal Area Networks: Near-field intrabody communication. IBM Systems Journal, 35, 3&4. Available on line: http://www.research.ibm.com/journal/sj/mit/sectione/zimmerman.html
Ziporyn, T. (1981a, September 18). Evoked potential emerging as a valuable medical tool. Journal of the American Medical Association, 246(12), 1287-1291.
Ziporyn, T. (1981b, September 18). Evoked potentials give early warning of sensory and behavioral deficits in high-risk neonates. Journal of the American Medical Association, 246(12), 1288-1289.
Ziporyn, T. (1981c, September 18). Add EPs to list of intraoperative monitors. Journal of the American Medical Association, 246(12), 1291 and 1295.
Sutter, E. & Tran, D. (1990). Communication through visually induced electrical brain responses. In Computers for Handicapped Persons. Springer Verlag, pp. 279-288.
Sutter, E.E. (1992). The brain response interface: communication through visually-induced electrical brain responses. Journal of Microcomputer Applications, 15, 31-45.