The Japanese man is sitting with an electrode cap on his head, on a chair connected to a device that looks like a small refrigerator. He looks very sleepy, although he is participating in a historic experiment: He is about to operate a robot solely by the power of thinking. The people around him are poker-faced. One of them hands him a card with a picture of a hand and the words “right hand.” The man with the electrodes thinks; the lights on the refrigerator-like device behind him turn on. And then the camera moves over to Advanced Step in Innovative Mobility (ASIMO), the famous white robot, manufactured by Honda, which is standing next to him. ASIMO comes to life.
“Yes! I received the result, I think it’s correct,” says ASIMO in Japanese. “It’s the right hand!” He takes one step forward and waves his hand. Of all those participating in the experiment, he is the one with the greatest charm and vitality.
This film clip, publicized by Honda last June, is proof of the power of thinking. The area of research called “brain-computer interface,” one of the hot scientific fields today, tries to link the brain to various types of computers in order to act on the world: a chip in the brain, or electrodes connected to it, reads brain signals. Think about a concept you want to know more about and the chip in your brain will survey the Internet and broadcast the information directly to your consciousness, without the unnecessary bother of typing, choosing and clicking.
In laboratories the world over, researchers have been “reading thoughts” at an increasing level of sophistication, although the day is far off when someone with the proper equipment will be able to understand from a distance what you really think of the boss, for example. The business application in most cases is still far off, but the potential is huge, from thought-operated computer games to a real improvement in the lives of the ill and disabled: The target audience of brain-machine developments includes all of us.
“If you had asked me 20 years ago, I would have told you it would take another 100 years, but it’s happening much faster,” says Prof. Matti Mintz of the biopsychology research unit in Tel Aviv University’s psychology department. “Within 10 years I anticipate that brain-computer interface will be used for simple but essential functions. A colleague of mine in the United States is already building such a chip for people who have a problem with balance. In Europe and the U.S. they are channeling very large budgets in these directions. After all, the population is aging, and an effort is required to maintain brain functions by various methods.”
“Brain-computer interfaces have become a major topic,” confirms Prof. Hezi Yeshurun, of TAU’s school of computer science. “Everything we do originates in the brain. If we understand the brain’s code, we’ll be able to operate other systems − for example, assembling a bionic hand that can receive signals from the brain and function instead of damaged limbs. To a certain extent we already know how to do that. Let’s say, for instance, that you imagine that you are drumming on the table with your fingers: We will be able to tell with which finger you want to drum.”
Many studies on brain-computer interface are trying to rehabilitate brains damaged by disease, strokes, accidents or simply from aging, to bypass the damaged neurons and enable renewed control over real or artificial limbs. Mintz is working on constructing a computer circuit that will replace areas in the brain that have been damaged due to a tumor or age. In experiments now being conducted on animals, he is learning about the stimuli that reach the damaged area of the brain and about its anatomical connections. Later he builds an artificial circuit whose structure and activity are identical to that of the damaged brain, puts it on a chip and implants it beneath the skin of the skull or the chest, connecting it to the damaged region of the brain.
The chip receives nerve signals from the brain region and activates the limb. Meanwhile, Mintz says, he is dealing only with the regions responsible for reflexive behavior, which are easier to work with. But one of the advantages of the chip is that it is capable of “learning” the subject’s surroundings in a manner similar to the learning that takes place in the brain. In other words, if the person with the implant comes across a hot iron once, the chip learns that it is very hot, and the next time it will identify the shape of the iron and warn the person to stay away from it − just like a child becoming familiar with his surroundings.
In similar studies in the U.S., monkeys wearing an electrode cap succeeded in moving an artificial hand via the power of thought alone. One can only imagine the magnitude of the change that such a development will bring about in the lives of disabled people. The pursuit of a bionic eye is also in full swing, in Israel among other places.
Helping the blind
In the laboratory of Prof. Yael Hanein, of the TAU electrical engineering department, a plate with electrodes in it, on which various nerve cell formations are growing, is connected to an amplifier that translates the electrical pulses emitted by the neurons into a series of pulses registered by a computer. Hanein and her assistant Mark Shein − in cooperation with the laboratory of Prof. Eshel Ben-Jacob of TAU’s physics department − are growing neurons on a special material called carbon nanotubes.
One of their possible uses is for creating an advanced retinal implant that will be able to help the blind to see. The pursuit of the artificial retina is not limited to Hanein’s lab. An Australian firm is involved in it, a German company carried out a successful trial last month in which blind people managed to see objects from a distance, and the Israeli firm Nano Retina, which works in collaboration with Hanein, is pursuing the same objective.
Nano Retina estimates the potential sales of the implant at 180,000 units per year, at a cost of $60,000 per implant; in other words, annual sales could reach billions of dollars. Founded by medical entrepreneur Yossi Gross, the company received $2.5 million from Rainbow Medical, a foundation that belongs to Gross and Efi Cohen-Arazi, and it is expected to receive another $2.5 million in the future. The implant is meant to replace damaged light receptors in people who became blind as a result of a degenerative disease.
“The eye has 15 million light receptors, but in many diseases the light receptors die, even when all the other nerves are still working,” explains Nano Retina’s director, Ra’anan Gefen. The implant is supposed to be on the market in another five years, and will enable only black-and-white vision. It will contain a camera that receives the image and, on the other side, an electrode that will stimulate the remaining nerves. “The electrode can be no larger than five microns, while the size of a standard electrode is about 100 microns,” explains Gefen.
A lot of work is still to be done on the implant: At the moment the researchers in Hanein’s lab are concentrating on studying the dynamics between the neurons on the one hand, and miniaturizing the electrodes on the other.
Hanein is not interested in the business aspect: She is fascinated by the study of communication among brain cells. During her research she succeeded in stimulating the neurons with electrical currents, and seeing how they arrange themselves on the electrode in various formations. The aim is to examine how the formation changes the communication among them.
“One of the big questions is how we progress from a single nerve cell, which can be studied separately, to a brain with billions of cells working together,” she explains. “As in the case of ants, where the individual is of no importance, only the collective, the same is true here: Communication among the components is the main thing. We know a great deal about how isolated cells work, the mechanisms and structure that dictate activity, and a great deal about the brain, which regions do what. The question is how the two ends connect.”
Gefen, on the other hand, is concerned about other things: standards, regulation and miniaturization. In the wake of initial newspaper articles about the firm, he notes, he received letters from blind people and their families who asked to volunteer for the clinical tests, which have yet to begin. “One family with two daughters who are going blind, and a grandmother who has never seen her grandchildren,” he says.
In the U.S., such studies received a big push from the administration of former president George W. Bush, due to the number of soldiers who returned from Afghanistan and Iraq with disabilities.
“What’s interesting about the entire cyborg [bionic human] industry is that it is driven to a great extent by medicine,” notes Doron Friedman, a lecturer at the communications school of the Interdisciplinary Center, Herzliya, and the head of the advanced virtuality lab there. “We are used to thinking about horror films and science fiction, but the idea behind the research is that there’s an industry here involving large sums of money, whose aim is to help people. In medicine there is no ethical question at all: It does work, and it does help in certain cases.”
For its part, the Israel Defense Forces apparently still prefers to invest in weapons instead of in methods for repairing damage caused to soldiers.
Generally, studies in Israel in this field are financed by local research foundations, with relatively small sums, and by European or American foundations, which are more generous. Mintz and his partners from TAU − Mira Marcus, Hagit Messer-Yaron and Yossi Shacham − are funding the research and the students engaged in it from a generous budget provided by a European foundation, but it is about to dry up.
In the not-too-distant past, groups of local academics tried to establish a large, wide-ranging research foundation, but their effort was abandoned after only one year.
Available funding in Israel is relatively limited and many of the researchers report genuine budgetary distress, which prevents them from going ahead with big projects. Help for the ill is of course the best justification for such research, since no foundation will finance a study whose main purpose is the development of an innovative computer game.
But commercial firms around the world are also involved in electrode research, with the intention of achieving a technological advantage and becoming innovators in their fields. Japanese car manufacturers like Honda, for example, have accumulated a great deal of knowledge in the development of sophisticated, automated assembly lines, and they are exploiting this knowledge to develop sophisticated interfaces.
The ultimate goal is to design smart cars that will, for example, be able to open the trunk and even drive for us, according to instructions from the brain. But along the way scientists are developing medical applications such as the wheelchair announced by Toyota last year, which is activated by thought alone: The chair is equipped with a small table with a laptop, and the person seated in the chair wears an electrode cap. Electroencephalograph sensors transmit signals from the brain to the computer, which in turn translates them into directional signals for the chair.
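The pipeline Toyota describes − EEG sensors feeding a laptop that translates brain signals into directional commands − can be sketched roughly as follows. This is a hypothetical simplification, not Toyota's actual method: it steers by comparing mu-band (8−12 Hz) power over the left and right motor cortex, a standard motor-imagery cue; the function names, sampling rate and thresholds are all invented for illustration.

```python
import numpy as np

FS = 250  # assumed EEG sampling rate in Hz

def band_power(signal, fs, low, high):
    """Power of `signal` within the [low, high] Hz band, via FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].sum()

def classify_direction(left_channel, right_channel, fs=FS):
    """Map mu-band power asymmetry to a steering command.

    Imagining right-hand movement suppresses the mu rhythm over the
    LEFT motor cortex (and vice versa), so the weaker side "wins."
    """
    left_mu = band_power(left_channel, fs, 8, 12)
    right_mu = band_power(right_channel, fs, 8, 12)
    ratio = left_mu / (right_mu + 1e-12)
    if ratio < 0.8:
        return "right"   # left-hemisphere mu suppressed
    if ratio > 1.25:
        return "left"    # right-hemisphere mu suppressed
    return "forward"     # no clear asymmetry
```

A real system would add filtering, artifact rejection and per-user calibration; the point here is only the shape of the loop − record, extract a feature, emit a command.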
Wheelchairs operated by the power of thinking are nothing new, but the one made by the auto manufacturer has reduced the time between a thought and its implementation − which in earlier versions lasted several nerve-racking seconds − to only a thousandth of a second. The target market is mainly people with disabilities, and in light of Japan’s aging population, and the world’s in general, the invention is both beneficial to mankind and good business: According to figures from the magazine National Health Review, in 2002 there were 1.6 million wheelchair users in the U.S. living outside nursing institutions, although only 155,000 of them used electric wheelchairs, while the others used manual ones. One can only imagine the financial and psychological obstacles that will stand in the way of a chair operated by brain waves.
Other commercial firms are taking steps towards developing real products, although at the moment they are only at an initial stage: Friedman, who is cooperating with an Austrian firm called g.tec, describes their new product, which enables patients with advanced amyotrophic lateral sclerosis (ALS, a progressive neurodegenerative disease) to dictate a letter to the computer by means of brain waves. These are patients who cannot move their bodies or speak, but are fully conscious.
By means of the machine, the patient, who wears electrodes, concentrates on a specific letter of the alphabet, and the electrodes identify brain activity and decipher the letter the activity is referring to. Accurately choosing a letter, according to the firm’s Web site, may take between two and 20 attempts, and use of the system requires a long training period, on two levels: The equipment has to learn to identify the user’s brain waves, and the user has to learn to use his brain in such a way that the machine will be able to understand him.
“It’s still limited, but it works,” explains Friedman. “The main problem is the pace: You can only dictate one letter every few seconds; for a healthy person that’s a frustrating experience that requires a lot of patience and concentration. But for someone who is paralyzed, it is sometimes the only way to communicate.”
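The selection mechanism such spellers typically rely on − the so-called P300 paradigm − can be illustrated with a toy sketch. This is not g.tec's code: the grid layout, data shapes and values are invented. Rows and columns of a letter grid flash in turn; the letter at the intersection of the row and the column whose flashes evoke the strongest averaged brain response is the one chosen, which is why many repetitions (and much patience) are needed.

```python
import numpy as np

# A made-up 6x6 speller grid, one letter per cell.
GRID = [list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
        list("STUVWX"), list("YZ1234"), list("56789_")]

def pick_letter(row_responses, col_responses):
    """Choose the grid letter at the intersection of the row and the
    column with the largest evoked response.

    row_responses / col_responses: one array per row/column, holding the
    EEG amplitudes for that flash, already averaged over repetitions to
    suppress noise.
    """
    best_row = int(np.argmax([r.max() for r in row_responses]))
    best_col = int(np.argmax([c.max() for c in col_responses]))
    return GRID[best_row][best_col]
```

The averaging step is the bottleneck the article describes: each additional repetition sharpens the response peak but slows dictation to one letter every few seconds.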
But brain-computer interface is not only for medicine: Computer games are an $11.7-billion market in the U.S. alone, according to 2008 figures from the Entertainment Software Association. In effect, firms like Nintendo have been working for years on games operated via the brain.
Meanwhile, Emotiv has already released to the market a package of three games that work on “neuro-feedback,” among them a virtual ping-pong game and another called − in a geeky gesture to the science fiction that foresaw the future long ago − the Jedi Mind Trainer. The games require wearing an electrode helmet, sold on the company’s Web site for $300, which players use to control movement. However, Friedman explains that what the helmet reads might not be brain waves but signals coming from the facial muscles, since “the electrical activity of your muscles and body is far greater than the electrical activity of the brain.”
Renan Gluzman, head of the program for designing and developing computer games at Hamidrasha School of Art in Beit Berl, claims that the games have not yet become sufficiently popular, partly for cultural reasons.
“People are not yet willing to wear a kind of strange helmet; it looks frightening and unattractive to them,” he claims. “And besides, the technology is not fast enough. A neurotic and anxious player who wants immediate satisfaction won’t find it there.”
No less interesting, and frightening, is the application aimed at digging into our brains in order to make us more vulnerable to marketing efforts: neuro-marketing, or neuro-advertising. Two years ago, the market research company Nielsen announced an investment in NeuroFocus, which places electrodes on viewers to assess the emotional effectiveness of, and the level of attention commanded by, commercials or Internet film clips, with the aim of making them so attractive to our overloaded brains that we will have no choice but to order Coca-Cola from the neighboring virtual kiosk.
We can think of more sinister uses for the technologies being created; the moment the brain-machine genie is out of the bottle, there is no way of knowing who will use it in order to get money or information. And of course one could say that the whole business is in effect based on “mind reading,” in spite of the reservations of cautious scientists about such hyperbolic terms. In order to use brain waves to operate something, first of all we have to understand what the brain is saying. Fortunately, perhaps, at present the coding of our thoughts is still too complicated to read in its entirety.
“The maps of the brain are relatively known. We know which regions control which functions,” Yeshurun notes. “Whatever you think about leads to a certain pattern of activity in the brain. But if, for example, you are thinking about a cat, that’s already more abstract coding. We can identify whether you want to turn right or left, but we are very far from knowing whether you want to read Tennyson’s poems at the moment. We still don’t know how that decision looks in the brain.”
There has, however, been interesting progress in this area. A group of researchers in Pittsburgh has succeeded in understanding, by reading an fMRI (functional magnetic resonance imaging) brain scanner, which word a person is thinking of from a given list. The researchers exploited the fact that thinking about words and pictures in various semantic categories − for example, buildings, tools or food − activates certain patterns of brain activity. In the study, which was published two years ago, they told a person to think about a specific word and then, having mapped the regions of the brain that were activated, asked the computer to guess which word the subject had thought of. The degree of accuracy, according to their reports, was 90 percent, as long as familiar words were being used.
“These results are interesting, but they should be seen in the proper context,” says Yeshurun, trying to tamp down excessive enthusiasm. “We’re talking about a relatively small number of words, separated into categories, that activate relatively familiar regions. It’s still impossible to tell someone to think about something, and then to understand what he’s thinking about.”
A study now being conducted in Yeshurun’s lab goes one step further, and is intended to make it possible for others to understand what we have decided even before we know it ourselves. In the study by Omri Perez, a doctoral student in brain research working under the guidance of Yeshurun and Prof. Yitzhak Fried of the TAU school of medicine, subjects were asked to operate a driving simulator, with their brains connected to electrodes that record neural activity. The moment the subjects decided whether to turn right or left, they were asked to report it; the aim was to examine whether it is possible to know in which direction they decided to turn even before they themselves were aware of their decision.
A similar study by German researchers, published two years ago, found a gap of up to 10 seconds between the brain activity indicating that a decision is being made − and the direction in which it is tending − and the person’s awareness of the decision. In other words, researchers can guess what the subject is about to do even before he himself knows.
The researchers concluded that the delay in awareness reflects the brain’s preparation for making the decision and acting on it. “Decision reading” works after the fact, too: In another study, published a year ago by Yeshurun, Prof. Talma Hendler and research student Alon Talmor, subjects were asked to perform a specific activity − for example, to enter a certain room and remove a bill from a wallet in a coat pocket. Afterward, subjects were shown films of various activities, and demonstrated a particularly strong reaction to a film in which someone removes money from a wallet. The interesting finding, adds Yeshurun, was that the activity was not located in the emotional parts of the brain, but in “mirror cells” − a region responsible for the reflection of activities that we see. This is the area that causes us, for instance, to mimic the movements of an interlocutor unconsciously.
The brain identified the activity that had been carried out previously, and reacted to it more strongly. It’s not hard to think of practical applications for these findings − for example, constructing a sophisticated truth machine in which Hitchcock films would be screened in order to expose people suspected of murdering blondes in the shower.
We can think of groups that would have both the ability and the interest to use your findings for negative purposes.
Yeshurun: “True. But negative use can be made of any study done in the university, even studies of Hebrew literature.”
The pioneer in the study of awareness of decisions was Prof. Benjamin Libet of the University of California, San Francisco, who, as early as the 1970s, discovered the time gap between a decision to act and awareness of making that decision. His findings ignited a stormy debate regarding the ancient philosophical question of free will: Are our decisions, the basis for our ostensibly free actions, made before we are aware of them? In other words, does the brain decide for us? And to what extent do we actually make our decisions consciously?
“For those engaged in brain research, the question of free will is meaningless, because it is not well defined,” explains Yeshurun. “But in terms of what is generally thought, the fact that your brain has actually decided in your absence and that I can know what you’ve decided before you do, paints a picture of an automaton.”
The question of free will will certainly continue to arise in the coming years. Research in the field of brain-computer interface is progressing along with technological advances, and we can reasonably assume that a designer will be found to turn the electrode helmet into the next hot accessory − or that in the future, we will all walk around with a tiny chip in our brains and will try to alert the washing machine to the fact that we are out of clean shirts. However, while the devices themselves will become increasingly miniaturized, the ethical questions are only expected to grow, as the bionic man becomes the person walking toward us on the street rather than a fictional character.