The American futurist Ray Kurzweil published a book entitled “The Singularity Is Near: When Humans Transcend Biology” in 2005. In this book he deals, inter alia, with the topic of brain-machine interfaces. Kurzweil refers to the attempts of Tomaso Poggio and James DiCarlo (both MIT) and Christof Koch (California Institute of Technology) to develop a model that describes how the brain recognizes visual objects and how it encodes these objects as data (Kurzweil 2013, 195). In his view, a possible outcome of this research is the ability to transfer images directly into the brain.
Current research in the field of brain-computer interfaces (BCIs) is concerned primarily with the reverse case. It addresses the following question: How can information travel directly from the brain into the external world? From a therapeutic point of view this is interesting for people who have, for example, suffered a stroke and are therefore no longer able to communicate with others in a natural way.
Brain-computer interfaces use electrical, magnetic and metabolic brain activity in order to control machines, prostheses, etc. (Birbaumer; Matuz 2013, 239). A BCI system requires at least three components: the input component, the decoder component and the output component. The task of the input component is to capture the brain activity. There are at least two possible methods. The first is the non-invasive method, in which brain activity is measured without inserting a device into the brain. This is possible by means of an electroencephalogram (EEG), a magnetoencephalogram (MEG) or functional magnetic resonance imaging (fMRI).
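The three-component structure described above can be made concrete with a purely illustrative sketch. Nothing here resembles a real BCI stack: the signal values are invented numbers, and the threshold decoder merely stands in for the far more complex algorithms the text goes on to discuss.

```python
# Illustrative sketch of the three BCI components: input, decoder, output.
# All "brain activity" values are synthetic placeholder numbers.

def input_component():
    """Capture brain activity -- here a fixed, EEG-like synthetic trace."""
    return [0.1, 0.3, 0.9, 0.8, 0.2, 0.1]

def decoder_component(signal, threshold=0.5):
    """Decode the activity: a high-amplitude excursion counts as intent."""
    return "move" if max(signal) > threshold else "rest"

def output_component(command):
    """Drive an output device -- here simply a textual description."""
    return f"prosthesis command: {command}"

signal = input_component()
command = decoder_component(signal)
print(output_component(command))  # prosthesis command: move
```

The point of the sketch is only the division of labour: capturing activity, translating it into a discrete command, and handing that command to a device are separate stages.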
The second possibility is the invasive method, which intervenes directly in the brain: the skull is opened and electrodes are inserted into the relevant brain area. With such interventions, however, the risk of infection is high. Moreover, there is a risk that the brain tissue will be permanently damaged. This is the reason why invasive methods are rarely used in the treatment of humans. The situation is different in the context of animal experiments with monkeys. Kurzweil refers to Miguel Nicolelis and his team at Duke University, who were able to implant sensors in monkey brains, so that the animals could navigate robot arms with their thoughts (Kurzweil 2013, 195). What Kurzweil describes is the state of research in 2005. Meanwhile, Nicolelis and his colleagues have succeeded in enabling monkeys to control the arms of virtual monkeys with their minds.
For something like this to happen, the brain activity must be decoded. This is achieved by using algorithms to detect recurring patterns of brain activity. The term “output components” refers to those devices that are controlled by means of brain activity: virtual arms, for example, or a language program, or neural prostheses. For people who have suffered a stroke and are therefore no longer able to move their limbs, neural prostheses are a possible option. Neural prostheses are controlled not by muscle power but by the power of thought. Research findings in this area indicate that the mere idea of movement is sufficient to activate the motor system in the brain (Birbaumer; Matuz 2013, 244). If it is possible to capture this brain activity, decode it, and transfer it to an output system, then the control of neural prostheses or other machinery becomes possible.
The so-called P300 potential is particularly promising. It is a potential in the brain that is elicited by certain stimuli. On the basis of the P300 potential, BCI systems have been developed with which it is possible to control a web browser via thoughts. Carlos Escolano et al. refer to experiments that aim to connect a P300 BCI system with wireless remote transmission technology. This should make it possible to navigate a robot by remote control, exclusively by the power of thought (ibid., 245).
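A core step behind P300-based systems of this kind is averaging: the P300 is a positive deflection roughly 300 ms after an attended stimulus, and averaging the recorded epochs over several repetitions makes it stand out from the background activity. The sketch below illustrates only this averaging idea; all numbers are synthetic, and the “300 ms” sample index is an assumption of the example.

```python
# Sketch of the averaging step behind a P300-based interface: epochs that
# follow the attended stimulus contain a positive deflection around 300 ms
# after onset; averaging across repetitions makes it visible.
# All epoch values are synthetic illustration data.

def average(epochs):
    """Sample-wise mean across a list of equally long epochs."""
    return [sum(samples) / len(samples) for samples in zip(*epochs)]

# Three repetitions per stimulus; index 3 stands in for "~300 ms after onset".
attended = [
    [0.0, 0.1, 0.2, 1.1, 0.3],
    [0.1, 0.0, 0.3, 0.9, 0.2],
    [0.0, 0.2, 0.1, 1.0, 0.1],
]
unattended = [
    [0.1, 0.0, 0.1, 0.1, 0.0],
    [0.0, 0.1, 0.0, 0.2, 0.1],
    [0.1, 0.1, 0.2, 0.0, 0.1],
]

def p300_amplitude(epochs):
    """Averaged amplitude at the expected P300 latency."""
    return average(epochs)[3]

print(p300_amplitude(attended) > p300_amplitude(unattended))  # True
```

In a speller or browser built on this principle, the system would present many candidate stimuli and select the one whose averaged epochs show the strongest deflection at that latency.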
The big question now is whether we will be able to transfer data from the outside directly into the human brain. In my view, this is associated with a significant problem. We would need to prepare the data in such a way that the human brain can do something with it. In doing so, however, we treat the human brain as a data-processing machine. The brain may indeed resemble a data-processing machine in many respects. But our thinking is more than just the processing of data.
When we think, we begin to reflect on what we think. That is what we want to achieve when we teach children in school. We do not want them merely to assimilate information; they should think about it, so that something happens within them. Only when we stand at a reflective distance from something does reflection become possible at all. In order to understand something, we must reflect on it.
In other words: provided the decoding problem is solved, it may be possible to transfer an image or a book directly into the brain. This, however, means only that the data is transferred directly into the brain. The meaning of the image or the content of the book cannot be transferred. Only by thinking are humans able to grasp the meaning of the transferred content. In a nutshell: the direct transfer of data into the human brain is limited by semantics.
1. Birbaumer, Niels and Matuz, Tamara (2013). “Brain-Computer-Interfaces (BCI) zur Kommunikation und Umweltkontrolle.” Handbuch Kognitionswissenschaft. Achim Stephan and Sven Walter, eds. (Stuttgart: J.B. Metzler'sche Verlagsbuchhandlung), pp. 239-247.
2. Kurzweil, Ray (2013). Menschheit 2.0. Die Singularität naht. (Berlin: Lola Books GbR).