TLDR – No one knows the architecture of the optimal brain-computer interface, but it will likely be a solution using implanted electrodes (my personal preference), optical recordings, or something completely new.
How does the brain work?
To design a device that measures brain activity, an understanding of the brain's function is required. This section gives a high-level overview of some of the key elements of brain function. The human brain contains approximately 80 billion neurons, each interconnected by around 7,000 synaptic connections on average. The combination of neurons firing and communicating is, in very simple terms, the basis of all thoughts, conscious and subconscious. Logically, if the activity of these neurons and their connections could be read in real-time, a sufficiently intelligent algorithm could understand all thoughts present. Similarly, if an input could be given at this level of granularity, new thoughts could be implanted.
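To give a sense of scale, a rough back-of-envelope calculation (using the figures above, plus a 1 kHz sampling rate and 1-bit firing state that are purely illustrative assumptions of mine) shows why reading every neuron is such a hard engineering problem:

```python
# Back-of-envelope scale of the whole-brain recording problem.
# The neuron and synapse counts come from the text; the sampling
# rate and encoding are illustrative assumptions only.
NEURONS = 80e9             # ~80 billion neurons in a human brain
SYNAPSES_PER_NEURON = 7e3  # ~7,000 synaptic connections each, on average

total_synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"Total synapses: {total_synapses:.1e}")  # ~5.6e14 connections

# If every neuron's firing state (1 bit) were sampled at 1 kHz:
bits_per_second = NEURONS * 1_000
print(f"Raw data rate: {bits_per_second / 8e12:.0f} TB/s")  # ~10 TB/s
```

Even with these crude assumptions, the raw data rate dwarfs what any current recording hardware can capture, which is worth keeping in mind when evaluating the methods below.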
All human brains abide by the general structure shown in the picture below: certain areas, by and large, do certain things. If higher-level thoughts like creativity, idea generation and concentration are to be read, the frontal lobe is the place to look. If emotions and short-term memory are the target, the temporal lobe is the place to read from.
When a neuron fires there is a physical response associated with it that different brain-computer interfaces attempt to take advantage of. The main signals are electrical activity and changes in oxygenated blood flow. The image below shows the change in voltage produced when a neuron fires, known as an action potential. An explanation of action potentials can be found here. When neurons communicate, current also flows between them. Blood flow to areas with firing neurons increases, and with it comes an increase in oxygenated haemoglobin.
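The firing behaviour described above can be caricatured with a leaky integrate-and-fire model, a standard simplification of the action potential. This is a sketch only; the membrane parameters are textbook-style illustrative values, not measurements from any source discussed here:

```python
# Leaky integrate-and-fire neuron: a minimal caricature of an action potential.
# All parameters are illustrative textbook-style values.
V_REST, V_THRESH = -70.0, -55.0  # resting and threshold potentials (mV)
TAU, R = 10.0, 10.0              # membrane time constant (ms), resistance (MOhm)
DT = 0.1                         # integration step (ms)

def simulate(input_current_nA, steps=1000):
    """Integrate dV/dt = (-(V - V_rest) + R*I) / tau; fire on threshold crossing."""
    v = V_REST
    spike_times = []
    for step in range(steps):
        v += (-(v - V_REST) + R * input_current_nA) * DT / TAU
        if v >= V_THRESH:                 # threshold crossed: an action potential
            spike_times.append(step * DT)
            v = V_REST                    # reset after the spike
    return spike_times

print(len(simulate(2.0)))  # strong input drives repeated firing
print(len(simulate(1.0)))  # weak input (R*I = 10 mV < 15 mV gap) never fires
```

The all-or-nothing behaviour is the point: a neuron either crosses threshold and fires, or it does not, which is exactly the binary event the recording methods below are trying to detect.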
A critical review of current methods
Non-invasive brain computer interfaces
Non-invasive brain-computer interfaces are methods of reading brain signals without surgery. EEG (electroencephalography) is the application of electrodes to the scalp to record the brain's electrical activity, which comes from the action potentials mentioned earlier. EEG is one of the most widely used methods of interfacing with the brain today, as it is generally cheap and easily accessible. However, its ability to interface with the brain at the level of individual neurons is limited by two issues: noise and spatial resolution. Noise is the introduction of electrical signals that are not the desired brain signals; spatial resolution is the size of the area each electrode records from. An electrode for each neuron would be high spatial resolution; an electrode for every few million neurons, as is the case with EEG, is low spatial resolution. Noise comes in the form of muscle movement in the face, blood flow in the scalp and everything in between. Attempts to combat this noise have been ongoing for decades, but the physical barrier of the skull and scalp ultimately yields a low-quality signal averaged over large areas of the brain. EEG cannot physically be the long-term solution to the brain-computer interface problem.
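The noise problem can be demonstrated in miniature. A classic EEG remedy is to average many stimulus-locked trials so that uncorrelated noise cancels (roughly as 1/sqrt(N)); the sketch below uses purely synthetic data with illustrative amplitudes I have chosen, not real EEG figures:

```python
# EEG's core problem in miniature: a small neural signal buried in much
# larger noise, and the classic remedy of averaging many trials.
# All data here is synthetic; amplitudes are illustrative only.
import math
import random

random.seed(0)
SAMPLES = 200

def trial():
    """One simulated epoch: a unit-amplitude evoked 'signal' plus 10x noise."""
    return [math.sin(2 * math.pi * i / SAMPLES) + random.gauss(0, 10)
            for i in range(SAMPLES)]

def rms(xs):
    """Root-mean-square amplitude of a sequence."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

single = trial()
averaged = [sum(t) / 500 for t in zip(*(trial() for _ in range(500)))]

print(f"RMS of one trial:         {rms(single):.2f}")
print(f"RMS of 500-trial average: {rms(averaged):.2f}")  # noise shrinks ~sqrt(500)-fold
```

Averaging recovers a slow evoked waveform, but note the cost: hundreds of repetitions of the same stimulus, which is exactly why raw single-trial EEG is so hard to decode.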
fMRI (functional Magnetic Resonance Imaging) is the application of a magnetic field to the brain and the measurement of the resonant response that results from changes in the spin of nuclei. The most common measurement is of blood flow: when regions of the brain receive more blood, the frequency of the nuclei's resonant response in that area changes. This measurement of blood flow depends on the haemodynamic response, the delivery of glucose to active neurons differing from that to inactive neurons. This induces a 3-6 second delay between neurons firing and the detection of the event through fMRI. Ignoring the technical aspects of their performance, MRI machines are large, expensive and attract anything containing ferrous metal. Not ideal properties for the perfect brain-computer interface.
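The delay arises because the measured signal is the neural activity smeared through a slow haemodynamic response. The sketch below convolves an instantaneous burst of firing with a gamma-shaped response function; the exact kernel shape and its ~5 second peak are a common illustrative form I have assumed, not a property of any particular scanner:

```python
# Why fMRI lags: the blood-flow signal is neural activity convolved with a
# slow haemodynamic response. The gamma-like kernel is illustrative only.
import math

DT = 0.1  # seconds per sample

def hrf(t, peak=5.0):
    """Simplified gamma-like haemodynamic response, peaking ~5 s after activity."""
    if t < 0:
        return 0.0
    return (t ** 2) * math.exp(-t / (peak / 2))

neural = [1.0 if i == 0 else 0.0 for i in range(300)]  # brief burst at t = 0
kernel = [hrf(i * DT) for i in range(300)]
bold = [sum(neural[j] * kernel[i - j] for j in range(i + 1)) for i in range(300)]

peak_time = max(range(300), key=lambda i: bold[i]) * DT
print(f"Neural burst at 0.0 s; blood-flow signal peaks at {peak_time:.1f} s")
```

A burst of firing at t = 0 produces a measurable peak seconds later, so any blood-flow method is structurally unable to report individual action potentials in real time.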
MEG (magnetoencephalography) is the measurement of changes in magnetic fields resulting from current flow through dendrites. In a magnetically isolated room, sensors called superconducting quantum interference devices (SQUIDs), which must be cooled to approximately −269 degrees Celsius, detect the brain's magnetic fields. The sensitivity of these sensors is so high that all electronics in the room must be switched off or removed. Major leaps in superconducting materials and magnetic isolation would be required for this method to become the ideal brain-computer interface of the future.
NIRS (near-infrared spectroscopy) is the use of infrared light passed through brain tissue to detect changes in oxygenated blood flow. The intensity of the light changes as it passes through oxygenated and deoxygenated blood. As previously explained, this introduces a 3-6 second delay and is only an indirect indicator of neuronal activity. The method currently only works to a depth of approximately 3 cm, after which the light is absorbed and refracted by the brain. The major disadvantage of this method is its inability to access deep brain tissue; attempting to do so with current technology results in thermal damage to the brain as it absorbs too much energy.
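The intensity change NIRS measures is usually described by the modified Beer-Lambert law, which relates the change in light attenuation to changes in oxy- and deoxy-haemoglobin concentration. The sketch below states that relationship directly; the extinction coefficients, concentration changes and path-length values are illustrative placeholders, not calibrated figures:

```python
# NIRS in one equation: the modified Beer-Lambert law. All numeric values
# below are illustrative placeholders, not real calibration data.

def attenuation_change(d_oxy, d_deoxy, eps_oxy, eps_deoxy,
                       source_detector_cm, dpf):
    """dA = (eps_HbO2 * dC_HbO2 + eps_HHb * dC_HHb) * distance * DPF.

    The differential pathlength factor (DPF) accounts for photons
    scattering along a path longer than the source-detector distance.
    """
    path = source_detector_cm * dpf  # effective photon path length
    return (eps_oxy * d_oxy + eps_deoxy * d_deoxy) * path

# A rise in oxygenated (and dip in deoxygenated) haemoglobin under an
# active region changes how much light is absorbed:
dA = attenuation_change(d_oxy=0.01, d_deoxy=-0.004,
                        eps_oxy=1.0, eps_deoxy=0.8,
                        source_detector_cm=3.0, dpf=6.0)
print(f"Change in optical density: {dA:.3f}")
```

Because the measured quantity is haemoglobin concentration rather than voltage, NIRS inherits the same haemodynamic delay as fMRI.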
Intracortical neuron recording is the insertion of electrodes into the grey matter of the brain to detect electrical signals. These devices exist today; the ability to record from such a small group of neurons gives much better results than their non-invasive competitors. An example of the performance of intracortical recording can be seen in this paper, where a paralysed woman controls a robotic arm in 3D with her brain, seen in this video. However, the signal can degrade over time as the cerebral tissue reacts to the implanted electrodes.
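The first processing step in such systems is typically spike detection: flagging the moments when the voltage on an electrode crosses a threshold set from the estimated noise level. The sketch below runs that idea on a synthetic trace; the -4-sigma threshold is a common rule of thumb, and the trace itself is simulated, not real neural data:

```python
# Minimal spike detection on a synthetic intracortical voltage trace:
# flag threshold crossings, a simplified first stage of what
# implanted-electrode systems do. The -4*sigma rule is a common heuristic.
import random
import statistics

random.seed(1)
SPIKE_AT = [200, 500, 800]

# Synthetic extracellular trace: background noise plus three sharp spikes.
trace = [random.gauss(0, 1) for _ in range(1000)]
for t in SPIKE_AT:
    trace[t] -= 12.0  # extracellular spikes appear as sharp deflections

def detect_spikes(samples, k=4.0, refractory=30):
    """Return indices where the trace drops below -k * estimated noise sigma."""
    sigma = statistics.median(abs(s) for s in samples) / 0.6745  # robust noise estimate
    threshold = -k * sigma
    spikes, last = [], -refractory
    for i, s in enumerate(samples):
        if s < threshold and i - last >= refractory:
            spikes.append(i)
            last = i  # enforce a refractory gap between detections
    return spikes

print(detect_spikes(trace))  # should recover the three embedded spike times
```

This only works because the electrode sits close enough to individual neurons for their spikes to stand well clear of the noise floor, which is precisely the advantage intracortical recording has over scalp methods.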
ECoG (electrocorticography) is similar to EEG, but the electrodes are placed directly onto the brain rather than onto the scalp. The result is a much higher spatial resolution, meaning the firing of smaller groups of neurons can be analysed. An example of the performance of this technology can be seen in this video, where a user controlled a simulation of a robotic arm in 3D. The issue with ECoG is that it sits on the surface of the brain; deriving what happens deep inside the brain with this technique isn't feasible.
How will the optimal brain-computer interface of the future record brain activity?
No one knows what technology will ultimately answer this question. Techniques that rely on blood flow don't provide real-time information about the firing of individual neurons, which rules out fMRI and NIRS (when applied to observing changes in blood flow). EEG suffers from large amounts of noise and poor spatial resolution, and MEG requires extremely sensitive, isolated equipment. Arguments could be made for these technologies improving in the future, but not to the extent of becoming the perfect brain-computer interface.
The ideal brain-computer interface will be based on an improved intracortical recording device, likely something similar to this injectable electronic mesh, or on an improved optical recording system that ensures significantly less light absorption and therefore no heat damage, which, according to the paper I mentioned at the beginning, could be possible in the future.
Alternatively, a whole new approach could be taken, one that takes advantage of different physical aspects of the brain. For example, there are mechanical changes in neurons when they fire, such as their membranes swelling. It could even be something completely unknown today. However, from what has been discussed, I believe an intracortical recording device is the most probable solution.