
fMRI and AI are shaping our ability to read your mind!

Updated: Dec 10, 2021

The New Yorker, December 6, 2021, pp. 30-35 | ANNALS OF TECHNOLOGY | “HEAD SPACE”: Researchers are pursuing an age-old question: What is thought? By James Somers


Read The New Yorker article for all the details.


Summary by 2244



image from itonline.com



Going back to at least 2006, researchers have been attempting to decipher human brain signals captured by an fMRI scanner, a huge device weighing more than 10 tons. So far we are safe from losing control of our last domain of privacy: our thoughts. Yet academics and businesses are making remarkable progress in interpreting the fMRI signatures of the brain’s neural activity.


Heartwarmingly, in 2006 some researchers reported that they could communicate with patients diagnosed as being in a “vegetative state.” These scientists, Martin Monti and Adrian Owen, had developed two controversial hypotheses.


“First, they believed that someone could lose the ability to move or even blink while still being conscious; second, they thought that they had devised a method for communicating with such ‘locked-in’ people by detecting their unspoken thoughts.” Their fMRI method is based on tracking oxygen flow in the brain by measuring a surrogate: the element iron. Iron is magnetic and is part of the oxygen-carrying molecule hemoglobin (Hgb). Hemoglobin carries oxygen throughout the body, including the brain. With fMRI, the iron signal is “clear enough to be seen in real time.”


With one patient, Monti and Owen verbally asked him to answer questions in the affirmative (yes) by imagining playing tennis and in the negative (no) by imagining walking around his house. Different areas of the brain lit up for yes and for no in response to various questions. “Do you have brothers?” Response: tennis, yes. “Do you have sisters?” Response: house, no. And so on. But to the question of whether he wanted to die, there was “no clear answer.” After more studies, Owen “estimated that 20% of patients who were presumed to be vegetative were actually awake.”
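
To make the yes/no protocol concrete, here is a minimal Python sketch of how such a binary decoder might work, assuming each trial and each calibration template is a flattened array of voxel values. The function name, the correlation-based comparison, and the margin threshold are illustrative assumptions, not the researchers’ actual pipeline.

```python
# Hypothetical sketch of the yes/no protocol described above: an unseen fMRI
# activation pattern is compared against two calibration templates
# ("imagine tennis" = yes, "imagine walking through your house" = no).
import numpy as np

def decode_answer(trial: np.ndarray,
                  tennis_template: np.ndarray,
                  house_template: np.ndarray,
                  min_margin: float = 0.1) -> str:
    """Return 'yes', 'no', or 'no clear answer' for one flattened voxel pattern."""
    # Correlate the trial pattern with each calibration template.
    r_tennis = np.corrcoef(trial, tennis_template)[0, 1]
    r_house = np.corrcoef(trial, house_template)[0, 1]
    # Require a clear margin between the two correlations before answering.
    if abs(r_tennis - r_house) < min_margin:
        return "no clear answer"
    return "yes" if r_tennis > r_house else "no"

# Example with random stand-in data (real data would be preprocessed voxel values).
rng = np.random.default_rng(0)
tennis, house = rng.normal(size=500), rng.normal(size=500)
trial = tennis + 0.3 * rng.normal(size=500)   # a trial resembling the "tennis" template
print(decode_answer(trial, tennis, house))    # expected: "yes"
```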


After decades of work, “Cognitive psychologists armed with an fMRI machine can tell whether a person is having depressive thoughts; they can see which concepts a student has mastered by comparing his brain patterns with those of his teacher.”


What happened, according to Ken Norman (Princeton), is that “fMRI machines hadn’t advanced that much; instead, artificial intelligence (AI) had transformed how scientists read neural data.” The big leap is conceptual: the brain can be treated as a high-dimensional space of small units that fire, and the unique patterns of activity across those units can be analyzed with AI and matched to particular inputs.
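
A minimal sketch of that idea, assuming scikit-learn is available: a linear classifier is trained on many-voxel activity patterns so that the joint, distributed pattern, rather than any single region, is what separates two thought categories. The synthetic data and labels below are stand-ins for real fMRI trials.

```python
# Toy example of decoding thought categories from distributed voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 200, 1000

# Fake voxel patterns for two thought categories, differing only by a weak
# signal spread across all voxels rather than a strong change in any one voxel.
signal = rng.normal(size=n_voxels) * 0.3
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)
X[y == 1] += signal

# A linear classifier learns the distributed pattern that separates the two.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("decoding accuracy:", scores.mean())   # typically well above the 0.5 chance level
```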


In a simple way, each unit’s signal can be thought of as a musical note, say A-flat, A, B-flat, B, C, C-sharp, and so on, and a pattern of signals can be unified like a musical chord, the way the A-major chord on a piano combines A, C-sharp, and E. In a more complex way, a thought can be mapped along fifty dimensions, as Charles Osgood proposed some seventy years ago. Researchers at Bell Labs later expanded this to “a few hundred dimensions” and called the technique “latent semantic analysis,” or LSA.
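
For the curious, here is a toy LSA pipeline in Python using scikit-learn: count words per document, compress the count matrix with a truncated SVD into a handful of latent dimensions, and compare documents in that compressed space. The tiny corpus and the choice of two dimensions are illustrative assumptions.

```python
# Latent semantic analysis (LSA) in miniature: related documents end up
# close together after the word-count matrix is compressed by SVD.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the pilot flew the plane to the airport",
    "passengers boarded the plane at the gate",
    "the chef cooked dinner in the kitchen",
    "dinner was served in the restaurant kitchen",
]

counts = CountVectorizer().fit_transform(docs)      # word-document counts
lsa = TruncatedSVD(n_components=2, random_state=0)  # two latent dimensions
doc_vectors = lsa.fit_transform(counts)

# The first document should be most similar to the other flying-related one.
print(cosine_similarity(doc_vectors[:1], doc_vectors[1:]))
```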


It is suggested that our minds use something like LSA to encapsulate information. So, for example, once we are accustomed to modern airports, entering a new airport calls up that stored representation, and we then start processing the expected details: where the ticket agent is, where security screening is, where the boarding gate is, and even where our seat is on the plane. A rough sketch of this schema-grabbing idea appears below.
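
Here is a hypothetical sketch of that schema-grabbing idea, built on the same toy LSA approach as above: a few stored “schemas” are embedded into a latent space, and a new scene description is projected into that space and matched to its nearest schema. The schema texts, names, and dimensionality are assumptions for illustration only.

```python
# Projecting a new scene into a latent space of stored schemas and
# retrieving the closest match.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

schemas = {
    "airport": "ticket agent security screening boarding gate plane seat",
    "kitchen": "stove sink refrigerator counter dinner cooking",
    "bathroom": "sink mirror shower toilet towel",
}

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(schemas.values())
lsa = TruncatedSVD(n_components=2, random_state=0)
schema_vecs = lsa.fit_transform(counts)

# A new scene is projected into the latent space and matched to a schema.
new_scene = "we showed our tickets and walked through security to the gate"
new_vec = lsa.transform(vectorizer.transform([new_scene]))
best = int(np.argmax(cosine_similarity(new_vec, schema_vecs)))
print(list(schemas)[best])   # expected: "airport"
```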


This system of organizing what we see generally works well, but it sometimes derails us. We may be holding a train of thought, yet when we step from the kitchen into the bathroom, for example, the incoming “bathroom” representation sometimes causes us to forget that prior train of thought.


So far, thought decoding has been limited to “specific thoughts” defined in a research setting. A “general-purpose thought decoder” is then the next logical step, according to Ken Norman (Princeton). Owen opines, “I have no doubt that at some point down the line, we will be able to read minds… I don’t think it’s going to happen in probably less than twenty years.”

