It is now possible to see what a person is looking at by scanning their brain. The technique, published last November by a team of Japanese neuroscientists, uses fMRI to reconstruct a digital image of the picture entering the eye, albeit at very low resolution and only after hundreds of training runs. Still, it’s an awesome development, and many articles covering this research have called it “mind reading” (1, 2, 3, 4, 5). But it really isn’t, and it’s fun to explore what real “mind reading” would imply.
When I hear “mind reading” I want psychic abilities. I want to be able to know what number you’re thinking of, where you were on the night of March 4th, and what you actually think of my souffle. This is the sort of technology that could be badly misused, as the comments on one blog note:
Am I the only one finding this DEEPLY disturbing? It opens the doors to some of the scariest 1984-style total-control future predictions. Imagine you can’t hide your f#&%!ng MIND!
Fortunately, we’re not there yet. Moreover, if we did have the technology to read minds, we’d have much bigger societal issues than privacy to deal with. The existence of “mind reading machines” would imply that we possessed good formal models of the human mind, and that is a can of worms.
But back to today. The paper by Yoichi Miyawaki and colleagues describes a technique for exploiting retinotopy, the fact that certain areas of the visual cortex are direct “maps” of the retina. First, a series of 10×10 black-and-white test images is shown to a subject while their neural activation is recorded by fMRI. The responses to these test images are used to ascertain which areas of the visual cortex correspond to which areas of the subject’s field of vision. Once this neural map is complete, it can be read “backwards,” going from neural scanner results to a low-resolution representation of whatever the subject is currently looking at.
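The core idea can be sketched numerically. What follows is a minimal toy model, not the paper’s actual method: it assumes a made-up linear “retinotopic” forward model in which each simulated voxel responds to a fixed combination of stimulus pixels, then fits a linear decoder from training presentations and inverts it on a new image, just as the training/readout procedure above describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 100   # 10x10 binary stimulus images, flattened
n_voxels = 300   # simulated fMRI voxels
n_train = 400    # training presentations

# Hypothetical forward model: each voxel's activity is a fixed linear
# combination of stimulus pixels plus noise (a stand-in for retinotopy).
W = rng.normal(size=(n_voxels, n_pixels))

def voxel_response(images):
    """Simulated noisy fMRI response to a batch of stimulus images."""
    return images @ W.T + 0.1 * rng.normal(size=(images.shape[0], n_voxels))

# Training phase: show random black-and-white patterns, record responses.
train_imgs = rng.integers(0, 2, size=(n_train, n_pixels)).astype(float)
train_resp = voxel_response(train_imgs)

# Fit a linear decoder by least squares: voxel responses -> pixel values.
D, *_ = np.linalg.lstsq(train_resp, train_imgs, rcond=None)

# Reconstruction phase: decode a novel image from its voxel response alone.
test_img = rng.integers(0, 2, size=(1, n_pixels)).astype(float)
test_resp = voxel_response(test_img)
recon = (test_resp @ D > 0.5).astype(float)  # threshold back to black/white

accuracy = (recon == test_img).mean()
```

Note what the sketch makes obvious: the decoder `D` is specific to the forward model `W`, which is why the real technique must be retrained on every new subject, and the output is only ever the stimulus currently entering the eye.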
This is a long way from a tool for the thought-police. First, the algorithm requires training on each new person. Also, an MRI machine is a huge, expensive, complicated piece of machinery which requires the subject to stay very still over a period of minutes — widespread brain scanning is, for the moment, completely out of the question. But most fundamentally, the information recovered is nothing more than what the eye is currently looking at. You might as well just tape a digital camera to the subject’s head. The pictures would be a lot better.
What is it that we imagine for a mind reading machine? Perhaps a printout, in words, of every thought that goes through someone’s mind. But do people really think exclusively in words? What about their emotions, or their unconscious responses, or even the complete set of minor joint aches and temperature sensations all over their body? Or how about a video playback of the events of yesterday evening? Impossible, because that’s not how human memory works. When we think about it carefully, we realize that we have an extremely poor conception of what is actually “in someone’s head.”
Compounding this problem is the fact we can’t even say what’s in our own heads. We think we can, but we can’t. Decades of psychological experiments show that access to the contents of our own minds and the working of our own thought processes is very limited. Consequently, we cannot answer the question “what would a mind-reader read?” through introspection.
This is why, before we could build a mind-reading machine, we would first need formal models of a “mind.” We need the sort of mathematical models that one can manipulate with a computer, because computers will surely be intimately involved in any mind reading technology. If recent developments in linguistics and artificial intelligence research are any guide, these models will be huge, associative, and statistical in nature, nothing like the structured logic we think we possess. For example, Google translates web pages between different languages without using anything like formal grammar models.
In other words, we cannot “read minds” because we have very little idea of how minds might be stored on a computer. This problem is known in AI as “knowledge representation,” and we still know very little about it.
Good formal models of the mind, if possible, are the technological precursor to entire fields of information engineering, and this is why I’m not worried about mind-reading technology per se. We’ll get beneficial things like accurate machine translation and computers that respond to voice queries — no more fighting with software that just doesn’t understand what you want. (Think also of the possibilities for art and expression.) We’ll also get uncomfortable technologies like sickeningly effective advertisements that exploit behavioral quirks we didn’t know we had, and NSA-funded conversation snooping programs that make existing keyword scanners look like the toys that they are. Finally, it would be possible to use accurate human mind models for pure evil: imagine a computer virus that was designed to read your personal files and figure out how best to convince you that the Dictator was beneficent. All of this may sound very far-fetched, but we’re going to build these things if we possibly can: think of how much money Google makes from each percentage point of improvement in ad clickthroughs.
If the Japanese fMRI technique seems positively simplistic in this light, that’s because it is. They have read retinas, not minds. They are extracting a representation we already have abundant experience with: images. Saying that we’ve made a step towards reading minds is ridiculous; Thomas Edison might just as well have claimed to “record thoughts” when he announced the phonograph.
I bother with all of this both because I think science journalism is often done badly, and because I believe that it’s important to get hysterical about the right things. One comment posted to a video of the research reads, “this is the beginning of the end of free thought.” Perhaps the continuation of this type of fMRI research really will one day lead to the ability to determine what someone is thinking without their consent, but torture already does that. To me, the ability to represent someone’s thoughts in electronic form has far greater implications than mind-reading per se, and this sort of fMRI research — as impressive as it is — contributes little to that enterprise.