We May Need to Reinvent Language in the Future

Why Language May Not Be the Best Way to Transfer Information Between People in the Distant Future

While both the brain and the computer process information at gigabits per second or more, typical human interactions transfer information at mere bits per second, many orders of magnitude slower than the brain and computers. Let’s take a look at some typical human interactions:

  • Handwriting: On average we can write around 30 words per minute from a memorized text, which works out to tens of bits per second in information transfer rate (ITR).
  • Typing: On average we type fewer than 40 words per minute, again just tens of bits per second of data output. Even the fastest typists achieve an information transfer rate of only a couple of hundred bits per second.
  • Voice Dictation: A recent Stanford study shows that voice dictation can be about three times faster than typing. That’s great news, but it is still in the hundreds of bits per second range.
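The words-per-minute figures above can be turned into rough bit rates with a quick back-of-the-envelope calculation. The sketch below is illustrative, not from the article: it assumes about six characters per word (five letters plus a space) and roughly 1.3 bits of entropy per character of English text, in line with Shannon's classic estimates.

```python
# Rough conversion from words per minute to an information transfer rate.
# Assumptions (mine, not the article's): ~6 characters per word including
# the trailing space, and ~1.3 bits of entropy per character of English.

CHARS_PER_WORD = 6     # 5 letters + 1 space, on average
BITS_PER_CHAR = 1.3    # approximate entropy of English text

def wpm_to_bps(words_per_minute: float) -> float:
    """Estimate the information transfer rate in bits per second."""
    chars_per_second = words_per_minute * CHARS_PER_WORD / 60
    return chars_per_second * BITS_PER_CHAR

for label, wpm in [("handwriting", 30), ("typing", 40), ("dictation", 120)]:
    print(f"{label:>12}: {wpm} wpm ~ {wpm_to_bps(wpm):.0f} bits/s")
```

With these assumptions, 30 wpm handwriting comes out to only about 4 bits per second, which matches the "tens of bits per second" ballpark above.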

I introduced a variety of Human-Computer Interfaces in my previous article. In that article, I covered interfaces from the keyboard, mouse, touch screens, and voice recognition, to gesture control based on accelerometers, depth sensors, radar, and electromagnetic fields, as well as non-invasive and invasive brain-computer interfaces. However, if you think about it, none of these human-computer interfaces is direct. They are all separated from our intent by a number of levels, and each measures only a by-product of that intent.

  • Level 1 — intention: We have an intent in our mind and make a decision (e.g. to turn the music volume up)
  • Level 2 — neural activity: We translate that intent into a hand-movement strategy (in neuroscience terms, a motor movement plan) or into language in the brain (e.g. to press a key or say “turn it up”). Your consciousness becomes aware of the decision only after it has been made (see Benjamin Libet’s experiments)
  • Level 3 — body action: The action is performed through a finely planned sequence of motions of the hand muscles, or of the muscles of the tongue and mouth
  • Level 4 — interface: The interface detects the action (e.g. through a change of capacitance in a touch screen, or by converting acoustic activity into electrical activity in the case of a microphone)
  • Level 5 — computer: The interface sends the data through a protocol to the computer, and the computer performs the required action (e.g. increases the volume)

Keyboard, mouse, touch, voice, and gesture control are separated from our intent by 3 levels; biosignals are separated by 2 levels; and invasive brain-computer interfaces are separated by one level. These levels of separation help explain the many-orders-of-magnitude difference between the gigabits-per-second speeds of the computer and the brain and the roughly 100 bits per second information transfer rate of our interfaces.

A New Language for the Brain-Computer Interface Era

So the ultimate interface is a direct Brain-Brain or Brain-Computer Interface. DARPA is spending over $65M to fund neural interfaces, and many private companies and researchers have been working on BCIs, examples of which are Neuralink, Kernel, BrainGate, and Facebook’s Building 8. However, we are still many years away from the day that healthy people would be willing to undergo surgery to embed sensors inside their brain as an interface. (Or, hopefully, we can find a way to do this without surgery.)

At that point, when we can reliably read data from the brain, human-invented language itself will become the speed barrier.

We put our intentions and thoughts into the form of language, and language itself is expressed through an alphabet or phonetic sounds.

At some point you may have asked yourself: but wait, we cannot speak or listen faster than a certain rate. And you are absolutely right! We can listen at up to around 200 words per minute and speak at a comparable rate. Some people have even trained themselves to listen to audiobooks at 5x the normal speed.

Research shows that some blind people can perceive language even at 8x the normal speed, thanks to neuroplasticity, the ability of the brain to re-wire itself. Even this pushes the information transfer rate only into the kilobits-per-second range at most, nowhere close to gigabits per second, and that takes me to my next point:

This may be pushing it into the science-fiction realm, but simply put, human-invented language is not fast enough to keep up with the brain and computers. We need to invent a faster means of information transfer for when we can make reliable neural interfaces.

When we have the ability to reliably record signals from all the neurons in the brain (which will not happen anytime soon), we would be able to transfer information at much higher speeds, and language itself would become the barrier.

Using language is like using a dial-up internet connection (50 kilobits per second), fast enough to read the news. Imagine upgrading to 5G (10 gigabits per second), where you can video chat with multiple people in HD quality, in real time.
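To make the analogy concrete, here is the raw ratio between the two bandwidth figures in the analogy (the numbers are the illustrative values from the text, not measurements):

```python
# Bandwidth gap between the dial-up and 5G figures used in the analogy above.
DIALUP_BPS = 50_000              # dial-up: ~50 kilobits per second
FIVE_G_BPS = 10_000_000_000      # 5G: ~10 gigabits per second

speedup = FIVE_G_BPS / DIALUP_BPS
print(f"5G is about {speedup:,.0f}x faster than dial-up")  # about 200,000x
```

A factor of roughly 200,000 — five orders of magnitude — which is the same scale of gap the article describes between today's interfaces and the brain.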

This is where we would need to invent a new language, one that may not be at all similar to what we are used to today. This new language would be able to pack intention and high-level thoughts into much more compact packets of data. That would change not only human-computer interaction but also human-human interaction.

Recently, news broke that AI chatbots invented a new language to talk to each other.

I am a cognitive neuroscience enthusiast and used to do a lot of Brain-Computer Interface research. I am passionate about human-computer interfaces and am always thinking about ways to make our interfaces, and the communication between humans and with the digital world around us, more information-rich and less full of friction. Please follow me here on Medium to read more about these topics.

Partner at DCVC ($2Bn VC firm) and author of “Super Founders”, a #1 bestselling new release among VC books on Amazon.