After 50 Years, Is It Time to Say Goodbye to the Keyboard?

An Overview of Human-Computer Interfaces. What Comes After Touch Screen and Voice Recognition?

An Apple Watch, which is not even considered a particularly powerful computer, can process gigabytes of data each second. Our brains have tens of billions of neurons and over a quadrillion connections, and they process an amount of data each second so enormous we cannot even estimate it. Yet the humble keyboard and mouse are still, to this date, the fastest bridge between the powerful human brain and the ultrafast world of 0s and 1s.

The invention of the computer keyboard goes back over 50 years. Public Domain.
A man falls in love with his voice assistant in the movie “Her”.

Moving Beyond the Keyboard and Mouse?

Computers are getting embedded into all kinds of objects, and since we cannot connect a keyboard and mouse to every object around us, we need to find other interfaces. The current solution for interacting with smart objects (the Internet of Things, or IoT) is voice recognition, which has obvious limitations, such as awkwardness of use in public. Let’s take a look at the methods that researchers and companies are working on at the moment.

Touch

Advances in multi-touch technology and multi-touch gestures (like pinching) have made the touch screen the favorite interface. Researchers and startups are working on a richer touch experience: detecting how firm your touch is, which part of your finger is touching, and whose finger is touching.
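To make the pinch example concrete, here is a minimal sketch (function names and the threshold are my own, purely illustrative) of how a two-finger pinch can be classified from the change in distance between the touch points across two frames:

```python
import math

def touch_distance(p1, p2):
    """Euclidean distance between two touch points given as (x, y)."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def classify_pinch(start_points, end_points, threshold=20.0):
    """Classify a two-finger gesture as pinch-in, pinch-out, or neither,
    based on how the finger spacing changed between frames."""
    delta = touch_distance(*end_points) - touch_distance(*start_points)
    if delta < -threshold:
        return "pinch-in"    # fingers moved together (e.g. zoom out)
    if delta > threshold:
        return "pinch-out"   # fingers moved apart (e.g. zoom in)
    return "none"
```

A real touch stack would run this continuously over an event stream and add hysteresis, but the core signal is just that one distance delta.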

iPhone’s 3D Touch detects force. Source: Giphy.
Qeexo can understand which part of your finger is touching the screen
One of my favorite methods is Swept Frequency Capacitive Sensing (SFCS), developed by Professor Chris Harrison at Carnegie Mellon University.
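The idea behind SFCS is to sweep an excitation signal across many frequencies and match the resulting capacitive response curve against profiles recorded during a training phase. A toy nearest-neighbor matcher, with made-up template values, might look like this (this is a sketch of the general idea, not Harrison's actual pipeline):

```python
def match_touch_profile(response, templates):
    """Match a swept-frequency capacitive response curve against
    per-gesture template curves using nearest neighbor (sum of
    squared differences). `response` and each template are lists of
    sensor readings, one per excitation frequency."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda label: distance(response, templates[label]))

# Toy templates: response curves that would be recorded during training.
templates = {
    "one finger": [0.9, 0.7, 0.4, 0.2],
    "full grasp": [0.5, 0.8, 0.9, 0.6],
}
```

Because different grips change the body's capacitive coupling differently at different frequencies, the whole curve (not a single reading) is what makes the classification possible.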

Voice

DARPA was funding research in this area as far back as the 1970s, but until recently voice recognition was not very useful. Thanks to deep learning, we have now become quite good at it. The biggest challenge with voice at this moment is not transcribing, but rather perceiving meaning based on the context.

Hound does a great job at contextual speech recognition
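As a side note on how "pretty good at voice recognition" gets quantified: the standard metric is word error rate (WER), the word-level edit distance between a system's transcript and a reference transcript. A self-contained sketch:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

A WER of 0.05 means roughly one word in twenty is wrong; modern deep-learning recognizers reach that range on clean speech, which is why transcription itself is no longer the bottleneck.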

Eye

In eye tracking, we measure either the gaze (where one is looking) or the motion of the eye relative to the head. With the falling cost of cameras and sensors, as well as the increasing popularity of virtual reality eyewear, eye tracking is becoming useful as an interface.

Eyefluence, which was acquired by Google, allowed you to navigate virtual reality by tracking your eyes
Tobii, which had an IPO in 2015, works with consumer electronics manufacturers to embed their eye tracking technology. Image source: Flickr.
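Under the hood, gaze estimation typically starts with a calibration step that maps raw pupil coordinates to screen coordinates. A minimal sketch, assuming a simple per-axis linear model fitted from just two calibration fixations (real trackers use many points and richer models):

```python
def fit_axis(pupil_vals, screen_vals):
    """Fit screen = scale * pupil + offset from two calibration samples."""
    (p0, p1), (s0, s1) = pupil_vals, screen_vals
    scale = (s1 - s0) / (p1 - p0)
    offset = s0 - scale * p0
    return scale, offset

def make_gaze_mapper(pupil_points, screen_points):
    """Build a pupil -> screen mapping from two calibration fixations,
    fitting the x and y axes independently."""
    sx = fit_axis([p[0] for p in pupil_points], [s[0] for s in screen_points])
    sy = fit_axis([p[1] for p in pupil_points], [s[1] for s in screen_points])
    def to_screen(pupil):
        return (sx[0] * pupil[0] + sx[1], sy[0] * pupil[1] + sy[1])
    return to_screen
```

This is why eye-tracking devices ask you to stare at dots in the corners of the screen before use: those fixations are the calibration samples.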

Gesture

Gesture control is the human-computer interface closest to my heart. I have personally done scientific research on various gesture control methods. Some of the technologies used for gesture detection are:

New research by CMU’s Future Interfaces Group shows spectacular classification accuracy using high-sample-rate accelerometer data.
Leap Motion is a consumer device for gesture control.
Apple has taken this one step further by embedding all of this in the front camera of the iPhone X for Face ID.
AuraSense uses one transmitter and four receiver antennas in a smartwatch for gesture control.
Google ATAP’s Project Soli. Source: Soli’s website.
Muscle-machine interface from Thalmic Labs. Source: Thalmic’s video.
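The accelerometer-based approach above can be sketched in miniature: summarize a motion trace with a few features and assign the nearest trained gesture class. This toy version uses nearest-centroid classification; the feature choice and centroid values are my own illustrations, not taken from the CMU work:

```python
def features(trace):
    """Summarize an accelerometer magnitude trace with mean and variance."""
    mean = sum(trace) / len(trace)
    var = sum((x - mean) ** 2 for x in trace) / len(trace)
    return (mean, var)

def classify_gesture(trace, centroids):
    """Nearest-centroid classification in feature space. `centroids`
    maps gesture labels to feature tuples learned from training data."""
    f = features(trace)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy centroids: a flick is energetic (high variance), a hold is flat.
centroids = {"flick": (1.5, 4.0), "hold": (1.0, 0.01)}
```

High-sample-rate data matters because fast, subtle gestures show up as high-frequency structure that low-rate sensors simply never see.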

Biosignals

If you haven’t been wowed yet, let’s take it even further. All the methods mentioned above measure and detect a by-product of our hand gestures. Biosignal interfaces instead read the body’s own signals, such as the electrical activity of the muscles themselves (surface electromyography, or sEMG).

Thalmic Labs was one of the first companies to develop a consumer device, the Myo armband, based on sEMG. Source: Imgur.
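A common first processing step for sEMG is computing a moving RMS envelope of the raw signal and thresholding it to detect muscle activation. A minimal sketch (the window size and threshold are arbitrary illustrative values):

```python
def rms_envelope(signal, window=4):
    """Moving RMS envelope of an EMG signal (a list of samples)."""
    env = []
    for i in range(len(signal) - window + 1):
        chunk = signal[i:i + window]
        env.append((sum(x * x for x in chunk) / window) ** 0.5)
    return env

def detect_activation(signal, threshold=0.5, window=4):
    """Return True if the RMS envelope ever crosses `threshold`,
    i.e. the muscle contracted hard enough to count as a gesture."""
    return any(v > threshold for v in rms_envelope(signal, window))
```

Raw EMG oscillates around zero, so the RMS envelope is what turns it into a usable "how hard is this muscle working" signal that gesture classifiers can build on.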

Brain-Computer Interface

In the past year, a lot has happened: DARPA is spending $65M to fund neural interfaces, Elon Musk has raised $27M for Neuralink, Kernel has received $100M in funding from its founder Bryan Johnson, and Facebook is working on a brain-computer interface. There are two very distinct types of BCIs: invasive ones, which require surgically implanted electrodes, and non-invasive ones, which read brain activity from outside the skull (for example, EEG headsets).

Neurable. Source: Neurable Website
Minority Report Movie. Flickr.
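Non-invasive BCIs like Neurable's work with evoked responses buried in very noisy EEG. The classic technique is to average many time-locked epochs: random noise tends to cancel across trials while the stimulus-locked response survives. A minimal sketch:

```python
def average_epochs(epochs):
    """Average time-locked EEG epochs sample-by-sample. Random noise
    tends to cancel across trials, leaving the evoked potential."""
    n = len(epochs)
    length = len(epochs[0])
    return [sum(e[t] for e in epochs) / n for t in range(length)]
```

In the toy test below, two epochs carry the same underlying response with opposite-sign noise, and averaging recovers the clean signal; in practice, dozens to hundreds of trials are averaged.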

Challenges

It may have occurred to you to ask: given all the interesting technologies mentioned above, why are we still limited to the keyboard and mouse? There are certain boxes a human-computer interaction technology must tick before it can make its way to the mass market.

Accuracy

Would you use a touch screen as your main interface if it only worked 7 times out of 10? To serve as a primary interface, a technology needs very high accuracy.

Latency

Imagine for a moment that the letters you type showed up one second after you pressed each key. That one second alone would kill the experience. A human-computer interface with more than a couple hundred milliseconds of latency is simply unusable.
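Checking whether an input pipeline fits that budget is straightforward with a high-resolution timer. A sketch (the handler and the 200 ms budget are illustrative):

```python
import time

def measure_latency(handler, event, budget_ms=200):
    """Time one pass through an input handler and report whether it
    fits the latency budget (a couple hundred milliseconds at most)."""
    start = time.perf_counter()
    handler(event)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms, elapsed_ms <= budget_ms
```

The hard part for interfaces like voice or BCIs is that their latency includes sensing, signal processing, and model inference, so every stage has to be fast for the total to stay under the budget.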

Training

A human-computer interface should not require the user to spend a lot of time learning new gestures (e.g., a distinct gesture for each letter of the alphabet!).

Feedback

The click of a keyboard, the vibration of a phone, the small beep of a voice assistant: all of these intend to close the feedback loop. The feedback loop is one of the most important aspects of interface design, yet it often goes unnoticed by users. Our brain keeps looking for confirmation that its action has yielded a result.

Researchers are working on touch screens that can provide 3D haptic feedback, which would take the touch-screen experience to a whole new level. Apple has done a great job with its haptic feedback technology.

The Future of Human-Computer Interfaces

Given the challenges mentioned above, it seems we are not yet in a position to replace the keyboard. Here is what I think the future of interfaces will look like:

  • Contextually aware: You read an article on your laptop about wildfires in northern California, then ask the voice assistant on your smart headphones, “How windy is it there?” It should understand that you are referring to where the fires are.
  • Automated: With the help of AI, computers will get better at predicting what you intend to do, so you won’t even need to issue a command. Your device will know you want certain music played when you wake up, so you won’t need an interface to find and play a song in the morning.
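The contextual-awareness idea can be sketched as a short-lived store of recently mentioned entities that follow-up queries get resolved against. The class, slot names, and string matching here are all hypothetical and deliberately naive; real assistants use far more sophisticated coreference resolution:

```python
class ContextStore:
    """Tracks the most recently mentioned entities so follow-up
    queries like "How windy is it there?" can be resolved."""
    def __init__(self):
        self.slots = {}

    def remember(self, slot, value):
        """Record an entity mentioned in recent interaction."""
        self.slots[slot] = value

    def resolve(self, query):
        """Replace the placeholder word with the last known location.
        Naive substring matching; illustration only."""
        if "there" in query and "location" in self.slots:
            return query.replace("there", "in " + self.slots["location"])
        return query
```

In the wildfire example, the article you just read would populate the location slot, so the assistant can expand the vague question into a fully specified one before answering.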

Founder turned VC, Partner at DCVC. Author of “Super Founders”. Spent 4 years collecting the largest dataset on startups.
