5/21/2014 – Kelsey Breseman
Travis Huch from Zuora sent me a few questions leading up to my talk at SolidCon, Beyond the Screen: Humans as Input-Output Devices. Zuora published snippets of my responses alongside those of some thought leaders in the Internet of Things space.
I encourage you to read their piece here: Internet of Things: The Big Picture
Below are my full responses to their questions.
How have connected devices already evolved beyond mere devices into completely interactive tools, feeding on and responding to human inputs and outputs? Specifically, how do currently available devices already improve and enhance our lives, providing more freedom and comfort and improving safety and health?
A completely interactive tool, one that seamlessly incorporates humans as a piece of the system, is a tool that people don’t even think about. That’s the end goal: Ubiquitous Computing as Mark Weiser imagined it. Every object is an embedded device, and as the user, you don’t even notice the calm flow of optimization.
The Nest thermostat is a good example of this sort of calm technology. The device sits on your wall, and you don’t spend much time interacting with it after the initial setup. It learns your habits: when you’re home, when you’re not, what temperatures you want your house to be at various points in the day. So you, as the user, don’t think about it. You just live in a world that’s better attuned to you.
There aren’t a lot of devices yet that interact this seamlessly with humans – as a society, we’re just beginning to explore ubiquity in computing. Smartphones and wearable devices are reaching in that direction, but I think that within five years, we’ll find most of today’s interfaces fairly clunky.
What groundbreaking human applications of these technologies are still on the horizon? What are some ways these can be used to make our environment more interactive, and responsive?
I think that one of the most interesting things we’ll see in the near future is the creation of non-screen interfaces. When we interact with technology, we rely almost solely on screens and buttons. But in the physical world, we use so many other interfaces.
Although it might be a while before consumer tech does much with smell or taste, audio and haptic device outputs are already interesting and fairly accessible. Lechal embodies a simple haptics concept: shoes that vibrate on the left or the right to signal a navigational direction as you approach a turn. Audio has been in use far longer, but innovations such as audio spotlighting open up possibilities for personal, non-disruptive sound without the need to put a plastic device (headphones) on your head.
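To make the haptics idea concrete, here is a toy sketch of Lechal-style turn cueing. Everything in it is invented for illustration – the motor callbacks, the 100 m cue radius, and the pulse counts are assumptions, not drawn from any real product API.

```python
# Hypothetical sketch of haptic turn-by-turn cueing, inspired by the
# Lechal concept. `pulse_left` / `pulse_right` stand in for whatever
# vibration-motor driver a real shoe would expose.

def haptic_cue(turn_direction, distance_m, pulse_left, pulse_right):
    """Pulse the left or right motor, pulsing more as the turn nears.

    Returns the number of pulses fired (0 if the turn is still too far away).
    """
    if distance_m > 100:
        return 0  # assumed cue radius: don't buzz until within 100 m
    # Closer turns get more pulses: 1 pulse at ~100 m, up to 5 at the turn.
    pulses = max(1, int(5 - distance_m / 25))
    motor = pulse_left if turn_direction == "left" else pulse_right
    for _ in range(pulses):
        motor()
    return pulses
```

The point of the design is that the wearer never looks at anything: direction is encoded in *which* foot buzzes, urgency in *how often*.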
Those are all inputs into humans, but there’s a lot of fascinating work going on to receive outputs from humans. The consumer-oriented Myo armband uses myoelectrics – the electrical signals from human muscle impulses – to read gestures. Or you could spin your own myoelectric device with this much cheaper muscle sensor. Similarly, you can buy an Emotiv headset to read your brainwaves, or you could DIY it. The implications there are amazing: you can wire up your own body as an electrical input into any electrical system – a computer, a robot, or whatever else you might build. You can control physical and digital things just by thinking really hard or by twitching your fingers.
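As a sketch of how a muscle sensor becomes an input, here is a minimal, entirely hypothetical gesture detector: it rectifies and smooths a stream of EMG-style analog readings, then reports one “flex” per rising threshold crossing. The threshold, the window size, and the idea that the sensor hands you a list of samples are all assumptions; a real board would stream values from an analog pin.

```python
# Hypothetical sketch: turning raw muscle-sensor (EMG-style) readings
# into discrete gesture events. Threshold and window are made-up values.

def detect_flex(samples, threshold=0.3, window=4):
    """Return sample indices where a 'flex' gesture begins.

    Rectifies the signal, smooths it with a short moving average,
    and reports each rising edge across `threshold`.
    """
    rectified = [abs(s) for s in samples]  # EMG is bipolar; rectify first
    events = []
    above = False
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        envelope = sum(rectified[lo:i + 1]) / (i + 1 - lo)  # moving average
        crossed = envelope >= threshold
        if crossed and not above:
            events.append(i)  # rising edge = one detected flex
        above = crossed
    return events
```

This edge-triggered structure is what lets a twitch act like a button press: the muscle can stay tense without the event firing twice.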
Current electrodes with long wire leads are a bit impractical for everyday wear, but research labs are working on that in a field called epidermal electronics: electronics that sit right on the skin, in the form of a temporary tattoo or a band-aid-like patch. A circuit adhered to your skin could monitor and wirelessly transmit your heartbeat, temperature, motion, location, or any of various other sensor data, 24/7, while keeping a low profile on your body. Graphene is another move in that direction.
Meanwhile, systems in the consumer space explore accepting input from humans without requiring physical attachment. The motion detectors on lights, automatic faucets, and self-flushing toilets are good examples of simple and intuitive interfaces: they accept natural human motions to perform previously manual tasks. More complex interactions move up to gestural control, as on the Leap or Kinect controllers, or even facial emotion recognition with Emotient.
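The logic behind an interface like that can be tiny. Here is a hedged sketch of a motion-activated light: a PIR-style sensor reports motion, and the light stays on for a hold period after the last detection. The class name, the 30-second hold time, and the sensor interface are assumptions for illustration.

```python
# Hypothetical sketch of the state logic inside a motion-activated light.
# Times are in seconds; in a real device `now` would come from a clock
# and `motion_detected` from a PIR sensor pin.

class MotionLight:
    def __init__(self, hold_seconds=30):
        self.hold = hold_seconds      # how long to stay on after motion
        self.last_motion = None       # timestamp of most recent motion

    def update(self, now, motion_detected):
        """Return True if the light should be on at time `now`."""
        if motion_detected:
            self.last_motion = now
        if self.last_motion is None:
            return False              # no motion seen yet
        return (now - self.last_motion) <= self.hold
```

The interface is invisible precisely because the state machine is so small: walk in, light on; walk away, light off a little later.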
On the whole, I think (hope) we’re about to get a lot better at interfacing machines with people outside of computer screens.