Pokémon GO has been all over the news since its launch, talked up everywhere from NPR and Forbes to regional and local news sites. Even my old law school blockmates have been posting about it on Facebook (Law students agog over a gaming app! Imagine that.), which isn’t surprising considering the numbers the game has been pulling in: Time calls it a worldwide phenomenon, and SurveyMonkey’s numbers indicate it’s “the biggest mobile game in US history.” (Lately the numbers have tapered off, but that’s something to gab about another day.)
It’s an impressive run for a game that’s built on a pretty simple mechanic: superimposing random creatures on the real world for players to catch. And it’s a mechanic I’ve been thinking a lot about since listening to a recent Naked Scientists podcast that featured a segment on EuroHaptics 2016.
Augmented reality (which is what fuels Pokémon GO) tends to rely heavily on sight and, to a lesser extent, sound to add information to users’ perceived reality. Just look at this overview from LiveScience, which starts by saying, “Augmented reality is using technology to superimpose information on the world we see” (emphasis mine). The article’s brief list of examples highlights how AR’s precursors mainly involved adding information to one’s field of view, a predilection that continues in more recent developments like 2013’s Google Glass and today’s phone and tablet apps. (Like, you know, Pokémon GO.) Even the sole mention of finger sensors, in the MIT SixthSense project, mainly involves the manipulation of projected images.
In other words, AR as we usually know it creates visual realities. Touch is supplementary, when it isn’t derived entirely from the existing physical aspects of the user’s environment. This is all well and good — for those of us who aren’t visually impaired.
Which brings us to haptics, or human-computer interaction through touch and bodily movements. Hearing about the research presented at EuroHaptics 2016, I got to thinking about how expanding what constitutes mainstream AR could help a lot of users whose needs aren’t always accommodated by common tech interfaces. Deeper integration of haptic technology — going beyond haptics as a supplementary mechanism for sight — could provide more accessible means of navigating and controlling various tech, which could, in turn, make it easier to interact with the real world in general.
One project from EuroHaptics, for example, looked at the effectiveness of haptic feedback mechanisms for pilots landing aircraft at night or in featureless environments. Haptic cues proved helpful in countering the “black hole illusion” that arises from such visual conditions. It’s easy to imagine these mechanisms, with some tweaking and additional sensors, being useful not just in situations with unfavorable visual input, but in ones where there are no visual cues at all.
This isn’t a new idea, of course. I mean, EuroHaptics has been convening since 2006, and we’ve heard plenty about technologies helping people with a range of disabilities. Still, there’s considerable room for improvement when it comes to integrating non-visual sensory input into commonly available tech, particularly in AR, and I point this out because there doesn’t seem to be much of a push in these directions.
I can’t speak for other people, but the futures I grew up dreaming about were often distinguished by vision-centric developments: glassy holograms; light-based interfaces dancing on smooth worktops; vast networks accessed through complicated headsets. As our technology progresses, some of those developments inch closer to being part of our everyday: Google Glass happened, Microsoft now ships the HoloLens, the world is agog over Pokémon GO. However, the question remains as to how many people that “our” actually includes.
True, a lot of that technological progress has also given us robotic limbs (even exoskeletons!) and other impressive solutions to various conditions that limit people’s ability to interact with both the real and technological worlds. But as this Al Jazeera article points out, a gap remains in the everyday spaces — and that gap raises important questions about what goals we intend these technologies to achieve vis-à-vis disability, and what views of disability those goals imply.
In that same article, the filmmaker Regan Brashear asks illuminating questions about the perceptions of disability that come to inform the development of assistive technologies:
“Is it a valuable part of human life that will always be with us, or is it a problem to be fixed or eliminated? These perspectives lead us towards very different futures. One is about fighting for inclusion on all levels of society, ending stigma and developing useful and needed assistive technologies to enhance quality of life in conversation with the intended users. The other perceives disability as an inherent negative to be ‘fixed’ at all costs.”
Augmented reality seeks to enhance our experience of the real world by overlaying important information and expanded controls onto our interactions with that world. So far that enhancement has taken primarily visual forms, at least in AR’s mainstream implementations. But conventions like EuroHaptics 2016 tell us that this limitation can’t exactly be called a consequence of technological deficit: there’s a lot of research and development involving senses other than the visual, and it’s happening (and available!) right now.
So why aren’t we seeing more extensive, dynamic deployments of this research in everyday technologies like smartphone-based AR? What’s keeping us from using advances in fields like haptics to implement more widespread — and better — instances of, as that Al Jazeera article says, the “simple technologies and accommodations” that enable persons with disabilities to participate more fully in society? Or to put it in Pokémon GO terms: while droves of us can now head out to try and catch them all, not all of us can take part in the catching, and it’s worth thinking about why.
Augmented reality, as with other kinds of tech, develops towards goals that we set our sights on. Considering the dominance of the visual in these technologies’ current iterations, wouldn’t it be ironic if we missed out on more inclusive technological commons because of a lack of vision?