The ubiquity of smartphones and 4G networks over the last decade has changed our society and economy in dramatic ways. The most visible transformation is that it’s become commonplace to see people staring at tiny 2D screens in their hands while tapping, pinching, and swiping on mobile content.
The rise of augmented reality (AR) will create the next generation of the internet, a 3D spatial medium in which we’ll physically live, work, and interact. As this occurs, human-computer interaction (HCI) will permanently change in three significant ways:
- The world we live in will house a 3D internet
- The world will become an immersive design workspace
- Our identities will become further intertwined with the digital world
It’s clear that the next big thing in consumer and enterprise technology is AR smart glasses that overlay interactive digital 3D objects onto the real world. How will our lives change when this becomes the dominant form factor? In this piece, we’ve interviewed several AR entrepreneurs who have identified three core ways in which AR will dramatically change human-computer interaction.
The world we live in will house a 3D internet
From the advent of computing, through the rise of the personal computer, the dotcom boom, and the launch of the iPhone, we’ve interacted with the internet through flat 2D screens. The internet has traditionally been a digital world we have stared at through little glowing windows.
However, as we move into an augmented reality (AR) 3D internet, the way we interact with computers will permanently change. What will HCI look and feel like when we’re looking at digital objects overlaid on our real world, instead of pixels on a tiny screen? The short answer: It depends on who the user is, the context in which they are using AR, and the UX design preferences of the developer, among numerous other factors related to the human sensory experience.
Tony Bevilacqua is the CEO of Cognitive3D, a platform that provides 3D spatial analytics and user feedback tools for virtual and augmented reality. He elaborates on this point: “Businesses have traditionally used tools like Google Analytics to track behavior and usage on 2D interfaces like web browsers on smartphones and tablets. In doing so, it’s possible to gain valuable user insights about which parts of the page are most interesting to users, how long they stay on the page, and how frequently they return. However, with AR, we’re moving into an era of technology in which we’ll have to account for an immersive experience in which users are moving around in 3D space and time.”
Bevilacqua continues, “This raises other interesting questions about HCI that have, thus far, never been asked. Are your users walking around? Why are they going to certain physical locations? When they get there, are they grabbing 3D objects and moving them around? As we enter an immersive spatial internet, human-computer interaction will increasingly focus on how users physically navigate their environment and interact with digital objects.”
The world will become a design workspace
3D painting in Tilt Brush is an incredible experience because it enables creative expression in a spatial medium that has, thus far, been impossible. To replicate a Tilt Brush creation in the real world, it would be necessary to invent a special kind of paint that could magically float in the air. However, with the advent of AR and 3D spatial computing, it will become commonplace to carve masterpieces into the physical space in which we live and work.
Dr. Jack A. Cohen is the CEO of MASSLESS and inventor of the MASSLESS Pen, a smart pen that allows designers to turn 3D space into a creative canvas. He elaborates on this point: “While creating 3D objects and environments is not new, designers have traditionally had to use 2D interfaces to build 3D experiences. This is unintuitive and cumbersome. With AR, we’re now able to use the full 3D space to make 3D models, which is unprecedented and will define many of the imminent changes in HCI. We will be free from the confines of 2D screens and 2D interface devices and will be able to develop our designs in a natural and intuitive way. When the space around us is digital, we have full control over how everything in this space looks and behaves. This is like a superpower!”
Cohen continues: “These 3D user interfaces and user experiences are largely an unexplored frontier, which is exciting. HCI will start to get really interesting once we begin sharing our digital spaces for collaboration. This will advance us to the next stage of HCI, which will be more of a ‘human-computer-human interaction’, in which the internet becomes a spatial layer for collaborative work.”
Our digital and physical identities will be further intertwined
We live in a world in which we have a “real world identity”, defined by our passports and driver’s licenses, as well as a “digital identity”, defined by the content we’ve put on the internet across forums, blogs, and social media. As AR turns the internet into a ubiquitous spatial canvas we physically navigate at all times, our identities will become further connected to this digital realm.
If you read VentureBeat, you probably have an identity on Linkedin, Facebook, and Twitter: all as a means to represent yourself across the 2D digital world. However, what if you had a realistic 3D avatar that could represent you in the emerging spatial internet?
Morgan Young is the CEO of Quantum Capture, which combines 3D scanning with chatbot technology to create interactive virtual humans. He elaborates on this idea: “As AR continues to emerge, we’ll be able to display our digital representations accurately in a physical sense with 3D avatars – but also in any other way we want, including an animated, stylized, and whimsical manner. We won’t be confined to being ourselves, but will be able to take on any form we like.”
Young continues: “In addition to being able to represent ourselves digitally, we’ll be able to give realistic human avatars to AI entities like Siri and Alexa. That is, with AR glasses, you could actually interact with Siri and Alexa the way you would with a real person. These embodied AI personalities could have 3D bodies with autonomous, intelligently animated behaviors and interactions.”
The implications of this are profound. By enabling embodied AI entities, we’ll extend the spectrum of HCI into situations where stores have AI-driven holographic customer support staff on-site. Healthcare training will become easier to conduct, as medical students will be able to simulate operations on AI-enabled digital avatars. As these entities proliferate in society at large, the social and ethical guidelines for HCI will increasingly resemble those of interpersonal human relations, as interacting with technology will, one day, be uncannily similar to interacting with regular people.