The cutting edge of AR/VR is in an odd place. We are hurtling toward decentralized global computing, which promises a new high-water mark of immersion, both in brand new worlds and in augmenting the world around us. Exactly what form this will take is hard to know, but luckily, that isn’t stopping anyone. Possibilities range from commercial applications to learning platforms, infused with the same wonder we felt when we looked at our smartphones for the first time a generation prior. Yet here we are just a few years later, and the pace of development has once again outstripped our ability to keep track of what exactly we are trying to create.

Creating interactivity within these new spaces is currently quite a task. The UX conventions we took for granted start to show their age when we look at how we move and respond to these new technologies, and bridging the gap between low-level functions and end-user interaction becomes a much more exciting (and involved) area. In current prototypes of augmented reality interfaces, relative minimalism keeps new users from being overwhelmed by the near-infinite functionality that is now possible. Implementing these interfaces is its own challenge; the end product is often the result of consensus or compromise, shaped by the development platforms in use, each of which carries its own limitations. Still, all of these growing pains pale in comparison to the massive long-term benefit, for both professional and personal use, that augmented and virtual reality open the doors to.



Current implementations of augmented reality interfaces tend to mimic our 2D, screen-based world: the left-to-right, top-down hierarchies people have become accustomed to. It is the obvious first step, and it lets the average user feel comfortable interacting with email windows or any basic function that already exists on all of our devices. But it is now possible to break information out into space: radial layouts, clouds of data, displays of a complexity that would overwhelm today’s users. As the next generation grows up with these technologies, the complexity of these interfaces and data displays will grow up with them; minimalism becomes less of a prerequisite and more of a conscious decision rather than a borderline necessity.

The tools available to build interfaces for AR/VR are exploding in functionality and offering abilities never before available. New vector-based standards are starting to rear their heads on the web, and I think they present fantastic opportunities for AR. Being able to build and export open-standard vector formats from animation tools brings a whole new level of scalability and resolution agnosticism to the applications and tools currently being developed (given current strides toward 4K+ displays, this becomes something to consider long term). Decentralized computing power generating real-time raytraced objects on mobile augmented reality devices sounds just out of reach, but with optimizations, lightweight, fully scalable vector-based tools backed by remote graphics acceleration are a real possibility within the next few years. It is hard to talk about implementation when the development possibilities have barely been scratched.
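The resolution agnosticism mentioned above is easy to illustrate: a vector asset stores its geometry in normalized coordinates and is only projected onto a concrete pixel grid at display time, so the same file serves a phone screen or a 4K+ headset without re-authoring. A minimal sketch in Python (the `scale_path` helper and the sample icon path are hypothetical, purely for illustration):

```python
# A vector path stored once, in normalized coordinates (0.0 to 1.0).
# Projecting it to any display resolution is a simple per-point scale,
# which is why vector formats stay crisp regardless of the target panel.

def scale_path(path, width, height):
    """Project normalized (x, y) points onto a concrete pixel grid."""
    return [(x * width, y * height) for x, y in path]

# Hypothetical icon outline: a triangle defined in normalized space.
icon = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.9)]

# The same asset targets two very different displays without edits.
phone_px = scale_path(icon, 1080, 1920)    # portrait phone panel
headset_px = scale_path(icon, 3840, 2160)  # 4K headset eye buffer
```

A raster asset would need to be exported at each target resolution (or upscaled with visible blur); here, only the final projection changes.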



It is hard to imagine a field that does not have some exciting application for AR. In medical imaging, body scans and MRIs could generate models of the individual patient before surgery, mapping injuries and any physical deviations from the norm that could impact the procedure directly onto the surgeon’s AR view. Teaching a whole new generation of students in a ‘hands-on’ manner becomes possible: augmented reality lets engineering students dive into CAD assemblies and interactively model, design, and troubleshoot even civil-engineering-scale problems in a whole new way. Add to that giving pilots more critical information while they fly, letting architects and clients visualize the end product on site before construction even begins, providing navigation while driving, and letting real estate agents sell houses around the world (which is already well on its way to becoming common).



Thinking back to a 2010 talk by Blaise Aguera y Arcas, which demoed phones placing live footage into Bing Maps on the fly for people to walk around and experience: that was the first ripple of the ubiquitous, immersive reality we are working toward. We are already massively enthralled by flipping through a 2D timeline that infinitely scrolls, showing us images and video of people and places. Now add the ability to go back to the place where a video was taken, see the video reprojected into the environment, with the people projected onto point clouds (because this is the future, and obviously the camera captured depth data), offering real 3D playback of all of our life experiences.

It does seem that, at this time, there is still some fragmentation between the technologies that would make these ideas possible, but that gap is closing. We are slowly breaking into a world of augmented, content-immersive reality, and it couldn’t be more exciting.