By Ben Delaney
I just received an informative newsletter from Ultraleap, and in it was an article on a new interface for working in virtual and augmented worlds (https://www.ultraleap.com/company/news/blog/vr-ar-content-browser). Interfaces for xR are among the most challenging issues in the field. People have been trying to figure out the best way to work in virtual worlds since the 1990s, and the variations number at least into the hundreds. Frankly, most of those attempts have not been very successful.
Why is this such a tough problem?
To understand that, we need to address two important issues: user interface tradition and design.
Tradition may be the stickier interface design problem. Since the late 1980s, virtually every computer user has worked with a WIMP interface – Windows, Icons, Mouse, and Pointer. For traditional computing there's nothing wrong with the WIMP. It was designed for a two-dimensional workspace, and whether you're on Linux, a Mac, or a Microsoft platform, you are primarily dealing with a two-dimensional interface. Mobile devices, the next frontier in computing, still have variations of the WIMP interface in place (WIFP? Window, Icon, Finger, Pointer). Though voice commands are quickly becoming more useful on mobile devices, they are often used to navigate WIMP interfaces.
Working in 3D spaces presents significant new challenges to interface designers. Most obvious is depth. Windows are essentially two-dimensional objects, while virtual and augmented spaces are three-dimensional. Trying to map a two-dimensional interface into a three-dimensional space is full of problems, some of which are apparent even in a traditional windows interface. If you are the type of user who opens many windows at once, as I do, windows sit in front of and behind each other and occlude what you're trying to see. The systems we use on our day-to-day computers include a number of tools for finding hidden windows, such as the familiar Alt-Tab command, which cycles through open windows and lets you pick ones that are hidden from view.
Working in a true three-dimensional space has additional issues. Objects in the space may be occluded, located behind, above or below you, or in a distant and invisible area in the virtual world. Some of those objects have properties and values that the user needs to access, but which are not obvious. For example, a book in a three-dimensional world might be simply decorative – a fixture on the shelf in a library. Or it may be possible to open that book and read it. How the book is accessed and how its contents are made available to the user are key elements of the user experience.
Objects may be moving – how does the user catch up to them or stop their motion? What about an object contained in another, or an object that can be disassembled? Objects may include buttons to push or sliders to move. These "smart" objects, so called because of their many characteristics and inherent rules of operation and interaction, need to be easy to discover and use.
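To make the idea concrete, here is a minimal sketch of how such a smart object might be modeled, with a description it can report and a set of discoverable actions. All names here (`SmartObject`, `describe`, `affordances`, `operate`) are illustrative assumptions, not the API of any real xR toolkit.

```python
# Hypothetical sketch of a "smart" object with discoverable properties
# and actions. Names are illustrative, not from any real xR framework.
class SmartObject:
    def __init__(self, name, description, actions=None):
        self.name = name
        self.description = description
        # Map of discoverable action names to their handlers.
        self.actions = actions or {}

    def describe(self):
        """Answer an interrogation such as 'What are you?'"""
        return f"{self.name}: {self.description}"

    def affordances(self):
        """List the operations a user can discover on this object."""
        return sorted(self.actions)

    def operate(self, action):
        """Invoke a discovered action, e.g. open a book or push a button."""
        if action not in self.actions:
            raise ValueError(f"{self.name} cannot '{action}'")
        return self.actions[action]()


# A book that is more than decoration: it can be opened and read.
book = SmartObject(
    "library book",
    "a readable book on the shelf",
    actions={"open": lambda: "The book opens to its first page."},
)
print(book.describe())
print(book.affordances())
print(book.operate("open"))
```

The key design point is that the object itself carries its rules of interaction, so an interface can query it rather than relying on a separate, possibly out-of-sight menu.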
But in addition to smart objects and the other devices and objects populating a virtual world, the user interface also needs to include a variety of operational and navigation controls. These controls could enable large changes in the user's location, commonly called teleportation, taking you from one place in a virtual world to another. Such controls are often out of sight.
If one is using a virtual environment for designing, all the design tools need to be available in a menu structure that somehow makes sense in that virtual world. What we see today is typically a 2D interface, such as that of AutoCAD or Bentley, presented in a three-dimensional space as a flat object that includes a 3D view window. This layout enables the designer to see objects in three dimensions, but it uses a menu structure that is strictly two-dimensional and thoroughly traditional.
User interface design becomes a key factor when building virtual spaces for doing work, as well as those for entertainment. Donald Norman has written extensively about the importance of user interfaces. We especially recommend his book The Design of Everyday Things as a starting point for all interface designers. He stresses the importance of making interfaces intuitive, and he cites the burner control knobs on a stove, which often have no obvious correlation to the positions of the burners they control, as an example of a bad user interface. Similar bad examples exist in almost every virtual world. It's difficult to design a visual interface that lines up with objects in a virtual world that may be out of sight, behind the user, or in a different part of the space. And even if one can see an object, it may not be obvious what one can do with it. A bad interface on a stove is a minor inconvenience. A bad interface in a mission-critical application can have a huge cost.
I also highly recommend Edward Tufte’s seminal book, The Visual Display of Quantitative Information. Though it is strongly rooted in a two-dimensional world, Tufte’s thoughts on how to clearly present complex information provide guidance for anyone building interfaces or information presentations.
It is likely that the best user interfaces in virtual spaces are going to combine vocalization and traditional menu structures. It is much easier to point at an object and say, "What is that?" than to locate some sort of traditional two-dimensional menu and navigate several layers to discover the properties of an object or how to operate it.
We might call the new interface the PIDO interface: Point, Interrogate, Discover, Operate. Such an interface would be much more natural in virtual and augmented environments. It works a lot like things do in the real world. IRL (in real life), when we see something unfamiliar, we don't have to pick that object up and hold it in front of a person to ask them what it is. We don't need to find a hidden button that reveals its properties. We simply point at it and ask someone to tell us what they know about the object and how to use it.
In a virtual world, a similar process could take place. Pointing at an object could select it. Then one could ask "What are you?" or "How do you work?" and the object could explain itself verbally, pictorially, or with text. Having gained that understanding, we could then operate the object to accomplish a task.
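The PIDO loop described above could be sketched as a simple dispatcher: point to select an object, then route the spoken query to a description, an affordance list, or an action. This is a self-contained illustration under assumed names (`pido_session`, the query strings, the `scene` layout); a real system would use a raycaster and a speech recognizer in place of the string lookups.

```python
# Hypothetical sketch of the PIDO loop: Point, Interrogate, Discover, Operate.
# All names and query strings are illustrative assumptions.
def pido_session(scene, pointed_at, query):
    obj = scene.get(pointed_at)           # Point: select the object under the ray
    if obj is None:
        return "Nothing selectable there."
    if query == "what are you?":          # Interrogate: ask for a description
        return obj["description"]
    if query == "how do you work?":       # Discover: list available actions
        return "You can: " + ", ".join(obj["actions"])
    if query in obj["actions"]:           # Operate: invoke a discovered action
        return obj["actions"][query]()
    return "I don't understand."


scene = {
    "book": {
        "description": "A readable book from the library shelf.",
        "actions": {"open": lambda: "The book opens; its text becomes legible."},
    }
}
print(pido_session(scene, "book", "what are you?"))
print(pido_session(scene, "book", "how do you work?"))
print(pido_session(scene, "book", "open"))
```

Note that discovery falls out of the object's own action list, so the user never has to hunt through a menu hierarchy to learn what an object can do.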
Obviously, not every object is going to be visible, and not every object is going to be functional – pointing at a wall and asking what it can do would likely be unrewarding. However, pointing at a box and asking what it contains would be a natural and useful interaction. And pointing at or picking up a device that we believe may be a tool, and interrogating it about its functionality, would certainly be easier than searching for a manual or leaving the virtual world to find the device's instructions.
Getting back to Ultraleap's announcement: what we see is not bad, but neither is it any sort of breakthrough in interface design. Videos in the article show a virtual hand interacting with two-dimensional menus floating in space. This is fine in a sparsely populated space where all you need to do is interact with the menus. However, in a rich space full of virtual objects, especially if they are moving and occluding each other, this sort of menu system would be hard to use for many functions. That's not to say that a lot of thought hasn't gone into the design. As the article says, "stems are bright and boldly colored with no dark backgrounds because we designed them with additive color in mind." That's a good idea. However, it is not obvious that the menus are aware of the environment, that they can be attached to specific objects within that environment, or that they can be accessed or operated through vocalization.
So we have to say to Ultraleap, "Thank you. Good try." And we say to their interface designers, and to interface designers all around the world who are working to enable us to use xR in our everyday work and entertainment, "Please keep trying. And remember: 3D interfaces have to be different from 2D interfaces."