Placing an Interface in Mixed Reality
We're all accustomed to 2D interfaces – they're a collection of text, buttons, and other fields on a rectangle. On a desktop, we can move the rectangles around. On mobile devices, they're typically fixed to the screen dimensions.
In 3D, this gets a bit more complicated, but we can often skirt around the issues of a fully 3D interface by taking advantage of the fact that the user's display is still actually two dimensional. Many interfaces just take over this display and present as 2D.
We don't have the luxury of taking over screen space in mixed reality. The user will always be able to reposition and reorient themselves in space. We have a lot fewer tricks available to us than a VR solution as well, because we need to keep a consistent spatial relationship with the real world at all times.
There are two basic approaches to tackling this problem. One is a "head locked" UI that follows the user around and constantly hovers in front of them. This works, but doesn't give that "exists in the real world" feel we're looking for in XR. The other approach is to place the UI directly in the world and give it a persistent location. This is a much more XR approach – the one we want to pursue in Weave.
Consider the main menu in Lumin OS. When summoned, it appears a few feet in front of the user, at a fixed position in space. As the user moves around, the menu will turn toward the user if they get too far to one side, and it adjusts up and down if the user's head height changes by a certain tolerance.
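That "turn toward the user if they get too far to one side" behavior can be approximated with a dead-zone billboard. The sketch below is hypothetical (the component and field names are ours, not Lumin's), and it omits the head-height adjustment:

```csharp
using UnityEngine;

// Hypothetical sketch: only start turning the menu toward the user once
// they drift past an angular tolerance, then keep turning until aligned.
public class MenuFacing : MonoBehaviour
{
    public Transform userHead;         // the user's head/camera transform
    public float angleTolerance = 30f; // degrees of drift before we react
    public float turnSpeed = 90f;      // degrees per second while turning

    bool turning;

    void Update()
    {
        // Direction from the menu to the user, flattened to the horizontal plane
        var toUser = userHead.position - transform.position;
        toUser.y = 0f;
        if (toUser.sqrMagnitude < 0.0001f)
            return;

        // Rotation that points the menu's forward axis at the user
        var desired = Quaternion.LookRotation(toUser);
        var angle = Quaternion.Angle(transform.rotation, desired);

        // Begin turning once outside the tolerance; stop once realigned
        if (angle > angleTolerance) turning = true;
        if (turning)
        {
            transform.rotation = Quaternion.RotateTowards(
                transform.rotation, desired, turnSpeed * Time.deltaTime);
            if (angle < 1f) turning = false;
        }
    }
}
```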
Straightforward enough, but one usability issue with this menu is that it can appear partially or even fully embedded in a wall. Because objects are occluded by walls and furniture, the menu can be partly or fully hidden from view. In the worst case, the user could attempt to summon the menu and be unable to find it!
In Weave, we want to support a similar interaction paradigm, with an interface that hangs in space while rotating to stay facing the player. We also want to try to fix the hidden menu problem by not putting it inside walls. So, how should we position it?
The most basic approach is a simple raycast from the user's head position along their look direction. If the ray hits any part of the mapped world, we stop and place the menu there. Sounds good in principle.
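As a sketch, that naive version might look like this in Unity (the distance and layer mask parameters here are our assumptions, not values from the game):

```csharp
using UnityEngine;

// Naive placement sketch: cast a ray from the head along the gaze and
// place the menu at the hit point, or at a default distance if nothing is hit.
public static class NaiveMenuPlacement
{
    public static Vector3 FindPosition(
        Transform head, float maxMenuDistance, LayerMask meshingLayers)
    {
        if (Physics.Raycast(head.position, head.forward,
                out RaycastHit hit, maxMenuDistance, meshingLayers))
        {
            // Stops at the first piece of mapped world it touches
            return hit.point;
        }
        return head.position + head.forward * maxMenuDistance;
    }
}
```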
Unfortunately, this approach doesn't take into account the width of the menu. It could easily appear in a corner or other crevice and still end up largely occluded by the world.
It also doesn't take into account rotation. If we place the menu directly on a flat wall, and the player changes their viewing angle to the menu, its rotation will cause it to become occluded immediately.
We could offset the placement away from the wall by some fixed amount, but not only would that distance be arbitrary, it still wouldn't solve problems like placement in a corner.
Rather than a ray, let's try a spherecast from the player's head position into the world. Assuming we have an adequate bounding sphere around the menu, this lets us choose a position between the player and our desired distance that doesn't intersect any walls or furniture. The menu can then rotate freely within this sphere without intersecting anything. Without constantly doing collision checks, this approach won't handle significant changes in head height when a table, slanted ceiling, or other obstacle is in the way, but that is much more of an edge case than the average case.
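A sketch of the spherecast version, assuming we already know the menu's bounding-sphere radius (the parameter names are ours):

```csharp
using UnityEngine;

// Spherecast sketch: sweep the menu's bounding sphere from the head toward
// the desired spot and stop short of the first obstruction, leaving room
// for the menu to rotate freely once placed.
public static class SphereMenuPlacement
{
    public static Vector3 FindPosition(
        Transform head, float desiredDistance,
        float menuRadius, LayerMask meshingLayers)
    {
        if (Physics.SphereCast(head.position, menuRadius, head.forward,
                out RaycastHit hit, desiredDistance, meshingLayers))
        {
            // hit.distance is how far the sphere's center traveled before
            // touching anything, so placing the menu's center there keeps
            // the whole sphere clear of the world.
            return head.position + head.forward * hit.distance;
        }
        return head.position + head.forward * desiredDistance;
    }
}
```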
So let's give it a try. How large should our bounding sphere be?
To calculate this, we'll gather up all the bounds of all objects in the menu we want to display. For 3D objects, we can visit every Renderer in the hierarchy, and add its bounds to a combined bounds.
// Initial bounds at our current position with no size
var combinedBounds = new Bounds(transform.position, Vector3.zero);
// Add bounds of all child renderers
var renderers = GetComponentsInChildren<Renderer>(true);
foreach (var render in renderers)
    combinedBounds.Encapsulate(render.bounds);
For 2D objects on a UI Canvas, we can get the corners of its rectangle in the world, and encapsulate those.
// Add bounds of all child rect transforms (2D UI)
var rectXforms = GetComponentsInChildren<RectTransform>(true);
var corners = new Vector3[4];
foreach (var rect in rectXforms)
{
    rect.GetWorldCorners(corners);
    foreach (var v in corners)
        combinedBounds.Encapsulate(v);
}
Then we can use the distance from the center of our bounds to the corner as the radius of our sphere:
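In Unity terms, `Bounds.extents` is the half-size of the box, so its magnitude is exactly that center-to-corner distance (a sketch, continuing from the `combinedBounds` built above):

```csharp
// The extents vector is half the size of the box, so its length is the
// distance from the center of the bounds to a corner.
var sphereRadius = combinedBounds.extents.magnitude;
```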
This works, but why does it seem like our menu always ends up further away from everything than it needs to be?
The problem we're running into is that we're creating an encapsulating sphere around a box, and the sphere always ends up significantly larger than the box. The worst case is when our menu itself is actually a circle. Picture a circle of radius r, enclosed by a bounding box, enclosed again by a sphere: the sphere's radius is the box's center-to-corner distance, r√2 — over 40% larger than the circle it contains. It's huge!
The compromise we're going with for now is to take the largest dimension of the bounds as the radius of our sphere:
var extents = combinedBounds.extents;
var sphereRadius = Mathf.Max(Mathf.Max(extents.x, extents.y), extents.z);
This won't be guaranteed to fully encapsulate our menu, especially at the corners. It's adequate for the average case of the menu being placed against a flat wall. We will be able to walk all around the menu without it rotating into the wall and being occluded. We might be able to do better with a more complex shape cast, like a cylinder, but a sphere is working fairly well for us for now.
There are other usability issues to solve with a UI like this, including what happens if the player is standing directly inside of the menu, or what happens if the menu is placed beyond a wall that only gets mapped afterward. There are also usability concerns around how to direct the player to the menu when it is completely out of their field of view – like when it's behind them. But we have ideas for these that we'll be exploring in the future!