VIEW Conference: Google Daydream UI Session

learning | Posted by John Montgomery | October 31, 2016

Google’s Daydream, due for release in November, is a VR platform intended to bring high-quality mobile VR to as many people as possible within the Android ecosystem. At launch, the Google Pixel running Android 7.0 Nougat will be the only supported phone. Over time, the aim is to support phones from a variety of manufacturers, unlike Samsung’s Gear VR platform, which is locked to Samsung phones.

The Daydream View, announced earlier this year, is a headset and controller combo now available for preorder at $79 (USD), £69 (GBP), $99 (CAD), €69 (EUR), or $119 (AUD). The fabric-covered headset weighs in at 7.76 oz (220 g) and will work with a variety of Daydream-ready phones moving forward.

The controller has a concave, clickable, swipe-enabled touch area (similar to the new Apple TV remote), an “app button” that developers can program to do what they need, and a home button reserved for the system to navigate home and perform a few other functions. There are also volume buttons along the side.

The controller uses a 9-axis IMU that fuses data from an accelerometer, gyroscope, and magnetometer to output an absolute rotation value in VR space. This allows the user to point at objects in the scene and click. It also allows for actions such as swinging, grabbing objects, flicks of the wrist, and more. While not a perfect comparison, it is quite similar to the way one uses Nintendo’s Wii Remote.

From a developer’s standpoint, Google offers several SDKs: the Google VR SDK for Android, the Google VR SDK for Unity (for adapting existing Unity apps for VR), and the Google VR SDK for Unreal (Unreal Engine supports Daydream development natively via a plugin).

There is also a GitHub site with 16 demos showing how developers can put the Daydream controller to use.

It’s important to differentiate this platform (and its limitations) from dedicated platforms such as the Oculus Rift, HTC Vive, and Sony’s PlayStation VR. Google’s goal is to forgo the limited audience of high-performance dedicated hardware and instead reach “billions” of users, leveraging the ubiquity of the mobile phone. The platform is meant to be accessible to users new to VR, without their having to worry about hardware specs and the like. Therefore, the experiences possible on Daydream are quite different from those on dedicated hardware. Brian Pullen’s talk covers Google’s research and observations, as well as recommendations for developing for this platform, which may (or may not) differ from dedicated VR hardware.

According to Pullen, mobile developers tend to create VR apps similar to traditional “micro apps,” used for short stretches when users have a free couple of minutes. Google had to push back against short VR experiences because their research found that people would generally use VR for around 30 minutes at a time. In addition, users would generally seek a private space for experiencing VR, likely because of its isolating nature. It can be quite awkward to view content around others when one can’t see the surrounding environment, so it’s only natural that seeking a private space would be common.

Developers new to VR tend to think of it as offering infinite space for UI, and to position items everywhere within that space. However, the team at Google feels UI should be more focused. On most mobile VR viewing devices (including Daydream), a user can comfortably see about a 70-degree FOV. As for turning one’s head, a seated user can turn roughly 80 degrees, but at the extreme of this range it becomes quite uncomfortable. Google suggests that a range of motion of 30 degrees to the left and right is a comfortable target. So when designing a UI, it is best to keep important information within the center of the 70-degree FOV and put less important information further out toward the edges.
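These angular guidelines can be sketched as a simple layout check. This is purely illustrative, assuming the figures from the talk (a 70-degree comfortable FOV and a 30-degree comfortable head turn to either side); the function and zone names are not part of any Google SDK.

```python
# Illustrative comfort-zone check based on the figures from Pullen's talk.
COMFORT_FOV_DEG = 70    # field of view that is comfortably visible
COMFORT_TURN_DEG = 30   # comfortable head rotation to either side

def placement_zone(azimuth_deg: float) -> str:
    """Classify a UI element by its horizontal angle from straight ahead."""
    a = abs(azimuth_deg)
    if a <= COMFORT_FOV_DEG / 2:
        return "primary"      # visible without turning the head
    if a <= COMFORT_FOV_DEG / 2 + COMFORT_TURN_DEG:
        return "secondary"    # reachable with a comfortable head turn
    return "avoid"            # requires an uncomfortable turn

print(placement_zone(20))    # primary
print(placement_zone(-50))   # secondary
print(placement_zone(90))    # avoid
```

The three zones mirror the recommendation above: critical content in the central 70 degrees, secondary content within a comfortable head turn, and nothing important beyond that.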

In addition, Google found that many users would view longer VR content while lying on a couch or bed with their head propped up on a pillow. In this position, one’s head is tilted down approximately 15 degrees, so an exact vertical-center position for UI may not be the best choice. Shifting the center of the UI (or at least its important elements) down slightly is preferred. This is why Google’s apps commonly place a title at the top of the screen (which requires no user interaction) and weight navigation and function buttons toward the bottom.

Brian Pullen discussing the YouTube UI, showing concentration of information in center of the screen.


What about depth? Where should the UI canvas be placed in 3D space? Google found that developers generally put it too close to the user (say, half a meter), which invades one’s personal space, creating the feeling of not being able to get away from the UI, or even making one cross-eyed. The UI plane can also be placed too far away: at around 20 meters and beyond, the stereo effect is lost.

In the end, essentially every app Google has designed for Daydream places the UI canvas 3.5 m from the user. This is somewhat surprising, since conventional wisdom and a multitude of studies say that placing the UI at the headset’s focal distance is ideal. Pullen acknowledges that their observations differ, but they found their placement doesn’t noticeably impact the experience.
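The depth guidance above boils down to three numbers. A minimal sketch, assuming the limits quoted in the talk (too close under ~0.5 m, stereo lost beyond ~20 m, Google’s chosen default of 3.5 m); the helper name is hypothetical:

```python
# Illustrative constants from the figures in Pullen's talk.
MIN_UI_DISTANCE_M = 0.5      # closer invades personal space / goes cross-eyed
MAX_UI_DISTANCE_M = 20.0     # beyond this the stereo effect is lost
DEFAULT_UI_DISTANCE_M = 3.5  # the distance Google settled on for its apps

def clamp_ui_distance(distance_m: float = DEFAULT_UI_DISTANCE_M) -> float:
    """Keep a UI canvas within the usable depth range."""
    return max(MIN_UI_DISTANCE_M, min(distance_m, MAX_UI_DISTANCE_M))

print(clamp_ui_distance(0.3))   # 0.5  -- pushed back out of personal space
print(clamp_ui_distance(40.0))  # 20.0 -- pulled in so stereo depth survives
```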

Using flat-color objects can be problematic, as the eye can’t easily resolve them, especially as they cross over each other, even at different z-depths. Adding noise textures and/or shadows helps keep the various shapes visually separated.

What about text? Due to the lower resolution of mobile device screens, Google recommends avoiding narrow or thin fonts, and even serif fonts, whose detail will be lost. Font sizes generally have to be quite big as well. “Make them huge,” says Pullen. “It’s going to be a lot more comfortable for everyone.” How big is big? For a device like Google’s upcoming phone, with 13 pixels per degree, they calculate font size as 0.027 × distance, where distance is the z distance from the viewer to where the font sits in 3D space.
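The formula is a straight multiplication, so it is easy to sanity-check. A sketch assuming the 0.027 multiplier quoted for a ~13 pixels-per-degree display; the function name is illustrative:

```python
# 0.027 is the multiplier from the talk for a ~13 pixels-per-degree screen.
FONT_SCALE = 0.027  # font height as a fraction of z distance

def min_font_height_m(distance_m: float) -> float:
    """Comfortable minimum font height (meters) for UI at a given z distance."""
    return FONT_SCALE * distance_m

# At the 3.5 m UI distance Google favors, this gives glyphs roughly 9.5 cm tall.
print(min_font_height_m(3.5))
```

Because the scale is linear in distance, text placed twice as far away simply needs to be twice as tall to stay legible.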

As far as interacting with the UI goes, the team at Google currently believes that offering multiple options is critical. They found that users try to operate the UI in a variety of ways, from swiping to tapping to clicking and dragging, since the technology is new and users don’t know exactly what to do. Pullen feels supporting all of these methods is probably best until the technology matures and users grow more comfortable with it. Think back to the learning process users went through with the first touch screens on Apple’s iPhone.

That being said, one thing Apple did was “force” its way of interacting with the phone, which became the standard, rather than waiting for one interaction method to emerge as primary. Apple did what it felt was best, and many of its UI elements became de facto standards in the mobile scene.

The environment surrounding the UI is obviously an important aspect of the experience. Pullen recommends against empty voids, including things like graded cycs. Relatedly, completely dead environments with no movement feel false. At the very least, adding 3D particles to the scene (which can be done fairly easily in the engines) helps provide a sense of depth in which to place the UI within the world.


In general, the upcoming Google apps use “real world” environments for their UI experiences. Since these apps are targeted at a wide variety of users, the environments need to be carefully considered. Pullen related one problematic example: an undersea UI where one is completely submerged. Some users, presumably those with a fear of water or who couldn’t swim, were incredibly uncomfortable, some to the point where they couldn’t breathe. Obviously, if you’re developing an app for swimmers or divers this would be acceptable. But it’s a case in point that the immersive nature (no pun intended) of VR can be problematic for some users and not others. Choose the environment carefully.

Navigation is done using the motion sensing controller and gestural actions.

Save high-detail textures for objects close to the viewer, and don’t waste memory or processing power on faraway objects. It’s also important to experiment with the resolution of the textures mapped onto objects. Pullen used the YouTube app as an example: when selecting clips, tiles display image textures representing the movies, often a logo or a movie-poster-style tile. They found that, due to resampling of high-resolution images and other issues, a texture at less than screen resolution actually worked best. The bottom line: don’t simply assume that larger and higher-res is better.

Google Daydream is shipping in November and is available for preorder now.