VR movement
So, despite my best efforts, the spare time I’ve had for experimenting with the Rift SDK has been fairly limited. I’m more convinced than ever, though, that there is a lot of amazing potential there. There’s a lot of cynicism, and rightly so. Many of the same problems that were there in previous iterations of VR are still present. There are plenty of good posts out there covering the most apparent ones (the motion sickness and nausea generated by lag, the resolution). I’m not so concerned about those. We used to have to hit 60Hz refresh rates on the dot, and with a lot less rendering power than we have now; hitting 90Hz is achievable with discipline. The screen resolution will, I’m sure, be addressed by future iterations of the devices. These are known-quantity problems.
The unknowns to be addressed come from the parts of the tech that are new. Control, the way the user interacts, is going to be the key. My concern here is that the old systems we used are just not ideal in a VR environment. Traditionally, we’ve been controlling avatars in a virtual world. We’ve had mostly free movement around that world, but there’s always been a clear disconnect – you’re controlling something other than yourself, and the screen shows you the view from their position. We’ve refined the control mechanisms so that feels natural, and trained ourselves to the point where it feels a lot less like we’re rotating an avatar and more like it’s us. “Look right” becomes a quick flick of the mouse, even though our head doesn’t actually move. The avatar becomes an extension of ourselves. That ability to make the control mechanism effectively disappear is key, in the same way that it’s easier to drive when gear changing is instinctive and done without thinking: it frees you to focus on the higher-level functions.
If you’ve ever watched someone new to games playing a first person or third person game, you’ll know the effect: someone who has to look down at the controller to remind themselves which joystick to use. You say “look right”, and they have to stop moving their character before they change camera angle. So many of our games are designed to take advantage of the affordances already learned by gamers. It doesn’t matter that they haven’t played your game before, if they’ve played another game in a similar style. More crucially, when designers get it wrong, that failure permeates the whole game. When someone complains that moving your character around feels like driving a tank, that niggle interferes with everything they do in your game. For all of the great things about GTA 4, I struggled with Niko’s movement. I’d miss a door by just a fraction, and then his minimum turning circle meant that I’d end up bashing into the other side of the door frame instead. You get used to it and learn to compensate, sure, but it’s a problem that needs to be overcome.
Bringing it back to VR, the change in viewpoint brings the control issues into sharp relief. The immediate and all-encompassing nature of the viewpoint makes it *you* that’s in the game. You’re not controlling what you see on that screen ‘over there’, you are controlling you. So when the controls feel unintuitive, it’s *you* that feels sluggish and unresponsive. That’s why it’s important to get it right, and I’ve seen a variety of problems with the Rift demos so far.
Assuming that you’re using a joystick or keyboard controls, you effectively have a 2-axis input controlling movement. That’s what we’ve always had with first person games. But in the past there was a fundamental restriction in place: ‘forward’ was always ‘the direction you’re facing’. There was no option to look to the right while still running forward, unless you were playing a mech or tank game, which often mapped view direction to an extra input axis. The natural mouse/keyboard or joypad controls we’re used to insisted that you always look rigidly forward, the same direction your gun was facing. That extra axis (where the body of your avatar/vehicle was pointing in a different direction to the view direction) was discouraged, because people struggled to manage their awareness of the two directions (look and move). Skilled players learned to compensate naturally for this: while running forward, to look right you’d turn and simultaneously start strafing left. But you knew exactly where you were looking and moving at all times, because the restrictions were clear.
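To make that coupling concrete, here’s a minimal sketch, assuming a simple yaw-based character controller (the names and the flat 2D maths are illustrative, not taken from any particular engine): a single view yaw drives both what you see and where the stick takes you.

```cpp
#include <cmath>

// Traditional FPS coupling: one yaw angle drives both the camera and the
// movement basis, so pushing 'forward' always moves in the view direction.
struct Vec2 { float x, z; };

// inputX / inputY are the -1..1 strafe and forward axes; viewYaw in radians.
Vec2 moveDirection(float viewYaw, float inputX, float inputY)
{
    // Rotate the stick input into world space using the view yaw alone.
    Vec2 forward { std::sin(viewYaw), std::cos(viewYaw) };
    Vec2 right   { std::cos(viewYaw), -std::sin(viewYaw) };
    return { inputX * right.x + inputY * forward.x,
             inputX * right.z + inputY * forward.z };
    // The camera uses exactly the same viewYaw, so look and move never separate.
}
```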
In VR, that restriction no longer makes sense. Instead we have different restrictions. Even if standing, the cable to the headset restricts your turning. If sitting, then there is an obvious ‘forward’, which is the direction your torso is pointing. You move your head to the left and right, but forward doesn’t change. However, the real problem is that the Rift headset, at least, isn’t really anchored to that ‘forward’ direction. The headset knows which direction it’s facing, but it doesn’t know which direction the user considers to be ‘forward’.
Instead, all of the demos I’ve seen try to replicate the same restriction that traditional FPS controls have: ‘forward’ is ‘where you are looking’. Push forward on your directional axis while looking to the right, and you’ll move to the right. Which seems sensible, until you consider the need for complete freedom of movement around the world. You have a limited arc of head movement, so how do you turn completely around so that forward is south instead of north? You’re not going to twist your head 180 degrees round and press forward. So the demos map another rotation axis on top of your head movement. If you start facing north in the world, turning your head 90 degrees right means that ‘forward’ motion moves you east. Use the rotation control to turn your character 90 degrees right, and you’re moving south. Return your head to centre, though, and you’re moving east again. So even though ‘forward’ always moves you where you’re looking, you’re still having to manage awareness of that extra rotation. Your head orientation is being added on top of a base avatar orientation, and that input-controlled axis is constantly fighting against the headset’s rotational axis. You can be turning your avatar right and rotating your head left just to keep the view pointing in the same direction.
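Concretely, assuming the same kind of yaw-based movement basis as the sketch above (again, the names are illustrative rather than taken from any SDK), the composition these demos appear to use boils down to this:

```cpp
// Composition used by the demos described above: head yaw is stacked on top
// of an input-controlled avatar yaw, and both view and movement follow the
// combined angle.
float combinedYaw(float avatarYaw, float headYaw)
{
    // Start facing north (avatarYaw = 0, head centred): forward moves north.
    // Turn your head 90 degrees right: forward now moves east.
    // Turn the avatar 90 degrees right as well: forward now moves south.
    // Recentre your head: forward moves east again.
    return avatarYaw + headYaw;
}

// Both of these feed off the same combined angle, which is why glancing
// sideways while holding 'forward' drags your movement sideways too, and why
// the two rotation sources can end up fighting each other.
float viewYaw(float avatarYaw, float headYaw) { return combinedYaw(avatarYaw, headYaw); }
float moveYaw(float avatarYaw, float headYaw) { return combinedYaw(avatarYaw, headYaw); }
```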
Having experimented, I think trying to cling onto that old input style is a mistake. It feels horrible when you’ve been using head orientation as the primary means of choosing your direction, craning your head around to the right, only to find that you actually need to turn more than 90 degrees in either direction, at which point you have to fall back on the directional turn controls. Worse, when you’re running forward at speed, you have to keep your viewpoint locked directly ahead, because if you try to glance left or right you’ll start running in that direction. You’ve lost one of the big plus points of VR: freedom of motion in your viewpoint.
Keeping complete freedom of viewpoint movement at all times is, I think, key to making the user comfortable. We used to be able to make the controls avatar-centric, but now we need to be aware of the range of motion the user actually has, and give them a natural way of expressing the full range of movement they need, without discomfort.
Instead of assuming that forward is where you’re looking, you need to build in some awareness of the user’s torso. The only natural way I can think of to do that is to add a quick and simple calibration point. Ask the user to look directly forward, and then press a button. From then on, that direction is the reference centre, and should align with the direction of motion of the avatar. Forward means moving in that direction, regardless of where the headset is pointing. The same goes for strafing right and left. Ideally, like the mech and tank games that already have to deal with this, you’d have an in-view indication of where forward is relative to your viewpoint. That might be part of your avatar visible from your viewpoint (a gun or arms, say), or a HUD indicator.
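As a sketch of how that might look, again using the same illustrative yaw-based controller as the earlier snippets (none of these names come from the Rift SDK): capture the headset yaw once when the user presses the calibration button, drive movement from the avatar’s own yaw only, and let the view add the head offset freely on top.

```cpp
#include <cmath>

// Sketch of the calibration idea: the raw headset yaw is captured once while
// the user looks straight ahead, and that captured value stands in for the
// torso's 'forward'. Movement follows the avatar's own yaw; the view adds the
// head offset on top, so glancing around never changes where 'forward' goes.
struct Vec2 { float x, z; };

struct Calibration {
    float referenceHeadYaw = 0.0f; // raw headset yaw at calibration time
};

// Call when the user presses the 'recentre' button while facing forward.
void calibrate(Calibration& cal, float rawHeadYaw)
{
    cal.referenceHeadYaw = rawHeadYaw;
}

// The view gets full freedom: avatar yaw plus the head's offset from centre.
float viewYaw(const Calibration& cal, float avatarYaw, float rawHeadYaw)
{
    return avatarYaw + (rawHeadYaw - cal.referenceHeadYaw);
}

// Movement ignores the head entirely: 'forward' is always the avatar's yaw.
Vec2 moveDirection(float avatarYaw, float inputX, float inputY)
{
    Vec2 forward { std::sin(avatarYaw), std::cos(avatarYaw) };
    Vec2 right   { std::cos(avatarYaw), -std::sin(avatarYaw) };
    return { inputX * right.x + inputY * forward.x,
             inputX * right.z + inputY * forward.z };
}
```

The important property is that the movement calculation never sees the head yaw at all, so glancing around while running can’t change your course, while the view keeps its complete freedom.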