So we’ve all been there… by ‘we’ I mean developers and testers. The tester finds a bug and the developer says it doesn’t behave like that on their machine. So then you puzzle out what’s different between the two machines; well, that’s how it should go…
Now take that situation and increase the variables by at least a factor of 10. That’s what happens when you bring the Oculus Rift into the equation.
So I’m retesting a bug based around the position of the content. I briefly touched on depth-of-field issues in a previous post, but let’s get into it more. Where you position content in relation to the user is critical. If content is too close to the user, they will feel claustrophobic. If it’s too far away, they won’t be able to experience it as intended. And if it’s just a little too close, accessing content below the user’s resting eye level becomes very uncomfortable.
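To make that concrete, here’s a minimal sketch of the kind of comfort check we’re talking about. The distance thresholds are illustrative assumptions for the example, not values from the Oculus SDK or documentation:

```cpp
#include <cmath>
#include <cstdio>

// Illustrative comfort-zone bounds in metres -- assumed values,
// not taken from any Oculus guideline.
constexpr float kTooCloseMetres = 0.75f; // closer feels claustrophobic
constexpr float kTooFarMetres   = 10.0f; // farther and the content is lost

// Classify a piece of content by its distance from the user's eyes.
const char* ClassifyContentDistance(float x, float y, float z) {
    const float distance = std::sqrt(x * x + y * y + z * z);
    if (distance < kTooCloseMetres) return "too close: claustrophobic";
    if (distance > kTooFarMetres)   return "too far: not experienced as intended";
    return "within the comfortable range";
}

int main() {
    // A panel half a metre in front of the user, slightly below eye level.
    std::printf("%s\n", ClassifyContentDistance(0.0f, -0.2f, 0.5f));
}
```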
In today’s situation the dev came around to watch me use the Oculus Rift as I retested this bug. Not only did we realise it was still a bug, but we also realised the difference in how we each experienced the same content. Between the way the headset was calibrated and the positioning and angle of the motion tracker, we discovered there was a big difference between our ‘at rest’ eye levels.
Through this bug we discovered the need to implement not only a calibration process for users, but crucially a calibrated setup in the office. We need to be sure that both devs and testers are experiencing the same thing. Seeing the same thing is not enough!
We need to know that a turn of the head will produce the same result at either the dev’s or the tester’s workstation. Now obviously it’s unlikely that the separate workstations can be set up *exactly* the same, but realising the issue is key here!
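As a rough illustration of what that office calibration might record, here’s a hypothetical sketch comparing two workstation profiles within a tolerance. The fields and values are assumptions for the example, not an actual Oculus calibration format:

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical per-workstation calibration profile -- these fields are
// assumptions for illustration, not a real Oculus SDK structure.
struct WorkstationCalibration {
    float eyeHeightMetres;     // user's resting eye level above the floor
    float trackerPitchDegrees; // tilt angle of the motion tracker
    float trackerHeightMetres; // height the tracker is mounted at
};

// Two setups "match" if every field agrees within a small tolerance,
// since separate workstations can never be set up *exactly* the same.
bool SetupsMatch(const WorkstationCalibration& a,
                 const WorkstationCalibration& b) {
    constexpr float kTolerance = 0.05f; // assumed 5 cm / 2 degree leeway
    return std::fabs(a.eyeHeightMetres - b.eyeHeightMetres) < kTolerance &&
           std::fabs(a.trackerPitchDegrees - b.trackerPitchDegrees) < 2.0f &&
           std::fabs(a.trackerHeightMetres - b.trackerHeightMetres) < kTolerance;
}

int main() {
    WorkstationCalibration dev    = {1.25f, -10.0f, 1.40f};
    WorkstationCalibration tester = {1.10f,  -4.0f, 1.20f};
    std::printf(SetupsMatch(dev, tester)
                    ? "Setups match: bug reports should reproduce\n"
                    : "Setups differ: recalibrate before comparing!\n");
}
```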
This realisation gives us the opportunity to learn more about how to give users the highest-quality VR experience they can get!