Tag Archives: VR

The long quiet followed by a Hard Shake

So I’ve been quiet for quite a while. The blog article ideas have been mounting up, and I’ve not been writing them. A lot of the time I don’t actually enjoy the writing process. I don’t know if you’re meant to enjoy it, but I find it hard to sit down and write out the ideas that are piling up. I have a bit more free time at the moment, so I should be able to force myself to write some out. If anyone has ideas on how I can enjoy the process more, I’d love to hear them.

Now onto actual testing stuff.

A lot of us have heard of the testing technique of galumphing, which originated with James Bach.

It’s a technique I used before I knew I was using it, as is the case for many of us. We don’t necessarily know we do this thing, but when someone can explicitly describe the action in a way we can understand, that’s when we get the “oh yeah” moment. We’ve often galumphed our way through a site without realising we’re doing it. This is the value of someone like James: someone who can find a descriptive term for something that was previously tacit knowledge.

What happens when galumphing doesn’t just describe how you might click around a site, but actively describes how you might traverse a VR application?

Well, the best techniques are the ones that remain relevant to situations beyond those they were written for. Did James actively think about VR testing when he wrote about galumphing?

Probably not, but he didn’t need to. He understood how unintended movements (physical or control-system based) apply to a variety of situations.

Introducing the Hard Shake

So during my VR testing I have been (consciously and unconsciously) carrying out lots of galumphing. This has happened to the extent that I feel certain movements within that approach deserve to be named.

So for the first of these techniques, I name the Hard Shake.

The naming of this happened quite naturally. A tester I’d recruited talked about an issue they’d found in the app, and demonstrated the movement required to trigger it. I then asked whether they could recreate the issue without a Hard Shake.

This wasn’t a term I’d used before, but it instantly felt right. It described a movement I’d carried out numerous times before, often as a way to transition between steps, though it can be used at any point. It is very useful for uncovering performance issues, and unintended effects of the gaze being shaken in that fashion. Remember that this is something that can be used at any time within the headset. I mentioned transitions, but even at times when the user may only be receiving information, it is useful to carry out a Hard Shake and see the results.
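To make the performance side of this concrete, here’s a minimal sketch (in Python, with invented numbers and a 75 Hz refresh assumption) of how you might flag dropped frames in a frame-time log captured while performing a Hard Shake:

```python
# Hypothetical sketch: flag frames that blew the render budget during a
# Hard Shake. Frame times are in milliseconds; the budget assumes a
# 75 Hz headset refresh rate - adjust for your hardware.
FRAME_BUDGET_MS = 1000 / 75  # ~13.3 ms per frame at 75 Hz

def dropped_frames(frame_times_ms, budget_ms=FRAME_BUDGET_MS):
    """Return the indices of frames that exceeded the budget."""
    return [i for i, t in enumerate(frame_times_ms) if t > budget_ms]

# Example log captured during a Hard Shake (values invented):
log = [12.1, 12.8, 13.0, 25.4, 31.0, 12.9]
print(dropped_frames(log))  # [3, 4] - two frames over budget mid-shake
```

A clean run reports an empty list; spikes that only appear while shaking are exactly the kind of issue the technique is designed to surface.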

So here we go, the Hard Shake. It seems simple. I’ve talked to other testers who have done this naturally, without thinking. However, when we can explicitly talk about and name techniques, it gives us a platform on which other knowledge can be built. This is harder when the knowledge stays inside our heads and is exercised in a tacit manner.

Part of this issue is connected with how instinctual testers work. This is something I’ll cover in a future post.


Testing Room-scale VR

I tested the HTC Vive headset recently which opens up a whole new realm of possibilities. This obviously means more ways in which issues can manifest.

The HTC Vive is geared towards a Room-scale VR experience, with the user generally standing during use. This means more things need to be taken into account by the teams developing and testing these applications, which are mostly games at this point.

Now although Oculus Rift have stated that their headset is capable of Room-scale VR, it is geared towards a sat-down experience. Understanding these distinct experiences is necessary before we can start to think about how to test them.

Sat-down experiences remove the issue of a user bumping into their environment. It would still be possible in some particularly confined setups, but those are edge cases in my opinion (though not to be dismissed entirely). Sat-down experiences also remove the worry of becoming entangled in the headset cable, and the possibility of a user tripping over.

Whilst all the above is true, sat down experiences also lose an element of immersion. Putting the HTC Vive headset on and experiencing the environment for the first time is an incredible feeling. Something as simple as kneeling down to pick an object up feels special. The controllers for the Vive also add to the experience, but they also provide another area for things to go wrong.

Applications being developed and tested for Room-scale VR have to take into account the variety of spaces people will have. Some developers have stated that you don’t need a huge amount of room to experience Room-scale VR; you need just enough room to stand up and stretch out in all directions, as a video from the makers of Hover Junkers explains.

Whilst the makers of Hover Junkers have been very attentive to room size issues and the range of rooms users will have, we cannot assume everyone else will be. We have to be aware that room size issues will come into play. Using boxes/crates to quickly change the test space you have is going to be necessary. It’s all well and good having the great test space you’ve set up in the office, but how will that translate to the student dorm room?

Thinking about testing Room-scale VR leads me to think that we need a new heuristic to aid this testing.

Immersion and presence – Why are they important?

Testing is about gaining knowledge. To understand how to test VR effectively, we need to understand VR. In my last post I referenced a paper by Daniel R. Mestre; in this post I will go into what I’ve learnt from it.

So how do immersion and presence work together in the VR experience?

Presence is defined as the sensation of being in the virtual environment

We can think of presence as being a psychological quality. It is our perception of existing inside the virtual environment; it is subjective.

Immersion is capable of producing a sensation of presence

(IJsselsteijn & Riva, 2003)

Let’s think about this connection. Presence is the subjective feeling of being within a virtual environment, and immersion provides a vehicle for this feeling.

“The term immersion thus stands for what the technology delivers from an objective point of view”

(Mestre, 2005)

The connection should be clearer now. Presence is a subjective term, covering how a user feels about the virtual environment from a psychological point of view. Immersion covers what the technology can objectively deliver to give the user a strong feeling of presence within a virtual environment.

Now who is best placed to “measure” immersion levels?

Well, obviously I’m going to say testers. We’ve been doing something like this for years, but calling it user experience. Now I’m not saying testing VR is just UX testing, but it is about taking some of those principles and applying them to VR.

We cannot fully control how present a user is within virtual environments, but we can control how immersive a virtual environment can be. If we create an experience which allows complete immersion, then a user is more likely to feel present there.

 


References are from the paper “Immersion and Presence” by Daniel R. Mestre:

http://www.ism.univmed.fr/mestre/projects/virtual%20reality/Pres_2005.pdf

Presence vs. Immersion

This article was brought to my attention. It talks about the concepts of presence vs. immersion, and how they relate to and cover different aspects of the VR experience.

I’m going to dive into this over the next few days and see how this knowledge can help improve my testing approach.

I’ll be back next week with an article covering what I learn.

Don’t worry, it’s only minor – Bug severity in Oculus Rift testing

Bug severity always raises different opinions. We’ve all submitted a bug and seen it edited down to a lower severity. Severity ratings become a loose guide to the nature of a bug. They can be useful, but a 1–5 rating does not convey enough information on its own.


There are bugs that are a lower priority to fix, but there are no minor bugs when testing in VR.

*ANY* bug can break immersion.

Our aim is to give users the most immersive and seamless experience possible.

A bug may be minor in nature, but its knock-on effects are never minor. A user may recover immersion quicker from a less critical bug but that does not make it minor.

Immersion is totally possible, but only if we make it the smooth experience it needs to be.

 

Doesn’t look like that on my computer – The Oculus Rift version

So we’ve all been there… by ‘we’ I mean developers and testers. The tester finds a bug and the developer says it doesn’t behave like that on their computer. So then you puzzle out what is different between the two machines; well, that’s how it should go…

Now take that situation and increase the variables by at least a factor of 10. That is what happens when you bring Oculus Rift into the equation.

So I’m retesting a bug based around the position of the content. I briefly touched on depth-of-field issues in a previous post, but let’s get into it more. Where you position content in relation to the user is critical. If content is too close to the user, they will feel claustrophobic. If it’s too far, they won’t be able to experience it as intended. If it’s a little too close, accessing content below the user’s resting eye level becomes very uncomfortable.
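Those distance bands can be captured as a simple heuristic check. The range below (roughly 0.75 m to 3.5 m) reflects published Oculus best-practice guidance, but treat the exact numbers as an assumption to verify against your own headset and users:

```python
# A minimal sketch of a distance heuristic for placing UI content.
# The comfortable range is an assumption based on published Oculus
# best-practice guidance; tune it for your hardware and audience.
NEAR_LIMIT_M = 0.75  # closer than this feels claustrophobic
FAR_LIMIT_M = 3.5    # beyond this, stereo depth cues largely flatten out

def placement_verdict(distance_m):
    """Classify a content distance (metres from the user's eyes)."""
    if distance_m < NEAR_LIMIT_M:
        return "too close"
    if distance_m > FAR_LIMIT_M:
        return "too far"
    return "comfortable"

for d in (0.4, 1.5, 6.0):
    print(d, placement_verdict(d))
```

Even a crude check like this gives dev and tester a shared vocabulary for why a given placement feels wrong.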

In today’s situation the dev came around to watch me using Oculus Rift as I retested this bug. Not only did we realise it was still a bug, but we also realised the difference in how we experienced the same content. Between the way the headset was calibrated and the positioning and angle of the motion tracker, we realised there was a big difference between our ‘at rest’ eye levels.

Through this bug we discovered the need to implement not only a calibration process for users, but crucially a calibrated setup in the office. We need to be sure that both devs and testers are experiencing the same thing. Seeing the same thing is not enough!

We need to know that a turn of the head will behave the same at either the dev’s or the tester’s workstation. Now obviously it’s unlikely that the separate workstations can be set up *exactly* the same, but realising the issue is key here!
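One way to operationalise “close enough” is to record each workstation’s ‘at rest’ pose and compare it within tolerances. The field names and tolerance values below are invented for illustration:

```python
# Hypothetical sketch: compare the 'at rest' head pose recorded at two
# workstations, so dev and tester know their setups roughly match.
# Keys and tolerances are invented for illustration.
def setups_match(dev_pose, tester_pose, tolerances):
    """Each pose is a dict like {'eye_height_m': ..., 'pitch_deg': ...}."""
    return all(
        abs(dev_pose[key] - tester_pose[key]) <= tol
        for key, tol in tolerances.items()
    )

dev = {"eye_height_m": 1.22, "pitch_deg": -2.0}
tester = {"eye_height_m": 1.30, "pitch_deg": 4.5}
tolerances = {"eye_height_m": 0.05, "pitch_deg": 3.0}
print(setups_match(dev, tester, tolerances))  # False: both values drift too far
```

A check like this could run at the start of a test session, so a “doesn’t look like that on my machine” conversation starts from measured setups rather than guesswork.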

The realisation gives us the opportunity to learn more about how to give the user the highest-quality VR experience they can get!

First thoughts – Testing with Oculus Rift

When I put on the headset for the first time, the immediate brightness instantly triggered my ‘design alarm’. Bad contrast and overly bright interfaces are among my bugbears, and it became apparent that they were going to be even more of an issue inside the headset.

It may seem obvious that overly bright interfaces would be worse in VR, but if it’s that obvious, why does it still happen on websites?

I noticed that line weights were dramatically reduced when viewed in Oculus Rift rather than on a monitor. This issue connects to contrast: if your copy renders much thinner in a headset, it’s going to be very difficult to distinguish the copy from its surround.

I’ve been known to fuss a lot about contrast issues, but that’s because I believe it’s very important.

I fully support these people.

There are huge numbers of people with sight problems, both diagnosed and undiagnosed. If you present content that requires concerted effort from the user to read it, then you are alienating a big percentage of your possible audience.

Now extend this idea to VR.

If you create a product that alienates a fair percentage of your audience; they don’t decide to use another VR system. You’re not running a website where you may lose them to a competitor.

Alienating someone means they will most likely be lost to the world of VR. When you’re trying to present ‘the next big thing’ you need each and every person to go ‘WOW’.

If you make one person go ‘WOW’ then they tell others, and obviously the converse is true. You don’t lose just one person to VR when they have a bad experience; you potentially lose more.

Good testing isn’t simply about pointing out issues with a user’s experience. It’s easy to say something has bad contrast and could be hard to read. It’s harder to see the knock-on effects that issue can cause. That’s where good testing comes in: the ability to see the problem and the *potential* problems created by it.