Tag Archives: testing

The language we use

A recent debate on Twitter brought an interesting idea to light: the idea that the language used by testers can be separated from ‘testing’. The argument goes,

I don’t want to get hung up on language, I just want to concentrate on testing.

Taken at face value, it’s a reasonable view. Let’s cut the talking; it’s all about the testing.

I don’t think this is feasible. The language we use as testers is central to what we do and shapes the testing itself.

As is usual in my posts, let’s take an example from classic sociology to illustrate this point.

Becker discusses labelling theory in his book, Outsiders: Studies in the Sociology of Deviance. He observes that at one time or another most of us break the law, but only some of us are ever labelled as criminals. Once people are labelled as criminals, this not only changes the way that society treats them, but also how they see and treat themselves.

So let’s apply some of this to a common issue within our field. Use of testing tools. Now already you can see I’ve started the conversation by calling them testing tools. The language used informs you as to my view of the matter.

These tools are commonly referred to as automated testing. Now many of us have interacted with manager-type people who may say something like,

Can’t we just automate all our testing?

In the head of the manager there is a picture that looks like this.

[Image: a magic automation robot doing all the testing]

Labelling the use of testing tools as ‘automated testing’ has knock-on effects.

Those within testing understand that automated testing tools aren’t a magic bullet. But the language gives the impression that an automated procedure is an easy procedure. It’s an understandable reaction: there are many fields in which automating a procedure has made things very easy. The same isn’t true within our industry.

Using automation tools doesn’t make things easier, it’s just a different kind of difficult.

Now let’s think about what would happen if the term ‘automated testing’ had never been used. The manager wouldn’t have in mind the magic automation robot finding every bug. The picture in mind would be that of any craftsperson using their tools.

The language we use has repercussions in many ways, and for a species that uses language as its primary means of communication, it isn’t something we can easily separate from anything else we do. It is inherent.

The way we talk about testing is part of our testing.

Immersion and presence – Why are they important?

Testing is about gaining knowledge. To understand how to test VR effectively, we need to understand VR. In my last post I referenced a paper by Daniel R. Mestre; in this post I will go into what I’ve learnt from it.

So how do immersion and presence work together in the VR experience?

Presence is defined as the sensation of being in the virtual environment

We can think of presence as a psychological quality: our perception of existing inside the virtual environment. It is subjective.

“Immersion is capable of producing a sensation of presence”

(Ijsselsteijn & Riva, 2003)

Let’s think about this connection. Presence is the subjective feeling of being within a virtual environment, and immersion provides a vehicle for this feeling.

“The term immersion thus stands for what the technology delivers from an objective point of view”

(Mestre, 2005)

The connection should be clearer now. Presence is a subjective term, covering how a user feels about the virtual environment from a psychological point of view. Immersion covers what the technology can objectively deliver to give the user a strong feeling of presence within a virtual environment.

Now who is best placed to “measure” immersion levels?

Well, obviously I’m going to say testers. We’ve been doing something like this for years, but calling it user experience testing. Now I’m not saying testing VR is just UX testing, but it is about taking some of those principles and applying them to VR.

We cannot fully control how present a user feels within a virtual environment, but we can control how immersive that environment is. If we create an experience that allows complete immersion, the user is more likely to feel present there.

 


References from the paper “Immersion and Presence” by Daniel R. Mestre:

http://www.ism.univmed.fr/mestre/projects/virtual%20reality/Pres_2005.pdf

Don’t worry, it’s only minor – Bug severity in Oculus Rift testing

Bug severity always raises different opinions. We’ve all submitted a bug and seen it edited down to a lower severity. Severity ratings become a loose guide to the nature of a bug. They can be useful, but a 1-5 rating does not convey enough information on its own.


There are bugs that are a lower priority to fix, but there are no minor bugs when testing in VR.

*ANY* bug can break immersion.

Our aim is to give users the most immersive and seamless experience possible.

A bug may be minor in nature, but its knock-on effects are never minor. A user may recover immersion more quickly from a less critical bug, but that does not make it minor.

Full immersion is possible, but only if we make the experience as smooth as it needs to be.

 

Doesn’t look like that on my computer – The Oculus Rift version

So we’ve all been there… and by ‘we’ I mean developers and testers. The tester finds a bug and the developer says it doesn’t behave like that on their computer. Then you puzzle out what is different between the two machines. Well, that’s how it should go…

Now take that situation and increase the variables by at least a factor of 10. That is what happens when you bring Oculus Rift into the equation.

So I’m retesting a bug based around the position of content. I briefly touched on depth-of-field issues in a previous post, but let’s get into it more. Where you position content in relation to the user is critical. If content is too close, the user will feel claustrophobic. If it’s too far, they won’t be able to experience it as intended. And if it’s just slightly too close, accessing content below the user’s resting eye level becomes very uncomfortable.
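That ‘too close / too far’ judgement can even be captured as a rough heuristic for a review checklist. This is only a sketch: the function name and the threshold distances are placeholders I’ve invented for illustration, not figures from any VR guideline; real values would come from sitting in the headset and trying them.

```python
# Hypothetical comfort thresholds in metres -- placeholders, to be tuned
# by actually testing content placement inside the headset.
TOO_CLOSE_M = 0.5
TOO_FAR_M = 10.0

def classify_content_distance(distance_m: float) -> str:
    """Roughly classify how comfortable content placed at this distance feels."""
    if distance_m < TOO_CLOSE_M:
        return "too close"    # risks feeling claustrophobic
    if distance_m > TOO_FAR_M:
        return "too far"      # content can't be experienced as intended
    return "comfortable"
```

A check like this wouldn’t replace in-headset judgement; it would just flag content positions worth looking at first.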

In today’s situation the dev came around to watch me using the Oculus Rift as I retested this bug. Not only did we realise it was still a bug, we also realised how differently we each experienced the same content. Between the way the headset was calibrated and the positioning and angle of the motion tracker, there was a big difference between our ‘at rest’ eye levels.

Through this bug we discovered the need to implement not only a calibration process for users, but crucially a calibrated setup in the office. We need to be sure that both devs and testers are experiencing the same thing. Seeing the same thing is not enough!

We need to know that a turn of the head will behave the same at either the dev’s or the tester’s workstation. Now obviously it’s unlikely that the separate workstations can be set up *exactly* the same, but recognising the issue is key here!
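As a quick illustration, the office calibration check could be as simple as comparing the measured ‘at rest’ eye levels at each workstation. The function and the 2 cm tolerance below are hypothetical, sketched only to show the kind of sanity check we mean:

```python
def eye_levels_match(dev_eye_m: float, tester_eye_m: float,
                     tolerance_m: float = 0.02) -> bool:
    """True if the two workstations' calibrated resting eye levels agree
    within a (hypothetical) tolerance, here 2 cm."""
    return abs(dev_eye_m - tester_eye_m) <= tolerance_m
```

If the check fails, you recalibrate before arguing about whether the bug reproduces.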

The realisation gives us the opportunity to learn more, and how to give the user the highest quality VR experience they can get!

First thoughts – Testing with Oculus Rift

When I put on the headset for the first time, the immediate brightness instantly triggered my ‘design alarm’. Bad contrast and overly bright interfaces are a particular bugbear of mine, and it became apparent they were going to be even more of an issue inside the headset.

It may seem obvious that overly bright interfaces would be worse in VR, but if it’s that obvious, why does it still happen on websites?

I noticed that line weights were dramatically reduced when viewed in the Oculus Rift rather than on a monitor. This issue connects to contrast: if your copy renders much thinner in a headset, it becomes very difficult to maintain contrast between the copy and its surround.
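Contrast can be checked objectively as well as by eye. Here’s a minimal sketch of the WCAG 2.x contrast-ratio calculation between two sRGB colours. One caveat: the WCAG thresholds were defined for flat screens, so inside a headset, where copy renders thinner, you would likely want to treat them as a lower bound rather than a pass mark.

```python
def _linearize(c: float) -> float:
    # Convert an sRGB channel (0-1) to linear light, per the WCAG 2.x definition.
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    """Relative luminance of an (r, g, b) colour with 0-255 channels."""
    r, g, b = (_linearize(v / 255) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """WCAG contrast ratio between two colours: 1.0 (none) up to 21.0 (max)."""
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)
```

For example, black text on a white background gives the maximum ratio of 21:1, while identical colours give 1:1.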

I’ve been known to fuss a lot about contrast issues, but that’s because I believe it’s very important.


There are huge numbers of people with sight problems, both diagnosed and undiagnosed. If you present content that requires concerted effort from the user to read, then you are alienating a big percentage of your possible audience.

Now extend this idea to VR.

If you create a product that alienates a fair percentage of your audience, they don’t simply decide to use another VR system. You’re not running a website, where you might lose them to a competitor.

Alienating someone means they will most likely be lost to the world of VR. When you’re trying to present ‘the next big thing’ you need each and every person to go ‘WOW’.

If you make one person go ‘WOW’, they tell others, and the converse is obviously true. You don’t lose just one person to VR when they have a bad experience; you potentially lose more.

Good testing isn’t simply about pointing out issues with a user’s experience. It’s easy to say something has bad contrast and could be hard to read. It’s harder to see the knock-on effects that issue can cause. That’s where good testing comes in: the ability to see the problem and the *potential* problems it creates.

 

Software testing with Oculus Rift

So recently I’ve had the opportunity to work with a company developing a media player for VR, most notably the Oculus Rift.

As a user, putting on the headset does throw you into another world. As a tester, that world is even more different. Not only are you coming to grips with what you’re seeing, but at the same time you’re questioning it all, breaking down what you see into its respective aspects.

Questioning things you need to think about that you never have before.

How do you take all your experience and apply it to this situation?

One of the first questions that came to me:

What’s more comfortable: looking up towards content, or looking down?

This may seem like a facile question, but it’s deceptively important. One of the main differences with using VR is the user’s ability to look around the virtual environment, and the environment needs to be designed so the user can do this in a comfortable way.

So you’re in the headset and you move your head up and down, using different rates of motion and different ranges. Then you look down far enough that your chin hits your chest. Now that is not a nice feeling! The sudden contact with your own body disturbs the immersion.

But it’s not as straightforward as looking up is nicer than down.

If the content above you is also too close, it can create a claustrophobic experience. Now that might be perfect within a specific section of a game, but it’s the last experience you want a user to have when selecting from a wall of video content, for example.
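To make the up-versus-down asymmetry concrete, here’s a sketch of a head-pitch comfort check. The angle limits are placeholders I’ve invented for illustration, not published ergonomics figures; the point is only that the downward limit is tighter, reflecting the chin-to-chest problem described above.

```python
# Hypothetical comfort band for head pitch, in degrees (up is positive).
MAX_UP_DEG = 40
MAX_DOWN_DEG = -25   # tighter than the up limit: chin hits chest first

def pitch_is_comfortable(pitch_deg: float) -> bool:
    """True if the head pitch falls inside the assumed comfort band."""
    return MAX_DOWN_DEG <= pitch_deg <= MAX_UP_DEG
```

In practice you’d derive the band the same way we’ve been working here: by moving your head around in the headset and noting where it stops being pleasant.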

I am going to write more articles on various aspects of testing with Oculus Rift. I also welcome any other testers, or developers with an interest in testing to contact me. Let’s talk about our experiences and build new heuristics that can form a good foundation to help testing with VR develop.

Going freelance

So no-one ever tells you all the stuff you have to do at the start when going freelance!

You always hear the cool stories and how someone’s life/work balance is much better now etc… etc…

So it’s quite a lot to take in but I’m making the move, so if anyone does want to hire me then get in touch and we’ll see how I can help!

Check the about tab for contact details.