Articles tagged ‘Usability-Test’

Study by Usabilla: How to build trust in websites within 30 seconds

29 September 2010 · Christian Becker · No comments

1. Press boosts credibility more than a testimonial

Users seem to value the opinion of the media about Mint more than testimonials by their peers. Take a closer look at the results

2. You can earn trust for free

We all like to try before we buy. An option to try a product or service for free seems to influence credibility as well. See where people clicked

3. Quote, quote, and quote

Mint uses quotes from different sources to boost its credibility. See which sources worked best

4. It’s all in the details

Even the smallest visual details got the attention of participants in the test. Zoom in on the details


Usability testing: Setting motivating tasks

Usability test tasks are the beating heart of a usability test. These tasks determine the parts of a system that test participants will see and interact with. Usability test tasks are so critical that some people argue they are even more important than the number of participants you use: it seems that how many tasks participants try, not the number of test participants, is the critical factor for finding problems in a usability test.

But for test tasks to uncover usability problems, usability test participants need to be motivated: they need to believe that the tasks are realistic and they must want to carry them out. So how do we create test tasks that go beyond the mundane and engage participants?

To help our discussion, I’m going to classify usability test tasks into 6 different categories. You don’t need to create tasks in each of these categories — you simply need to review the categories and decide which kind of task will best motivate your participants.

The 6 categories are:

  • Scavenger hunt.
  • The Reverse Scavenger hunt.
  • Self-generated tasks.
  • Part self-generated.
  • ‘Skin in the game’ tasks.
  • Troubleshooting tasks.

Let’s look at each of these in a bit more depth.

Scavenger hunt

This type of task is a great way for you to find out if users can complete tasks with your system. With a scavenger hunt task, you ask users to do something that has one clear, ideal answer: an example of this kind of task (for a web site that sells luggage) might be: “You’re travelling abroad next month and you’re looking for a good-sized bag that you can take on as hand luggage. You want the bag to be as big as possible while still meeting the airline’s maximum luggage dimensions (56cm x 45cm x 25cm). You have a budget of £120. What’s the most suitable bag you can get?” With a good scavenger hunt task there will be one perfect answer, so quiz the design team to find out the best solution to this task and then see if participants can find it.
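A good scavenger hunt task has one verifiably best answer, so you can check participants' results mechanically. A minimal sketch of the luggage task above, where the bag data, the names, and the "largest volume within the limits wins" rule are my own assumptions for illustration:

```javascript
// Invented catalogue data for the luggage task; only MaxCabin both fits
// the airline limit (56 x 45 x 25 cm) and stays within the £120 budget
// while maximising volume.
const bags = [
  { name: 'City Carry-On', w: 40, h: 55, d: 20, price: 89 },
  { name: 'MaxCabin',      w: 45, h: 56, d: 25, price: 115 },
  { name: 'Weekender XL',  w: 48, h: 60, d: 28, price: 99 },  // too big
  { name: 'Jetsetter Pro', w: 45, h: 56, d: 25, price: 140 }, // over budget
];

function bestBag(bags, limit = { w: 45, h: 56, d: 25 }, budget = 120) {
  return bags
    .filter(b => b.w <= limit.w && b.h <= limit.h && b.d <= limit.d
              && b.price <= budget)                 // meets all constraints
    .reduce((best, b) =>                            // keep the biggest bag
      !best || b.w * b.h * b.d > best.w * best.h * best.d ? b : best, null);
}
```

Run against the design team's real catalogue, something like this is one way to confirm before the test that the task really does have a single best answer.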

The Reverse Scavenger hunt

With this type of task, you show people the answer — for example a picture of what they need to look for — and then ask them to go about finding or purchasing it. For example, if you’re testing out a stock photography application, you could show people an image that you want them to locate and then ask them to find it by creating their own keywords. This kind of task works well if you think that a textual description of the task might give away too many clues.

Self-generated tasks

Scavenger hunt and reverse scavenger hunt tasks work well when you know what people want to do with your web site. But what if you’re less sure? In these situations, try a self-generated task instead. With this type of task, you ask participants what they expect to do with the site (before you show it to them), and then you test out that scenario. For example, you might be evaluating a theatre-ticketing kiosk with regular theatre-goers. You begin the session by interviewing participants and asking what they expect to be able to do with the kiosk. For example, you might hear, ‘book tickets for a show’, ‘find out what’s on’ and ‘find out where to park’.

You then take each of the tasks in turn, and ask the participant to be more specific. For example, for the task, ‘book tickets for a show’, you’ll want to find out what kind of shows they prefer, such as a play, a musical or a stand-up routine. How many tickets would they want to book? On what day? For an evening or a matinee performance?

Your job is to help participants really think through their requirements before letting them loose with the system, to make sure that the task is realistic.

Part self-generated

These tasks work well when you have a good idea of the main things people want to do with the site, but you’re less sure of the detail. With a part self-generated task, you define an overall goal (for example, ‘analyse your electricity usage’) and then ask the participant to fill in the gaps. For example, you can do this by asking participants to bring data with them to the session (such as electronic versions of past electricity bills) and allowing them to query their own data in ways that are of interest (for example, ‘what are my hours of peak usage?’)

‘Skin in the game’ tasks

A problem with usability test tasks is that you want participants to carry out the tasks as realistically as possible. But there’s a big difference between pretending to buy a holiday in Spain and really buying a holiday in Spain. No matter how well intentioned they are, participants know that, if they get it wrong, there are no consequences. You can mitigate this risk by giving participants real money to spend on the task.

The easiest way to do this with an e-commerce web site is simply to give participants a redeemable voucher to spend during the test, or reimburse their credit card after they have made a purchase.

A related approach for other systems is to incentivise the participant with the product itself. For example, if you’re testing a large format printer that creates photographic posters, you could ask people to bring in their digital photographs and then get them to use the printer to create the poster they want. The poster itself then becomes the participant’s incentive for taking part.

As well as getting as close as possible to realistic behaviour (mild concerns become pressing issues), this approach also gives you the confidence that your participants are the right demographic, since their incentive is based on the very product you’re testing.

Troubleshooting tasks

Troubleshooting tasks are a special category of test task because people may not be able to articulate their task in a meaningful way. It would be misleading to give a participant a written task that you’ve prepared earlier since by its very nature this will describe the problem that needs to be solved. For example, a mobile phone may display an arcane error message if the SIM card is improperly inserted or a satnav system may fail to turn on. As far as the user is concerned, the product is simply not working and they don’t know why.

For these situations, it makes sense to try to recreate the issue with the product and then ask the user to solve it — either by starting the participant at Google or at your company’s knowledge base articles. You’ll then get great insights into the terminology that people use to describe the specific issue, as well as seeing how well your documentation stands up to real-world use.

Found at:


OpenHallway – a tool for asynchronous remote usability tests


I have just discovered a new remote usability testing tool. It lets you invite participants to a test, for example by email.
All you have to do in the administration interface is enter the start link and store the desired scenarios. When a participant lands on the test page via that link, they first have to confirm that they want to run a program.
They are then shown the scenarios they have to complete and can start the usability test right away. Their screen and audio (if they have a microphone and it is switched on) are recorded.
After the session, the moderator has access to the recording in the back end and can draw conclusions about the site's usability. Unfortunately, in my trial the video played back far too fast, which would make any analysis extremely difficult. Perhaps everyone should try it once and form their own impression: one test run is free, and licensing is monthly, starting at 50 euros. Definitely worth a look.

Unfortunately, this online service is English-only and cannot be customised (e.g. with a corporate identity), which many clients want.

You can find the tool here:


The 5-second quick test

UIE has a very interesting discount usability-testing approach.
In this short usability test, participants get to see the page under test (preferably home pages) for only 5 seconds, which corresponds to the average duration of a first visit. Afterwards, the participants have to state how they would have completed the task. A method I could easily imagine running over the web (as a timed picture viewer, or as a live test or synchronous remote usability test).

After a quick search I also found such a service:

There you can upload screenshots and choose from 3 predefined test methods. Certainly very interesting for quick and inexpensive data collection.

Here is an excerpt on the method itself:

As we often do in other types of usability tests, we start by giving users a focused task. For the Donation page, we gave users a simple task:

“You’re ready to donate to the Red Cross organization. But you’re unsure of what kind of donation to make. What are your donation options?”

Next, before we show the user our page, we tell them we’ll only display it for 5 seconds. We ask them to try to remember everything they see in this short period.

Once the user views the entire page for 5 seconds, we remove it by either covering it up or switching to another window. Then, we ask them to write down everything they remember about the page. When they finish jotting down their recollections, we ask two useful questions to assess whether users accomplished the task. For the Donations page, we’d ask, “What is the most important information on this page?” and “How would you go about donating to the Red Cross?”

Analyzing the Results

By paying careful attention to users’ initial impressions, we can identify whether the content page is clear and concise. If the page is understandable, users will easily recall the critical content and accurately identify the page’s main purpose…

The Benefits of 5-Second Testing

By limiting the viewing time to 5 seconds, we get a valuable glimpse into what happens during the first moments a user sees a page. When we give users more than 5 seconds to study the page, we’ve found they start looking at the page more like a designer, noticing details they would normally miss or misinterpret.

Frequently, we’ll conduct 5-Second Tests with paper mock-ups or low-fidelity electronic prototypes, such as PDFs or Photoshop page renditions. We can test very early in the development cycle, long before the team builds a functional web site. Often, this early insight can help point out site-wide information design requirements, saving much redesign work down the road.

One of the 5-Second Test’s biggest advantages is how quick it is. When evaluating the Donation page, each user took only 10 minutes! Because this technique is quick and easy to implement, it is perfect to run in locations where we can gather many users at one time, such as trade shows, conferences, and the company cafeteria. We can gather large amounts of user data in a short time.

The full article is available here.


A usability moderation guide

15 Tips for good listening:

  1. Do try to read the participant’s “non-verbals” — inflections, gestures, posture and facial expression.
  2. Do work hard at overcoming distractions (such as problems with the recording equipment) that may interfere with good listening.
  3. Do try to stay with speakers who may be hard to follow — those who speak slowly, those whose ideas are poorly organised or those who repeat themselves.
  4. Do use non-verbal communication (eye contact, smiles, occasional head nods) to indicate that you want to hear more.
  5. Do re-state or re-phrase the participant’s statements when necessary so that the participant will know that you have understood him or her. A good way to reflect back is to begin a sentence with, “What I hear you saying is…” or “If I’m hearing you correctly…” or “Let me just check what I understand you’re saying…”
  6. Do admit it if you don’t understand something that the participant said and ask the participant to re-state it.
  7. Do avoid preparing your response to what is said while the participant is still speaking.
  8. Don’t just listen for the factual statements that a participant makes about the interface. These are important but just scratch the surface of what the participant is thinking. Also listen for feelings, attitudes, perceptions and values.
  9. Don’t just listen to what’s said. Participants often spend a lot of time saying nothing, even when you use the phrase, “What are you thinking right now?”. You should also listen for what is not said since this indicates what is being taken for granted.
  10. Don’t interrupt. You’ll get your chance to voice your opinions when you contribute to the report. If you find yourself talking over a participant, bite your lip.
  11. Don’t fake attention. If you find yourself daydreaming, re-orient yourself and ask a relevant question that shows you are paying attention.
  12. Don’t tune out participants just because you think they’re dull. There are always nuggets of useful information; you may just have to work hard to find them.
  13. Don’t get distracted from what participants say by their style, mannerisms, clothing, accent or voice quality.
  14. Don’t allow a participant’s status to have any bearing on how well you listen to him or her. Every participant recruited for a usability test has passed the screener and so each one is equally important. It’s not your place to prioritise one participant over another.
  15. Don’t let your expectations — hearing what you want to hear — influence your listening behaviour.

More on this at:

What every usability test moderator ought to know about good listening


Loop11 – asynchronous remote usability testing software


Loop11 is a new asynchronous remote usability testing tool, currently in beta.

It makes it easy to run usability tests over the web. Participants reach the Loop11 site via a link or an invitation and, as in a classic test, have to complete specific tasks that the moderator has defined in the tool beforehand. The current task is displayed at the top of the browser window: Loop11 overlays the existing site, or rather fetches the pages under test and renders them 1:1 during the test. Once a task is finished, the participant can say so and moves on to the next task.
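The task flow described above (moderator-defined tasks, one shown at a time, the participant marks it done and moves on) boils down to a tiny state machine. A hedged sketch with invented names, not Loop11's real API:

```javascript
// Minimal task-progression model for an asynchronous remote test.
class TaskRunner {
  constructor(tasks) {
    this.tasks = tasks;   // tasks defined by the moderator beforehand
    this.index = 0;
  }
  current() {             // the task shown in the bar at the top
    return this.index < this.tasks.length ? this.tasks[this.index] : null;
  }
  markDone() {            // participant reports the current task as complete
    if (this.index < this.tasks.length) this.index += 1;
    return this.current();
  }
  finished() {            // true once every task has been worked through
    return this.index >= this.tasks.length;
  }
}
```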

The interaction is recorded during use and can be viewed and analysed after the test (in fact, immediately).

This is not a particularly new concept, but it is a very interesting one. Similar solutions existed before, e.g. LeoTrace by SirValuse and WebEffective by Keynote. I had already enquired about the latter out of interest last year.

The advantage of this software, however, is that it is currently still free to set up studies for trial purposes; you only have to pay once you want to test more than 2 tasks, as far as I can tell. Compared with solutions costing several thousand euros, this is certainly a very interesting model for running asynchronous usability tests quickly and with little effort.

Another advantage of this methodology is participant recruitment, which can happen directly on the web, e.g. via banners on websites or via mailings. High costs such as incentives, lab rental and laborious manual analysis are eliminated, as is the often problematic scheduling: everyone takes part in the test whenever they find the time.

In return, you lose valuable qualitative data, such as the verbal feedback and body language you get in a classic lab test. The moderator cannot respond to questions or probe at certain points. The old rule still seems to apply: less effort, fewer insights.

All the same, it is a very interesting piece of software that I will look at more closely.

I have requested an account. As soon as I have gained some first-hand experience with the tool, I will post it here. I'm curious…

If you would like to try the software yourself, you can find it here:


Highlights-video guidelines from Userfocus

It’s propaganda, not a documentary

Remember the purpose of your highlights video. It’s to encourage the design team to make changes to the system. You’re not a news reporter, dispassionately presenting the strengths and weaknesses of the system: you have a point of view. Leave the balanced reporting for a written report where you have more time and space. If the design team are interested in the specific issues, they’ll go through your bug list. It’s like the first rule of psychotherapy: they need to accept there’s a problem before they can fix it.

Say something positive

You’re about to tell people that their baby is ugly. If the audience comprises people who designed this system, they may not like your attitude. So soften them up with clips showing one or two strengths of the system. Since you don’t have long, make this clip do double duty: try to pick a clip that also demonstrates the “thinking aloud” protocol, so people understand the process you used.

Focus on the top 5 issues

Chances are, you spotted dozens of usability problems. In another report, you’ll describe each of these problems, provide a severity rating and suggest a fix. But in a highlights video you need to summarise and prioritise. So review your problems and identify 5 themes that capture the most important problems. For example, say you spotted one usability problem to do with a confusing label on a form, a second to do with an abbreviation that baffled users and a third to do with organisational-centred navigational terms. You could summarise these as a single issue (“terminology”) and then select the best clips to demonstrate it.

Show 5 clips per theme

For each of your themes, select around 5 clips that demonstrate the problem. When choosing your clips, aim to convey some of the diversity of your participant pool. It’s tempting to focus on the one or two participants that were particularly articulate or fun to watch, but doing this leaves you open to the criticism that the problems you’ve found were due to stupid users. So aim for at least 2 participants per theme and make sure all your participants get heard in at least one of your 5 videos.
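The selection rules above (around 5 clips per theme, at least 2 participants per theme, every participant heard at least once) can be sketched as a small greedy selection. This is only an illustration with invented clip data, not a tool the guidelines describe:

```javascript
// Pick up to `perTheme` clips per theme, preferring participants who have
// not yet appeared anywhere, so the final video covers the whole pool.
function selectClips(clips, perTheme = 5) {
  const byTheme = new Map();
  for (const clip of clips) {
    if (!byTheme.has(clip.theme)) byTheme.set(clip.theme, []);
    byTheme.get(clip.theme).push(clip);
  }
  const chosen = [];
  const seenOverall = new Set();
  for (const themeClips of byTheme.values()) {
    const picked = [];
    const seenInTheme = new Set();
    // Put clips from not-yet-shown participants first (stable sort).
    const ordered = [...themeClips].sort((a, b) =>
      Number(seenOverall.has(a.participant)) -
      Number(seenOverall.has(b.participant)));
    // First pass: at most one clip per participant within the theme.
    for (const clip of ordered) {
      if (picked.length >= perTheme) break;
      if (seenInTheme.has(clip.participant)) continue;
      picked.push(clip);
      seenInTheme.add(clip.participant);
      seenOverall.add(clip.participant);
    }
    // Second pass: top up with remaining clips if the theme is short.
    for (const clip of ordered) {
      if (picked.length >= perTheme) break;
      if (!picked.includes(clip)) picked.push(clip);
    }
    chosen.push(...picked);
  }
  return chosen;
}
```

Even if you never automate this, the two constraints are worth checking by hand against your storyboard.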

Show each problem in 5 minutes or less

You’ll have lots of clips of participants illustrating each theme, so be ruthless with your editing. People expect video clips to get to the point quickly: for example, the average YouTube video lasts 3 minutes. So you should aim for a maximum of 5 minutes to illustrate each theme. If you have 5 clips, that means an average of 60 seconds per clip.

Put your best clips first

When creating the storyboard for each theme, put the most compelling clip first. Then, if the audience buys in and agrees, stop the video and skip to the next one… or keep it rolling if there are doubters. This approach works well because one of the biggest problems in presenting usability test results is often getting through them all!

Help viewers concentrate

When you show a 1024 x 768 screen in all its glory, it is often hard for people to see what you are getting at. People may not be able to see the participant’s mouse, or the mouse may not be where the action is. This means you should zoom in on specific areas of the screen, or use callouts in the video to explain what’s happening. This is where you need to go beyond usability testing software like Morae and Silverback and use more flexible tools for editing digital video, like Camtasia Studio and ScreenFlow. Both these tools are cheap ways to make your videos look good and for you to feel proud of the results.

More on this at:
The rule of 5: How to create a usability test highlights video you can be proud of


Usabilla – a mini usability-test tool

Usabilla is a new service that lets you upload screenshots or embed pages and put them through a short test.
The whole thing is in beta and, at least for now, free. A test is set up very quickly, as follows:

  1. Sign up
  2. Create a test
  3. Select screenshots
  4. Define questions for each screenshot
  5. Invite participants to the survey via URL

Participants then see the screenshots one after another and can attach points and notes to each screenshot. The advantage of this method is its speed: a test is set up very quickly and, on the other side, completed very quickly by the participants.

Usabilla – a tool for discussing design proposals

I have tried the tool and have to say it is very well suited to discussing designs collaboratively, within the team and with potential end users. Personally, though, I understand usability as the quality of operating something, i.e. the actual interaction with a piece of software or a machine. Since no interaction is examined here, this is not really usability optimisation but rather a classic design test.
For that, however, the web service is very well suited.

Just give it a try yourself. As mentioned, signing up and creating a test is free!

You can find the tool here:
Usabilla – Transparent Usability – Visual Feedback


Online usability testing made easy, thanks to the UX-Suite

UX-Suite by kuehlhaus AG

Usability testing is important, but usually not compatible with the budgets available. That's why there is currently a visible trend towards discount usability engineering (e.g. in the form of expert reviews). But nothing replaces an empirical measure. So the question arises: how can you run user tests without having to recruit, invite and pay participants, a process that is not only very costly but also very time-consuming? That's why kuehlhaus AG, an internet agency in Mannheim, or rather I ;-) , gave this some thought and developed software-supported evaluation methods, grouped together under the name User Experience Suite.

Worth a look!


Usability tests don't always have to follow the textbook

A great method for gathering many opinions quickly, or rather collecting data quickly, so that design decisions aren't made from the gut:

Even if teams don’t do classic usability tests, they still need insights on which to base design decisions.

The secret to usability testing in the wild is that you can conduct usability tests following the basic methodology, just less formally. Call it Usability Testing Lite: sit next to someone using a design and watch them. Teams that do testing in the wild don’t need a lab. They don’t usually record anything. But they do have everyone on the team fully present, with at least one other person from the team in each session held with a user.

They conduct sessions in cafes, or malls, trade shows, or street fairs, even their own reception areas — anywhere the users might be — and ask nicely for a few minutes of time to try out something new.

These are quick, cheap, and insightful sessions. And since these smart teams were able to gather a few insights in a few days rather than a few weeks, they just do another round as soon as they can. They repeat the steps until they start to see trends. Then adjust as more questions come up. The thinking of the best teams is, how could having some insights be worse than doing nothing?

At least they got out of the office, maybe got a reality check on some small part of a design, and started to make a case for having more contact with users. Sounds better than opinion wars to me.

The full article on this not entirely methodologically pure, but certainly efficient, approach can be found here:
Quick and Dirty Usability Testing: Step Away from the Book
