For week two we went through the steps of creating an unmoderated online test. My test is being conducted on a smaller online company, StoreHouse Tea. Validately is an impressive platform with many different ways of obtaining data. I'm hoping my tasks will help reveal how navigable the existing interface is when users attempt directed tasks. It was a challenge to prioritize which areas of the interface to test, and how to go about doing so. I look forward to seeing how participants fare.
As election day approaches, I found myself wondering about the UX behind perhaps the single most important application: balloting. Moving the voting process to a digital platform is a controversial proposition, and a move that's unlikely to happen in the near future. There are understandable concerns about security. But there should still be a conversation about the user experience behind casting a vote. After all, the 2012 election drew a turnout of only 58% of eligible Americans. I think user experience is one of many factors behind this low turnout, especially when younger voters are the most absent. Our ballots are not exactly straightforward, and the internet and mobile software have spoiled people with a refreshing movement of uncompromisingly minimal design. Going from one to the other can be a shock to users. With some minor touch-ups to our existing system, we can get more people to participate in our political process.
Reviewing recruiting platforms this week was a useful exercise for me. Being fairly new to User Experience Design, I am unpracticed in evaluating many of these services. I have looked at many SaaS platforms for different applications, but Mechanical Turk was something I was completely unfamiliar with. It’s very easy to see how useful this service can be when time is limited for recruiting.
Conducting testing requires you to decide on a method. Face-to-face and remote testing have their own advantages depending on the resources you have available. For F2F testing, the data you get is very rich because you have the opportunity to ask questions and observe body language and facial expressions. These reactions are very important, and F2F testing allows you to capture them during recorded sessions and while taking notes. It requires strong moderation, but face-to-face testing can yield very meaningful data from just a handful of participants.
Remote testing can be effective as well, but there are drawbacks. You cannot actively question participants, and sessions are often not recorded. Some participants will also do a better job of 'thinking aloud' than others. Testing this way is cheaper because it doesn't require participants to be on site or take much time, and because no researcher needs to moderate. This allows for many tests to be completed in a short time frame.
If I were in a situation with unlimited time and resources, I think face-to-face testing would always be the better choice. The reality is that many situations won't be that flexible, and remote testing can be an effective alternative.
While conducting my own testing on Papa John's online ordering, I was surprised to witness as many usability issues in action as I did. My participant was a former colleague of mine with a background in software testing, so I expected him to navigate all tasks effortlessly. He's used to far bigger and more complex usability issues, and still there were times when he hesitated or made mistakes performing tasks. Going through that testing made our final assignment an exciting endeavor, since I knew my peers' videos would likely uncover much more than my own.
This ended up falling in line with expectations. Each of the three assessments provided great insight into areas where common usability issues may occur, and everyone seemed to do a great job 'thinking aloud'. After watching four individuals go through the same testing, I was able to make determinations about areas of opportunity that I felt confident about. The assignment, and our class as a whole, was an eye-opening experience regarding the prevalence of usability issues online. Surely companies such as Papa John's have robust design teams and go through countless iterations, yet opportunities for improvement still exist, and they're not hard to find. Seeing how strong the need is for usability review gives me great confidence moving forward in my academics.
This week's assignment required us to play the role of moderator, using our provided script and tasks. It was a difficult exercise; I found myself biting my tongue at times to refrain from speaking and potentially leading my participant. I was fortunate to have found a participant who has a background in quality assurance for software companies and was familiar with think-aloud protocols. He did a great job verbalizing his thoughts and provided great feedback. If I had chosen a candidate with less experience, it would have been much more challenging.
If I had to do the assignment over, I would make more alterations to the script and tailor it to the individual I would be speaking with. This way, it would feel more natural to the participant and keep things moving smoothly. Overall, I was pleased with how I moderated this assignment. I was nervous because I knew there was no restart button once a participant had seen the testing materials. I was also concerned that my software wouldn't work properly and that data would be lost as a result.
There were times when I got a little off-task trying to build rapport with my participant and make the experience a little more conversational. I definitely need more practice in order to pull this off effectively, but overall I think things stayed on track fairly well.
When I was finishing up my undergraduate degree, I worked in call center management for a really cool technology company. It was not the most glamorous job, but I enjoyed it, and I learned something about myself: I love metrics. Not so much a math lover, I have always recognized the usefulness of statistics. Working in this role exposed me to several different ways to track and report this data. This week's assignment allowed me to think critically about which quantitative approaches may be most useful in the context of a food company such as Chipotle, and how to represent this data in a way that would be useful in a usability report.
I chose time-on-task testing, though there were a few other metrics I thought were good selections. I wanted to capture which approach (between product versions) was the most efficient for the user. Admittedly, I made some assumptions. I know that most Chipotle customers already have in mind what they like to order from the establishment. I went on a company lunch once and was astounded to watch 40 people rattle off exactly what they wanted, each with little nuances. Keeping this in mind, I made efficiency my top priority for testing. I wanted to be able to say, "this design took users less time to use successfully, therefore required less cognition, therefore is a better design". Sometimes conclusions such as these are hard to draw with only one form of quantitative data. I feel as though the approach I chose was an effective one, especially when looking at the data visually. It's easy for an executive to understand two pieces of data compared to one another in the form of a chart, and to see what is working and what isn't.
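The comparison described above can be sketched with a few lines of Python. This is a minimal illustration, not the actual study data: the sample times and version names are hypothetical, and it simply computes the summary statistics (mean and spread) you would put side by side in a chart for two design versions.

```python
from statistics import mean, stdev

# Hypothetical time-on-task samples (in seconds) for the same ordering
# task on two design versions. These numbers are illustrative only.
version_a = [48.2, 55.1, 62.4, 51.0, 58.7]
version_b = [36.5, 41.2, 39.8, 44.0, 38.1]

def summarize(times):
    """Return the mean and sample standard deviation of task times."""
    return mean(times), stdev(times)

mean_a, sd_a = summarize(version_a)
mean_b, sd_b = summarize(version_b)

print(f"Version A: {mean_a:.1f}s on average (sd {sd_a:.1f})")
print(f"Version B: {mean_b:.1f}s on average (sd {sd_b:.1f})")
print(f"Difference in means: {mean_a - mean_b:.1f}s")
```

With real data you would also want enough participants per version to be confident the difference isn't noise, which is why a single metric on its own can be hard to draw conclusions from.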
I enjoyed having the opportunity to do group work this week. Seeing different approaches and having discussions about them has benefitted me greatly throughout my coursework here at Kent. It seems that design, more than almost anything else, benefits from additional input. As someone whose love of design stems from aesthetics, my favorite distinction between people's work is style. I have already picked up a few formatting and style tips from my peers.
Going through the process of creating a screener and tasks was not easy for me. Luckily, pizza companies are very relatable for most people and a fun business to work around. I was also grateful for my assigned group: advanced users. I like to think that I belong to this group. I'm now at my third university and well-versed in online ordering platforms, having used them since they became available roughly ten years ago. I leveraged this experience and thought back to countless interactions with similar companies to form this document.
Because we were working around a pizza company, I chose to use scenarios heavily. I think it's safe to say that even novice users have enough experience with pizza, either their own or others', to find my scenarios relatable. I crafted six scenarios to frame my twenty-or-so questions in order to get the most out of my participants, and they were very useful in helping me find the information I was looking for.
For this week's assignment, I chose to use a more formal approach with a written document. Our scenario described high-level executives as key stakeholders, so it made sense to build an argument with a written report that could be quickly interpreted and available for later reference. This allowed me to put the situation into context and build a strong case for formative testing. I tried to lay out my document so it could be understood at a glance by emphasizing key points with bold (or, in this case, regular) font weights. A very brief one-sentence summary sits at the bottom for the most reluctant readers.
It made sense to me to choose formative testing for several reasons. The situation given to us made summative testing challenging: no working prototype was in place, we had no baseline for testing, and there wasn't enough time for a competitive review. Formative testing made sense given the time constraints involved, because it allowed for evaluation along the way and iterative testing. This approach gives the pizza company (Pizza Pan in my case) its best odds of releasing a sound online service. It was a goal to keep the document to one page, with relatively small text areas for easy scanning. This was to make the information and main points easily accessible to everyone involved with the project.