Introducing Usability Testing to Ethos


Role

Frontend Engineer & UI/UX Designer

Team

3 Engineers: Andrew Chough, Bryan Lam, Hera Kim

2 Designers: Andrew Chough, Bryan Lam

Tools Used

Adobe XD, Git

Duration

January '21 - August '21

Summary

Helped create a standardized usability testing framework within my team to open a dialogue with users, capture feedback, and incorporate changes in a timely manner.

Background

When I joined Adobe in mid-2019, usability testing was nearly nonexistent within the team and across our products. Because the team’s work was internal, updates to our products shipped without any usability testing; instead, we relied solely on JIRA tickets to track bugs and Slack channels to surface user experience frustrations and suggestions. The tooling the team was building was still in its infancy, including around the time I joined, and the priority was on scaling the project as fast as possible, so there was little emphasis on running usability tests when developing new features or improving old workflows.

Problem

For many developers within Adobe, newly released features wouldn’t necessarily resolve frustrations with the current user experience; in rare cases, they would actually create more frustration within the interfaces.

And while the importance of scaling the tooling to an enterprise level could not be overstated, little priority was given to testing with our users whether the workflows these new features introduced were the right ones. Many assumptions were made from the developers’ perspective, and without usability testing, some of those assumptions were extremely off base.

Design Challenge

How might we integrate a viable and scalable usability testing framework within our products to ensure that we’re receiving the right feedback in an efficient manner?

“Better user experience. Please hire a designer to help with the crap design that you have right now.”
- Response from a developer in the annual feedback survey

Solution

A usability testing framework that allows developers and product people to double-check their work and make sure that new features make sense to users.

Picture of a woman conducting a virtual usability testing session with a user.

Process

Moving from impromptu to agile

The first step in creating a viable usability testing framework was identifying the right people and communicating to them why usability testing was necessary. As our tooling became the company standard, it became apparent that our style of pushing out features without revisiting them before they hit production was not going to be viable in the long term. I communicated to my manager and higher-ups that we needed to implement some form of usability testing for our features moving forward, both to ensure that users could walk through workflows without pain points and to confirm that we were building these features in the most sensible way. On top of this, I pointed out that the number of messages in our Slack community channels would decrease in the long term, since confusing workflows would be identified and corrected during usability sessions, preventing complaints from arising in the first place.

Picture of several colleagues discussing around a table.
Igniting the conversations

Next, I focused on putting a usability testing framework in place and discussing how the broader team would execute it for our features. Since our team worked on UI components and experiences, we concluded that we needed to have users walk through the features, ask them any necessary questions related to the workflows, and probe for their thoughts whenever they ran into roadblocks. I set up discussions within our team to form a general framework for how we would approach the testing sessions, what kinds of questions we would ask, and what types of feedback we wanted to hear from users.

Picture of a woman presenting information on a whiteboard between two ideations of an app.
Executing user sessions

Once this process was finalized, the team made sure to use it the next time we were ready to release a new feature into our product. I asked for volunteers in our community channels and set up one-on-one, or sometimes two-on-one (with a notetaker attending), meetings that allowed us to walk through our new features with users and ask them what did or didn’t make sense while performing certain actions. During these sessions, I emphasized having users think out loud as they performed each action, and we asked follow-up questions whenever we needed more information about anything they said. The result of creating this process and meeting with users was a genuine dialogue between us and our users, ensuring that whatever feedback we captured could be properly translated into feature improvements.

Picture of a person pointing at a phone.
Generalizing the process

Once all of the user sessions were finished, I compiled the notes from every session into a single document where everything was laid out. We then held an internal team meeting to go through the notes from each user, group related pieces of feedback together, and flag any gaps between user expectations and experience as well as any confusing workflows. From there, I prioritized the issues that were blocking our release to General Availability and worked on fixes for those blockers. Once everything was finally completed, we released the feature to the general public, making sure to keep an eye on any of the community channels where feedback might appear.

Picture of colleagues putting up and analyzing post-it notes on a clear glass wall.

Impact

The impact of the new changes was felt from the get-go. The features that went through the usability testing process received more acclaim from our external users as well as from our wider team internally.

There were fewer, if any, complaints about the newer features, and anything communicated to us as a problem was immediately prioritized rather than tossed into the backlog. On top of this, it became easier to communicate the importance of usability testing to others after the first few rounds of these sessions.

“For the developers that helped develop the new onboarding feature, kudos. It has been a pleasant change.”
- Developer after changes were implemented

Reflections

Constraints

The biggest issue during this entire process was the scale at which these usability sessions could happen. An hour is a long time for anybody, and over time it became harder to gather users for sessions, especially since we offered no compensation for their time. As a result, it became difficult to recruit a wide variety of users to walk through each feature, and we had to rely on a fairly homogeneous pool of people. Fortunately, since our product catered only to developers at Adobe, this proved not to be a huge issue. On top of this, since these sessions ran toward the end of feature development, it was much harder to go back and revise any fundamental user workflows, issues that could have been identified and worked on earlier if we had kept the scope of each session to a smaller section of the feature.

Conclusion

It’s safe to say that the usability testing sessions did a great deal of good for our team and beyond, as other teams (especially those that didn’t work on anything UI-related) started to devise their own usability testing processes to use with their users. I’ve since worked with others on the broader team on a generic usability testing framework that can be applied to any team that needs usability testing in its workflow.