B2B, e-commerce, research
Henry He, Platform SDKs Dev Team
Cutting down the clutter.
After numerous comments from our partners, we knew there was an issue with the starting experience. The previous starting point (Merlin’s Potions) had grown over the years into a full-fledged commerce site, from Home Page to Checkout.
Most of our customers used a variety of commerce platforms and third-party CMS vendors, so our partners had to delete the parts that didn’t apply to each customer’s site. They consistently gave us feedback that they felt they were fighting against our product.
The team had already user-tested a prototype and found that only three starter pages needed to be kept: these pages recur across commerce sites without being too heavily tied to any one customer’s backend.
The product team wanted to know how this new design would fare once launched and how our users would react to it, which is where I stepped in to do this research.
understanding. setting goals.
Coming in mid-way through the project, I worked closely with the PM to understand which user goals we were improving and how we could measure them.
We wanted to see how customizable the Scaffold felt to developers and to gauge the perceived pace of initial setup, since actual build time varied too much between partners.
75% of partner developers will rate the Scaffold "Satisfied" or higher in a post-study survey
Collect partner developers’ initial impressions of the Scaffold and document improvements for the next version
When first tackling the user test plan, I had the luxury of choosing the methodology with my PM. We were leaning towards usability testing, but we realized our product needed a real-world scenario that couldn’t be replicated in a one-hour test.
After researching ethnographic methods and weighing the costs, we settled on running a diary study with 5 partner developers covering their first build using the Scaffold. We already had an established relationship with the participants, which made the diary study easier to implement.
Since I had never led a diary study before, we researched how other companies ran theirs, down to the finest details like participant counts. There was no cookie-cutter answer to running a diary study, especially for our particular case: we were working with remote users in a different time zone, in Palestine!
Over a span of two weeks, starting after the creation of their project repository, we sent a Google Form survey diary every other workday (automated, of course) for users to fill out as they went through the start of their project. Afterwards, we held a 1-hour interview with each user to go in-depth on their responses.
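The every-other-workday cadence can be sketched in code. This is an illustrative stdlib-only sketch of the schedule logic, not our actual automation (which lived in Google Forms and Slack scheduled messages):

```python
from datetime import date, timedelta

def diary_send_dates(start: date, num_surveys: int = 5) -> list[date]:
    """Return dates for diary survey sends: every other workday,
    skipping weekends. Purely illustrative of the study cadence."""
    sends, day, workday_index = [], start, 0
    while len(sends) < num_surveys:
        if day.weekday() < 5:              # Monday=0 .. Friday=4
            if workday_index % 2 == 0:     # every other workday
                sends.append(day)
            workday_index += 1
        day += timedelta(days=1)
    return sends

# Starting on a Monday, five sends span roughly two working weeks.
schedule = diary_send_dates(date(2024, 1, 1))
```

Counting workdays rather than calendar days keeps the cadence consistent even when a send would otherwise land on a weekend.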
keeping users engaged.
Slack was a huge touchpoint for these surveys. A Slack project channel was already in place for the entirety of the project build, which meant it was the best form of communication between our team and the partner.
I sent reminder messages every other day and personally messaged anyone who may have forgotten to fill out the form. Honestly, it took a lot of effort and outreach to get user responses, but it was a good lesson in having fallback plans in place in case a user wasn’t responding.
To get deeper qualitative data, we interviewed each user after the two weeks, using their diaries as talking points to expand on their thoughts. For a quantitative metric, we measured their overall experience using the Scaffold on a Likert scale.
We created a Discussion Guide with generic questions to ask during the interviews, but ultimately we both ad-libbed to draw out richer responses from the users.
Diary studies produce huge amounts of data to sift through. We created a rainbow spreadsheet of quotes, tallying how many participants echoed each one, to chunk all the data points. For each grouping of quotes, we drew insights and documented observations and action items.
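To make the tallying step concrete, here is a minimal sketch of the rainbow-spreadsheet counting logic; the participant IDs and themes below are hypothetical, not quotes from the actual study:

```python
from collections import Counter

# Hypothetical coded diary entries: (participant_id, theme) pairs
# distilled from raw quotes. Names and themes are illustrative only.
coded_quotes = [
    ("P1", "missing accessibility docs"),
    ("P2", "missing accessibility docs"),
    ("P4", "missing accessibility docs"),
    ("P1", "wants more starter-page control"),
    ("P3", "wants more starter-page control"),
    ("P5", "setup felt fast"),
]

# Count unique participants per theme, as a rainbow spreadsheet does,
# so one talkative participant can't inflate a theme's weight.
theme_counts = Counter(theme for _, theme in set(coded_quotes))
for theme, n in theme_counts.most_common():
    print(f"{n}/5 participants: {theme}")
```

Deduplicating (participant, theme) pairs before counting is the key move: the metric is "how many people raised this," not "how many times it was said."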
In the final report, we found themes around a lack of examples or documentation in certain areas, such as accessibility. This turned out to be a recurring theme across our product and another reason for the audit of our docs site (see the Developer Center project for more info).
80% user satisfaction. validating user needs.
Although there were still many improvements to be made, we achieved a satisfaction rating of "Satisfied" or higher from 80% of developers.
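The metric itself is a simple top-box calculation over the Likert ratings. A minimal sketch, with illustrative ratings chosen to match the reported 80% (the study’s actual responses are not reproduced here):

```python
# Hypothetical Likert ratings (1-5) of overall experience from the 5
# participants; 4 ("Satisfied") and 5 count toward the top-box score.
ratings = [5, 4, 4, 3, 5]

# Fraction of participants rating "Satisfied" or higher.
top_box = sum(r >= 4 for r in ratings) / len(ratings)
print(f"{top_box:.0%} rated Satisfied or higher")  # → 80% here
```

With only 5 participants each response moves the score by 20 points, which is why we treated the number as directional rather than statistically precise.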
To inform future product decisions and direction, we also asked which attributes mattered most to developers. Unsurprisingly, more control and flexibility topped the list, solidifying some of our initial assumptions around user needs.
improvise. have a plan b.
Research can be scary, especially since this was one of the first larger-scale user research studies I had helped lead and put together. It was even harder because there was no existing process for distributing the results, but that was also the beauty of this project.
Although a report isn’t the most exciting deliverable, it became a talking point among the team for future iterations. I also learned to come prepared with a plan when interviewing, but to trust the process and allow for some ad-libbing. After all, we’re only human, and it makes interviewing feel less robotic.