What happens when two techniques get acquainted
As part of a piece of discovery research aimed at understanding what users look for when purchasing a product to help treat the menopause, my product team had developed a few hypotheses about what information users needed in order to feel confident buying the item.
As well as conducting qualitative research for insights into users’ previous shopping experiences in this category, I decided to gauge how much of a priority each piece of information was for users by combining the MoSCoW prioritisation method with a closed card sort. Rather than prioritising the features ourselves as a team, based on what we thought users saw as must-haves, should-haves and so on, we had the end users do it, in a way that clearly highlights what people consider essential or irrelevant.
Back to basics: what is a card sort? As defined by the Nielsen Norman Group, a card sort is a ‘UX research technique in which users organize topics into groups. Use it to create an IA that suits your users’ expectations’ [1]. In essence, users are given a selection of cards and asked to group them into collections that make sense to them. It is normally done to understand users’ mental models in relation to a site’s structure and how they find what they need.
As defined by Airfocus, the MoSCoW method is ‘a tool for establishing a hierarchy of priorities during a project. It’s based on the agile method of project management, which aims to strictly establish factors like the cost of a product, quality, and requirements as early as possible. “MoSCoW” is an acronym for must-have, should-have, could-have, and won’t-have, each denoting a category of prioritization’ [2]. In short, it is a method for product teams to outline what their priorities should be as they build out their roadmap, and to agree as a team where their time, money and resources should be focused.
The Setup:
The study was conducted using UserZoom’s card-sorting functionality. Participants were given a total of 19 cards, each carrying a hypothesis we believed was important to the purchasing journey. These ranged from product reviews to product efficacy, all derived from jobs-to-be-done identified in previous research. It was a closed card sort, with the groups predefined using the MoSCoW categories. The only slight alteration was what the ‘W’ stood for, changing it from ‘won’t have’ to ‘would not use’. Participants were asked to place each card into one of the groups according to whether they considered it a must-have piece of information, something that should be on the product page, something that could be on the product page, or something they would not use on the product page. As the participants had previously bought these items, their groupings reflected what had influenced their own experience.
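For illustration, here is a minimal sketch of that setup as plain data, assuming a simple in-code representation (UserZoom’s actual configuration is done through its interface, not code):

```python
# The four predefined groups for the closed sort, with the standard
# MoSCoW "won't have" renamed to "would not use".
GROUPS = [
    "must have",
    "should have",
    "could have",
    "would not use",
]

# Two of the 19 hypothesis cards named in this article; the remaining
# cards are omitted here rather than invented.
CARDS = [
    "product reviews",
    "product efficacy",
    # ...17 further cards derived from jobs-to-be-done research
]
```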
The Results:
The results were fairly conclusive. Five elements emerged as clear must-haves, on the basis that over 50% of participants grouped them into that category. The remaining hypotheses were classed as should-haves, could-haves or would-not-uses on the same principle: over 50% of participants placing them in that group. In some cases a card did not achieve a 50% majority in any group. Its group was then based on its highest share of votes; for example, if 40% was a card’s highest value and it fell under should have, the card was classed as a should-have, but ranked as a lower-level one.
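This assignment rule is simple enough to express in a few lines. Here is a minimal Python sketch of it, using hypothetical vote counts for two of the cards (not the actual study data), assuming 20 participants:

```python
def assign_group(votes: dict[str, int]) -> tuple[str, bool]:
    """Assign a card to a MoSCoW group from participant vote counts.

    Returns (group, is_majority). is_majority is True when over 50% of
    participants placed the card in that group; otherwise we fall back
    to the highest-scoring group, treated as a lower-ranked result.
    """
    total = sum(votes.values())
    group, count = max(votes.items(), key=lambda kv: kv[1])
    return group, count / total > 0.5

# Hypothetical counts out of 20 participants, for illustration only.
cards = {
    "product reviews": {
        "must have": 14, "should have": 4, "could have": 2, "would not use": 0,
    },
    "product efficacy": {
        "must have": 4, "should have": 8, "could have": 6, "would not use": 2,
    },
}

for name, votes in cards.items():
    group, majority = assign_group(votes)
    label = "clear" if majority else "lower-ranked"
    print(f"{name}: {label} {group}")
    # product reviews: clear must have
    # product efficacy: lower-ranked should have (40% was its highest share)
```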
This was the second part of our research into product information, and it validated the qualitative findings from the first study as part of the triangulation.
The Learnings:
Overall, this was seen as a success by me and the team. With objective data, we could clearly see where the priorities should lie when designing the product pages. From a researcher’s perspective, this method was also fairly easy to analyse, although not as simple as something like a Likert or ranking scale, where the results are more straightforward to draw conclusions from. Also, none of the 19 hypotheses were grouped into ‘would not use’, which is potentially acquiescence bias on display, as users did not want to come across as disagreeable. It is nevertheless something that, where applicable, we would look to use again.
By combining two established methods, users are directly telling us where the priorities should lie, which removes the guesswork from our team.
It’s always encouraging to see your work recognised, so if you found this article interesting, useful or just a generally good read, please do leave a clap or follow!