A designer’s guide to creating and testing icons that users understand.
Icons are usually tiny illustrations, so it’s easy to overlook their weight in a user’s understanding of a product. We take for granted that most users are familiar with common icons, such as a cog meaning settings, or a bell meaning notifications. However, the chances are that we’re going to need to make some custom icons that are specific to our industry or product, and we can’t expect a user to immediately recognise what they mean.
It can be tempting to draw up the first shape that comes to mind, put it next to a text label, and rely on the text to do the heavy lifting. We can do better than that! Let’s elevate our icon’s recognisability and clarity by carrying out some really simple user surveys.
Disclaimer: This article is aimed at designers looking to carry out their own studies. Researchers, this may be too rudimentary for you.
To carry out this research, we’ll use a survey platform that will allow us to reach our target audience.
In surveys, always display the icons at actual size to the participants. If the icons are 24px by 24px on the product, then they should be that size in these studies too. We want to find out if they recognise the shape, and showing them something much larger will make it too easy and skew the results.
Make sure the icons shown to the participants are all the same shade of dark grey, displayed on a white background. Colour can add emotion or certain connotations to a design, so it’s best to remove that aspect entirely. Also, white and dark grey are great for accessibility contrast purposes.
Right, let’s get researching. Ideally, we’ll already have at least three icons that are all trying to convey the same message, so we can test them against each other. But if we’re having trouble thinking up some appropriate symbols, we can always…
We’re going to start really broad by simply asking our survey participants what they think would be a suitable icon. To do that, we’ll need supporting imagery, so mock up a wireframe of a fictional version of the product and put a big red question mark where the icon should be. Not only does the monochrome screen remove brand bias, but it helps make the question mark stand out more — the easier we can make this for them, the better.
We need to make sure the participant understands the question, so let’s use this reliable framework:
a) Context — tell the participant what they’re looking at.
b) Explanation — elaborate on details pertinent to the question.
c) Question — ask them what we want to know.
Here’s an example of a suitable question for this scenario. Feel free to copy this, amend as needed, and paste into your own survey.
The image here shows a fashion retail website with its menu open.
It uses little symbols above the section headings to give more context to their meaning. For example, the Home section has an icon of a house above it, and the Search section has a magnifying glass symbol above it.
The Deals section’s symbol is hidden by a red question mark. What symbol might you expect to see above the word Deals?
The odds are that such an open-ended question will yield some low quality responses, but that’s okay! Take note of any common patterns or themes in the answers, and design a new batch of icons based on them.
Now we have a decent set of icons to test, let’s flip the script and do the opposite of the previous survey — show the participants the icons outside of any visual context and ask what they think each one represents. This study will really test the strength of these icons, because each relies solely on its own appearance to convey meaning.
Here’s a good question to accompany this survey — it’s short, and provides just enough context without leading the participant.
Please look at the symbol.
Imagine that you were on a fashion retail website and saw that symbol. What would you think that symbol represented?
Repeat this image and question format for each icon. It’s best to have each icon on a different page, and randomise the order of the pages to eliminate survey bias.
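If your survey tool doesn’t randomise pages for you, the shuffling step is simple to do yourself. This is a minimal sketch with hypothetical file names — the point is just that each participant sees the icon pages in a fresh order:

```python
import random

# Hypothetical icon files for the out-of-context survey.
icons = ["price_tag.png", "star.png", "percent_sign.png"]

# Shuffle the presentation order per participant to avoid order bias.
random.shuffle(icons)
print(icons)
```

The same set is shown to everyone; only the order changes, so any effect of seeing one icon first is spread evenly across the results.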
When the results come back, we can be fairly liberal with our pass criteria, because people will have different words in mind for the same idea. As long as an answer falls under the general umbrella category, we can count it. For example, for the Deals icon above, people may respond with “sale”, “discounts”, or “promotion”, and those would all be acceptable.
If over 40% of participants label an icon with a word that meets the criteria, personally I would say that’s a really strong candidate.
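If the survey platform exports the free-text answers, tallying the pass rate against an umbrella category takes only a few lines. This sketch uses hypothetical responses and a hypothetical keyword list for the Deals example — you’d substitute your own export and your own accepted terms:

```python
# Hypothetical free-text responses to the out-of-context question.
responses = [
    "sale", "discounts", "a promotion", "shopping bag", "sale items",
    "star rating", "offers", "deals", "special offers", "favourites",
]

# Keywords we accept under the "Deals" umbrella (an assumption; adjust to your study).
accepted = {"sale", "deal", "discount", "promotion", "offer"}

def matches(answer: str) -> bool:
    """Return True if any accepted keyword appears in the answer."""
    text = answer.lower()
    return any(keyword in text for keyword in accepted)

pass_rate = 100 * sum(matches(r) for r in responses) / len(responses)
print(f"{pass_rate:.0f}% of answers met the pass criteria")  # 70% here
```

Keyword matching is deliberately liberal, in keeping with the pass criteria above — just skim the non-matching answers afterwards in case a valid response used a word you didn’t anticipate.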
It’s time to narrow things down. Let’s take our best-performing icons and pit them against each other in a multiple-choice question. We’re telling the participant what the icons mean, so they fully understand what they’re comparing.
Here are some symbols that may be used to represent the Deals page of a fashion retail website.
Please take a look at the images and choose which one you think best represents Deals.
Of course, the icon with the most votes in this survey could be considered the best option. However, it’s likely that some other icons weren’t far behind, and if so, we should do a bit more testing to be absolutely sure.
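A quick tally makes “not far behind” concrete. With hypothetical vote counts for three candidate icons, the share of votes shows whether the leader is a clear winner or whether a runner-up deserves further testing:

```python
from collections import Counter

# Hypothetical multiple-choice votes from the head-to-head survey.
votes = ["price_tag"] * 34 + ["star"] * 29 + ["percent_sign"] * 12

tally = Counter(votes)
for icon, count in tally.most_common():
    share = 100 * count / len(votes)
    print(f"{icon}: {count} votes ({share:.0f}%)")
```

Here the price tag leads, but the star is close enough that declaring a winner outright would be premature — exactly the situation the next round of testing is for.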
Let’s assume the leading two icons from the previous study were the Price tag and the Star. We shouldn’t assume that the visual design we’ve made for each one is the best option, so it’s a good time to head back to the drawing board and design some variants of each icon type.
In this survey, we’re going to use three questions to get a good idea of which concept works best.
Show the variants of one concept (in this example, the Star) and ask which one best represents the meaning:
Fashion retail websites often have a Deals section, where you can see what offers are currently available.
When looking at the navigation of a fashion retail website, which of these small symbols might you expect to see next to the word Deals?
Repeat exactly the same question as before, but showing the variants of the other concept (in this example, the Price tag).
Use your survey software’s logic options to present the participant with the icons they chose from the first two questions, and ask the same question for a final time.
By now we’ve likely found a winning icon, but let’s do our due diligence and check that users recognise it on the product using a click test. We can do this by taking a high-fidelity mockup of the product, removing the labels next to any icons on the page, and asking the participant to tap where they think the target section is.
Please imagine that you’re visiting a fashion retail website and are looking to buy some new shoes that are currently on sale. Please tap the area of the site where you would go to find them.
The survey software will tell us how many users interacted with the target area, so by looking at how many were successful as a percentage, we’ll be able to tell if our icon was recognisable and findable.
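Most click-test tools report this for you, but the calculation itself is just a percentage. A sketch with hypothetical numbers:

```python
# Hypothetical click-test results: total participants and how many
# tapped inside the target area (the unlabelled Deals icon).
participants = 50
hits = 41

success_rate = 100 * hits / participants
print(f"{success_rate:.0f}% of participants found the Deals icon")  # 82%
```

There’s no universal pass mark for click tests, so compare the figure against other icons on the same page, or against a labelled version of the same mockup, rather than judging it in isolation.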
It’s good practice to follow a click test with questions about how successful they felt they were in the task, and how difficult they found it. We can do this using a Likert scale, which allows the participant to answer on a scale from 1 to 7.
Follow-up question 1 — confidence
How confident are you that you tapped the correct Deals button?
Follow-up question 2 — difficulty
How easy would you say it was to find the Deals button?
The participant’s responses to these questions will inform us whether the icon was found because it communicated its meaning effectively, or because there were no better options on the page.
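Summarising the two follow-up questions is straightforward once the responses are exported. This sketch uses hypothetical 1–7 Likert responses; a high click-success rate paired with low mean confidence is the tell-tale sign that participants found the icon by elimination rather than recognition:

```python
from statistics import mean

# Hypothetical 1-7 Likert responses for the two follow-up questions.
confidence = [7, 6, 5, 7, 6, 4, 7, 6]  # "How confident are you...?"
ease = [6, 5, 4, 6, 5, 3, 6, 5]        # "How easy would you say it was...?"

print(f"Mean confidence: {mean(confidence):.1f} / 7")
print(f"Mean ease:       {mean(ease):.1f} / 7")
```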
Even though this guide is laid out in order, the reality is that sometimes your results will mean it won’t make sense to follow it chronologically. You’ll often have to go back a step or two because there was no discernible ‘winner’ from a survey. Don’t be afraid to mix and match some of the survey types — now and again your results will call for experimentation to find out the things you need to know. Let your inquisitiveness guide you.