There’s a lot of talk right now about whether we should be incorporating artificial intelligence into the UX writing and content design process, but not much talk about how.
This has led to something of a crisis of confidence in the industry. Am I going to be out of a job soon? What happens if I don’t learn these skills quickly? Are AI models going to just take over my role?
It’s also hard to separate real, useful advice about AI from the noise generated by people looking to cash in on a quick trend. That makes it easy to dismiss all advice regarding AI—and given how incredibly popular these tools are becoming, that would also be a mistake.
We fear something more if we don’t understand it. So UX writers and content designers should understand how these models work, what they can and can’t do, and how we can incorporate them seamlessly into our work so they aid us, rather than confuse or slow us down.
Far from being replaced, we actually think these tools are going to make your job as a UX writer or content designer much easier when used in the right ways.
How do large language models work?
Large language models (LLMs) are a form of AI that takes a huge amount of text and tries to identify relationships between those pieces of text. This is incredibly simplified, so if you’d like a deeper explanation check out this video from Google.
Basically, here’s what you need to know: LLMs aren’t the type of science-fiction AI you see in movies. It isn’t a human consciousness. It’s a very, very complex set of rules that govern the relationships between words, phrases, and much longer pieces of text.
LLMs like OpenAI’s GPT or Google’s Bard take a piece of text, then try to generate the next word based on that text. That means the more context you give it, the better the response will be.
For example, if I write the sentence, “Let’s go to the…” then the next word could be anything. But if I write, “Let’s go to the movies at…” there’s a much smaller set of possibilities for what the last few words of that sentence could be.
This is critical: LLMs aren’t “thinking” as you or I might. These networks are based on the relationships between words. That means the more context you give them, the better the responses will be.
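The next-word idea can be made concrete with a toy sketch. The snippet below counts which word follows each word in a tiny made-up corpus and predicts the most frequent follower. This is a deliberately simplified illustration of “predict the next word from context”—real LLMs use neural networks trained on vast corpora, not raw counts—and the corpus is invented for the example.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in a
# tiny corpus, then pick the most frequent follower. Real LLMs learn
# these relationships with neural networks over enormous datasets.
corpus = (
    "let's go to the movies at night . "
    "let's go to the park at noon . "
    "let's go to the movies at noon ."
).split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "movies" follows "the" twice, "park" once
```

Even in this toy version, more context changes the prediction: “the” alone points to “movies,” while a different preceding word would point elsewhere. That’s the intuition behind why richer prompts produce better responses.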
Starts to sound a lot like good UX writing and content design, doesn’t it? That’s because UX writers and content designers are natural allies for LLMs. The discipline of “prompt engineering” or “prompt design” is based on perfecting a set of instructions given to LLMs…but it turns out that UX writers and content designers have had those skills all along.
Professionals with a nuanced understanding of language—UX writers and content designers—have a natural advantage when using AI to do their work.
How can AI tools help in the UX writing process?
There’s so much noise around AI that it’s hard to understand how helpful these models can actually be. Additionally, a lot of articles about ChatGPT or Bard tend to focus on what’s possible with these models, rather than what’s useful.
Sure, it’s cool that these models can structure information in a table, or generate “user persona” documents. But detached from a design process grounded in real, specific products with real, specific users, those outputs don’t mean anything.
Does that mean they can’t help us? Absolutely not. AI tools are exciting because they can help us for a few reasons:
- They can help us brainstorm ideas at scale
- They can help us create templates or structure key information
- They can help us with small pieces of work that take a lot of time
These aren’t the only examples, but they’re useful for understanding how to use AI in the content design and UX writing process. The key thing to remember is that these tools are best used not as a replacement for a step in the design process, but as targeted, specific help when we’ve already spent time and effort researching our users, working in a team, and understanding the goals.
At best, the informed use of AI allows us to become even more productive. At worst, AI slows us down and makes a worse experience for our users.
Let’s examine the 3 ideas we mentioned above: brainstorming ideas at scale, structuring information, and helping with some small pieces of work. Importantly, let’s examine tactics to use and avoid.
Brainstorm ideas at scale
During the ideation process, it’s helpful to spit out as many ideas as possible in order to work through the most obvious so we can find the inspired ideas. Then we can start drawing conclusions and parallels—finding connections based on our (human) experience that we couldn’t see before.
Why don’t we ask GPT to help us with that? Let’s use this prompt: “We’re creating an app to help people find parking spaces. Give us 50 ideas for what we could call a feature that helps us find the nearest available spot for an extra payment.”
Alright, this is a good list of ideas to start drawing connections between. There’s only one problem…
Ideation is usually never that simple. We work in complex organizations, so we probably have some extra research in our back pocket about what our potential customers do or don’t like. Let’s try again, adding some hypothetical context to the prompt: “We need to add some context here from customer research. Our customers don’t like the word ‘park,’ because it makes them feel stressed. Our research also shows users don’t like the word ‘premium.’”
Alright! Now we’re getting something a little bit more useful. “SwiftSpot” sounds like it could be good.
A key point to remember: it’s important to scale our output. There’s no point in spending time crafting prompts and supplying context if we could have spent that time writing one or two ideas ourselves. Instead, go large: ask for dozens of ideas, especially during the brainstorming phase.
Create templates or structure information
Let’s say we’re in the handoff phase of design, and we need to make sure we have a database of strings for our developers.
Let’s ask GPT: “I’m approaching a design handover and need to give my developers strings for a user interface. Please create a template for organizing my UI text.”
Okay, this is useful enough. But although it’s a template, what we really need is a spreadsheet. And we need to give it some context about extra variants and tones. Let’s ask again with that context: “Can you please make this a spreadsheet template, and include columns for text variants 2-4?”
Okay! That’s getting better. This is starting to look like something we might actually use.
Keep in mind, we’re still giving very generic input. You’ll need to consider the exact types of variants, parameters, and other inclusions you’d need in an output like this.
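To make the target concrete, here’s a minimal sketch of the kind of strings spreadsheet such a template might become. The column names, string IDs, and example row are illustrative assumptions, not a standard—your team’s handoff format will differ.

```python
import csv
import io

# Hypothetical UI-strings handoff sheet: one row per string, with
# columns for up to four text variants. Column names are assumptions
# for illustration, not an industry standard.
columns = [
    "string_id", "screen", "element",
    "variant_1", "variant_2", "variant_3", "variant_4", "notes",
]

rows = [
    {
        "string_id": "onboarding_payment_title",
        "screen": "Onboarding",
        "element": "Title",
        "variant_1": "Add a payment method",
        "variant_2": "Set up payments",
        "variant_3": "",
        "variant_4": "",
        "notes": "Emphasize security",
    },
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=columns)
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue())
```

A CSV like this drops straight into a spreadsheet tool, and the variant columns give developers one row per string rather than scattered copies.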
Small pieces of work that take a lot of time
Okay, we’re deep into designing our app. Let’s say we need to create some variants for a piece of UI text that comes during an onboarding phase. We need to ask people to give us their credit card information. We need a title and a description.
Let’s see what happens when we give it a generic prompt like, “I’m writing an onboarding screen. I need some text for users to give their credit card information. Give me a title and a description and a CTA.”
Generic prompt. Generic result.
But if we’re at this point in the design process, we will have done a lot of research. We’ll know exactly what we need to say, and we’ll have information about the tone in which that information should be conveyed.
Importantly, we also need to remember that AI is useful for scaling our output. There’s no point in writing prompts to get one or two examples when it would take just as much time to write them ourselves. We need to write well-structured, informed prompts that give us a large number of strings to choose from. Let’s try again, but let’s make our prompt more specific.
Let’s ask for three outputs: a title, a description, and a CTA. Let’s give each one a character limit, and let’s also provide some style guidance. We know users are concerned about security, so we want to emphasize that in each element.
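One way to keep such prompts consistent is to assemble them from the research and constraints you already have. The sketch below is a hypothetical helper—the field names, limits, and example values are invented for illustration, not taken from any real style guide or API.

```python
# Hypothetical prompt builder: assembles a structured prompt from a
# task, tone, constraints, and research notes. All field names and
# example values are illustrative assumptions.
def build_prompt(task, tone, constraints, research_notes, n_options=10):
    lines = [
        f"Task: {task}",
        f"Tone: {tone}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Customer research:",
        *[f"- {note}" for note in research_notes],
        f"Give {n_options} options for each output.",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    task=(
        "Write a title (max 40 chars), description (max 120 chars), "
        "and CTA (max 20 chars) for a credit card entry screen."
    ),
    tone="Reassuring and plain-spoken",
    constraints=[
        "Emphasize security in every element",
        "Avoid the word 'premium'",
    ],
    research_notes=["Users are anxious about sharing card details"],
)
print(prompt)
```

Because the research notes and constraints live in one place, every teammate sends the model the same context, and asking for ten options per output keeps the scaling benefit discussed above.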
What do we get? A much more user-friendly and immediately practical set of results.
So we can see there’s a direct relationship between the nuance we put into our prompts and the results we get. We can also see that these tools are no substitute for the context that we, as humans, gather and own throughout the design process.
What does this mean?
We, as UX writers and content designers, had to provide the AI tool with the right context, prompt structure, and instructions to get useful information back. That means we need to stay embedded and informed throughout the entire design process—including interacting with human customers—to get the best output.
Whether you’re using GPT-enhanced plugins to directly create text in Figma or instantly transcribing and synthesizing customer research, just remember: AI is a natural ally for UX writers and content designers.
Use it wisely, use it to scale your output, and use it to augment the design process—not skip it.
Patrick Stafford is co-founder and CEO at UX Content Collective. Connect with him on LinkedIn!