We will face a double challenge as Generative AI progresses at astonishing speed. First, there is the risk that automation of certain tasks will blur the lines between currently well-defined roles; second, there is the challenge of staying relevant and productive by learning AI skills. AI will probably automate some tasks completely in the long run, but new opportunities will arise. What’s almost certain is that AI skills will represent a competitive advantage when job seeking. Let’s look at its potential for user research, delve into its uses and challenges, and find some valuable insights that will allow us to harness its remarkable capabilities fully.
In the remarkable realm of 2023’s technological advancements, deciphering facial expressions or sarcasm remains just another perplexing challenge for artificial intelligence (AI) tools and services. However, recent advances mean these tools are catching up fast.
Generative AI has the potential to augment and accelerate many fields. This article will focus on ChatGPT and its uses in User Research for digital products. Before we go deeper, let’s review the broader context.
AI language models, including ChatGPT, fit into the broader category of natural language processing (NLP) and conversational AI technologies. Generative AI, in turn, refers to AI, machine learning (ML), and NLP technologies that can automatically generate user flows, screen designs, content, and code.
Kevin Kelly, the co-founder of Wired magazine, predicted the emergence of artificial intelligence as a service in his 2016 book “The Inevitable”. Kelly envisions a future where AI becomes a widespread utility, much like electricity, accessible to everyone. He anticipates that AI will not be limited to specific, isolated applications but will be available as a general-purpose tool that researchers can employ to solve diverse problems and enhance human capabilities.
Fast-forward seven years after Kelly’s book was published: ChatGPT, developed by OpenAI, exemplifies the Generative AI category as the leading chatbot and the fastest-growing consumer application ever, amassing 100 million active users just two months after its November 2022 launch (source: Reuters).
However, we tend to be overly optimistic about the immediate impact of new technologies. Roy Amara, an American scientist and futurist, coined a principle that captures this propensity, now known as Amara’s Law:
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” — Roy Amara
Short-term overestimation has held true for many technologies: big data, 3D printing, the Internet of Things, cloud computing, augmented reality, virtual reality, brain-computer interfaces, autonomous vehicles, Web 3.0, blockchain, NFTs, cryptocurrencies, smart contracts and the metaverse, to name just a few. This principle will help us set more realistic expectations for new technologies.
The “hype bias” that Amara’s Law describes is reflected perfectly in the “hype cycle” concept, introduced by the research and advisory firm Gartner, which illustrates emerging technologies’ maturity and adoption.
According to the hype cycle graph, Generative AI has entered the “Peak of Inflated Expectations”, and it will probably take 2 to 5 years to reach the “Plateau of Productivity”.
The research VP for Technology Innovation at Gartner, Brian Burke, thinks that the consumer-facing capabilities of ChatGPT are just the beginning and that the enterprise uses for generative AI will be far more sophisticated.
The folks at Gartner also expect that:
- By 2025, over 30% of new drugs and materials will be systematically discovered using generative AI techniques, a considerable increase from the current situation where none are found using these methods.
- By 2025, large organisations will generate 30% of their outbound marketing messages with Generative AI, a significant increase from under 2% in 2022.
- By 2030, a major blockbuster film will be produced with AI generating 90% of its content, from text to video, starkly contrasting the 0% AI-generated content in films from 2022.
After a brief look at the broader context of Generative AI, let’s see how this can apply to User Research for digital product design.
The upside of using ChatGPT to improve your UX research is that it will supercharge the speed and quality of the work when used appropriately. Probably the best way to use the AI tool right now is to treat it as a “research buddy” (as Pragmatics Studio puts it on YouTube): use it to inform and refine your interview questions, learn how to ask better questions, create more engaging recruiting messages, summarise data and simplify data insights. The secret here is to evolve our capacity to ask the AI better questions, not to substitute it for human-based research.
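To make the “research buddy” idea concrete, here is a minimal sketch, assuming the OpenAI Python SDK, that asks the model to turn leading draft interview questions into open-ended ones. The model name, prompt wording and sample questions are illustrative assumptions, not something prescribed by this article.

```python
# A minimal sketch of using ChatGPT as a "research buddy" to refine draft
# interview questions. Requires the `openai` package and an OPENAI_API_KEY
# environment variable; model name, prompt and questions are illustrative.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

draft_questions = [
    "Do you like our new checkout page?",      # leading, yes/no
    "Would you use a faster search feature?",  # hypothetical, leading
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works for this sketch
    messages=[
        {
            "role": "system",
            "content": (
                "You are a UX research assistant. Rewrite each draft "
                "interview question so it is open-ended and non-leading, "
                "and briefly explain why the rewrite is better."
            ),
        },
        {"role": "user", "content": "\n".join(draft_questions)},
    ],
)

print(response.choices[0].message.content)
```

The researcher still decides which rewrites to keep and which suggestions to discard, which is exactly the “buddy” relationship described above rather than a replacement for human judgement.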
The downside is the double challenge that presents itself as Generative AI advances at astonishing speed: first, the risk of automation of certain tasks, alongside increased productivity, which might blur the lines between currently well-defined roles; and second, the imperative of staying relevant and effective by learning AI skills.
AI can potentially replace some tasks completely in the long term, but new opportunities will arise. What’s almost certain is that AI skills will represent a competitive advantage when job seeking, and in some cases, they will become a requirement.
When we look at user research, one crucial principle that’s independent of the tools used, including AI, is that some user research is better than zero research, and as Jared Spool often puts it:
“There’s a technical name for the absence of user research: Guessing”.
Regarding using ChatGPT for user research, as with every tool in a researcher’s arsenal, it depends on how you use it. Overall, User Experience as a profession has struggled over the last 20 years with a low entry threshold. This struggle is caused, in part, by the meteoric rise in the popularity and availability of UX and Product Designer roles, which contrasts with the low availability of high-quality training programmes. This low entry threshold has historically generated misunderstanding and confusion within business teams and the organisation’s leadership and has impacted the credibility and performance of UX input.
UX researcher-specific roles traditionally attract professionals with years of experience in psychology, data, statistics and academic research, so the standards tend to be higher. But still, there are plenty of jobs where the budget or the project’s scope will not allow for dedicated research resources, and a UX generalist will cover the whole spectrum, from research and analysis to design and testing.
Recently, the standards have improved, and the resources and communities available to UX professionals have also increased in quality, but low-quality UX deliverables still exist. This uneven distribution of quality is most evident among UX freelancing services, where it takes skill to distinguish between high-quality and low-quality offerings. Design agencies can run into the same issue when their UX teams are under budget and deadline pressure or are trying to impress the client, but the risk is lower than with freelance services. In-house UX research teams are more likely to avoid this issue, as their deadlines and budgets are far superior to those in freelance or agency work. In-house teams also have to maintain their reputation, since their relationship with the business is ongoing, so the delivery standards are potentially higher and the research has more depth.
With the introduction of Generative AI tools, the low-quality issue of UX services has escalated to new highs overnight. We can expect to see misuse of Generative AI tools, where ChatGPT is used to mimic users and instantly generate research artefacts such as personas, product requirements, use cases, interview questions, user flows, user pain points, feature sets, user stories, information architecture, competitor analysis and user journey maps that look authentic but, on closer scrutiny, are not viable and bring you back to the issue of “guessing”. Jared Spool calls this “design by autocomplete”.
If your business wants to contract UX professionals and their services, recruit people from reputable sources and ask for advice from people familiar with the UX landscape. And remember the old rule of thumb: if it seems too good to be true, it probably is.
AI can evolve and improve exponentially, disrupting several industries and impacting how tasks are done and roles are defined. But, as with every previous technological evolution, it also has the potential to create new and exciting paths.
Laughter is my favourite way to help release the collective built-up tension around the subject of AI, so let’s conclude with a joke, in the hope that, in the not-too-distant future, humankind and AI can look back and laugh about it together:
😆 Why did the AI get jealous of the user researcher’s new glasses?
Because it thought they were the latest empathy-enhancing technology!