The fine line between persuasion and manipulation: who will protect the user?
Ethics = moral philosophy.
It defines what’s good and what’s evil.
For a long time, Google claimed not to be evil. Then, a few years ago, they removed the “don’t be evil” motto from their code of conduct.
You might wonder: why did they change it? Were they just too idealistic when they wrote the motto?
Google now uses “You can make money without doing evil.”
Why is this phrase better than “don’t be evil”? Does “can make money” give them a bit more room to manoeuvre? Can is optional. Don’t is imperative.
In Google’s justification, not doing evil means not showing “flashy pop-up ads” to the user. Sure, that’s nice. But, running a business, and being ethical, entails a lot more than honest advertising: tax, lobbying, employee treatment, supply chain, design patterns, data privacy, sustainability, diversity, etc., etc.
The terms evil and ethics are wildly open to interpretation and mean something different to each individual.
This article isn’t about Google but about the tech world in general. How moral are we as an industry? What is the role of UX designers in ethical business? And who is responsible for ethics?
I recently read the book Evil by Design by Chris Nodder.
The book left a bitter taste. At some point, the author speaks about ethics but then goes straight on to give suggestions on how to be evil.
The book virtually tells you: “Bombs are bad! BTW, here’s a recipe for a Molotov cocktail!”
Creating awareness of dark patterns is valuable. It helps us not to fall into cleverly designed traps.
But Evil by Design isn’t written from the user’s perspective. It isn’t a tech survival guide. It’s an instruction manual for designers. It gives blatant commands on how to mislead the user, such as:
Ensure that users’ eyes are drawn to the items you want them to see, and away from items you’d rather they didn’t see. Move any mandatory disclosures far away from the path of least resistance. Use low-contrast text in “dead” areas of the screen (top right, bottom left) to hide information.
Remove any talk of opt-out activities from actual transactional points. Instead create a separate location (a “privacy center”) where you can obscure the true activities with general statements.
If you are caught doing bad things with user data, apologize profusely and then add more check boxes, explanations, and options to your privacy center, so it’s even harder to divine the correct settings.
Automate the process of spending money wherever possible so that it slips out of awareness for people. Use tokens to remove the sense of spending real money. Make a clear statement about the value of the item being auctioned/sold, but neglect to mention associated costs.
Keep ignorant people ignorant. In other words, don’t make people aware of what they don’t know. That way they will continue to overestimate their competence.
You get the point. This is just a glimpse. The book is 300 pages of imperative design advice.
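The “low-contrast text” tactic in the first quote has an objective counterpart: WCAG 2.1 defines a contrast ratio between foreground and background colours, and body text below 4.5:1 fails level AA. A minimal sketch of that formula in Python (the grey-on-white example values are my own):

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.1 formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG relative luminance: weighted sum of the linearized channels."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black on white)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white: maximum contrast.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))    # 21.0
# Mid-grey (#777777) on white: just below the 4.5:1 AA threshold.
print(contrast_ratio((119, 119, 119), (255, 255, 255)) < 4.5)  # True
```

“Dead-area, low-contrast disclosures” are, in other words, measurably hiding information; the same arithmetic that accessibility audits use can flag them.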
The book would have felt completely different if it had been written from an observational point of view instead of as advice. “Reward people early” could have been phrased as “Watch out: there are apps that reward people early.”
Another book with a similarly manipulative objective is Hooked, by Nir Eyal. This book is often included in lists of books every designer should read.
The book shares psychological models that help us understand how we could get people addicted to our digital products.
A fair part of the book covers BJ Fogg’s Behavior Model. If you want a user to take action (say, be seen as “cool” in their 40s), there needs to be motivation (they want a motorcycle), ability (they have the money), and a trigger (a divorce).
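Fogg’s model is often summarised as B = MAP: a behaviour happens when motivation, ability, and a prompt converge above an action threshold. A toy sketch, using a simple multiplicative threshold of my own invention (Fogg’s model is a conceptual curve, not this formula):

```python
def behavior_occurs(motivation: float, ability: float, prompted: bool,
                    threshold: float = 0.5) -> bool:
    """Toy version of BJ Fogg's B = MAP model: a behavior fires only when
    a prompt arrives while motivation x ability is above the action line.
    The multiplicative threshold is my own simplification, not Fogg's math."""
    return prompted and (motivation * ability) > threshold

# High motivation (wants the motorcycle) but no money: nothing happens.
print(behavior_occurs(motivation=0.9, ability=0.1, prompted=True))   # False
# Motivation, ability, and a trigger (the divorce) line up: behavior.
print(behavior_occurs(motivation=0.9, ability=0.8, prompted=True))   # True
```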
Chris Kernaghan explains the issue well in his article “Should we be creating addictive products? Exploring the paradox of Hooked”:
The book presents a framework for creating products that are habit-forming, which theoretically has the potential to increase user engagement and retention. But the reality is that this can also promote addiction and dependency, particularly for vulnerable users. Creating products that are intentionally designed to be addictive raises huge questions.
Yes, designing for addiction can be very dangerous. Sadly, it can change people at their core, as we’ve probably all seen in our personal circles.
Although there are some fair concerns about Nir Eyal’s book, he also wrote an antidote book: Indistractable: How to Control Your Attention and Choose Your Life. Fair enough.
Advice on nudging people is obviously not something from the tech era only. Persuasive and manipulative practices have been part of society for ages. It’s basic marketing. Or maybe, advanced marketing.
Some of the essential books on human behaviour and influence are still popular today and can be easily applied in the tech world.
Dale Carnegie’s 1936 book How to Win Friends and Influence People is still on many people’s reading lists. It seems like half of Medium is writing about it.
George J. Ziogas wondered why the book is so popular in his article “What Is “How to Win Friends and Influence People,” and Why Do People Talk About It All the Time?”
Carnegie’s book has long been viewed as a classic in the field of persuasion and negotiating tactics. […] the author suggests that getting what you want and need is possible by helping other people realize you share the same goals.
[…] some of the most important [tactics] are to avoid pointless (and unwinnable) arguments; show respect for others’ opinions; admit when you’re wrong; find ways to help other people say “yes” to your requests or ideas; try to see things from others’ points of view; and dramatize your ideas. Using these techniques, Carnegie offers non-confrontational ways to persuade people to follow the courses of action you want them to.
If a book written almost a century ago is still this popular, it must be compelling.
Another iconic persuasion book is Robert Cialdini’s book Influence: Science and Practice. This book has been used as the foundation of many manipulative design practices.
Websites like booking.com, Amazon.com, and Easyjet.com scream Cialdini. They use the main concepts Cialdini explores in the book.
Those concepts are:
- Reciprocity — You give something small to your user so they feel morally obliged to give something back (hence, buy your product)
- Scarcity — People badly want something that others can’t get
- Authority — People blindly follow the advice of credible experts: become one
- Consistency — When people start with something small, chances are they’ll eventually continue with something larger that aligns with the original small commitment (foot in the door)
- Liking — We like people who are similar, we like compliments, and we like to cooperate towards mutual goals: make sure you and your user are emotionally aligned
- Social proof — If the mob has an opinion, it must be right
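In booking-style interfaces, these principles typically surface as a handful of copy templates. A purely illustrative sketch: every string and number below is invented, not taken from any real site.

```python
# Illustrative only: how Cialdini's principles surface as UI copy.
# All templates and numbers below are invented, not scraped from real sites.
def scarcity(rooms_left: int) -> str:
    # Scarcity: imply others can take it away from you.
    return f"Only {rooms_left} rooms left at this price!"

def social_proof(viewers: int) -> str:
    # Social proof: the mob is interested, so you should be too.
    return f"{viewers} people are looking at this hotel right now."

def reciprocity(gift: str) -> str:
    # Reciprocity: a small gift creates an obligation to give back.
    return f"Here's a free {gift} with your first booking."

print(scarcity(2))
print(social_proof(14))
```

Whether such banners are persuasion or manipulation depends on one thing the code can’t show: whether the numbers are real.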
I’m sure you recognise most of these patterns. You might have created some defence mechanisms against these concepts. That doesn’t mean that you are not also frequently subconsciously influenced by them.
A monumental work in the field of behavioural psychology is the book by Nobel laureate in economics Daniel Kahneman. His book Thinking, Fast and Slow explains the two systems that drive our decisions.
Kahneman doesn’t present hacks on how to manipulate us. Instead, he seeks to explain our behaviour. He shows that we are intrinsically lazy in our decision-making and therefore make bad, irrational choices.
A few of his concepts are widely known. You might identify some of them in contemporary marketing and tech:
- Mere-exposure bias — One strategy to establish trust with your audience is to bombard them with your brand (Coca-Cola) in a way that is impossible to ignore. Over time, this can create a sense of familiarity and reliability, potentially surpassing the importance of truth.
- Priming — If you constantly show a Red Bull next to a snowboarder, or a Heineken next to a golf player, you learn to associate the object with the activity. The next time you’re on a golf course, if you ever are, you’ll subconsciously want a Heineken. Although I highly advise you to pick any other beer brand.
- Anchors — I’m currently in Cameroon and ate in a “posh” restaurant yesterday. I had to pay 14 euros for my meal. I was shocked. Obviously, that’s not expensive by the Swiss standards I’m used to, but I had been paying less than 5 euros for most of my meals in recent weeks, so I anchored my expectations to that reality, not to my European standards.
Kahneman’s book is full of models like this, including loss aversion (a loss feels about twice as strong as an equivalent gain), the endowment effect (we overvalue what we are invested in), and tunnel vision (what you see is all there is).
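The “twice as strong” claim comes from prospect theory’s value function. Using Tversky and Kahneman’s 1992 parameter estimates (curvature 0.88, loss-aversion coefficient λ = 2.25), a short sketch:

```python
ALPHA = 0.88   # curvature for gains (Tversky & Kahneman, 1992 estimates)
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss-aversion coefficient: losses loom roughly 2x larger

def prospect_value(x: float) -> float:
    """Prospect-theory value function: concave for gains, steeper for
    losses, so a loss hurts more than an equal gain pleases."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

gain = prospect_value(100)
loss = prospect_value(-100)
print(round(abs(loss) / gain, 2))  # 2.25: the "twice as strong" in the text
```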
Mark Looi explains the effects of all these biases in his summary of Kahneman’s book.
Our thinking is riddled with behavioral fallacies. Consequently, we are at risk of manipulation not usually of the overt kind, but by nudges and small increments. Indeed we have learned that by exploiting these weaknesses in the way our brains process information, social media platforms, governments, media in general, and populist leaders are able to exercise a form of collective mind control.
It’s also clear that the bugs in our personal thinking systems are being exploited faster than patches can be applied!
The book Evil by Design ends with a justification for applying all these “hacks”:
“Machiavellian” is used to describe someone who aims to deceive and manipulate others for personal advantage. However, Niccolò Machiavelli just used his observations of contemporary and historical affairs to suggest the courses of action that were most likely to help 16th-century statesmen (“merchant princes”) succeed. […] He was interested in setting down the facts and leaving the actions and moral judgments to someone else.
This book gathers observations […] to suggest the courses of design action that are most likely to help modern-day entrepreneurs (the merchant princes of Silicon Valley) succeed.
That’s one way to see it.
The author continues his epilogue by illustrating how we can help children and people with Alzheimer’s overcome their fears with lies.
So perhaps persuasive techniques that use deception or appeal to subconscious motivations can have positive or even ethical outcomes. […] What you must decide is how far to push the benefit in your direction rather than in your users’.
Nodder ends the book by sketching the possibilities we have: being evil (designing for the company’s gain only), being commercial (benefits both the company and the user), being motivational (only benefits the user), or being charitable (benefits society).
The model is intriguing. However, the author doesn’t dismiss the notion of evil design. In fact, the book is just a step-by-step guide on exploiting the user.
You might wonder: shouldn’t ethics be intrinsically integrated into how we create products and serve society?
This question is very likely to cross many of our minds. Let’s explore the various possibilities.
Good to know: ethics is a broad concept that touches on many things. Even trivial things like whether printing something private on a company printer is stealing.
In this article, we only focus on the ethics of user manipulation, although many of the things could be applied to the entire ethics domain.
The UX designers?
I don’t mean this in a pedantic way, but designers probably have the most potent humanistic competencies in the tech world. They understand best the impact of entrepreneurial and product choices on the end user.
This would make them suitable to be responsible for ethics, but the problem is that they don’t have enough decision-making power. In most organisations, UX is a service, not a driver. UX decisions will always be overruled by product, which has more business-performance-driven power and can thus gravitate more quickly towards “evil”.
Some companies have entire ethics teams. Even these teams are not taken seriously. We’ve recently seen a fair few companies significantly reducing the capacity of these teams, or just dismantling them altogether.
We, UX professionals, definitely have a responsibility to raise awareness, but we can’t be held accountable for ethics.
Product, the CEO, or any other leadership role?
It would be nice if those in charge of a company’s entire operations were responsible. It would be very logical. However, the chiefs often have a conflict of interest. They work to satisfy revenue targets, venture capitalists, or Wall Street. Usually, the short-term bottom line is what matters most to them. “Maximising shareholder value.” The user is the one paying the price for this money-driven management, literally.
History is full of examples where the C-suite makes conscious decisions to prioritise capital gain over the well-being and safety of the users.
The latest AI debates illustrate this again.
Google’s AI tool Bard was called “a pathological liar” by its own staff. It gave answers on scuba diving “which would likely result in serious injury or death.”
Nevertheless, Google’s leadership wanted to go ahead with the launch at all costs.
The staffers who are responsible for the safety and ethical implications of new products have been told not to get in the way or to try to kill any of the generative AI tools in development […] Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings.
Gennai (Google’s AI governance lead) overruled a risk evaluation submitted by members of her team stating Bard was not ready because it could cause harm.
— Bloomberg, Google’s Rush to Win in AI Led to Ethical Lapses, Employees Say
So yeah, the responsibility for ethics shouldn’t be in the hands of CEOs and the like. They are simply pushed too hard by competition and financial incentives. Ethics takes a back seat in their world.
The Chief Ethics Officer?
Some companies have introduced the role of Chief Ethics Officer. This sounds really appealing. The first time I read about it, I thought, I want that job!
However, many companies with such a role have a shit-ton of blood on their hands. For them, it’s a nonsense role for the optics.
I personally believe that this role is often just window dressing. But if it isn’t, they at least don’t manage their image well:
The existence of roles like Salesforce’s chief ethics and humane use officer or Facebook’s director of responsible innovation can create the appearance of checking a box that they’ve addressed responsible tech, or the perception that responsibility for ethics is owned by solely them. And that would be a mistake.
— Sarah Drinkwater, To build responsibly, tech needs to do more than just hire Chief Ethics Officers
The role is also called Chief Ethics and Compliance Officer. Here, the problem becomes visible. Ethics is about morals. Compliance is a legal field. The two don’t go well together. Most of these CECOs are former lawyers, appointed in industries like tobacco, oil, and gambling.
I believe, and hope, that the role of the Chief Ethics Officer has a serious place in tomorrow’s business structure. I just hope that the title hasn’t been spoiled yet.
For the moment, the role isn’t mature enough to be able to rely on it. Maybe a single role is not the right way to go anyway.
A chief ethics officer would be too distanced from product and design orgs, where most ethical decisions are made; their duties would come into conflict with those of the CFO, who is already on the hook for financial ethics; and the seniority of the role would mean this person would be seen as an ethical arbiter, an oracle who passes ethical judgment. […]
A successful chief ethics officer would equip teams to make their own decisions, not bestow judgment from above.
— Cennydd Bowles, Thoughts on chief ethics officers
The payment providers?
Companies like Mastercard, Visa, PayPal, etc., have enough power to demand ethical design. They quickly scored some public-image points when the Pornhub scandal became public. They stopped facilitating online payments and basically blocked the website from running a profitable business.
They used a fairly abstract clause in the contract that gave them enough wiggle room to suspend the payments. This is something they could potentially do for companies that apply evil design. But they won’t.
They used Pornhub as a moral example. The platform was an easy target because of the public debate and the general sensitivity of the adult industry.
If credit card providers were consistent and suspended firms that use children in their supply chains, they would have to suspend large numbers of companies.
That would be a slippery slope.
Payment providers obviously have some basic business requirements. They set a bar, but not one that is too high. They would, in theory, have the power to be our moral compass. But they won’t be eager to have this responsibility. It’s probably also not their role.
Perhaps that’s the role of…
The regulators?
Probably our last resort. Some might not like to hear it, but we must use the R-word.
We need to Regulate!
Companies are not going to take responsibility for the harm they are causing. History shows this time and time again. Of course, some companies have good intentions, good products, and good leaders.
But many don’t, and we cannot tolerate this.
Misleading users is not OK. It’s causing financial, emotional, and even physical harm. Free-market capitalism is not more important than human dignity.
The European Union agrees with this. It has been putting regulations in place to put tech companies and digital products on a leash:
- e-Commerce Directive, 2000 — obliges webshops to be clear about who they are and offer complaint procedures.
- Unfair Commercial Practices Directive, 2007 — tackles various scams like misleading advertising, hidden fees, and aggressive sales tactics.
- Consumer Rights Directive, 2011 — further fights deceptive patterns like pre-checking options for additional stuff you don’t want. It also includes cancellation and refund rights, transparent-pricing demands, and obligations to show delivery-time information.
- Web Accessibility Directive, 2016 — demands that public-sector websites comply with WCAG 2.1 and additional accessibility requirements.
- General Data Protection Regulation (GDPR), 2018 — prevents companies from frantically harvesting data they don’t need and gives us the right to access and delete the data that has been collected.
- The Digital Services Act, 2022 — demands that platforms explain how it is possible that you see an EasyJet ad 5 seconds after you texted your brother that you want to go on holiday.
- The Digital Markets Act, 2022 — various rules that should protect the rights of small webshops and prevent the Amazons of this world from absorbing all smaller companies.
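The Consumer Rights Directive’s ban on pre-ticked boxes translates into a simple implementation rule: optional extras default to off, and the total only counts explicit opt-ins. A minimal sketch (product names and prices are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Extra:
    name: str
    price: float
    selected: bool = False  # never pre-ticked: opt-in must be explicit

@dataclass
class Checkout:
    base_price: float
    extras: list[Extra] = field(default_factory=list)

    def total(self) -> float:
        # Only extras the user explicitly opted into count towards the total.
        return self.base_price + sum(e.price for e in self.extras if e.selected)

order = Checkout(base_price=49.0, extras=[
    Extra("Travel insurance", 12.0),
    Extra("Priority boarding", 8.0),
])
print(order.total())             # 49.0: nothing is pre-selected
order.extras[1].selected = True  # the user ticks the box themselves
print(order.total())             # 57.0
```

The dark-pattern version flips a single default (`selected: bool = True`), which is exactly why the directive targets it.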
On top of all this, the EU also has consumer protection enforcers. They can take legal action against companies that engage in deceptive practices.
The USA has a few regulations too, such as the Federal Trade Commission Act (FTC Act) and the Telemarketing Sales Rule (TSR).
However, the cultural differences between the continents are enormous. In the USA, industries or states are often expected to self-regulate. In the EU, regulations apply across all member states and are more detailed and enforceable.
The EU has high penalties or sanctions for non-compliance, applied top-down. The US relies heavily on legal action to enforce consumer protection laws.
This article isn’t a promotion of the EU, but I think sketching the regulations for context’s sake is essential.
“My god, stop being so European. People don’t have to be patronised. If they are too stupid to fall for those traps, it’s their own issue, not the company’s.”
Who should be protected, and who shouldn’t? Is it OK to trick a child into buying something? What about someone with Alzheimer’s disease? Or a cognitively impaired person? Is it OK to mislead them?
Where do we draw the line?
Who can be considered smart enough to be able to handle deceptive design?
Some digital products are created by teams of UX specialists, psychologists, and other con artists who design patterns to nudge people into doing things they don’t want to do.
Is this a fair battle? Should the responsibility for spotting these dark patterns rest with any random individual?
You might think that the danger of cigarettes, casinos, or time-sharing flats is clear to everyone. But a big part of our population already falls for these scams. The tech world is much more subtle in its approach, and, therefore, even more misleading.
It seems like the world is always in a cat-and-mouse state.
Tom, the Twitter cat, and Jerry, the Jurisdiction mouse.
Companies come up with something “clever”, and governments respond a while after.
The EU’s regulations are a start, but they are hard to digest for any app creator. I’ve worked on digital products with a fair number of specialists and executives. Very few knew about the regulations they were subject to.
Companies are simply not aware of which regulations they ought to apply. You might start to think that ethics and compliance go well together after all. Understanding what is a dark pattern, and what is not, is too complicated for companies.
The world of accessibility is a bit clearer. It has well-defined guidelines (WCAG) everyone can simply apply. But even these guidelines aren’t enough to ensure fully accessible products.
Maybe we need some sort of global WCAG for dark patterns and ethics: WDDG, Web Deceptive Design Guidelines. We could define several levels and allocate the various industries to the appropriate levels. Imagine: tax systems, banking, etc., must comply with the strictest regulations.
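Purely as a thought experiment, such a WDDG could mirror WCAG’s conformance levels. Everything below is hypothetical; none of these levels, descriptions, or industry mappings exist anywhere:

```python
# Purely hypothetical: what a "WDDG" conformance mapping might look like.
# Levels mirror WCAG's A/AA/AAA; every name below is invented for illustration.
WDDG_LEVELS = {
    "A":   "No outright deception (no hidden costs, no fake scarcity)",
    "AA":  "Level A, plus symmetric choices (opting out as easy as opting in)",
    "AAA": "Level AA, plus audited protection for vulnerable users",
}

INDUSTRY_MINIMUM = {
    "gaming": "A",
    "e-commerce": "AA",
    "banking": "AAA",
    "tax systems": "AAA",
}

def required_level(industry: str) -> str:
    # Industries not explicitly listed fall back to the baseline level.
    return INDUSTRY_MINIMUM.get(industry, "A")

print(required_level("banking"))  # AAA
```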
How this would look is something worth exploring in another article.
We are philosophising about the future.
We can choose to be Machiavellian. The Italian philosopher believed that leaders should use any means necessary to maintain power and control, even if it meant being cruel or immoral.
Or we can follow a wise Dutchman who lived in the same period and contemplated similar challenges: Erasmus. He believed in the importance of morality. He advocated for rulers to be virtuous and to govern with compassion and wisdom.
Call me Dutch, call me naive, call me idealistic, but I’m definitely team Erasmus.
Responsibilities for ethics aren’t defined. Ethics isn’t even defined.
What can we do today, as UX practitioners? For a start, we can challenge the demands of our product leaders. Does your company want you to design something that deliberately misleads the user? Scrutinise it. Find out how the design might put the user at risk. Discuss it. Speak about reputation damage, churn rate, NPS, and… moral choices. Dark patterns might sound appealing to product leads as a way to “hack” growth, but they can damage business and brand value in the long run.
Be the ambassador for the user, not a blind order-taking design slave.
Good luck with debating the evil corporate product demands. The user relies on you.
And if you can’t win the debate… there are plenty of companies you can work for that actually want to run an honest business.