Intelligent interfaces of the future. Making things that think. | by Christopher Reardon

Here’s an example scenario illustrating the potential downsides of having an ever-present, highly intimate AI co-pilot.

“Imagine an ever-present invisible AI that can see and remember everything you do, having intimate knowledge of your every word and action so that it can readily provide contextual information and services that are convenient, personalized, and unintrusive. On the surface, this system would be the ultimate assistant, transforming your life and freeing up your time so that you can focus on the essential things in life.

With that level of data, models could predict your entire day, influencing your every decision without your awareness. Those with the money to afford an AI-enabled assistant would be given a golden ticket in life, relentlessly widening the divide between rich and poor. People would live in a world of hyper-personalized experiences that, on the surface, would feel magical. People who share more data get more AI ‘superpowers,’ furthering the cycle. People who don’t have access to the AI will still be recorded, with no means to opt out of its data gathering. For AI adopters, however, their worldview and expectations of people in general will be altered by the thousands of overt and indirect recommendations they receive daily, separating them from those without, degrading society’s shared beliefs and common ground.”

Blue teams (teams focused on positive solutions) could unpack the scenario, categorize the risks described, and look for ways to mitigate the adverse outcomes, asking “How might we” questions to ensure that ever-present AIs can’t harm people or influence them in ways that aren’t beneficial to their wellbeing or the stability of society.

Product design will be less about sweating the pixels on a UI, worrying about components in the pattern library, or the content decisions that went into your navigation system–although all of those things will still be necessary for some time to come. AI startups are already augmenting, and sometimes replacing, the designer’s role in many typical product design tasks, and these tools will only accelerate and improve with time.

Designers must evolve, leveraging their empathy for users and parlaying it into a new role, one driving strategic design decision-making that influences every function involved in developing, managing, and governing intelligent systems today. To protect society from a dystopian future, designers must facilitate collaborations across organizations and build alliances with ecosystems of experts and stakeholders they haven’t worked with before (like ethicists, civil & human rights advocates, compliance experts, philosophers, psychologists, behavioral economists, and anthropologists) so that organizations can create informed, coherent strategies and operating models that equitably benefit all communities. Designers must learn to focus on systems design rather than design systems, because AI is built and managed across many competing agendas, systems, processes, and people; design systems themselves, meanwhile, will ultimately be overseen by AIs.

Designers are used to navigating and adapting to changing design challenges because they hold a human-centered approach in all that they do. This anchoring focus can see designers through the disruption of intelligent copilot systems. The future of high-impact design will be artfully designing the intelligence (AI mind) of the service, which happens long before any pixel hits the glass. There are no rule books, so design leaders must focus on bringing together the right stakeholders to make informed decisions on how the AI should behave, what use cases it should be developed against, how it is trained, how to handle when things go wrong, what agency its users will have over their data and the recommendations the AI provides, and whether its value outweighs its inherent drawbacks. Designers will have to consider whether full automation of some things outweighs having people do things the old-school way, because sometimes people should be left to struggle and fail in order to grow (take education as an example). AIs will enable product teams to manage fleets of personalized apps across different communities, ensuring cultural norms, security, and human agency. To prepare, designers and their organizations should:

  1. Question fundamental assumptions about what problems the world needs to solve and whether AI is the right fit for those solutions. If it’s a universal problem, and the solution can be readily available and work for all, then you might be on to a winner.
  2. Question whether their team’s mission aligns with society’s and the planet’s well-being. Do your revenue goals bias how you might leverage technologies that can influence society?
  3. Expand the concept of the designer’s role beyond someone who solely focuses on the UI of a product. Companies need to bring to bear a liberal arts approach to AI strategic planning, ensuring that engineering and data scientists are well informed about the ethical, legal, and socio-psychological impacts the services they develop can have. Design thinking can empower discussions, workshops, and tactical implementations that synthesize and integrate the requirements from numerous perspectives to ensure equitable outcomes.
  4. Adapt and learn new skill sets to ensure AI is correctly and ethically tuned to benefit users, stakeholders, society, and the planet. Question — should fewer people be working on ‘product’ so more people can work on the ethical decision-making frameworks needed to manage intelligent systems?
  5. Consider how design thinking might ensure solutions are equitable and available to all. If everyone needs a $700 phone and broadband internet, you will likely perpetuate inequities that have marginalized communities for centuries.
  6. Consider implementing a top-down strategy that aligns ethical, civic, and legal considerations with business goals so that product teams have a clear line of sight to success. Think about the exponential learning curve the AI of today is on, project what new benefits and harms might come, and work backward from those that inform today’s decisions.
  7. When designing AI-enabled products, be mindful of conflating user convenience with doing good in the world. Life is about learning how to deal with obstacles.
  8. Develop robust risk-assessment methods and decision-making frameworks to ensure teams consider both the benefits and challenges of managing AI-enabled products. Create, optimize, and sequence red-team processes and evaluation methods so that product teams maintain momentum while doing their utmost to protect people and society.
  9. Focus on current AI responsibility and privacy concerns, become familiar with regulatory requirements (especially from the EU), and put off visions of the future until the organization’s operational maturity is ready to manage the complexities of “thinking machines.”
