Why humanization of human-to-machine dialogues is one of Generative AI’s biggest wins.
25 years ago, Google launched its most famous product: Google Search. An innovative paradox: technically a milestone, for human language a stone-age regression. We speak to machines in their language instead of machines communicating with us in ours — much like imitating the animal sounds of our pets. The term Human-centered Design was coined by Stanford professor John E. Arnold and has since been used across psychology, product design, and anthropology. Human-centered describes a solution-oriented approach that considers and centers human perspectives and emotions.
And there it is, this new search: developed from a machine-centered approach.
Well, this machine-centered, interaction-based approach to language is not a Google invention; it has, of course, a much longer history. In 1973, the Xerox Alto opened its machine eyes to a human-centered world and paved the way for the graphical user interface (GUI) to reach the mass market. The human-to-machine relationship moved into the center of our social lives, and language within that relationship moved to the outer margins.
Are you familiar with the Turing Test? In 1950, Alan Turing devised a test in which a human questioner communicates in turn with a human and a computer without seeing either of them. The test marks the technological turning point at which we humans can no longer tell whether we are chatting with a machine or a human.
It could be that we are about to cross this inflection point.
The Dartmouth Conference of 1956 is considered the birth of Artificial Intelligence. In 1966, the first chatbot, ELIZA, was launched. Twenty years later, NETtalk learned to talk, and in 2011, IBM’s Watson beat a human on a quiz show. Generative AI solutions are not random pop-ups but the logical consequence of evolutionary technological development. And all of this is wrapped in an availability heuristic: these developments have been there all along. They have just now become accessible, visible, and usable to over 5 billion people in one fell swoop.
So if we can no longer distinguish machines from humans, in many cases this has to do with the parallelism of their linguistic output. We probably don’t need an excursion into linguistics to understand that today, more than ever, language can be both a connector and a separator. Language provides expression through content and emotion, and it creates context in a sea of words.
Just as today’s chatbot conversations feel mostly binary, so does their visual structure: one-dimensional and mostly vertical. Mental models have long been an essential part of good user experience. At their core, mental models describe, in simplified terms, a person’s expectations of an interaction before they have even interacted — creating a seamless connection to real-world expectations. Mental models are not universal, however; they vary from person to person. This opens up an exciting perspective: What should the visualized mental model of future conversational AI interfaces look like in order to come as close as possible to each person’s real expectations? Or, put more simply: how do we add a spatial dimension to conversational interfaces?
Another essential aspect of human communication is conversation in a group. Group conversations have their own dynamics and follow different, more vibrant dramaturgical amplitudes. So what if future conversational AI solutions allowed the inclusion of people from my environment? A kind of social conversational AI, so to speak.
Let’s go a step further and consider conversational AI solutions like ChatGPT or Google Bard as a kind of friend. A tool in the symbiosis between friendship and advice — a friend that recognizes and responds to my linguistic level. So what if ChatGPT or Bard were to imitate my linguistic level, my preferred writing style?
And let’s go another step further and include heuristics and cognitive biases. Take confirmation bias, for example. What if such solutions recognized patterns in my beliefs and played out balanced results for me — like a good friend pointing out when I’m wrong? Just imagine.