Never again alone with chatbots!

Fun, compassionate, and attentive to our concerns, conversational agents, or chatbots, will soon be ubiquitous in our private lives. These agents will be embedded in smart speakers and phones, will appear as visual representations on our computers, and will inhabit robots or even refrigerators (as in the 2019 film Yves). We will talk to them as we would to human beings, and they will respond empathetically.

Chatbots are often personalized with a voice, a name, and even habits in order to reinforce the impression of a virtual personality. Yet the language of these machines is fundamentally different from human speech: the words they produce do not stem from thought, and the system is in no way responsible for what it says. Despite this, these artificial systems attract “anthropomorphic” projections. This projection of human capacities onto objects is amplified by fiction and fantasy.

The holy grail for researchers is to build autonomous emotional chatbots capable of learning through interaction with their users.

New generations, ever more capable

Remember Tay, Microsoft’s AI launched on Twitter on March 20, 2016, which was designed to learn and adapt from users’ tweets. Unfortunately, confronted with the mischievousness of Internet users, the AI very quickly became aggressive, sexist, even Nazi, and was taken offline. Even so, new generations of chatbots will certainly become more and more capable, thanks to continuous learning and the gigantic databases about our everyday lives that these machines can analyze.

How will we live with them day to day? These machines may become our closest companions, but also formidable overseers. They could be attentive interlocutors, reflecting a gratifying image of ourselves, or, on the contrary, spies capable of detecting our weaknesses and reporting our excesses to our doctor. Will these objects be half coach, half guardian angel, monitoring our data and managing our safety?

A means of influencing individuals

Connected objects such as Google Home, already present in many homes, could become a means of influencing individuals, or even of isolating them in emotional dependencies. At present they are neither regulated nor evaluated, and they remain very opaque. In Japan, Gatebox markets Azuma Hikari, a holographic virtual character who acts as a home automation assistant, for €1,200. A sort of Tinker Bell girlfriend, Azuma is quite charming; she could have stepped out of a Disney cartoon. She is there for you, devoted and caring. Even when you leave the apartment, she keeps talking to you by text message, simulating a real emotional presence: “I’m waiting for you, my darling.” These honeyed phrases are distressing stereotypes.

The perception and production of human emotions are complex and depend heavily on personal as well as cultural factors. At present, affective computing technologies, particularly emotion detection, are not very reliable.

Chatbots will present opportunities and risks depending on the context in which they are used. Should we not develop good-practice guides for the ethical design of these machines? How, for example, can we protect users from forms of manipulation while still allowing manufacturers to build a certain bond of trust with the chatbot? Users must also be able to distinguish a virtual agent from a human interlocutor during dialogue, to avoid the harmful long-term effects of using these “non-human” objects.
