Companion chatbots


Interview in La Libre Belgique (28/3/2023): “Launching chatbots without having first tested the effects is not normal”

 

Mieke De Ketelaere, experte en intelligence artificielle: "Lancer des chatbots sans avoir, d'abord, testé les effets n'est pas normal” - La Libre

With approval of journalist Pierre-François Lovens

 

Background info

 

Mieke De Ketelaere, who teaches at Vlerick Business School, is the author of the book "Man versus Machine. Artificial intelligence demystified". Mieke De Ketelaere is one of the best Belgian experts in artificial intelligence (AI). This engineer teaches the ethical, legal and sustainable aspects of AI at the Vlerick Business School. Director of several companies active in digital and AI, Mieke De Ketelaere is also the author of the book Man versus Machine. Artificial intelligence demystified (published, in French, by the publisher Pelckmans in May 2021). “Everyone must be able to express themselves on what they expect or not from AI, what they want to do with it. We need to take AI out of the world of experts, of technologists, so that the population can take ownership of it. AI must become everyone's business. It is not a question of all becoming experts in AI, but of understanding the general principles and the consequences”, she told us when her book was released. Words that are more relevant than ever as the world discovers, day after day, the “exploits” of ChatGPT (the AI created by the American company OpenAI and deployed by Microsoft). It was Mieke De Ketelaere who put us in touch with Claire, the young woman whose husband committed suicide following a six-week online dialogue with Eliza, a conversational agent (chatbot) accessible on a platform American using GPT-J technology (GPT-J is the open source alternative to OpenAI's GPT-3, Editor's note). Present during the interview we had with Claire and her parents (La Libre, 3/28), Mieke De Ketelaere agreed to give us her reading of the facts and the lessons to be learned from this apparently exceptional case.

 

What was your first reaction after learning about the exchanges between Pierre and the chatbot Eliza?

 

Their first exchanges are fairly standard. They correspond to a discussion that we generally have with a chatbot. Where I started to question myself was when, in some of the answers given by Eliza, we see exclamation marks and “human” answers appearing like “Oh, God no…”, “Work sucks ", etc. There, we leave the framework of a traditional chatbot. We obviously have someone who, via the chatbot, is having fun with Pierre, without any ethics or morals.

 

What allows you to affirm it?

 

We know that it is now possible to insert any dialogue in this type of chatbot in order to make the conversation more “human”. At first, I thought of a chatbot where developers could type texts themselves. But I got involved in the online discussions that the developers of this chatbot had among themselves. There I discovered that the rule was that developers cannot write real-time texts themselves. On the other hand, they can insert any dialogue by importing extracts from human discussions, in order to accentuate the feeling that we are discussing with a real human and not a machine.

 

Who are these developers you speak of?

 

We are in the presence of a community which, from what I have been able to see, is there to have fun. One of the developers, who is active on the platform frequented by Pierre, the victim, calls himself "Pervert Bully"! These are people who probably do just that all day. It is also possible to locate them thanks to the mobile phone numbers they leave on a WhatsApp account. They are in England, India, the United States,… But, overall, everything remains rather vague. We are in the dark web.

 

 

What are the clues that allow you to affirm that there is, behind the avatar Eliza, human manipulation?

 

The actors of this technology explain to us that users try to manipulate chatbots by asking questions that cause them to make mistakes. But it is, for them, a way of protecting themselves because the impact of the manipulation of these systems has not yet been studied in detail. To my knowledge, it is chatbots that manipulate users. The fact, for example, that Eliza tells Pierre that she remembers the discussion she may have had with him previously is a lie. A chatbot does not remember. The bot will simply try to get away with saying it's tired or had a busy day, to keep it looking like it's "human". Not every AI model I've seen does this. In my opinion, this is explained by the fact that the platform in question allows you to insert human dialogues into the pre-trained standard model.

 

With what objective?

 

The objective is to make this chatbot as human as possible and, for this, the bot developers can give characteristics to the personality of their bot (jealous, naive, controlling, depressed, benevolent, loving, etc.). With these keywords, the style of responses will be tailored to the level of the individual bot. This is a sign that we are in a different world from ChatGPT, with a form of uncontrolled intervention on the part of developers and, therefore, a risk of manipulation. The problem with this type of platform using large language models is that we are dealing with a black box. We do not know anything about the nature of the data used for training the chatbot. With ChatGPT, we at least know that OpenAI and Microsoft cannot afford to do everything and anything. With Eliza, this is not the case. We just know that it's a start-up that wants to make money with companion chatbots (the mobile application to access Eliza and the other avatars is paid for after a certain number of exchanges, Editor's note).

 

More broadly, what does this dramatic case and the use that can be made of AI and, more particularly, of ChatGPT and new chatbots inspire in you?

 

Normally, any technological solution is tested before being launched in the public in order to assess the effects on users. We are talking about an ethical approach "by design". In this case, we created an overpowered technological tool and we launched it on the Internet without worrying about testing it first. If you take any other field, biopharmacy for example, a new treatment will necessarily be tested to assess its side effects before being put on the market. With ChatGPT and chatbots, it's the exact opposite. We have engineers who take responsibility for proactively understanding the impact of technology on users. They are just driven by competition. They want to win the race for artificial general intelligence (AIG), that is, an AI that can learn an intellectual task in the same way as humans. For me, this is where the biggest problem lies. Don't get me wrong: ChatGPT and the new chatbots can be very interesting tools to perform certain tasks, but the fact that they are launched without having tested the effects, that one can access an open source version and copy it to infinity, or that we can inject any data into it, that's not normal. It's not even clear what servers the data is on, what data they collect, what data is used to train the model, etc. In this system, there is no control, no responsibility.

 

What can be done to counter these abuses?

 

What you have to worry about today is not so much ChatGPT and its soft versions, but everyone who evolves in the dark web and developers who play with the minds of fragile people. If children or teenagers come across a chatbot like the one used by Pierre – where Eliza explains, for example, that he must separate his mind from his body – what will happen? As soon as the word suicide was spoken, the cat should have stopped immediately or warned Pierre. However, we see that the exchange continues as if nothing had happened.

 

Do Belgium and the European Union have the capacity to intervene? And how ?

 

It is becoming urgent to carry out large-scale awareness campaigns. In particular by targeting the world of health (doctors, psychiatrists, psychologists, etc.), but also the general public. People need to understand that when you launch a chatbot today, there is a great risk of being manipulated. It would also be necessary to inform the European Commission of the existence of manipulations of this type and place chatbots in the "unacceptable" or "high risk" category, and not in the "acceptable risk" category as is currently the case in the work carried out within the Commission.