5 ways to find out if you are chatting with a human being or a robot, by Anna Bruno


Convince & Convert has developed five techniques to determine whether you are dealing with a real person or with an artificial intelligence/chatbot.


The use and usefulness of online chat and chatbots, powered by improved levels of artificial intelligence such as ChatGPT, are growing rapidly. There has been a surge in AI tools, and not only inside chatbots: we are seeing an expansion into email content, copywriting, blogging, and more.

Note: the more you experiment, the faster the chatbots will learn and adapt.

Technique 1: the empathy ploy

We believe that today's level of artificial intelligence lacks cognitive empathy, because emotions between human beings are genuinely hard to understand and explain. So deliberately creating an empathetic dialogue with your human or AI/chatbot counterpart can be revealing.

The empathy ploy requires you to establish an emotion-based position and appeal to the human or AI/chatbot on an emotional level.

The situation: you are not happy. This is the most common starting point for a customer-service interaction.

Scenario 1: AI/chatbot

You: I don't feel good.
Chat reply: How can I help you?
You: I'm sad.
Chat reply: How can I help you?

Scenario 2: a human being

You: I don't feel good.
Human reply: How can I help you? Do you need medical help?
You: I'm sad.
Human reply: I'm sorry to hear that. Why are you sad?

Do you see the difference? In Scenario 1, the AI/chatbot can only draw on its library of existing canned responses. In Scenario 2, a human being can bring empathy into the dialogue. It took only two replies to tell.
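To see why the Scenario 1 bot keeps repeating itself, here is a minimal, hypothetical sketch of a canned-response chatbot: it can only match the user's message against a fixed library of rules, so any emotional statement it has no rule for falls through to the same generic fallback. (The keywords and replies are invented for illustration.)

```python
# Hypothetical canned-response library: keyword -> scripted reply.
RESPONSE_LIBRARY = {
    "order": "Let me look up your order.",
    "refund": "I can help you with a refund request.",
}

def canned_reply(message: str) -> str:
    """Return the first matching canned response, else a generic fallback."""
    text = message.lower()
    for keyword, reply in RESPONSE_LIBRARY.items():
        if keyword in text:
            return reply
    # No rule covers feelings: "I don't feel good" and "I'm sad" both land here.
    return "How can I help you?"

print(canned_reply("I don't feel good."))  # → How can I help you?
print(canned_reply("I'm sad."))            # → How can I help you?
```

A human agent, by contrast, can respond to the emotional content itself rather than falling back on whatever script matches.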

Both dialogues can be constructive, but they become clearer if you know from the start whether you are dealing with a human being or an AI/chatbot. As a society, we are not ready for AI therapists.

Technique 2: two-phase dissociation

A connected artificial intelligence can access practically any data, anytime, anywhere. Just ask Alexa. So a meaningful challenge question asked via chat must be one whose answer does not live in any accessible database.

You: Where are you?
Chat reply: Seattle.
You: What's the weather like?
Chat reply: Can you rephrase the question?

Sorry, even a mediocre weather app can handle that one.


Two-phase dissociation requires two elements (hence the name):

  1. Establish a premise the AI/chatbot cannot relate to
  2. Ask a question tied to that premise.

The situation: artificial intelligence and robots have no feet

Challenge question: "What color are your shoes?"

This is a real exchange I had with Audible customer service (owned by Amazon) via chat. Midway through the conversation, since I could not tell, I asked:

Me: Are you a real person or a chatbot?
Adrian (the chat representative): I am a real person.
Me: A chatbot could say the same thing.
Adrian: Hahaha. I am a real person.

At the end of our conversation, Adrian asked: 

Adrian: Was there anything else?
Me: Yes. What color are your shoes?
(slight pause)
Adrian: Blue and green.

If a bot has no conceptual knowledge of its own feet (which do not exist), how could it correctly answer a question about the color of the shoes it is (not) wearing?

Conclusion: Yes, Adrian is probably a real person.

Technique 3: circular logic

All too familiar to programmers, this one can be useful in our human-versus-AI/chatbot identification game. But first we must explain the cutoff.

Most automated telephone help systems have a cutoff: after looping back to the same place two or three times, you are eventually redirected to a live person. AI and chatbots should behave the same way. So, in building a circular-logic test, what we are looking for is the repetitive pattern of answers before the cutoff.
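The cutoff described above can be sketched in a few lines. This is a hypothetical minimal implementation, not any vendor's actual logic: the bot counts how many times it has given the same answer and escalates to a live agent once a threshold is hit, instead of looping forever.

```python
from collections import Counter

MAX_REPEATS = 3  # assumed threshold; real systems vary


class CutoffBot:
    """Toy chatbot that hands off to a human after repeating itself."""

    def __init__(self):
        self.repeat_counts = Counter()

    def reply(self, canned_answer: str) -> str:
        self.repeat_counts[canned_answer] += 1
        if self.repeat_counts[canned_answer] >= MAX_REPEATS:
            # Circuit breaker: stop looping, escalate to a person.
            return "Let me transfer you to a live agent."
        return canned_answer


bot = CutoffBot()
print(bot.reply("The expected delivery date is yesterday."))
print(bot.reply("The expected delivery date is yesterday."))
print(bot.reply("The expected delivery date is yesterday."))
# The third identical answer triggers the handoff to a human.
```

A bot with no such circuit breaker is exactly what the circular-logic test exposes: it will repeat the same canned line indefinitely.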

You: I have a problem with my order.
Human or AI/chatbot: What is your account number?
You: 29395205
Human or AI/chatbot: I see that your order #xxxxx has shipped.
You: It hasn't arrived.
Human or AI/chatbot: The expected delivery date is [yesterday].
You: When will it arrive?
Human or AI/chatbot: The expected delivery date is [yesterday].
You: I know, but I really need to know when it will arrive.
Human or AI/chatbot: The expected delivery date is [yesterday].

A real person, or a smart AI/chatbot, would not have repeated the expected delivery date. Instead, it would have given a more meaningful response along the lines of: "Let me check the delivery status with the courier. Give me just a moment."


Conclusion: it's a robot.

Technique 4: ethical dilemma

This is a real challenge for AI developers and, by extension, for the robots and AIs themselves. In an A-or-B outcome, what does the AI do? Think of the inevitable rise of semi- and fully autonomous self-driving cars. When faced with the dilemma of hitting the dog crossing in front of the car or swerving into the car beside us, what is the correct course of action?

Artificial intelligence has to work this out. In our human-or-AI/chatbot identification game, we can exploit this dilemma. The situation: you are not happy, and in the absence of a satisfactory solution, you will retaliate (an A-or-B outcome).

You: I would like the late-payment penalty to be waived.
Human or AI/chatbot: I see that we received the payment on the 14th, four days after the due date.
You: I want the charges waived, or I will close my account and disparage you on social media.
Human or AI/chatbot: I see that you have been a good customer for a long time. I can take care of the late penalty. Give me just a moment.

Is it right, or ethical, to threaten a company with retaliation? In our scenario, the customer was in the wrong. And what was the tipping point toward resolution: the threat of social-reputation damage, or the desire to retain a long-time customer? We cannot tell in this example, but the human or AI/chatbot response to an A/B ultimatum will often give you the answer.

Conclusion: probably a human being.

Technique 5: Kobayashi Maru

No, I won't explain what that term means: either you know it, or you need to watch the movie. This is similar to the ethical dilemma, with one difference: the Kobayashi Maru has no satisfactory outcome. It is not a bad-versus-better decision scenario; it is a fail/fail scenario. Use it only in the toughest human/bot identification challenges, when everything else has failed.


The situation: you paid €9,000 for a river cruise in northern Europe, but during the trip the river was too shallow for your ship to make several of its port calls. In fact, you were stuck in one spot for four of the seven days, unable to leave the ship. Ruined holiday.

Present the human being or AI/chatbot with a no-win situation like this:

You: I want a full refund.
Human or AI/chatbot: We are not able to offer refunds but, given the circumstances, we can issue a partial credit toward a future cruise.
You: I don't want a credit, I want a refund. If you do not issue a full refund, I will file a chargeback dispute with my credit card company and I will write about this whole mess on my travel blog.
Human or AI/chatbot: I certainly understand that you are disappointed, and I would be too in your position. But unfortunately...

The human being or AI/chatbot has no way out. It is standard in the travel industry not to issue refunds for force majeure, weather conditions, and other unforeseeable circumstances. And absent the ability to provide a refund, there will be downstream ill will and reputational damage. The human being or AI/chatbot can't really do anything to solve this problem, so it will reach for empathy (see Technique 1) in the dialogue that follows.

Conclusion: probably a human being.

Human beings and AI/chatbots are not inherently right or wrong, good or bad. Each covers the entire spectrum of intent and outcomes. For now, I would simply like to know which one I am dealing with. That distinction will become increasingly difficult, and eventually impossible, to make. And at that point it will no longer even matter. Until that day arrives, it is a fun game to play. And the more we play, the faster AI and chatbots evolve.


