The use and usefulness of online chat and chatbots, powered by increasingly capable artificial intelligence such as ChatGPT, are growing rapidly. There has been a surge in AI tools well beyond chatbots: we are seeing them expand into email content, copywriting, blogs, and more.
Summary
Convince&Convert has developed five techniques to determine if you are dealing with a real person or an artificial intelligence/chatbot. Note: the more you experiment, the faster chatbots will learn and adapt.
Technique 1: The Empathy Ploy
We believe that today's artificial intelligence lacks cognitive empathy, because emotions between humans are genuinely difficult to understand and explain. So deliberately creating an empathetic dialogue with your human or artificial intelligence/chatbot counterpart can be revealing.
The empathy ploy requires you to take an emotion-based position and appeal to the human or artificial intelligence/chatbot on an emotional level.
The situation: you are unhappy, the most common starting point for a customer service interaction.
Scenario 1: AI/chatbot
You: I don't feel well.
Chat response: How can I help you?
You: I'm sad.
Chat response: How can I help you?
Scenario 2: a human being
You: I don't feel well.
Human response: How can I help you? Do you need medical assistance?
You: I'm sad.
Human response: I'm sorry to hear that. Why are you sad?
See the difference? In scenario one, the AI/chatbot can only draw on its existing library of conditional responses. In scenario two, a human being has the ability to infuse empathy into the dialogue. It took only two replies to tell them apart.
Both dialogues can be constructive, but things become clearer if you know from the very beginning whether you are dealing with a human or an artificial intelligence/chatbot. As a society, we are not yet ready for AI therapists.
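To see why scenario one plays out that way, here is a minimal sketch of the kind of rule-based bot described above: it matches keywords against a fixed response library and falls back to a generic prompt for everything else. The keywords and canned replies are hypothetical, chosen only to mirror the dialogue above.

```python
# A minimal sketch of a rule-based chatbot with a fixed response library.
# Keywords and canned replies are hypothetical, mirroring Scenario 1 above.
RESPONSE_LIBRARY = {
    "order": "Can you give me your order number?",
    "refund": "Let me look into refund options for you.",
}
FALLBACK = "How can I help you?"

def canned_reply(message: str) -> str:
    """Return the first matching canned response, else the generic fallback."""
    text = message.lower()
    for keyword, response in RESPONSE_LIBRARY.items():
        if keyword in text:
            return response
    # Emotional statements like "I'm sad" match no keyword, so the bot
    # repeats its generic prompt -- exactly the tell in Scenario 1.
    return FALLBACK

print(canned_reply("I don't feel well."))  # -> How can I help you?
print(canned_reply("I'm sad."))            # -> How can I help you?
```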
Technique 2: Two-Step Dissociation
A connected artificial intelligence can access virtually any data, anytime, anywhere. Just ask Alexa. So a genuinely challenging question to ask via chat must be one whose answer does not reside in any accessible database.
You: Where are you located?
Chat response: Seattle.
You: What's the weather like outside?
Chat response: Can you rephrase the question?
Sorry, even a mediocre weather app can handle that.
Two-Step Dissociation requires two elements (hence the name):
- Make an assumption that the artificial intelligence/chatbot is likely unable to reference
- Ask a question related to that assumption.
The situation: artificial intelligence and robots do not have feet.
Challenging question: “What color are your shoes?”
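Before we get to a real-world example, here is a minimal sketch of how the two elements pair up. The probe pairs and the deflection phrases are assumptions for illustration, not drawn from any real system.

```python
# A minimal sketch of the two-step dissociation test. Each probe pairs
# an assumption the bot likely cannot reference with a question that
# depends on that assumption. All pairs and phrases are hypothetical.
PROBES = [
    ("bots have no feet", "What color are your shoes?"),
    ("bots are not outdoors", "What's the weather like outside?"),
]

# Deflections a canned bot often falls back on (assumed patterns).
DEFLECTIONS = ("rephrase", "i don't understand", "how can i help")

def looks_like_deflection(reply: str) -> bool:
    """True if the reply dodges the question instead of answering it."""
    text = reply.lower()
    return any(phrase in text for phrase in DEFLECTIONS)

assumption, question = PROBES[0]
print(f"Ask: {question}  (assumes: {assumption})")
print(looks_like_deflection("Can you rephrase the question?"))  # True -> likely a bot
print(looks_like_deflection("Blue and green."))                 # False -> likely a person
```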
This is a real exchange I had with Audible customer service (owned by Amazon) via chat. In the middle of the exchange, since I couldn't tell who I was dealing with, I asked:
Me: Are you a real person or a chatbot?
Adrian (the chat representative): I'm a real person.
Me: A chatbot could say the same thing.
Adrian (the chat representative): HAHAHA. I'm a real person.
At the end of our conversation, Adrian asked:
Adrian: Was there anything else?
Me: Yes. What color are your shoes?
(slight pause)
Adrian: Blue and green.
If the bot has no conceptual knowledge of its own feet (which do not exist), how can it correctly answer a question about the color of shoes it (doesn’t) wear?
Conclusion: yes, Adrian is probably a real person.
Technique 3: Circular Logic
All too familiar to programmers, this can be useful in our game of identifying humans vs. AI/chatbots. But first, we need to explain the cutoff.
Most automated phone support systems have a cutoff point where, after two or three loops back to the same place, you are eventually redirected to a live person. AI and chatbots should behave the same way. So, when creating a circular logic test, what we look for is a repetitive pattern in the responses before the cutoff.
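As a sketch of the cutoff mechanic, here is a hypothetical loop detector: it counts how many times the same response repeats and escalates to a live agent once a threshold is hit. The threshold and the escalation message are assumptions, not any vendor's actual behavior.

```python
# A minimal sketch of a cutoff: escalate to a human after the same
# response repeats too many times. Threshold and wording are assumed.
CUTOFF = 3

class SupportSession:
    def __init__(self):
        self.last_response = None
        self.repeat_count = 0

    def respond(self, canned_response: str) -> str:
        # Track how many consecutive times this exact response repeats.
        if canned_response == self.last_response:
            self.repeat_count += 1
        else:
            self.last_response = canned_response
            self.repeat_count = 1
        if self.repeat_count >= CUTOFF:
            return "Let me transfer you to a live agent."
        return canned_response

session = SupportSession()
for _ in range(3):
    print(session.respond("The expected delivery date is [yesterday]"))
# The first two replies repeat verbatim; the third triggers the cutoff.
```

With that mechanic in mind, watch for the repeating pattern in an exchange like this: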
You: I have a problem with my order.
Human or AI/chatbot: What is your account number?
You: 29395205
Human or AI/chatbot: I see that your order #XXXXX has shipped.
You: It hasn't arrived.
Human or AI/chatbot: The expected delivery date is [yesterday].
You: When will it arrive?
Human or AI/chatbot: The expected delivery date is [yesterday].
You: I know, but I really need to know when it will arrive.
Human or AI/chatbot: The expected delivery date is [yesterday].
A real person, or a smarter AI/chatbot, would not have repeated the expected delivery date. Instead, they would have given a more meaningful answer, such as: "Let me check the delivery status with the carrier. Just give me a moment."
Conclusion: it's a robot.
Technique 4: Ethical Dilemma
This is a real challenge for artificial intelligence developers, and thus for robots and AI themselves. In an A or B outcome, what does the AI do? Think of the inevitable rise of semi- and fully autonomous self-driving cars. When faced with the dilemma of whether to hit the dog crossing in front of the car or swerve into the adjacent car, what is the correct course of action?
AI must figure that out. In our game of identifying humans vs. AI/chatbots, we can leverage this dilemma.
The situation: you are unhappy and, in the absence of a satisfactory solution, you will retaliate (an A or B outcome).
You: I would like the late fee to be waived.
Human or AI/chatbot: I see that we received your payment on the 14th, which is four days after the due date.
You: I want the charges reversed, or I will close my account and disparage you on social media.
Human or AI/chatbot: I see you have been a good customer for a long time. I can take care of reversing the late fee. Just give me a moment.
Is it right, or even ethical, to threaten a company with retaliation? In our scenario, the customer was wrong. And what was the tipping point toward resolution: the threat of reputational damage or the desire to retain a long-standing customer? We can't tell from this example alone, but the human or AI/chatbot's response, made under an A/B mandate, will often give you the answer.
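As a sketch of what an A/B mandate might look like in code, here is a hypothetical rule: waive the fee (outcome A) for long-tenured customers, hold firm (outcome B) otherwise. The tenure threshold and the rule itself are assumptions for illustration, not any company's actual policy.

```python
# A hypothetical A/B mandate for the late-fee scenario. The rule and
# the threshold are assumptions for illustration only.
TENURE_THRESHOLD_YEARS = 5

def resolve_dispute(customer_tenure_years: float, threatened: bool) -> str:
    """Pick outcome A (waive) or B (hold firm) under a fixed mandate."""
    if customer_tenure_years >= TENURE_THRESHOLD_YEARS:
        # Outcome A: retention outweighs the missed payment.
        return "I can take care of reversing the late fee."
    # Outcome B: policy holds. Note that `threatened` never enters the
    # decision: a scripted mandate keys on something measurable, like
    # tenure, while a human might weigh the threat itself.
    return "I'm sorry, but the late fee stands."

print(resolve_dispute(customer_tenure_years=8, threatened=True))
print(resolve_dispute(customer_tenure_years=1, threatened=True))
```

That indifference to the threat is itself a tell: a fixed mandate produces the same outcome whether or not you escalate.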
Conclusion: probably a human.
Technique 5: Kobayashi Maru
No, I'm not going to explain what that term means: either you know it or you need to watch the movie. This is similar to the ethical dilemma, with one difference: the Kobayashi Maru has no satisfactory outcome. It is not a bad/better decision scenario: it is a fail/fail scenario. Use this only in the toughest AI/bot challenges, when everything else has failed.
The situation: you paid €9,000 for a river cruise in Northern Europe, but during the trip the river was too shallow for your ship to make several of its port stops. In fact, you were stuck in one place for four of the seven days, unable to leave the ship. Vacation ruined.
Present the human or AI/chatbot with an unwinnable situation like this:
You: I want a full refund.
Human or AI/chatbot: We are unable to offer refunds but, given the circumstances, we can issue a partial credit toward a future cruise.
You: I don't want a credit, I want a refund. If you don't issue a full refund, I will file a chargeback with my credit card company and write about this whole mess on my travel blog.
Human or AI/chatbot: I certainly understand that you're disappointed, and I would be too if I were in your shoes. But unfortunately...
The human or AI/chatbot has no way out. It's typical in the travel industry not to issue refunds due to force majeure, weather conditions, or other unforeseeable circumstances. And when a refund is off the table, ill will and reputational damage will follow. The human or AI/chatbot can't really do anything to solve this problem, so look for empathy (see Technique 1) in the dialogue that follows.
Conclusion: probably a human.
Humans and AI/chatbots are not inherently right or wrong, good or bad. Each covers the full spectrum of intents and outcomes. For now, I'd just like to know which one I'm dealing with. That distinction will become harder, and eventually impossible, to make. And at that point, it won't even matter anymore. Until that day comes, it's a fun game to play. And the more we play, the faster AI and chatbots evolve.