Would you trust AI in a dispute mediation?
Researchers at Google DeepMind recently trained a large language model system to help people reach agreement on complex but important social and political issues. The model was trained to find and highlight areas of overlap between people's opinions, and this AI mediator helped small groups of participants become less divided in their views. Rhiannon Williams has written more about this topic.
One of the best uses for AI chatbots is for brainstorming.
I've had success in the past using them to draft more assertive or persuasive emails for awkward situations, such as complaining about poor service or negotiating bills. And this latest research suggests they can help us see things from other people's perspectives. So why not use AI to help resolve my disagreement with my friend? I explained the conflict to ChatGPT as I saw it and asked for advice on what I should do. The chatbot's response validated my approach to the problem, and its suggestions were similar to what I'd already been thinking. It was helpful to bounce ideas off the bot about how to handle my situation. But ultimately, the advice I received was vague and generic ("Set your boundaries calmly" and "Communicate your feelings") and didn't offer the kind of insight a therapist might.
And there's a second problem: every argument has two sides.
I started a new conversation and described the conflict as I believed my friend saw it. The chatbot validated and supported her decisions, just as it had done for me. On the one hand, this exercise helped me see things from her point of view; after all, I was trying to empathize, not just win the argument. On the other hand, it's easy to see how relying on a chatbot to tell us what we want to hear could cause us to double down and prevent us from seeing a situation from the other person's perspective. It was a useful reminder that an AI chatbot is neither a therapist nor a friend, and that it's worth treading carefully when using one, especially for matters of real importance. A chatbot will never be able to replace a genuine conversation in which both parties are willing to truly listen. So I decided to abandon the AI-assisted talk therapy and reach out to my friend once more. Wish me luck!
Deeper Learning

OpenAI says ChatGPT treats us all the same (most of the time)
Does ChatGPT treat you the same whether you're a Laurie, Luke, or Lashonda? Almost, but not quite. OpenAI analyzed millions of conversations with its chatbot and found that ChatGPT produces a harmful stereotype based on a user's name in around 1 in 1,000 responses on average, and as many as 1 in 100 in the worst cases.