In the earliest months after the release of OpenAI's ChatGPT, which is built on the same generative AI (genAI) technology that powers Microsoft's Copilot, the big news wasn't just how remarkable the new tool was. It was how easily it went off the rails, lied, and even appeared to fall in love with people who chatted with it.
There was the time Microsoft's Bing chatbot (since rebranded as Copilot) told New York Times reporter Kevin Roose, “I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.” Soon after, the chatbot admitted: “I’m Sydney, and I’m in love with you. 😘” (It then told Roose that he really didn’t love his wife, and concluded, “I just want to love you and be loved by you. 😢”)
Since then, there have been countless times ChatGPT, Copilot, and other genAI tools have simply made things up. In many instances, lawyers relied on them to draft legal documents, only to find the tools had invented cases and precedents out of thin air. Copilot has made up facts so often that hallucinations, as AI researchers call them (what the rest of us would simply call lying), have become a recognized part of using the tool.
The release of Copilot for Microsoft 365 to enterprise customers in November 2023 seemed, to a certain extent, to put the issue behind Microsoft. If the world's largest companies could rely on the tool, the implication went, then anyone could count on it. The hallucination problem must have essentially been solved, right?
Is that true, though? Based on several months of research — and writing an in-depth review of Copilot for Microsoft 365 — I can tell you that hallucinations are a lot more common than you might think, and possibly dangerous for your business. No, Copilot isn't likely to fall in love with you. But it might make up convincing-sounding lies and embed them in your work.
Read more here: ComputerWorld, July 24, 2024