A lawyer suing the Colombian airline Avianca used ChatGPT to submit a brief full of previous cases made up by the AI software.
The opposing counsel flagged the false cases in the brief, and US District Judge Kevin Castel confirmed, “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
Steven A Schwartz had submitted an affidavit admitting he used OpenAI’s chatbot for research and had “verified” the cases only by asking the bot itself whether it was lying.
The bot had assured him the cases were real but could not supply verifiable sources, beyond naming two legal databases, Westlaw and LexisNexis.
The opposing counsel was quick to point out that the cited cases could not be found, and that one carried incorrect dates.
Schwartz said he was “unaware of the possibility that its content could be false.” He said he “greatly regrets having utilised generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”
The case highlights the risks of relying on AI chatbots for professional research without cross-checking or verifying the information they produce.