The propensity of AI models to make mistakes that people miss has been on full display in the US legal system of late. The follies began when lawyers submitted documents citing cases that didn’t exist. Similar mistakes soon spread to other roles in the courts. Last December, a Stanford professor submitted sworn testimony containing hallucinations and errors in a case about deepfakes, despite being an expert on AI and misinformation himself.
Now, judges are experimenting with generative AI too. Some believe that, with the right precautions, the technology can expedite legal research, summarize cases, draft routine orders, and generally help speed up the court system, which is badly backlogged in many parts of the US. Are they right to be so confident in it? Read the full story.
—James O’Donnell
What you may have missed about GPT-5
OpenAI’s new GPT-5 model was supposed to offer a glimpse of AI’s newest frontier. It was meant to mark a leap toward the “artificial general intelligence” that tech’s evangelists have promised will transform humanity for the better.
Against those expectations, the model has largely underwhelmed. But there’s one other thing to take from all this. Among other suggested uses for its models, OpenAI has begun explicitly telling people to use them for health advice. It’s a change in approach that signals the company is wading into dangerous waters. Read the full story.
—James O’Donnell
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.