AI Gets the Law Wrong 75% Of The Time

A Stanford study has found that AI gets the law wrong 75% of the time.

For one task, researchers asked the AI models to state whether two different court cases agreed or disagreed with each other—a core legal research task. The models did no better than random guessing (a methodology I used to use in law school – unsuccessfully 😃)

They found a high incidence of LLM “sycophancy” – where if you asked a question and the premise of the question was false – the LLM did not pull you up on the false premise. It went along and reinforced your inaccuracy in its answer. That doesn’t sound like a good thing.

They tested more than 200,000 legal questions on OpenAI’s ChatGPT 3.5, Google’s PaLM 2, and Meta’s Llama 2.

So GenAI is not ready for legal research just yet. I am using enterprise MS Copilot and I do not use it for legal research but it is still amazing at doing many things: summarising Teams calls & preparing action items lists, turning articles into PPTs and so much more.

Book Nick To Speak

At your next event

Shake Your Audience With Nick's Digital Distruption
Speaking Topics