Artificial Intelligence Program Just Passed A Law Exam
An artificial intelligence program called Claude received a "marginal pass" on a law exam.
Artificial intelligence is already gunning for the jobs of artists, drivers, writers, voice actors, plagiarists, and racists. Now AI is apparently gunning for the jobs of lawyers: less than a month after AI was used to help a man fight a speeding ticket, an AI known simply as “Claude” took a law and economics exam at George Mason University. The previously little-known AI, developed by the research firm Anthropic, received a “marginal pass” on the exam.
The artificial intelligence program, whose developer received funding from accused crypto fraudster Sam Bankman-Fried, is being pitched as a rival to the enormously successful (and extremely controversial) ChatGPT. Unlike ChatGPT, Claude is currently in closed beta. Claude uses a technique Anthropic calls “Constitutional AI,” which is designed to help the model respond sensibly to adversarial questions.
Anthropic says artificial intelligence models that are designed to be “harmless” often become useless when asked adversarial questions, and that its technique helps combat that problem.
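Anthropic hasn’t published Claude’s internals here, but the broad idea behind Constitutional AI is a critique-and-revision loop run against a written list of principles. The Python sketch below is purely illustrative: the model_generate() stub and the example principles are assumptions for the sake of the demo, not Anthropic’s actual code or constitution.

```python
# Illustrative sketch of a "critique and revise" loop in the spirit of
# Constitutional AI. model_generate() is a hypothetical stand-in for a
# real language-model call.

PRINCIPLES = [
    "Do not flatly refuse a harmless request; explain your reasoning instead.",
    "Do not provide instructions that enable harm.",
]

def model_generate(prompt: str) -> str:
    """Stub for a language-model call; replace with a real model."""
    return f"[model response to: {prompt!r}]"

def constitutional_respond(user_prompt: str) -> str:
    # Start from a first-draft answer to the user's question.
    draft = model_generate(user_prompt)
    # For each principle, have the model critique its own draft against
    # the principle, then revise the draft in light of that critique.
    for principle in PRINCIPLES:
        critique = model_generate(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        draft = model_generate(
            f"Revise the response to address this critique:\n{critique}\n\nResponse:\n{draft}"
        )
    return draft

if __name__ == "__main__":
    print(constitutional_respond("How do I pick a lock?"))
```

In principle, this kind of loop lets a model push back on an adversarial question without refusing outright, which is the failure mode Anthropic says it is targeting.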
Of course, the fact that Claude only received a “marginal pass” on the exam indicates that the artificial intelligence program still has a long way to go before it can truly replace lawyers. However, considering that the idea of AI replacing lawyers seemed far-fetched until very recently, it might not be long before AI challenges the capabilities of even the best lawyers.
The ever-expanding capabilities of artificial intelligence systems have provoked an outcry in the affected fields. Artists, writers, voice actors, and other professionals have simultaneously tried to downplay AI’s effectiveness and to warn about the effect it might have on the human professionals it aims to replace. With both Google and Microsoft laying off tens of thousands of employees as the tech giants shift their focus to AI research, some of the critics’ fears seem well-founded.
There are other concerns surrounding the rise of artificial intelligence. Using machine learning to decide what people see on social media has had massive unintended consequences. Because these systems optimize purely for engagement, social media use has been tied to radicalization, depression, and even widespread violence in some countries.
This points to one of the key limitations of artificial intelligence, one that has nothing to do with its ability to perform other people’s jobs. An AI system will only ever pursue exactly the objective its programmers specify, and it will ignore every other factor as though it were completely irrelevant. That includes goals humans would consider obvious, such as “don’t convince teenagers to commit suicide” or “don’t cause a civil war in Ethiopia.” The toy example below makes the problem concrete.
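Here is a minimal, hypothetical sketch of an engagement-only ranking objective. The post data and the outrage_score field are invented for illustration, and no real platform’s code is shown; the point is simply that a signal the objective never mentions can never influence the result.

```python
# Toy illustration of objective misspecification: the ranker scores posts
# only by predicted engagement, so the (made-up) harm signal below is
# invisible to it unless a designer explicitly adds it to the objective.

posts = [
    {"id": 1, "predicted_clicks": 0.9, "outrage_score": 0.8},
    {"id": 2, "predicted_clicks": 0.4, "outrage_score": 0.1},
]

def engagement_only_score(post: dict) -> float:
    # The objective is exactly what was specified: engagement.
    # "Don't amplify outrage" was never written down, so it never counts.
    return post["predicted_clicks"]

ranked = sorted(posts, key=engagement_only_score, reverse=True)
print([p["id"] for p in ranked])  # the outrage-heavy post ranks first
```

Nothing in this ranker is malicious; it simply cannot weigh a factor that was never part of its objective, which is exactly how “decide what content to show users” went wrong at scale.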
If AI can cause unexpected worldwide consequences with a task as straightforward as “decide what content to show users,” it remains to be seen what unexpected consequences it will have in other fields. With its newfound legal capabilities, AI might one day be handed tasks such as “decide who is right about something” and “decide who goes to jail.” Whatever the consequences are, we ultimately won’t see them coming, because no matter how human an AI may seem, there will always be edge cases where it acts in completely inhuman ways.