Recently, two US federal judges were forced to withdraw erroneous rulings, admitting that the documents had been drafted with the assistance of AI tools. In Mississippi, Judge Henry Wingate's legal assistant used Perplexity AI to compile information, while Judge Julien Xavier Neals of New Jersey revealed that a law school intern had used ChatGPT for legal research without authorization. Worryingly, both flawed rulings bypassed the regular review process and entered the judicial record directly.
Meanwhile, a large-scale study coordinated by the European Broadcasting Union found that when asked about current affairs, roughly 45% of responses from mainstream AI assistants such as ChatGPT, Copilot, Gemini and Perplexity contained at least one significant error. These tools not only confuse news with parody and get dates wrong, but even fabricate event details outright.
From solemn courtrooms to newsrooms, the unreliability of artificial intelligence is spreading from the virtual world into every corner of real life. When judicial decisions cite non-existent legal grounds and AI assistants routinely supply incorrect information, our blind trust in technology faces a severe test.
Behind these frequent accidents lies a combination of AI technology's inherent flaws and improper human use. In Neals's case, the law school intern's use of AI not only violated the court's policies but also contravened his law school's own rules. This unexamined application of technology exposes how blind our institutional reliance on emerging technologies has become.
Human blind trust in AI, and improper use of it, have further magnified these risks. Those erroneous legal documents circulated precisely because they bypassed the necessary review procedures, a reflection of excessive faith in AI output. Dazzled by the technological halo, we seem to have forgotten that no matter how advanced AI becomes, it remains merely a tool demanding strict supervision and prudent use.
These technical flaws are generating real social costs. In the judicial system, AI-generated errors that go undetected may directly harm the rights and interests of the parties involved and undermine judicial fairness. More worrying still, once an AI system errs, its impact spreads at digital speed: a single algorithmic flaw may affect tens of thousands of users simultaneously, while traditional error-correction mechanisms struggle to keep pace.
Global surveys show that only about half of the respondents say they trust AI technology, and the level of trust is even lower in North America and Europe. This lack of trust may hinder technological innovation and social progress.
Facing this crisis of AI reliability, all sectors of society are exploring countermeasures. The judicial field has already acted: both judges who erred say they have taken steps to strengthen the review process. Judge Neals in particular noted that he has now established a clear written AI-use policy applicable to all legal assistants and interns.
From a technical perspective, it is crucial to establish traceability mechanisms for AI-generated content and to attach "trustworthiness" labels to digital information. At the legal level, the criteria for identifying "technical rumor-mongering" need to be clarified and the cost of violations raised. And in the design of AI systems, introducing "circuit breaker" mechanisms and "one-click control" measures for high-risk scenarios would allow rapid intervention in extreme situations, preventing damage from spreading.
The role of human supervision is especially crucial. In high-risk fields such as justice, healthcare, and security, AI should always serve as an auxiliary tool rather than a decision-maker, and its output must undergo strict human review. Technology can advance and tools can be updated, but responsibility always rests with people.
In explaining the withdrawal of the erroneous ruling, Judge Wingate promised: "I have taken measures in my office to ensure that this mistake will not happen again." Yet unlike human mistakes, AI errors are replicable and scalable. In an era when algorithms increasingly permeate daily life, true wisdom lies not only in creating intelligent tools but also in knowing how to use them wisely.