When self-driving cars collide, who is responsible? Courts and insurance companies need to know what’s inside the ‘black box’

Written by Aaron J. Snowswell, Queensland University of Technology; Henry Fraser, Queensland University of Technology, and Rael Simcock, Queensland University of Technology

The first serious accident involving a self-driving car in Australia occurred in March this year. A pedestrian suffered life-threatening injuries after being hit by a Tesla Model 3 operating in “autopilot” mode.

In the US, the highway safety regulator is investigating a series of accidents in which Teslas on autopilot crashed into first-responder vehicles with flashing lights during traffic stops.

A Tesla Model 3 collides with a stationary emergency responder vehicle in the US. NBC / YouTube

Decision-making processes for ‘self-driving’ cars are often opaque and unpredictable (even for their manufacturers), so it can be difficult to determine who should take responsibility for such accidents. However, the growing field of “explainable artificial intelligence” may help provide some answers.

Who is responsible when self-driving cars crash?

Even though self-driving cars are new, they are still machines that manufacturers make and sell. When they cause harm, we must ask whether the manufacturer (or software developer) has fulfilled its safety responsibilities.

The modern law of negligence comes from the famous case of Donoghue v Stevenson, in which a woman discovered a decomposing snail in her bottle of ginger beer. The manufacturer was found negligent, not because he was expected to predict or directly control the snail’s behaviour, but because his bottling process was unsafe.

According to this logic, manufacturers and developers of AI-based systems such as self-driving cars may not be able to predict and control everything the “autonomous” system does, but they can take measures to reduce risks. If their risk management, testing, auditing and monitoring practices are not good enough, they should be held accountable.

How much risk management is adequate?

The difficult question will be “how much care and how much risk management is enough?” In complex software, it is impossible to test for every bug in advance. How will developers and manufacturers know when to stop?

Fortunately, courts, regulators, and technical standards bodies have experience in setting standards of care and liability for risky but useful activities.

The standards could be quite stringent, like the European Union’s draft AI Act, which requires risks to be reduced “as far as possible” without regard to cost. Or they could be more like Australian negligence law, which permits less stringent management of less likely or less serious risks, or where risk management would reduce the overall benefit of the risky activity.

Legal issues will be complicated by the opacity of artificial intelligence

Once we have a clear standard of risk, we need a way to enforce it. One approach could be to give a regulator powers to impose penalties (as the ACCC does in competition cases, for example).

Individuals harmed by AI systems should also be able to sue. In cases involving self-driving cars, lawsuits against manufacturers will be especially important.

However, for these lawsuits to be effective, courts will need to understand the technical processes and standards of AI systems in detail.

Manufacturers often prefer not to disclose these details for commercial reasons. But the courts already have procedures in place to balance business interests with an appropriate amount of disclosure to facilitate litigation.

An even greater challenge may arise when AI systems themselves are opaque “black boxes.” For example, Tesla’s autopilot function relies on “deep neural networks,” a common type of artificial intelligence system whose developers can’t be completely sure how or why it arrives at a particular result.

‘Explainable AI’ to the rescue?

Opening the black box of modern AI systems is the focus of a new wave of computer science and humanities scholars: the so-called “explainable AI” movement.

The goal is to help developers and end users understand how AI systems make decisions, either by changing how the systems are built or by generating explanations after the fact.

In a classic example, an AI system mistakenly classifies an image of a husky as a wolf. An “explainable AI” method reveals that the system was focusing on the snow in the background of the image, rather than the animal in the foreground.

Explainable AI in action: an AI system incorrectly classifies the husky at left as a ‘wolf’, and at right we see this is because the system was focusing on the snow in the background of the image. Ribeiro, Singh & Guestrin
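For readers curious what such an after-the-fact explanation looks like in practice, below is a minimal sketch using the open-source lime package from Ribeiro, Singh and Guestrin, the authors behind the husky/wolf example. The classifier in the sketch is a toy stand-in, not a real model: it “predicts” wolf whenever an image is bright overall, mimicking a system that has latched onto snowy backgrounds rather than the animal itself.

```python
# A minimal sketch of post-hoc explanation with LIME (Ribeiro, Singh & Guestrin).
# The classifier below is a hypothetical stand-in for the real black-box model.
import numpy as np
from lime import lime_image                        # pip install lime
from skimage.segmentation import mark_boundaries   # pip install scikit-image


def toy_classifier(images: np.ndarray) -> np.ndarray:
    """Map a batch of RGB images (values in [0, 1]) to [p_husky, p_wolf]."""
    p_wolf = images.mean(axis=(1, 2, 3))           # overall brightness ~ "snowiness"
    return np.stack([1.0 - p_wolf, p_wolf], axis=1)


# Stand-in for the husky photo: any H x W x 3 float image in [0, 1] works here.
image = np.random.default_rng(0).random((96, 96, 3))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    toy_classifier,
    top_labels=2,      # explain the two most likely classes
    num_samples=1000,  # perturbed copies of the image used to fit the explanation
)

# Highlight the image regions that pushed the model towards its top prediction.
# In the real husky/wolf case, these regions turned out to be the snowy background.
explained_img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
highlighted = mark_boundaries(explained_img, mask)  # image ready to plot or save
```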

How this might be used in litigation will depend on various factors, including the specific AI technology and the harm caused. A key concern will be how much access the affected party is given to the AI system.

The Trivago case

Our new research analysing a recent landmark Australian court case provides an encouraging glimpse of what this could look like.

In April 2022, the Federal Court fined global hotel booking company Trivago $44.7 million for misleading customers about hotel room rates on its website and in TV advertising, following a case brought by the ACCC. A key question was how Trivago’s complex ranking algorithm chose the top-ranked offer for hotel rooms.

The Federal Court set rules for evidence discovery with safeguards to protect Trivago’s intellectual property, and both the ACCC and Trivago called expert witnesses to provide evidence explaining how Trivago’s AI system worked.

Even without full access to Trivago’s system, the ACCC’s expert witness was able to produce compelling evidence that the system’s behaviour was inconsistent with Trivago’s claim of giving customers the “best price”.
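To illustrate the kind of outside-in testing that this sort of expert evidence can rely on, here is a hedged sketch with hypothetical data and function names (not the ACCC expert’s actual methodology): treat the ranking system as a black box, feed it offers with known prices, and measure how often its top-ranked offer is actually the cheapest.

```python
# A hypothetical illustration of auditing an opaque ranking system from the
# outside: we only observe its outputs, never its source code.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Offer:
    provider: str
    price: float           # nightly room rate
    pays_commission: bool  # hypothetical attribute the ranker might favour


def audit_best_price_claim(rank: Callable[[Sequence[Offer]], Sequence[Offer]],
                           searches: Sequence[Sequence[Offer]]) -> float:
    """Return the fraction of searches where the top-ranked offer is the cheapest."""
    hits = 0
    for offers in searches:
        top = rank(offers)[0]
        cheapest = min(o.price for o in offers)
        if top.price <= cheapest:
            hits += 1
    return hits / len(searches)


# Toy opaque ranker that quietly prefers commission-paying offers -- behaviour
# an auditor could detect purely from its outputs.
def opaque_ranker(offers: Sequence[Offer]) -> Sequence[Offer]:
    return sorted(offers, key=lambda o: (not o.pays_commission, o.price))


searches = [
    [Offer("A", 120.0, False), Offer("B", 135.0, True)],
    [Offer("C", 90.0, True), Offer("D", 99.0, False)],
]
rate = audit_best_price_claim(opaque_ranker, searches)
print(f"Top-ranked offer was the cheapest in {rate:.0%} of searches")
```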

This shows how technical experts and lawyers together can overcome the opacity of AI in court cases. However, the process requires close collaboration and deep technical expertise, and will likely be expensive.

Regulators can take steps now to simplify things in the future, such as requiring AI companies to appropriately document their systems.

The road ahead

Vehicles with various degrees of automation are becoming more and more popular, and fully autonomous taxis and buses are being tested in Australia and beyond.

Keeping our roads as safe as possible will require close collaboration between AI and legal experts, and regulators, manufacturers, insurers and users will all have roles to play.

Main image: Tesla / YouTube

Aaron J. Snowswell, Postdoctoral Research Fellow, Computational Law and Artificial Intelligence Accountability, Queensland University of Technology; Henry Fraser, Research Fellow in Law, Accountability and Data Science, Queensland University of Technology, and Rael Simcock, PhD candidate, Queensland University of Technology

This article has been republished from The Conversation under a Creative Commons license. Read the original article.
