The first serious accident involving a self-driving car in Australia occurred in March this year. A pedestrian suffered life-threatening injuries when he was hit by a Tesla Model 3 in “autopilot” mode.
In the US, the highway safety regulator is investigating a series of accidents in which Teslas on Autopilot crashed into first-responder vehicles with flashing lights during traffic stops.
Decision-making processes for ‘self-driving’ cars are often opaque and unpredictable (even for their manufacturers), so it can be difficult to determine who should take responsibility for such accidents. However, the growing field of “explainable artificial intelligence” may help provide some answers.
Who is responsible when self-driving cars crash?
Even though self-driving cars are new, they are still machines that manufacturers make and sell. When they cause harm, we must ask whether the manufacturer (or software developer) has fulfilled its safety responsibilities.
The modern law of negligence comes from the famous case of Donoghue v Stevenson, in which a woman discovered a decomposing snail in her bottle of ginger beer. The manufacturer was found negligent, not because he was expected to predict or directly control the behaviour of snails, but because his bottling process was unsafe.
According to this logic, manufacturers and developers of AI-based systems such as self-driving cars may not be able to predict and control everything the “autonomous” system does, but they can take measures to reduce risks. If their risk management, testing, auditing and monitoring practices are not good enough, they should be held accountable.
How much risk management is adequate?
The difficult question is: how much care and how much risk management is enough? In complex software, it is impossible to test for every bug in advance. How will developers and manufacturers know when to stop?
Fortunately, courts, regulators, and technical standards bodies have experience in setting standards of care and liability for risky but useful activities.
The standards could be quite stringent, like the EU’s draft AI Act, which requires risks to be reduced “as far as possible” without regard to cost. Or they could be more like Australian negligence law, which permits less stringent management of less likely or less serious risks, or where risk management would reduce the overall benefit of the risky activity.
Legal issues will be complicated by the opacity of artificial intelligence
Once we have a clear standard of risk, we need a way to enforce it. One approach could be to give a regulator powers to impose penalties (as the Australian Competition and Consumer Commission (ACCC) does in competition cases, for example).
Individuals affected by AI systems should also be able to file a lawsuit. In cases related to self-driving cars, lawsuits against manufacturers will be especially important.
However, for these lawsuits to be effective, courts will need to understand the technical processes and standards of AI systems in detail.
Manufacturers often prefer not to disclose these details for commercial reasons. But the courts already have procedures in place to balance business interests with an appropriate amount of disclosure to facilitate litigation.
An even greater challenge may arise when AI systems themselves are opaque “black boxes.” For example, Tesla’s autopilot function relies on “deep neural networks,” a common type of artificial intelligence system where developers can’t be completely sure how or why it arrived at a certain result.
‘Explainable AI’ to the rescue?
Opening the black box of modern AI systems is the focus of a new wave of research in computer science and the humanities: the so-called “explainable AI” movement.
The goal is to help developers and end users understand how AI systems make decisions, either by changing how the systems are built or by generating post-hoc explanations of their outputs.
In a classic example, an AI system mistakenly classifies an image of a husky as a wolf. An explainable AI method reveals that the system was focusing on the snow in the background of the image, rather than the animal in the foreground.
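The husky/wolf example can be illustrated with a toy sketch of a post-hoc, perturbation-based explanation. Everything here is hypothetical: the “classifier”, its features, and its scores are invented for illustration and bear no relation to any real system. The flawed model secretly keys on the background-snow feature; flipping each feature in turn and measuring how much the prediction moves exposes which feature actually drives the decision.

```python
def classify(features):
    """Toy image classifier: returns the probability the image is a 'wolf'.
    Deliberately flawed: it has learned to rely on snow, not the animal."""
    return 0.9 if features["background_snow"] > 0.5 else 0.2

def explain(features):
    """Perturbation-based post-hoc explanation: flip each feature and
    measure how much the prediction changes (bigger change = more
    important to the model's decision)."""
    baseline = classify(features)
    importance = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = 1.0 - perturbed[name]  # flip this feature only
        importance[name] = abs(classify(perturbed) - baseline)
    return importance

# A 'husky' image with snowy background (hypothetical feature values).
image = {"pointed_ears": 0.8, "grey_fur": 0.7, "background_snow": 0.9}
print(classify(image))  # high 'wolf' score
print(explain(image))   # only background_snow changes the prediction
```

Real explainable AI methods such as LIME or SHAP are far more sophisticated, but they rest on the same basic intuition: probe the model with altered inputs and see what its output actually depends on.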
How this is used in a lawsuit will depend on various factors, including the specific AI technology and the damage caused. The main concern will be the extent of the affected party’s access to the AI system.
Our new research analysing a recent landmark case in the Australian Federal Court provides an encouraging glimpse of what this could look like.
In April 2022, the Federal Court ordered global hotel booking firm Trivago to pay $44.7 million in penalties for misleading customers about hotel room rates on its website and in television advertising, following a case brought by the ACCC. An important question was how Trivago’s complex ranking algorithm chose the top-ranked offer for hotel rooms.
The Federal Court set rules for the discovery of documents, with safeguards to protect Trivago’s intellectual property, and both the ACCC and Trivago called expert witnesses to give evidence explaining how Trivago’s AI system worked.
Even without full access to Trivago’s system, the ACCC’s expert witness was able to provide compelling evidence that the system’s behaviour was inconsistent with Trivago’s claim of giving customers the “best price”.
This shows how tech experts and lawyers together can overcome the ambiguity of AI in court cases. However, the process requires close collaboration and deep technical expertise, and is potentially expensive.
Regulators can take steps now to simplify things in the future, such as requiring AI companies to appropriately document their systems.
Vehicles with various degrees of automation are becoming more common, and fully autonomous taxis and buses are being trialled in Australia and overseas.
Keeping our roads as safe as possible will require close collaboration between AI and legal experts, and regulators, manufacturers, insurers and users will have roles to play.
This article has been republished from The Conversation under a Creative Commons license. Read the original article.