In the Likeness of the Human Mind: AI Liability and the Speculative Fiction of Dune

[Header image: generated by the author using Stable Diffusion, an AI art generator. At the time of writing, this art cannot be copyrighted and is not owned by anyone.]

 

A note on the use of the word Jihad:

Frank Herbert’s Dune novels use the word Jihad to mean a holy war. This is an erroneous translation that diminishes the breadth of situations to which the word applies. For accuracy, this article uses Jihad in the places where Herbert used it, with the understanding that the word has been divorced from its original religious meaning.

 

In Frank Herbert’s Dune, it is a vital fact that computers, as we understand them, do not exist. Instead, humans called Mentats are trained to do the work of calculators themselves. This distrust of machine intelligence stems from an event known as the Butlerian Jihad, an apparent revolt in the distant past in which machines were destroyed altogether. The distrust of artificial intelligence and the veneration of the human mind have become first principles in the far-future setting of Dune. While the specifics of the Jihad are kept extremely vague in the six books Herbert wrote himself, a few quotes clue the reader in to what may have happened.

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” – Dune

“Thou shalt not make a machine in the likeness of a human mind.” – Dune

“‘The target of the Jihad was a machine-attitude as much as the machines,’ Leto said. ‘Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments. Naturally, the machines were destroyed.’” – God Emperor of Dune

These quotes hint at the truth of the Butlerian Jihad: machine intelligence was a way to make human actions unaccountable. When machines make choices that lead to injury, they obscure the person ultimately liable for that injury, even though the actual cause is still, eventually, a human. The destruction of machines in Dune was an attempt to sunder that veil and make people accountable for their actions again, rather than passing the buck to machines.

Dune raises questions applicable to our own society. What is the critical point at which you can no longer find who is liable for a machine’s action? Are we nearing that point with computers that seem able to communicate and make decisions with limited human input? And who should be held responsible when machines do fail?

The explosion in technology surrounding large language models (LLMs) like ChatGPT and Bing AI has brought these questions to a head. As Professor Eugene Volokh explains in his essay “Large Libel Models? Liability for AI Output,” these programs often simply fabricate information about people. This false information can cause real reputational harm and has already been the source of a lawsuit.

Nor is the problem limited to large language models; other computers that make decisions are also in the crosshairs. Tesla is about to defend itself in multiple cases alleging that the Autopilot system on its cars caused fatal crashes. These types of AI are different, but the story is the same: a program designed to react to novel situations does so in a way that causes harm.

 

The Issue of Multiple Potential Causes

So, who is to blame when a robot harms someone? The answer sounds obvious: the person who made the robot. But with the black box that is generative intelligence, it isn’t that simple. Multiple parties can play a role in the creation of generative intelligences. The company that builds the program may lease it to another company that feeds it industry data. Then the end user gets hold of it, and their input into the program also factors into the final output. Juries may not have the technical know-how necessary to make determinations of liability. This obfuscation is what Dune gets at: the mere presence of AI as the initial cause of harm makes finger-pointing more likely.

A potential solution is alternative liability. When the court is unable to determine which member of a group of potentially liable parties is responsible for the behavior of an AI, it can shift the burden to the parties themselves to prove which of them it should be. This forces the companies to show that, where negligence exists, it was not their own; otherwise, they can be held jointly and severally liable for that negligence. This form of liability produces a smoking-out effect in which each potential tortfeasor is incentivized to prove that another party is causally linked to the harm. While its application to AI would be novel, the use of alternative liability overall is not.

 

The Issue of Proximate Causation

Another possible point of failure is that harm from an AI may be too attenuated to create liability for the manufacturer. Within the black box of a generative intelligence’s internal processes, it may be difficult to prove that the risk of harm was foreseeable, which is necessary for negligence liability. The entire purpose of generative intelligence is that it does not need to be programmed for every potential set of inputs. The risk, then, is that these programs have a near-infinite number of possible outputs. This makes the tort requirement that a particular harm have been foreseeable much more difficult to satisfy. While we can easily say that ChatGPT has the foreseeable potential to defame someone, that has only become clear in hindsight. There may well be numerous other ways that it could cause harm that even OpenAI cannot reasonably predict.

The courts need to decide how strictly they will keep to the foreseeability principle. It would be easy enough to say that any tort a human could commit by spreading information (defamation and the like) is foreseeable for large language models. Similarly, it is foreseeable that a self-driving car could crash. But what about other torts, ones that seem less obvious? For example, did you know that large language models have the potential to divulge trade secrets? Especially as these programs increase their ability to adapt to various inputs, their potential outputs become less and less foreseeable.

A broad understanding of foreseeability is desirable here. Not requiring the precise type of harm to have been strictly foreseeable would go far toward ensuring that people injured in novel ways by AI activity are not left holding the bag. It would also help ensure that the tech companies creating AI programs focus on the accuracy of their tools’ output and on their social responsibility to produce tools that cannot easily be turned toward hurting people.

The risk that easier tort litigation will chill the development of AI is minimal here. The current, somewhat unclear, risk of litigation has not stymied the development of and investment in AI tools. If anything, more clarity about what potential liability these companies could face, and how they can avoid it, would likely increase future investment.

 

The Inapplicability of § 230

But wait, doesn’t Section 230 of the Communications Decency Act shield emerging technologies from liability? Don’t we protect such innovation from the full brunt of the tort system as a subsidy to encourage innovation? Not exactly. Section 230 protects carriers of user-generated information from liability for that information. A classic example: when a website hosts defamatory material, only the user who posted it, not the website, can be held liable.

AI programs are different because the content is at least partially produced by the program itself. Section 230 does not apply to such output, at least on its face. Some may argue that the necessary input from the users of an AI program makes its content “user-created” in a sense. However, that argument has not stood up in the context of copyright, and it likely will not for liability either.

 

Dune’s Solution to Computer Liability

Frank Herbert does present a solution to the problem of AI liability in Dune, and it is not destroying all computers. Computers still exist after a fashion in Dune; they just are not machines. Mentats, the human calculators that have replaced machines, solve many of the outstanding questions of computer liability. With a human doing the work of a machine, the black box of computer reasoning disappears. A human can tell you how they came to their conclusion and what inputs were most important to their choice. And even when they cannot, they and their employers can still be held responsible for the harm they cause.

Now, does that mean we should train children from birth to be calculators? That strategy is no more workable than destroying all computers. But it does demonstrate another way to make liability clearer: thorough, easy-to-access documentation of these programs is vital to tracing their decision-making processes. By taking steps to ensure that AIs do not present a black box, we can avoid reaching that critical point at which liability is impossible to prove.

 

Conclusion

The current system of assigning liability is strained by the development of modern AI technology. It is not that our current tort system cannot handle determining such liability; running up the chain of causation is a time-honored tradition in tort law. However, the obfuscation of the processes by which AI makes its decisions makes it increasingly difficult to assign liability.

The lesson of Dune, as it applies to AI at least, is to remember that humans still ultimately decide how to build and train these new AI tools. AIs, like all other programs, do what they are told to do, albeit more indirectly than previous computer technologies. The Butlerian Jihad in Dune is remembered through myth as an attempt to hold those in power accountable for their choices. It didn’t work: by the time the events of Dune take place, the galaxy has regressed to feudalism. Perhaps the issue was never actually the machines; it was a social order unequipped to impose liability on the creators and users of artificial intelligence.

 

Suggested Citation: Andrew Brockmeyer, In the Likeness of the Human Mind: AI Liability and the Speculative Fiction of Dune, Cornell J.L. & Pub. Pol’y, The Issue Spotter (October 18, 2023), http://jlpp.org/blogzine/in-the-likeness-of-the-human-mind-ai-liability-and-the-speculative-fiction-of-dune/.

 

