Emiliano Marchisio
Civil liability may be understood as indirect market regulation, since the risk of incurring liability for damages provides an incentive to invest in safety. Such an approach, however, is inappropriate in markets for artificial intelligence devices. The current paradigm of civil liability allows redress only insofar as “somebody” is identified as liable to pay it (either because of fault or pursuant to a strict liability rule). Yet robots and programs may “behave” quite independently of the instructions initially provided by programmers and manufacturers. Charging producers and/or programmers with liability even when the damage derives from a perfectly “correct” functioning of algorithms and robots acts as a disincentive to new technologies such as artificial intelligence. It would not foster safety with respect to technological issues, because there would be no “fault” to blame or prevent. Instead, it would expose producers and programmers to unforeseeable liability, discouraging them from entering or developing the market and thus hindering technological evolution. I therefore argue that, in the field of artificial intelligence, redress obligations for damages not caused by negligence, imprudence or lack of skill (i.e. where producers and programmers complied with scientifically validated standards) should move from the domain of civil liability to one of financial management of losses. This could mirror, I propose, the “no-fault” schemes currently adopted, with respect to medical liability for example, in a few jurisdictions such as New Zealand. My paper focuses, in particular, on the health-care market.