Abstract

In its first iteration, this paper recommended a variation of Ugo Pagallo’s ‘digital peculium’ liability scheme for hard cases in which fully autonomous robots make decisions outside the scope of pre-programmed uncertainty, absent an appropriate link to the original programmer. Situating Pagallo’s ‘hard cases’ within the larger jurisprudential framework laid out by H.L.A. Hart and Ronald Dworkin, the paper concluded by considering whether a right answer exists, or is conclusively indeterminate, for the application of legal accountability to ever-increasing robotic autonomy. Among other reasons, advances in the global framework for automated driving necessitate an update. Reflecting issues arising from the United Nations global framework for automated driving, as well as developments in autonomous vehicle testing in the United States, future discussion will be practical and the question more discrete: At what point ought the law to hold supply-chain actors liable, rather than consumers, because the decision creating the harm was not a function of consumer misuse but a failure of consumer education? Narrowing the question further, the next iteration operates wholly within the context of autonomous vehicles, though the discussion also lends itself to extrapolation across the larger field of robotic liability. The updated iteration prioritizes consumer education at the industry level and standardized nomenclature across jurisdictions. Analogizing to voluntary over-the-air software updates, which remain controversial in some circles, the updated paper argues that voluntarily standardized nomenclature at the industry level is imperative for two reasons. First, it is necessary for the certainty levels required to insure autonomous vehicles. Second, it is a matter of good business: selling the product, and staying in business once the product is on the market, presupposes consumer trust.
Placing a premium on public education establishes the transparency needed to build consumer trust in innovative technologies. The paper also recommends future emphasis on explainable artificial intelligence and a deliberate shift of industry resources to address algorithmic bias.
