Abstract

The workforce is digitizing. Leading consultancies estimate that algorithmic systems will replace forty-five percent of human-held jobs by 2030. This is a well-documented and alarming trend for the millions of truckers, bankers, and line workers whose jobs will become obsolete. But now that corporations are using algorithms like employees, another public threat that has received far less attention is arising: a growing corporate accountability gap. One feature that algorithms share with the human employees they are replacing is their capacity to cause harm. Even today, algorithms discriminate against loan applicants, manipulate stock markets, collude over prices, and cause traffic deaths. Ordinarily, corporate employers would be responsible for these injuries, but the rules for assessing corporate liability arose at a time when only humans could act on behalf of corporations. Those rules apply awkwardly, if at all, to silicon. Some corporations have already discovered this legal loophole and are rapidly automating business functions to limit their own liability risk. This Article seeks a way to hold corporations accountable for the algorithmic harms of their digital workforce. It draws inspiration from responses to earlier corporate efforts to dodge liability by manipulating the formal boundary defining employment. For more than a century, corporations have sought to jilt victims and immunize themselves by shifting operations from employees to various non-employee laborers, like temps, contractors, and gig workers. Lawmakers and scholars have responded to each of these machinations by developing functional tests that recharacterize some of these workers as employees, thereby closing the corporate accountability gap. This Article proposes an analogous approach for algorithms: some algorithms should be treated, for liability purposes, as corporate employees. Drawing on existing functional characterizations of employment, the Article defines the concept of an “employed algorithm” as one over which a corporation exercises substantial control and from which it derives substantial benefits. If a corporation employs an algorithm that causes criminal or civil harm, the corporation should be liable just as if the algorithm were a human employee. This would allow plaintiffs and prosecutors to leverage existing, employee-focused liability rules to hold corporations accountable when the digital workforce transgresses.
