Our legal system has long tried to fit the square peg of artificial intelligence (AI) technologies into the round hole of the current tort regime, overlooking the inability of traditional liability schemes to address the nuances of how AI technology creates harm. The current tort regime deals out rough justice, applying strict liability to some AI products and the negligence rule to other AI services, and neither is sufficiently tailored to achieve public policy objectives. Under a strict liability regime, where manufacturers are held liable for the faults of their technology regardless of knowledge or precautionary measures, firms are incentivized to play it safe, stifling innovation. Yet even with this cautionary stance, the goals of strict liability cannot be met because of the unique nature of AI technology: its mistakes are merely “efficient errors.” They arise even as the system appropriately surpasses the human baseline, they pose game-theoretic problems properly left to a jury, they are necessary to train a robust system, or they are harmless misclassifications. Under a negligence regime, where the onus falls entirely on consumers to prove the element of causation, injured consumers must surmount the difficult hurdle of tracing causation through the “black box” of the algorithm. Unable to do so, many are left without adequate recourse or compensation.

This Article proposes a new framework for regulating AI technologies: granting corporate personhood to AI systems. First, the corporate personality trait of “limited liability” strikes an optimal balance in allocating liability: it would compensate victims (for instance, through mandatory insurance and a straightforward burden of causation) while holding manufacturers responsible only when the infraction is egregious (for instance, through veil-piercing). Second, corporate personhood is “divisible,” meaning that not every corporate personality trait need be granted, which circumvents many of the philosophical criticisms of giving AI the complete set of rights that full legal personhood entails. Third, innovation incentives weigh heavily in favor of shifting to a tort regime better tailored to the potential promises and unique harms of AI.