Society is waking up to surveillance capitalism, the influence of digital advertising platforms on democracy, and discriminatory algorithms. However, academics have yet to emphasize the civil rights and consumer harms that result from ad-targeting for inferior and harmful versions of essential consumer goods and services. This Article aims to fill that gap. It analyzes how reverse redlining — the predatory and discriminatory targeting of harmful products — occurs through segmentation, targeting, and ad-delivery. Providers of inferior social welfare products leverage these tools to manipulate low-income people and consumers of color into buying their products with staggering efficiency and at scale. The advertising platforms themselves often play a role in reverse redlining by optimizing the delivery of advertisements to particular audiences and amplifying advertisers’ messages through look-alike tools.

Using the for-profit education industry as a case study, this Article decodes the technology, considers legal challenges to the harms that result, and proposes strengthening civil rights and consumer protections. For-profit education offers a paradigmatic example of an inferior social welfare product: its higher price points and worse outcomes indicate that for-profit colleges and universities (“FPCUs”) can hardly compete for students without manipulative advertising. This Article offers new evidence of the ways for-profit universities identify consumers, leverage advertising platforms to find them, and manipulate them into purchasing a poor product, harming hundreds of thousands of consumers. Civil rights, consumer protection, and privacy laws each offer avenues to redress the resulting harms, but a patchwork legal regime built in a different era makes redress difficult.
In conclusion, the Article builds on the foundation of civil rights protections for social welfare products such as housing, employment services, and credit to propose new laws limiting the reverse redlining of predatory products. Specifically, it recommends banning targeting, amplification, and ad-delivery optimization; increasing transparency; clarifying platform liability; and updating the civil rights regime. This social welfare product approach is designed to address First Amendment concerns and to be narrowly tailored to allow for beneficial uses of the technology.