Americans are increasingly concerned about the threat posed by social media platforms like Facebook, Twitter, and YouTube to “the freedom of speech.” Over the past two decades, the platforms have built virtual public squares, vastly expanding the power of users to communicate with one another. At first the platforms sought to avoid regulating user-created content, but for several years they have engaged in more robust “content moderation” under pressure from politicians, activist corporations, and users. The most notable incidents have been the indefinite suspension of President Trump’s Facebook and Twitter accounts, and the decisions by Apple and Amazon to stop providing services to Parler, Twitter’s right-leaning rival. At the same time, the threat to what James Madison called a “national sentiment” of free speech comes not only from content moderation, but also from those who use the platforms to shout down or bully their opponents.

Officials, pundits, and scholars have proposed various regulatory schemes to curb the platforms’ immense power over public discourse. Trump and President Biden have both urged amendment of Section 230 of the Communications Decency Act of 1996, the law that immunizes platforms from damages both for the user content they host and for removing any content they find “objectionable.” Others have suggested applying First Amendment content-neutrality rules to the platforms, prohibiting them from removing content on the basis of the ideas conveyed. Indeed, Trump has recently sued the platforms, arguing that they violated the First Amendment by suspending his accounts. Another option, one the states and federal government are already pursuing, is to enforce existing antitrust laws to promote competition among the platforms, ensuring users have options. Each of these strategies depends on a different conception of the relationship between the government and the platforms.
Are the firms more like the New York Times, such that they should be liable for what they publish but should not be compelled to publish content with which they disagree? Or are they more like the government, such that they should be forbidden from taking sides? Or are they something new? The answer turns on the definition of the freedom of speech and the government’s proper role in the liberal order.

Government regulation of platform content is not promising. Despite their outsized power, the platforms are nonetheless private actors facilitating private speech. Subjecting them to liability for user content would have the unintended consequence of dramatically reducing the opportunities for private speech on the internet, and subjecting them to liability for moderating that content would be inconsistent with the traditional and prudent understanding of the First Amendment. To be sure, this lets the platforms have their cake and eat it too. Government enforcement of antitrust rules, though, would ensure that whatever cake they get is the product of a fair market. Non-governmental institutions, too, have the power and responsibility to reduce their constituents’ dependence on social media.

The “social” part of social media is the deeper problem. Under the weight of illiberal ideologies, Americans have begun to shrink from what Justice Oliver Wendell Holmes called an “attachment” to “the principle of free thought—not free thought for those who agree with us but freedom for the thought we hate.” If intolerance continues to gain steam in American society, the debate about freedom of speech will evolve from one about content moderation by the platforms to one about the proper bounds of state power to enforce speech codes, and the vast difference between private and governmental power to restrain speech, now blurry, will become all too clear.