Instigator / Pro

The United States federal government should repeal Section 230 of the Communications Decency Act.


The debate is finished. The distribution of the voting points and the winner are presented below.

After not so many votes, it's a tie!
Time for argument: Twelve hours
Voting period: Two weeks
Point system: Multiple criterions
Contender / Con

No information

Round 1
Please include all sources!!

I affirm the resolution.

Contention one is holding corporations accountable.

Under Section 230, companies lack incentive for regulation. Funk 23 reports: “[Section 230] shields… sites hosting content from legal liability for most material created by users.”

This can be calamitous. Warner 21 explains, “Section 230 has provided a ‘Get Out of Jail Free’ card to …companies even as their sites are used by scam artists, … and violent extremists to cause damage and injury.”

With that, subpoint A is deterring extremism.

As the use of social media has increased, so has terrorism. VCU 23 finds that “incidents involving domestic terrorism [are] at a 25-year high,” even as social media has never been more prominent.

Social media fuels these activities. Jensen 18 writes, “social media played a role in the radicalization processes of nearly 90% of extremist[s],” up “from a mere 8% in 2005,” adds VCU 23.

Sher 23 corroborates that minimal government regulation of social media enables extremist ideologies, endangering millions of Americans.

They further explain that “platforms like Facebook and Twitter involved in the process of instigating extremism evade legal responsibility by relying on Section 230.”

Luckily, repealing solves. Gate 23 continues, without Section 230, “The government would be able to prosecute companies with automated systems tied to acts of terrorism, [and] extremism.”

Additionally, widespread access to social media platforms provides ample opportunity for an increase in terrorism, not solely in the US but worldwide, reports OJP 17.

Increasing terrorism is detrimental. VH 22 found, terrorism “Attacks have become more deadly with the lethality rising by 26%.”

SRD 23 adds that in 2021, nearly 24,000 people were killed by terrorists while another 16,300 were injured. 

Widespread terrorism decreases food security. GTI 23 found that 58% of the 830 million facing food insecurity live in the 20 countries most affected by terrorism.  

Subpoint B is drug trafficking.

AAC 23 reports, “73% of people who report purchasing illegal drugs did so over social media apps.”

Hohmann 21 adds, “Section 230…offer[s] platforms a haven for drug-trafficking.”

However, Mann 23 finds “tech companies have been slow to use their technology to help law enforcement trying to catch drug dealers.”

Moreover, Bergman 23 furthers that “[D]rugs… promoted and sold on social media platforms…are often sold by cartels that substitute the drugs with deadly fentanyl without the buyer’s knowledge.”

The effect is deadly. Hoffman 22 reported “108,000 drug fatalities in the United States last year” due to tainted pills.

Overdosing furthers the drug trafficking detriment. The CDC found “Drug overdose is a leading cause of … mortality in the United States,” “increas[ing] at an annual rate of 4%,” adds NCDAS 23.

Affirming solves. Manchin 23 explains, “holding… corporations …who enable these illicit drug marketplaces, are critical to preventing overdoses and stemming the flow of dangerous drugs into our communities.”

Contention two is Bank Runs.

Lasalo 23 reports “US financial institutions could be vulnerable to social-media-induced bank runs…that… use misinformation and bots to…create chaos in the financial system.”

This has been seen historically. Cookson 23 adds “economists who studied the tweets that predated the [Silicon Valley] bank run [found]... social media fuel[ed the] financial panic"

Merler 23 finds, “Financial misinformation via social media has proved before to be a powerful crisis engine.”

They further, “in 2011, a…Twitter rumour triggered a bank run on Swedbank.”

Stanford 23 reports, “The recent rise in interest rates by the Federal Reserve has increased the fragility of the U.S. banking system to the point that a substantial number of institutions are at risk of failing.”

Lip 23 explains, “When multiple banks are involved, it may create a cascading industry-wide panic that can lead to a financial crisis and economic recession.”

The effects of a recession would be detrimental. Duignan 23 found that during the 2008 recession unemployment rose 5% while GDP declined by 4.3%, leaving 15.6 million people unemployed, quantifies Hinkley 18.

Moreover, EPI corroborates that the “poverty rate rose [by] 3.7 million people.”

Additionally, like ’08, the effects will be worldwide. According to ITR 23, “the 2024 recession will not be confined to the US… as the US is such a large part of the global economy.”

Thus I am proud to affirm.

I negate the resolution.

Order is framework, my case, then my opponent's case.


Utilitarianism - The winner of this round should be the case that maximizes the greatest amount of good for the greatest amount of people. This should provide a fair way to weigh each case.

Con Case

A repeal of Section 230 will lead to two possible reactions from the court system and social media companies: under-moderation and over-moderation.

Contention one is under-moderation

Rozenshtein 23 finds that prior to Section 230, in the case Stratton Oakmont, Inc. v. Prodigy, a court found the Money Talk board liable for user-generated content because it exercised editorial control over the content of the messages posted on the platform. It was clear that Stratton Oakmont perversely incentivized platforms not to moderate content, since it was Prodigy’s decision to moderate some content that led the court to hold it liable as a publisher for any content it allowed to remain on its platform.

Ryan-Mosley 23 furthers that if Section 230 is repealed or reinterpreted, individual users of sites may suddenly be liable for run-of-the-mill content moderation.

Community moderation is essential to platforms like Reddit and Wikipedia. In an amicus brief to the Supreme Court, Wikimedia writes that Wikipedia “. . . simply could not exist without Section 230.”

Under-moderation could lead to an increase in extremism, while the loss of community moderation would destroy platforms like Reddit and Wikipedia that rely on allowing users to create their own spaces.

Contention two is over-moderation

The alternative for social media platforms to allowing users free rein over the content they post is to over-moderate.

Opening up social media platforms to litigation from user generated content will incentivize companies to delete potentially objectionable content with a broad brush. Fire 23 finds repealing or undermining Section 230 would lead only to less expressive freedom and viewpoint diversity online — to the detriment of us all.

Pro Case

Contention one subpoint A: Deterring extremism

1. Turn the contention, as platforms that choose to not moderate at all will avoid liability for allowing extremism to fester.

2. State legislatures are attempting to regulate the internet, determining what is extremism and what is censorship. Kern 2023 finds that state legislators have introduced more than 100 bills in the past year aiming to regulate how social media companies handle posts. Texas passed a bill aimed at preventing platforms from censoring users' viewpoints.

Kern furthers that blue states are joining the trend as well, though Democrats’ emphasis is on pressing social media companies to establish policies for reporting hate speech, violent content, and misinformation.

Section 230 serves as a shield for social media companies against this kind of legislation. A repeal of Section 230 may lead to a patchwork of laws where Democrats push for higher moderation and Republicans attempt to "ban censorship". The resulting fractured internet will lead to more polarization and partisanship.

Contention one subpoint B: Drug Trafficking

1. For platforms that choose to undermoderate, turn the contention, as the supply of drugs on social media will increase due to lack of moderation.

2. Giving more tools to police to bust drug dealers leads to more overdoses. Facher 2023 paradoxically finds that in the week following a major opioid bust, fatal overdoses in the same neighborhood doubled. When police officers arrest drug dealers, customers find other dealers who sell drugs that contain higher levels of fentanyl or new adulterants altogether, like xylazine or lidocaine.

Contention two: Bank runs

1. For platforms that choose to undermoderate, turn the contention due to a decrease in moderation.

2. For platforms that choose to over-moderate, reducing bank runs may prove difficult. Regarding Silicon Valley Bank, Cookson 2023 finds that SVB had weak balance sheets. Social media users pointing that out isn't misinformation, and social media companies wouldn't be held liable for a crime that wasn't committed. Merler 2023 points to a key difficulty in handling bank runs broadly, stating: "Because they had to be dealt with in such haste, authorities took problematic decisions." There's no reason to believe the same issue would not apply to social media regulation.

Round 2
First on Utilitarianism.

This round should be evaluated as to whichever side provides more net benefits/harms. Whether or not those benefits/harms are to the people is up to the weighing/meta weighing interpretations.

First moderation as a whole:

Good moderation is possible: Wang 23 attests, “(AI)-based moderation systems have been [...] used by social media companies to identify and remove inappropriate user-generated content (e.g., misinformation) on their platforms.” AI is a good way to make increased moderation possible. Money is not a problem in the case of moderation.

Next, under-moderation.

Realize, as of right now, there is no moderation. Sites are allowed to run rampant, to everyone's detriment. Look to our case: terrorism, drug trafficking, etc. We can see that in the NEG world, the status quo, the unenforced present does not work.

In terms of Oakmont v. Prodigy: that is irrelevant. Section 230 was created to govern moderation at the beginning of the internet, at a time when social media wasn't a huge deal. Now social media has never been more prominent. As long as that is true, Section 230 is outdated. These companies need to be moderated.

When people know they can make a systemic change by targeting the root, the platform that's allowing the hate, they will fight back against it. Since the companies don't want to continuously get sued they will moderate.

Sites like Wikipedia can absolutely exist, because correct moderation will mean these platforms can be saved.

Now over-moderation

Now realize my opponents are trying to paint this as black and white: either under- or over-moderation. This is not the case. Since it is clear from the previous that there won't be under-moderation, we look towards over-moderation.

This will not be an issue because sites are profit-maximizing. While these platforms will of course moderate their sites, they will have no need to over-moderate, because doing so would lose them too many participants.

The point my opponents are trying to make is that it would be censoring free speech. However, remember free speech does not apply to cases of harming others: Smith 21 of a Harvard review states “All of the duty-of-care proposals on the table today address content that is not protected by the First Amendment... There are no First Amendment protections for speech that induces harm." It is exactly the same as all other types of limitations on free speech. This is not over-moderation but simply moderation.

Again, on censorship/over-moderation of marginalized voices: Joad 16 finds “[it would be] wrong to claim that giant technological corporations possess power that goes beyond the control of elected governments." At the end of the day the government will still decide, and if speech or people are being harmed, it will not rule in the platforms' favor.

In the NEG world there is more censorship than there will be in the AFF world. Rodriguez 23 says, "Current content moderation systems already disproportionately silence Black people and other historically marginalized populations, even when they do not violate platform rules.”

Mastantuono 23 announces, “Recent studies have linked the regular use of digital platforms with [...] diminished confidence in American democracy due to the proliferation of “fake news.” 

When the choice is between a world plagued by excess terrorism, increased drug trafficking, and ever-increasing bank runs killing the economy and risking a recession, it is clear the NEG world provides far too many deaths and too much detriment.

Now onto the Pro case

1. First, they cannot just refuse to moderate, because the constant lawsuits and copious amounts of money would lead to regulation and moderation. Next, realize the turn wasn't weighed and therefore cannot properly be evaluated in the round. We, however, have the probability in this case, because if a mother goes to a courtroom and says her child was killed by illegal fentanyl from drug trafficking, a judge will be likely to rule in her favor. As these complaints increase, so too will the amount of money these companies owe, and it will become unsustainable for them.

2. Next, on state legislatures: they have not done enough. Section 230 is old, outdated, and not working (Wang 23). Look to our entire case. The only solution is repealing Section 230 and moderating.

3a. On Democrats: realize that not everyone in our country is in a blue state. The split is ever-changing, not static.

3b. On misinformation specifically: realize that misinformation is still a huge problem. The regulations IN THE NEG WORLD have not worked; a repeal of Section 230 is what changes that.

Polarization in the status quo has never been more of an issue; as long as this is the case, the internet is what unites. Both Democrats and Republicans are trying to create the best world. Without Section 230, and with moderation, a SOLUTION in hand, that's when unity happens.

On Drug Trafficking:

1. First, realize companies have the means to stop illicit drug transactions via their platforms but choose not to due to Section 230 (CRS 22).

2. Not true. Look to our case: they will be forced to, due to the costs associated with these lawsuits.

3. Next, turn their own response against them, because the internet without moderation is what gives dealers more leeway. We need to crack down on it, and since the internet is one of the most prominent places where these drugs are sold, start there. Arresting dealers and stopping the spread of these drug businesses are two separate things. Due to declining advertisement, dealers have fewer customers; with arrests alone, another dealer will simply take his place.

Contention two: Bank runs

1. First remember there will be moderation.

2. Realize they fundamentally misunderstand what happened with bank runs. Misinformation spreads, which leads to bank runs. By curbing the flow of this misinformation, we are solving.

Thus we affirm.

My opponent is providing a broad strokes approach and telling you there will be surgical results. Online moderation will be unable to hit a goldilocks zone after section 230 is repealed, thus I negate. 

Order will be framework, my case, then my opponent's case.


I agree to net benefits/harms. Our impacts are similar so there shouldn't be much disagreement on framework. 

Con Case

On Moderation As a Whole

AI moderation is not yet developed enough to provide high-quality moderation. 

1. At the same time that AI gets better at combatting disinformation, AI will get better at creating it. Morrish 23 writes that the race between fact-checkers and those they are checking on is an unequal one. The scale of what generative AI can produce, and the pace at which it can do so, means that this race is only going to get harder.
2. AI has fundamental flaws that disqualify it from being able to fact-check fully automatically. Morrish 23 continues that large language models cannot detect nuance in language and don’t know what facts are. LLMs also lack knowledge of day-to-day events, meaning they aren’t useful when fact-checking breaking news.

3. AI moderation may reproduce and amplify current biases and stereotypes. Rzeszucinski 23 writes to be as robust as possible, the algorithm needs a vast amount of training data. Each different entry has the possibility to be profoundly biased. To perform mitigation on each of these would require an infinite amount of work.

On Under-moderation

Clearly moderation on the internet exists today. The website we're on right now is an example. 

My opponent ignores Oakmont v. Prodigy. Note that Oakmont v. Prodigy was paired with Cubby Inc. v. Compuserve, in which the court ruled that Compuserve could not be held liable for user-generated content because Compuserve was a distributor. Rozenshtein 23 writes: "In reaching this conclusion, the court emphasized that CompuServe did not review content before it was released on the forum and made available across CompuServe."

The pair of decisions provides a legal defense for companies that don't moderate their platform, while effectively punishing companies that do moderate. The cases are relevant because this is the legal world that we return to under a repeal of Section 230.

Regarding community-based moderation, extend Ryan-Mosley, which finds that users of sites may be held liable for their content moderation decisions. If users get too scared of litigation to moderate platforms, sites like Reddit and Wikipedia will cease to exist. Walker 2022 finds that the work done to moderate just 126 of Reddit's 2.8 million subreddits represented $3.4 million worth of unpaid work, nearly 3% of the site's revenue.

On Over-moderation

The best argument for the risks of over-moderation is my opponent's own card.

There is no reason to believe that social media platforms would engage in careful moderation that protects free speech rights. 

My opponent cites Rodriguez 2023, which reads that without Section 230, online platforms would minimize the risk of liability for illegal content by engaging in heavy-handed, cost-effective censorship instead of carefully reviewing every piece of content.

Rodriguez 2023 furthers that increased censorship by platforms seeking to evade liability would further silence diverse perspectives on important issues like racial and gender justice. We also noted that as we conduct more and more of our daily lives online, automated decision-making systems risk reproducing discrimination at scale.

Pro Case

Deterring Extremism

Crossapply the reasoning of Cubby Inc v. Compuserve. Companies are incentivized not to moderate, because of court precedent that a lack of moderation provides a legal defense against liability. Probability weighing goes towards the CON side, because I have provided evidence of court precedent, whereas my opponent has provided a hypothetical.

On state legislatures, my opponent focuses on blue states that want to increase moderation while ignoring red states that want to effectively ban moderation. Extend Kern 2023, which finds that Texas passed a bill preventing social media from "censoring users' viewpoints", making content moderation more difficult. This is part of a national trend; Izaguirre 21 explains Republican state lawmakers are pushing for social media giants to face costly lawsuits for policing content on their websites, taking aim at a federal law that prevents internet companies from being sued for removing posts.

The split in how state legislatures handle content moderation will lead to more moderation in blue states and less moderation in red states, leading to an increase in polarization.

Drug Trafficking

Again, crossapply the reasoning from Cubby Inc v. Compuserve. Some social media platforms will choose not to moderate as a legal shield. These platforms will lead to an expansion of drug markets, increasing supply.

At the same time, a repeal of section 230 will harm drug users by increasing overdoses. Extend Facher 2023. Even if you don't buy that a repeal of Section 230 will lead to an increase in arrests, removing suppliers from platforms will have the same effect. Users won't be able to get drugs from suppliers they trust, and will be forced to turn to riskier suppliers to compensate. 

Bank Runs

Crossapply Cubby Inc v. Compuserve. Misinformation will be allowed to prosper on sites that choose not to moderate as a legal shield. 

My opponent only provided one example of a bank run caused by misinformation. They provided no evidence of misinformation leading to the SVB collapse. Extend Merler 2023, which explains that bank runs are difficult to handle because they happen quickly. There's no reason to believe that social media companies will be able to identify, in real time, misinformation that may lead to the risk of a bank run.

Round 3
Their case, my case, weighing throughout

CON Case

Moderation As a Whole:

To summarize: as of right now, moderation on public forums is terrible. Look towards any media site, TikTok, Facebook, Instagram, etc., whose rules clearly state they don't permit drug usage, violence, etc. on their sites (TikTok).

But let's look at reality. Think about how many times you've seen a post about drugs, violent intention, hate speech, etc. These regulations aren't enforced and thus don't work. The reason for this lack of enforcement lies in the fact that these companies feel no NEED to enforce these rules; there are no penalties if they don't. The ONE thing that would get these companies to moderate would be the frivolous lawsuits associated with a repeal (Masnick 21).


The Morrish 23 card specifically talks about fact-checking AI, which is different from regulation AI. Regulation AI is specifically used for taking down posts; it would never post anything, but would work behind the scenes as an operator, thus not spreading disinformation either.

AI knows a fact from an opinion. If you feed it sentences like 'cheetahs are the fastest land animal' and 'I like the color purple,' AI would be able to tell which is fact and which is opinion. The problem is the nuances of language; however, as AI inevitably advances, this becomes less of a problem. For perspective, practically no one got to use generative AI before 2021. Now ChatGPT and others like it are at our fingertips. As the programs evolve and get more data, they become more accurate too (Javid 24). Prioritize the long term here.


AI isn't what's biased, it's the people (Bahargava 18). This is a systemic issue, persisted more by the current framework of Section 230. The incompetence of sites' moderation harms speech more than it helps. By providing a forum where hate speech isn't tolerated, we crack down on the issue from the root.


I don't ignore Oakmont v. Prodigy (look to my second argument on under-moderation). Again, I'd reiterate: it's outdated, decided at a time when social media and the types of chat forums where you can instantly see and respond to what others are posting did not exist. Today, with more people posting and commenting than ever, action becomes imperative. This isn't the 1990s; the laws need to catch up.

There can't be parallels made to 1995 social media, as access is far different. Only 18 million families (~6%) had access to a household computer in 1995 (PRC 95); in 2023, over 94% (~312 million people) can access one (IBIS 23). The staggering difference alone says everything it needs to about this case's inadequacy at addressing the present.

You'd always prefer something to nothing, and now, something is a repeal. It curbs terrorism and drug trafficking, which both run rampant in the status quo. As long as the framework is benefits/harms, prefer action. Even if we say that sites will be under-moderated (which they won't be), they cannot tell you how many people will be hurt, or what the true impact would be, because if anything it would be minute.

Remember, Walker 22 states that everything with the fear of litigation is in the status quo, not in a PRO world; the evidence just doesn't line up and contradicts my opponent's point.


There is no reason to believe that social media platforms would engage in careful moderation that protects free speech rights NOW, because they have no incentive. What's gone COLD CONCEDED throughout this entire round is that the reason for a lack of regulation is that there is no reason for these sites to regulate. If they don't, they won't be sued, so why would they care? However, in the PRO world, where they're getting sued, they HAVE a reason, and that's why they start.

On Rodriguez: 

Remember what else they concede: it's not actually any censorship that isn't allowed already. It's all hate speech and illegal activity, and thus legal censorship (Smith 21).

PRO Case

Deterring Extremism:

Look to above on moderation. Synopsis: companies don't want to moderate NOW. Without 230, when lawsuits occur and they are forced to pay a lot more, THAT's where the incentive comes from.

The probability cannot go to my opponents, because their case is outdated and the internet is much different. Instead, prefer our solvency over their probability, because not only is the internet different, but again they have not been able to provide a quantification for how many people the PRO world would be harming.

The framework says to focus on the benefits/harms, and clearly harms in the NEG world, terrorism, drug trafficking, and failing banks, harm, so prefer the benefits of PRO.

The point on red/blue states doesn't matter, as we have already proven that what would be taken out is not censorship, which has gone conceded; therefore it does not fall under this. It's a wash, and so is polarization, because it relies on the censorship link.

Drug Trafficking:

Same as before on moderation.

There is nothing riskier than an online drug deal with someone you don't know. It is scientifically proven that the reason cyberbullying is so bad is because the victims don't seem real; the same applies to buying online. Stopping online selling solves both of these issues. Online sellers are more likely to add fentanyl into their drugs (see case), and deaths from this have spiked while internet use is high. This was not the case when people had to buy in person; look to history.

Bank Runs:

Remember, misinformation is the status quo, the only way to combat is to repeal.

I have provided multiple examples; look to case (Swedbank). Additionally, "in 2019, false information in a WhatsApp post led to a run on Metro Bank... Credit Suisse itself was taken on by the Reddit crowds in late 2022" (Merler 23).

Due to severity, concessions, and adequate quantification, timeframe, and probability, it's clear the NEG world is providing the HARMS, and it's a clean vote for the PRO.

Thanks for the Debate! :)

My case, pro case.

Con Case

Moderation as a Whole

My opponent cites Masnick 21, an article in favor of Section 230, and one that doesn't explain how a repeal of Section 230 leads to better moderation. They then say that the only way to get companies to moderate is with "frivolous lawsuits", admitting that social media companies would moderate content based on lawsuits that have no legal standing.


It doesn't matter whether AI is posting or not. AI's inability to separate fact from fiction disqualifies it from being effective at removing misinformation. My opponent doesn't address either warrant from Morrish 23. Extend the fact that LLMs can't understand nuance in language and aren't able to respond to developing situations due to no knowledge of day-to-day events.


My opponent continues to misunderstand Oakmont v. Prodigy. The case is outdated. That's the point. Oakmont v. Prodigy and Cubby Inc. v. Compuserve are the court precedent we will return to under a repeal of Section 230. Extend Rozenshtein 23 from the first round: "It was clear that Stratton Oakmont perversely incentivized platforms not to moderate content, since it was Prodigy’s decision to moderate some content that led the court to hold it liable as a publisher".

There is no updated court precedent about what a world without section 230 looks like, because all major court decisions since Cubby Inc. v. Compuserve have been made in a world with section 230. 

Regarding community moderation, my opponent states that Walker 22 references a fear of litigation. This is not true, Walker 22 doesn't reference litigation at all, it only references the value of user labor.

Thus, the point on community moderation goes clean conceded. My opponent provides no rebuttal to the idea that user moderators will be too fearful of litigation to moderate, and thus platforms like Reddit and Wikipedia will no longer be able to operate.


My opponent mentions in round 3 that social media platforms would be subject to frivolous lawsuits, and my opponent's own card, Rodriguez 23 explains that a repeal of section 230 would lead to "heavy-handed cost-effective censorship."

Rodriguez 23  furthers that "increased censorship by platforms seeking to evade liability would further silence diverse perspectives on important issues like racial and gender justice." This would happen because "current content moderation systems already disproportionately silence Black people and other historically marginalized populations, even when they do not violate platform rules."

My opponent doesn't explain how a repeal of Section 230 would lead to more careful moderation, and their own card suggests that the repeal would make things worse.

Between heavy-handed moderation from large platforms and the destruction of users' ability to self-govern via community moderation, a vote for PRO is a vote for the destruction of spaces for minority voices.

Pro Case

Deterring Extremism

Again, a PRO ballot returns us to a world governed by the very court precedent my opponent refers to as outdated.

State legislatures matter because a world without Section 230 gives state legislatures the power to decide what content social media platforms can and cannot remove. The language of "opposing viewpoints" under Texas's bill is vague enough that social media companies may be too scared to remove misinformation and extreme viewpoints. Extend Izaguirre 21 which states that "Republican state lawmakers are pushing for social media platforms to face lawsuits for policing content on their websites." 

Regarding weighing, the exact numbers don't matter on a turn. However flawed moderation is now, a complete lack of moderation is clearly worse under my opponent's own link that low moderation leads to extremism.

Drug Trafficking

Crossapply previous arguments on undermoderation. Websites that choose to not moderate will become safe havens for drug deals, leading to brand new markets and brand new dealers.

Paradoxically, the removal of old dealers won't lead to a reduction in overdoses. Extend Facher 2023, which shows that removing customers' access to their dealer doubles the rate of overdose.

Bank Runs

Crossapply previous arguments on undermoderation. 

Again, Swedbank is my opponent's only example of a bank run caused by misinformation. Metro Bank's run was caused by misinformation spread on WhatsApp, which isn't a social media platform. This flows Con by showing that misinformation can still spread without social media. Credit Suisse is a new example, and no evidence is provided that the collapse was due to disinformation. Further, the platform that targeted the bank was Reddit, a platform that specifically relies on user-based moderation that would no longer be possible under a repeal of Section 230.

Crossapply previous arguments regarding how a repeal of Section 230 doesn't lead to a decrease in misinformation. Specifically, extend Morrish 23, which explains that LLMs aren't useful when fact-checking day-to-day news, limiting their ability to combat misinformation in the rapidly evolving, dynamic scenario of a bank run.

World Comparison

The debate comes down to my opponent's fundamental misunderstanding of how social media platforms would react to a repeal of Section 230.

The negation world is the status quo. 

In the pro world, we're forced to rely on decades-old court precedent that holds social media platforms liable only if they choose to moderate their content. Thus, a lack of moderation is incentivized, and the harms my opponent brings up of extremism, drug trafficking, and bank runs are made worse. At the same time, companies that choose to moderate are forced to use heavy-handed moderation techniques that threaten to stifle free speech, remove a voice for minorities on the internet, and increase drug overdoses.