Hoylman seeks to hold social media accountable for violent hate speech and vaccine misinformation

Brad Hoylman wants to turn Big Tech’s algorithms against them, in order to stop the spread of violent hate speech, anti-vaxx untruths and content promoting self-harm.

The state senator seethes, far beyond an angry emoji, at how Facebook and other platforms have been able to dodge the consequences while fueling the madness.

In the week leading up to the first anniversary of the Jan. 6 riot at the United States Capitol, and as vaccine hesitancy helps fuel the Omicron variant, Hoylman announced new legislation (S.7568) to hold social media platforms accountable for “knowingly promoting misinformation, violent hate speech and other illegal content that may harm others.”

Section 230 of the federal Communications Decency Act protects social media platforms from being treated as publishers or speakers of content shared by users on their apps and websites. However, Hoylman’s proposed legislation instead focuses on the “active choices” these tech companies make when implementing algorithms designed to promote the most controversial and harmful content – content that Hoylman says “creates a general threat to public health and safety”.

“Social media algorithms are specially programmed to spread misinformation and hate speech to the detriment of the public good,” Hoylman said. “The prioritization of this type of content has real costs for public health and safety. So when social media spreads anti-vaccine lies and helps domestic terrorists plan a riot in the United States Capitol, they must be held accountable. Our new legislation will hold social media companies accountable for the dangers they promote.”

For years, social media companies have sought protection from the legal consequences of their actions relating to the content of their websites by citing Section 230. However, according to Hoylman, social media websites are no longer simply a passive, “unmoved” host for their users’ content.

On the contrary, many social media companies use complex algorithms designed to put the most controversial and provocative content in front of users as much as possible, charge Hoylman and others. These algorithms drive engagement with their platforms, keep users hooked, and increase profits. In other words, Hoylman argues, social media companies that use these algorithms “are active participants in the conversation.”

As the bill states, “No person…shall knowingly or recklessly create, maintain, or contribute to any condition in New York State that endangers the safety or health of the public through the promotion of content, including through the use of algorithms or other automated systems that prioritize content by some method other than solely by [the] time and date this content was created…”

Last October, Frances Haugen, a former Facebook employee, testified before US senators that the tech giant was aware of research proving its product was harmful to teens, but deliberately hid those findings from the public. The whistleblower also said Facebook was willing to use hateful content to keep users glued to the site.

According to Hoylman, this type of “social media amplification” has been linked to many societal ills, including misinformation about vaccines, encouragement of self-harm, bullying and body image issues among young people, and extremist radicalization leading to terrorist attacks like the Jan. 6 riot at the United States Capitol.

According to a Hoylman press release on the bill, “When a website knowingly or recklessly promotes hateful or violent content, it creates a threat to public health and safety. The conscious decision to elevate certain content is an affirmative act separate from mere hosting of information and is therefore not contemplated by the protections of Section 230 of the Communications Decency Act.”

The state senator’s measure would allow the New York State Attorney General, the New York City Law Department and private citizens to hold social media companies and others accountable when they promote content that they know or “reasonably ought to know”: “advocates the use of force, is intended to incite or produce imminent unlawful action, and is likely to produce such action; advocates self-harm, is intended to incite or produce imminent self-harm, and is likely to incite or produce such action; or includes a misrepresentation of fact or fraudulent medical theory likely to endanger the safety or health of the public.”

Anna Kaplan, who represents part of Long Island, is co-sponsoring the bill with Hoylman in the state Senate.