Major Social Media Sites Are Failing LGBTQ+ People: Report
Major social media companies are failing to protect LGBTQ+ users from hate speech and harassment, according to a new report from GLAAD out Tuesday.
The Social Media Safety Index report highlights how major platforms either do not have policies to protect user data or fail to enforce them; refuse to safeguard users against online hate; and can't or won't stop the proliferation of harmful stereotypes and disinformation about LGBTQ+ people.
Now in its fourth year, the report ranked six social media platforms, including Meta's Facebook, Instagram and Threads, as well as TikTok, YouTube and X (formerly Twitter), on 12 different criteria. Among these metrics were whether each company has explicit policies to protect trans, nonbinary and gender-nonconforming users against deadnaming and misgendering; has options for users to add their pronouns to profiles; protects legitimate LGBTQ+-related advertisements; and tracks and discloses violations of LGBTQ+ inclusivity policies.
GLAAD found social media companies miss the mark on almost all of these metrics, and allow harmful rhetoric to proliferate on their platforms, even as they rake in billions in advertising profits.
Almost all of the platforms received an F rating and a corresponding percentage. TikTok, however, received a D+, a slight improvement from last year's rating, because it recently adopted a policy to block advertisers from targeting users based on their sexual orientation or gender identity.
Although many of these social media companies currently have policies that on paper seem to protect LGBTQ+ users, the report notes that the platforms do little to actually stop the spread of harmful and false information.
For example, X, which received the lowest rating by percentage, has seen a sharp uptick in misinformation about LGBTQ+ people from "anti-LGBTQ" influencers. The Libs of TikTok account, run by Chaya Raichik, is known for posting misinformation about gender-affirming care and for equating LGBTQ+ people with "groomers" and "pedophiles." Dozens of bomb threats have been reported at schools, gyms and children's hospitals singled out by the account.
Elon Musk, the owner of X, has also promoted anti-trans content from Raichik and others, including posts that have praised restrictions on trans women participating in sports. Republican legislators, who have introduced a record number of anti-LGBTQ bills in statehouses across the country annually since 2020, have similarly amplified and promoted anti-LGBTQ+ sentiment on social media.
"There is a direct line from dangerous online rhetoric and targeting to violent offline behavior against the LGBTQ community," Sarah Kate Ellis, GLAAD's CEO, wrote in the report.
Though X has been one of the biggest platforms for anti-LGBTQ+ rhetoric, it took in only $2.5 billion in advertising revenue in 2023. Meta, which has allowed posts equating trans people to "terrorists," "perverts" and the "mentally ill" to remain on its platforms, generated $134 billion in revenue last year.
Social media companies have also targeted legitimate LGBTQ+ content and made their platforms less safe and accessible to LGBTQ+ users, the report says.
The report notes one instance from March of this year, when the nonprofit Men Having Babies shared a photo of two gay dads and their newborn child in an Instagram post. Soon after posting, the organization saw that the platform had flagged Men Having Babies' post as "sensitive content" that may "contain graphic or violent content."
That label is typically used to "mitigate extreme content," Leanna Garfield, GLAAD's social media safety program manager, told Pink News earlier this year. "That shouldn't include something as innocuous as a photo of two fathers with their newborn."
Increased use of artificial intelligence tools for content moderation could lead to LGBTQ+ posts being targeted even more. An investigation by Wired in April found that AI systems like OpenAI's Sora displayed biases in their depictions of queer people.
Companies like Facebook have at times relied "exclusively" on automated systems to review content, forgoing any human review in the process, Axios reported last year. A GLAAD report released around the same time said this practice was "gravely concerning" and could jeopardize the safety of all users, including those who are LGBTQ+.
The new GLAAD report claims that other tech companies, which the report did not name, have created "automated gender recognition" technology that purports to predict a person's gender in order to better sell products through targeted ads. But privacy advocates have warned that these technologies could be taken a step further, to try to categorize and surveil people in gendered or sex-segregated spaces like bathrooms and locker rooms.
Some countries and regions, like the European Union, have adopted restrictions on AI and regulated social media platforms' practices, but the United States has lagged. The GLAAD report recommends that platforms strengthen and enforce their current policies to protect LGBTQ+ people, including by stopping advertisers from targeting LGBTQ+ users and by improving content moderation without simply automating it.