3-2-2024 (WASHINGTON) Republican Senator Lindsey Graham’s words to Meta CEO Mark Zuckerberg, “You have blood on your hands,” and Zuckerberg’s apology to families of online child abuse victims have shed light on the risks posed by social media platforms. As media researchers from Boston University and McGill University point out, these companies continue to put millions of young people at risk, and urgent action is needed to address the issue.
During the pandemic, mobile device use among children and teenagers skyrocketed, and it has remained high since. Social media platforms such as YouTube, TikTok, Snapchat, Instagram, Facebook, and Twitter count massive numbers of users aged 17 and under. According to a study by researchers at Harvard's T.H. Chan School of Public Health, in 2022 there were approximately 49.8 million users aged 17 and under on YouTube, 19 million on TikTok, 18 million on Snapchat, 16.7 million on Instagram, 9.9 million on Facebook, and 7 million on Twitter.
These platforms rely heavily on young users for revenue. In 2022, social media platforms generated a staggering $11 billion from users aged 17 and under. Instagram alone took in nearly $5 billion, while TikTok and YouTube each earned over $2 billion. Teenagers are clearly a significant source of income for these companies.
However, the researchers argue that social media poses serious risks to teens, including exposure to harassment, bullying, and sexual exploitation, as well as the promotion of eating disorders and suicidal ideation. To protect children online effectively, three crucial issues need to be addressed: age verification, business models, and content moderation.
One of the main challenges lies in verifying the ages of social media users. Companies have an incentive to turn a blind eye to their users' actual ages, since applying appropriate content moderation to underage accounts would require substantial resources. It is an "open secret" that millions of underage users (children under 13) are on Meta's platforms. While Meta has suggested potential age verification strategies, such as requiring identification or AI-based age estimation, the accuracy and transparency of these methods remain questionable.
Moreover, social media platforms rely heavily on teen adoption for their continued growth. The investigation known as the Facebook Files revealed that Instagram's growth strategy depends on teens helping their family members, especially younger siblings, join the platform. Although Meta claims to prioritize "meaningful social interaction" and content from family and friends, Instagram allows pseudonymity and multiple accounts, which makes parental oversight more difficult.
Harassment, bullying, and solicitation are prevalent on social media platforms, and parental supervision and app store regulations alone are clearly insufficient to address these problems. While Meta has taken steps to provide "age-appropriate experiences" for teens, such as restricting searches related to suicide, self-harm, and eating disorders, tackling online communities that promote harmful behaviors remains a challenge. Effective content moderation requires well-trained teams of human moderators who monitor dangerous groups and enforce the platforms' terms of service.
Social media companies often tout the potential of artificial intelligence (AI) for moderating content, but the researchers emphasize that AI alone cannot solve the problem. Recognizing the limitations of AI, and the continued need for human intervention, is crucial to the safety and well-being of young users.
Social media apps continue to pose significant risks to young people, and urgent action is needed to protect them. Verifying ages, revisiting business models, and strengthening content moderation are essential steps toward a safer online environment for children and teenagers. As legislators and industry leaders grapple with these challenges, it is crucial to prioritize the well-being of young users and to ensure that social media platforms do not continue to operate as "dangerous products."