
How old are you?

Social media companies have an incentive to look the other way when it comes to their users’ ages. Otherwise they would have to spend the resources to moderate their content appropriately. Millions of underage users – those under 13 – are an “open secret” at Meta.

Meta has described some potential strategies to verify user ages, such as requiring identification or video selfies, and using AI to guess a user’s age based on “Happy Birthday” messages. However, the accuracy of these methods is not publicly open to scrutiny, so it’s difficult to audit them independently.

Meta has stated that online teen safety legislation is needed to prevent harm, but the company points to app stores, currently dominated by Apple and Google, as the place where age verification should happen. However, these guardrails can easily be circumvented by accessing a social media platform’s website rather than its app.

New generations of customers

Teen adoption is crucial for the continued growth of all social media platforms. The Facebook Files, an investigation based on a review of company documents, showed that Instagram’s growth strategy relies on teens helping family members, particularly younger siblings, get on the platform.

Meta claims it optimises for “meaningful social interaction,” prioritising content from family and friends over other interests. However, Instagram allows pseudonymity and multiple accounts, which makes parental oversight even more difficult.

On Nov 7, 2023, Arturo Bejar, a former senior engineer at Facebook, testified before Congress. At Meta he had surveyed teen Instagram users and found that 24% of 13- to 15-year-olds said they had received unwanted advances within the past seven days, a finding he characterises as “likely the largest-scale sexual harassment of teens to have ever happened”. Meta has since implemented restrictions on direct messaging in its products for underage users.

But to be clear, widespread harassment, bullying and solicitation are part of the landscape of social media, and it’s going to take more than parents and app stores to rein them in.

Meta recently announced that it is aiming to provide teens with “age-appropriate experiences,” in part by prohibiting searches for terms related to suicide, self-harm and eating disorders. However, these steps don’t stop online communities that promote these harmful behaviours from flourishing on the company’s social media platforms. It takes a carefully trained team of human moderators to detect and act on terms-of-service violations by dangerous groups.

Content moderation

Social media companies point to the promise of artificial intelligence to moderate content and provide safety on their platforms, but AI is not a silver bullet for managing human behaviour. Communities adapt quickly to AI moderation, disguising banned words with purposeful misspellings and creating backup accounts to avoid being kicked off a platform.

Human content moderation is also problematic, given social media companies’ business models and practices. Since 2022, social media companies have implemented massive layoffs that struck at the heart of their trust and safety operations and weakened content moderation across the industry. Congress will need hard data from the social media companies – data the companies have not provided to date – to assess the appropriate ratio of moderators to users.
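
To make the evasion pattern described above concrete, here is a minimal, purely illustrative Python sketch. The blocklist term and function names are invented for the example, and real platform systems are far more sophisticated than this; the point is only that a single swapped character can defeat a naive keyword filter, which is the cat-and-mouse dynamic the article describes.

import re

# A minimal, hypothetical sketch (not any platform's real moderation pipeline):
# a naive banned-keyword filter, and how a purposeful misspelling slips past it.
BANNED_TERMS = {"forbiddenterm"}  # placeholder for a moderated keyword

def naive_filter(post: str) -> bool:
    """Flag a post if it contains a banned term after stripping punctuation and case."""
    normalized = re.sub(r"[^a-z0-9]", "", post.lower())
    return any(term in normalized for term in BANNED_TERMS)

print(naive_filter("talking about forbiddenterm openly"))   # True: exact match is caught
print(naive_filter("talking about f0rbiddenterm openly"))   # False: one swapped character evades the filter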
