TikTok is set to implement new age-verification technology across the European Union in the coming weeks. This move comes as pressure mounts on social media platforms to better identify and manage accounts belonging to children, particularly in light of discussions surrounding a potential social media ban for users under 16 in several countries, including the United Kingdom.
The age-verification system, which has been piloted quietly in the United Kingdom over the past year, uses algorithms to analyze profile information, video content, and user behavior in order to predict whether an account belongs to someone under the age of 13. TikTok has stated that accounts flagged by this system will not face immediate bans. Instead, they will be reviewed by specialist moderators, allowing for a more nuanced assessment before any account is removed. During the UK pilot, thousands of accounts were removed as a result of these checks.
Social media platforms such as TikTok and YouTube are under increasing scrutiny, particularly after Australia implemented a ban on social media use by individuals under 16 years old. The eSafety Commissioner of Australia recently reported that over 4.7 million accounts have been removed across ten platforms, including TikTok, Instagram, and Facebook, since the ban took effect on December 10, 2025. This has prompted further discussion in Europe about how platforms verify user ages while complying with data protection regulations.
Growing Calls for Enhanced Regulations
In recent discussions, UK Prime Minister Keir Starmer expressed openness to a potential social media ban for young people, citing concerns about the amount of time children spend on smartphones. He pointed to alarming reports of children as young as five spending excessive hours in front of screens each day. Although Starmer has previously opposed such bans, fearing they could push teenagers toward less safe online spaces, the ongoing debate reflects growing concern about the impact of social media on young people.
The push for stricter age limits is also echoed in the European Parliament, with some countries, including Denmark, advocating for a ban on social media for users under the age of 15. TikTok has confirmed to Reuters that its new age-verification technology has been designed specifically to meet the EU’s regulatory requirements. The company has collaborated with Ireland’s Data Protection Commission, the lead EU privacy regulator, during the development of this system.
Earlier this month, Ellen Roome, a mother whose 14-year-old son died following an online challenge, called for greater parental rights to access deceased children's social media accounts. Her case underscores the urgency of more robust protections for young users online.
As social media platforms continue to navigate the complex landscape of user safety and privacy, the effectiveness of TikTok’s new age-verification technology will be closely monitored by regulators and concerned parents alike. The outcome of these measures could set a precedent for how social media companies manage underage users in the future.
