Editor’s Note: Michal Luria, PhD, is a research fellow with the Center for Democracy & Technology. Aliya Bhatia also works with the Center for Democracy & Technology, and is a policy analyst with the Free Expression Project. The views expressed here are their own. Read more opinion on CNN.
CNN —
In a political environment in which nearly every conversation seems to end in polarization, one thing people can agree on is the need to keep kids safe online.
Experts, including the US Surgeon General and the American Psychological Association, have pointed to worrying trends in mental health, self-esteem and general well-being among youth, sometimes linking these concerns to increased internet use. Many Americans are also increasingly worried about their privacy and safety online. According to the Pew Research Center, about 9 in 10 Americans are concerned that social media platforms have too much personal information about children.
It’s evident that the status quo isn’t working, and policymakers are ready to take action. But the bills mentioned by policymakers at a congressional hearing last month to address the problem — specifically, the Kids Online Safety Act (KOSA) and the Protecting Kids on Social Media Act — may do more harm than good.
Both bills rest on the premise that minors should be blocked from accessing some content or online services entirely. Yet research indicates that these approaches are unlikely to work — and might even put some children in greater danger by depriving them of spaces where they can access information critical to their development, health and safety.
KOSA requires companies to design their platforms in a way that doesn’t expose minors to content or features that might cause anxiety and other negative mental health outcomes. The Protecting Kids on Social Media Act bans children under 13 from accessing online services entirely, including platforms designed specifically for children, and expands the use of technology to monitor what minors under 18 do online.
Keeping kids away from “harmful” content requires a consensus on what “harm” is and which content causes it. But even beyond that, research doesn’t show that restricting children’s access to content helps; it shows that such restrictions lead kids to use online resources less overall. And teenagers, in particular, use online services to access vital information, including on sexual health, fitness and nutrition, and mental health conditions.
According to the Global Kids Online 2019 survey, kids with less restrictive parents tended to use the internet for a range of informational and creative activities, while kids with more restrictive parents leaned toward entertainment-only activities. Another study found that restrictions prevented kids from using the internet to complete even simple tasks like homework.
But restricting online access doesn’t just shut young people out of spaces that can be crucial for their information-seeking and learning; it doesn’t even keep them safe.
Yes, restriction can reduce minors’ exposure to risk in the immediate term, but in the long run it can have significant negative consequences. Scholars argue that this kind of restriction hinders the learning of critical skills, such as privacy awareness and the digital literacy kids need to protect themselves online. Restricting online access for kids and teenagers, as the Protecting Kids on Social Media Act would do, could leave young people unable to exercise judgment, and therefore vulnerable, when they inevitably navigate online environments independently.
Both KOSA and the Protecting Kids on Social Media Act also propose parental monitoring tools to help guardians observe their children’s online activities. But this “longer leash” approach has problems of its own. Researchers found that tools and third-party apps that monitor kids’ online activity eroded parent-child relationships, created problems between children and their peers and usually had no positive impact; sometimes they even had a negative one.
Like restricting internet access, monitoring has been linked to limits on kids’ freedom to socialize online and to reduced digital competency, such as not knowing how to use the internet safely. In other cases, monitoring is simply ineffective: children can circumvent it, which sometimes pushes them to pursue their curiosity in more dangerous spaces.
The problem with restriction and monitoring is that they undermine trust. Researchers have found that teenagers want to be trusted, and therefore generally don’t respond well to restrictive parenting. In a qualitative analysis of mobile safety app reviews posted by 8- to 19-year-olds, 76% gave the apps a one-star review, describing them as invasive of their privacy and damaging to their relationships with their parents.
But children also don’t want full independence online, and they expect adults to be involved. At the Center for Democracy & Technology, we worked with 14- to 21-year-olds to understand the negative encounters they faced when using online messaging services. Respondents told us that they would turn to a trusted adult when they found themselves in hairy situations, but they wanted the discretion to decide when to do so.
Instead of attempting to fully shield kids, scholars have advocated for policymakers, companies and parents to focus on equipping young people to navigate the web safely, knowing that caregivers, educators and other support networks are there to help them as they grow.
Policymakers, for their part, can advance privacy legislation like the bipartisan American Privacy Rights Act (APRA) introduced earlier this year. APRA aims to minimize the data collected and processed about all individuals and to deter bad actors from targeting harmful content at children or exposing them to unwanted encounters.
Policymakers can also strengthen APRA by banning ads targeted to those under 17, a provision backed by lawmakers, child rights groups, digital rights groups and the Federal Trade Commission. Conversely, enacting a “parental panopticon,” as some have called models of full control over kids’ online interactions, would undermine expectations of privacy and put the most marginalized kids at risk.
Social media companies also have an essential role in strengthening children’s safety online. They can provide kids and parents with more tools to control content and interactions on their platforms, like Threads’ new feature that allows users to filter content based on chosen keywords.
But that’s not enough; social media platforms should continue taking active steps toward more thoughtful designs for young people. In our research, we identified specific ways of doing so, such as making private profiles the default, reducing interactions with strangers (for example, by notifying users that they have no mutual friends with someone and asking whether they’d like to proceed), improving user reporting features and providing content warnings when possible, like the ones Apple and Instagram recently introduced for images containing nudity.
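To make the mutual-friends friction concrete, here is a minimal sketch of how such a check might work. The function names and data model are hypothetical assumptions for illustration, not any platform’s actual implementation.

```python
# Hypothetical sketch of a "no mutual friends" interstitial check.
# The data model and function names are illustrative assumptions,
# not drawn from any platform's real implementation.

def mutual_friends(a: set[str], b: set[str]) -> set[str]:
    """Return the accounts that appear in both users' friend lists."""
    return a & b

def should_warn_before_message(recipient_friends: set[str],
                               sender_friends: set[str]) -> bool:
    """Warn the recipient when a message request comes from someone
    with whom they share no mutual connections."""
    return not mutual_friends(recipient_friends, sender_friends)

# Example: a message request arrives from a stranger.
recipient = {"amy", "ben", "cody"}
sender = {"dana", "eli"}
if should_warn_before_message(recipient, sender):
    print("You have no mutual friends with this account. Continue anyway?")
```

The point of friction like this is not to block contact outright, but to prompt a moment of judgment before a young user engages with a stranger.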
Instead of forcing restrictions, parents can build on these foundations through “active mediation,” in which trusted adults intervene between kids’ exposure to unwanted content and negative outcomes. A parent might talk to their child about what they are doing online or respond to something they post on social media. With the support of their parents, young people can learn that risky content exists and develop the tools to promote their own safety online.
Eroding children’s privacy will never do the job of keeping them safe. We cannot divert our attention to policies that feel good when they do little to protect kids. Results matter — especially when there’s so much at stake.
This story has been updated to reflect news developments.