New York Attorney General Letitia James recently proposed regulations to crack down on addictive social media feeds for children, including rules for verifying a user’s age.
The regulations would flesh out the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, passed last year. The act prohibits social media companies from showing users under 18 algorithmically personalized feeds without their parents’ consent. Instead, feeds on apps like TikTok and Instagram would be limited to posts from accounts that young users follow.
The law also bars companies from sending notifications to users under 18 between midnight and 6 a.m.
“Children and teenagers are struggling with high rates of anxiety and depression because of addictive features on social media platforms,” James said on Sunday, Sept. 14 in releasing the rules, which are subject to a 60-day public comment period.
The regulations can be seen as a companion to the “bell-to-bell” ban on cellphone use in New York State that went into effect at the beginning of the school year.
While it is still early, the initial results of the cellphone ban appear to be good.
Teachers report that students are more attentive in class, listening better, participating more and being less distracted. In districts that had similar bans already, educators said there were improvements in classroom behavior and discussions.
Cellphone use in schools has been linked to reduced attention and learning, lower test scores, higher stress and anxiety, more bullying and social conflict, and classroom disruptions and discipline challenges.
Some early feedback to the cellphone ban also suggests that students are engaging more socially (talking and interacting rather than being on phones).
Imagine that. Actually talking to other people.
This raises the question of why New York State and the federal government are not doing more to protect people over 18 from social media algorithms designed to maximize engagement through likes, clicks, shares and watch time.
For the tech behemoths like Facebook, Instagram and TikTok that rule social media, this is good business.
But the rest of us are paying a very high price.
Algorithms tend to prioritize sensational, emotional, or divisive content because those posts get more reactions and keep people online longer.
This has led to the spread of misinformation, conspiracy theories and extremist views, which often generate stronger engagement than balanced, factual content.
Personalized feeds show you more of what you already like while hiding opposing views. This creates “echo chambers,” reinforcing existing beliefs and making it harder to understand or empathize with others’ perspectives.
Constant comparison to the curated lives on display on Instagram and TikTok can fuel anxiety, depression, body image issues and low self-esteem.
Algorithms are also vulnerable to manipulation by advertisers, bots, and political actors, who learn how to game the system for reach. This includes foreign governments seeking to exploit division in this country.
This has been exploited for election interference, targeted disinformation, and product scams by Russia, China and other countries trying to divide Americans.
Like gunmakers, social media companies are largely exempt from legal liability for the problems they create. Section 230 of the Communications Decency Act shields platforms from liability for content posted by their users, giving them the ability to act with impunity in the United States.
In Europe, the EU has recently adopted significant regulation aimed at how social media platforms’ algorithms work, chiefly through the Digital Services Act.
This includes requiring platforms to explain how their algorithms work, assess the risks posed by their systems, and give users more control over, or at least more visibility into, why they are seeing what they see. Platforms must also allow users to contest or complain about content moderation decisions.
Researchers and regulators can request internal data from platforms to audit how their algorithms perform. The companion Digital Markets Act prevents dominant platforms from abusing their power, forcing more fairness and interoperability.
There has been far less regulation in the United States.
President Trump recently signed into law the Take It Down Act to combat the spread of sexually explicit images of minors and nonconsensual intimate images, sometimes called “revenge porn,” online.
The act creates federal criminal penalties for the knowing distribution of nonconsensual sexually explicit images and requires platforms to take down these images quickly once notified. It applies to both adults and children and gives victims legal recourse to demand removal and to sue perpetrators or platforms that fail to act.
But free speech concerns – and the enormous political clout wielded by the owners of the dominant platforms – have blocked further regulation in this country.
This makes New York’s Stop Addictive Feeds Exploitation for Kids Act, and the bell-to-bell cellphone bans implemented in New York and many other states, that much more important.
The results of the legislation and school bans offer a good case study on the benefits of social media restrictions for those under 18.
Perhaps then we can get serious about responding to the harm being done to those over 18.