Rhode Island bills push age verification, limits for minors online

Six-bill package would require age checks, parental consent and default safety settings across platforms, schools and AI systems
Rhode Island lawmakers are pressing ahead with a broad package of child online safety legislation that would impose new rules on social media platforms, gaming and online services, school technology, and AI companions as state officials and advocates argue that existing safeguards have not kept pace with the risks young users face online.

The package is being led in the House by Reps. Tina L. Spears, Justine Caldwell and Megan L. Cotter, who said they introduced the measures to address growing concerns about social media and digital technology use by children, including exposure to harmful content, exploitation, and adverse mental health effects.

The three lawmakers highlighted the bills at a state House event two weeks ago “to call attention to the necessity of ensuring that laws protecting kids evolve alongside the ever-changing challenges presented by technology.”

They were joined by two of the Senate sponsors of the bills, Sens. Louis P. DiPalma and Lori Urso, as well as the Office of the Attorney General, community advocates, and a mother who spoke about losing her son to suicide after he became immersed in online activities and communications she was unaware of.

Spears, Caldwell, and Cotter have framed the effort as a way to establish guardrails and hold technology companies accountable for the products they design and deploy when children use them.

“As technology evolves, so does our responsibility to protect children,” said Spears. “These bills are about putting common-sense guardrails in place to ensure kids can engage online more safely.”

“These proposals recognize that online spaces are part of everyday life for kids,” Cotter added. “Our goal is to make those spaces safer, healthier and more responsible.”

“There’s no reason we should accept threats to children’s safety as an inevitability of technology,” said Caldwell. “Instead, we should demand that every step be taken to put safeguards in place on school devices, social media and throughout the Internet, and hold companies accountable when they fail to do so.”

The legislation is expansive. The package consists of the Age-Appropriate Design Code (H 7632/S 2406); the Rhode Island Children’s Online Safety Act (H 7746); the Social Media Regulation Act (H 7953/S 2968); the Safe School Technology Act of 2026 (H 7895); a resolution creating a legislative commission to study third-party digital platforms in public education (H 8345); and an AI companion bill (H 7350).

Taken together, the bills are aimed at strengthening protections for minors on social media, gaming and other online platforms, regulating school-provided devices and applications, and establishing safety standards for AI companions.

The Age-Appropriate Design Code bill would apply to online services, products, and features reasonably likely to be accessed by children under 18 and would require covered businesses to use reasonable care to avoid a heightened risk of harm to children.

Last week, the Electronic Privacy Information Center (EPIC) said it supported the goals of the bill, but suggested amendments to make the bill more resistant to legal challenge.

“Kids spend a lot of time online – often more than they would like,” EPIC said, adding, “this is by design. Companies design their platforms to extract as much time and data as possible, and in the process they prey on minors’ psychological vulnerabilities for profit.”

EPIC stated that “these manipulative design strategies lead to compulsive use, depriving minors of control of their online experiences and subjecting them to heightened health, privacy, and data security risks, all so that companies can generate more revenue. The design of these platforms is what is harming kids and teens, and regulating design is the best solution.”

The bill would require data protection impact assessments, set high-privacy defaults for known child users, restrict the collection of data used for age estimation, prohibit precise geolocation collection by default unless strictly necessary, and bar the use of dark patterns that push children to weaken privacy protections.

In other words, it is built around the idea that child safety should be embedded into the design of digital products rather than treated as an afterthought.

A second major bill, the Rhode Island Children’s Online Safety Act, would require commercially reasonable age verification and then impose default safety settings for minors on covered platforms.

Those defaults would restrict unapproved users from directly messaging children, viewing their profiles, tagging them or engaging in financial transactions with them unless a parent affirmatively approves changes.

For children under 13, the measure would require parental approval for new connections and would require parental approval for financial transactions involving minors. The bill reflects a product settings approach that aims to reduce risky interactions without banning all youth access to covered online services.

The package’s most aggressive proposal is the Social Media Regulation Act. This bill would prohibit Rhode Island residents under 18 from holding accounts on covered social media platforms beginning in 2027 and would require age verification for both new and existing account holders in the state. If a user fails age verification, the platform would have to deny access.

The measure applies to platforms with at least five million account holders worldwide, with some exceptions, including certain educational platforms used under school direction.

Because it combines age verification, access restrictions and both public enforcement and a private right of action, it is likely to become one of the most legally and politically contentious parts of the package.

The American Civil Liberties Union of Rhode Island said the bill “would effectively block social media access for any minor who is unable to demonstrate parental consent.”

The package also reaches well beyond traditional social media legislation by targeting school technology. H 7895, the Safe School Technology Act of 2026, would create a certification system for instructional technology and school tools used in classrooms.

To qualify, a tool would have to meet standards around developmental appropriateness, evidence-based instruction, and student privacy.

The proposal would bar certified tools from using geolocation, generative or conversational AI, targeted advertising, personalized recommendation systems, adult stranger access, and addictive design features such as autoplay or infinite scroll.

It would also impose strict student data protections, including caregiver notice, access and correction rights and deletion rights.

The Software and Information Industry Association (SIIA) said it “strongly opposes” the bill, arguing “that while the bill aims to protect student safety and privacy, it would ultimately harm educational outcomes. The legislation introduces a restrictive regulatory framework that would impose burdensome pre-market approval requirements on ed tech providers, limit local school district decision-making, and significantly delay or reduce access to innovative learning tools.”

SIIA also said “vague standards around ‘addictive design’ and mandates like providing non-digital alternatives would create legal uncertainty and overwhelming administrative burdens for educators.”

The related education measure, H 8345, would establish a legislative commission to study the use and impact of third-party digital platforms in Rhode Island public education.

Rather than imposing immediate mandates, that resolution would gather legislators, educators, students and school personnel to examine how such tools are used, what harms they may pose and whether further policy changes are needed.

Its inclusion in the package shows lawmakers are treating school technology as part of the same broader ecosystem of child digital safety and not simply as a procurement issue.

The sixth bill, H 7350, addresses AI companions. It would make it unlawful to operate an AI companion in Rhode Island unless the system includes protocols to respond to possible suicidal ideation or self-harm, possible physical harm to others, and possible financial harm to others.

It would also require operators to tell users at the start of an interaction, and at least every three hours thereafter, that the AI companion is a computer program and not a human being.

The bill reflects a fast-emerging policy concern that emotionally responsive AI systems can create psychological dependence, obscure the fact that users are interacting with software, and fail dangerously when people express distress or intent to harm themselves or others.

The package is notable for its breadth and for the way its sponsors are framing responsibility.

The Rhode Island lawmakers are not simply urging parents to monitor children more closely or asking platforms to offer more optional tools; they are arguing that technology companies should bear responsibility for the digital environments they create, and that the law should impose clear safety standards, transparency requirements, and proactive risk mitigation when children are involved.

In that respect, Rhode Island’s proposal is part of a broader national shift toward treating child online safety less as a matter of individual choice and more as a question of product design, platform accountability, and institutional duty.
