The U.S. House Energy and Commerce Subcommittee on Commerce, Manufacturing, and Trade recently held a hearing, “Legislative Solutions to Protect Children and Teens Online,” in which the subcommittee considered 19 pieces of legislation. Over the coming months, the subcommittee and full committee will consider these measures. Here, we provide our analysis of 8 of those proposals, focusing on the drafts that, if enacted in their current form, would most affect the future of computing and artificial intelligence. For further reading, our principles for protecting kids and innovation can be found here.
At the outset, it is worth noting that several of these proposals segment online users by age. Even where the requirement to verify user age is not explicit, services are likely to verify ages anyway in order to avoid legal liability. Such requirements, whether implicit or explicit, would gate access to computing and free expression for all Americans and raise a variety of security concerns that we explore below.
The SAFE BOTS Act is the only bill in the proposed package that specifically regulates minors’ use of AI tools. The discussion draft proposes to govern certain actions by chatbots for users under 17 years of age. Key requirements include prohibiting chatbots from claiming to be licensed professionals (unless true), mandating that they identify as chatbots when prompted, providing suicide prevention resources when prompted, and advising users to take a break after three hours of continuous use. A chatbot provider would be required to have policies on how it addresses topics such as “sexual material harmful to minors,” gambling, and “the distribution, sale, or use of illegal drugs, tobacco products, or alcohol” with users under 17. The proposal would preempt state laws covering these matters. It would also commission a study on the risks and benefits of chatbots to youth mental health. The proposal clarifies that nothing in it may be construed to force a chatbot provider to collect personal information about a user’s age that it is not already collecting.
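To make the shape of these requirements concrete, here is a minimal sketch, in Python, of what a provider’s compliance layer might look like. Everything in it is an assumption on our part: the class and function names are hypothetical, the keyword triggers are crude stand-ins for the trained classifiers a real product would use, and the draft itself specifies none of this.

```python
import time

# Thresholds and canned responses loosely drawn from the draft's requirements.
BREAK_REMINDER_SECONDS = 3 * 60 * 60  # advise a break after three hours of continuous use
IDENTITY_DISCLOSURE = "I am an AI chatbot, not a person or a licensed professional."
CRISIS_RESOURCES = "If you are in crisis, call or text 988 (Suicide & Crisis Lifeline)."

class ChatSessionGuardrails:
    """Hypothetical wrapper around a chat session for a covered (under-17) user."""

    def __init__(self, user_is_under_17: bool):
        self.user_is_under_17 = user_is_under_17
        self.session_start = time.monotonic()

    def mandatory_response(self, user_message: str) -> str | None:
        """Return a required response if one of the draft's triggers fires, else None."""
        if not self.user_is_under_17:
            return None  # the draft's duties attach only to covered users
        text = user_message.lower()
        # Identify as a chatbot when prompted; never claim to be a licensed professional.
        if "are you a real person" in text or "are you a licensed therapist" in text:
            return IDENTITY_DISCLOSURE
        # Provide suicide prevention resources when prompted by self-harm language.
        if "suicide" in text or "hurt myself" in text:
            return CRISIS_RESOURCES
        return None

    def append_break_advisory(self, model_response: str) -> str:
        """Advise the user to take a break after three hours of continuous use."""
        if self.user_is_under_17 and time.monotonic() - self.session_start > BREAK_REMINDER_SECONDS:
            return model_response + "\n\nYou have been chatting for over three hours. Consider taking a break."
        return model_response
```

A real deployment would hinge on how reliably the provider can tell that a user is under 17 in the first place, which, as discussed below, the draft does not address.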
Notably, most leading consumer AI companies have already implemented the features this draft would require. For example, Character.ai recently adjusted its service to reduce the daily time limit for users under 18 from two hours to one hour, which is stricter than the three-hour limit proposed in this draft. Character.ai and OpenAI have also begun deploying age assurance technology that enhances model safety protocols if, based on user prompts, the technology determines the user is a minor. Voluntary adoption and deployment of any age assurance system, including age verification, is fully within a company’s rights and is not a violation of Americans’ civil liberties. However, all age verification systems, even voluntary, industry-led ones, can come with serious security and privacy risks.
Crucially, this discussion draft is missing a mechanism or standard for determining whether a user is under 17. Should this draft, or any bill that imposes tailored requirements for minors, become law, platforms large and small would need to develop robust mechanisms to comply. Without clarification on which services must comply, the current language could have a profound effect on AI access for all Americans. Compliance hinges on whether a chatbot is “incidental” to the primary purpose of the service, as defined in Subsection K(3)(B). It is possible that AI chat tools could not be integrated into everyday software, like word processors, without triggering the draft’s requirements. For example, is Microsoft’s Copilot truly incidental, or is it a core feature of the software? Currently, Copilot is the advertised feature for all individual and business Office365 packages. Under a more liberal reading, Meta’s AI chat features would not be implicated, as an argument could be made that those are incidental to the app’s social media service. Either way, there is a risk of litigation, and to avoid it a platform is likely simply to abandon helpful AI chat services, with a profound impact on usability and productivity. This means computing as we know it would remain the status quo, rather than becoming a supercharged productivity, education, and entertainment tool.
The disclosure requirements offer potential benefits. However, more research on their effectiveness is necessary, and the evidence we do have is mixed, according to a study on AI labels from the NYU Center on Tech Policy. The required policy might also duplicate standard industry practice, because most services already disclose that the tool is an AI system, either at sign-up or through a persistent label. The draft likely aims to address shortcomings seen in high-profile cases with older AI models, where the system refused to acknowledge it was an AI, typically as part of playing a character. It is too early to say whether a law requiring disclosure at all times is necessary. The upside is a common standard that could prove helpful if users get too wrapped up in the tool; what is unknown is how much it would actually help those users. The downside would fall mostly in the entertainment context, where the majority of users likely do not lose touch with reality. Much like getting lost in a movie or fantasy novel, there could be a value, and a right, especially for adults, in having access to a bot that is not required to say it is an AI when prompted. Finally, it is not predetermined that societal and cultural norms won’t adapt to put AI systems in their appropriate place. In other words, users may not need disclosures because they will simply know they are not talking to a person, much as norms have adapted to the point where most people know the special effects in a film are not real.
Another provision risks stifling tools for the very people who need them most. Section 2(a) stipulates: “A chatbot provider may not provide to a covered user a chatbot that states to the covered user that the chatbot is a licensed professional (unless such statement is true).” Any AI tool that offers “therapy” or “mental health” assistance could run afoul of this provision. The draft language does leave open the possibility for an AI tool to become certified, but certification comes at the cost of scarcer and more expensive access. As Taylor Barkley has written elsewhere, there are profound mental health needs, particularly for teens, where AI therapy tools can be helpful. There are also better policy models, as exemplified in Utah, that don’t involve bans.
Finally, the draft’s proposed study is a welcome inclusion that would serve as a valuable resource for policymakers and industry, alongside the breadth of academic, industry, and consumer group reports under development. As noted above, there is a profound lack of data about child and teen use of AI systems and about the effectiveness of certain policy measures. Public policies should be based on evidence, and the study proposed here could provide much of that data.
This proposal would direct the Secretary of Commerce to establish a body that would coordinate among relevant federal agencies and stakeholders to identify risks and benefits for minors online. The Partnership would publish a regular report on its findings and on how online services offer protections for minors and tools for parents. It would also have to publish a “playbook” to help online services implement “widely accepted or evidence-based best practices” with regard to age assurance, “design features, parental tools, and default privacy and account settings.” The Partnership would sunset after five years.
In its current version, the bill could provide helpful information to stakeholders and industry, but it would benefit from a few tweaks. Although artificial intelligence tools are part of many of the technologies and platforms the draft names, AI itself is not specifically mentioned. Because children and teens come into frequent contact with AI systems, the proposed Partnership should examine the benefits and risks of those technologies as well. The framing of these technologies also deserves an edit. Although there are nods to “benefits” in the discussion text and in related press releases, it is not apparent that beneficial use cases are a focus of the Partnership. Because so many online digital technologies are available to minors, the Partnership’s reports could easily become entirely focused on risk analysis, with no room to present beneficial use cases. That would be a missed opportunity, especially for policymakers, who must weigh benefits and risks together. The draft could be strengthened by adding a section that directs the Partnership to examine benefits. Finally, it would be better for the report to focus on the mentioned “evidence-based best practices” rather than merely “widely accepted” ones; policy recommendations should be grounded in evidence, not just common viewpoints.
This bill would direct the Federal Trade Commission to work with a variety of other partners to establish a public education effort promoting safe internet use by minors. The group would submit annual reports to Congress summarizing its efforts.
Public education efforts like those proposed in this draft are well within the appropriate role of the federal government and of policymakers at all levels. The federal government already runs programs such as Know2Protect (from the Department of Homeland Security), which raises awareness of and combats online child sexual exploitation, and FBI Safe Online Surfing (SOS), an educational initiative that teaches elementary and middle school students cyber-safety and digital citizenship. And these are just two of many. Rather than duplicating them, the bill appears to aim for integration and coordination by making the FTC a “hub” for public-facing online-safety resources: a national front door that can aggregate and promote materials from DHS, the FBI, educational programs, nonprofits, and other stakeholders, while also expanding the lens to include mental-health, content-exposure, and behavioral risks. In doing so, H.R. 6289 could reduce fragmentation in the federal online-safety ecosystem, streamline outreach to parents, educators, and minors, and create a standardized, cross-agency foundation for protecting youth online.
This would direct the Federal Trade Commission (FTC) to work with relevant federal agencies to develop and share resources on the safe use of AI chatbots by minors. Notably, this program would be modeled on the Youville material currently developed and made available by the Commission. As noted above, public awareness and education campaigns like these can help parents, caregivers, educators, and children and teens themselves. The challenge for such an effort would be staying up to date in a rapidly evolving space. Nonetheless, government educational efforts would serve as a useful supplement to industry and consumer protection efforts.
KOSA applies to websites and apps of all sizes that focus on user-generated content, allow people to create searchable user accounts, and use account holder information to advertise or recommend content to the user. As written, this would require even AllTrails, a variety of not-for-profit online medical forums, and innumerable other small forums to provide a completely new suite of user and parental controls, not just for registered users but also for visitors without accounts. In order to provide parental tools for people who aren’t even registered with the service, such platforms would have to actively track those users, which seems counterproductive to the goal of protecting privacy online.
The platform would similarly have to provide parents with information about the parental tools required by the law and obtain verifiable parental consent for users and visitors under the age of 13. The bill adopts the same standard for consent that appears in the Children’s Online Privacy Protection Act of 1998. But some of the approved methods under that law, including making a credit card transaction or calling a phone number, are easy for users of any age to circumvent.
Moreover, as with any legislation that requires treating different age groups differently online, many platforms will likely pursue more robust age verification methods in order to avoid potential liability, such as having users upload government identification and face scans. This practice has repeatedly led to data breaches, leaving affected people vulnerable to financial fraud and other crimes.
These same platforms would also have to pay tens of thousands of dollars to hire independent auditors. Such costs and regulatory burdens are not feasible for many of the small (even not-for-profit) forums and other services that would be covered by the law.
This proposal would divide users into different age groups and require that app stores receive consent from parents for their children to download apps or make in-app purchases. Unfortunately, age verification for minors is extremely difficult, verification still comes with security risks, the definition of “parental account” means it’s easy for minors to circumvent parental consent, and the bill applies only to apps and not websites.
The bill relies heavily on segmenting users into age categories: 18 or older, 16-17, 13-15, and under 13. The problem is that there is no reliable method to verify minors’ ages. Age estimation can err by years, minors generally don’t have government photo identification cards, and other forms of identification, such as birth certificates or Social Security cards (which don’t include birth dates), don’t have photos that can be matched to the person in front of the screen.
There are also more fundamental cybersecurity concerns with age verification. The bill would require that age verification data be protected by limiting its collection and storage to only what is necessary to verify a user’s age, obtain parental consent, and maintain compliance records. It would also mandate “reasonable” safeguards, including encryption, to keep that data secure. The encryption requirement is a welcome provision, but age verification systems don’t always adhere even to their own standards, users cannot know for certain how such data is protected, and these systems can still be hacked and breached. Further, the sensitive information needed to prove age (biometrics, government IDs, and the like) is the same information needed to prove compliance with the law. So although the nods to data minimization are welcome, they don’t resolve the concerns here.
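For illustration, here is a rough sketch, under the assumption of a hypothetical verification flow, of what the bill’s data minimization and encryption provisions could look like in code: only a boolean outcome is retained, encrypted at rest, and the underlying ID image is discarded. Even in this best case, the sensitive document still has to be handled during the check, and a stricter compliance demand could pull it back into storage, which is the concern raised above. The record structure and function name are our own, not the bill’s.

```python
import json
import time
from cryptography.fernet import Fernet  # third-party library: pip install cryptography

def record_verification(user_id: str, is_over_18: bool, method: str, key: bytes) -> bytes:
    """Persist only a minimal, encrypted compliance record.

    The raw government ID image or face scan used for the check is not part of
    the record and can be deleted once the outcome is known.
    """
    record = {
        "user_id": user_id,
        "over_18": is_over_18,          # a boolean outcome, not a birth date
        "method": method,               # e.g. "id_check" or "age_estimation"
        "verified_at": int(time.time()),
    }
    return Fernet(key).encrypt(json.dumps(record).encode())

# Usage sketch: verify once, store the encrypted outcome, discard the document.
key = Fernet.generate_key()             # in practice, held in a key management service
token = record_verification("user-123", True, "id_check", key)
# `token` is what gets written to the database; the ID image itself is deleted.
```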
It’s also not just age verification databases that can be breached (as mentioned above), but other systems in the age verification process. After implementing age verification under the U.K. Online Safety Act, Discord saw one of its vendors breached, exposing tens of thousands of government IDs. That breach didn’t even involve users of the main age assurance system; it hit people who were using a backup method after biometric age estimation failed or otherwise couldn’t be used. Those tens of thousands of people will now have to worry about identity theft and bank hacks. That is the scale of harm that can result when the government requires age verification.
The way the legislation defines “parental account” also underscores the difficulty of verifying the parent-child relationship online. The text requires only that a parental account be established by a user whom the app store has determined, through age verification, to be at least 18 and whose account is affiliated with at least one minor’s account. Few documents are truly useful for verifying the parent-child relationship, and those documents don’t include the photo identification necessary to prove that the users are the same people in front of the screen. Even setting that aside, the definition doesn’t escape the problem that minors can find other adults to grant them access online. It would be easy enough for a child to find an older sibling or other relative willing to allow them more permissive app access.
Another problem is that this bill applies only to apps and not websites. Minors could still access all the same content and more with web browsers without parental supervision. Although Congress could pass another law applying to websites, users would then need to functionally verify their ages twice for each service—once through app stores for the apps and again through the services directly when using websites. This would further increase security issues with age verification by providing more databases and more opportunities for hacks and breaches. Users frequently access both websites and apps belonging to the same services—consider email providers, social media, and niche services like AllTrails and ZocDoc.
This bill, on the other hand, would require app stores only to have users declare their ages, while noting that age assurance software can be used for this purpose. It would require app stores to give a user’s parent the ability to prevent their child from downloading or using apps that, whether voluntarily or as required by law, provide different online experiences for minors and adults. App stores would also have to give these apps the ability to prevent minors from downloading or using them.
The legislation does not offer guidance as to how app stores must determine the parent-child relationship, which lends itself to the same problem as the App Store Accountability Act: minors finding an older friend or sibling to approve their app use. And because a user’s self-declared age, without further proof, is an acceptable mechanism for proving age, minors could simply enlist friends their own age who lied about their age to the app store. However, app stores may opt to implement full age verification and require more documentation to prove the parent-child relationship, which raises the same security concerns mentioned earlier.
Meanwhile, developers would be required to let app stores know if they provide different experiences for minors than for adults and would have to provide information about online safety settings for parents unless their apps block minors. These developers would also be required to use age assurance (which can include an age signal from the app store) unless the app is required by law to block minors, in which case they would need more robust means to check whether adults really are adults. Developers of these apps would also have to “make a reasonable effort” to prevent minors from engaging in activity on the app restricted to adults and obtain consent (the bill does not specify from whom) before allowing minors to access parts of an app the developer deems “unsuitable for use by Minors without parental guidance or supervision” or content age-gated by law.
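As a thought experiment, here is a sketch of how a developer might consume an app-store age signal under this scheme. The interface is entirely hypothetical; neither the bill nor any app store defines such an API today, and the field names and gating logic are our assumptions.

```python
from dataclasses import dataclass

@dataclass
class AgeSignal:
    """A hypothetical signal an app store might pass to a covered developer."""
    is_minor: bool
    parental_consent_on_file: bool  # consent obtained through the store's parental flow

def may_enter_supervised_section(signal: AgeSignal) -> bool:
    """Gate parts of the app the developer deems unsuitable for unsupervised minors."""
    return (not signal.is_minor) or signal.parental_consent_on_file

def may_enter_adult_only_section(signal: AgeSignal, robust_adult_check_passed: bool) -> bool:
    """Content age-gated by law: the bill expects more robust checks than a store signal."""
    return (not signal.is_minor) and robust_adult_check_passed

# Example: a minor with parental consent may enter supervised sections, but not adult-only ones.
signal = AgeSignal(is_minor=True, parental_consent_on_file=True)
assert may_enter_supervised_section(signal)
assert not may_enter_adult_only_section(signal, robust_adult_check_passed=False)
```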
Oddly, the bill applies all the same requirements it imposes on apps to the website versions of those apps. If a website that provides different experiences to minors and adults has no app, the website is exempt. But applying the bill’s requirements to website versions of covered apps raises some strange questions. Apps with web versions don’t always exist in every app store. Some exist in the iOS and Android app stores (or just one of the two) but not in app stores on laptops or Windows phones. If someone were to access such a website on a laptop or Windows phone, many provisions of the law would not make sense, including all the information the website would be required to share with app stores that don’t carry the app. There are also a variety of requirements about how app stores must interact and share information with covered apps, and it is unclear whether those provisions also apply to covered websites, especially when accessed on devices whose app stores don’t contain the covered app.
However, the bill also includes some welcome provisions, such as prohibiting apps from attempting to figure out a user’s birth date by repeatedly requesting the user’s age from the app store. There is no guarantee that apps won’t still do so, but attempting to prevent the practice is a good idea. The bill also allows app stores to withhold age signals from developers that don’t adhere to the app store’s policies and safety standards, which is a good step to protect user information. Additionally, the duty falls on the apps rather than the app stores to determine whether an app is covered by the bill; because app stores don’t necessarily know whether an app provides different experiences for minors and adults, this makes sense.
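To show how the anti-probing and signal-withholding provisions might fit together on the app store side, here is a hedged sketch; the gateway class, the once-per-month interval, and the blocklist mechanics are all our own invention, not terms of the bill.

```python
import time

class AgeSignalGateway:
    """Hypothetical app-store service that hands age signals to developers."""

    def __init__(self, min_interval_seconds: int = 30 * 24 * 3600):
        self.min_interval = min_interval_seconds         # e.g. one signal per app, per user, per month
        self.last_request: dict[tuple[str, str], float] = {}
        self.blocked_developers: set[str] = set()

    def revoke(self, developer_id: str) -> None:
        """Withhold future age signals from a developer that violates store policy."""
        self.blocked_developers.add(developer_id)

    def request_signal(self, developer_id: str, user_id: str, is_minor: bool) -> dict | None:
        if developer_id in self.blocked_developers:
            return None                                  # signal withheld
        key = (developer_id, user_id)
        now = time.monotonic()
        last = self.last_request.get(key)
        if last is not None and now - last < self.min_interval:
            # Deny rapid re-queries so an app cannot pinpoint a birth date by
            # polling until the minor flag flips to adult.
            return None
        self.last_request[key] = now
        return {"is_minor": is_minor}
```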
This would change the Children’s Online Privacy Protection Act of 1998 to apply not just to children but also to teens, and not just to websites but also to apps. It also preempts similar laws at the state level. Among other changes, it loosens the knowledge standard depending on company size. The standard for whether a service knows that a child is in fact a child changes from “actual knowledge” to “knowledge” for the largest social media companies, while the current actual-knowledge standard remains intact for services that generate less than three billion dollars in annual revenue, have fewer than 300 million monthly active users, and don’t focus mainly on user-generated content. Although keeping the actual-knowledge standard in most cases is preferable, applying a looser knowledge standard to the top social media companies still raises difficult compliance questions. The bill defines “knowledge” in such cases as when a platform “willfully disregarded information that would lead a reasonable and prudent person to determine, that a user is a child or teen.” It is unclear what could serve as evidence under that standard. For example, parents researching toys for children or colleges for their teens may look a lot like kids researching these things for themselves. This “should have known” standard is neither workable nor predictable.
Additionally, the bill would prohibit a service from cutting off access for children or teens if a parent or teen requests that their personal information be deleted, so long as the service can be provided without that information. The ways in which user data are necessary for a service to function correctly aren’t always apparent to those using it, and proving as much in court is likely to be a burdensome process, particularly for small services. It isn’t far-fetched to imagine a parent requesting that a service delete their child’s information, the service doing so and removing the child from the service, and the service then being sued. Indeed, that is exactly what this provision enables.
We share the Energy and Commerce Committee’s goal of ensuring a safe online environment for children and teens. However, as Congress considers these legislative proposals, it is critical to balance safety objectives with the technical realities of the digital ecosystem and the need to preserve American innovation.
While some of these measures offer constructive steps—such as public education campaigns and evidence-based studies—others present serious functional and security concerns. Specifically, mandates for broad age verification often ignore the technical infeasibility of current verification methods and the cybersecurity risks created by collecting sensitive user data. Furthermore, overly broad definitions risk sweeping in beneficial technologies, potentially cutting off minors from valuable educational and mental health resources under the guise of protection.
We urge the Committee to prioritize solutions that empower parents and deployers without imposing unworkable mandates that stifle the development of next-generation computing. We remain ready to assist the Committee in refining these proposals to ensure they effectively protect youth while fostering a vibrant and open digital future.