Powering Prosperity: Five State-Level Reforms to Unleash Abundant, Affordable Electricity
Reform 1: Create a State-Level Welcome Mat for Nuclear Energy
Advanced nuclear energy is uniquely positioned to provide reliable and clean energy to grow our economy, yet decades of federal regulatory paralysis and lingering state-level prohibitions have made new nuclear development exceedingly rare.
States can act where federal processes have stalled. The Overturn Prohibitions and Establish a Nuclear Coordinator (OPEN) Act model takes a straightforward approach:
Repeal explicit or implicit state bans on nuclear development.
Establish a dedicated state nuclear coordinator to guide projects through permitting, siting, and interagency review.1
Utah offers a useful example. By creating a nuclear development role within its Office of Energy Development, Utah has attracted interest from multiple advanced nuclear firms and aligned its efforts with neighboring states that already host critical nuclear infrastructure.
States that proactively remove outdated barriers and build a runway for responsible development will capture the next generation of nuclear investment and attract new industries.
Reform 2: Make Permitting Transparent and Predictable
Opaque permitting processes impose real economic costs. When timelines are uncertain and responsibility is unclear, projects stall, capital sits idle, and public trust erodes. Transparency alone can dramatically improve performance.
Permitting transparency dashboards function like package-tracking systems for permit applications. They show:
Where an application sits in the process
Which agency or official is responsible for the next step
How long each stage takes relative to statutory or target timelines
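To make this concrete, here is a minimal sketch of the per-application record such a dashboard might track. The stage name, agency, and 90-day target are illustrative assumptions, not any state's actual schema.

```python
# Minimal sketch of one tracked permitting stage; all names and the
# 90-day target below are hypothetical, not drawn from any statute.
from dataclasses import dataclass
from datetime import date

@dataclass
class PermitStage:
    name: str                  # e.g., "environmental review"
    responsible_agency: str    # which agency or official owns the next step
    started: date
    target_days: int           # statutory or target timeline for this stage
    completed: date | None = None

    def days_elapsed(self, today: date) -> int:
        end = self.completed or today
        return (end - self.started).days

    def is_overdue(self, today: date) -> bool:
        # Overdue only if still pending past its target timeline.
        return self.completed is None and self.days_elapsed(today) > self.target_days

# One stage of one application, publicly visible in real time.
stage = PermitStage("environmental review", "Dept. of Environmental Quality",
                    started=date(2025, 1, 6), target_days=90)
print(stage.is_overdue(date(2025, 6, 1)))  # True: 146 days elapsed vs. a 90-day target
```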
These dashboards do not change environmental standards. They simply make the process visible and accountable. Virginia’s experience illustrates the impact. After implementing a statewide permitting transparency portal, the state reported reductions in environmental permitting timelines of up to 65 percent in certain agencies, from more than 335 days to around 120 days. More than 100,000 applications now move through the system with real-time public visibility.
For states seeking faster results without major statutory overhauls, implementing transparency dashboards is a proven and high-return reform.
Reform 3: Authorize Consumer-Regulated, Off-Grid Electricity Systems
The traditional regulated utility model remains foundational—but it is not always well-suited to rapid innovation or specialized energy needs. States can unlock new solutions by allowing consumer-regulated, off-grid electricity systems to operate outside the traditional utility framework. This is a regulatory sandbox that fosters innovation in a heavily regulated area without any risk to existing customers.
The principle is straightforward: if an electricity provider generates, transmits, and sells power entirely independent of the regulated grid, it should not be regulated as a public utility. If it later interconnects, regulation applies. New Hampshire has codified this approach by creating a legal category for off-grid electricity providers.2
Reform 4: Authorize Third-Party Permitting and Inspections
Permitting backlogs are often capacity problems, not policy failures. States can address them by authorizing qualified third-party professionals to perform plan reviews and inspections using the same codes and standards as public agencies.
This reform introduces competition and flexibility into a core government function while preserving safety and accountability.
Florida has used third-party inspections for decades. Texas, Tennessee, and other states have adopted similar models, often triggered when local governments fail to meet established timelines. These programs accelerate housing, distributed energy, and infrastructure projects while allowing public agencies to focus on oversight rather than throughput.3
Reform 5: Design for Speed and Certainty
States should architect their permitting systems for speed and certainty—especially for large, capital-intensive energy projects. Several tools have proven effective:
Shot clocks, automatic approvals, and permits by rule. Binding deadlines create strong incentives for timely action, and permits issued by rule give applicants certainty from the outset. State leaders should also borrow from Arizona’s “deemed approved” framework to create a clear backstop when agencies fail to act (a minimal sketch of that backstop follows this list).
Limits on energy bans and moratoria. Open-ended bans on energy development create uncertainty and suppress investment in all forms of energy. Model policies limit moratoria in duration and require clear findings tied to public health or safety.4
Financial accountability mechanisms. Money-back guarantees when agencies miss permit deadlines encourage promptness in every case, and options to pay for expedited review can speed the most valuable projects.
Faster development at existing and brownfield sites. Streamlining replacements for co-located generation—particularly at retiring facilities—turns grid constraints into opportunities. Arizona House Bill 2774 (2024) provides a useful framework for getting new nuclear energy online at existing generation sites.5
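As referenced under the first tool above, here is a minimal sketch of the “deemed approved” backstop, assuming a hypothetical 120-day shot clock; the deadline and status labels are illustrative, since actual timelines and permit types would come from statute.

```python
# Hypothetical "deemed approved" shot-clock logic; the 120-day deadline
# is an assumption for illustration, not Arizona's actual statute.
from datetime import date, timedelta

SHOT_CLOCK_DAYS = 120

def permit_status(submitted: date, agency_decision: str | None, today: date) -> str:
    """If the agency acted, its decision on the merits stands; if it has not
    acted by the deadline, the permit is deemed approved by operation of law
    rather than left in limbo."""
    if agency_decision is not None:
        return agency_decision  # "approved" or "denied", decided on the merits
    deadline = submitted + timedelta(days=SHOT_CLOCK_DAYS)
    return "deemed approved" if today > deadline else "pending"

print(permit_status(date(2025, 1, 2), None, date(2025, 6, 1)))  # "deemed approved"
```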
What Is And Isn't Ripe For The AI Litigation Task Force
Join tech and legal experts Prof. Kevin Frazier (University of Texas School of Law), Neil Chilson (Abundance Institute), and Charlie Bullock (Institute for Law & AI) for a breakdown of AI legal policy and regulatory developments in 2025 at the state, federal and executive levels, and the future of AI policy in 2026.
Flatley and Barkley: AI can help fix what’s broken in foster care
Published with Maureen Flatley in the Boston Herald.
President Donald Trump’s executive order directing states to deploy artificial intelligence in foster care isn’t just welcome — it’s overdue.
The provision calling for “predictive analytics and tools powered by artificial intelligence, to increase caregiver recruitment and retention rates, improve caregiver and child matching, and deploy Federal child-welfare funding to maximally effective purposes” addresses real failures in a system that desperately needs help.
The foster care system’s problems aren’t hypothetical.
Caseworkers manage 24-31 families each, with supervisors overseeing hundreds of cases. Children wait years for permanent placements. Around 2,000 children die annually from abuse and neglect, with reporting gaps suggesting the real number is higher. Overburdened workers rely on limited information and gut instinct to make life-altering decisions. This isn’t working.
AI offers something the current system lacks: the ability to process vast amounts of information to identify patterns human caseworkers simply cannot see. Research from Illinois demonstrates this potential. Predictive models can identify which youth are at highest risk of running away from foster placements within their first 90 days, enabling targeted interventions during a critical window. Systems can flag when residential care placement is likely, allowing caseworkers to connect families with intensive community-based services instead. These aren’t marginal improvements — they represent the difference between crisis response and genuine prevention.
Critics worry AI will amplify existing biases in child welfare. This concern, while understandable, gets the analysis backwards. Human decision-making already produces deeply biased outcomes. Research presented by Dr. Rhema Vaithianathan, director of the Centre for Social Data Analytics at Auckland University of Technology and lead developer of the Allegheny County Family Screening Tool, revealed something crucial: when Black children scored as low-risk, they were still investigated more often than white children with similar scores. Subjective assessments by overwhelmed caseworkers operating without adequate information lead to inconsistent, sometimes discriminatory decisions. The tool didn’t create that bias; it surfaced bias in human decision-making that had previously gone unmeasured.
That’s AI’s real promise: transparency. Unlike the black box of human judgment, algorithmic decisions can be examined, tested, and corrected. AI makes bias visible and measurable, which is the first step to eliminating it.
None of this means AI deployment should be careless. The executive order’s 180-day timeline is ambitious, and implementation must include essential safeguards:
Mandatory bias testing and regular audits should be standard for any AI system used in child welfare decisions. Algorithms must be continuously evaluated for disparate racial or ethnic impacts, with clear thresholds triggering review and correction; a minimal sketch of one such check follows this list.
Human oversight remains essential. AI should inform, not dictate, caseworker decisions. Training must emphasize that risk scores and recommendations are tools for professional judgment, not substitutes for it. Final decisions about family separation or child placement must rest with trained professionals who can consider context algorithms cannot capture.
Transparency requirements should apply to any vendor providing AI tools to child welfare agencies. Proprietary algorithms are fine for commercial applications, but decisions about children’s lives demand explainability. Agencies must understand how systems reach conclusions and be able to articulate those rationales to families and courts.
Rigorous evaluation must accompany deployment. The order’s proposed state-level scorecard should track not just overall outcomes but specifically whether AI tools reduce disparities or inadvertently increase them. Independent researchers should assess effectiveness, and agencies must be willing to suspend or modify systems that don’t perform as intended.
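As noted under the first safeguard, here is a minimal sketch of such a disparate-impact check, with a trigger modeled on the four-fifths rule from employment law. The group labels, rates, and 0.8 threshold are illustrative assumptions, not anything the executive order specifies.

```python
# Hypothetical recurring bias audit; the 0.8 trigger borrows from the
# four-fifths rule and is an illustrative assumption, not a mandate.
from collections import defaultdict

def investigation_rates(decisions):
    """decisions: iterable of (group, was_investigated) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [investigated, total]
    for group, investigated in decisions:
        counts[group][0] += int(investigated)
        counts[group][1] += 1
    return {g: inv / total for g, (inv, total) in counts.items()}

def audit(decisions, threshold=0.8):
    """Flag any group whose investigation rate diverges from the highest
    group's rate by more than the threshold ratio, triggering human review."""
    rates = investigation_rates(decisions)
    highest = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * highest}
    return rates, flagged

# Two groups with similar risk scores but different investigation outcomes.
sample = ([("A", True)] * 30 + [("A", False)] * 70
          + [("B", True)] * 18 + [("B", False)] * 82)
rates, flagged = audit(sample)
print(rates)    # {'A': 0.3, 'B': 0.18}
print(flagged)  # {'B': 0.18} -> disparity exceeds threshold; review required
```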
The alternative to AI isn’t some pristine system of perfectly unbiased human judgment — it’s the status quo, where overwhelmed caseworkers make consequential decisions with inadequate information and no systematic oversight. Where children fall through cracks that better data analysis could have prevented. Where placement matches fail because no human could possibly process all relevant compatibility factors. Where preventable tragedies occur because risk factors weren’t identified in time.
Implementation details matter enormously, and HHS must get them right. But the executive order’s core insight is sound: AI and predictive analytics can transform foster care from a crisis-driven system to one that prevents harm before it occurs. The question isn’t whether to deploy these tools, it’s how to deploy them responsibly. With proper safeguards, AI can address the very problems critics fear it will create.
America’s foster children deserve better than the status quo. AI gives us a path to deliver it.
Maureen Flatley is an expert in child welfare policy and has been an architect of a number of major child welfare reforms. She also serves as the President of Stop Child Predators. Taylor Barkley is Director of Public Policy at the Abundance Institute, focusing on technology policy and innovation.
Testimony for hearing, “Artificial Intelligence and its impact on the American workforce and education system”
SUBMITTED STATEMENT OF KEVIN FRAZIER
AI INNOVATION AND LAW FELLOW, THE UNIVERSITY OF TEXAS SCHOOL OF LAW
SENIOR FELLOW, THE ABUNDANCE INSTITUTE
BEFORE THE HOUSE COMMITTEE ON EDUCATION AND THE WORKFORCE, U.S. HOUSE OF REPRESENTATIVES
HEARING ON “ARTIFICIAL INTELLIGENCE AND ITS IMPACT ON THE AMERICAN WORKFORCE AND EDUCATION SYSTEM”
Chairman Walberg, Ranking Member Scott, and distinguished members of the Committee, thank you for the opportunity to testify.
My name is Kevin Frazier. I’m the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Editor at Lawfare, and a Senior Fellow with the Abundance Institute.
The economic normal in the Age of AI is and will be marked by flexibility. Future generations of Americans won’t associate traditional work with 9-to-5 employment. The career ladder will be replaced by a career flywheel, in which individuals succeed due to their capacity to adapt and willingness to learn. In short, we’ll soon see more and more Americans participating in what I call the Portfolio Economy: workers will maintain an array of skills that they can offer to a range of employers on a project-by-project basis. This transition will put pressure on New Deal labor and employment laws, such as the Fair Labor Standards Act.
My written testimony makes three points: the transition to the Portfolio Economy must be data-driven, worker-focused, and flexible.
Absent more data about how the economy is evolving, Congress may lack the information necessary to assess whether existing laws are functioning as intended. Many of the current sources of labor market data are infrequent, imprecise, or inaccurate. The Contingent Worker Survey, for example, is issued on an irregular basis and does not accurately capture the number of Americans in non-traditional jobs.
It’s also unlikely that Congress has up-to-date and comprehensive information on AI adoption. According to recent polls, just ten percent of workers use AI daily and a mere nine percent of small firms have picked up AI. This information, though, is akin to learning LeBron James played 47 minutes in a game—it’s better than nothing, but it’s missing what’s really important: how many points he scored or, to return to AI, whether its use is increasing productivity and altering hiring decisions.
Updating and expanding the sources of information on AI progress will become all the more important as the technology continues to evolve. While most experts agree we should expect ongoing advances in AI, they diverge when it comes to pinpointing the specifics. The so-called jagged frontier of AI will cause firms to preserve the option of automating certain tasks and roles and, consequently, to prioritize finding workers with specific skills for finite projects. In turn, workers will need to be ready and willing to learn new skills, and fast.
With a better understanding of AI advances and adoption, I recommend Congress analyze laws such as the Fair Labor Standards Act with a focus on two aspects:
To what extent does the law rely on frameworks and definitions that clash with the Portfolio Economy?
Does the law incentivize workers to engage in the career flywheel—to study, to shadow via apprenticeships, and to work in non-traditional arrangements?
Thank you again for this opportunity. I look forward to your questions.
Introduction: The Portfolio Economy
Artificial intelligence (AI) will inevitably and permanently alter the nature of work. Where, how, and to what extent is unknown and, critically, unknowable. Economists do not have a definitive test to determine which jobs are most likely to be disrupted nor when such disruption will occur. They also lack the means to reliably predict which corporations and industries will successfully integrate AI and which may struggle to do so. This explains why, depending on the day, the public may come across headlines anticipating the rapid elimination of entire professions due to AI or reports touting how AI development is creating jobs and leading to entire new fields of work.
Technologists are similarly in the dark. They cannot precisely forecast the capabilities of future AI models. They vary in their expectations about how and when AI will achieve “superintelligence” or “AGI.” Their differences do not end there. Some contest whether those are definable concepts or concepts worth defining in the first place! Technologists even struggle to pin down the exact capabilities of existing models. These numerous and vast gaps in knowledge will persist for the foreseeable future. America’s world-leading AI labs are exploring new training methods that will result in AI models and, in turn, even more capable and diverse AI tools.
Despite the litany of known unknowns and unknown unknowns in AI development and diffusion, it is generally agreed upon that AI will accelerate workforce trends that were already underway before ChatGPT. Work has been and will be increasingly skill-based, short-term, and independent. The future of work looks far more like the gig economy than a 30-year career with a single firm. It will soon be the norm, rather than the exception, that Americans are simultaneously performing work for several firms under a range of different employment arrangements.
Put differently, we have entered the early innings of a Portfolio Economy. Workers will strive to maintain a range of valuable skills and a stable of clients; they will have to regularly update both as AI continues to advance and the nature of human-AI collaboration shifts. This economic reality is the product of how AI seems likely to develop and diffuse. AI does not progress at the same rate across all tasks and domains; its capacity to handle a specific job function is highly variable. AI experts commonly refer to this as the technology’s “jagged frontier.”
Whether AI will augment how a human performs a specific function, take over that function, or have no ability to augment or automate that function is a guessing game. While some tasks have been and will be delegated to AI, others will remain the exclusive domain of humans or involve some human-AI collaboration. This is precisely why those who warned that radiologists would soon be out of work have had to walk back their statements. Across the spectrum of tasks performed by radiologists, only some are suitable to entirely delegate to AI. For operational and legal reasons, many of the remaining radiological tasks must and will be performed by humans.
Learning from the case study of radiologists, assessments of the future value of any one task or profession must consider the substantial technical limitations of AI as well as broader legal and institutional inertia. While technological hurdles and regulatory barriers may eventually be cleared, many jobs with even a high rate of “exposure” to AI—meaning that AI tools seem capable of taking on many of that job’s tasks—will remain human-held positions. In some cases, AI augmenting or automating tasks may actually increase demand for the profession in question. A majority of firms with fewer than twenty employees expect that AI will cause them to hire more employees. AI as a job creator makes intuitive sense in many contexts. Consider the vast shortage of mental health professionals, for instance. As AI allows therapists to take more accurate notes and handle administrative tasks—thereby reducing the cost of treatment—more members of the public may seek out mental health support. More generally, AI can lower the costs of things like customer service that may have previously caused a customer to prefer larger firms to smaller ones.
Of course, in other domains, the productivity gains induced by AI will cause some employers to demand fewer workers in that field. It is fairly clear that there will be fewer court reporters in the future, for example. There are only so many trials in so many courts, so as AI makes key tasks of that role more cost effective, court systems will simply hire fewer reporters. Individuals in these sorts of fields will be formal members of the Portfolio Economy. They may spend a fraction of their time in their old, traditional W-2 role but will otherwise need to develop additional skills to market to other firms.
In this near-future, professional stability and economic security will look like having the means and opportunity to study new skills through private or public educational and vocational programs, train under mentors through apprenticeships, and work for a variety of firms around the world. Whether Americans thrive in the Portfolio Economy rests on whether labor and employment laws evolve to permit and encourage flexibility or maintain their current rigidity.
Policymakers seeking to navigate this challenge by developing the flexible, adaptive laws required in the Age of AI should adhere to a few best practices. First, seek to understand the underlying technology. A foundational knowledge of the flaws and likely capabilities of AI models in the near- and medium-term is essential to sorting through conflicting and even contradictory reports of how AI will alter the economy and society, more generally. Policymakers should also have a strong grasp of how and when AI can complement and augment humans rather than automate roles. Technological literacy will go a long way toward sorting through sensationalistic AI claims that tend to dominate the headlines.
Second, gather more information from the private sector about AI adoption plans and workforce needs. Information on how small-, medium-, and large firms plan to integrate AI can inform both immediate retraining and upskilling initiatives as well as more long-term reforms to our educational and workforce development programs. This data will similarly help dispel hyperbolic claims about the imminent demise of entire industries and professions.
Third, develop and test policies crafted in response to a thorough understanding of AI and reliable data on its adoption. What it means to succeed in the Portfolio Economy is unclear and contingent on variable factors—including but not limited to the pace and nature of AI advances and the level of AI adoption by firms and laborers alike. Laws and regulations crafted to today’s AI or based on the current use of AI by firms and laborers will rapidly become technologically obsolete. Legislative tools such as sunrise clauses, retrospective review, and regulatory sandboxes are indispensable as lawmakers strive to make sure the United States is first to the future rather than the last to move on from the past.
The remainder of this testimony provides initial guidance on each of those practices. This guidance is far from comprehensive and is soon to be out of date. In the same way that workers in the Portfolio Economy will have to continually update their menu of skills and services, policymakers will have to serially seek out new information on AI capabilities, AI adoption, and the regulatory tools most responsive to technological progress and its diffusion.
Understanding the Technology: The Technical Reasons Why AI Will Transform the Nature of Work
The study of prior general-purpose technologies, such as the steam engine and electricity, indicates a two-stage process of economic transformation. In the first phase, the technology is applied to existing processes—often with little or marginal effect. In the second phase, systemic changes take place as entire institutions and processes develop around the specific attributes of the emerging technology.
A historical case study helps illustrate the difference between task-based adoption of technology and systemic reorientation around new technology. Many people think of the steam engine as an invention of the 1800s, yet Thomas Newcomen developed such a system in 1710. The reason for the wide discrepancy? Significant technical limitations meant that the Newcomen engine wasn’t of much use outside of pumping water out of flooded mines. Firms found it cheaper to stick with existing power sources than to upend their workflows around this early iteration of the steam engine. So while the Newcomen engine may have displaced a few miners who were no longer needed for the one-off task of addressing flooded mines, it fell far short of transforming mining or any other industry. When technological adoption is in this first stage, it’s best to assess its societal and economic impacts on a more granular basis. It will never be the case that a new technology achieves its full potential in the days and months following its initial introduction. Cultural, economic, legal, and political factors all shape and slow technological diffusion.
AI is in many ways in its Newcomen stage. The vast majority of firms have yet to adopt AI. Barely more than twelve percent of large firms are using AI. Smaller firms, those under 250 employees, report even less use—below ten percent. Across the U.S. workforce, just one in ten employees regularly engage with AI; the majority of workers sparingly turn to AI for assistance. Many workers—about one in four—are unsure of whether their company has an AI policy or strategy.
Even among the firms that have formally adopted AI, it’s likely that they are generally doing so to handle or augment discrete tasks; systemic redesign seems years (and millions of dollars) away. Small and large firms that use AI tend to do so for just two specific tasks, such as developing marketing materials. Critically, these more AI-forward firms have yet to even attempt to reorient their entire operations around AI. Among small firms that have adopted AI, half have made no substantive investments in staff training, consultants, or operational updates. Only slightly more large firms have made AI-related investments. This dearth of investment suggests that it will be quite some time before AI causes systemic changes to the nature of work. Technologists expect that for every one dollar spent by a firm on AI they will have to invest nine more on intangible human capital. Firms have clearly yet to follow that ratio. While some may excuse underinvestment as a strategy to save costs, economists expect that firms willing to invest in AI and related institutional changes will experience greater productivity gains from AI sooner.
Cultural factors may also be slowing the workplace effects of AI. Reports of so-called AI stigma—a sense that colleagues may look down on co-workers for using AI—are pervasive. An unwillingness to use AI among a firm’s employees may reduce the usefulness of even highly reliable AI tools and delay any potential productivity gains. Stigmatization may also cause workers to engage in riskier uses of AI because of a hesitancy to seek out information on how to properly use it. When I travel the country talking to lawyers about AI, for instance, many attendees tell me after the fact that they rarely share how they use AI with colleagues because so many lawyers fear becoming the subject of the next story about a lawyer submitting a brief with a hallucinated citation. Lawyers aren’t alone in feeling as though they have to hide their AI use. So-called secret cyborgs—employees clandestinely using AI—exist in many companies.
Technical limitations additionally explain why AI adoption has generally been confined to taking over or assisting with discrete tasks. Evaluations of the extent to which AI tools have “economically relevant capabilities” show that AI has a long way to go before outpacing workers on each of their tasks. OpenAI’s GDPval, which assesses the performance of AI tools across 1,320 specialized tasks relevant to 44 occupations, indicates that leading AI tools demonstrate near expert-level performance on about 48 percent of key tasks. Certain tasks—those involving “know-what” or judgment, wisdom, and intuition—will likely remain beyond the capabilities of AI tools for quite some time. That said, today’s AI is the worst AI we will ever use.
Several likely technical advances in the short- to medium-term may hasten the ability of AI to augment or automate a broader suite of tasks as well as to assist in the redesign of entire processes. Agentic AI systems—tools capable of autonomously performing any task someone could do on their computer—loom on the horizon. In short, whereas most AI tools today require the user to continually prompt or instruct the tool, AI agents can pursue goals set by the user with little to no intervention. While early agentic systems are already available, they tend to struggle on especially complex or long-lasting tasks. AI developers expect that these shortcomings can and will be addressed in the near future—heralding the second phase of AI-driven transformation of the economy.
AI agents will allow for a new kind of business—companies designed entirely around AI rather than simply turning to AI to aid humans with current obligations. AI-native firms will differ from today’s firms in meaningful ways. First, they will require fewer humans relative to competitors that refrain from altering their processes. Second, AI-native firms will operate in a nimbler fashion. AI agents do not tire; they work 24/7/365. AI agents can also quickly be re-tasked at minimal expense, whereas humans may need time and training to become productive in a new line of work. Third, these firms can easily move in and out of different markets, so long as regulatory and technical systems facilitate such cross-border activity. As an aside, progress in robotics will allow for greater use of AI agents in sectors such as manufacturing, where AI use is less common today than in the knowledge sector, for instance. It is likely that developments in world models—a new set of AI tools that “predict what will happen next in the world, modeling how things move, collide, fall, interact and persist over time”—will accelerate this progress. As the sophistication of world models improves, robots will be able to take on a greater range of tasks with lower error rates and for longer periods of time. For these reasons and more, there will be a strong incentive for firms in many sectors to become more and more oriented around AI agents.
Yet not all sectors are amenable to a systemic overhaul around AI. The most common AI tools have significant limitations in certain domains due to their inherent technical features. As described by John Pavlus, today’s AI tools learn “scores of disconnected rules of thumb that can approximate responses to specific scenarios, but don’t cohere into a consistent whole.” In other words, the usefulness of today’s AI is highly context-dependent. If AI has not been trained on relevant, up-to-date data, then it will struggle in that domain.
This is due to the probabilistic nature of generative AI tools. In a very simplified sense, today’s AI tools predict the next best word in response to a user’s prompt based on their training data, the AI developer’s instructions for how to prioritize certain information or responses over others, and safeguards that the AI developer may have imposed to limit the generation of illegal or harmful outputs. Fields lacking data for AI to train on—think everything from the massage industry to crisis response management—will likely not experience a systemic reorientation around AI.
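To make the mechanism concrete, the toy model below “trains” on three sentences and predicts the next word by frequency. It is a deliberately crude sketch; real systems use neural networks over vast corpora, but it shows why a model without relevant training data has nothing useful to offer in a new domain.

```python
# Toy next-word predictor; the three-sentence "corpus" is an illustrative
# stand-in for the vast training data real models require.
import random
from collections import Counter, defaultdict

corpus = ("the permit was approved . the permit was delayed . "
          "the project was approved .").split()

# Count which word follows each word in the training data.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = bigrams[word]
    if not options:
        return None  # unseen context: no training data to draw on
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

print(predict_next("was"))     # usually "approved" (2 of its 3 occurrences)
print(predict_next("zoning"))  # None: the model has seen nothing relevant
```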
The unpredictability of how AI will advance means that there is no definitive timeline for how these stages will play out in different sectors. The best path forward is to develop agile and adaptive frameworks that facilitate two tasks: first, gathering information about how and to what extent (e.g. for augmentation, automation, or systemic redesign) AI is being adopted in different sectors; and, second, based on that information, updating labor laws as necessary to permit workers to meaningfully contribute to existing and new tasks and sectors.
Measuring Adoption: The Key Information Necessary to Determine How AI is Actually Changing the Economy
Congress cannot help American workers thrive in the Age of AI if it is operating with outdated, incomplete, or inaccurate data about the aforementioned phases of AI adoption into the economy. Yet the Federal Government’s current approach to learning about private sector use of novel technology, and about complex scientific and technological matters in general, is highly reactive and fragmented. Notably, these issues predated the current AI policy conversation. “Congress is science-poor,” concluded Martha Kinsella & Maya Kornberg in 2023. They continued, “The lack of scientific understanding and expertise cramps policymaking, with terrible effects on the country. Congress can fill this gap itself, and it must.” Absent changes, Congress will lack the information necessary to properly evaluate and, if necessary, respond to economy-wide trends emerging from AI.
In theory, a lawmaker focused on identifying the tasks and roles their constituents should seek out in the Portfolio Economy could gather information from the following sources: tax returns that may provide indirect evidence of the intensity of corporate investment in AI development and adoption; SEC disclosures that refer to corporate AI strategies; notices of layoffs that may have been driven by AI as compelled by the Worker Adjustment and Retraining Notification (WARN) Act; and responses to AI-related Census and other recurring survey questions. In practice, that lawmaker will find themselves woefully uninformed about the nature of AI adoption.
These information sources are either too narrow, too broad, or too infrequent to provide Congress with an accurate picture of AI capabilities and the extent to which those capabilities are being adopted by private actors. For instance, the WARN Act was enacted to provide state and federal officials with more information about large-scale factory closures, which differ in timing and nature from AI-induced layoffs. More generally, the aforementioned sources generally do not require explicit and ongoing reporting about AI use by the private sector. Even if several agencies attempted to collect such AI-related metrics, the resulting information would still be of limited value. There is no standard agreement among these various agencies nor within the applicable statutes as to how to define AI, AI adoption, and related terms that would be of interest to the lawmaker in question. Reporting requirements may also elicit too much as well as too little information. On the one hand, not all firms of interest are captured by these disparate collection mechanisms and not all firms may invest the same level of resources to accurately respond to such inquiries; on the other, firms may opt to flood agencies with information to reduce the odds of the meaningful kernels being identified. Congress and receiving agencies may also lack the capacity to meaningfully analyze what may be troves of data on AI development, diffusion, and adoption.
Absent significantly more accurate and timely data, it is highly likely that Congress will be tempted to legislate in response to anecdotes rather than based on evidence. That’s a solvable problem. Rather than rush to regulate AI and hope that the chosen statutory response will work as intended, Congress needs to thoroughly examine and improve how the Federal Government learns about AI use across the economy. Notably, this will mark an improvement upon how the government has previously responded to information gaps related to emerging technology—consider that there was a twelve-year gap (2005 to 2017) between formal reports on the state of contingent and alternative work arrangements, well after the rise of this key part of the economy. As late as 2024, such reports did not even include specific analysis of app-based work arrangements. Assuming that Congress corrects for this lag in the context of AI, a few key principles should guide any information gathering proposal.
First, collected AI-related information should generally be anonymized when submitted and aggregated when shared so that companies are incentivized to provide the most accurate and comprehensive data possible. If companies are coerced into making their AI adoption plans fully known to the public, they may face popular scrutiny for merely attempting to adjust to the Age of AI. This will have the perverse effect of slowing AI adoption, resulting in U.S. firms being technological laggards and, consequently, slower to create the products, services, and jobs of the future. Lawmakers seeking to develop educational and workforce development programs for the Portfolio Economy can do so with broader measures of AI adoption rates by firm size and industry type, for instance. A minimal sketch of this anonymize-and-aggregate approach follows the fourth principle below.
Second, information sharing processes should be as automatable as possible to reduce the costs and operational burdens associated with compliance, an especially key concern for small businesses. The costs to comply with even straightforward regulations are disproportionately high for small and medium-sized businesses. Some may accordingly call for exempting businesses under a certain size threshold from any mandatory AI adoption information scheme. However, omitting smaller companies would deprive Congress of critical information when it comes to preparing for the Portfolio Economy.
Startups and small businesses are often on the vanguard of creating and offering new products and services. They also are engines of economic opportunity and dynamism—facilitating the sort of churn that will allow workers to build out a larger portfolio of client companies. Congress must have a strong grasp of the state of AI across firms of all sizes. To accomplish this goal, policymakers should explore the use of AI to gather this information from private stakeholders and should mandate that agencies collecting any relevant data use standard forms and definitions.
Third, companies that make a good faith effort to comply with any such reporting requirements should be given the opportunity to cure any incorrect or late disclosures. This regulatory safe harbor will have the dual benefit of increasing the odds of companies submitting information in the first place and, therefore, providing Congress with a more complete picture of the AI landscape and state of the Portfolio Economy.
Fourth, any information collection scheme should be subject to a sunset clause. Congress should have to regularly reexamine whether it still requires certain information. This will reduce the odds of America’s companies being saddled with increasingly onerous, duplicative, or antiquated reporting requirements. Additionally, this recurring investigation of the need for specific information will force Congress to clearly think through why certain information may or may not be necessary for its regulatory goals. It’s highly likely that the metrics that matter most for informing AI policy will shift as the technology and its inputs evolve. By way of example, demands for information on the training data used by AI labs may be less legally important if labs begin to instead train on synthetic data—data generated by another AI.
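Returning to the first principle, here is a minimal sketch of anonymize-on-submission, aggregate-on-release reporting, assuming a hypothetical small-cell suppression rule; the field names and five-firm threshold are illustrative, not drawn from any statute.

```python
# Hypothetical aggregation pipeline; field names and the five-firm
# suppression threshold are illustrative assumptions.
from collections import defaultdict

MIN_CELL_SIZE = 5  # suppress buckets too small to conceal any single firm

def aggregate_reports(reports):
    """reports: list of dicts like {"industry": str, "adopted_ai": bool},
    stripped of firm identifiers at submission."""
    buckets = defaultdict(lambda: [0, 0])  # industry -> [adopters, total]
    for r in reports:
        buckets[r["industry"]][0] += int(r["adopted_ai"])
        buckets[r["industry"]][1] += 1
    released = {}
    for industry, (adopters, total) in buckets.items():
        if total < MIN_CELL_SIZE:
            released[industry] = "suppressed (fewer than 5 reports)"
        else:
            released[industry] = f"{adopters / total:.0%} of {total} firms"
    return released

reports = ([{"industry": "retail", "adopted_ai": i % 4 == 0} for i in range(20)]
           + [{"industry": "mining", "adopted_ai": True}] * 2)
print(aggregate_reports(reports))
# {'retail': '25% of 20 firms', 'mining': 'suppressed (fewer than 5 reports)'}
```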
Adherence to these principles will put Congress and the entire Federal Government in a much stronger position to see how the Portfolio Economy is emerging in real-time. In turn, policymakers can develop responsive policies that help Americans navigate this new economic reality. That said, Congress should not wait to begin to study how to proactively set Americans up for success in a more dynamic and fluid labor market.
Planning for the Portfolio Economy
As previously mentioned, AI is compounding several trends that were already straining labor laws better suited to the technologies and market forces of the 1920s than those of the 2020s. A few such trends are especially relevant to the Portfolio Economy. For one, it’s far from a new phenomenon that more Americans are working in contingent or alternative work arrangements. Somewhere between ten and thirty percent of US workers derive their primary income from a nontraditional work arrangement. Despite that vast span, it is evident that such arrangements have become more common in the 2000s. Numerous signals suggest this trend will not abate.
The jagged frontier of AI means that the set of tasks in demand will shift rapidly. New tools will be deployed with minimal notice, and innovators will devise creative ways for humans to leverage AI. The net effect of these two facts is a shifting menu of highly sought-after skills. Firms, especially following recent overhiring, are rightfully cautious of hiring too many people in fields that may soon be eliminated or altered; increased uncertainty as to the value of different skills will only further entrench their preference for alternative work arrangements over traditional W-2 agreements.
Workers, too, increasingly seek out flexible work arrangements. Following the pandemic, businesses that tolerate a wider range of hours, schedules, and work locations have seen an uptick in interest by applicants and retention among employees. The next generations of workers may place an even higher premium on bespoke work arrangements. Members of Gen Z have signaled a strong demand for anything other than 9-5 work. Forecasters expect Gen Alpha will seek out similarly flexible job opportunities.
In this fluid, shifting, and skill-specific labor market, there’s also a strong mutual interest among employers and workers alike for efficacious upskilling and retraining programs. All else equal, employers stand to benefit from a deeper labor pool—both in terms of the absolute number of qualified workers and the range of skills held by the average worker. Workers with more skills or a proven ability to quickly pick up skills will allow firms to easily shift between AI, humans, and human-AI teams as the technology, culture, and regulations evolve.
Relatedly, workers have an obvious interest in maintaining and, when possible, increasing their skill portfolio. In an economy that turns quickly to reward certain skills, the workers with a wide range of skills and the capacity to apply them in different contexts will fare better. Employers may soon look for evidence that workers are capable of adding immediate value to small and large businesses as well as to businesses operating in different sectors and even in different countries; in other words, the capacity to adapt and to problem solve will likely become even more valuable as the economy and technology continue to evolve.
Crucially, the Federal Government also has an interest and role to play in a skills-based economy. In an international market for skills, employers may turn to workers in other countries to tackle specific short-term efforts if they cannot find domestic talent. When companies come to rely more and more on foreign talent, the domestic economy will struggle, which has obvious negative ramifications for the government. Rather than attempt to interfere in dictating the specific skills workers ought to learn and the precise means to do so, the Federal Government can instead ensure the proper market and legal structures exist that achieve the following: first, make it as easy as possible for workers to accurately signal their skills to employers; second, ensure workers and employers alike have plenty of opportunities to learn new skills and to provide ongoing training opportunities, respectively; and, third, design labor laws such that workers can easily shift between different employers and projects and participate in lifelong learning and apprenticeship opportunities.
This is an ambitious but necessary agenda. In the same way that the successful business of the future will find ways to reorient their processes around AI rather than merely improve existing systems, success on this agenda will turn on the extent to which policymakers are willing to reinvent the wheel. The recommendations below are presented at a high level to facilitate this sort of bold thinking—the goal is to prevent the sort of piecemeal, fragmented approach that may take place through one-off amendments to current frameworks.
A. Skill Signaling Reform
The transition to a Portfolio Economy will falter if workers lack credible, low-cost ways to signal what they can do—and if employers lack reliable tools to identify that talent. Today’s dominant signals of worker competence—grades, formal educational degrees, and static certifications—are increasingly ill-suited to a labor market defined by rapid skill turnover, short-term engagements, and shifting patterns of human-AI collaboration. Grade inflation has eroded the informational value of transcripts. Degree requirements frequently function as blunt proxies for aptitude rather than evidence of job-relevant skills. At the same time, a proliferation of fast-moving credentials and training programs has added noise rather than clarity to the labor-matching process.
In the Portfolio Economy, skill signaling systems should communicate competence as well as encourage and shape future investment. Workers are more likely to pursue retraining when they can credibly document and monetize new skills. Employers are more likely to fund training when those investments are applicable across projects and teams for varying periods of time. Absent more accurate and dynamic signals of skills, both sides of the labor market will underinvest in upskilling, slowing adaptation at a time when economic dynamism is of extreme importance.
For these reasons, Congress should prioritize reforms that modernize how skills are documented, verified, and shared. While traditional credentialing mechanisms may continue, they should no longer occupy favored status in federal funding or in the labor market. A standardized, skills-based signaling system can supplement—and over time improve upon—traditional credentials with more precise, continuously updated, and trustworthy signals that better align with the realities of portfolio-based work.
Initiate a study and pilot program for a Cryptographic Curriculum Vitae.
Congress should direct the Department of Labor, in coordination with the Department of Education and the National Institute of Standards and Technology, to study the feasibility of providing each American worker with a cryptographic curriculum vitae (C-CV)—a secure, portable, and serially updated record of verified skills, competencies, and work experiences. A C-CV would allow workers to document what skills they possess and both how and where those skills were acquired and applied, including through formal education, apprenticeships, short-term contracts, and on-the-job learning.
For employers operating in the Portfolio Economy, C-CVs would significantly reduce the transaction costs associated with identifying the right talent for discrete tasks or time-limited projects. Rather than relying on coarse proxies such as degrees or job titles, firms could examine more reliable sources of information, such as whether a worker has demonstrated proficiency in specific tools, methods, or workflows. For workers, C-CVs would lower barriers to entry across firms and sectors, enabling them to market discrete skills to multiple employers simultaneously and to update their profiles as their capabilities evolve.
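A minimal sketch of the primitive a C-CV could be built on: a digitally signed skill attestation that any employer can verify. The Ed25519 scheme, the field names, and the use of the third-party cryptography package are illustrative assumptions; the testimony proposes studying the design rather than prescribing one.

```python
# Hypothetical signed skill attestation; scheme and fields are assumed
# for illustration. Requires the third-party `cryptography` package.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An issuer (e.g., an employer or apprenticeship program) signs a claim.
issuer_key = Ed25519PrivateKey.generate()
attestation = {
    "worker_id": "worker-123",          # hypothetical identifier
    "skill": "industrial HVAC repair",
    "evidence": "completed 2,000-hour apprenticeship",
    "issued": "2026-01-15",
}
payload = json.dumps(attestation, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Any employer holding the issuer's public key can verify the record
# without contacting a central authority; tampering breaks the signature.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, payload)
    print("attestation verified")
except InvalidSignature:
    print("attestation tampered with or forged")
```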
At a minimum, the study should address:
How to evaluate and record skill proficiency across K–12 education, higher education, apprenticeships, and professional settings in ways that are comparable without being rigid or exclusionary.
How to incorporate evidence of real-world application—such as project outcomes, employer attestations, or peer validation—while protecting sensitive or proprietary information.
Which governance structures are best suited to oversee a C-CV Exchange—a venue where employers can post opportunities and workers can share their skills—including clear standards for privacy, cybersecurity, and antidiscrimination compliance.
A phased rollout strategy, including pilot programs in select states, regions, or industries, to assess adoption, usability, and labor-market effects.
Whether, and under what conditions, recipients of federal education or workforce development funds should be required to participate in the C-CV Exchange to ensure interoperability and broad access.
Federal legislators and regulators should authorize and fund convenings that bring together employers, educators, workforce development organizations, and technologists to design multiple, competing skill evaluation tools. Rather than imposing a national standard, the Federal Government should facilitate experimentation and then collect data on which assessments employers find most predictive of job performance and most accurate in terms of worker expertise across various domains. These convenings ought not crowd out the work already being done in this space but rather fuel it. The Markle Group has collaborated with others to create a “Job Posting Skillitizer.” Hiring managers can use this tool to craft skills-based job descriptions that lend themselves to a skills-based approach to labor matching. While it’s a useful product, it could likely benefit from ongoing scrutiny by labor market participants—that’s where such convenings could come in handy. Iterative assessment of this kind could occur across all aspects of the labor market.
This competitive, information-driven process would allow ineffective or noisy assessments to fall out of use while rewarding those that accurately capture job-relevant skills. Over time, this feedback loop would improve the quality of skill signals available to both workers and firms, ensuring that educational and training programs evolve in response to actual labor-market demand rather than static credentialing norms.
Align federal education and workforce funding with improved skill transparency.
Even if Congress were to follow the first two recommendations, there will be an ongoing need to create and share skill evaluation tools. If Congress determines that the private sector is not sufficiently developing and adopting such evaluations, then it ought to commission an analysis of the extent to which traditional grading and credentialing systems fall short of employers’ and workers’ needs in a skills-based labor market. This rigorous examination is pivotal to deterring students and workers from chasing credentials with little to no economic return on investment. Of the hundreds of thousands, if not millions, of badges, certificates, and other credentials, many fall short of qualifying as “credentials of value,” or credentials that “equip recipients for strong career trajectories, improve their earnings opportunities, align with high-demand jobs offered by . . . employers,” and propel recipients to “earn enough within 10 years to pay for the cost of their education[.]” That analysis should inform the imposition of conditions on federal education and workforce development funds and grants, with the aim of encouraging—though not abruptly mandating—the adoption of more granular, skills-based reporting mechanisms.
By tying federal support to improved skill transparency rather than to particular credentials, Congress can help institutions orient toward outcomes that matter in the Portfolio Economy without dictating curricular content or instructional methods. To be blunt, ongoing direct and indirect support of educational and vocational programs that do little to help workers show their capabilities and employers find the best workers represents a poor allocation of federal funds given superior alternatives.
Establish a legal safe harbor for skills-based hiring and signaling.
Congress should enact a statutory safe harbor clarifying that employers who hire based on the C-CV Exchange and otherwise rely in good faith on validated, skills-based signals—rather than degree requirements or pedigree-based proxies—will not face heightened liability under federal employment or civil rights laws, provided those tools are demonstrably job-related and nondiscriminatory in design.
This reform would remove a significant legal disincentive to modernizing hiring practices. Many employers default to degree requirements when hiring because those requirements are familiar and legally safe, not because degrees carry much informational value. A clear safe harbor would accelerate the shift toward skills-first hiring, expanding opportunity for workers whose competencies were acquired outside traditional pathways and improving labor-market matching efficiency.
Application of these reforms would strengthen the informational infrastructure of the labor market. Workers would be better positioned to invest in new skills with confidence that those investments can be credibly signaled and rewarded. Employers would gain faster, cheaper, and more accurate access to talent. And policymakers would support a labor market that rewards adaptability, continuous learning, and demonstrated competence—the core attributes required to thrive in a Portfolio Economy shaped by rapid technological change.
B. Increasing Skill Development Opportunities
The Portfolio Economy will reward workers who can repeatedly acquire, apply, and redeploy skills across shifting tasks, firms, and industries. Yet much of federal labor and education policy still reflects a linear model of work: education occurs upfront, retraining is episodic and reactive, and employer-provided training is treated as a discretionary benefit rather than an operational necessity. Those assumptions create avoidable friction. They raise the cost of entering learning-oriented roles, make it harder to finance mid-career pivots, and discourage both workers and firms from investing in skills that will be valuable precisely because they are adaptable and portable.
Congress need not and should not attempt to forecast which skills will matter most. The jagged frontier of AI makes such predictions unreliable. A more durable federal role is to remove legal and financial barriers that prevent workers and employers from responding to shifting skill demand in real time, while building in mechanisms for learning and course correction. The proposals below are intended to expand the range of lawful, practical pathways through which Americans can build skills through formal education, vocational programs, and on-the-job learning—without imposing unnecessary federal mandates or rigid national standards.
Ease the creation of more trainee opportunities for young Americans.
Young Americans have found the current economic churn particularly hard to navigate. The combination of a shifting labor market and inadequate incentives for firms to gamble on entry-level workers with unknown capabilities has resulted in a troublingly high rate of unemployment among recent graduates and other young Americans. Failure to help these Americans find their economic footing may have dire long-term consequences. The first few years of a worker’s career go a long way toward shaping their future professional path. If they find themselves underemployed for prolonged periods, then they may never find their way to more appropriate and remunerative forms of employment.
That’s why it is necessary to amend the 90-calendar-day youth minimum wage exception under the Fair Labor Standards Act (FLSA) to 180 days, subject to a two-year sunset clause. The labor-market logic behind this proposal is simple: many entry-level, learning-oriented positions require more than a brief introductory period before a worker becomes productive, yet firms face an uncertain return on the resources they spend training new workers. Extending the youth minimum wage window reduces the marginal cost to employers of offering longer trainee roles and increases the likelihood that youth work experiences generate durable, transferable skills rather than short-term, low-skill churn. Moreover, this longer term can foster more meaningful work-trial periods over which the worker and employer can determine whether there is a good fit for a longer-term, more permanent role.
Because the risks of abuse of this extension are real—particularly substitution of lower-cost youth labor for adult labor or the creation of extended low-wage roles with little skill accumulation—Congress should require retrospective evaluation of such programs by the Department of Labor during the two-year pilot period. That evaluation should focus on measurable outcomes rather than compliance formalities: rates of participation, wage progression after the exception ends, duration of employment, transition into apprenticeships or higher-wage roles, and any displacement effects. This sunset clause will force Congress to revisit the policy in light of evidence and to narrow, expand, or terminate it accordingly.
Relatedly, Congress can make a minor yet strategic adjustment to the Trump Account (TA) framework: allowing employers who hire student trainees under 18 to contribute directly to those trainees’ accounts. Specifically, employers should be permitted to reallocate the existing $2,500 credit available to dependents of employees to eligible student trainees—so long as the employee does not claim it or affirmatively waives it. This would strengthen early skill development while avoiding the creation of a new entitlement program or a large new administrative apparatus. Though the potential benefits of this policy change will not be realized for some time, it is nevertheless a wise strategic investment in a more flexible labor force down the road.
Finally, the Department of Labor should launch a pilot program that encourages firms to create apprenticeship roles financed primarily through income-sharing agreements (ISAs). Fear of runaway apprentices—trainees benefiting from the time and expertise of a mentor, then fleeing for another role—understandably chills the development of apprenticeships. ISAs curtail that concern by allowing firms to recoup apprenticeship costs and, depending on the ISA terms, even profit from running particularly effective programs. This initial pilot of two or three years should then inform agency guidance for firms looking to offer such programs.
Create a narrowly tailored FLSA trainee pathway for students aged 17 and older in hazardous occupations, with strict safety constraints and time-limited authorization.
The FLSA should be amended to allow students aged 17 and older to work in hazardous occupations in bona fide trainee roles. Many of the most economically valuable, labor-constrained, and AI-resistant skills—advanced manufacturing, energy infrastructure, logistics, and skilled trades—cannot be developed solely through classroom learning. A categorical prohibition reduces exposure to high-demand fields and delays skill accumulation in sectors where experience is itself a credential.
This reform should be deliberately narrow. Congress can initiate it by authorizing a time-limited pilot that applies only to clearly defined trainee roles and requires safeguards that are practical yet not overly burdensome: supervision requirements; restrictions on the most dangerous task categories; mandatory safety instruction consistent with industry norms; and reporting of serious incidents and program completion rates. The aim is to expand opportunity in the fields of the future while taking the requisite steps to guard against the exception becoming a loophole that places youth in unsafe settings. After the pilot period, Congress should evaluate whether the pathway increased entry into apprenticeships and high-demand skilled roles without adverse safety outcomes and, if so, how best to make such pathways more widely available and permanent.
Continue expanding Pell Grant flexibility to cover high-quality vocational and short-term training, while hardening guardrails against low-value programs.
Following the recent expansion of permissible Pell Grant uses, Congress should further broaden Pell Grant eligibility to cover vocational and short-term training opportunities, including bootcamps and similar non-degree programs. The Portfolio Economy requires training that is modular, stackable, and accessible in short intervals—particularly for workers who cannot afford to exit the labor market for multi-year programs. Entities other than higher education institutions—currently off the table for Pell recipients—are likely best suited to provide such training. Pell Grant recipients should not be forced to spend their funds on less efficacious opportunities.
At the same time, Congress should recognize that expanding financing without quality safeguards risks subsidizing credential mills and further degrading skill signals. Any expansion should therefore be coupled with outcome-focused guardrails that are administrable and technology-neutral, such as transparent completion rates, job placement measures where appropriate, and earnings or advancement indicators. This preserves flexibility while ensuring federal dollars support programs that plausibly improve a worker’s labor-market prospects.
Modernize overtime rules to permit voluntary conversion of overtime pay into training comp time.
Rigid approaches to how overtime is calculated and rewarded deprive workers of autonomy over their preferred means of compensation. Here, again, the FLSA requires amendment. Congress should update the statute’s overtime provisions to allow workers, by voluntary agreement, to convert overtime compensation into compensatory time that can be used for qualifying skill development opportunities. This is especially well-suited to the Portfolio Economy because it acknowledges time—rather than only money—as a significant constraint on retraining. Many workers can finance a course but cannot attend one without sacrificing wages or risking job loss.
To prevent abuse, Congress can task the Department of Labor with spelling out rules that facilitate worker opt-in, prohibit coercion by employers, and, if deemed necessary, set reasonable caps on total comp time accrual. Congress may also direct Labor to study whether comp time should be portable or usable across multiple employers within a defined period, recognizing that workers increasingly move between short-term engagements.
Permit a vocational education exception to early withdrawal penalties from Trump Accounts.
Disparate access to training opportunities among workers has long hindered America’s total productive capacity. Though AI and other technologies such as virtual and augmented reality will eliminate some of the geographic and quality barriers that have contributed to access gaps, many Americans may still face financial hurdles to enrolling in the best training opportunities. This marks a key opportunity for congressional action: permitting TA holders to access their funds early, without penalty, for qualifying vocational and workforce training. In a labor market defined by numerous career and employer transitions, restricting savings vehicles to narrow categories of “traditional” education or penalizing mid-career use undermines the very flexibility the Portfolio Economy requires.
This exception should be structured to reduce fraud and misuse by tying eligibility to clearly defined categories of qualifying programs (including apprenticeships and accredited vocational programs) and by requiring basic documentation of expenditures. Congress should authorize a review period to evaluate whether the exception increases training participation and whether it is used primarily for bona fide skill acquisition.
Revise the Internal Revenue Code to expand write-offs for worker training and increase per-worker caps.
Employer-based training opportunities can serve as a win-win: firms benefit directly from a workforce that can integrate new tools—especially AI-enabled systems—and can move between tasks as workflows change. Yet training is often underprovided due to free-rider dynamics, accounting constraints, and legal uncertainty. Federal policy can address these structural barriers by making training easier to finance, easier to offer, and less legally fraught—without dictating training content or imposing rigid national standards.
Congress should revise the Internal Revenue Code to allow firms to more fully write off expenses for training workers for new roles and to increase caps on training-related deductions where they limit participation. Present policy effectively discourages employers from supporting workers who want to upskill by participating in a program of study that would qualify the employee for a new trade or business. In short, the tax code limits employer support to the bare minimum of education required for a worker to perform their current role. What’s more, deductible employer support is capped at $5,250 per year, which may not meaningfully assist with more substantive training opportunities.
Updates to these policies make sense even if the Portfolio Economy is slower to emerge than expected. Employers have tremendous incentives to upskill their existing workforce rather than search for workers in the open market. HR specialists estimate that recruiting and onboarding a new employee may cost $7,000 to $28,000, on top of the lost productivity associated with the myriad tasks of bringing a new person onto the team.
This reform recognizes a basic reality: in an AI-influenced economy, training is not a perk but a recurring input into productivity. To ensure the policy is usable beyond large enterprises, Congress should prioritize administrative simplicity—clear eligibility standards, straightforward documentation, and minimal compliance burdens for small and medium-sized firms. Congress should also require the Treasury to report, in aggregate, on uptake by firm size and sector, enabling future refinement.
Fund Skill Development Opportunity Centers through competitive grants modeled on P-TECH, with local flexibility and measurable outcomes.
An economy that demands more agile workers would benefit from more agile educational and vocational institutions. Thankfully, Congress need not look far to find promising examples of the educational institutions of the future. The P-TECH model in place in New York is particularly compelling. These programs permit students of varied academic backgrounds to pursue a hybrid education—a mix of classroom learning and on-the-job training—that culminates in students earning both their high school diploma and an associate degree.
Yet most Americans have little to no chance of enrolling in a similar program. Congress can make such institutions far more common by launching a grant program for local “Skill Development Opportunity Centers” in which employers, community colleges, and high schools collaborate to offer six-year pathways combining classroom learning with paid, work-based training. As with the P-TECH schools, successful participants should earn both a high school diploma and an associate degree. This structure directly serves the Portfolio Economy by embedding skill development within real work settings and by producing graduates with both credentials and demonstrated competence.
Critically, rather than forcing uniform design, recipients of any such federal funds should be encouraged to adapt to local labor-market conditions. Grant selection should prioritize employer engagement, evidence of regional skill demand, and credible placement pathways. The agency tasked with overseeing this grant program—presumably the Department of Education—should require periodic evaluation of outcomes (completion, placement, wage progression) and should retain the ability to reallocate funding toward the most effective education models over time.
C. Easing the Transition to the Portfolio Economy
As work becomes more task-based, short-term, and distributed across firms, the challenge facing policymakers is no longer simply how workers acquire skills, but how they use those skills across multiple jobs without incurring unnecessary legal, financial, or administrative penalties. For many Americans, participating in the Portfolio Economy will mean holding multiple roles at once—combining W-2 employment with contract work, pursuing training while actively working, or serving clients across state lines. Yet core elements of federal labor, benefits, and tax law remain structured around exclusivity: one employer, one job, one benefits bundle, one jurisdiction.
That mismatch imposes real costs. Workers delay taking on additional clients for fear of losing benefits or triggering tax complexity. Employers refrain from offering support—such as benefits contributions or flexible arrangements—out of concern that doing so will alter worker classification. States erect licensing barriers that frustrate geographic and professional mobility. The net effect is to slow labor-market adjustment at precisely the moment when adaptability is most valuable.
The recommendations below focus on easing these transition costs. They are not intended to privilege portfolio work over traditional employment, nor to mandate new employment structures. Instead, they seek to ensure that federal law does not penalize workers and firms that operate across multiple engagements and that it provides clear, predictable rules for doing so.
Establish a portable benefits framework centered on worker-controlled Opportunity Accounts.
Bipartisan members of Congress have long called for a portable benefits framework suited to today’s economy and the Portfolio Economy of the future. Now is the time for Congress to act on that bipartisan consensus by authorizing a portable benefits program. Pursuant to this program, employers—whether engaging workers as employees or independent contractors—may contribute to worker-controlled “Opportunity Accounts.” These accounts would travel with the worker across jobs and clients and could be used for a defined set of purposes closely tied to portfolio work: qualifying education and vocational training; physical and mental health expenses for the worker and dependents; and relocation or travel costs associated with pursuing work in regions showing strong demand for the worker’s skills.
The core advantage of this approach is structural neutrality. Opportunity Accounts would decouple benefits from long-term attachment to a single firm, while preserving flexibility in contribution levels and participation. Congress should design the program to be voluntary, administratively lightweight, and accessible to small firms, while requiring periodic evaluation to assess uptake, usage patterns, and effects on worker mobility and retention.
Adopt a single, clear federal standard for worker classification and specify that benefit provision is classification-neutral.
Legislators must also act on the apparently broad congressional agreement that worker-classification rules generate ongoing and unnecessary confusion. Congress should replace the current patchwork of federal worker-classification tests with a single, clear standard. At the same time, it should make explicit that the voluntary provision of benefits—whether through Opportunity Accounts or other portable mechanisms—does not weigh in favor of employee classification. This would align with efforts already underway at the state level, such as in Utah, and signal congressional dedication to identifying and improving ambiguous policies.
Classification uncertainty remains one of the most significant barriers to portfolio work. Firms frequently avoid offering benefits, training, or flexibility not because they oppose worker support, but because such actions risk reclassification and retroactive liability. Clarifying that benefits are neutral with respect to classification would reduce that chilling effect, expand access to support, and allow firms to compete on worker experience without fear of legal exposure. As with other recommendations, this reform should include an opportunity for Congress or the applicable regulator to revisit the standard after an initial implementation period to assess its effects on misclassification disputes and labor-market participation.
Direct Treasury to study simplified tax compliance for workers with multiple income streams.
As outlined at the start of this testimony, workers should not face disparate legal treatment or regulatory burdens simply because they opt for non-traditional work arrangements. Taxes represent one of the most obvious gulfs between workers who opt for standard employment arrangements and those who carve a different path. Congress can remedy this issue by directing the Department of the Treasury to examine options for simplifying tax compliance for workers earning income through multiple arrangements—such as combinations of W-2 employment, 1099 contracting, and short-term project work. Workers who lean into the Portfolio Economy frequently face higher compliance costs, uneven withholding, and greater risk of error, all of which discourage participation in flexible work.
The study should evaluate mechanisms such as standardized withholding across income types, simplified quarterly payment systems, and consolidated reporting tools. Any reforms should be piloted and assessed before broad adoption, with particular attention to impacts on compliance rates and ease of filing.
Conclusion: Entrepreneurial Liberty in the Portfolio Economy
The Portfolio Economy is unavoidable. It’s an inevitable product of how AI develops, diffuses, and interacts with human work. Countries that attempt to steer around it will eventually run aground—likely sooner rather than later. Countries that instead adapt to the economy of the near future can thrive. The key to success is acknowledging and responding to a labor market in which individuals build durable economic security by cultivating skills, assembling projects, and moving fluidly across firms, sectors, and geographies. In such an economy, stability no longer comes from a single job title or employer, but from agency—the capacity to learn, adapt, and apply one’s talents where they are most valued.
The central policy question, then, is not how to freeze work in a familiar form, nor how to preordain which jobs should exist and who may perform them. It is whether our laws expand or constrain the freedom Americans need to navigate constant change.
History counsels restraint. When lawmakers attempt to lock in outcomes amid technological uncertainty, they tend to protect incumbents, entrench inefficiency, and narrow opportunity. When they instead focus on enabling individual initiative—lowering barriers to learning, mobility, and experimentation—they create the conditions for broad-based prosperity.
This testimony has advanced a simple organizing principle: the Age of AI demands a renewed commitment to entrepreneurial liberty. That means protecting the freedom to study—by making skills legible, portable, and worth investing in. It means protecting the freedom to shadow—by expanding apprenticeships, trainee pathways, and real-world learning that lower the cost of entry into new fields. And it means protecting the freedom to work—by ensuring labor, tax, and benefits laws do not punish those who move between roles, clients, or places in pursuit of opportunity.
If Congress gets this right, Americans will not merely endure the transition to an AI-shaped economy; they will shape it themselves. The surest path to a future of work that is innovative, inclusive, and resilient is not to manage outcomes from the center, but to trust individuals with the tools, signals, and legal freedom to get ahead. That is how the American Dream has always been renewed—and how it can endure in the Portfolio Economy.
Professor Kevin Frazier is a Senior Fellow at the Abundance Institute, focusing on the nexus of regulatory design, innovation policy, and constitutional law. The most important issue he is working on is identifying outdated legal frameworks and assumptions that impede the American Dream in the Age of AI.
Frazier currently leads the AI Innovation and Law Program at the University of Texas School of Law. He has testified before Congress on topics ranging from artificial intelligence to undersea cables. Frazier regularly advises state and federal policymakers on how to accelerate adoption of emerging technologies. You can listen to his two cents on all things tech policy on the [Scaling Laws](https://www.lawfaremedia.org/contributors/kfrazier) podcast. Born and raised in Beaverton, Oregon, Frazier is a graduate of the University of Oregon, UC Berkeley School of Law, and Harvard Kennedy School. Prior to joining the legal academy, he clerked for the Montana Supreme Court and conducted a research fellowship on AI.
In December 1983, while folks exchanged sweaters and fruitcakes, Neil Young received a special gift from his label, Geffen Records: a lawsuit. Geffen was suing him for, of all things, not being himself.
The lawsuit alleged his recent albums were “unrepresentative” and “musically uncharacteristic,” and the label demanded $3.3 million from Young for the crime of not sounding like Neil Young.
Geffen, a hungry new label, had signed Young to boost its prestige. But when he delivered his first album, Trans, they got nervous. While Young’s ragged Les Paul guitar, “Old Black,” was still in the mix, it was buried under layers of synthesizers. Worse, his distinctive, fragile tenor had been replaced by a robot.
Geffen was right: it didn’t sound like Neil Young. But that was the point. Young wasn’t trying to sound like himself; he was trying to sound distorted, using a device called a vocoder.
Long before Young used the vocoder to frustrate the executives at Geffen, it was used to foil the codebreakers of the Third Reich. In the 1940s, as the war in Europe became the world’s, FDR and Churchill needed to communicate securely across the Atlantic. Their standard radio-wave conversations were easily intercepted by a Nazi station in Norway, so Bell Telephone Laboratories developed a solution codenamed SIGSALY.
The system was massive and complex, involving synchronized vinyl and rooms of equipment. At its heart was the “voice encoder,” or vocoder. Invented by Bell engineer Homer Dudley, it converted the human voice into encrypted electronic signals. It was perfect for the war effort, but could it make music?
Bell Labs had actually dipped its toe in the musical waters in 1938, using a vocoder to record an old Irish folk tune, “Love’s Old Sweet Song.” It had an ethereal quality, but the sound remained a novelty until the mid-60s, when a young artist, Wendy Carlos, encountered the device at the New York World’s Fair.
Carlos began experimenting and later recalled that, “The first reactions were unanimous: everyone hated it!” Everyone, that is, except Stanley Kubrick. Fresh off 2001: A Space Odyssey, Kubrick heard Carlos’s vocoder treatment of Beethoven’s Ninth and recruited Carlos for the soundtrack of A Clockwork Orange. The vocoder was now an instrument, and it was soon adopted by genre-bending artists like Kraftwerk, Herbie Hancock, Afrika Bambaataa, and ELO.
By the late 90s, the device was so routine that when Cher released “Believe,” one of the producers explained away the AutoTune effect as just another vocoder trick.
But when Neil Young plugged in a vocoder, critics were baffled. Rolling Stone said it was “like seeing a satellite dish sitting outside of a log cabin.” It felt alien. But again, for Young, that was the point.
At the time, one of his sons, born with cerebral palsy, was struggling to communicate.
For Young, the vocoder wasn’t just a cool effect; it was a father trying to inhabit his child’s world, inviting listeners into that disorienting, painful distance.
The philosopher of technology Albert Borgmann distinguishes between a “device” and a “focal thing.” A “device” is a tool that delivers a result but hides the work behind it. A “focal thing” demands your skill and attention, creating meaning around it. A modern heating system is a device that gives warmth. A hearth, by contrast, is a focal thing. It demands that you chop wood, build the fire, and tend the flame. It requires skill and attention to gather the household around a center of meaning.
It’s easy to treat technology as just a device, a shortcut to make life easier. Neil Young did the opposite. He didn’t use the vocoder to hide his effort; he used it to struggle, to center his life around his son’s condition. He didn’t use it to make singing simpler; he used it to make communication more meaningful.
Geffen eventually settled out of court and apologized, while Young continued his legendary career. But Trans left us with an enduring lesson: Technology is often a device that reduces effort, but it can also be a focal point that deepens connection.
The difference isn’t in the machine. It’s in what you’re doing with it.
The Salt Lake Tribune’s recent editorial on Gov. Spencer Cox’s nuclear energy initiatives claims to approach the issue with “equal measures of hope and suspicion.” Unfortunately, it dedicates far more space to the latter. It serves only to hamstring Utah’s pursuit of an affordable, reliable, and clean future with its embarrassing lack of awareness and thought.
The editorial’s first and chief sin is its suggestion that Gov. Cox is unaware of the targeted renewable project cancellations by President Trump. Incredibly, as evidence, The Tribune links to a CNN report about a canceled solar project in which Gov. Cox is directly referenced defending that very solar farm. In fact, Cox took to Twitter declaring, “This is how we lose the AI/energy arms race with China.” Print readers will be unaware of The Tribune’s poor reading of its own sources.
The editorial’s core stance — the perpetual “just asking questions” approach to nuclear power — is a classic example of the nirvana fallacy. It criticizes a pathway because it is imperfect while ignoring the ongoing problems of the status quo.
Disappointing again is The Tribune’s attempt to link modern nuclear energy to the painful legacy of the downwinders and nuclear weapons. This comparison is not merely flawed; it is a profound failure of context. It conflates the Cold War-era development of nuclear weapons with the highly regulated, safety-focused generation of nuclear power. It is akin to criticizing butter knives because their steel can be melted down into bayonets. Our energy future deserves better than such fear-mongering.
The issue of nuclear waste is similarly distorted in the editorial. To put waste in perspective: the entirety of a single person’s lifetime energy consumption, if provided by nuclear power, results in waste about the size of a coffee cup. All the commercial nuclear waste since 1950, the Department of Energy wrote during President Biden’s administration, could fit on a football field and not reach the 10-yard line. Furthermore, advanced reactor designs and recycling technologies are rapidly transforming waste from a storage problem into a valuable fuel source.
We need vast power to feed a rapidly electrifying world alongside new American industries. Every Utahn should embrace the proactive optimism embodied in efforts like the prototyping and testing of 11 reactors just up the road at the Idaho National Laboratory, and the ways that Utah is leading the next stage of nuclear’s American story. The Tribune should join in — rather than continue to jeer from the sidelines.
At the very least, the paper should read the sources it cites and take its own advice when insisting, as the editorial does, “We need to go fully trust-but-verify.”
I get Australia’s social media ban. They still did it wrong.
Nothing matters more than protecting kids. Australia just used this powerful argument to ban social media for everyone under 16, becoming the first nation to do so.
The move from Down Under is being widely celebrated here in America, especially by parents, family advocates and politicians. I worry, however, that they’ve forgotten a piece of wisdom every kid knows: Two wrongs don’t make a right.
I say this as a parent of seven – yes, seven – children, ages 10 and under. My wife and I have zero intention of letting them use social media.
We’re equally committed to keeping cell phones out of their hands as long as we can, probably until they go off to college. We believe that much of what is on the internet can steal a childhood, introducing kids to things that no kid should see.
The dangers are real and well-documented. We’re absolutely going to keep our kids safe.
But I’m also a big believer in technology’s incredible potential. I’m also a big doubter of the government’s ability to solve problems with heavy-handed mandates and one-size-fits-all rules.
Taking away people’s choices is a slippery slope
It’s profoundly dangerous to let politicians replace parents as decision-makers. It’s also deeply harmful to push an entire category of technology out of bounds.
Last I checked, politicians have a terrible track record of recognizing what’s helpful or harmful – much less right or wrong.
I admit, this is a countercultural argument. My sense is that most Americans think you can be pro-family or pro-technology, but definitely not both.
That’s a false choice. If you’re truly pro-family, you should promote each family’s ability to make its own choices, including with technology.
The last resort should be taking freedom away from parents. That’s what Australia is doing with social media: restricting not only children’s freedom but also parents’ freedom.
That’s what American states have done with cell phone bans in schools. You can bet that social media and artificial intelligence bans are coming next.
This is the definition of the nanny-statism that the right used to abhor. Technology, properly monitored, could be beneficial for kids in a huge number of circumstances.
Cell phones and screens, properly managed, can help kids learn in school. In an increasingly technological world, the prudent use of technology will have significant benefits.
Families should be free to decide when the benefits outweigh the risks. They shouldn’t have their freedom stripped away in the name of safety.
There are better ways to combat abuses
Will some families still use too much social media? Of course. Yet instead of restricting freedom to prevent any bad outcome, politicians should do everything in their power to help Americans make the best choices.
The real solution is empowering families through public awareness campaigns. In recent decades, such campaigns have all but solved the crisis of teen smoking.
There’s a similar effort to help parents get the right car seats for their kids, tackling the problem of children dying in car accidents. That’s a better solution than banning infants from being in cars.
My home state of Utah already has a public awareness campaign to help parents recognize the dangers of social media. Now is the time to double down on that effort.
Instead, political leaders on the right and the left are saying that America should follow Australia’s lead, and pro-family voices across the country are echoing them.
Passing bans just puts government, not parents, in the driver’s seat. Surely, it would be better to give parents more resources and support to keep their kids on the straight and narrow.
State-by-state rules risk crushing startups and ceding AI leadership to big tech.
A crisis is looming over artificial intelligence, but it’s not what you think.
While politicians fret about AI’s impact on jobs and deepfake videos, state laws are strangling the small businesses trying to harness this technology for Americans’ benefit. They need uniform nationwide standards, but instead, they face a costly patchwork of conflicting and innovation-killing rules.
This doesn’t just prevent the creation of AI that serves families and job creators. It directly benefits the biggest tech companies at the expense of their small competitors, while threatening American leadership in a field where the U.S. has to win. Only D.C. can end this crisis before it gets worse.
Thankfully, President Trump just took the first big step. On December 11th, he signed an executive order that seeks to block state laws that stifle AI. Under this policy, the administration can now sue states that have gone too far. But even more important, the President pledged to work with Congress to develop national AI standards as soon as possible. While Congress has failed to make progress, most recently abandoning reforms in its defense spending bill this month, the President’s executive order adds a new sense of urgency to passing strong national standards. Congress should aim to pass a law early in 2026.
The sooner, the better, because states have passed over 160 AI laws to date, with more on the way. Colorado has passed a law that would require companies developing and deploying AI to file “high risk reports” — a heavy-handed measure that lawmakers admit will hurt small businesses. A bill is sitting on New York Gov. Kathy Hochul’s desk that would apply a massive regulatory regime to AI companies in the name of safety. California Gov. Gavin Newsom just signed a similar bill into law, along with 17 others.
The supporters of these laws say they reflect states’ role as laboratories of democracy. AI’s detractors say that state protections are necessary to keep such a dangerous technology in check. Yet so many state laws make it significantly harder for smaller AI companies to compete and introduce products that improve Americans’ lives.
Take Phil Salesses, who co-founded MoveAI. The company uses AI to help people easily find and hire services when switching apartments or homes. It works across state lines, so Phil has seen how different state laws “make our nationwide operation harder.” He’s forced to spend money on state-by-state compliance — money that would be better spent helping the business meet more movers’ needs. He says he needs “straightforward, uniform policies that allow small businesses like ours to compete nationwide.”
Or take Aidan Chau, who founded a company called Maple that develops AI phone answering services for restaurants. The tool frees workers to focus on cooking and serving customers, leading to stronger restaurants that create more jobs. He says that New York’s pending law would give him less access to the large language models the company needs. If it becomes law, startups like his may move to another state for their businesses’ survival. But what if that new state tries the same trick down the line?
Thousands of entrepreneurs and start-up founders face similar concerns, but these are also early days. The regulatory burden is set to grow worse, with most states considering multiple proposals. Next year alone, over a dozen states may pass vague algorithm anti-discrimination laws, each of which will be interpreted and applied differently.
Good luck to small AI businesses trying to follow so many different rules. Their compliance budgets will skyrocket, taking precious money from innovation and causing many to fail in time. Meanwhile, the big tech companies that Americans trust least will cement their dominance. They’re the only ones with budgets big enough to survive this death by 50 state regulatory cuts.
A recipe for American AI leadership, this is not. If Congress doesn’t pass national standards, countries like China could gain an unbeatable advantage. But history shows a better path. The U.S. led the internet revolution in large part because federal lawmakers blocked state action in the 1990s. The result has been massive innovation and economic growth, creating trillions of dollars in wealth and lifting every American’s standard of living. Artificial intelligence could be an even bigger boon, defying critics’ predictions about economic and social collapse, while extending America’s global lead.
But that surely won’t happen if places like Albany, Sacramento, and Denver strangle AI with their tribal agendas. President Trump has taken the first step. Now it’s up to Congress to create the nationwide standards that will unleash AI’s potential — benefitting families and job creators while firmly keeping the U.S. in the lead.
State governments are moving at breakneck speed crafting policy on artificial intelligence. In just two years, lawmakers have passed dozens of bills targeting deepfakes in campaigns, shielding citizens from abusive synthetic media, and creating rules for high-risk applications. In 2025 alone, over 1,000 AI-related bills were introduced across the states.
For most Americans, it is assumed that the freedom to access and use computing power, the very foundation of modern innovation, is secure. Yet in practice, that freedom is under threat. From California to New York, legislatures and governors are chipping away at this liberty, treating computation itself as something the public must be shielded from rather than empowered by. This is not a small matter: it strikes at a core pillar of the American experiment—our ability to think, invent, and build with the tools of the age.
Montana charted a different course. In spring 2025, it became the first jurisdiction in the world to enact a right to compute: a statutory guarantee that individuals and organizations can own and use computational resources unless the government can demonstrate that restrictions are narrowly tailored to achieve a compelling interest. This simple but profound step filled a glaring gap in state, and even global, AI lawmaking.
Montana’s Right to Compute Act, signed in April 2025 after strong bipartisan votes, creates a clear default of freedom for its citizens: government actions that would restrict lawful use or ownership of “computational resources”—hardware, software, algorithms, cryptography, machine learning, networks, even quantum applications—must be narrowly tailored and demonstrably necessary to serve a compelling government interest. That language is not rhetoric; it’s the operative standard, and the statute provides practical definitions that will help agencies, courts, and businesses apply it.
Montana pairs this rights‑affirming law with targeted safety measures for critical infrastructure. If an AI system helps operate a critical facility, the deployer must maintain a reasonable risk‑management policy that references widely recognized standards—explicitly including the NIST AI Risk Management Framework (AI RMF) or comparable international frameworks. This is governance that adapts as best practices evolve, instead of freezing technology in statute.
Why Government Should Protect Computational Liberty
This raises the question: why is explicit legal protection for computational rights necessary now? Americans have, after all, been using computers for decades without a specific “right to compute” enshrined in law. The answer lies in the changing global and domestic regulatory landscape. A computer, like the abacus and slide rule before it, is simply a technological amplification of human cognition. In the 21st century, access to computational resources increasingly determines who can participate fully in economic, civic, and intellectual life. Computers enable economic growth and an improved quality of life that benefits all Americans. Most of all, the computer represents opportunity.
As computers become more intertwined with daily life, computational resources and access are increasingly subject to government restrictions, often based on how much processing power a system uses, what tasks it performs, or who is using it. Montana’s approach is rooted in a deeper philosophical principle: computational freedom is not a privilege to be granted by the government but a natural extension of rights we already possess, one that the government should protect.
This isn’t merely abstract philosophy. We’ve already seen how governments can abuse control over computational resources. In the UK, the government requires identification before citizens can access portions of the internet and is now implementing a digital ID system. China’s government imposes even stricter requirements on its citizens’ ability to access the internet. Similar ideas have been proposed in the U.S. that would require verification before citizens can access app stores or even purchase a smartphone. President Biden’s Executive Order 14110 imposed regulations on AI development based on arbitrary computational thresholds, modeled on the European Union’s AI Act. Fortunately, President Trump nullified that executive order. All of these approaches, and similar ones that could easily be proposed in the future, give regulatory agencies sweeping discretion to determine who may access computational power and under what conditions. A right-to-compute law provides a firewall against this kind of creeping technocratic control.
Why other states should adopt a Right to Compute
First, it keeps the focus on bad conduct, not tools. State laws already prohibit almost all harmful uses of AI without outlawing general‑purpose computing. A right to compute complements current law by clarifying that open‑ended innovation remains presumptively lawful, while fraud, deception, and harassment remain illegal. It is a freedom-preserving measure for all citizens of the state, providing individuals with a defensive mechanism against government overreach.
Second, it opens the door for builders. Entrepreneurs, universities, and small firms need assurance that new code, chips, and models won’t be preemptively banned just because they’re new or particularly powerful. A clear statutory presumption in favor of lawful compute lowers the “unknown unknowns” that can chase investment away from emerging tech hubs and university research corridors.
Third, it strengthens economic competitiveness. AI has unleashed a race to expand computing capacity and the infrastructure behind it—power, fiber, data centers, cooling, and skilled labor. States sending a stable, pro‑innovation signal will compete better for the projects, jobs, and grid upgrades that come with this build‑out.
Who’s moving next?
Montana won’t be alone for long. Ohio legislators introduced the Ohio Right to Compute Act this summer, signaling widespread interest in transplanting the same framework—affirm the right, define the terms, and pair it with risk management for AI in critical infrastructure. New Hampshire is considering a right-to-compute constitutional amendment. The American Legislative Exchange Council adopted and released a right-to-compute model bill that closely tracks Montana’s structure, giving states a starting point to adapt to local law.
Despite all the benefits, there are some common critiques of this bold approach.
“Isn’t a right to compute a hands‑off approach to AI?” No. It merely forbids broad, preemptive bans on tools while preserving enforcement against deception, fraud, harassment, IP infringement, and safety risks. Montana’s law even enumerates compelling interests to make that point unmistakable. And where AI touches critical infrastructure, it requires documented risk management tied to national standards. It shifts the burden onto the government to demonstrate that regulation is required.
“Won’t this tie regulators’ hands as AI evolves?” No. It merely puts an additional barrier between government regulation and an individual’s right to use their property. As the Montana bill and model bills stipulate, there must be a compelling government interest, so regulation is still possible when the justification meets that standard. The core rule—punish harmful conduct, not generalized capability—ages better than technical mandates that hard-code today’s assumptions. Americans currently have broad access rights to computers, and that has not prevented law enforcement from prosecuting bad actors who use computers to break the law.
“Isn’t it premature to enshrine legal protections for technology we don’t yet fully understand?” This objection gets the question backwards. The right to compute doesn’t create a new right; it affirms an existing one. Just as the First Amendment protected speech before anyone imagined the internet, and the Fourth Amendment protected privacy before digital communications existed, the right to compute simply legally enshrines the notion that fundamental rights apply to new technologies. The alternative—waiting until we “fully understand” all forms of future computing before protecting access to it—would mean years or decades of regulatory uncertainty that could crush innovation and leave citizens vulnerable to government overreach.
A practical, bipartisan win
Every state wants the jobs, research, and productivity gains unlocked by AI and advanced computing. At the same time, policymakers hear concerns about deception, discrimination, and infrastructure strain. A right to compute resolves that tension with a simple principle: default to freedom for lawful computation, create targeted safeguards when harms are known, and keep enforcement aimed only at bad actors.
Montana’s statute shows it can be done in a few pages. For legislatures that want to compete for entrepreneurs and new technologies in the global marketplace, the right to compute is a natural next step. It tells people everywhere the same thing: build here.
Unwrapping 2025 State AI Policy
On the 12th Day of Christmas…okay, okay we’ll hit pause on singing another Christmas carol and instead take a look back at the year that was in AI policy across the states here at the Abundance Institute.
🎄🎄🎄🎄🎄
The exact number varies depending on what you think counts as “AI legislation,” but even at the lower-bound estimates, hundreds of related bills were raised and considered in statehouses from Hawaii to Maine this year. Our team at the Abundance Institute weighed in on, well, a lot of them.
1. Right to Compute
On the pro-innovation side, Montana passed SB 212, a first-in-the-nation proposal that reminds us that AI, like advanced computing generally, is a general-purpose technology we should be free to enjoy as citizens. The use of this technology isn’t something granted to us by the government, but a freedom to be protected by the government.
This concept captures an Abundance mindset for AI policy very well and has caught on elsewhere: it has been proposed as a constitutional amendment and pre-filed bill for 2026 in New Hampshire, passed as a model bill at the American Legislative Exchange Council, and introduced as a bill in Ohio. Our own Taylor Barkley submitted written testimony on HB 392 in Ohio, published an op-ed in The Columbus Dispatch, and wrote an article with the James Madison Institute.
2. Colorado’s Quagmire
Passed in 2024, SB 205 has yet to be implemented in Colorado. This European Union-style approach to AI governance is heavy-handed and scheduled to take effect after the upcoming legislative session in 2026. When Governor Jared Polis signed the bill into law, he expressed serious reservations—signing it as a tradeoff to move separate legislation—and so have other political, business, and community leaders across the state.
The Abundance Institute has explained throughout the year why this bill will harm innovation and individuals, and it appears other states have listened: no one else has passed the same legislation despite ample opportunities to do so. The Abundance Institute is also right in the middle of trying to improve this looming legislation, with our team collaborating with key experts in the state to find a solution that prevents this Sword of Damocles from falling on Colorado’s economy.
3. Turmoil in Texas
The Lone Star State gained national attention in the AI space earlier this year when HB 1709 was introduced: the Texas Responsible AI Governance Act (TRAIGA). Our own Christopher Koopman penned an op-ed for the Houston Chronicle that stated, “The [bill] would create some of the strictest artificial intelligence regulations in the country, echoing recent California bills the legislature wouldn’t pass and Gavin Newsom wouldn’t sign because they were too extreme.” After Chris’ op-ed, this legislation was pulled, overhauled, and reintroduced in a different form, which ultimately did pass late in the Texas session as HB 149. The final bill was an improvement over the initial proposal, and efforts to reform it will continue next year while the Texas legislature sits idle.
4. Connecticut Connection
Just as he did last year, State Senator James Maroney introduced SB 2: An Act Concerning Artificial Intelligence. When the General Law Committee held a hearing on the bill in February, it kicked off with Connecticut DECD Commissioner Daniel O’Keefe making the case against this preemptive regulatory approach. The discussion was excellent and continued through Neil Chilson’s virtual testimony and Q&A with committee members. The bill, same as last year, was not taken up in the House, and we are hopeful that Sen. Maroney, one of the most well-versed state policymakers, will adopt a more Abundance mindset on AI in 2026!
5. California Conundrum
There were over 40 AI bills introduced in California alone in 2025! Neil Chilson and Taylor Barkley submitted written testimony on AB 1018 and SB 813, highlighting just a couple of instances where the Abundance Institute weighed in on AI governance in The Golden State. Governor Gavin Newsom vetoed a handful of notable AI bills in 2025, just as he vetoed SB 1047 last year, but other bills like SB 53 and SB 243 were signed into law. Neil Chilson has offered a series of reforms to SB 53 that would improve the legislation and should be considered by California policymakers next year.
Neil also submitted a comment to the California Privacy Protection Agency regarding proposed regulations governing Automated Decision-Making Technology (ADMT) under the California Consumer Privacy Act (CCPA). Neil Chilson’s comment highlighted how the CPPA’s proposed changes were overly burdensome and costly, with minimal demonstrated consumer benefit. They risked exceeding the CPPA’s legal authority, infringing on First Amendment rights, and transforming the CCPA from a privacy law into a de facto AI regulation regime.
6. Neverending in New York
Policymakers in Albany must have been fed up with all the attention Sacramento was getting, as New York managed to propose even more bills regulating AI! Abundance Institute provided real-time analysis on several proposals, including A8884: The NY AI Act, which ultimately failed to pass. Neil Chilson joined a letter expressing concerns about the impact of State Assemblymember Alex Bores’ bill A6453: The RAISE Act, which helped ensure positive reforms were made before it passed the legislature and was signed into law by Governor Kathy Hochul on Friday.
7. Veto in Virginia
One of the more notable bills that made it to a governor’s desk this year was HB 2094 in Virginia, introduced by State Delegate Michelle Lopes Maldonado. Our own Christopher Koopman offered a critique of the proposal and argued that, “America’s great advantage—its gift, really—has been that it does not regulate the future before it arrives. It allows new ideas to take shape, to be tested, to flourish or fail…But a regulatory fever is spreading.”
Governor Glenn Youngkin’s team did their homework on this legislation and made the decision to veto it. Gov. Youngkin’s veto explanation is the mindset governors across the country should have on AI policy, and will hopefully be shared by the incoming administration led by Governor-elect Abigail Spanberger.
8. Florida Frenzy
Florida kicked off the legislative session with the introduction of State Representative Fiona McFarland’s HB 369: Provenance of Digital Content, alongside a myriad of other proposals regulating AI. Abundance Institute’s pro-innovation approach was shared with Rep. McFarland and other legislative leaders as they considered the tradeoffs of new regulations in The Sunshine State.
9. Nebraska Notions
The nation’s only unicameral legislature makes for an interesting policymaking process, and our own Taylor Barkley had the chance to witness it firsthand as he testified (22:10 mark) on LB 642 earlier this year in Lincoln. Taylor’s testimony stated that, “We see two fundamental issues with the AICPA as drafted. First, the legislation is unnecessary…Second, the legislation is technically infeasible.” The Judiciary Committee and bill sponsor State Senator Eliot Bostar took his insights seriously as the bill failed to move forward.
10. Iowa Ideas
Much as in Nebraska, legislators in Des Moines considered HSB 294: An Act Relating to Artificial Intelligence. We were happy to see that the bill sponsor, State Representative Ray Sorensen, opted to hit pause on the bill in 2025 and is looking to find an alternative path for The Hawkeye State during next year’s session.
11. AI Infrastructure
As demand for daily use of AI increases, states are looking for solutions on how to build out data centers. Josh Smith teamed up with Turner Loesel from the James Madison Institute to publish Digital Foundations: The Essential Guide to Data Centers and Their Growth, which provides an overview of data centers, why they are important for the economy today and into the future, and how to address concerns over scarce water and energy resources. The Abundance Institute has also provided states with a roadmap to an abundant energy future through nuclear energy, improved interconnections, and a build-what-works mindset. We also made the case for why data centers can be the economic foundation for an evolving economy, particularly in rural America.
12. State Preemption
While working at the state level to help bring about better outcomes from innovation in the AI space, the Abundance Institute has also been working with Congress to ensure an overly burdensome patchwork of state regulations doesn’t stymie this inherently interstate technology. This concept has been raised in both the One Big Beautiful Bill Act (OBBBA) and the National Defense Authorization Act (NDAA). Although ultimately not included in either bill as passed, the concept is likely to be considered in future legislation, as requested by President Trump in Section 7 of his executive order, “Ensuring a National Policy Framework for Artificial Intelligence.” Read a short summary from Neil here.
Over the summer we worked with state level partners from across the country to send a coalition letter in support of a federal preemption. This effort brought together a great group of like-minded organizations from sea to shining sea and will help support any future preemption proposals. Much like the Internet Tax Freedom Act of 1998, Congress should be setting the rules of the road for interstate tools such as AI to ensure effective competition and efficient markets develop.
This list could go on and on, with an even longer list of thank-yous to individuals and other organizations who helped drive our ideas forward across the country, but I think this offers a good look back on some of the notable moments we had in state AI policy in 2025. The year ahead shows no sign of slowing down, and we will continue to be a voice for the innovators of tomorrow.
Neil Chilson on Federal vs. State Regulation of Artificial Intelligence
Our Neil Chilson joined C-SPAN Washington Journal to talk about President Trump’s executive order on artificial intelligence regulations. Read his work explaining how the executive order works here.
To unleash the full potential of American energy, we must prioritize certainty and stability in our regulatory framework. Measures that simplify processes and regulatory requirements, and that generally make permitting fast, predictable, and fair, are vital for American energy abundance and affordability.
Today’s permitting reform conversations around the National Environmental Policy Act (NEPA) and the Clean Water Act (CWA) represent promising steps. Measures like the Standardizing Permitting and Expediting Economic Development (SPEED) Act and the Promoting Efficient Review for Modern Infrastructure Today (PERMIT) Act represent smart updates to environmental laws first passed in the 1970s.
Industry leaders are already sounding the alarm on the dangers of political whims determining energy investment decisions. Shell is the largest oil producer in the Gulf of America, yet the company explicitly warned that the cancellation of offshore wind projects sets a dangerous precedent, fearing these actions will serve as a pretext for future administrations to target traditional energy projects. This concern is echoed by broad industry voices. On December 3, the American Petroleum Institute, several gas trade associations, and the American Clean Power Association signed a joint letter on the need for certainty and endorsing the SPEED Act.
We have seen this movie before: from the Biden Administration’s pause on LNG export permits and the revocation of the Keystone XL permit to the targeting of offshore wind. This cycle of retribution, canceling and restarting permits based on who is in the White House, benefits no one. By establishing durable, neutral permitting reform, we can stop the political pendulum and give American innovators the stability they need to invest in and power our future.
True energy dominance requires an all-of-the-above approach where the market—not government favoritism—picks winners and losers. Updates to public policy through the One Big Beautiful Bill Act helped remove subsidies that distort the market. Consumers always benefit when they are in the driver’s seat, rather than having their energy choices dictated by who is best connected to the current administration. Permitting reforms that prevent the weaponization of the regulatory and permitting process are the natural successor to promote consumer choice and energy abundance.
Without permitting reform, there are still thumbs on the scale driving and destroying energy development outside of the market. We cannot afford a system where energy policy swings violently with every change in administration, creating a whipsaw effect that chills investment across the board.
By clearing the bureaucratic path for builders, we unlock a future defined by energy abundance and environmental progress. A fair, fast, market-driven regulatory landscape ensures that American innovation makes us all wealthier.
Sets the policy of the U.S. to “sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” (Much of the order applies only to state AI laws that violate this policy. I’ll call them “conflicting State AI laws” for short.)
Creates a Dept. of Justice Task Force to challenge conflicting State AI laws.
Tasks the Dept. of Commerce with identifying existing conflicting State AI laws and publishing a report.
Requires Commerce and agencies to make certain kinds of funding conditional on whether the state has or enforces conflicting AI laws.
Directs the Federal Communications Commission to begin a proceeding on whether it should require AI model reporting that preempts conflicting State AI laws.
Requires the Federal Trade Commission to issue a policy statement detailing when conflicting State AI laws are preempted by the FTC Act’s prohibition on deceptive acts or practices.
Directs presidential advisors to prepare draft federal AI legislation that preempts conflicting State AI laws, with no preemption for four buckets of state laws, including “child safety protections.”
FIRST THINGS FIRST: If you are reporting on the EO or arguing about it online, I implore you to READ IT YOURSELF. It’s only 1400 words, and it’s clearly written with little legal jargon. You’ll save yourself the potential embarrassment of repeating incorrect talking points from people who are misrepresenting it out of ignorance or malice.
But there are some things that might not be obvious to everyone from reading it. Here are my key takeaways, Section by Section. You should think of these as the key things people might fight over in the EO, or key things they might ignore as inconvenient to their position.
SEC. 1 — PURPOSE
What it does: Sets forth the purpose of the EO.
What you should know: This is important: The President clearly intends the EO to serve as a stopgap against the worst state AI laws until Congress does the necessary work of establishing a minimally burdensome national standard that protects kids, prevents censorship, respects copyrights, and safeguards communities. This is not a permanent “fix.” Importantly, the EO CREATES NO NEW PREEMPTION. The EO clearly and properly recognizes that the executive branch cannot do that. The statements in the video above as well as the text of the EO drive home that Congress must act. And when it does act, Congress must preserve an important role for states, while recognizing that the federal government must lead on this nationally important technology.
SEC. 2 — POLICY
What it does: The EO sets as the policy of the United States “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”
What you should know: This is based.
SEC. 3 — CREATION OF A DOJ AI LITIGATION TASK FORCE
What it does: Establishes a Task Force at the DOJ with the sole purpose of suing states over conflicting AI laws.
What you should know: Two observations. First, DOJ doesn’t need an EO to challenge illegal and unconstitutional state laws. It has that authority now. But this creates an institutional structure that will be held responsible for doing so.
Second, as I already noted, this does not ban any state laws that were legal before the EO was signed. It doesn’t preempt state laws. DOJ will need to persuade a court that every law challenged is unlawful under current law.
Of course, litigation imposes costs on defendants even if they win, so this task force could have an overall chilling effect on state AI legislation. That’s the point. Up until now, states faced little cost of any kind for imposing vague, unworkable, and extraterritorial restrictions on AI developers, deployers, and users. Now they’ll at least have a reason to think twice.
SEC. 4 — EVALUATION OF STATE AI LAWS
What it does: The Secretary of Commerce must publish an evaluation of existing conflicting State AI laws, and identify which laws should be referred to the Section 3 Task Force.
What you should know: The EO singles out for scrutiny laws that implicate speech, including those that “require AI models to alter their truthful outputs” or those that violate the Constitution by requiring disclosures or reports by AI developers or deployers. First Amendment lawyers, start your engines — there are really interesting questions here.
Echoing past language from various congressional measures on preemption, the Secretary is also permitted to “identify State laws that promote AI innovation…” This could inform the Sec. 8(b)(iv) “other topics” carveouts from preemption that will be in the White House’s recommended legislation.
I suspect there are going to be a lot of state Governors and other stakeholders seeking to meet with the relevant Commerce staff to lobby for their various state laws. I can already imagine some of the arguments they’ll make.
SEC. 5 — RESTRICTIONS ON STATE FUNDING
What it does: Substantively, this is the most complex requirement of the EO. It obligates Commerce to issue a Policy Notice specifying that states with “onerous AI laws” as identified in the Sec. 4 report discussed above or challenged by the Sec. 3 Task Force “are ineligible for non-deployment funds” from the Broadband Equity Access and Deployment (BEAD) Program, “to the maximum extent allowed by Federal law.” This section also directs other “executive departments and agencies” to determine whether they can condition any discretionary grants on states not passing or enforcing conflicting AI laws.
What you should know: I am not sure how large a bucket of BEAD money this involves (one of my telecom law buddies probably knows) or to what extent federal law would permit these kinds of conditions. However, this does strike me as one of the more legally risky areas of the EO, because there are large private telecommunications companies that would be receiving this money from the states and that may have the incentive and means to sue to challenge any such conditions, if they are applied aggressively. As for the other agencies’ $$$, it is even less clear how much money this affects — it’s the sort of thing that probably would be hard for even the White House to determine independently. That’s why the agencies are tasked with it. But I suspect this isn’t a massive amount of money. On top of that, most agencies probably don’t want to mess with their existing programs and may resent this extra work. Institutional incentives lean toward agencies minimizing the amount affected. As such, I expect this to be a relatively low-impact provision.
SEC. 6 — PREEMPTIVE FEDERAL REPORTING REQUIREMENT
What it does: This section requires the Federal Communications Commission to start a proceeding asking whether it should adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.
What you should know: Note that this doesn’t require the FCC to actually adopt such a provision. It requires what is known as a “Notice of Inquiry” or “NOI”, which is what agencies sometimes do before they start a rulemaking to ask whether they actually should start a rulemaking.
My own initial view is that the FCC would be a strange place to house such AI reporting and disclosure requirements, and I have questions about the FCC’s legal authority to do this. But I look forward to digging in and commenting on the forthcoming NOI.
SEC. 7 — FTC UDAP PREEMPTION
What it does: This section directs the Federal Trade Commission to issue a policy statement identifying situations in which a State requirement to “alter[] truthful outputs of AI models” is preempted by the FTC Act Section 5’s “Unfair and Deceptive Acts or Practices” (UDAP) authority.
What you should know: This section is fascinating to me because during my time at the Federal Trade Commission I dealt with many dozens of cases involving the FTC’s UDAP authority. I have never seen it applied like this, but it doesn’t strike me as obviously wrong. The theory seems to be that if a State law requires a company to lie, but Section 5 prohibits a company from lying, those laws are in direct conflict and therefore Section 5 preempts the law. I guess this Policy Statement would be used in court by companies defending themselves against such laws?
I want to think more about this, including what it could mean for other state laws that arguably require “lying.” For example, California’s cancer labeling requirement probably wouldn’t be substantiated under typical FTC standards. There are a bunch of green labeling / environmental disclosure requirements that similarly probably require companies to bend the truth, or at least not fully represent its nuance.
Also, the deception statement doesn’t mean every false statement is a violation. To be deceptive under section 5 a false statement has to be material to a customer, meaning they would have acted differently if told the truth. Does that mean the state AI law is only preempted where the required deception would be material?
Anyhow, very early thoughts on this — I will be writing more.
SEC. 8. — LEGISLATIVE RECOMMENDATION
What it does: Consistent with the stop-gap nature of the EO, this section jointly tasks the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology with preparing a legislative recommendation “establishing a uniform Federal policy framework” that preempts conflicting state AI laws.
What you should know: New to this version (it wasn’t in the draft) are four areas carved out from any preemption the recommendation may propose. The recommendation will not include preemption of:
child safety protections;
AI compute and data center infrastructure “other than generally applicable permitting reforms”;
State government procurement and use of AI; and
“other topics as shall be determined”
That last bucket could include state AI laws that promote AI development or deployment. The second, infrastructure carveout is also interesting: it appears to preserve the legislative draft’s ability to recommend preempting certain state permitting practices.
These carveouts make crystal clear what supporters of various measures to contain state laws, all the way back to the July moratorium fight, had attempted to explain: there are definitely areas where states have an important role and should not be preempted.
SEC. 9 — GENERAL PROVISIONS
This is just the usual Executive Order boilerplate.
FINAL THOUGHTS
This EO is not a silver bullet, and it doesn’t pretend to be one. It does not magically wipe away state AI laws, nor could it. What it does instead is more subtle and more realistic. It raises the cost of the worst forms of state AI regulation, creates institutional pressure to test their legality, and clearly signals that the status quo of fifty competing AI regimes is unacceptable for a technology that operates at national and global scale.
Most importantly, it frames the executive branch’s role correctly: as a bridge to legislation, not a substitute for it. The hard work now shifts to Congress, where the real question is not whether there should be a national AI framework, but how it can be drawn to ensure continued American AI dominance, including by preempting overreaching state laws while preserving state authority where it makes sense.
In that sense, the EO succeeds if it does one thing above all else: it forces the debate out of abstraction and into concrete legal, institutional, and political tradeoffs. That debate is long overdue.
Watch Neil explain how the AI executive order works and the importance of a federal framework on C-SPAN here.
On Thursday night President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” Here are signing comments by Trump and commentary and explanation from Crypto and AI Czar David Sacks, who drove this effort.
Jared Lambert
GovTech Fellow
[Jared Lambert](https://jared.lmbrt.net) is the resident software engineer at the Abundance Institute. He is building applications that showcase the immense positive impact that technological progress will bring to the world.
Jared specializes in AI-enhanced development, and he has spent thousands of hours using LLMs to rapidly deploy software in a variety of fields. He attended the Utah AI Summit as a technical expert, and has helped multiple companies implement automation and improve productivity using AI.
Legislating Child Safety Online: A Review of the House E&C Subcommittee’s Proposals
The U.S. House Energy and Commerce Subcommittee on Commerce, Manufacturing, and Trade recently held a hearing, “Legislative Solutions to Protect Children and Teens Online,” in which the subcommittee considered 19 pieces of legislation. Over the coming months the subcommittee and full committee will be considering these measures. Here, we provide our analysis of 8 of those proposals. We focused our analysis on the drafts that, if enacted in their current form, would most affect the future of computing and artificial intelligence. For further reading, our principles for protecting kids and innovation can be found here.
At the outset, it is worth noting that a variety of these proposals segment online users by age. Whether or not the requirement to verify user age is explicit, services are likely to do so in order to avoid legal liability. Such requirements, whether implicit or explicit, would gate access to computing and free expression for all Americans and cause a variety of inherent security concerns that we explore below.
The SAFE BOTS Act is the only bill in the proposed package that specifically regulates minors’ use of AI tools. The discussion draft proposes to govern certain actions by chatbots for users under 17 years of age. Key requirements include prohibiting chatbots from claiming to be licensed professionals (unless true), mandating they identify as chatbots when prompted, providing suicide prevention resources if prompted, and advising users to take a break after three hours of continuous use. A chatbot provider would be required to have policies on how it addresses topics such as “sexual material harmful to minors,” gambling and “the distribution, sale, or use of illegal drugs, tobacco products, or alcohol” with users under 17. The proposal would preempt state laws if they cover these matters. It would also commission a study on risks and benefits of chatbots to youth mental health. The proposal clarifies that nothing within it may be construed to force the chatbot provider to collect personal information about the age of a user that it’s not already collecting.
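To make these duties concrete, here is a minimal sketch of what a compliance layer might look like. Everything in it is our own illustrative assumption rather than statutory language: real systems would use classifiers instead of string matching, and the trigger phrases, names, and thresholds are invented for the example.

```python
import time

# Illustrative guardrail wrapper for the SAFE BOTS Act's four core duties.
# All names, phrases, and thresholds are hypothetical, not from the draft.

BREAK_AFTER_SECONDS = 3 * 60 * 60  # "three hours of continuous use"
CRISIS_RESOURCES = "If you are in crisis, call or text 988 (U.S. lifeline)."
DISCLOSURE = "I am an AI chatbot, not a human."

class GuardedChatbot:
    def __init__(self, model):
        self.model = model  # any callable mapping a prompt to a reply
        self.session_start = time.monotonic()

    def reply(self, prompt: str) -> str:
        text = prompt.lower()
        # Duty: identify as a chatbot when prompted.
        if "are you an ai" in text or "are you a real person" in text:
            return DISCLOSURE
        # Duty: provide suicide prevention resources if prompted.
        if "suicide" in text or "self-harm" in text:
            return CRISIS_RESOURCES
        answer = self.model(prompt)
        # Duty: never claim to be a licensed professional (a real system
        # would need far more than a string check; this is a placeholder).
        if "i am a licensed" in answer.lower():
            answer = DISCLOSURE
        # Duty: advise a break after three hours of continuous use.
        if time.monotonic() - self.session_start > BREAK_AFTER_SECONDS:
            answer += "\n\nYou have been chatting for a while; consider a break."
        return answer
```

Notice what the sketch cannot contain: any reliable way of knowing that the user is under 17 in the first place, which is exactly the gap discussed below.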
Notably, most leading consumer AI companies have already implemented the features this draft would require. For example, Character.ai recently adjusted its service to reduce the daily time limit for users under 18 from two hours to one hour—stricter than the three-hour limit proposed in this draft. Character.ai and OpenAI have also begun deploying age assurance technology that enhances model safety protocols if, based on user prompts, the technology determines the user is a minor. Voluntary adoption and deployment of any age assurance system, including age verification, is fully within the rights of the company and not a violation of Americans’ civil liberties. However, all age verification systems—even industry-led requirements—can come with serious security and privacy risks.
Crucially, this discussion draft is missing a mechanism or standard for how AI companies should determine whether or not a user is under 17. Should this draft—or any bill that imposes tailored requirements for minors—become law, platforms large and small would need to develop robust mechanisms to comply. Without clarification on which services need to comply, the current language could have a profound effect on AI access for all Americans. Compliance hinges on whether a chatbot is “incidental” to the primary purpose of the service, as defined in Subsection K(3)(B). It is possible that AI chat tools could not be integrated into mundane software, like word processors, without needing to follow the regulations in this draft. For example, is Microsoft’s Copilot truly incidental, or is it a core feature of the software? Currently, Copilot is the advertised feature for all individual and business Office 365 packages. Under a more liberal reading, Meta’s AI chat features would not be implicated, as an argument could be made that those are incidental to the app’s social media service. Either way, there is a risk of litigation. Therefore, to avoid potential litigation, a platform is likely to simply abandon helpful AI chat services, which could have a profound impact on usability and productivity. This means computing as we know it would remain the status quo, rather than becoming a supercharged productivity, education, and entertainment tool.
The disclosure requirements offer potential benefits. However, more research on effectiveness is necessary, and the evidence we do have is mixed, according to a study on AI labels at the NYU Center on Tech Policy. The required policy might also be duplicative of standard industry practice, because most services already disclose that the tool is an AI system, either at sign-up or in a persistent on-screen notice. The draft likely aims to address shortcomings seen in high-profile cases with older AI models, where the system refused to acknowledge it was an AI, typically as part of playing a character. It is unclear and too early to say whether a law requiring disclosure at all times is necessary. The upside is a common standard that could prove a helpful feature if users get too wrapped up in the tool. What is unknown is how helpful that is to those users. The downside would mostly be in the entertainment context. It is likely that the majority of users don’t lose touch with reality in those contexts. Like getting lost in a movie or fantasy novel, there could be value, and even a right, especially for adults, in having access to a bot that is not required to say it’s an AI when prompted. Finally, it is not predetermined that societal and cultural norms won’t adapt to putting AI systems in their appropriate place. In other words, users won’t need disclosures because they will just know they’re not talking to a person, much like norms have adapted to the point where most people know the special effects in a film are not real.
Another provision risks stifling tools for the very people who need them most. Section 2(a) stipulates that “A chatbot provider may not provide to a covered user a chatbot that states to the covered user that the chatbot is a licensed professional (unless such statement is true).” Any AI tool that offers “therapy” or “mental health” assistance could run afoul of this law. The draft language does leave open the possibility for an AI tool to become certified, but that comes at the cost of scarcer and more expensive access. As Taylor Barkley has written elsewhere, there are profound mental health needs, particularly for teens, where AI therapy tools can be helpful. There are also better policy models, as exemplified in Utah, that don’t involve bans.
Finally, the draft’s proposed study is a welcome inclusion that would serve as a valuable resource to policymakers and industry alongside the breadth of academic, industry, and consumer group reports under development. As noted above, there is a profound lack of data about child and teen use of AI systems and the effectiveness of certain policy measures. Ultimately, public policies should be based on evidence and such a study proposed here could provide much of that data.
This proposal would direct the Secretary of Commerce to establish a body that would coordinate among relevant federal agencies and stakeholders to identify risks and benefits for minors online. The Partnership would publish a regular report about its findings on these topics and how online services offer protections for minors and tools for parents. It would also have to publish a “playbook” for online services to help them to implement the “widely accepted or evidence-based best practices” with regard to age assurance, “design features, parental tools, and default privacy and account settings.” The Partnership would sunset after five years.
In its current version, the bill could provide helpful information to stakeholders and industry. However, it would benefit from a few tweaks. Although artificial intelligence (AI) tools are part of many of the technologies and platforms named, AI is not specifically named. As children and teens come into frequent contact with AI systems, the proposed Partnership should examine the benefits and risks of those technologies too. An additional edit should be made to the framing of these technologies. Although there are nods to “benefits” in the discussion text and in related press releases, it is not apparent that beneficial use cases are a focus of the Partnership. Because there are so many online digital technologies available to minors, the Partnership reports could easily become entirely focused on risk analysis without space or room to present beneficial use cases. This would be a missed opportunity, especially for policymakers, because they must weigh the benefits and risks effectively. The draft could be strengthened by adding a section that directs the partnership to focus on benefits. Finally, it would probably be better for the report to focus on the mentioned “evidence-based best practices” rather than just “widely accepted ones.” Policy recommendations should be grounded in evidence and not just common viewpoints.
This bill would direct the Federal Trade Commission to work with a variety of other partners to establish a public education effort that would promote minors using the internet safely. The group would submit annual reports to Congress summarizing its efforts.
Public education efforts as proposed in this draft are well within the appropriate role of the federal government and policymakers at all levels. The federal government has existing programs such as Know2Protect (from the Department of Homeland Security), which raises awareness and combats online child sexual exploitation, or FBI Safe Online Surfing (SOS), an educational initiative for elementary and middle school students about cyber-safety and digital citizenship. And these are just two of many. Instead, the bill appears to aim for integration and coordination, by making the FTC a “hub” for public-facing online-safety resources: a national front door that can aggregate and promote materials from DHS, the FBI, educational programs, nonprofits, and other stakeholders, while also expanding the lens to include mental-health, content-exposure, and behavioral risks. In doing so, H.R. 6289 could reduce fragmentation in the federal online-safety ecosystem, streamline outreach to parents, educators, and minors, and create a standardized, cross-agency foundation for protecting youth online.
This would direct the Federal Trade Commission (FTC) to work with relevant federal agencies to develop and share resources on the safe use of AI chatbots by minors. Notably, this program would be modeled on the Youville material currently developed and made available by the Commission. As noted above, public awareness and education campaigns like these can provide help to parents, caregivers, educators, and children and teens themselves. The challenge for such an effort would be to stay up to date on a rapidly evolving space. Nonetheless, government educational efforts would serve as a useful supplement to industry and consumer protection efforts.
KOSA applies to websites and apps of all sizes that focus on user-generated content, allow people to create searchable user accounts, and use account holder information to advertise or recommend content to the user. As written, this would require even AllTrails, a variety of not-for-profit online medical forums, and innumerable other small forums to provide a completely new suite of user and parental controls not just for users but also for those without registered accounts. In order to provide parental tools to those who aren’t even registered with the service, such platforms would have to actively track these users, which seems counterproductive for the purpose of protecting privacy online.
The platform would similarly have to provide parents with information about the parental tools required by the law and obtain verifiable parental consent for users and visitors under the age of 13. The bill adopts the same standard for consent that appears in the Children’s Online Privacy Protection Act of 1998. But some of the approved methods under this law are easy to circumvent by users of any age, including making a credit card transaction or calling a phone number.
Moreover, as with any legislation that requires treating different age groups differently online, many platforms will likely pursue more robust age verification methods in order to avoid potential liability, such as having users upload government identification and face scans. This practice has repeatedly led to data breaches, leaving affected people vulnerable to financial fraud and other crimes.
These same platforms would also have to pay tens of thousands of dollars to hire independent auditors. Such costs and regulatory burdens are not feasible for many of the small—even not for profit—forums and other services that would be covered by the law.
This proposal would divide users into different age groups and require that app stores receive consent from parents for their children to download apps or make in-app purchases. Unfortunately, age verification for minors is extremely difficult, verification still comes with security risks, the definition of “parental account” means it’s easy for minors to circumvent parental consent, and the bill applies only to apps and not websites.
The bill relies heavily on segmenting users into different age categories: 18 or older, 16-17, 13-15, or below 13 years of age. The problem is that there is not a reliable method to verify minors’ ages. Age estimation errs by years, minors generally don’t have government photo identification cards, and other identification documents such as birth certificates or Social Security cards lack photos that can be matched to the person in front of the screen (Social Security cards don’t even list a birth date).
There are also more fundamental cybersecurity concerns with age verification. The bill would require that age verification data be protected by limiting its collection and storage to only what is necessary to verify a user’s age, obtain parental consent, and maintain compliance records. It would also mandate that the data be kept safe by using “reasonable” safeguards to secure it, including encryption. The encryption requirement is a welcome provision, but age verification systems don’t always adhere to even their own standards, users cannot know for certain how such data is protected, and the systems can still be hacked and breached. Further, the sensitive information needed to prove age—biometrics, government IDs, etc.—is the same information needed to prove compliance with the law. So although the nods to data minimization are welcome, they don’t solve the concerns here.
It’s also not just age verification databases that can be breached (as mentioned above), but other systems in the age verification process. After Discord implemented age verification to comply with the U.K. Online Safety Act, a breach at its vendor exposed tens of thousands of government IDs. That breach didn’t even hit users of the main age assurance system, but people who were using a backup method when biometric age estimation failed or they otherwise couldn’t use estimation. Those tens of thousands of people will now have to worry about identity theft and bank hacks. That is the scale of harm that can be done when the government requires age verification.
The way the legislation defines “parental account” also underscores the difficulty of verifying the parent-child relationship online. The text only requires that a parental account is established by a user that the app store has determined through age verification is at least 18 and whose account is affiliated with at least one account of a minor. Although few documents are truly useful for the purpose of verifying the parent-child relationship—and these documents don’t include the photo identification necessary to prove the users are the same people in front of the screen—this doesn’t escape the problem that minors can find other adults to allow them access online. It would be easy enough for a child to find an older sibling or other relative to allow them more permissive app access.
Another problem is that this bill applies only to apps and not websites. Minors could still access all the same content and more with web browsers without parental supervision. Although Congress could pass another law applying to websites, users would then need to functionally verify their ages twice for each service—once through app stores for the apps and again through the services directly when using websites. This would further increase security issues with age verification by providing more databases and more opportunities for hacks and breaches. Users frequently access both websites and apps belonging to the same services—consider email providers, social media, and niche services like AllTrails and ZocDoc.
This bill, on the other hand, would require app stores to have users only declare their ages, while noting that age assurance software can be used for this purpose. It would require app stores to provide a user’s parent the ability to prevent their child from downloading or using apps which—whether voluntarily or as required by law—provide different online experiences for minors and adults. App stores would also have to give these apps the ability to prevent minors from downloading or using them.
The legislation does not offer guidance as to how app stores must determine the parent-child relationship, which lends itself to the same problems as in the App Store Accountability Act regarding minors finding an older friend or sibling to confirm their app use. Because users inputting their age without further proof is an acceptable mechanism of proving age, friends could find other friends their own age who simply lied about their age to the app store to help them. However, app stores may opt to implement full age verification and require more documentation to prove the parent-child relationship, which can cause the same security concerns mentioned earlier.
Meanwhile, developers would be required to let app stores know if they provide different experiences for minors than for adults and would have to provide information about online safety settings for parents unless the apps block minors. These developers would also be required to use age assurance—which can include an age signal from the app store—unless the app is required by law to block minors, in which case they would need more robust means to check if adults really are adults. Developers of these apps would also have to “make a reasonable effort” to prevent minors from engaging in activity on the app restricted to adults and obtain consent (it does not specify from whom) before allowing minors to access parts of an app the developer deems “unsuitable for use by Minors without parental guidance or supervision” or content age gated by law.
Oddly, the bill also applies all of its app requirements to website versions of covered apps. If a website that provides different experiences to minors and adults has no app, then the website is exempt. But even applying the bill’s requirements to website versions of covered apps raises some very strange questions. Apps with web versions don’t always exist in all app stores. Some exist in iOS and Android app stores (or just in one or the other), but not in app stores on laptops or on Windows phones. If someone were to access such a website on their laptop or Windows phone, many provisions of the law would not make sense, including all the information they would be required to share with app stores that don’t house the app. There are also a variety of requirements about how app stores must interact and share information with covered apps, and it is unclear whether those provisions also apply to covered websites, especially when accessed on devices with app stores that don’t contain the covered app.
However, the bill also includes some welcome provisions such as prohibiting apps from attempting to figure out a user’s birthday by repeatedly requesting user age from the app store. There is no guarantee that apps won’t still do so, but attempting to prevent the practice is still a good idea. The bill also allows app stores to withhold age signals from developers that don’t adhere to the app store’s policies and safety standards, which is a good step to protect user information. Additionally, the duty is on the apps rather than the app stores to determine whether an app is covered by the bill. App stores don’t necessarily know whether an app provides different experiences for minors and adults, so this makes sense.
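Pulling the bill’s moving parts together, here is a rough sketch of the age-signal handoff it appears to contemplate. The bill prescribes no schema, so every type, field, and function below is our own hypothetical stand-in, not statutory language:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical schema: the bill specifies no data format, so every name
# and field here is an illustrative assumption.

class AgeStatus(Enum):
    MINOR = "minor"
    ADULT = "adult"

@dataclass
class AgeSignal:
    status: AgeStatus     # derived from the user's declared age
    self_declared: bool   # True when nothing beyond a typed-in age exists
    parent_blocked: bool  # parent disallows age-differentiated apps

def store_allows_download(signal: AgeSignal, differentiates_by_age: bool) -> bool:
    """App-store side: honor a parent's block on apps that provide
    different experiences for minors and adults."""
    if signal.status is AgeStatus.ADULT:
        return True
    return not (differentiates_by_age and signal.parent_blocked)

def app_allows_adult_area(signal: AgeSignal) -> bool:
    """Developer side: the 'reasonable effort' to keep minors out of
    adult-only areas, resting on the store's (possibly self-declared) signal."""
    return signal.status is AgeStatus.ADULT
```

Note how little such a flow can guarantee: because the signal may rest on nothing more than a typed-in age, every downstream check inherits that weakness, which is the circumvention problem described above.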
This would change the Children’s Online Privacy Protection Act of 1998 to apply not just to children but also to teens, and not just to websites but also to apps. It also preempts similar laws at the state level. Among other changes, it loosens the knowledge standard depending on company size. The standard for whether a service knows that a user is in fact a child changes from “actual knowledge” to mere “knowledge” for the largest social media companies, while the current actual knowledge standard remains intact for services that generate less than three billion dollars in annual revenue, have fewer than 300 million monthly active users, and don’t focus mainly on user-generated content. Although keeping the actual knowledge standard in most cases is preferable, applying a looser knowledge standard to the top social media companies still raises difficult questions for compliance. The bill defines “knowledge” in such cases as when a platform “willfully disregarded information that would lead a reasonable and prudent person to determine, that a user is a child or teen.” It is unclear what could be used as evidence to that effect under that standard. For example, parents researching toys for children or colleges for their teens may look a lot like kids researching these things for themselves. This “should have known” standard is not workable or predictable.
Additionally, the bill would prohibit a service from cutting off their service to children or teens if a parent or teen requests that their personal information be deleted, so long as the service can be provided without such information. The ways in which user data are necessary for the service to function correctly aren’t always apparent to those using the website. However, proving as much in court is likely to be a burdensome process for these services—particularly small services. It isn’t far-fetched to see a parent requesting that a service delete their child’s information, the service doing so and removing the child from the service, and the service being sued. Indeed, that is what this provision enables.
Conclusion
We share the Energy and Commerce Committee’s goal of ensuring a safe online environment for children and teens. However, as Congress considers these legislative proposals, it is critical to balance safety objectives with the technical realities of the digital ecosystem and the need to preserve American innovation.
While some of these measures offer constructive steps—such as public education campaigns and evidence-based studies—others present serious functional and security concerns. Specifically, mandates for broad age verification often ignore the technical infeasibility of current verification methods and the cybersecurity risks created by collecting sensitive user data. Furthermore, overly broad definitions risk sweeping in beneficial technologies, potentially cutting off minors from valuable educational and mental health resources under the guise of protection.
We urge the Committee to prioritize solutions that empower parents and deployers without imposing unworkable mandates that stifle the development of next-generation computing. We remain ready to assist the Committee in refining these proposals to ensure they effectively protect youth while fostering a vibrant and open digital future.
A State Policymaker’s Playbook For AI Success
The Opportunity
Artificial Intelligence is a general-purpose technology—like electricity or the internet—that will define U.S. competitiveness, productivity, and prosperity for decades. With the right approach, AI can expand economic opportunity, improve health and education, and create abundance for all. See real-world examples at AI Opportunity.
Why It Matters
Jobs & Growth: AI will create entire industries and expand workforce productivity; it can help a state’s farmers, doctors, small businesses, and teachers—if government gets out of the way.
State Leadership: Pro-innovation policies will attract AI entrepreneurs, jobs, and investment. Policymakers should treat AI as the opportunity it is, and we will be the generation that provides every student with a private tutor and every patient with access to personalized treatments.
Global Competitiveness: Adopting a free-market framework ensures the U.S. will lead the way in global AI innovation, outpacing China and any potential adversaries.
Guiding Principles
Freedom to Build
No permission slips for entrepreneurs. Innovators should be free to build without Washington-style bureaucrats standing in the way.
Building the Launchpad for an AI Moonshot: Build the infrastructure, regulatory scaffolding, and policy incentives to allow the private sector’s “rocket ship” of innovation to launch.
Right to Compute Act: Computational freedom is not a privilege to be granted by government, but a natural extension of rights we already possess that should be protected by government. After being passed in Montana, this concept has been introduced in Ohio and New Hampshire.
Punish Abuse, Foster Learning
Like other computing technology, AI is a tool. We should not preemptively regulate people building tools with unknown upside potential; instead, we should hold bad actors accountable when they use any tool to commit fraud or violate rights.
Regulating Machine Learning Open-Source Software: Regulatory burdens on open-source developers would concentrate power, stifle innovation, and undermine the real-world benefits of open-source AI.
The Vibrant AI Competitive Landscape: The current AI ecosystem is deeply vibrant and competitive across hardware, models, cloud, and applications. Overregulation will undermine competition.
Sunset the Red Tape
New rules should work for today and tomorrow. We will actively review, revise, and repeal—keeping government flexible and accountable.
Resetting AI Regulation: AI policy should include sunset provisions, iterative review, and risk-based oversight, preserving the state’s role as an innovation-friendly leader.
Utah’s Mental Health Chatbot Act: Represents a thoughtful and balanced regulatory model for AI applications in mental healthcare, combining user protection, ethical considerations, and innovation-friendly policies. A far better approach than what Illinois did here.
Government Use
Proper use of AI can streamline and improve government functions, saving taxpayers’ money while protecting residents’ interests and rights. There are enormous opportunities for such benefits in state procurement, benefits administration, resident services, and even emergency services and natural disaster mitigation and relief. See Improving Government Efficiency with AI Technologies.
Build Energy Abundance
To reap the benefits of AI innovation, states have an opportunity to blaze a new trail on energy generation where we build what works.
Data Centers: In partnership with the James Madison Institute, this article outlines the basics for a regulatory framework around data centers, their energy use, and their water use.
Energy Use: Data centers are the invisible foundation of the modern economy. They are the computers you use through your own devices without ever touching them. They are large electricity users, but they are willing to work with states to meet their needs without shifting costs onto others.
Water Use: In this article, and its follow-up, we outline fact-based responses comparing water use in energy and data infrastructure.
Grid Assets: Emerging evidence shows that new data centers, when structured properly, can actually pay for grid revitalization projects because of the load flexibility they bring to the grid.
Nuclear Energy: Five states and three nuclear companies are currently suing the NRC to return nuclear regulatory authority to the states. This article summarizes the lawsuit and its potential to unlock nuclear power generated by small modular reactors.
AI is not a threat to be feared, but a tool to be harnessed and leveraged. AI will be the source of the next Industrial Revolution, and states should seek to be first to build the metaphorical railroads of the future. With a free-market, pro-innovation approach, we can make our state—and America—the global leader in artificial intelligence, securing prosperity and abundance for future generations.
In 1999, Mark Taylor, co-producer of Cher’s global mega-hit “Believe,” explained to Sound on Sound magazine how he created the now-famous robo-glide on Cher’s voice. The account was elaborate: a Korg VC10 vocoder, a Digitech Talker, a Nord Rack, and some Cubase gymnastics. It sounded like a fire sale at Radio Shack.
It was also untrue.
The real method was very simple: AutoTune, a new plug-in invented by a flautist who had worked at Exxon.
So, why lie about it, months after the song had conquered the planet? Protecting your secret ingredient is one explanation. Another is cultural: in 1998-99, it could be a little taboo to admit you were friendly with AutoTune. Many producers were using it because it saved time, money, and singers’ vocal cords. But they didn’t tend to speak of it.
Before the plug-in era, “perfecting” a take required extensive manual labor. You coached vowels, recorded take after take, and then started cutting tape. Engineers spliced syllables with razor blades, created slapback to blur edges, and nudged tape speed. The introduction of Digital Audio Workstations (DAWs) made this process faster, but it was the same idea. Pitch correction was an exhaustive exercise in meticulously managing performances and creatively masking mistakes.
The world’s largest DAW in 1988, with 64 megabytes of RAM. Photo credit: mu:zines
AutoTune dramatically trimmed the workflow. The pursuit of perfection got a lot cheaper.
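For the technically curious, here is a deliberately naive sketch of the core idea behind automatic pitch correction: estimate a frame’s pitch, snap it to the nearest equal-tempered note, and resample to shift it. This is a cartoon under our own simplifying assumptions (autocorrelation pitch detection, plain resampling), not AutoTune’s actual algorithm, which also preserves timing and vocal formants.

```python
import numpy as np

# Cartoon of automatic pitch correction: estimate pitch, snap to the
# nearest semitone, resample. Real systems also preserve duration and
# formants (e.g., with PSOLA or a phase vocoder); this sketch does not.

def estimate_pitch(frame: np.ndarray, sr: int) -> float:
    """Crude autocorrelation pitch estimate, in Hz."""
    corr = np.correlate(frame, frame, mode="full")[len(frame):]
    lags = np.arange(1, len(frame))
    valid = lags >= sr // 1000          # ignore pitches above ~1 kHz
    best_lag = lags[valid][np.argmax(corr[valid])]
    return sr / best_lag

def nearest_semitone(freq: float) -> float:
    """Snap a frequency onto A440 equal temperament."""
    steps = round(12 * np.log2(freq / 440.0))
    return 440.0 * 2.0 ** (steps / 12)

def retune(frame: np.ndarray, sr: int) -> np.ndarray:
    """Resample one frame so its pitch lands on the nearest semitone.
    Naive: this also stretches or shrinks the frame in time."""
    pitch = estimate_pitch(frame, sr)
    ratio = nearest_semitone(pitch) / pitch
    positions = np.arange(0, len(frame) - 1, ratio)  # step=ratio shifts pitch
    return np.interp(positions, np.arange(len(frame)), frame)
```

The whole pile of razor blades, slapback, and tape-speed tricks collapses into a few dozen lines once a computer can find the pitch for you.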
Still hardly anyone mentioned it. It was like Nanna’s “from scratch” sauce… with a suspicious number of empty jars in the pantry. Everyone uses the shortcut; no one admits it. Why? Because plenty of folks would publicly denounce it as “cheating” and “dehumanizing,” and nobody wants to be the first heretic at the cookout.
So while publicly, AutoTune was a little taboo, privately it was just Tuesday in the studio. This mismatch is what Todd Rose calls a collective illusion: when most people privately believe one thing but wrongly believe that other people believe the opposite. The result is a public consensus that almost no one actually wants. People in the office schedule video meetings thinking that’s what everyone else prefers. They don’t.
“Believe” punctured the illusion. Once it was clear that AutoTune, not a vintage vocoder, was the real engine, the taboo began to fade. Artists began experimenting with it as an instrument, and the question changed from “do you use it?” to “how do you use it?” Then Faheem Rashad Najm, a great singer even without the effect, began saturating his music with it. Soon, Faheem, better known as T-Pain, became the Johnny Appleseed of AutoTune, sprinkling it all across the land. For a stretch in 2007, he appeared on four songs in the Billboard Hot 100’s top 10 at the same time. AutoTune was king.
We are living through a similar dynamic today with AI in music. In the public square, many artists worry or insist that AI is the enemy. But the reality is quieter and more complicated: artists and producers are in the studios experimenting with a lyric assist here, sample generation there, stem separation, new melody suggestions, a prompt or two, and they’re finding that much of it is useful. A new survey by LANDR found that “87% of respondents use AI tools in their music workflow”. And nearly 30% are using AI song generators in their creative work.
In hushed tones, artists are asking: “Wait—are you prompting?” “Uh… maybe?” Then the grin: “Me too.”
If “Believe” has taught us anything, other than that you shouldn’t give up on topping the charts after age 50, it’s how collective illusions collapse. When gatekeepers and tastemakers normalize what’s already happening, the social penalties fueling self-censorship crumble and fact can overcome fiction. Once that happens, the story flips from “cheating” to “new instrument” and creators collaborate on innovation and new soundscapes, leaving behind the taboo.
Cher’s “Believe” didn’t just change the sound of pop; it helped make it acceptable to treat a weird new gadget as an instrument instead of a scandal. We need the same move with AI. As long as AI is cast as an evil monolith, artists will hesitate to share publicly how they are using it. But when trendsetters talk openly about their explorations, both the cons and the pros, the taboo begins to crack. The collective illusion collapses, and conversations can shift from whether anyone is allowed to touch AI to how it can and should be used. Then artists can more meaningfully help shape the future of the tool. And with a tool this disruptive and empowering, we want the creators at the table, not just the lawyers.
How AI could supercharge America
The American economy is like a flabby giant. Its size and strength are still impressive, but it huffs and puffs when faced with certain challenges. The federal government runs trillion-dollar deficits – even in good years. Birth rates are falling. Schools are failing many kids. Infrastructure is rusting out faster than we can replace it. And none of these are new problems. They are the consequences of years of cultural drift, neglected maintenance, complacent citizens, and weak leadership.
Yet under this blubbery exterior, the U.S. economy still boasts some strength. Today that is coming from technology companies, big and small, which are fueling the country’s economic growth. In September 2024, Mario Draghi – the former president of the European Central Bank – noted in a major report on competitiveness that while the European Union and the United States had boasted comparably sized economies in 2000, since then, per-capita real disposable income in the United States had almost doubled the EU level. The main reason for this shift and the widening productivity gap between the two economies, Draghi concluded, was the tech sector.
Artificial intelligence could bring this sector’s strength to the rest of the American economy. Properly applied, AI could become a general-purpose technology on par with or even surpassing electricity or the internet. It could boost productivity, expand opportunity, and revitalize our bloated and sluggish systems. AI-driven productivity increases could help balance our national budget, fill the gaps left by an aging workforce, remake education, and drive scientific and health care discoveries and innovations that underpin prosperity. AI offers a path to a robust, muscular, fit American economy.
The opportunity is America’s to seize. The United States leads the world in AI research and investment, although other countries, especially China, are in hot pursuit. If we miss this moment, it won’t be because technology failed us – it will be because our politics did. Fear, fragmentation, and bureaucratic overreach could choke off the very growth the United States desperately needs. The country currently faces three significant political challenges in this domain: whether to allow a patchwork of state laws to strangle AI innovation before it scales; whether to let our children learn with AI; and whether to build the physical power and computing resources needed to let AI proliferate.
How the United States meets these challenges will determine whether we use AI to whip the country’s economy back into shape – or decide instead to resign ourselves to a couch-potato economy, with all the stagnation that would bring.
Move slow and break nothing
Important parts of the U.S. economy today have become undisciplined, shortsighted, and slow. Last year, despite low unemployment and steady GDP growth, the federal deficit hit $1.8 trillion. The share of eighth graders scoring “proficient” in math was just 28%, down six percentage points since 2019. The country’s fertility rate remains well below the replacement level. And projects to build basic necessities such as transmission lines or high-speed railways routinely take a decade or more to permit and construct.
America’s capacity to do big things has atrophied. Our last real productivity boom ended two decades ago. Outside of technology, the economy is barely growing. As of mid-2025, just four tech firms accounted for roughly 60% of year-over-year stock-market gains. The so-called Magnificent Seven largest U.S. tech companies make up almost 50% of the total value (by market capitalization) of the NASDAQ 100 stock index. The U.S. economy leans heavily on our tech companies.
But AI could reinvigorate the rest of the economy, and the country along with it. Consider the following. In November 2022, OpenAI released ChatGPT as an experiment. Despite the company never intending to build a mass consumer product, ChatGPT became the fastest-adopted technology in history, hitting 100 million users in two months. The app ignited a surge of investment that spread far beyond chatbots. By 2024, private AI investment in the United States had reached $109 billion, and the number is still growing rapidly.
ChatGPT is not the first time a technological breakthrough has driven excitement about AI. But this time is different. Machine learning, which is the core process underpinning modern AI, uses algorithms trained on vast data sets to recognize patterns and make predictions. This approach is proving highly generalizable. It can already draft contracts, model proteins, translate languages, and guide robots. Machine learning is making its way into every field that runs on data. And in the 21st century, that’s nearly all of them.
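For readers who want to see how small that core loop really is, here is a toy sketch using scikit-learn. The numbers are made up purely to make “trained on data to recognize patterns and make predictions” concrete; any real application needs vastly more data and care.

```python
from sklearn.linear_model import LogisticRegression

# Toy illustration of the machine-learning loop described above:
# learn a pattern from labeled examples, then predict on unseen input.
X = [[1, 0], [2, 1], [8, 9], [9, 7]]    # feature vectors (made up)
y = [0, 0, 1, 1]                        # labels observed in the data

model = LogisticRegression().fit(X, y)  # "training": fit the pattern
print(model.predict([[7, 8]]))          # "inference": prints [1]
```

The same loop, scaled up by many orders of magnitude in data and compute, is what drafts contracts, models proteins, and guides robots.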
AI, in other words, will define our era, and the good news is that the United States leads the world in this technology. U.S. companies designed and trained the AI models, constructed the data centers that run them, and developed the applications that bring the power of AI to users. America dominates the AI industry today, in investment, revenue, and innovation.
AI won’t solve America’s problems on its own. But it could make almost every problem easier to solve – if we don’t get in our own way.
Barrier one: The 50-state trap
Unfortunately, too many American leaders currently treat AI as a reason to panic. In 2024, state lawmakers introduced 635 AI-related bills, enacting 99. By mid-2025, that number had ballooned to more than 1,100 proposals. The National Conference of State Legislatures reports that 38 states adopted or enacted approximately 100 measures in just the first six months of this year.
The intentions of these laws vary: to make AI safe, protect consumers, prevent bias, regulate deepfakes, or restrain tech giants. But whatever the goal, the outcome of this flurry of lawmaking is the same: a minefield for the companies required to comply. Modern AI, unlike these laws, isn’t built state by state. It’s trained on global data, deployed on cloud servers located around the world, and used in ways that cross borders in real time. In other words, modern AI is not a single product situated in a single place; it’s a distributed set of constantly evolving services. Trying to govern AI locally is like trying to use your thermostat to control the weather.
The threat this patchwork of regulation poses to American tech leadership isn’t theoretical. Each new state mandate adds conflicting definitions, overlapping audits, and redundant reporting requirements that companies must struggle to fulfill. Pro-regulation states with large markets – like California and New York – essentially set the rules for everyone else. Big companies must waste huge amounts of money and time complying with all the rules – but they probably can absorb the costs. Startups can’t. The results will be predictable: fewer startups, slower product launches, chilled investment, and innovation driven offshore.
One need only look at Europe – where well-intentioned but cumbersome AI and technology rules have slowed research and driven talent to the United States – to see what impact such a regulatory approach could have here. If the United States commits its own version of this mistake by allowing individual states to race for the most restrictive standards, the whole country will lose.
The U.S. Congress should act before that happens. It should preempt most state AI laws and set a single national framework for model training and deployment – an approach that treats AI as the interstate infrastructure it is. A coherent federal policy would consistently protect users, clarify responsibilities, and streamline compliance for innovators. The right model would mirror what has worked for past transformative technologies: uniform, light-touch rules, allowing for open competition and space for experimentation. Anything else will weigh down our economy with onerous amounts of legal paperwork.
Barrier two: Banning the future of education
The second threat to America’s AI dominance – and the technology’s potential to transform our economy – is more emotionally fraught but no less destructive: overreacting to the use of AI by children.
The fear is understandable. The growth of the internet has taught us to be wary of tech’s unintended effects. Parents today have many reasons to be protective. But banning AI outright in our classrooms or making it harder for children to use – moves some lawmakers have already proposed – would be an act of educational malpractice.
That’s because AI tutors could become the most powerful learning tool since the printed book. At Alpha School in Austin, Texas, for example, AI systems coach students through their core academic work in just a few hours – and then the students spend the rest of the day building drones, running businesses, or exploring the outdoors. Alpha School is also developing a platform called Timeback that aims to empower educational entrepreneurs to create personalized, one-on-one instruction for less than $1,000 a year per student.
This isn’t science fiction; it’s a working prototype of what individualized education could look like. Properly used, AI tutors could democratize elite instruction, helping kids learn at their own pace, in their own style, with real-time feedback and fewer bureaucratic barriers.
But lawmakers are letting fear drive policy. Bills intended to protect kids could undercut the very feedback loops that power AI-driven educational tools. Overly strict rules protecting privacy, for example, would prevent AI systems from effectively tracking students’ progress or spotting their subtle learning patterns. And a tutor that can’t observe is a tutor that can’t teach. These and other poorly considered laws could drive AI innovators away from education to less legally fraught areas, even though the country desperately needs more innovation in this field.
We haven’t banned microscopes because they reveal too much detail, or calculators because they could potentially replace our arithmetic skills. Instead, we have equipped our educators to use such tools responsibly and trusted them to train our students to do the same. For similar reasons, the solution to how to deploy AI in education today is not prohibition but thoughtful application and experimentation. Parents should have options and schools should have significant flexibility. Privacy laws should deter the misuse of information rather than the mere gathering of it. An open, pluralistic approach would nourish what works and weed out what doesn’t.
Our current education system is failing too many of our children. Denying students access to the tools that will define their generation would not be appropriately cautious – it would be shortsighted and reckless.
Barrier three: The building bottleneck
AI is software, but its progress ultimately depends on our ability to build significant physical infrastructure. And America no longer builds as it once did. The mid-20th century United States erected an entire modern world in a generation. It poured concrete for highways, raised power plants, wired cities, and built the grid that powers everything from suburbs to supercomputers. The country had a bias for action.
Today, that bias is gone. The very laws designed to manage progress have become tools to prevent it. When Congress passed the National Environmental Policy Act (NEPA) in 1969, it was intended to strengthen environmental stewardship. But the law now functions as a procedural labyrinth and the most powerful tool in the NIMBY toolbox. Environmental reviews for infrastructure projects take a median of more than two years, an average of nearly four, and often generate thousands of pages of analysis with little measurable benefit. The result is paralysis by paperwork. Every major project – solar farms, wind installations, data centers, transmission lines – can be delayed for years by bureaucracy and litigation.
Yet AI needs physical infrastructure. Data centers – which really should just be called supercomputers – are the modern equivalent of factories. All the online services we use, including AI services, run on these supercomputers housed in large warehouses. Training a cutting-edge model and serving its users require significant amounts of computing power and energy. (Contrary to popular belief, data centers don’t really require that much water; they use significantly less of it than many industrial facilities or agricultural operations.)
The growth of AI has increased the demand for data centers and the infrastructure they require. In particular, AI requires more energy production and distribution. But the United States struggles to build new power sources and connect them to the grid quickly enough. Most of the United States has expanded capacity very slowly since the 1970s. Only Texas, which operates a deregulated “connect-and-manage” grid, has grown quickly, adding more than twice as much new capacity as any other grid operator in the country between 2021 and 2023. The state’s dynamic energy market is a major draw for new data centers, with hundreds of billions of dollars in planned investments testifying to the grid’s stability and recovery since Winter Storm Uri in 2021.
If the United States can’t speed up its permitting and building processes, the AI boom will stall. The world’s most sophisticated algorithms are useless without electrons to power the computers.
Congress should therefore treat infrastructure improvement as a national security priority. It should replace our current process-for-process’-sake approach to new construction with outcome-based environmental standards. It should set firm timelines for reviews and limit their scope. It should expand categorical exclusions for low-impact projects. And it should limit injunctions to cases of clear and imminent harm.
At the same time, federal and state agencies should coordinate to unclog interconnection queues and modernize the grid. The future of AI – and much else – depends on abundant, reliable energy. Building it is the precondition for greatly increasing our prosperity.
Fear or abundance
America has been here before. We’ve stood on the edge of a technological breakthrough, uncertain whether to seize it or smother it. We faced it with the railroads, the electrification of cities, the interstate highway system, and the dawn of the internet. In each case, abundance won out over fear, though not always quickly and not always cleanly. The choice before us now is the same: to treat AI as a threat to be contained or as an opportunity for renewal.
Choosing abundance means trusting the American people to build, learn, and adapt. It means allocating regulatory authority between the federal government and the states in a way that promotes experimentation rather than chills it. It means giving every child access to the tools of the age rather than locking them behind digital fences. And it means rediscovering the courage to build – not someday, but now.
The alternative is a future in which AI progress happens elsewhere, U.S. schools stagnate while those in other countries accelerate, and the next generation of American innovators grows up under a regime of control rather than freedom.
That would be a major societal failure.
AI is not a silver bullet for all our problems, but it could be the catalyst that restarts broad American dynamism. The question is not whether AI will transform the world. It will. The question is whether the United States will lead this transformation, or if we will comfortably watch others from the sidelines.
We can still choose abundance. The United States remains the most capable society on earth for translating invention into prosperity. We’re a bit doughy and out of practice, but we still have the talent, the institutions, the capital, and the culture of risk taking that every other country envies. What we need is to give ourselves permission to shed the unnecessary deadweight, to exercise our entrepreneurial muscles, and to wrestle optimistically with the challenges ahead.
The song dropped a few weeks before Thanksgiving, and tastemakers attacked. They ranked it at the bottom, sneered that the artist wasn’t real, dismissed it as novelty, and excoriated the music for being just plain bad. Not serious. Not authentic. Not real.
In 1958, Ross Bagdasarian was a struggling actor and songwriter. He’d had an Alfred Hitchcock cameo and he co-wrote a hit for Rosemary Clooney, but the money had dried up. In fact, before that, he’d tried grape farming in the late 1940s and his crop literally dried up. When your resume includes “failed raisin magnate,” you’re not exactly on the glide path to stardom.
By this point, he had about $200 to his name, according to his kids, and spent $190 of it on a vari-speed tape machine. He discovered that if he sang very slowly into the recorder at half-speed, then played it back at regular speed, his voice turned into a helium-induced cartoon. Then he had an idea: a song about seeking advice from an alternative healer. He wrote and recorded “Witch Doctor” using the vocal trick.
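For the technically curious, the trick is easy to sketch in code. This is a hedged illustration, not Bagdasarian’s actual process: digitally, playing a half-speed recording “at regular speed” is roughly equivalent to keeping every other sample. The filenames and the soundfile library are assumptions of the sketch.

```python
# Crude digital version of the varispeed trick: dropping every other sample
# halves the duration and doubles every frequency, yielding the chipmunk
# timbre. (Proper resampling would low-pass filter first to avoid aliasing.)
import soundfile as sf  # assumed available; any WAV I/O library would do

voice, rate = sf.read("slow_vocal.wav")        # hypothetical half-speed take
sped_up = voice[::2]                           # "play back at regular speed"
sf.write("chipmunk_vocal.wav", sped_up, rate)
```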
Liberty Records’ executives – Alvin Bennett, Simon Waronker, and Theodore Keep – were close to bankruptcy and bet it all on releasing this odd song. In April 1958, “Witch Doctor” rocketed to the top of the charts.
Riding that success, months before Christmas, Bagdasarian’s four-year-old son began the annual parental torture ritual, “When is Christmas?” That gave him another song idea. He whistled the tune into a tape recorder (he couldn’t play an instrument) and wrote a Christmas song. But he felt it shouldn’t be a choir; it should be singing insects or animals. He eventually landed on chipmunks.
He took the stage name David Seville and named his high-pitched trio after those Liberty Records execs.
“The Chipmunk Song (Christmas Don’t Be Late)” debuted on American Bandstand’s “Rate-A-Record” segment. It scored the lowest possible rating of 35 across the board. As bad as it gets. Critics called it novelty. Even decades later, writer Tom Breihan praised its ingenuity but also called it a parlor trick and added, “As a piece of music, it sucks shit.”
Listeners didn’t care.
“The Chipmunk Song” spent four weeks at Number 1, stayed on the charts for thirteen weeks, and was the last Christmas song to reach Number 1 until Mariah Carey’s “All I Want for Christmas Is You” in 2019. The record also won three Grammys at the inaugural Grammy Awards.
None of this is surprising.
When artists use new technology to make new kinds of art, some gatekeepers respond by declaring it “not real.” A 15th century monk said the printing press made a “harlot” of literature. Music legends warned that synthesizers would “destroy souls.” Today, critics slap the label “AI slop” on AI-generated music and content. The charge is the same: that this isn’t real art because it lacks human experience and depth.
Maybe “The Chipmunk Song” really is a parlor trick. Maybe Breaking Rust’s “Walk My Walk” really is “AI slop.” But once something hits Number 1, we’re forced to face an uncomfortable question: if millions of people like it, what about it isn’t “real”?
That’s not to say that popularity settles an argument. Plenty of popular things are shallow and disposable. But popularity does tell us that something is happening in people’s heads and hearts. A recent Deezer-Ipsos survey found that 97% of listeners can’t tell the difference between AI and human-composed music. If most people can’t hear the difference, then “this isn’t real” can’t just be about how it sounds.
Often, the accusation is about jobs. Historically, when new tools arrive, critics pair their aesthetic complaints with concerns about “real” artists losing work. “AI slop” can work the same way. It’s a taste judgment, but it carries a quieter worry: what if this new stuff replaces us?
That fear isn’t fake, but dismissing the tech as fake or unworthy doesn’t solve the problem. It just insults the audience. If the concern is that algorithms and bots are juicing engagement, then the argument is not with the songs, it’s with the incentives and the business model. If the concern is that artists will lose their work, then the argument is with how we structure rights, revenue, and opportunities for human creators.
Ross Bagdasarian’s chipmunks remind us that listeners have always had a soft spot for gimmicks, novelties, and new sonic landscapes. And those experiments can become part of the canon, not by passing a purity test, but by connecting with people. As AI tools flood the landscape, artists must rethink their advantages that no model can automate. (On this, I just had a great conversation with Bandcamp’s Dan Melnick – more later.) And critics can retire the border-patrol badge and help us tease out why sounds land in the first place, and what that says about us, the humans.
The 1925 New Tech That Let a Legend Invent a New Sound
He needed one more tune for the recording session at Okeh Records. Dinner was almost ready, his mother was at the stove, and he sat down at her table to “scratch out” something fast. But this song had to be different. There was a new technology in recording that many were criticizing, but what if he could take advantage of it and create a whole new sound? He finished the song in fifteen minutes, recorded it the next day, and it was an instant hit. The new technology was the microphone, the song was “Mood Indigo,” and the artist was Duke Ellington.
Music historians argue over how much of it Duke actually wrote that evening. Clarinetist Barney Bigard had floated the melody to Ellington earlier. But the important part is that Ellington orchestrated the song for the microphone, not just through it. That was the leap.
Before 1925, recording was entirely mechanical. Bands would “gather ’round the horn” and play into it so the sound pressure would jiggle a diaphragm. The diaphragm moved a stylus that scratched the vibrations into a cylinder or disc. Big, bright tones worked great; quiet, low instruments struggled to be heard. There was a choreographed dance as studio assistants (“pushers”) shuffled musicians to and from the horn to vary the dynamics. A singer might have to stick her head in the horn to register her softer notes. One violinist just sat on a box with wheels to adjust his position more easily throughout a recording. And everything was all live, all the time. There was no editing.
Western Electric’s electrical recording changed it all.
A microphone listens differently; it’s sensitive. It hears low tones and quiet details. Near instruments sound warmer, farther instruments sound airy, and they can all be heard. So, while horn-recording flattened music, the microphone created a three-dimensional sound stage. That shift offered enormous potential for innovative artists. It also stirred a lot of controversy.
Critics said the microphone was breaking up the band, spotlighting individual instruments and destroying the ensemble sound. Other familiar criticisms followed. It didn’t sound “natural” or authentic. It threatened livelihoods: acoustic engineers with years of experience were suddenly rookies again. And then came “crooning,” the intimate mic style that set off a moral panic (we’ll save that one for another day – it’s worth it).
But Duke Ellington was intrigued. He could see—or hear—the mic as a new instrument with its own physics and color palette. New soundscapes were possible. The mic could let the string bass “crowd” the frontline, previously dominated by horns, and steer the groove. The plunger-muted brass could growl without turning to fuzz. The low reeds could whisper and hold their own with the rest of the band.
Ellington also heard something interesting when he recorded an earlier song, “Black and Tan Fantasy,” on a microphone. He called it a “mic tone”: a vibration like a ghostly extra pitch that emerged when certain instruments and intervals interacted with the mic. Not feedback, not distortion, but a new overtone. Rather than fight it, he wrote to it. That brings us back to “Mood Indigo.”
At his mother’s kitchen table, Ellington inverted the usual brass-reed hierarchy. He handed the bass line to the clarinet, parked the trumpet in the middle register, and let the trombone float high. It was an arrangement that would have turned to mud in the horn era, but it bloomed in the new mic era. The stack created the illusion of a fourth voice, born in the microphone. He also discovered that the original key, A flat, rattled the mic too much, so he bumped it a whole step to B flat, and it was perfect.
Duke Ellington & His Orchestra with a Marconi-Reisz Mic, Circa 1933
“Mood Indigo” was written for the mic and it was a phenomenal success. Ellington would become a household name, known for his hit songs and for his nightly broadcasts across the country from the Cotton Club… through a microphone, naturally.
Ellington didn’t ask the microphone to behave like the horn. He rearranged the band. He saw a new tool, new rules, and he pressed to see what it could do. He’s regarded as one of the most influential artists in music history, in large part because he wrote to the innovation, not against it.
Practice Didn’t Die, It Moved: Auto-Tune and Death Cab for Cutie
The indie rock band Death Cab for Cutie arrived at the 2009 Grammy Awards in protest. With baby blue ribbons prominently pinned to their lapels, they decried a contaminant sweeping the globe. It poisoned natural beauty, concealed human error, and bulldozed diversity. Not oil. Not chemicals. Auto-Tune.
On the red carpet, they warned of a music industry awash in the “digital manipulation” of thousands of singers. But this admonition wasn’t new. It was another verse for the chorus that has echoed since the first vocoders crackled to life. It didn’t sound human, critics had charged; it scrubbed away the small imperfections that make performances feel alive and authentic.
Bassist Nick Harmer added that because of Auto-Tune, “musicians of tomorrow will never practice. They will never try to be good, because yeah, you can do it just on the computer.” We’ve heard this lyric before.
Another musician had similarly worried over a machine in music: “And what is the result? The child becomes indifferent to practice.” When music can be easily acquired, he continued, “without the labor of study and close application, and without the slow process of acquiring a technic, it will be simply a question of time when the amateur disappears entirely….” That wasn’t a concern about Auto-Tune. That was renowned composer and bandleader John Philip Sousa in 1906, troubled by the player piano. Different gadget, same prophecy.
But Sousa was wrong. In the years after his warning that player pianos would diminish the public’s interest in learning, the opposite occurred. According to a 1915 article in The Musical Quarterly entitled “The Occupation of the Musician in the United States,” census data revealed that between 1890 and 1910, the number of piano teachers in the U.S. increased by over 25%, from 1.2 piano teachers per thousand people to 1.5 per thousand. Perhaps the player piano slowed the rate of growth, but the desire to make music certainly didn’t die; it adapted. Practice rarely disappears; it just sometimes migrates.
That’s the pattern. The microphone changed the frontier from lung power to mic craft. Drum machines spread precision from wrists to arrangement. Sampling expanded creativity from takes to crate-digging and taste. Auto-Tune, used as an instrument instead of spackle, prized design and studio judgment. The practice didn’t vanish; it just morphed and moved.
Why, then, do we get obituaries each time? Part of it is that these “practice panics” aren’t just about sound; they’re also about status. They can be a contest over who defines “real.” Norm guardians such as unions, established tastemakers, conservatories, critics, and fans police the boundaries of “authentic” practice. If legitimacy has long been signaled by a specific kind of labor, a tool that reduces that labor can look like cultural vandalism. Thus, these prophetic proclamations of future despair can sound noble, cloaked in virtue (“for the craft”), but they may be an effort to protect yesterday’s pecking orders.
This is not to say that concerns are simply cynicism. We all build our identities around the techniques we’ve developed through blood, sweat, and tears. A new tool can rightly feel like an assault on meaning. But history shows us that though difficult to navigate, practice adapts and the tent gets bigger.
As the world around us continues to evolve quickly, it’s important that we keep the target in sight and separate the ends from the means. The end is expression, or in other fields it can be preserving resources, improving health, or something else; the means are the tools, and they can change without the sky falling. Perhaps it’s a question of scrutinizing the conduct instead of regulating the capability. We didn’t outlaw microphones because crooning scandalized 1928, and we shouldn’t bury pitch correction because 2009 felt overscrubbed.
“Real” lives in the listener’s gut, not in the checklist of chores that deliver it. Even Auto-Tune can be a new grammar, a new way to sculpt the soundwaves to create an authentic experience. So, the debate shouldn’t be about destroying the tool, it should be about how best to teach and to learn the craft where it now lives.
Innovation keeps relocating the work. Artists keep chasing it because that’s where the meaning is, and where there’s a chance to land a song that rings true because it enriches someone’s life.
Before iPhones and ChatGPT, Venice Had Its Own Tech Panic
Filippo was convinced that the kids were in trouble. A flashy new machine was hijacking their attention, exposing them to risqué material, and turning brains to mush, while the “tech bros” shrugged and made more. So he did what any concerned citizen would do: he wrote a letter to City Hall. Technically, it was 1474 and “City Hall” was Nicolo Marcello, Venice’s chief magistrate. The machine was the printing press. Filippo de Strata wanted it shut down.
Reading his plea, “lest the wicked should triumph,” is pure déjà vu. Ignore the courtly flattery (“may you hold sway forever… exalted as you deserve”) and the SAT words (“circumlocution”), and you’re basically at a modern Hill hearing about iPhones or ChatGPT. Same script, different nouns.
It shouldn’t be all that surprising. Human concerns don’t update as fast as the tech does. In fact, they remain pretty constant. A Benedictine monk writing five centuries ago with ink-stained fingers sounds a lot like a 2025 think tanker with a ring light. Indeed, De Strata follows a classic playbook that resonates today: jobs, authenticity, and the children.
First, jobs. This is the economy. The printing press, he says, is putting “reputable writers” out of work while “utterly uncouth types of people” (printers), muscle in with their “cunning.” As a professional scribe, De Strata’s business model depended on scarcity: slow, meticulous processes. The press messed it up. “They print the stuff at such a low price that anyone and everyone procures it for himself in abundance.” Translation: scarcity for others pays my rent; abundance for others puts me out of a job.
Next, authenticity. This is sociology. Who gets to be “real”? Every scene has gate-keepers, norm-guardians that define the rules and police the border between authentic and counterfeit. De Strata draws a clear line with gusto. “True writers” wield goose-quills, printers are “drunken” and “uncultured…asses.” He explains that the work of the author is a superior art form. Writing is a “maiden with a pen” until she suffers “degradation in the brothel of the printing presses.” Then literature becomes a “harlot in print” and a “sick vice.” Tell us how you really feel, Filippo.
He also polices credentials. Printing, he worries, allows people to buy their way into expertise. For a small sum, “doctors” can be made in only three years. It’s the timeless concern that new tools compress the distance between novice and master—or create false senses of mastery. A decade ago it was weekend masterclasses, MOOCs, and Wikipedia challenging traditional passages of learning (never mind simply staying at a Holiday Inn Express last night). Today, self-publishing, Substack threads, YouTube explainers, and X let anyone speak with an expert cadence. The question, though, isn’t whether the gate got wider, but how we measure real mastery.
Finally, think of the children. Cheap and easily accessible books, he warns, are vehicles of debauchery and impurity that are corrupting kids. Maybe that’s just rhetorical gasoline for his arguments to catch fire, or maybe it was a sincere pastoral concern for the next generation. As a dad who’s watched his kids disappear into a screen too often, I totally get the concerns. Either way, the “for the kids” refrain reliably clothes his economic and status concerns in civic virtue.
Unfortunately for Filippo de Strata, City Hall didn’t bite. Printers kept printing, presses multiplied, and Venice became the hottest book town in Europe. The printing press didn’t end scholarship; it multiplied the scholars. His letter didn’t stop the presses, but it left us a helpful snapshot of how we react when new tools arrive.
A 500-year-old letter is more than a curiosity, it’s a diagnostic. Objections to new tools cluster in timeless buckets: economic pain (who loses their job?), social status (who defines “real”?), and moral urgency (what about the kids?). When a fresh technology arrives, we can map the reactions and work to distinguish measurable harms from preferences for yesterday’s workflows.
De Strata wanted the future to behave like the past. Venice chose to bargain with the future, building guardrails that let abundance work for more people. The kids still need guidance. Experts still matter. But the threatening tool can become the instrument that broadens who gets to read, think, and make.
AI in Music Feels Familiar: The Silent Album, Sousa, and Déjà Vu
You know the feeling, that eerie sense you’ve already lived this moment. You know the feeling, that eerie sense you’ve already lived this moment. Dad joke deployed! Déjà vu! Let’s press on.
I had a déjà vu moment a few months ago with a song that wasn’t a song. In late February 2025, a thousand U.K. artists released an album of silence. Multiple studios, one sound: nothingness. It’s called Is This What We Want?, and it’s less a new vibe and more a brick through the policy window. Each track title spells out a message to Parliament: “The British Government Must Not Legalise Music Theft to Benefit AI Companies.” The argument is that proposed reforms to U.K. copyright law will allow generative artificial intelligence to replace musicians. They believe that the studios will be silent and the machines will take the gigs.
I put the record on (insert joke about adjusting the EQ) while prepping a conversation on AI in music with drummer Elmo Lovano (Go with Elmo! and JammCard) and AI expert Neil Chilson (those CSPAN clips!). In doing a little research, I fell back into 1906 and met an old friend from your Fourth of July playlist, John Philip Sousa. I grew up listening to my amazing WWII veteran grandfather wear out those march records with “The Stars and Stripes Forever,” “Semper Fidelis,” and “The Washington Post.”
In 1906, Sousa wrote an article, “The Menace of Mechanical Music,” that sounds like a century-old oppo piece on AI. He writes that if machines can steal music from artists it will destroy “further creative work,” where “the amateur [musician] disappears entirely” and for the professionals “compositions will no longer flow from their pens.” More machines, fewer musicians. Déjà vu. Only he wasn’t worried about neural nets; he was concerned about the player piano.
One of Sousa’s concerns outlined in his oppo piece
A few weeks later, as I was getting ready to talk with Jarobi White from A Tribe Called Quest, the echoes got louder. I kept running into similar indictments. This isn’t real creativity, some say. It just copied music that came before, stealing bits and pieces from other artists, slicing them up, and recombining them without permission. It cheapens the art. It steals jobs from real musicians. Mark Volman of The Turtles summed it up, saying, “[It] is just a longer term for theft. Anybody who can honestly say [that it] is some sort of creativity has never done anything creative.” Déjà vu. Volman and the others weren’t talking about AI. They were talking about sampling.
Then came more conversations with legendary producers, Om’Mas Keith and Jimmy Jam. More déjà vu as they shared stories about responses to innovations in the creative spaces. The nouns changed (piano rolls, drum machines, synthesizers, DAWs) but the verbs and arguments rhymed. I wanted to learn more.
That’s the seed of Creative Frontiers. I’m not here to crown winners, write manifestos, or install a master theory. This is a learning tour. I want to understand why these arguments against new technology sound the same across centuries, what’s genuinely new each time, and what previous debates and resolutions can teach us today.
The question at the center is: How do humans respond to innovation and what can we learn to make more Makers, and consequently more abundance and more human flourishing?
Now, I’m not anti-alarm. Some alarms save lives and catalogs. I’m just pro-curiosity. The silent album is a statement. Sousa’s commentary was one too. Both carry a real fear: losing what we love, and what is good, to a machine. But history suggests that most of the time, the machine ends up in the band, and for the better. The player piano didn’t erase voices; it taught songs to households without a teacher. Samplers didn’t end creativity; they helped create new genres.
Maybe AI will be different. Maybe not. Either way, I want to understand the patterns before we write the rules.
No grand conclusions, just an invitation. If you’re curious about how creativity and innovation and technology keep bumping into each other, and why the soundtrack of that collision keeps repeating, pull up a chair. I’ll bring the archives. You bring your questions and ideas. Let’s see what we can learn together.
The United States already leads the world in high-tech development. But policy, not technology, now stands in our way.
The Right to Compute Act might sound abstract, but it’s about something every Ohioan should care deeply about: the freedom to think, build and innovate with the tools of the modern age.
Over the past two years, states have raced to regulate artificial intelligence — which is just another way of saying “advanced computing.”
More than 1,000 AI-related bills have been introduced nationwide, from deepfake bans to rules for “high-risk” algorithms.
Some are necessary; others risk overreach. What’s often missing in these debates is a simple baseline: the recognition that Americans have a fundamental right to use computers — to access and apply computational power — without government permission or arbitrary limits.
That’s what Montana affirmed earlier this year when it became the first jurisdiction in the world to enact a Right to Compute Act.
The law guarantees that individuals and organizations can own and use computational resources — hardware, software, algorithms, even quantum systems — unless the government can show a compelling reason to restrict them. It pairs that freedom with sensible guardrails for critical infrastructure, requiring companies to follow national safety frameworks like NIST’s AI Risk Management Framework.
Now Ohio has the opportunity to join Montana.
The Buckeye State is already a computing powerhouse.
The data center corridor outside Columbus is home to Amazon Web Services, Google and Meta facilities.
Intel’s $20 billion chip-manufacturing investment near New Albany promises to make Ohio a global center for advanced computation. Universities like Ohio State and Case Western Reserve are training the next generation of AI researchers and engineers.
But this promise comes with risk.
Technology could be restricted
Some lawmakers in other states are flirting with laws that restrict access to computing power based on who you are, how much you use, or what you’re building.
California and New York have floated measures to license AI developers or cap computing use at arbitrary thresholds. President Biden’s now-revoked Executive Order 14110 tried to impose federal controls on AI development based on the number of chips in a server — an approach copied from Europe’s more bureaucratic AI Act.
Without a clear right to compute, Ohio’s innovators could face the same uncertainty.
Entrepreneurs and researchers need to know that they can build, experiment and scale without the rug being pulled out from under them by a regulator who suddenly decides their computer is “too powerful.” A right to compute also protects individual citizens’ ability to use and operate computers, from the smartphone to the home server.
The Right to Compute Act is not a “hands-off” approach to AI.
Act will ensure balance
It simply restores constitutional balance: The government must justify restrictions, not the other way around. Fraud, deception and harassment remain illegal, and critical-infrastructure systems must still follow recognized safety standards.
For Ohioans, this means economic growth grounded in freedom. The same principles that made this state a manufacturing and research leader in the 20th century can make it a leader in 21st-century innovation.
A legal guarantee of computational freedom tells investors, students and entrepreneurs alike: Ohio is open for building.
This isn’t a partisan idea.
Montana’s version passed with strong bipartisan support. Protecting lawful access to computational tools is a practical step toward ensuring that AI and advanced computing benefit everyone, from small businesses in Dayton to students at Ohio State and farmers using smart equipment in rural counties.
Ohio can set global standard
History teaches that rights are easiest to defend before they’re lost.
Just as free speech protections had to be reaffirmed for the internet age, the right to compute updates a timeless principle for a new era: Citizens, not bureaucracies, should decide how they use their tools of thought.
If Ohio enacts this law, it won’t just follow Montana’s example; it will set a global standard for freedom, innovation and competitiveness.
Legislators should seize this opportunity to keep the Buckeye State at the forefront of America and the world’s technological future.
In a world where governments are beginning to decide who may compute and who may not, Ohio can send a clear message: In this state, the power to think, build and innovate belongs to the people.
Bolstering Data Center Growth, Resilience, and Security
Introduction and Summary
Thank you for the opportunity to participate in this Request for Comment. The Abundance Institute is a mission-driven nonprofit focused on creating space for emerging technologies to grow, thrive, and reach their full potential. Data centers represent the backbone for developing various new technologies in both the digital and the physical spaces. I am Josh T. Smith, the Energy Policy Lead at the Institute. Our energy policy work has focused on interconnection queues, data center regulation, and institutional differences in the governance of regional transmission operators (RTOs).
Reporting and public conversations have correctly identified energy supply as the critical problem for data centers. But the concern is often overstated, because such analyses effectively ignore half of the equation, looking only at the growing demand for electricity.
My central advice to the National Telecommunications and Information Administration (NTIA) is to examine both sides, supply and demand. There are both large energy users looking for ways to meet their energy needs and substantial energy resources looking to connect and supply that energy. A successful NTIA report would establish what holds back would-be energy suppliers from serving that demand and recommend solutions for regulators at every level.
To summarize our suggestions for the eventual National Telecommunications and Information Administration report on data centers, NTIA should:
Design and suggest policies that leverage market signals to guide energy investments.
Encourage federal, state, and local action to streamline permitting of data centers and their related energy infrastructure. In particular, NTIA should encourage regional transmission operators and states to consider how they interconnect resources. Texas, which employs an energy-only approach and a philosophy of “connect and manage,” is the only system operator not slowing dramatically.1
Resist calls to require additionality in the supply of energy sources in favor of relying on market signals to energy suppliers and private additionality and matching efforts.
Allow and encourage innovative solutions to energy needs, such as co-location and flexibility, to continue evolving and developing. To maintain certainty as people experiment, policymakers should apply existing and well-known cost allocation principles to these new business practices.
My reply to the request for comments is responsive to questions 1, 2(a), 2(c), 2(e), 3(a), 3(b), 3(c), 4(c), 5(e), 7, 7(a), 7(b), 7(c), 7(e), 7(f), and 11.
Building Abundant, Reliable Energy for All Users
In question 3(a), the NTIA asks if “an imbalance between demand and supply” of energy is expected. Blackboard drawings of supply and demand curves from Economics 101 imply a more fixed view of markets by focusing on an end state rather than the process.
In reality, supply and demand equilibrate over many different choices and actions of many different actors. The long-run and short-run equilibrium can be very different as short-term price increases incentivize new entrants, bringing down prices. Prices are usually cast as the villain in public discussions. Economists instead emphasize that prices are the heroes. Policymakers should approach energy questions with this process and the role of prices in mind.
In practice, this means considering what prevents supply from entering the market. Here, the answers are straightforward. Addressing energy needs swiftly and effectively requires a dual focus on permitting reform and interconnection improvements.
To reform the interconnection process, the NTIA should encourage RTOs and states to learn from the successes of the Texas “connect and manage” style of regulation.2 The energy-only system is simpler for compliance and evaluation. It allows dramatically greater amounts of energy supply to be connected to the system in much less time.3 In addition, researchers have recently laid out fundamental and extensive deficiencies in the capacity market approach.4
On permitting reforms, the NTIA should encourage state and local governments to expedite permits for data centers and related energy infrastructure. There is also a growing number of barriers to renewable projects, such as local bans on wind and solar.5 Even homeowners associations are sometimes barriers to installing solar, batteries, or other energy technologies at residential locations.6 The NTIA should recommend ways to overcome this localized opposition.7
The last 20 years are a better guide than the last 24 months
Neither of these two changes, permitting reform or interconnection queue solutions, represents an overnight fix. Taking a view of the next few years, rather than the last few weeks, is vital for setting good policy. The history of energy and computing is a more useful guide than intemperate news reports. Keep in mind that dramatic improvements have been seen in computing efficiency. One team summarized the global trend as a six-fold increase in computing with only a one-quarter increase in energy use.8 On those figures, energy use per unit of computing fell by roughly four-fifths. There is little reason to doubt continued efficiencies.
Past misses in estimating the future energy requirements of the internet and personal computing should feature prominently alongside claims that data centers will consume outsized shares of electricity.9 The early history of personal computers was replete with poor analysis. Echoes of this can be seen today in confusion between the growth rates and the absolute growth required by data center expansions.10
To the extent that recent news reports have highlighted energy consumption increases or emissions increases, these reflect temporary trends and upfront costs in developing AI. As artificial intelligence improves, we should see energy efficiency rise and new ways to reduce environmental costs emerge.11 Because energy costs are a substantial portion of data center operations, there are natural and pre-existing motives for data centers to find solutions that reduce those costs.
Additionality requirements are counterproductive and unnecessary
Marrying reforms that streamline permitting with ill-defined questions of additionality is impossible. An additionality requirement merely substitutes one regulatory thicket for another. The arguments around hydrogen tax credits are a concrete example of the problems of mandated additionality. A requirement that data centers bring their own supply, whether that is defined as “clean” or defined as “dispatchable,” introduces uncertainty and discourages data center development.12
Because the interconnection queue is overwhelmingly made up of clean generators, there is no need to apply additionality requirements to data centers. That is, requiring data centers to build equal supplies of their own energy generation is misplaced. Instead, regulators should focus on removing barriers to new supply entering the market. As I wrote in Heatmap with Alex Trembath of the Breakthrough Institute:
There are more than enough clean generators queueing to enter the system — 2.6 terawatts at last count, according to the Lawrence Berkeley National Laboratory. The unfortunate reality, however, is that just one in five of these projects will make it through — and those represent just 14% of the capacity waiting to connect. Still, this totals about 360 gigawatts of new energy generation over the next few years, much more than the predicted demand from AI data centers. Obstacles to technology licensing, permitting, interconnection, and transmission are the key bottlenecks here.
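The arithmetic behind those quoted figures is worth making explicit. Here is a quick, illustrative check using only the numbers in the passage above:

```python
# Back-of-the-envelope check of the quoted interconnection queue figures.
# Numbers come from the passage above; this is illustrative only.
queue_tw = 2.6          # capacity waiting in the queues, in terawatts
capacity_share = 0.14   # share of queued capacity expected to connect

expected_gw = queue_tw * 1000 * capacity_share
print(f"Expected new capacity: ~{expected_gw:.0f} GW")  # ~364, i.e. "about 360"
```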
Finally, data center companies are already investing significant resources into building more generation on top of matching their demand with real-time clean energy generation. There is no need to mandate ongoing actions. NTIA should consider recommending that agencies work with companies in their private pursuits to green their energy consumption and supply chains, for example, by assisting in relevant data collection or by making it easier to build the relevant energy assets.
Co-location should be allowed to develop further under existing cost allocation rules
The emergence of co-location between energy generation and data centers suggests that the electricity market is an innovative area. The Federal Energy Regulatory Commission’s recent conference demonstrates that there are open questions about co-location.13 Co-location should be allowed and further studied, and traditional practices of cost causation should be applied to prevent cost-shifting.
In addition, policymakers must consider the long term. Complaints that a data center co-locating with an existing nuclear or other “clean-firm” generator takes supply from the market or other consumers are short-sighted and fundamentally confused. This is the way all markets work. If I purchase a loaf of bread, then that loaf is no longer available to you. However, my purchase encourages breadmakers to expand the supply. Electricity is certainly a more complicated good than bread, but the market process in the background is the same.14 Policies directly lowering the cost of new entry for energy suppliers will go much further than objecting to new business models for data centers and energy companies that may actually reduce total system costs.
Flexibility from data centers should be enabled but not required
Similarly, the ability of energy consumers to flexibly adapt to grid conditions is a young practice. It is an area where public agencies should not yet impose firm requirements. However, the NTIA could recommend that regulators at state and local levels begin reconsidering how to design rates that encourage flexibility that does not fit the already familiar versions of demand response. One example is a 2016 data center development in Wyoming. The data center employs its backup generation to serve the wider grid, which reduces costs for both the data center and the local grid.15
These actions should enable two forms of flexibility. First, the flexibility that comes from relying on backup and co-located energy assets in response to grid conditions must be enabled by policy. Data center companies have already shown interest in this option. Second, flexibility arising from the nature of the computing at the data center may also face policy barriers. Some data centers require 100 percent uptime. Other workloads with less stringent latency requirements can be shifted off the system’s peak times to support the grid’s safe and reliable operation.
Regulators need to enable such cases of flexibility. One option is to create a process for joining the system that accounts for the expected peak load contributions of flexible loads. Requiring that all data centers adopt such practices will backfire because computing needs differ across computing uses. However, adding new pathways onto the grid expands options and possible business models. Because the system is heavily permissioned today, new options are valuable to operators and data centers.
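To make the idea concrete, here is a minimal sketch of how such a process might credit a flexible load. The split between firm and shiftable load is hypothetical, chosen only to illustrate the arithmetic, not a proposed rule:

```python
# Illustrative arithmetic: a data center that can shift deferrable compute
# off peak hours contributes far less to system peak than its nameplate
# demand suggests. All numbers here are hypothetical.
FIRM_MW = 40            # latency-sensitive load that must always run
FLEXIBLE_MW = 60        # batch/training load that can be rescheduled
SHIFTABLE_SHARE = 0.9   # fraction of flexible load movable off-peak

peak_contribution = FIRM_MW + FLEXIBLE_MW * (1 - SHIFTABLE_SHARE)
print(f"Nameplate demand: {FIRM_MW + FLEXIBLE_MW} MW")            # 100 MW
print(f"Expected peak contribution: {peak_contribution:.0f} MW")  # 46 MW
```

A queue process that credits the 46 megawatts rather than the 100 would reward flexibility without mandating it.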
Requiring flexibility, enrollment in traditional demand response programs, or singling out data centers to be first to have their loads shed sets poor incentives for the entire system. It singles out specific solutions in a novel industry where such rules could easily prevent better solutions from emerging.
Conclusion
By fostering a market-driven approach to energy access and encouraging permitting reform, the NTIA can create a supportive environment for data centers, facilitating their role in driving technological advancement and economic growth across the country.
I appreciate your efforts on this question and would welcome the opportunity to work with you or answer further questions if I can be of any assistance.
4 For an overview of these problems, see the work of Todd Aagaard and Andrew N. Kleit, especially their book Electricity Capacity Markets (Cambridge, UK; New York, NY: Cambridge University Press, 2022).
8 Eric Masanet et al., “Recalibrating Global Data Center Energy-Use Estimates,” Science 367, no. 6481 (February 28, 2020): 984–86, https://doi.org/10.1126/science.aba3758.
9 See, for example, the careful work of Jonathan Koomey as compared to other claims that computers would use half of all electricity. For a useful overview, Robinson Meyer’s reporting for Heatmap is an excellent introduction: “Is AI Really About to Devour All Our Energy? There is precedent for this panic,” Heatmap, April 16, 2024, https://heatmap.news/technology/ai-energy-consumption. For the academic debunking of more extreme claims, see: Jonathan G. Koomey, “Worldwide Electricity Used in Data Centers,” Environmental Research Letters 3, no. 3 (July 2008): 034008, https://doi.org/10.1088/1748-9326/3/3/034008; Jonathan G. Koomey et al., “Sorry, Wrong Number: The Use and Misuse of Numerical Facts in Analysis and Media Reporting of Energy Issues,” Annual Review of Energy and the Environment 27, no. 1 (November 2002): 119–58, https://doi.org/10.1146/annurev.energy.27.122001.083458; Jonathan Koomey, “Separating Fact from Fiction: A Challenge for the Media [Soapbox],” IEEE Consumer Electronics Magazine 3, no. 1 (January 2014): 9–11, https://doi.org/10.1109/MCE.2013.2284952; Jonathan G. Koomey, “Rebuttal to Testimony on ‘Kyoto and the Internet: The Energy Implications of the Digital Economy,’” n.d.
Today presents a unique opportunity for states to step into the driver’s seat on nuclear policy:
A federal administration supportive of nuclear.
High demand for electricity is a certainty.
New users have significant resources and are eager to invest in energy sources that are reliable, clean, and available 24/7.
Idaho National Laboratory has 11 test reactors in an accelerator program to be tested by July 4, 2026, a number of new designs unseen since the early days of American nuclear power.
An ongoing lawsuit led by Texas, Utah, and several nuclear companies could give states new authority over small modular reactor development.
Despite this, most states are not yet prepared to seize the moment. The Overturn Prohibitions & Establish a Nuclear Coordinator (OPEN Act) model policy will prepare states to lead a new atomic age.
The OPEN Act lays the groundwork for states to permit and begin construction on new nuclear facilities within 180 days. It draws on the recent experience Utah, Texas, and other states have gained in laying guardrails and tracks for new nuclear development.
What the OPEN Act does
Ends nuclear bans and special hurdles.
Prevents new nuclear bans.
Creates a one-stop state-level lead and authority for nuclear development.
Sets fast, concurrent review expectations.
Benefits
The OPEN Act advances a state’s interest in a reliable and affordable energy supply. It capitalizes on a moment that may never come again. Your entire state will benefit, and America will continue to lead the world:
Economic growth in artificial intelligence and manufacturing will be supercharged by safe, reliable nuclear power.
New growth will mean new jobs in the nuclear sector and in the attendant industries, powered by new reactors.
Boosted tax revenues will flow to local and state governments from the development of new energy infrastructure and 24/7 data centers.
Today’s conditions echo an era when America built nuclear power plants swiftly, safely, and cheaply. In 1968, Connecticut Yankee came online after roughly five years, at a price tag of about $1 billion in today’s terms. Since then, the regulatory process has smothered new nuclear proposals, resulting in only a few new plants coming online, years behind schedule and billions of dollars over budget.
We can realize a future of “too cheap to meter” with quick and definite action today.
Resources
“A Lawless NRC Obstructs Safe Nuclear Power,” Wall Street Journal, Christopher Koopman and Eli Dourado, Jan. 5, 2025.
Josh T. Smith, House Oversight Committee testimony on nuclear policy and the role of states in building nuclear swiftly, safely, and cheaply.