API Content Preview

Latest objects from all public post types.
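Each object below follows the same rough shape: a post type header ("Article", "Team Member", "Fellow", "Person"), a title and body, and a Custom Fields block with keys such as is_displayed and display_order. As a minimal sketch of how a consumer might work with that shape, the TypeScript below keeps only displayed posts and sorts them by display order. The endpoint URL, the PreviewObject interface, and the visiblePosts helper are illustrative assumptions, not part of any documented contract for this API.

```typescript
// Hypothetical shape of one preview object, inferred from the custom
// fields shown in this dump. These names are assumptions for
// illustration, not a documented contract.
interface PreviewObject {
  post_type: string; // e.g. "Article", "Team Member", "Fellow", "Person"
  title: string;
  custom_fields: Record<string, string | boolean | string[] | undefined>;
}

// Parse display_order when present and numeric; otherwise sort last.
function displayOrder(obj: PreviewObject): number {
  const raw = obj.custom_fields["display_order"];
  const n = Number(raw);
  return typeof raw === "string" && raw !== "" && !Number.isNaN(n)
    ? n
    : Number.MAX_SAFE_INTEGER;
}

// Fetch the feed, keep objects flagged as displayed (the dump shows both
// boolean- and string-valued fields), and order them for rendering.
async function visiblePosts(endpoint: string): Promise<PreviewObject[]> {
  const res = await fetch(endpoint);
  if (!res.ok) throw new Error(`Preview fetch failed: ${res.status}`);
  const objects: PreviewObject[] = await res.json();
  return objects
    .filter((o) => {
      const flag = o.custom_fields["is_displayed"];
      return flag === true || flag === "true";
    })
    .sort((a, b) => displayOrder(a) - displayOrder(b));
}
```

Objects with an empty display_order, common in this preview, simply sort after the explicitly ordered ones.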

Article

Safeguarding Computational Liberty in America

Published by the James Madison Institute.

State governments are moving at breakneck speed crafting policy on artificial intelligence. In just two years, lawmakers have passed dozens of bills targeting deepfakes in campaigns, shielding citizens from abusive synthetic media, and creating rules for high-risk applications. In 2025 alone, over 1,000 AI-related bills were introduced across the states.

Most Americans assume that the freedom to access and use computing power, the very foundation of modern innovation, is secure. Yet in practice, that freedom is under threat. From California to New York, legislatures and governors are chipping away at this liberty, treating computation itself as something the public must be shielded from rather than empowered by. This is not a small matter: it strikes at a core pillar of the American experiment—our ability to think, invent, and build with the tools of the age.

Montana charted a different course. In spring 2025, it became the first jurisdiction in the world to enact a right to compute: a statutory guarantee that individuals and organizations can own and use computational resources unless the government can demonstrate that restrictions are narrowly tailored to achieve a compelling interest. This simple but profound step filled a glaring gap in state, and even global, AI lawmaking.

Montana’s Right to Compute Act, signed in April 2025 after strong bipartisan votes, creates a clear default of freedom for its citizens: government actions that would restrict lawful use or ownership of “computational resources”—hardware, software, algorithms, cryptography, machine learning, networks, even quantum applications—must be narrowly tailored and demonstrably necessary to serve a compelling government interest. That language is not rhetoric; it’s the operative standard, and the statute provides practical definitions that will help agencies, courts, and businesses apply it.

Montana pairs this rights‑affirming law with targeted safety measures for critical infrastructure. If an AI system helps operate a critical facility, the deployer must maintain a reasonable risk‑management policy that references widely recognized standards—explicitly including the NIST AI Risk Management Framework (AI RMF) or comparable international frameworks. This is governance that adapts as best practices evolve, instead of freezing technology in statute.

Why Government Should Protect Computational Liberty

This raises the question: why is explicit legal protection for computational rights necessary now? Americans have, after all, been using computers for decades without a specific “right to compute” enshrined in law. The answer lies in the changing global and domestic regulatory landscape. A computer, like the abacus and slide rule before it, is simply a technological amplification of human cognition. In the 21st century, access to computational resources increasingly determines who can participate fully in economic, civic, and intellectual life. Computers enable economic growth and an improved quality of life that benefits all Americans. Most of all, the computer represents opportunity.

As computers become more intertwined with daily life, computational resources and access are increasingly subject to government restrictions, often based on how much processing power a system uses, what tasks it performs, or who is using it. Montana’s approach is rooted in a deeper philosophical principle: computational freedom is not a privilege to be granted by the government but a natural extension of rights we already possess that should be protected by the government.

This isn’t merely abstract philosophy. We’ve already seen how governments can abuse control over computational resources. In the UK, the government requires identification before citizens can access the internet and is now implementing a digital ID system. China’s government imposes even stricter requirements on its citizens’ ability to access the internet. Similar proposals in the US would require verification before citizens can access app stores or even purchase a smartphone. President Biden’s Executive Order 14110 imposed regulations on AI development based on arbitrary computational thresholds, modeled on the European Union’s AI Act. Fortunately, President Trump nullified that executive order. All these approaches, and similar ones that could easily be proposed in the future, give regulatory agencies sweeping discretion to determine who may access computational power and under what conditions. A right to compute law provides a firewall against this kind of creeping technocratic control.

Why other states should adopt a Right to Compute

First, it keeps the focus on bad conduct, not tools. State laws already prohibit almost all harmful uses of AI without outlawing general‑purpose computing. A right to compute complements current law by clarifying that open‑ended innovation remains presumptively lawful, while fraud, deception, and harassment remain illegal. It is a freedom-preserving measure for all citizens of the state, providing individuals with a defensive mechanism against government overreach.

Second, it opens the door for builders. Entrepreneurs, universities, and small firms need assurance that new code, chips, and models won’t be preemptively banned just because they’re new or particularly powerful. A clear statutory presumption in favor of lawful compute lowers the “unknown unknowns” that can chase investment away from emerging tech hubs and university research corridors.

Third, it strengthens economic competitiveness. AI has unleashed a race to expand computing capacity and the infrastructure behind it—power, fiber, data centers, cooling, and skilled labor. States sending a stable, pro‑innovation signal will compete better for the projects, jobs, and grid upgrades that come with this build‑out.

Who’s moving next?

Montana won’t be alone for long. Ohio legislators introduced the Ohio Right to Compute Act this summer, signaling widespread interest in transplanting the same framework—affirm the right, define the terms, and pair it with risk management for AI in critical infrastructure. New Hampshire is considering a right to compute constitutional amendment. The American Legislative Exchange Council adopted and released a right to compute model bill that closely tracks Montana’s structure, giving states a starting point to adapt to local law.

Despite all the benefits, there are some common critiques of this bold approach.

“Isn’t a right to compute a hands‑off approach to AI?” No. It merely forbids broad, preemptive bans on tools while preserving enforcement against deception, fraud, harassment, IP infringement, and safety risks. Montana’s law even enumerates compelling interests to make that point unmistakable. And where AI touches critical infrastructure, it requires documented risk management tied to national standards. It shifts the burden onto the government to demonstrate that regulation is required.

“Won’t this tie regulators’ hands as AI evolves?” No. It merely puts an additional barrier between government regulation and an individual’s right to use their property. As the Montana bill and model bills stipulate, there must be a compelling government interest, so regulation is still possible if the reason meets that standard. The core rule—punish harmful conduct, not generalized capability—ages better than technical mandates that hard‑code today’s assumptions. Americans currently have broad access rights to computers, and that has not prevented law enforcement from prosecuting bad actors who use computers to break the law.

“Isn’t it premature to enshrine legal protections for technology we don’t yet fully understand?” This objection gets the question backwards. The right to compute doesn’t create a new right; it affirms an existing one. Just as the First Amendment protected speech before anyone imagined the internet, and the Fourth Amendment protected privacy before digital communications existed, the right to compute simply legally enshrines the notion that fundamental rights apply to new technologies. The alternative—waiting until we “fully understand” all forms of future computing before protecting access to it—would mean years or decades of regulatory uncertainty that could crush innovation and leave citizens vulnerable to government overreach.

A practical, bipartisan win

Every state wants the jobs, research, and productivity gains unlocked by AI and advanced computing. At the same time, policymakers hear concerns about deception, discrimination, and infrastructure strain. A right to compute resolves that tension with a simple principle: default to freedom for lawful computation, create targeted safeguards when harms are known, and keep enforcement aimed only at bad actors.

Montana’s statute shows it can be done in a few pages. For legislatures that want to compete for entrepreneurs and new technologies in the global marketplace, the right to compute is a natural next step. It tells people everywhere the same thing: build here.

Custom Fields

hook
State governments are moving at breakneck speed crafting policy on artificial intelligence. In just two years, lawmakers have passed dozens of bills targeting deepfakes in campaigns, shielding citizens from abusive synthetic media, and creating rules for high-risk applications. In 2025 alone, over 1,000 AI-related bills were introduced across the states.
article
false
include_in_hero_section
false
category
Op-eds
topic
  • Technology
technology_subtopic
article_view
article content only
social_image
false
is_displayed
true
display_order
Article

Unwrapping 2025 State AI Policy

On the 12th Day of Christmas…okay, okay we’ll hit pause on singing another Christmas carol and instead take a look back at the year that was in AI policy across the states here at the Abundance Institute.

🎄🎄🎄🎄🎄

The exact number varies depending on what you think counts as “AI legislation,” but even at lower-bound estimates, hundreds of related bills were introduced and considered in statehouses from Hawaii to Maine this year. Our team at the Abundance Institute weighed in on, well, a lot of them.

1. Right to Compute

  • On the pro-innovation side, Montana passed SB 212, a first-in-the-nation proposal that reminds us that AI or advanced computing is a general-purpose technology that we should be free to enjoy as citizens. The use of this technology isn’t something granted to us by the government, but a freedom to be protected by the government.
  • This concept captures an Abundance mindset for AI policy very well and has caught on elsewhere: it has been proposed as a constitutional amendment and a pre-filed bill for 2026 in New Hampshire, adopted as a model bill at the American Legislative Exchange Council, and introduced as a bill in Ohio. Our own Taylor Barkley submitted written testimony on HB 392 in Ohio, published an op-ed in The Columbus Dispatch, and published an article with the James Madison Institute.

2. Colorado’s Quagmire

  • Passed in 2024, SB 205 has yet to be implemented in Colorado. This European Union-style approach to AI governance is heavy-handed and scheduled to take effect after the upcoming legislative session in 2026. When Governor Jared Polis signed the bill into law he expressed serious reservations—signing it as a tradeoff to move separate legislation—and so have other political, business, and community leaders across the state.
  • The Abundance Institute has explained throughout the year why this bill will harm innovation and individuals, and it appears other states have listened: no other state has passed the same legislation despite ample opportunities to do so. The Abundance Institute is also right in the middle of trying to improve this looming legislation, with our team collaborating with key experts in the state to find a solution that prevents this Sword of Damocles from falling on Colorado’s economy.

3. Turmoil in Texas

  • The Lone Star State gained national attention in the AI space earlier this year when HB 1709 was introduced: the Texas Responsible AI Governance Act (TRAIGA). Our own Christopher Koopman penned an op-ed for the Houston Chronicle which stated, “The [bill] would create some of the strictest artificial intelligence regulations in the country, echoing recent California bills the legislature wouldn’t pass and Gavin Newsom wouldn’t sign because they were too extreme.” After Chris’ op-ed, this legislation was pulled, overhauled, and reintroduced in a different form, which ultimately passed late in the Texas session as HB 149. The final bill was an improvement over the initial proposal, and efforts to reform it will continue next year while the Texas legislature sits idle.

4. Connecticut Connection

  • Just as he did last year, State Senator James Maroney introduced SB 2: An Act Concerning Artificial Intelligence. When the General Law Committee held a hearing on the bill in February, it kicked off with Connecticut DECD Commissioner Daniel O’Keefe making the case against this preemptive regulatory approach. The discussion was excellent and continued through Neil Chilson’s virtual testimony and Q&A with committee members. The bill, same as last year, was not taken up in the House, and we are hopeful that Sen. Maroney, one of the most well-versed state policymakers, will take up a more Abundance mindset on AI in 2026!

5. California Conundrum

  • There were over 40 AI bills introduced in California alone in 2025! Neil Chilson and Taylor Barkley submitted written testimony on AB 1018 and SB 813 to highlight just a couple of instances where the Abundance Institute weighed in on AI governance in The Golden State. Governor Gavin Newsom vetoed a handful of notable AI bills in 2025, just as he vetoed SB 1047 last year, but other bills like SB 53 and SB 243 were signed into law. Neil Chilson has offered a series of reforms to SB 53 that would improve the legislation and should be considered by California policymakers next year.
  • Neil also submitted a comment to the California Privacy Protection Agency regarding proposed regulations governing Automated Decision-Making Technology (ADMT) under the California Consumer Privacy Act (CCPA). Neil Chilson’s comment highlighted how the CPPA’s proposed changes were overly burdensome and costly, with minimal demonstrated consumer benefit. They risked exceeding the CPPA’s legal authority, infringing on First Amendment rights, and transforming the CCPA from a privacy law into a de facto AI regulation regime.

6. Neverending in New York

  • Policymakers in Albany must have been fed up with the attention Sacramento was getting, as New York managed to propose even more bills regulating AI! Abundance Institute provided real-time analysis on several proposals, including A8884: The NY AI Act, which ultimately failed to pass. Neil Chilson joined a letter expressing concerns about the impact of State Assemblymember Alex Bores’ bill A6453: The RAISE Act, which helped ensure positive reforms were made before it passed the legislature and was signed into law by Governor Kathy Hochul on Friday.

7. Veto in Virginia

  • One of the more notable bills that made it to a governor’s desk this year was HB 2094 in Virginia, introduced by State Delegate Michelle Lopes Maldonado. Our own Christopher Koopman offered a critique of the proposal and argued that, “America’s great advantage—its gift, really—has been that it does not regulate the future before it arrives. It allows new ideas to take shape, to be tested, to flourish or fail…But a regulatory fever is spreading.”
  • Governor Glenn Youngkin’s team did their homework on this legislation and made the decision to veto it. Gov. Youngkin’s veto explanation is the mindset governors across the country should have on AI policy, and will hopefully be shared by the incoming administration led by Governor-elect Abigail Spanberger.

8. Florida Frenzy

  • Florida kicked off the legislative session with the introduction of State Representative Fiona McFarland’s HB 369: Provenance of Digital Content and a myriad of other proposals regulating AI. Abundance Institute’s pro-innovation approach was shared with Rep. McFarland and other legislative leaders as they considered the tradeoffs of new regulations in The Sunshine State.

9. Nebraska Notions

  • The nation’s only unicameral legislature makes for an interesting policymaking process, and our own Taylor Barkley had the chance to witness it firsthand as he testified (22:10 mark) on LB 642 earlier this year in Lincoln. Taylor’s testimony stated that, “We see two fundamental issues with the AICPA as drafted. First, the legislation is unnecessary…Second, the legislation is technically infeasible.” The Judiciary Committee and bill sponsor State Senator Eliot Bostar took his insights seriously as the bill failed to move forward.

10. Iowa Ideas

  • Much like Nebraska, legislators in Des Moines considered HSB 294: An Act Relating to Artificial Intelligence. We were happy to see that the bill sponsor, State Representative Ray Sorensen, opted to hit pause on the bill in 2025 and is looking to find an alternative path for The Hawkeye State during next year’s session.

11. AI Infrastructure

12. State Preemption

  • While working at the state level to help bring about better outcomes from innovation in the AI space, the Abundance Institute has also been working with Congress to ensure an overly burdensome patchwork of state regulations doesn’t stymie this inherently interstate technology. This concept has been raised in both the One Big Beautiful Bill Act (OBBBA) and the National Defense Authorization Act (NDAA). Although ultimately not included in either bill’s passage, the concept is likely to be considered in future legislation as requested by President Trump in Section 7 of his executive order, “Ensuring a National Policy Framework for Artificial Intelligence.” Read a short summary from Neil here.
  • Over the summer we worked with state level partners from across the country to send a coalition letter in support of a federal preemption. This effort brought together a great group of like-minded organizations from sea to shining sea and will help support any future preemption proposals. Much like the Internet Tax Freedom Act of 1998, Congress should be setting the rules of the road for interstate tools such as AI to ensure effective competition and efficient markets develop.

This list could go on and on, with an even longer list of thank-yous to individuals and other organizations who helped drive our ideas forward across the country, but I think this offers a good look back on some of the notable moments we had in state AI in 2025. The year ahead shows no sign of slowing down, and we will continue to be a voice for the innovators of tomorrow.

Custom Fields

hook
On the 12th Day of Christmas…okay, okay we’ll hit pause on singing another Christmas carol and instead take a look back at the year that was in AI policy across the states here at the Abundance Institute.
article
false
include_in_hero_section
true
hero_image
hero_order
1
category
Articles
topic
  • Technology
technology_subtopic
  • Artificial intelligence
article_view
article content only
social_image
is_displayed
true
display_order
Article

Neil Chilson on Federal vs. State Regulation of Artificial Intelligence

Our Neil Chilson joined C-SPAN Washington Journal to talk about President Trump’s executive order on artificial intelligence regulations. Read his work explaining how the executive order works here.

Custom Fields

authors
hook
Our Neil Chilson joined C-SPAN Washington Journal to talk about President Trump's executive order on artificial intelligence regulations.
article
false
include_in_hero_section
false
category
Media mentions
topic
  • Technology
technology_subtopic
  • Artificial intelligence
article_view
article content only
social_image
is_displayed
true
display_order
Featured Article

Raising the Cost of Bad AI Laws — Out of Control

Custom Fields

is_displayed
true
display_order
2
publication
Getting Out of Control
title
Raising the Cost of Bad AI Laws
link
https://outofcontrol.substack.com/p/raising-the-cost-of-bad-ai-laws
Article

Make permitting fast and fair

Read this post on Josh’s Substack: Powering Spaceship Earth.

To unleash the full potential of American energy, we must prioritize certainty and stability in our regulatory framework. Measures that simplify processes, regulatory requirements, and generally make permitting processes fast, predictable, and fair are vital for American energy abundance and affordability.

Today’s permitting reform conversations around the National Environmental Policy Act (NEPA) and the Clean Water Act (CWA) all represent promising steps. Measures like the Standardizing Permitting and Expediting Economic Development (SPEED) Act and the Promoting Efficient Review for Modern Infrastructure Today Act (the PERMIT Act) represent smart updates to environmental laws first passed in the 1970s.

Industry leaders are already sounding the alarm on the dangers of political whims determining energy investment decisions. Shell is the largest oil producer in the Gulf of America, yet the company explicitly warned that the cancellation of offshore wind projects sets a dangerous precedent, fearing these actions will serve as a pretext for future administrations to target traditional energy projects. This concern is echoed by broad industry voices. On December 3, the American Petroleum Institute, several gas trade associations, and the American Clean Power Association signed a joint letter on the need for certainty and endorsed the SPEED Act.

We have seen this movie before: from the Biden Administration’s pause on LNG export permits and the revocation of the Keystone XL permit to the targeting of offshore wind. This cycle of retribution, canceling and restarting permits based on who is in the White House, benefits no one. By establishing durable, neutral permitting reform, we can stop the political pendulum and give American innovators the stability they need to invest in and power our future.

True energy dominance requires an all-of-the-above approach where the market—not government favoritism—picks winners and losers. Updates to public policy through the One Big Beautiful Bill Act helped remove subsidies that distort the market. Consumers always benefit when they are in the driver’s seat, rather than having their energy choices dictated by who is best connected to the current administration. Permitting reforms that prevent the weaponization of the regulatory and permitting process are the natural successor to promote consumer choice and energy abundance.

Without permitting reform, there are still thumbs on the scale driving and destroying energy development outside of the market. We cannot afford a system where energy policy swings violently with every change in administration, creating a whipsaw effect that chills investment across the board.

By clearing the bureaucratic path for builders, we unlock a future defined by energy abundance and environmental progress. A fair, fast, market-driven regulatory landscape ensures that American innovation makes us all wealthier.

Custom Fields

hook
To unleash the full potential of American energy, we must prioritize certainty and stability in our regulatory framework. Measures that simplify processes, regulatory requirements, and generally make permitting processes fast, predictable, and fair are vital for American energy abundance and affordability.
topic
  • Energy
article
false
include_in_hero_section
false
category
Articles
energy_subtopic
  • Permitting
article_view
article content only
social_image
false
is_displayed
true
display_order
Article

Raising the Cost of Bad AI Laws

Read this post on Neil’s Substack: Getting Out of Control.

On Thursday night President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” Here are some signing comments from Trump and commentary / explanation by Crypto and AI Czar David Sacks, who drove this effort:

The EO takes seven actions:

  1. Sets the policy of the U.S. to “sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” (Much of the order applies only to state AI laws that violate this policy. I’ll call them “conflicting State AI laws” for short.)
  2. Creates a Dept. of Justice Task Force to challenge conflicting State AI laws.
  3. Tasks the Dept. of Commerce with identifying existing conflicting State AI laws and publishing a report.
  4. Requires Commerce and agencies to make certain kinds of funding conditional on whether the state has or enforces conflicting AI laws.
  5. Directs the Federal Communications Commission to begin a proceeding on whether it should require AI model reporting that preempts conflicting State AI laws.
  6. Requires the Federal Trade Commission to issue a policy statement detailing when conflicting State AI laws are preempted by the FTC Act’s prohibition on deceptive acts or practices.
  7. Directs presidential advisors to prepare draft federal AI legislation that preempts conflicting State AI laws, with no preemption for four buckets of state laws, including “child safety protections.”

FIRST THINGS FIRST: If you are reporting on the EO or arguing about it online, I implore you to READ IT YOURSELF. It’s only 1400 words, and it’s clearly written with little legal jargon. You’ll save yourself the potential embarrassment of repeating incorrect talking points from people who are misrepresenting it out of ignorance or malice.

But there are some things that might not be obvious to everyone from reading it. Here are my key takeaways, Section by Section. You should think of these as the key things people might fight over in the EO, or key things they might ignore as inconvenient to their position.

SEC. 1 — PURPOSE

What it does: Sets forth the purpose of the EO.

What you should know: This is important: The President clearly intends the EO to serve as a stopgap against the worst state AI laws until Congress does the necessary work of establishing a minimally burdensome national standard that protects kids, prevents censorship, respects copyrights, and safeguards communities. This is not a permanent “fix.” Importantly, the EO CREATES NO NEW PREEMPTION. The EO clearly and properly recognizes that the executive branch cannot do that. The statements in the video above as well as the text of the EO drive home that Congress must act. And when it does act, Congress must preserve an important role for states, while recognizing that the federal government must lead on this nationally important technology.

SEC. 2 — POLICY

What it does: The EO sets as the policy of the United States “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”

What you should know: This is based.

SEC. 3 — CREATION OF A DOJ AI LITIGATION TASK FORCE

What it does: Establishes a Task Force at the DOJ with the sole purpose of suing states over conflicting AI laws.

What you should know: Two observations. First, DOJ doesn’t need an EO to challenge illegal and unconstitutional state laws. It has that authority now. But this creates an institutional structure that will be held responsible for doing so.

Second, as I already noted, this does not ban any state laws that were legal before the EO was issued. It doesn’t preempt state laws. DOJ will need to persuade a court that every law challenged is unlawful under current law.

Of course, litigation imposes costs on defendants even if they win, so this task force could have an overall chilling effect on state AI legislation. That’s the point. Up until now, states faced little cost of any kind for imposing vague, unworkable, and extraterritorial restrictions on AI developers, deployers, and users. Now they’ll at least have a reason to think twice.

SEC. 4 — EVALUATION OF STATE AI LAWS

What it does: The Secretary of Commerce must publish an evaluation of existing conflicting State AI laws, and identify which laws should be referred to the Section 3 Task Force.

What you should know: The EO singles out for scrutiny laws that implicate speech, including those that “require AI models to alter their truthful outputs” or those that violate the Constitution by requiring disclosures or reports by AI developers or deployers. First Amendment lawyers, start your engines — there are really interesting questions here.

Echoing past language from various congressional measures on preemption, the Secretary is also permitted to “identify State laws that promote AI innovation…” This could inform the Sec. 8(b)(iv) “other topics” carveouts from preemption that will be in the White House’s recommended legislation.

I suspect there are going to be a lot of state Governors and other stakeholders seeking to meet with the relevant Commerce staff to lobby for their various state laws. I can already imagine some of the arguments they’ll make.

SEC. 5 — RESTRICTIONS ON STATE FUNDING

What it does: Substantively, this is the most complex requirement of the EO. It obligates Commerce to issue a Policy Notice specifying that states with “onerous AI laws” as identified in the Sec. 4 report discussed above or challenged by the Sec. 3 Task Force “are ineligible for non-deployment funds” from the Broadband Equity Access and Deployment (BEAD) Program, “to the maximum extent allowed by Federal law.” This section also directs other “executive departments and agencies” to determine whether they can condition any discretionary grants on states not passing or enforcing conflicting AI laws.

What you should know: I am not sure how large a bucket of BEAD money this involves (one of my telecom law buddies probably knows) or to what extent federal law would permit these kinds of conditions. However, this does strike me as one of the more legally risky areas of the EO, because there are large private telecommunications companies who would be receiving this money from the states and who may have the incentive and means to sue to challenge any such conditions, if they are applied aggressively. As for the other agencies’ $$$, it is even less clear how much money this affects — it’s the sort of thing that probably would be hard for even the White House to determine independently. That’s why the agencies are tasked with it. But I suspect this isn’t a massive amount of money. On top of that, most agencies probably don’t want to mess with their existing programs and may resent this extra work. Institutional incentives lean toward agencies minimizing the amount affected. As such, I expect this to be a relatively low-impact provision.

SEC. 6 — PREEMPTIVE FEDERAL REPORTING REQUIREMENT

What it does: This section requires the Federal Communications Commission to start a proceeding asking whether it should adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.

What you should know: Note that this doesn’t require the FCC to actually adopt such a provision. It requires what is known as a “Notice of Inquiry” or “NOI”, which is what agencies sometimes do before they start a rulemaking to ask whether they actually should start a rulemaking.

My own initial view is that the FCC would be a strange place to house such AI reporting and disclosure requirements, and I have questions about the FCC’s legal authority to do this. But I look forward to digging in and commenting on the forthcoming NOI.

SEC. 7 — FTC UDAP PREEMPTION

What it does: This section directs the Federal Trade Commission to issue a policy statement identifying situations in which a State requirement to “alter[] truthful outputs of AI models” is preempted by the FTC Act Section 5’s “Unfair and Deceptive Acts or Practices” (UDAP) authority.

What you should know: This section is fascinating to me because during my time at the Federal Trade Commission I dealt with many dozens of cases involving the FTC’s UDAP authority. I have never seen it applied like this, but it doesn’t strike me as obviously wrong. The theory seems to be that if a State law requires a company to lie, but Section 5 prohibits a company from lying, those laws are in direct conflict and therefore Section 5 preempts the law. I guess this Policy Statement would be used in court by companies defending themselves against such laws?

I want to think more about this, including what it could mean for other state laws that arguably require “lying.” For example, California’s cancer labeling requirement probably wouldn’t be substantiated under typical FTC standards. There are a bunch of green labeling / environmental disclosure requirements that similarly probably require companies to bend the truth, or at least not fully represent its nuance.

Also, the deception statement doesn’t mean every false statement is a violation. To be deceptive under Section 5, a false statement has to be material to a consumer, meaning they would have acted differently if told the truth. Does that mean the state AI law is only preempted where the required deception would be material?

Anyhow, very early thoughts on this — I will be writing more.

SEC. 8. — LEGISLATIVE RECOMMENDATION

What it does: Consistent with the stop-gap nature of the EO, this section jointly tasks the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology with preparing a legislative recommendation “establishing a uniform Federal policy framework” that preempts conflicting state AI laws.

What you should know: New to this version (they weren’t in the draft) are four areas carved out from what the recommendation may propose preempting. The recommendation will not include preemption of:

  1. child safety protections;
  2. AI compute and data center infrastructure “other than generally applicable permitting reforms”;
  3. State government procurement and use of AI; and
  4. “other topics as shall be determined”

That last bucket could include state AI laws that promote AI development or deployment. The second carveout, for infrastructure, is also interesting: it appears to preserve the legislative draft’s ability to recommend preempting certain state permitting practices.

These carveouts make crystal clear what supporters of various measures to contain state laws, all the way back to the July moratorium fight, had attempted to explain: there are definitely areas where states have an important role and should not be preempted.

SEC. 9 — GENERAL PROVISIONS

This is just the usual Executive Order boilerplate.

FINAL THOUGHTS

This EO is not a silver bullet, and it doesn’t pretend to be one. It does not magically wipe away state AI laws, nor could it. What it does instead is more subtle and more realistic. It raises the cost of the worst forms of state AI regulation, creates institutional pressure to test their legality, and clearly signals that the status quo of fifty competing AI regimes is unacceptable for a technology that operates at national and global scale.

Most importantly, it frames the executive branch’s role correctly: as a bridge to legislation, not a substitute for it. The hard work now shifts to Congress, where the real question is not whether there should be a national AI framework, but how it can be drawn to ensure continued American AI dominance, including by preempting overreaching state laws while preserving state authority where it makes sense.

In that sense, the EO succeeds if it does one thing above all else: it forces the debate out of abstraction and into concrete legal, institutional, and political tradeoffs. That debate is long overdue.

Watch Neil explain how the AI executive order works and the importance of a federal framework on C-SPAN here.

Custom Fields

authors
hook
On Thursday night President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” Here are some signing comments from Trump and commentary / explanation by Crypto and AI Czar David Sacks, who drove this effort:
topic
  • Technology
technology_subtopic
  • Artificial intelligence
article
false
include_in_hero_section
true
category
Articles
article_view
article content only
social_image
is_displayed
true
display_order
hero_image
hero_order
Team Member

Jared Lambert

Custom Fields

is_displayed
true
display_order
name
Jared Lambert
position
GovTech Fellow
image
intro
[Jared Lambert](https://jared.lmbrt.net) is the resident software engineer at the Abundance Institute. He is building applications that showcase the immense positive impact that technological progress will bring to the world.

Jared specializes in AI-enhanced development, and he has spent thousands of hours using LLMs to rapidly deploy software in a variety of fields. He attended the Utah AI Summit as a technical expert, and has helped multiple companies implement automation and improve productivity using AI.

ONLINE @ [X](https://x.com/jared__lambert)
featured_publications
false
Person

Jared Lambert

Custom Fields

name
Jared Lambert
Fellow

Virginia Postrel

Custom Fields

name
Virginia Postrel
position_and_org
Columnist, Works in Progress Magazine
image
featured_publications
false
is_displayed
true
display_order
Fellow

Charles C. Mann

Custom Fields

name
Charles C. Mann
position_and_org
Author, 1491
image
featured_publications
false
is_displayed
true
display_order
Person

Charles C. Mann

Custom Fields

name
Charles C. Mann
Person

Virginia Postrel

Custom Fields

name
Virginia Postrel
Article

Legislating Child Safety Online: A Review of the House E&C Subcommittee’s Proposals

The U.S. House Energy and Commerce Subcommittee on Commerce, Manufacturing, and Trade recently held a hearing, “Legislative Solutions to Protect Children and Teens Online,” in which the subcommittee considered 19 pieces of legislation. Over the coming months the subcommittee and full committee will be considering these measures. Here, we provide our analysis on 8 of those proposals. We focused our analysis on the drafts that, if enacted in their current form, would most affect the future of computing and artificial intelligence. For further reading, our principles for protecting kids and innovation can be found here.

At the outset, it is worth noting that a variety of these proposals segment online users by age. Whether or not the requirement to verify user age is explicit, services are likely to do so in order to avoid legal liability. Such requirements, whether implicit or explicit, would gate access to computing and free expression for all Americans and cause a variety of inherent security concerns that we explore below.

Chatbot regulation

Safeguarding Adolescents From Exploitative BOTs (SAFE BOTS) Act

The SAFE BOTS Act is the only bill in the proposed package that specifically regulates minors’ use of AI tools. The discussion draft proposes to govern certain actions by chatbots for users under 17 years of age. Key requirements include prohibiting chatbots from claiming to be licensed professionals (unless true), mandating they identify as chatbots when prompted, providing suicide prevention resources if prompted, and advising users to take a break after three hours of continuous use. A chatbot provider would be required to have policies on how it addresses topics such as “sexual material harmful to minors,” gambling and “the distribution, sale, or use of illegal drugs, tobacco products, or alcohol” with users under 17. The proposal would preempt state laws if they cover these matters. It would also commission a study on risks and benefits of chatbots to youth mental health. The proposal clarifies that nothing within it may be construed to force the chatbot provider to collect personal information about the age of a user that it’s not already collecting.

Notably, most leading consumer AI companies have already implemented the features this draft would require. For example, Character.ai recently adjusted its service to reduce the daily time limit for users under 18 from two hours to one hour—stricter than the three-hour limit proposed in this draft. Character.ai and OpenAI have also begun deploying age assurance technology that enhances model safety protocols if, based on user prompts, the technology determines the user is a minor. Voluntary adoption and deployment of any age assurance system, including age verification, is fully within the rights of the company and not a violation of Americans’ civil liberties. However, all age verification systems—even industry-led requirements—can come with serious security and privacy risks.

Crucially, this discussion draft is missing a mechanism or standard for how AI companies should determine whether or not a user is under 17. Should this draft—or any bill that requires tailored requirements for minors—become law, platforms large and small would need to develop robust mechanisms to comply. Without clarification on which services need to comply, the current language could have a profound effect on AI access for all Americans. Compliance hinges on whether a chatbot is “incidental” to the primary purpose of the service, as defined in Subsection K(3)(B). It is possible that AI chat tools could not be integrated into mundane software, like word processors, without needing to follow the regulations in this draft. For example, is Microsoft’s Copilot truly incidental or is it a core feature of their software? Currently, Copilot is the advertised feature for all individual and business Office 365 packages. Under a more liberal reading, Meta’s AI chat features would not be implicated, as an argument could be made that those are incidental to the app’s social media service. Either way, there is a risk of litigation. Therefore, to avoid potential litigation, a platform is likely to simply abandon helpful AI chat services, which could have a profound impact on usability and productivity. This means computing as we know it would remain the status quo, rather than becoming a supercharged productivity, education, and entertainment tool.

The disclosure requirements offer potential benefits. However, more research on effectiveness is necessary, and the evidence we do have is mixed, according to a study on AI labels at the NYU Center on Tech Policy. The required policy might also be duplicative of standard industry practice, because most models currently deploy a disclosure that the tool is an AI system at sign-up or one that is constantly displayed. The draft likely aims to address shortcomings seen in high-profile cases with older AI models, where the system refused to acknowledge it was an AI, typically done as part of a character. It is unclear and too early to say whether such a law requiring disclosure at all times is entirely necessary. The upside is a common standard that could prove a helpful feature if users get too wrapped up in the tool. What is unknown is how helpful that is to those users. The downside would mostly be in the entertainment context. It is likely that the majority of users don’t lose touch with reality in those contexts. Like getting lost in a movie or fantasy novel, there could be a value and a right, especially for adults, to have access to a bot that is not required to say it’s an AI when prompted. Finally, it is not predetermined that societal and cultural norms won’t adapt to putting AI systems in their appropriate place. In other words, users won’t need disclosures because they will just know they’re not talking to a person, much like norms have adapted to the point where most people know the special effects seen in a film are not real.

Another provision risks stifling tools for the very people who need them most. Section 2(a) stipulates that, “A chatbot provider may not provide to a covered user a chatbot that states to the covered user that the chatbot is a licensed professional (unless such statement is true).” Any AI tool that offers “therapy” or “mental health” assistance could run afoul of this law. The draft language does leave open the possibility for an AI tool to become certified, but that comes at the cost of scarcer and more expensive access. As Taylor Barkley has written elsewhere, there are profound mental health needs, particularly for teens, where AI therapy tools can be helpful. There are also better policy models, as exemplified in Utah, that don’t involve bans.

Finally, the draft’s proposed study is a welcome inclusion that would serve as a valuable resource to policymakers and industry alongside the breadth of academic, industry, and consumer group reports under development. As noted above, there is a profound lack of data about child and teen use of AI systems and the effectiveness of certain policy measures. Ultimately, public policies should be based on evidence and such a study proposed here could provide much of that data.

Bills that establish committees and reports

Kids Internet Safety Partnership Act

This proposal would direct the Secretary of Commerce to establish a body that would coordinate among relevant federal agencies and stakeholders to identify risks and benefits for minors online. The Partnership would publish a regular report about its findings on these topics and how online services offer protections for minors and tools for parents. It would also have to publish a “playbook” for online services to help them implement the “widely accepted or evidence-based best practices” with regard to age assurance, “design features, parental tools, and default privacy and account settings.” The Partnership would sunset after five years.

In its current version, the bill could provide helpful information to stakeholders and industry. However, it would benefit from a few tweaks. Although artificial intelligence (AI) tools are part of many of the technologies and platforms named, AI is not specifically named. As children and teens come into frequent contact with AI systems, the proposed Partnership should examine the benefits and risks of those technologies too. An additional edit should be made to the framing of these technologies. Although there are nods to “benefits” in the discussion text and in related press releases, it is not apparent that beneficial use cases are a focus of the Partnership. Because there are so many online digital technologies available to minors, the Partnership reports could easily become entirely focused on risk analysis without space or room to present beneficial use cases. This would be a missed opportunity, especially for policymakers, because they must weigh the benefits and risks effectively. The draft could be strengthened by adding a section that directs the partnership to focus on benefits. Finally, it would probably be better for the report to focus on the mentioned “evidence-based best practices” rather than just “widely accepted ones.” Policy recommendations should be grounded in evidence and not just common viewpoints.

Promoting a Safe Internet for Minors Act

This bill would direct the Federal Trade Commission to work with a variety of other partners to establish a public education effort that would promote minors using the internet safely. The group would submit annual reports to Congress summarizing its efforts.

Public education efforts as proposed in this draft are well within the appropriate role of the federal government and policymakers at all levels. The federal government has existing programs such as Know2Protect (from the Department of Homeland Security), which raises awareness and combats online child sexual exploitation, or FBI Safe Online Surfing (SOS), an educational initiative for elementary and middle school students about cyber-safety and digital citizenship. And these are just two of many. Instead, the bill appears to aim for integration and coordination, by making the FTC a “hub” for public-facing online-safety resources: a national front door that can aggregate and promote materials from DHS, the FBI, educational programs, nonprofits, and other stakeholders, while also expanding the lens to include mental-health, content-exposure, and behavioral risks. In doing so, H.R. 6289 could reduce fragmentation in the federal online-safety ecosystem, streamline outreach to parents, educators, and minors, and create a standardized, cross-agency foundation for protecting youth online.

AI Warnings And Resources for Education Act (AWARE Act)

This would direct the Federal Trade Commission (FTC) to work with relevant federal agencies to develop and share resources on the safe use of AI chatbots by minors. Notably, this program would be modeled on the Youville material currently developed and made available by the Commission. As noted above, public awareness and education campaigns like these can provide help to parents, caregivers, educators, and children and teens themselves. The challenge for such an effort would be to stay up to date on a rapidly evolving space. Nonetheless, government educational efforts would serve as a useful supplement to industry and consumer protection efforts.

Social media and app store age verification bills

Kids Online Safety Act (KOSA)

KOSA applies to websites and apps of all sizes that focus on user-generated content, allow people to create searchable user accounts, and use account holder information to advertise or recommend content to the user. As written, this would require even AllTrails, a variety of not-for-profit online medical forums, and innumerable other small forums to provide a completely new suite of user and parental controls not just for users but also for those without registered accounts. In order to provide parental tools to those who aren’t even registered with the service, such platforms would have to actively track these users, which seems counterproductive for the purpose of protecting privacy online.

The platform would similarly have to provide parents with information about the parental tools required by the law and obtain verifiable parental consent for users and visitors under the age of 13. The bill adopts the same standard for consent that appears in the Children’s Online Privacy Protection Act of 1998. But some of the approved methods under this law are easy to circumvent by users of any age, including making a credit card transaction or calling a phone number.

Moreover, as with any legislation that requires treating different age groups differently online, many platforms will likely pursue more robust age verification methods in order to avoid potential liability, such as having users upload government identification and face scans. This practice has repeatedly led to data breaches, leaving affected people vulnerable to financial fraud and other crimes.

These same platforms would also have to pay tens of thousands of dollars to hire independent auditors. Such costs and regulatory burdens are not feasible for many of the small—even not-for-profit—forums and other services that would be covered by the law.

App Store Accountability Act

This proposal would divide users into different age groups and require that app stores receive consent from parents for their children to download apps or make in-app purchases. Unfortunately, age verification for minors is extremely difficult, verification still comes with security risks, the definition of “parental account” means it’s easy for minors to circumvent parental consent, and the bill applies only to apps and not websites.

The bill relies heavily on segmenting users into different age categories of 18 or older, 16-17, 13-15, or below 13 years of age. The problem is that there is not a reliable method to verify minors’ age. Age estimation errs by years, minors generally don’t have government photo identification cards, and other methods of identification such as birth certificates or Social Security cards (which don’t even list a birth date) don’t have photos that can be matched to the person in front of the screen.

There are also more fundamental cybersecurity concerns with age verification. The bill would require that age verification data is protected by limiting its collection and storage to only what is necessary to verify a user’s age, obtain parental consent, and maintain compliance records. It would also mandate that the data be kept safe by using “reasonable” safeguards to secure it, including encryption. The encryption requirement is a welcome provision, but age verification systems don’t always adhere to even their own standards, users cannot know for certain how such data is protected, and the systems can still be hacked and breached. Further, the sensitive information needed to prove age—biometrics, government IDs, etc.—is the same information needed to prove compliance with the law. So although the nods to data minimization are welcome, they don’t solve the concerns here.

It’s also not just age verification databases that can be breached (as mentioned above), but other systems in the age verification process. After Discord implemented age verification due to the U.K. Online Safety Act, a breach at its vendor exposed tens of thousands of government IDs. That breach didn’t even come from users of the main age assurance system, but from people who were using a backup method when biometric age estimation failed or they otherwise couldn’t use estimation. Those tens of thousands of people will now have to worry about identity theft and bank hacks. That is the scale of harm that can be done by the government requiring age verification.

The way the legislation defines “parental account” also underscores the difficulty of verifying the parent-child relationship online. The text only requires that a parental account is established by a user that the app store has determined through age verification is at least 18 and whose account is affiliated with at least one account of a minor. Few documents are truly useful for verifying the parent-child relationship—and those documents don’t include the photo identification necessary to prove the users are the same people in front of the screen—but even that aside, minors can find other adults to allow them access online. It would be easy enough for a child to find an older sibling or other relative to allow them more permissive app access.

Another problem is that this bill applies only to apps and not websites. Minors could still access all the same content and more with web browsers without parental supervision. Although Congress could pass another law applying to websites, users would then need to functionally verify their ages twice for each service—once through app stores for the apps and again through the services directly when using websites. This would further increase security issues with age verification by providing more databases and more opportunities for hacks and breaches. Users frequently access both websites and apps belonging to the same services—consider email providers, social media, and niche services like AllTrails and ZocDoc.

Parents Over Platforms Act

This bill, on the other hand, would require app stores to have users merely declare their ages, while noting that age assurance software can be used for this purpose. It would require app stores to provide a user’s parent the ability to prevent their child from downloading or using apps which—whether voluntarily or as required by law—provide different online experiences for minors and adults. App stores would also have to give these apps the ability to prevent minors from downloading or using them.

The legislation does not offer guidance as to how app stores must determine the parent-child relationship, which lends itself to the same problems as in the App Store Accountability Act regarding minors finding an older friend or sibling to confirm their app use. Because users inputting their age without further proof is an acceptable mechanism of proving age, minors could find friends their own age who simply lied about their age to the app store to help them. However, app stores may opt to implement full age verification and require more documentation to prove the parent-child relationship, which can cause the same security concerns mentioned earlier.

Meanwhile, developers would be required to let app stores know if they provide different experiences for minors than for adults and would have to provide information about online safety settings for parents unless the apps block minors. These developers would also be required to use age assurance—which can include an age signal from the app store—unless the app is required by law to block minors, in which case they would need more robust means to check that adults really are adults. Developers of these apps would also have to “make a reasonable effort” to prevent minors from engaging in activity on the app restricted to adults and obtain consent (it does not specify from whom) before allowing minors to access parts of an app the developer deems “unsuitable for use by Minors without parental guidance or supervision” or content age-gated by law.

Oddly, the bill extends all of the requirements it applies to apps to the website versions of those apps as well. If a website that provides different experiences to minors and adults has no app, then such a website is exempt. But even applying the bill’s requirements only to website versions of covered apps raises some very strange questions. Apps with web versions don’t always exist in all app stores. Some exist in iOS and Android app stores (or just in one or the other), but not in app stores on laptops or on Windows phones. If someone were to access such a website on their laptop or Windows phone, many provisions of the law would not make sense, including all the information they would be required to share with app stores that don’t house them. There are also a variety of requirements about how app stores must interact and share information with covered apps, and it is unclear whether those provisions also apply to covered websites, especially when accessed on devices whose app stores don’t contain the covered app.

However, the bill also includes some welcome provisions such as prohibiting apps from attempting to figure out a user’s birthday by repeatedly requesting user age from the app store. There is no guarantee that apps won’t still do so, but attempting to prevent the practice is still a good idea. The bill also allows app stores to withhold age signals from developers that don’t adhere to the app store’s policies and safety standards, which is a good step to protect user information. Additionally, the duty is on the apps rather than the app stores to determine whether an app is covered by the bill. App stores don’t necessarily know whether an app provides different experiences for minors and adults, so this makes sense.

COPPA update

Children and Teens’ Online Privacy Protection Act

This would change the Children’s Online Privacy Protection Act of 1998 to apply not just to children but also to teens, and not just to websites but also to apps. It also preempts similar laws at the state level. Among other changes, it adjusts the knowledge standard based on company size. The standard for whether a service knows that a user is a child changes from “actual knowledge” to “knowledge” for the largest social media companies, while the current actual-knowledge standard remains intact for services that generate less than three billion dollars in annual revenue, have fewer than 300 million monthly active users, and don’t focus mainly on user-generated content. Although keeping the actual-knowledge standard in most cases is preferable, applying a looser knowledge standard to the top social media companies still raises difficult compliance questions. The bill defines “knowledge” in such cases as when a platform “willfully disregarded information that would lead a reasonable and prudent person to determine, that a user is a child or teen.” It is unclear what could serve as evidence under that standard. For example, parents researching toys for children or colleges for their teens may look a lot like kids researching these things for themselves. This “should have known” standard is neither workable nor predictable.

Additionally, the bill would prohibit a service from cutting off service to children or teens when a parent or teen requests that their personal information be deleted, so long as the service can be provided without that information. But the ways in which user data are necessary for a service to function aren’t always apparent to the people using it, and proving as much in court is likely to be burdensome, particularly for small services. It isn’t far-fetched to imagine a parent requesting that a service delete their child’s information, the service complying and removing the child from the service, and the service then being sued. Indeed, that is exactly what this provision enables.

Conclusion

We share the Energy and Commerce Committee’s goal of ensuring a safe online environment for children and teens. However, as Congress considers these legislative proposals, it is critical to balance safety objectives with the technical realities of the digital ecosystem and the need to preserve American innovation.

While some of these measures offer constructive steps—such as public education campaigns and evidence-based studies—others present serious functional and security concerns. Specifically, mandates for broad age verification often ignore the technical infeasibility of current verification methods and the cybersecurity risks created by collecting sensitive user data. Furthermore, overly broad definitions risk sweeping in beneficial technologies, potentially cutting off minors from valuable educational and mental health resources under the guise of protection.

We urge the Committee to prioritize solutions that empower parents and deployers without imposing unworkable mandates that stifle the development of next-generation computing. We remain ready to assist the Committee in refining these proposals to ensure they effectively protect youth while fostering a vibrant and open digital future.

Custom Fields

hook
The U.S. House Energy and Commerce Subcommittee on Commerce, Manufacturing, and Trade recently held a hearing, “Legislative Solutions to Protect Children and Teens Online,” in which the subcommittee considered 19 pieces of legislation. Over the coming months, the subcommittee and the full committee will consider these measures. Here, we provide our analysis of eight of those proposals, focusing on the drafts that, if enacted in their current form, would most affect the future of computing and artificial intelligence.
article
false
include_in_hero_section
false
category
Articles
topic
  • Technology
article_view
article content only
social_image
false
is_displayed
true
display_order
technology_subtopic
  • Artificial intelligence
  • Social media
  • Chatbots
Article

A State Policymaker’s Playbook For AI Success

The Opportunity

Artificial Intelligence is a general-purpose technology—like electricity or the internet—that will define U.S. competitiveness, productivity, and prosperity for decades. With the right approach, AI can expand economic opportunity, improve health and education, and create abundance for all. See real-world examples at AI Opportunity.

Why It Matters

  • Jobs & Growth: AI will create entire industries and expand workforce productivity; it can help a state’s farmers, doctors, small businesses, and teachers—if government gets out of the way.
  • State Leadership: Pro-innovation policies will attract AI entrepreneurs, jobs, and investment. Policymakers should treat AI as the opportunity it is, and we will be the generation that provides every student with a private tutor and every patient with access to personalized treatments.
  • Global Competitiveness: Adopting a free-market framework ensures the U.S. will lead the way in global AI innovation, outpacing China and any potential adversaries.

Guiding Principles

Freedom to Build

No permission slips for entrepreneurs. Innovators should be free to build without Washington-style bureaucrats standing in the way.

Policy Proposals

  • Right to Compute Act: Computational freedom is not a privilege to be granted by government, but a natural extension of rights we already possess that should be protected by government. After being passed in Montana, this concept has been introduced in Ohio and New Hampshire.

Punish Abuse, Foster Learning

Like other computing technology, AI is a tool. We should not preemptively regulate people building tools with unknown upside potential; instead, we should hold bad actors accountable when they use any tool to commit fraud or violate rights.

Policy Proposals

Open Models, Open Markets

Encourage open-source participation to democratize AI development and reduce the concentration of power.

Sunset the Red Tape

New rules should work for today and tomorrow. We will actively review, revise, and repeal—keeping government flexible and accountable.

Policy Proposals

Government Use

Proper use of AI can streamline and improve government functions, saving taxpayers’ money while protecting residents’ interests and rights. There are enormous opportunities for such benefits in state procurement, benefits administration, resident services, and even emergency services and natural disaster mitigation and relief. See Improving Government Efficiency with AI Technologies.

Build Energy Abundance

To reap the benefits of AI innovation, states have an opportunity to blaze a new trail on energy generation where we build what works.

  • Data Centers: In partnership with the James Madison Institute, this article outlines the basics for a regulatory framework around data centers, their energy use, and their water use.
    • Energy Use: Data centers are the invisible foundation of the modern economy. They are the computers you use through your own devices without ever touching them. They are large electricity users, but their operators are willing to work with states to meet their needs without shifting costs onto others.
    • Water Use: In this article, and its follow-up, we outline fact-based responses comparing water use in energy and data infrastructure.
    • Grid Assets: Emerging evidence shows that new data centers, when structured properly, can actually pay for grid revitalization projects because of the load flexibility they bring to the grid.
  • Nuclear Energy: Five states and three nuclear companies are currently suing the NRC to return nuclear regulatory authority to the states. This article summarizes the lawsuit and its potential to unlock nuclear power generated by small modular reactors.
  • Build What Works: Powering Spaceship Earth offers a path forward.

Closing Message

AI is not a threat to be feared, but a tool to be harnessed and leveraged. AI will be the source of the next Industrial Revolution, and states should seek to be first to build the metaphorical railroads of the future. With a free-market, pro-innovation approach, we can make our state—and America—the global leader in artificial intelligence, securing prosperity and abundance for future generations.

Custom Fields

hook
Artificial Intelligence is a general-purpose technology—like electricity or the internet—that will define U.S. competitiveness, productivity, and prosperity for decades. With the right approach, AI can expand economic opportunity, improve health and education, and create abundance for all.
article
https://www.abundance.institute/wp-content/uploads/2025/12/22091239/abundanceinstitute_2025_emergingtech_docs_oct_govplaybook_nostate.pdf
include_in_hero_section
false
category
Primers
topic
  • Technology
technology_subtopic
  • Artificial intelligence
article_view
article content only
social_image
is_displayed
true
display_order
Article

From AutoTune to AI Music: What Cher’s “Believe” Got Right

Read on Creative Frontiers.

In 1999, Mark Taylor, co-producer of Cher’s global mega-hit “Believe,” explained to Sound on Sound magazine how he created the now-famous robo-glide on Cher’s voice. The account was elaborate: a Korg VC10 vocoder, a Digitech Talker, a Nord Rack, and some Cubase gymnastics. It sounded like a fire sale at Radio Shack.

It was also untrue.

The real method was very simple: a new plug-in invented by a flautist who had worked at Exxon. It was called AutoTune.

So, why lie about it, months after the song had conquered the planet? Protecting your secret ingredient is one explanation. Another is cultural: in 1998-99, it could be a little taboo to admit you were friendly with AutoTune. Many producers were using it because it saved time, money, and singers’ vocal cords. But they didn’t tend to speak of it.

Before the plug-in era, “perfecting” a take required extensive manual labor. You coached vowels, recorded take after take, and then started cutting tape. Engineers spliced syllables with razor blades, created slapback to blur edges, and nudged tape speed. The introduction of Digital Audio Workstations (DAWs) made this process faster, but it was the same idea. Pitch correction was an exhaustive exercise in meticulously managing performances and creatively masking mistakes.

The world’s largest DAW in 1988 with 64 megabytes of RAM. Photo credit mu:zines

AutoTune dramatically trimmed the workflow. The pursuit of perfection got a lot cheaper.

Still hardly anyone mentioned it. It was like Nanna’s “from scratch” sauce… with a suspicious number of empty jars in the pantry. Everyone uses the shortcut; no one admits it. Why? Because plenty of folks would publicly denounce it as “cheating” and “dehumanizing,” and nobody wants to be the first heretic at the cookout.

So while publicly, AutoTune was a little taboo, privately it was just Tuesday in the studio. This mismatch is what Todd Rose calls a collective illusion: when most people privately believe one thing but wrongly believe that other people believe the opposite. The result is a public consensus that almost no one actually wants. People in the office schedule video meetings thinking that’s what everyone else prefers. They don’t.

“Believe” punctured the illusion. Once it was clear that AutoTune, not a vintage vocoder, was the real engine, the taboo began to fade. Artists began experimenting with it as an instrument, and the question changed from “do you use it?” to “how do you use it?” Then Faheem Rashad Najm, a great singer even without the effect, began saturating his music with it. Soon Faheem, better known as T-Pain, became the Johnny Appleseed of AutoTune, sprinkling it all across the land. For a stretch in 2007, he appeared on four songs in the Billboard Hot 100’s top 10 at the same time. AutoTune was king.

We are living through a similar dynamic today with AI in music. In the public square, many artists worry or insist that AI is the enemy. But the reality is quieter and more complicated: artists and producers are in the studios experimenting with a lyric assist here, sample generation there, stem separation, new melody suggestions, a prompt or two, and they’re finding that much of it is useful. A new survey by LANDR found that “87% of respondents use AI tools in their music workflow,” and nearly 30% are using AI song generators in their creative work.

Photo credit LANDR

In hushed tones, artists are asking: “Wait—are you prompting?” “Uh… maybe?” Then the grin: “Me too.”

If “Believe” has taught us anything, other than that you shouldn’t give up on topping the charts after age 50, it’s how collective illusions collapse. When gatekeepers and tastemakers normalize what’s already happening, the social penalties fueling self-censorship crumble and fact can overcome fiction. Once that happens, the story flips from “cheating” to “new instrument” and creators collaborate on innovation and new soundscapes, leaving behind the taboo.

Cher’s “Believe” didn’t just change the sound of pop; it helped make it acceptable to treat a weird new gadget as an instrument instead of a scandal. We need the same move with AI. As long as AI is cast as an evil monolith, artists will hesitate to share publicly how they are using it. But when trendsetters talk openly about their explorations, both the cons and the pros, the taboo begins to crack. The collective illusion collapses, and conversations can shift from whether anyone is allowed to touch AI to how it can and should be used. Then artists can more meaningfully help shape the future of the tool. And with a tool this disruptive and empowering, we want the creators at the table, not just the lawyers.

Custom Fields

authors
hook
In 1999, Mark Taylor, co-producer of Cher’s global mega-hit “Believe,” explained to Sound on Sound magazine how he created the now-famous robo-glide on Cher’s voice. The account was elaborate: a Korg VC10 vocoder, a Digitech Talker, a Nord Rack, and some Cubase gymnastics. It sounded like a fire sale at Radio Shack.

article
false
include_in_hero_section
false
category
Articles
topic
  • Creative Frontiers
article_view
article content only
social_image
false
is_displayed
true
display_order
Article

How AI could supercharge America

The American economy is like a flabby giant. Its size and strength are still impressive, but it huffs and puffs when faced with certain challenges. The federal government runs trillion-dollar deficits – even in good years. Birth rates are falling. Schools are failing many kids. Infrastructure is rusting out faster than we can replace it. And none of these are new problems. They are the consequences of years of cultural drift, neglected maintenance, complacent citizens, and weak leadership. 

Yet under this blubbery exterior, the U.S. economy still boasts some strength. Today that strength comes from technology companies, big and small, which are fueling the country’s economic growth. In September 2024, Mario Draghi – the former president of the European Central Bank – noted in a major report on competitiveness that while the European Union and the United States had boasted comparably sized economies in 2000, per-capita real disposable income had since grown almost twice as much in the United States as in the EU. The main reason for this shift and the widening productivity gap between the two economies, Draghi concluded, was the tech sector.

Artificial intelligence could bring this sector’s strength to the rest of the American economy. Properly applied, AI could become a general-purpose technology on par with or even surpassing electricity or the internet. It could boost productivity, expand opportunity, and revitalize our bloated and sluggish systems. AI-driven productivity increases could help balance our national budget, fill the gaps left by an aging workforce, remake education, and drive scientific and health care discoveries and innovations that underpin prosperity. AI offers a path to a robust, muscular, fit American economy.  

The opportunity is America’s to seize. The United States leads the world in AI research and investment, although other countries, especially China, are in hot pursuit. If we miss this moment, it won’t be because technology failed us – it will be because our politics did. Fear, fragmentation, and bureaucratic overreach could choke off the very growth the United States desperately needs. The country currently faces three significant political challenges in this domain: whether to allow a patchwork of state laws to strangle AI innovation before it scales; whether to let our children learn with AI; and whether to build the physical power and computing resources needed to let AI proliferate. 

How the United States meets these challenges will determine whether we use AI to whip the country’s economy back into shape – or decide instead to resign ourselves to a couch-potato economy, with all the stagnation that would bring. 

Move slow and break nothing 

Important parts of the U.S. economy today have become undisciplined, shortsighted, and slow. Last year, despite low unemployment and steady GDP growth, the federal deficit hit $1.8 trillion. The share of eighth graders scoring “proficient” in math was just 28%, down six percentage points since 2019. The country’s fertility rate remains well below the replacement level. And projects to build basic necessities such as transmission lines or high-speed railways routinely take a decade or more to permit and construct. 

America’s capacity to do big things has atrophied. Our last real productivity boom ended two decades ago. Outside of technology, the economy is barely growing. As of mid-2025, just four tech firms accounted for roughly 60% of year-over-year stock-market gains. The so-called Magnificent Seven, the largest U.S. tech companies, make up almost 50% of the total value (by market capitalization) of the NASDAQ 100 stock index. The U.S. economy leans heavily on our tech companies.

But AI could reinvigorate the rest of the economy, and the country along with it. Consider the following. In November 2022, OpenAI released ChatGPT as an experiment. Though the company never intended it as a mass consumer product, ChatGPT became the fastest-adopted technology in history, hitting 100 million users in two months. The app ignited a surge of investment that spread far beyond chatbots. By 2024, private AI investment in the United States had reached $109 billion, and the number is still growing rapidly.

ChatGPT is not the first time a technological breakthrough has driven excitement about AI. But this time is different. Machine learning, which is the core process underpinning modern AI, uses algorithms trained on vast data sets to recognize patterns and make predictions. This approach is proving highly generalizable. It can already draft contracts, model proteins, translate languages, and guide robots. Machine learning is making its way into every field that runs on data. And in the 21st century, that’s nearly all of them. 
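To make that concrete, here is a minimal sketch of the train-recognize-predict loop, assuming Python with numpy and scikit-learn (our tooling choice, not the essay’s): the model is shown labeled examples, learns the hidden rule, and then predicts labels for data it has never seen.

    # Minimal sketch: learn a hidden pattern from labeled examples, then
    # predict labels for examples the model has never seen.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))            # 1,000 examples, 2 features each
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the hidden rule to recover

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)    # "training" on the data
    print("accuracy on unseen data:", model.score(X_te, y_te))

The same loop, scaled up by many orders of magnitude in data and model size, is what sits underneath drafting contracts, modeling proteins, and the rest.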

AI, in other words, will define our era, and the good news is that the United States leads the world in this technology. U.S. companies designed and trained the AI models, constructed the data centers that run them, and developed the applications that bring the power of AI to users. America dominates the AI industry today, in investment, revenue, and innovation.  

AI won’t solve America’s problems on its own. But it could make almost every problem easier to solve – if we don’t get in our own way. 

Barrier one: The 50-state trap 

Unfortunately, too many American leaders currently treat AI as a reason to panic. In 2024, state lawmakers introduced 635 AI-related bills, enacting 99. By mid-2025, that number had ballooned to more than 1,100 proposals. The National Conference of State Legislatures reports that 38 states adopted or enacted approximately 100 measures in just the first six months of this year. 

The intentions of these laws vary: to make AI safe, protect consumers, prevent bias, regulate deepfakes, or restrain tech giants. But whatever the goal, the outcome of this flurry of lawmaking is the same: a minefield for the companies required to comply. And unlike these laws, modern AI isn’t built state by state. It’s trained on global data, deployed on cloud servers located around the world, and used in ways that cross borders in real time. In other words, modern AI is not a single product situated in a single place; it’s a distributed set of constantly evolving services. Trying to govern AI locally is like trying to use your thermostat to control the weather.

The threat this patchwork of regulation poses to American tech leadership isn’t theoretical. Each new state mandate adds conflicting definitions, overlapping audits, and redundant reporting requirements that companies must struggle to fulfill. Pro-regulation states with large markets – like California and New York – essentially set the rules for everyone else. Big companies must waste huge amounts of money and time complying with all the rules – but they probably can absorb the costs. Startups can’t. The results are predictable: fewer startups, slower product launches, chilled investment, and innovation driven offshore.

One need only look at Europe – where well-intentioned but cumbersome AI and technology rules have slowed research and driven talent to the United States – to see what impact such a regulatory approach could have here. If the United States commits its own version of this mistake by allowing individual states to race for the most restrictive standards, the whole country will lose. 

The U.S. Congress should act before that happens. It should preempt most state AI laws and set a single national framework for model training and deployment – an approach that treats AI as the interstate infrastructure it is. A coherent federal policy would consistently protect users, clarify responsibilities, and streamline compliance for innovators. The right model would mirror what has worked for past transformative technologies: uniform, light-touch rules, allowing for open competition and space for experimentation. Anything else will weigh down our economy with onerous amounts of legal paperwork. 

Barrier two: Banning the future of education 

The second threat to America’s AI dominance – and the technology’s potential to transform our economy – is more emotionally fraught but no less destructive: overreacting to the use of AI by children. 

The fear is understandable. The growth of the internet has taught us to be wary of tech’s unintended effects. Parents today have many reasons to be protective. But banning AI outright in our classrooms or making it harder for children to use – moves some lawmakers have already proposed – would be an act of educational malpractice. 

That’s because AI tutors could become the most powerful learning tool since the printed book. At Alpha School in Austin, Texas, for example, AI systems coach students through their core academic work in just a few hours – and then the students spend the rest of the day building drones, running businesses, or exploring the outdoors. Alpha School is also developing a platform called Timeback that aims to empower educational entrepreneurs to create personalized, one-on-one instruction for less than $1,000 a year per student. 

This isn’t science fiction; it’s a working prototype of what individualized education could look like. Properly used, AI tutors could democratize elite instruction, helping kids learn at their own pace, in their own style, with real-time feedback and fewer bureaucratic barriers. 

But lawmakers are letting fear drive policy. Bills intended to protect kids could undercut the very feedback loops that power AI-driven educational tools. Overly strict rules protecting privacy, for example, would prevent AI systems from effectively tracking students’ progress or spotting their subtle learning patterns. And a tutor that can’t observe is a tutor that can’t teach. These and other poorly considered laws could drive AI innovators away from education to less legally fraught areas, even though the country desperately needs more innovation in this field.  

We haven’t banned microscopes because they reveal too much detail, or calculators because they could potentially replace our arithmetic skills. Instead, we have equipped our educators to use such tools responsibly and trusted them to train our students to do the same. For similar reasons, the solution to how to deploy AI in education today is not prohibition but thoughtful application and experimentation. Parents should have options and schools should have significant flexibility. Privacy laws should deter the misuse of information rather than the mere gathering of it. An open, pluralistic approach would nourish what works and weed out what doesn’t.  

Our current education system is failing too many of our children. Denying students access to the tools that will define their generation would not be appropriately cautious – it would be shortsighted and reckless. 

Barrier three: The building bottleneck 

AI is software, but its progress ultimately depends on our ability to build significant physical infrastructure. And America no longer builds as it once did. The mid-20th century United States erected an entire modern world in a generation. It poured concrete for highways, raised power plants, wired cities, and built the grid that powers everything from suburbs to supercomputers. The country had a bias for action. 

Today, that bias is gone. The very laws designed to manage progress have become tools to prevent it. When Congress passed the National Environmental Policy Act (NEPA) in 1969, it was intended to strengthen environmental stewardship. But the law now functions as a procedural labyrinth and the most powerful tool in the NIMBY toolbox. Environmental reviews for infrastructure projects take a median of more than two years, an average of nearly four, and often generate thousands of pages of analysis with little measurable benefit. The result is paralysis by paperwork. Every major project – solar farms, wind installations, data centers, transmission lines – can be delayed for years by bureaucracy and litigation. 

Yet AI needs physical infrastructure. Data centers – which really should just be called supercomputers – are the modern equivalent of factories. All the online services we use, including AI services, run on these supercomputers housed in large warehouses. Training a cutting-edge model and serving its users require significant amounts of computing power and energy. (Contrary to popular belief, data centers don’t require that much water; they use significantly less of it than many industrial factories or agricultural operations.)

The growth of AI has increased the demand for data centers and the infrastructure they require. In particular, AI requires more energy production and distribution. But the United States struggles to build new power sources and connect them to the grid quickly enough. Most of the country has expanded grid capacity very slowly since the 1970s. Only Texas, which operates a deregulated “connect-and-manage” grid, has grown quickly, adding more than twice as much capacity as any other grid operator in the country between 2021 and 2023. The state’s dynamic energy market is a major draw for new data centers, with hundreds of billions of dollars in planned investments testifying to the grid’s stability and recovery since Winter Storm Uri in 2021.

If the United States can’t speed up its permitting and building processes, the AI boom will stall. The world’s most sophisticated algorithms are useless without electrons to power the computers. 

Congress should therefore treat infrastructure improvement as a national security priority. It should replace our current process-for-process’s-sake approach to new construction with outcome-based environmental standards. It should set firm timelines for reviews and limit their scope. It should expand categorical exclusions for low-impact projects. And it should limit injunctions to cases of clear and imminent harm.

At the same time, federal and state agencies should coordinate to unclog interconnection queues and modernize the grid. The future of AI – and much else – depends on abundant, reliable energy. Building it is the precondition for greatly increasing our prosperity. 

Fear or abundance 

America has been here before. We’ve stood on the edge of a technological breakthrough, uncertain whether to seize it or smother it. We faced it with the railroads, the electrification of cities, the interstate highway system, and the dawn of the internet. In each case, abundance won out over fear, though not always quickly and not always cleanly. The choice before us now is the same: to treat AI as a threat to be contained or as an opportunity for renewal. 

Choosing abundance means trusting the American people to build, learn, and adapt. It means allocating government rules between the federal government and states in a way that promotes experimentation rather than chills it. It means giving every child access to the tools of the age rather than locking them behind digital fences. And it means rediscovering the courage to build – not someday, but now. 

The alternative is a future in which AI progress happens elsewhere, U.S. schools stagnate while those in other countries accelerate, and the next generation of American innovators grows up under a regime of control rather than freedom. 

That would be a major societal failure. 

AI is not a silver bullet for all our problems, but it could be the catalyst that restarts broad American dynamism. The question is not whether AI will transform the world. It will. The question is whether the United States will lead this transformation or watch comfortably from the sidelines.

We can still choose abundance. The United States remains the most capable society on earth for translating invention into prosperity. We’re a bit doughy and out of practice, but we still have the talent, the institutions, the capital, and the culture of risk-taking that every other country envies. What we need is to give ourselves permission to shed the unnecessary deadweight, to exercise our entrepreneurial muscles, and to wrestle optimistically with the challenges ahead.

Custom Fields

authors
hook
The United States already leads the world in high-tech development. But policy, not technology, now stands in our way.
article
false
include_in_hero_section
false
category
Op-eds
topic
  • Technology
technology_subtopic
  • Artificial intelligence
article_view
article content only
social_image
false
is_displayed
true
display_order
Article

Breaking Rust, AI Slop, and the Long War over Real Music

Read on Creative Frontiers.

The song dropped a few weeks before Thanksgiving, and tastemakers attacked. They ranked it at the bottom, sneered that the artist wasn’t real, dismissed it as novelty, and excoriated the music for being just plain bad. Not serious. Not authentic. Not real.

Then it went to Number 1.

I’m not talking about the recent and mysterious AI act Breaking Rust’s number one country hit, “Walk My Walk.” I’m talking about Alvin and the Chipmunks.

In 1958, Ross Bagdasarian was a struggling actor and songwriter. He’d had a cameo in an Alfred Hitchcock film and co-written a hit for Rosemary Clooney, but the money had dried up. In fact, before that, he’d tried grape farming in the late 1940s and his crop literally dried up. When your resume includes “failed raisin magnate,” you’re not exactly on the glide path to stardom.

By this point, he had about $200 to his name, according to his kids, and spent $190 of it on a vari-speed tape machine. He discovered that if he sang very slowly into the recorder at half-speed, then played it back at regular speed, his voice turned into a helium-induced cartoon. Then he had an idea: a song about seeking advice from an alternative healer. He wrote and recorded “Witch Doctor” using the vocal trick.
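In modern terms, the vari-speed trick is just resampling: audio recorded at half speed and played back at full speed comes out roughly an octave higher and twice as fast. Here is a minimal sketch in Python, with a sine tone standing in for the voice; the tone, sample rate, and file name are our illustrative assumptions, not anything from Bagdasarian’s sessions.

    # Play a "voice" at double speed by keeping every other sample:
    # same playback rate, half the samples, so pitch and tempo both double.
    import wave
    import numpy as np

    RATE = 44100  # playback sample rate, in Hz
    t = np.linspace(0, 2.0, int(2.0 * RATE), endpoint=False)
    voice = 0.5 * np.sin(2 * np.pi * 220 * t)   # 220 Hz stand-in "vocal"

    chipmunk = voice[::2]   # now 1 second of audio sounding at 440 Hz

    with wave.open("chipmunk.wav", "wb") as f:
        f.setnchannels(1)                     # mono
        f.setsampwidth(2)                     # 16-bit samples
        f.setframerate(RATE)
        f.writeframes((chipmunk * 32767).astype(np.int16).tobytes())

Bagdasarian did the same thing in reverse with tape: sing at half speed, play back at full speed, and the pitch jumps an octave.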

The executives of Liberty Records, Alvin Bennett, Simon Waronker, and Theodore Keep, were close to bankruptcy and bet it all on releasing this odd song. In April 1958, “Witch Doctor” rocketed to the top of the charts.

Riding that success, Bagdasarian got another song idea months before Christmas, when his four-year-old son began the annual parental torture ritual: “When is Christmas?” He whistled the tune into a tape recorder (he couldn’t play an instrument) and wrote a Christmas song. But he felt it shouldn’t be sung by a choir; it should be sung by insects or animals. He eventually landed on chipmunks.

He took the stage name David Seville and named his high-pitch trio after those Liberty Records execs.

“The Chipmunk Song (Christmas Don’t Be Late)” debuted on American Bandstand’s “Rate-A-Record” segment. It scored the lowest possible rating of 35 across the board. As bad as it gets. Critics called it novelty. Even decades later, writer Tom Breihan praised its ingenuity but also called it a parlor trick and added, “As a piece of music, it sucks shit.”

Listeners didn’t care.

The song spent four weeks at Number 1, stayed on the charts for thirteen weeks, and was the last number one Christmas song until Mariah Carey’s “All I Want for Christmas is You” in 2019. The record also won three Grammys at the inaugural awards.

None of this is surprising.

When artists use new technology to make new kinds of art, some gatekeepers respond by declaring it “not real.” A 15th century monk said the printing press made a “harlot” of literature. Music legends warned that synthesizers would “destroy souls.” Today, critics slap the label “AI slop” on AI-generated music and content. The charge is the same: that this isn’t real art because it lacks human experience and depth.

Maybe “The Chipmunk Song” really is a parlor trick. Maybe Breaking Rust’s “Walk My Walk” really is “AI Slop.” But once something hits Number 1, we’re forced to face an uncomfortable question: if millions of people like it, what about it isn’t “real”?

That’s not to say that popularity settles an argument. Plenty of popular things are shallow and disposable. But popularity does tell us that something is happening in people’s heads and hearts. A recent Deezer-Ipsos survey found that 97% of listeners can’t tell the difference between AI and human-composed music. If most people can’t hear the difference, then “this isn’t real” can’t just be about how it sounds.

Often, the accusation is about jobs. Historically, when new tools arrive, critics pair their aesthetic complaints with concerns about “real” artists losing work. “AI slop” can work the same way. It’s a taste judgment, but it carries a quieter worry: what if this new stuff replaces us?

That fear isn’t fake, but dismissing the tech as fake or unworthy doesn’t solve the problem. It just insults the audience. If the concern is that algorithms and bots are juicing engagement, then the argument is not with the songs, it’s with the incentives and the business model. If the concern is that artists will lose their work, then the argument is with how we structure rights, revenue, and opportunities for human creators.

Ross Bagdasarian’s chipmunks remind us that listeners have always had a soft spot for gimmicks, novelties, and new sonic landscapes. And those experiments can become part of the canon, not by passing a purity test, but by connecting with people. As AI tools flood the landscape, artists must rethink where their advantage lies and lean into what no model can automate. (On this, I just had a great conversation with Bandcamp’s Dan Melnick – more later.) And critics can retire the border-patrol badge and help us tease out why sounds land in the first place, and what that says about us, the humans.

Custom Fields

authors
hook
The song dropped a few weeks before Thanksgiving, and tastemakers attacked. They ranked it at the bottom, sneered that the artist wasn’t real, dismissed it as novelty, and excoriated the music for being just plain bad. Not serious. Not authentic. Not real.
article
false
include_in_hero_section
false
category
Articles
topic
  • Creative Frontiers
article_view
article content only
social_image
false
is_displayed
true
display_order
Article

The 1925 New Tech That Let a Legend Invent a New Sound

Read on Creative Frontiers.

He needed one more tune for the recording session at Okeh Records. Dinner was almost ready, his mother was at the stove, and he sat down at her table to “scratch out” something fast. But this song had to be different. There was a new technology in recording that many were criticizing, but what if he could take advantage of it and create a whole new sound? He finished the song in fifteen minutes, recorded it the next day, and it was an instant hit. The new technology was the microphone, the song was “Mood Indigo,” and the artist was Duke Ellington.

Music historians argue over how much of it Duke actually wrote that evening. Clarinetist Barney Bigard had floated the melody to Ellington earlier. But the important part is that Ellington orchestrated the song for the microphone, not just through it. That was the leap.

Before 1925, recording was entirely mechanical. Bands would “gather ’round the horn” and play into it so the sound pressure would jiggle a diaphragm. The diaphragm moved a stylus that scratched the vibrations into a cylinder or disc. Big, bright tones worked great; quiet, lower instruments struggled to be heard. There was a choreographed dance as studio assistants (“pushers”) shuffled musicians to and from the horn to vary the dynamics. A singer might have to stick her head in the horn to register her softer notes. One violinist just sat on a box with wheels to more easily adjust throughout a recording. And everything was live, all the time. There was no editing.

Photo credit: Library of Congress

Western Electric’s electrical recording changed it all.

A microphone listens differently; it’s sensitive. It hears low tones and quiet details. Near instruments sound warmer, farther instruments sound airy, and they can all be heard. So, while horn-recording flattened music, the microphone created a three-dimensional sound stage. That shift offered enormous potential for innovative artists. It also stirred a lot of controversy.

Critics said the microphone was breaking up the band, spotlighting individual instruments and destroying the ensemble sound. Other familiar criticisms followed. It didn’t sound “natural” or authentic. It threatened livelihoods: acoustic engineers with years of experience were suddenly rookies again. And then came “crooning,” the intimate mic style that set off a moral panic (we’ll save that one for another day—it’s worth it).

But Duke Ellington was intrigued. He could see—or hear—the mic as a new instrument with its own physics and color palette. New soundscapes were possible. The mic could let the string bass “crowd” the frontline, previously dominated by horns, and steer the groove. The plunger-muted brass could growl without turning to fuzz. The low reeds could whisper and hold their own with the rest of the band.

Ellington also heard something interesting when he recorded an earlier song, “Black and Tan Fantasy,” on a microphone. He called it a “mic tone”: a vibration, like a ghostly extra pitch, that emerged when certain instruments and intervals interacted with the mic. Not feedback, not distortion, but a new overtone. Rather than fight it, he wrote to it. That brings us back to “Mood Indigo.”

At his mother’s kitchen table, Ellington inverted the usual brass-reed hierarchy. He handed the bass line to the clarinet, parked the trumpet in the middle register, and let the trombone float high. It was an arrangement that would have turned to mud in the horn era, but it bloomed in the new mic era. The stack created the illusion of a fourth voice, born in the microphone. He also discovered that the original key, A flat, rattled the mic too much, so he bumped it a whole step to B flat, and it was perfect.

Duke Ellington & His Orchestra with a Marconi-Reisz Mic, Circa 1933

“Mood Indigo” was written for the mic and it was a phenomenal success. Ellington would become a household name, known for his hit songs and for his nightly broadcasts across the country from the Cotton Club… through a microphone, naturally.

Ellington didn’t ask the microphone to behave like the horn. He rearranged the band. He saw a new tool, new rules, and he pressed to see what it could do. He’s regarded as one of the most influential artists in music history, in large part because he wrote to the innovation, not against it.

Custom Fields

authors
hook
He needed one more tune for the recording session at Okeh Records. Dinner was almost ready, his mother was at the stove, and he sat down at her table to “scratch out” something fast. But this song had to be different. There was a new technology in recording that many were criticizing, but what if he could take advantage of it and create a whole new sound? He finished the song in fifteen minutes, recorded it the next day, and it was an instant hit. The new technology was the microphone, the song was “Mood Indigo,” and the artist was Duke Ellington.
article
false
include_in_hero_section
false
category
Articles
topic
  • Creative Frontiers
article_view
article content only
social_image
false
is_displayed
true
display_order
Article

Practice Didn’t Die, It Moved: Auto-Tune and Death Cab for Cutie

Read on Creative Frontiers.

The indie rock band Death Cab for Cutie arrived at the 2009 Grammy Awards in protest. With baby blue ribbons prominently pinned to their lapels, they decried a contaminant sweeping the globe. It poisoned natural beauty, concealed human error, and bulldozed diversity. Not oil. Not chemicals. Auto-Tune.

photo credit: twentyfourbit

On the red carpet, they warned of a music industry awash in the “digital manipulation” of thousands of singers. But this admonition wasn’t new. It was another verse for the chorus that has echoed since the first vocoders crackled to life. It didn’t sound human, critics had charged; it scrubbed away the small imperfections that make performances feel alive and authentic.

Bassist Nick Harmer added that because of Auto-Tune, “musicians of tomorrow will never practice. They will never try to be good, because yeah, you can do it just on the computer.” We’ve heard this lyric before.

Another musician had similarly worried over a machine in music: “And what is the result? The child becomes indifferent to practice.” When music can be easily acquired, he continued, “without the labor of study and close application, and without the slow process of acquiring a technic, it will be simply a question of time when the amateur disappears entirely….” That wasn’t a concern about Auto-Tune. That was renowned composer and bandleader John Philip Sousa in 1906, troubled by the player piano. Different gadget, same prophecy.

But Sousa was wrong. In the years after his warning that player pianos would diminish the public’s interest in learning, the opposite occurred. According to a 1915 article in The Musical Quarterly entitled “The Occupation of the Musician in the United States,” census data revealed that between 1890 and 1910, the number of piano teachers in the U.S. per thousand people rose from 1.2 to 1.5, a 25% increase in the rate even as the population itself was growing fast. Perhaps the player piano slowed the growth, but the desire to make music certainly didn’t die; it adapted. Practice rarely disappears, it just sometimes migrates.
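To see how a 25% rise in the rate translates into a much larger rise in the number of teachers, here is a quick back-of-the-envelope check in Python; the rounded census populations (about 63 million Americans in 1890, 92 million in 1910) are figures we have added, while the per-thousand rates are the article’s own.

    # Back-of-the-envelope check: a 25% rise in the per-capita rate,
    # compounded with population growth, yields a much larger headcount rise.
    pop_1890, pop_1910 = 62_900_000, 92_200_000      # approximate census counts
    rate_1890, rate_1910 = 1.2 / 1000, 1.5 / 1000    # piano teachers per capita

    teachers_1890 = pop_1890 * rate_1890             # roughly 75,000
    teachers_1910 = pop_1910 * rate_1910             # roughly 138,000

    rate_growth = rate_1910 / rate_1890 - 1          # 25% rise in the rate
    count_growth = teachers_1910 / teachers_1890 - 1 # roughly 83% more teachers
    print(f"rate: +{rate_growth:.0%}, headcount: +{count_growth:.0%}")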

That’s the pattern. The microphone changed the frontier from lung power to mic craft. Drum machines spread precision from wrists to arrangement. Sampling expanded creativity from takes to crate-digging and taste. Auto-Tune, used as an instrument instead of spackle, prized design and studio judgment. The practice didn’t vanish, it just morphed and moved.

Why, then, do we get obituaries each time? Part of it is that these “practice panics” aren’t just about sound; they’re also about status. They can be a contest over who defines “real.” Norm guardians such as unions, established tastemakers, conservatories, critics, and fans police the boundaries of “authentic” practice. If legitimacy has long been signaled by a specific kind of labor, a tool that reduces that labor can look like cultural vandalism. Thus, these prophetic proclamations of future despair can sound noble, cloaked in virtue (“for the craft”), but they may be an effort to protect yesterday’s pecking orders.

This is not to say that concerns are simply cynicism. We all build our identities around the techniques we’ve developed through blood, sweat, and tears. A new tool can rightly feel like an assault on meaning. But history shows us that though difficult to navigate, practice adapts and the tent gets bigger.

As the world around us continues to evolve quickly, it’s important that we keep the target in sight and separate the ends from the means. The end is expression, or in other fields it can be preserving resources, improving health, or something else; the means are the tools, and they can change without the sky falling. Perhaps it’s a question of scrutinizing the conduct instead of regulating the capability. We didn’t outlaw microphones because crooning scandalized 1928, and we shouldn’t bury pitch correction because 2009 felt overscrubbed.

“Real” lives in the listener’s gut, not in the checklist of chores that deliver it. Even Auto-Tune can be a new grammar, a new way to sculpt the soundwaves to create an authentic experience. So, the debate shouldn’t be about destroying the tool, it should be about how best to teach and to learn the craft where it now lives.

Innovation keeps relocating the work. Artists keep chasing it because that’s where the meaning is, and where there’s a chance to land a song that rings true because it enriches someone’s life.

Custom Fields

authors
hook
The indie rock band Death Cab for Cutie arrived at the 2009 Grammy Awards in protest. With baby blue ribbons prominently pinned to their lapels, they decried a contaminant sweeping the globe. It poisoned natural beauty, concealed human error, and bulldozed diversity. Not oil. Not chemicals. Auto-Tune.
article
false
include_in_hero_section
false
category
Articles
topic
  • Creative Frontiers
article_view
article content only
social_image
false
is_displayed
true
display_order
Article

Before iPhones and ChatGPT, Venice Had Its Own Tech Panic

Read on Creative Frontiers.

Filippo was convinced that the kids were in trouble. A flashy new machine was hijacking their attention, exposing them to risqué material, and turning brains to mush, while the “tech bros” shrugged and made more. So he did what any concerned citizen would do: he wrote a letter to City Hall. Technically, it was 1474 and “City Hall” was Nicolo Marcello, Venice’s chief magistrate. The machine was the printing press. Filippo de Strata wanted it shut down.

Reading his plea, “lest the wicked should triumph,” is pure déjà vu. Ignore the courtly flattery (“may you hold sway forever… exalted as you deserve”) and the SAT words (“circumlocution”), and you’re basically at a modern Hill hearing about iPhones or ChatGPT. Same script, different nouns.

It shouldn’t be all that surprising. Human concerns don’t update as fast as the tech does. In fact, they remain pretty constant. A Benedictine monk writing five centuries ago with ink-stained fingers sounds a lot like a 2025 think tanker with a ring light. De Strata follows a classic playbook that resonates today: jobs, authenticity, and the children.

First, jobs. This is the economy. The printing press, he says, is putting “reputable writers” out of work while “utterly uncouth types of people” (printers) muscle in with their “cunning.” As a professional scribe, De Strata depended on a business model of scarcity: slow, meticulous processes. The press messed it up. “They print the stuff at such a low price that anyone and everyone procures it for himself in abundance.” Translation: scarcity for others pays my rent; abundance for others puts me out of a job.

Next, authenticity. This is sociology. Who gets to be “real”? Every scene has gate-keepers, norm-guardians that define the rules and police the border between authentic and counterfeit. De Strata draws a clear line with gusto. “True writers” wield goose-quills, printers are “drunken” and “uncultured…asses.” He explains that the work of the author is a superior art form. Writing is a “maiden with a pen” until she suffers “degradation in the brothel of the printing presses.” Then literature becomes a “harlot in print” and a “sick vice.” Tell us how you really feel, Filippo.

He also polices credentials. Printing, he worries, allows people to buy their way into expertise. For a small sum, “doctors” can be made in only three years. It’s the timeless concern that new tools compress the distance between novice and master—or create false senses of mastery. A decade ago it was weekend masterclasses, MOOCs, and Wikipedia challenging traditional passages of learning (never mind simply staying at a Holiday Inn Express last night). Today, self-publishing, Substack threads, YouTube explainers, and X let anyone speak with an expert cadence. The question, though, isn’t whether the gate got wider, but how we measure real mastery.

Finally, think of the children. Cheap, easily accessible books, he warns, are vehicles of debauchery and impurity that are corrupting kids. Maybe that’s just rhetorical gasoline for his arguments to catch fire, or maybe it was a sincere pastoral concern for the next generation. As a dad who’s watched his kids disappear into a screen too often, I totally get the concerns. Either way, the “for the kids” refrain reliably clothes his economic and status concerns in civic virtue.

Unfortunately for Filippo de Strata, City Hall didn’t bite. Printers kept printing, presses multiplied, and Venice became the hottest book town in Europe. The printing press didn’t end scholarship; it multiplied the scholars. His letter didn’t stop the presses, but it left us a helpful snapshot of how we react when new tools arrive.

A 500-year-old letter is more than a curiosity, it’s a diagnostic. Objections to new tools cluster in timeless buckets: economic pain (who loses their job?), social status (who defines “real”?), and moral urgency (what about the kids?). When a fresh technology arrives, we can map the reactions and work to distinguish measurable harms from preferences for yesterday’s workflows.

De Strata wanted the future to behave like the past. Venice chose to bargain with the future, building guardrails that let abundance work for more people. The kids still need guidance. Experts still matter. But the threatening tool can become the instrument that broadens who gets to read, think, and make.

Custom Fields

authors
hook
Filippo was convinced that the kids were in trouble. A flashy new machine was hijacking their attention, exposing them to risqué material, and turning brains to mush, while the “tech bros” shrugged and made more. So he did what any concerned citizen would do: he wrote a letter to City Hall. Technically, it was 1474 and “City Hall” was Nicolo Marcello, Venice’s chief magistrate. The machine was the printing press. Filippo de Strata wanted it shut down.
article
false
include_in_hero_section
true
category
Articles
topic
article_view
article content only
social_image
is_displayed
true
display_order
hero_image
hero_order
Article

AI in Music Feels Familiar: The Silent Album, Sousa, and Déjà Vu

Read on Creative Frontiers.

You know the feeling, that eerie sense you’ve already lived this moment. You know the feeling, that eerie sense you’ve already lived this moment. Dad joke deployed! Déjà vu! Let’s press on.

I had a déjà vu moment a few months ago with a song that wasn’t a song. In late February 2025, a thousand U.K. artists released an album of silence. Multiple studios, one sound: nothingness. It’s called Is This What We Want?, and it’s less a new vibe than a brick through the policy window. The track titles, read in order, spell out a message to Parliament: “The British Government Must Not Legalise Music Theft to Benefit AI Companies.” The argument is that proposed reforms to U.K. copyright law will allow generative artificial intelligence to replace musicians. They believe the studios will fall silent and the machines will take the gigs.

I put the record on (insert joke about adjusting the EQ) while prepping a conversation on AI in music with drummer Elmo Lovano (Go with Elmo! and JammCard) and AI expert Neil Chilson (those CSPAN clips!). In doing a little research, I fell back into 1906 and met an old friend from your Fourth of July playlist, John Philip Sousa. I grew up listening to my amazing WWII-veteran grandfather wear out those march records: “Stars and Stripes Forever,” “Semper Fidelis,” and “The Washington Post.”

In 1906, Sousa wrote an article, “The Menace of Mechanical Music,” that sounds like a century-old oppo piece on AI. He writes that if machines can steal music from artists, it will destroy “further creative work,” a world where “the amateur [musician] disappears entirely” and where, for the professionals, “compositions will no longer flow from their pens.” More machines, fewer musicians. Déjà vu. Only he wasn’t worried about neural nets; he was concerned about the player piano.

One of Sousa’s concerns outlined in his oppo piece

A few weeks later, as I was getting ready to talk with Jarobi White from A Tribe Called Quest, the echoes got louder. I kept running into similar indictments. This isn’t real creativity, some say. It just copied music that came before, stealing bits and pieces from other artists, slicing them up, and recombining them without permission. It cheapens the art. It steals jobs from real musicians. Mark Volman of The Turtles summed it up, saying, “[It] is just a longer term for theft. Anybody who can honestly say [that it] is some sort of creativity has never done anything creative.” Déjà vu. Volman and the others weren’t talking about AI. They were talking about sampling.

Then came more conversations with legendary producers, Om’Mas Keith and Jimmy Jam. More déjà vu as they shared stories about responses to innovations in the creative spaces. The nouns changed (piano rolls, drum machines, synthesizers, DAWs) but the verbs and arguments rhymed. I wanted to learn more.

That’s the seed of Creative Frontiers. I’m not here to crown winners, write manifestos, or install a master theory. This is a learning tour. I want to understand why these arguments against new technology sound the same across centuries, what’s genuinely new each time, and what previous debates and resolutions can teach us today.

The question at the center is: How do humans respond to innovation and what can we learn to make more Makers, and consequently more abundance and more human flourishing?

Now, I’m not anti-alarm. Some alarms save lives and catalogs. I’m just pro-curiosity. The silent album is a statement. Sousa’s commentary was one too. Both carry a real fear: losing what we love, and what is good, to a machine. But history suggests that most of the time, the machine ends up in the band, and for the better. The player piano didn’t erase voices; it taught songs to households without a teacher. Samplers didn’t end creativity; they helped create new genres.

Maybe AI will be different. Maybe not. Either way, I want to understand the patterns before we write the rules.

No grand conclusions, just an invitation. If you’re curious about how creativity and innovation and technology keep bumping into each other, and why the soundtrack of that collision keeps repeating, pull up a chair. I’ll bring the archives. You bring your questions and ideas. Let’s see what we can learn together.

Custom Fields

authors
hook
You know the feeling, that eerie sense you’ve already lived this moment. You know the feeling, that eerie sense you’ve already lived this moment. Dad joke deployed! Déjà vu! Let’s press on.
article
false
include_in_hero_section
false
category
Articles
topic
  • Creative Frontiers
article_view
article content only
social_image
false
is_displayed
true
display_order
Article

The United States already leads the world in high-tech development. But policy, not technology, now stands in our way.

Published in The Columbus Dispatch.

The Right to Compute Act might sound abstract, but it’s about something every Ohioan should care deeply about: the freedom to think, build and innovate with the tools of the modern age.

Over the past two years, states have raced to regulate artificial intelligence — which is just another way of saying “advanced computing.”

More than 1,000 AI-related bills have been introduced nationwide, from deepfake bans to rules for “high-risk” algorithms.

Some are necessary; others risk overreach. What’s often missing in these debates is a simple baseline: the recognition that Americans have a fundamental right to use computers — to access and apply computational power — without government permission or arbitrary limits.

That’s what Montana affirmed earlier this year when it became the first jurisdiction in the world to enact a Right to Compute Act. 

The law guarantees that individuals and organizations can own and use computational resources — hardware, software, algorithms, even quantum systems — unless the government can show a compelling reason to restrict them. It pairs that freedom with sensible guardrails for critical infrastructure, requiring companies to follow national safety frameworks like NIST’s AI Risk Management Framework.

Now Ohio has the opportunity to join Montana.

The Buckeye State is already a computing powerhouse.

The data center corridor outside Columbus is home to Amazon Web Services, Google and Meta facilities.

Intel’s $20 billion chip-manufacturing investment near New Albany promises to make Ohio a global center for advanced computation. Universities like Ohio State and Case Western Reserve are training the next generation of AI researchers and engineers.

But this promise comes with risk.

Technology could be restricted

Some lawmakers in other states are flirting with laws that restrict access to computing power based on who you are, how much you use or what you’re building. 

California and New York have floated measures to license AI developers or cap computing use at arbitrary thresholds. President Biden’s now-revoked Executive Order 14110 tried to impose federal controls on AI development based on raw computing-power thresholds — an approach copied from Europe’s more bureaucratic AI Act.

Without a clear right to compute, Ohio’s innovators could face the same uncertainty.

Entrepreneurs and researchers need to know that they can build, experiment and scale without the rug being pulled out from under them by a regulator who suddenly decides their computer is “too powerful.” The act also protects the rights of individual citizens to use and operate computers, from the smartphone to the home server.

The Right to Compute Act is not a “hands-off” approach to AI.

Act will ensure balance

It simply restores constitutional balance: The government must justify restrictions, not the other way around. Fraud, deception and harassment remain illegal, and critical-infrastructure systems must still follow recognized safety standards.

For Ohioans, this means economic growth grounded in freedom. The same principles that made this state a manufacturing and research leader in the 20th century can make it a leader in 21st-century innovation.

A legal guarantee of computational freedom tells investors, students and entrepreneurs alike: Ohio is open for building.

This isn’t a partisan idea.

Montana’s version passed with strong bipartisan support. Protecting lawful access to computational tools is a practical step toward ensuring that AI and advanced computing benefit everyone, from small businesses in Dayton to students at Ohio State and farmers using smart equipment in rural counties.

Ohio can set global standard

History teaches that rights are easiest to defend before they’re lost.

Just as free speech protections had to be reaffirmed for the internet age, the right to compute updates a timeless principle for a new era: Citizens, not bureaucracies, should decide how they use their tools of thought.

If Ohio enacts this law, it won’t just follow Montana’s example; it will set a global standard for freedom, innovation and competitiveness.

Legislators should seize this opportunity to keep the Buckeye State at the forefront of America’s, and the world’s, technological future.

In a world where governments are beginning to decide who may compute and who may not, Ohio can send a clear message: In this state, the power to think, build and innovate belongs to the people.

Custom Fields

hook
Legislators should seize this opportunity to keep the Buckeye State at the forefront of America’s, and the world’s, technological future.
article
false
include_in_hero_section
false
category
Op-eds
topic
  • Technology
technology_subtopic
  • Artificial intelligence
article_view
article content only
social_image
false
is_displayed
true
display_order
Article

Bolstering Data Center Growth, Resilience, and Security

Introduction and Summary

Thank you for the opportunity to participate in this Request for Comment. The Abundance Institute is a mission-driven nonprofit focused on creating space for emerging technologies to grow, thrive, and reach their full potential. Data centers are the backbone for developing new technologies in both digital and physical spaces. I am Josh T. Smith, the Energy Policy Lead at the Institute. Our energy policy work has focused on interconnection queues, data center regulation, and institutional differences in the governance of regional transmission operators (RTOs).

Reporting and public conversations around data centers have correctly identified energy supply as the critical problem. But the concern is often overstated: by looking only at the growing demand for electricity, these discussions effectively ignore half of the equation.

My central advice to the National Telecommunications and Information Administration (NTIA) is to examine both sides, supply and demand. There are both large energy users looking for ways to meet their energy needs and substantial energy resources looking to connect and supply that energy. A successful NTIA report would establish what holds back would-be energy suppliers from serving that demand and recommend solutions for regulators at every level.

To summarize our suggestions for the eventual National Telecommunications and Information Administration report on data centers, NTIA should:

  1. Design and suggest policies that leverage market signals to guide energy investments. 
  2. Encourage federal, state, and local action to streamline permitting of data centers and their related energy infrastructure. In particular, NTIA should encourage regional transmission operators and states to consider how they interconnect resources. Texas, with its energy-only market and a philosophy of “connect and manage,” is the only system operator not slowing dramatically.1
  3. Resist calls to require additionality in the supply of energy sources in favor of relying on market signals to energy suppliers and private additionality and matching efforts. 
  4. Allow and encourage innovative solutions to energy needs, such as co-location and flexibility, to continue evolving and developing. To maintain certainty as people experiment, policymakers should apply existing and well-known cost allocation principles to these new business practices.

My reply to the request for comments is responsive to questions 1, 2(a), 2(c), 2(e), 3(a), 3(b), 3(c), 4(c), 5(e), 7, 7(a), 7(b), 7(c), 7(e), 7(f), and 11.

Building Abundant, Reliable Energy for All Users

In question 3(a), the NTIA asks if “an imbalance between demand and supply” of energy is expected. Blackboard drawings of supply and demand curves from Economics 101 imply a more fixed view of markets by focusing on an end state rather than the process. 

In reality, supply and demand equilibrate through the many different choices and actions of many different actors. The long-run and short-run equilibria can be very different: short-term price increases incentivize new entrants, which brings prices back down. Prices are usually cast as the villain in public discussions. Economists instead emphasize that prices are the heroes. Policymakers should approach energy questions with this process, and the role of prices within it, in mind.
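
To make that process concrete, here is a toy sketch of entry responding to price signals. All numbers are illustrative assumptions chosen only to show the dynamic, not a model of any actual market:

  # A toy model of the entry process described above: high short-run prices
  # attract new supply, which pulls prices back toward cost over time.
  # All numbers are illustrative assumptions, not a market model.

  demand = 120.0      # fixed demand (GW)
  supply = 100.0      # initial supply (GW)
  entry_rate = 0.25   # GW of new entry per $/MWh of price above cost
  cost = 50.0         # long-run marginal cost ($/MWh)

  for year in range(5):
      price = cost + 2.0 * (demand - supply)       # shortage pushes price up
      supply += entry_rate * max(price - cost, 0)  # high prices attract entrants
      print(f"year {year}: price=${price:.0f}/MWh, supply={supply:.1f} GW")
  # Prices spike at first, then fall back toward cost as entrants close the gap.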

In practice, this means considering what prevents supply from entering the market. Here, the answers are straightforward. Addressing energy needs swiftly and effectively requires a dual focus on permitting reform and interconnection improvements. 

  • To reform the interconnection process, the NTIA should encourage RTOs and states to learn from the successes of the Texas “connect and manage” style of regulation.2 The energy-only system is simpler for compliance and evaluation. It allows dramatically greater amounts of energy supply to be connected to the system in much less time.3 In addition, researchers have recently laid out fundamental and extensive deficiencies in the capacity market approach.4 
  • On permitting reforms, the NTIA should encourage state and local governments to expedite permits for data centers and related energy infrastructure. There are also growing numbers of barriers to renewable projects, such as local bans on wind and solar.5 Even homeowners associations are sometimes barriers to installing solar, batteries, or other energy technologies at residential locations.6 The NTIA should recommend ways to overcome this localized opposition.7

The last 20 years are a better guide than the last 24 months 

Neither of these two changes, permitting reform or interconnection queue solutions, represents an overnight fix. Taking a view of the next few years, rather than what has happened in the last few weeks, is vital for setting good policy. The history of energy and computing is a more useful guide than intemperate news reports. Keep in mind that dramatic improvements have been seen in computing efficiencies. One team summarized the global trend as a six-fold increase in computing with only a one-quarter increase in energy use.8 There is little reason to doubt continued efficiencies. 
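
A quick check of the arithmetic behind that trend, taking “six-fold” and “one-quarter” as the round numbers from the summary above:

  # Rough arithmetic behind the efficiency trend summarized above.
  # "Six-fold computing, one-quarter more energy" taken as round numbers.

  compute_growth = 6.0   # computing output grew ~6x
  energy_growth = 1.25   # energy use grew ~25%, i.e. 1.25x

  efficiency_gain = compute_growth / energy_growth
  print(f"Compute per unit of energy improved ~{efficiency_gain:.1f}x")
  # -> ~4.8x: most new demand was absorbed by efficiency, not new supply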

Past misses in estimating the future energy requirements of the internet and personal computing should feature prominently alongside claims that data centers will consume outsized shares of electricity.9 The early history of personal computers was replete with poor analysis. Echoes of this can be seen today in confusion between the growth rates and the absolute growth implied by data center expansions.10
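
The confusion is easy to state with numbers. In the hypothetical below, the loads and growth rates are made up solely to illustrate the rate-versus-level distinction:

  # Hypothetical numbers, chosen only to illustrate rate-vs-level confusion.

  data_center_load_twh = 200   # small base, fast growth
  data_center_growth = 0.20    # 20% per year

  rest_of_grid_twh = 4000      # large base, slow growth
  rest_of_grid_growth = 0.02   # 2% per year

  print("Data centers add:", data_center_load_twh * data_center_growth, "TWh")  # 40.0
  print("Rest of grid adds:", rest_of_grid_twh * rest_of_grid_growth, "TWh")    # 80.0
  # A growth *rate* ten times higher still yields half the absolute growth here.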

To the extent that recent news reports have highlighted energy consumption increases or emissions increases, these reflect temporary trends and upfront costs in developing AI. As artificial intelligence improves, we should both see energy efficiency rise and discover new ways to reduce environmental costs.11 Because energy costs are a substantial portion of data center operations, there are natural and pre-existing motives for data centers to find solutions that reduce those costs. 

Additionality requirements are counterproductive and unnecessary

Marrying reforms that streamline permitting with ill-defined questions of additionality is impossible. An additionality requirement merely substitutes one regulatory thicket for another. The arguments around hydrogen tax credits are a concrete example of the problems of mandated additionality. A requirement that data centers bring their own supply, whether that is defined as “clean” or defined as “dispatchable,” introduces uncertainty and discourages data center development.12

Because the interconnection queue is overwhelmingly made up of clean generators, there is no need to apply additionality requirements to data centers. That is, requiring data centers to build equal supplies of their own energy generation is misplaced. Instead, regulators should focus on removing barriers to new supply entering the market. As I wrote in Heatmap with Alex Trembath of the Breakthrough Institute:

There are more than enough clean generators queueing to enter the system — 2.6 terawatts at last count, according to the Lawrence Berkeley National Laboratory. The unfortunate reality, however, is that just one in five of these projects will make it through — and those represent just 14% of the capacity waiting to connect. Still, this totals about 360 gigawatts of new energy generation over the next few years, much more than the predicted demand from AI data centers. Obstacles to technology licensing, permitting, interconnection, and transmission are the key bottlenecks here.
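
Those figures hang together arithmetically; a quick sanity check using only the numbers quoted above:

  # Sanity check of the passage quoted above, using its own numbers.

  queued_gw = 2600         # 2.6 TW waiting in interconnection queues (LBNL)
  completion_share = 0.14  # ~14% of queued capacity historically connects

  print(f"Expected new generation: ~{queued_gw * completion_share:.0f} GW")
  # -> ~364 GW, consistent with the "about 360 gigawatts" in the quote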

Finally, data center companies are already investing significant resources into building more generation on top of matching their demand with real-time clean energy generation. There is no need to mandate ongoing actions. NTIA should consider recommending that agencies work with companies in their private pursuits to green their energy consumption and supply chains, for example by assisting with relevant data collection or by making it easier to build the relevant energy assets.

Co-location should be allowed to develop further under existing cost allocation rules

The emergence of co-location between energy generation and data centers shows that the electricity market remains an innovative area. The Federal Energy Regulatory Commission’s recent conference demonstrates that there are open questions about co-location.13 Co-location should be allowed and further studied, and traditional cost-causation principles should be applied to prevent cost-shifting. 

In addition, policymakers must consider the long term. Complaints that a data center co-locating with an existing nuclear or other “clean-firm” generator takes supply from the market or other consumers are short-sighted and fundamentally confused. This is the way all markets work. If I purchase a loaf of bread, then that loaf is no longer available to you. However, my purchase encourages breadmakers to expand the supply. Electricity is certainly a more complicated good than bread, but the market process in the background is the same.14 Policies directly lowering the cost of new entry for energy suppliers will go much further than objecting to new business models for data centers and energy companies, models that may actually reduce total system costs.

Flexibility from data centers should be enabled but not required

Similarly, the ability of energy consumers to flexibly adapt to grid conditions is a young practice, and public agencies should not yet impose firm requirements on it. However, the NTIA could recommend that state and local regulators begin reconsidering how to design rates that encourage forms of flexibility that do not fit the already familiar versions of demand response. One example is a 2016 data center development in Wyoming, which uses its backup generation to serve the wider grid, reducing costs for both the data center and the local grid.15 

These actions should enable two forms of flexibility. First, the flexibility that comes from relying on backup and co-located energy assets in response to grid conditions must be enabled by policy. Data center companies have already shown interest in this option. Second, flexibility in the nature of the computing at the data center may also face policy barriers. Some data centers require 100 percent uptime. Other workloads with less stringent latency requirements can be shifted off the system’s peak times to support the grid’s safe and reliable operation. 

Regulators need to enable such cases of flexibility. One option is to create a process for joining the system that accounts for the expected peak-load contributions of flexible loads, as sketched below. Requiring that all data centers adopt such practices would backfire because computing needs differ across computing uses. However, adding new pathways onto the grid expands options and possible business models. Because the system is heavily permissioned today, new options are valuable to operators and data centers.
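
As one hypothetical version of such a process, an operator could plan around a flexible load’s expected behavior at system peak rather than its nameplate draw. The names and numbers below are assumptions for illustration, not an actual interconnection or tariff rule:

  # Hypothetical sketch: crediting a flexible load for curtailing at system peak.
  # Parameters are illustrative assumptions, not an actual interconnection rule.

  nameplate_mw = 300        # maximum draw of the data center
  curtailable_share = 0.40  # share of load that can shift off-peak
  performance = 0.90        # probability curtailment performs when called

  expected_peak_mw = nameplate_mw * (1 - curtailable_share * performance)
  print(f"Expected contribution to system peak: {expected_peak_mw:.0f} MW")
  # -> 192 MW: the system plans around ~64% of nameplate for this load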

Requiring flexibility or enrollment in traditional demand response programs, or singling out data centers to be first to have their loads shed, sets poor incentives for the entire system. It privileges specific solutions in a novel industry, where such rules could easily prevent better solutions from emerging.

Conclusion

By fostering a market-driven approach to energy access and encouraging permitting reform, the NTIA can create a supportive environment for data centers, facilitating their role in driving technological advancement and economic growth across the country. 

I appreciate your efforts on this question and would welcome the opportunity to work with you or answer further questions if I can be of any assistance.


1 Josh T. Smith, “ERCOT Is the Only One Getting Energy-Only Right,” Powering Spaceship Earth, August 3, 2024, https://poweringspaceshipearth.substack.com/p/ercot-is-the-only-one-getting-energy.

2 Tyler H. Norris, “Beyond FERC Order 2023: Considerations on Deep Interconnection Reform” (Duke University Nicholas Institute for Energy, Environment, and Sustainability, August 2023), https://nicholasinstitute.duke.edu/sites/default/files/publications/beyond-ferc-order-2023-considerations-deep-interconnection-reform.pdf; Tyler H. Norris, “Pre-Workshop Comments and Exhibit of Tyler H. Norris of Duke University,” Pre-workshop Comments AD24-9-000, September 2024, https://nicholasinstitute.duke.edu/publications/comments-ferc-workshop-innovations-efficiencies-generator-interconnection.

3 Josh T. Smith, “ERCOT Is the Only One Getting Energy-Only Right,” Powering Spaceship Earth, August 3, 2024, https://poweringspaceshipearth.substack.com/p/ercot-is-the-only-one-getting-energy; Josh T. Smith, “Is All This Red Tape Really to Protect Incumbents?,” Powering Spaceship Earth, May 3, 2024, https://poweringspaceshipearth.substack.com/p/is-all-this-red-tape-really-to-protect; Josh T. Smith, “How Texas Builds and Grows,” Powering Spaceship Earth, April 25, 2024, https://poweringspaceshipearth.substack.com/p/how-texas-builds-and-grows.

4 For an overview of these problems, see the work of Todd Aagaard and Andrew N. Kleit, especially their book Electricity Capacity Markets (Cambridge, United Kingdom New York, NY: Cambridge University Press, 2022).

5 Lawrence Susskind et al., “Sources of Opposition to Renewable Energy Projects in the United States,” Energy Policy 165 (June 1, 2022): 112922, https://doi.org/10.1016/j.enpol.2022.112922; Elizabeth Weise and Suhail Bhat, “Local Governments Block Green Energy: Here’s How USA TODAY Measured the Limits Nationwide,” USA TODAY, accessed November 4, 2024, https://www.usatoday.com/story/news/investigations/2024/02/04/green-energy-nationwide-bans/71841275007/; Matthew Eisenson et al., “Opposition to Renewable Energy Facilities in the United States: June 2024 Edition” (Columbia Climate School Sabin Center for Climate Change Law, June 2024), https://scholarship.law.columbia.edu/sabin_climate_change/226/; James W. Coleman, “Overcoming Local Roadblocks to Energy Transport and a Cleaner New Energy System” (American Enterprise Institute, August 2022), https://www.aei.org/wp-content/uploads/2022/08/Overcoming-Local-Roadblocks-to-Energy-Transport-and-a-Cleaner-New-Energy-System.pdf?x91208.

6 Josh T. Smith, “Making It Easier to Put up Rooftop Solar: Technically Legal, Hard to Get,” Powering Spaceship Earth, May 25, 2024, https://poweringspaceshipearth.substack.com/p/making-it-easier-to-put-up-rooftop.

7 In 2022, the National Renewable Energy Laboratory released databases of barriers to wind and solar development. “NREL Releases Comprehensive Databases of Local Ordinances for Siting Wind, Solar Energy Projects,” National Renewable Energy Laboratory, August 9, 2022, https://www.nrel.gov/news/program/2022/nrel-releases-comprehensive-databases-of-local-ordinances-for-siting-wind-solar-energy-projects.html.

8 Eric Masanet et al., “Recalibrating Global Data Center Energy-Use Estimates,” Science 367, no. 6481 (February 28, 2020): 984–86, https://doi.org/10.1126/science.aba3758.

9 See, for example, the careful work of Jonathan Koomey as compared to other claims that computers would use half of all electricity. For a useful overview, Robinson Meyer’s reporting for Heatmap is an excellent introduction: “Is AI Really About to Devour All Our Energy? There is precedent for this panic,” Heatmap, April 16, 2024, https://heatmap.news/technology/ai-energy-consumption. For the academic debunking of more extreme claims, see: Jonathan G Koomey, “Worldwide Electricity Used in Data Centers,” Environmental Research Letters 3, no. 3 (July 2008): 034008, https://doi.org/10.1088/1748-9326/3/3/034008; Jonathan G. Koomey et al., “Sorry, Wrong Number: The Use and Misuse of Numerical Facts in Analysis and Media Reporting of Energy Issues,” Annual Review of Energy and the Environment 27, no. 1 (November 2002): 119–58, https://doi.org/10.1146/annurev.energy.27.122001.083458; Jonathan Koomey, “Separating Fact from Fiction: A Challenge for the Media [Soapbox],” IEEE Consumer Electronics Magazine 3, no. 1 (January 2014): 9–11, https://doi.org/10.1109/MCE.2013.2284952; Jonathan G Koomey, “Rebuttal to Testimony on ‘Kyoto and the Internet: The Energy Implications of the Digital Economy,’” n.d.

10 Josh T. Smith, “Doubling a Pebble Doesn’t Make a Mountain,” Powering Spaceship Earth, September 6, 2024, https://poweringspaceshipearth.substack.com/p/doubling-a-pebble-doesnt-make-a-mountain; Josh T. Smith, “A Crisis of Our Own Making?,” Powering Spaceship Earth, August 16, 2024, https://poweringspaceshipearth.substack.com/p/a-crisis-of-our-own-making.

11 Josh T. Smith, “Operation Gigawatt: It’s Ok to Want More,” Powering Spaceship Earth, October 27, 2024, https://poweringspaceshipearth.substack.com/p/operation-gigawatt-its-ok-to-want; Josh T. Smith, “Doubling a Pebble Doesn’t Make a Mountain,” Powering Spaceship Earth, September 6, 2024, https://poweringspaceshipearth.substack.com/p/doubling-a-pebble-doesnt-make-a-mountain; Josh T. Smith, “A Crisis of Our Own Making?,” Powering Spaceship Earth, August 16, 2024, https://poweringspaceshipearth.substack.com/p/a-crisis-of-our-own-making; Josh T. Smith, “Magic Machines,” Powering Spaceship Earth, August 9, 2024, https://poweringspaceshipearth.substack.com/p/magic-machines.

12 Alex Trembath and Josh T. Smith, “Abundance Will Meet the Energy Demands of AI,” Heatmap News, October 15, 2024, https://heatmap.news/ideas/abundance-additionality-permitting-reform.

13 Federal Energy Regulatory Commission, Technical Conference on Co-Located Load, AD24-11-000, https://elibrary.ferc.gov/eLibrary/docketsheet?docket_number=AD24-11-000&sub_docket=All&dt_from=1960-01-01&dt_to=2024-10-29&chklegadata=false&pageNm=dsearch&date_range=custom&search_type=docket&date_type=filed_date&sub_docket_q=Allsub.

14 For example, see the commentary of Devin Hartman and Kent Chandler of the R Street Institute on the Amazon and Talen deal: https://www.rstreet.org/commentary/the-fuss-and-advantages-of-siting-large-consumers-at-power-plants/.

15 Shayle Kann, “Can chip efficiency slow AI’s energy demand?,” Catalyst Podcast, https://www.latitudemedia.com/news/catalyst-can-chip-efficiency-slow-ais-energy-demand; Josh T. Smith, “Microsoft’s 2016 flexible data center,” https://x.com/smithtjosh/status/1813977990155682139.

Custom Fields

hook
By fostering a market-driven approach to energy access and encouraging permitting reform, the NTIA can create a supportive environment for data centers, facilitating their role in driving technological advancement and economic growth across the country. 
article
https://www.abundance.institute/wp-content/uploads/2025/10/24151318/AbundanceInstitute-2024-EmergingTech-Publications-NOV-DataCenterGrowthResilienceandSecurityComment.pdf
is_displayed
true
display_order
category
Regulatory comments
authors
article_view
article content only
social_image
topic
  • Energy
  • Technology
evergy_subtopic
  • Electricity
technology_subtopic
  • Artificial intelligence
include_in_hero_section
true
hero_image
hero_order
Article

States Can Lead A New Atomic Age

Today presents a unique opportunity for states to step into the driver’s seat on nuclear policy:

  • A federal administration supportive of nuclear.
  • High demand for electricity is a certainty.
  • New users have significant resources and are eager to invest in energy sources that are reliable, clean, and available 24/7.
  • Idaho National Laboratory has 11 test reactors in an accelerator program slated to be tested by July 4, 2026, a number of new designs unseen since the early days of American nuclear power.
  • An ongoing lawsuit led by Texas, Utah, and several nuclear companies could give states new authority over small modular reactor development.

Despite this, most states are not yet prepared to seize the moment. The Overturn Prohibitions & Establish a Nuclear Coordinator (OPEN Act) model policy will prepare states to lead a new atomic age.

The OPEN Act lays the groundwork for states to permit and begin construction on new nuclear facilities within 180 days. It draws on the recent experience Utah, Texas, and other states have gained in laying guardrails and tracks for new nuclear development.

What the OPEN Act does

  • Ends nuclear bans and special hurdles.
  • Prevents new nuclear bans.
  • Creates a one-stop state-level lead and authority for nuclear development.
  • Sets fast, concurrent review expectations.

Benefits

The OPEN Act advances a state’s interest in a reliable and affordable energy supply. It capitalizes on a moment that may never come again. Your entire state will benefit, and America will continue to lead the world:

  • Economic growth in artificial intelligence and manufacturing will be supercharged by safe, reliable nuclear power.
  • New growth will mean new jobs in the nuclear sector and in the attendant industries, powered by new reactors.
  • Boosted tax revenues will flow to local and state governments from the development of new energy infrastructure and 24/7 data centers.

Today’s conditions echo an era when America built nuclear power plants swiftly, safely, and cheaply. In 1968, Connecticut Yankee came online after roughly five years of construction, at a price tag of about $1 billion in today’s dollars. Since then, the regulatory process has smothered new nuclear proposals, and only a few new plants have come online. Those plants were years behind schedule and ran billions of dollars over budget.

We can realize a future of “too cheap to meter” with quick and definite action today.

Resources

“A Lawless NRC Obstructs Safe Nuclear Power,” Wall Street Journal, Christopher Koopman and Eli Dourado, Jan. 5, 2025.

Josh T. Smith, House Oversight Committee testimony on nuclear policy and the role of states in building nuclear swiftly, safely, and cheaply.

Learn more

State of Texas v. U.S. Nuclear Regulatory Commission

OPEN Act text (model policy for consideration at ALEC)

Custom Fields

authors
hook
Today presents a unique opportunity for states to step into the driver’s seat on nuclear policy.
article
https://www.abundance.institute/wp-content/uploads/2025/12/22123445/Energy-Nov-2025-New-Atomic-Age.pdf
include_in_hero_section
false
category
Primers
topic
  • Energy
evergy_subtopic
  • Nuclear
article_view
article content only
social_image
false
is_displayed
true
display_order
Person

Jan Zilinsky

Custom Fields

name
Jan Zilinsky
Person

Thomas Zeitzoff

Custom Fields

name
Thomas Zeitzoff
Person

Austin Vernon

Custom Fields

name
Austin Vernon
Person

Nirit Weiss-Blatt

Custom Fields

name
Nirit Weiss-Blatt
Person

Adam Thierer

Custom Fields

name
Adam Thierer
Person

Logan Whitehair

Custom Fields

name
Logan Whitehair
Person

Matt Perault

Custom Fields

name
Matt Perault
Person

David Puelz

Custom Fields

name
David Puelz
Person

Joel Stonedale

Custom Fields

name
Joel Stonedale
Person

David J. Teece

Custom Fields

name
David J. Teece
Person

Eli Dourado

Custom Fields

name
Eli Dourado
Person

Alex Trembath

Custom Fields

name
Alex Trembath
Person

John Villasenor

Custom Fields

name
John Villasenor
Person

Lynne Kiesling

Custom Fields

name
Lynne Kiesling
Person

Lauren Wagner

Custom Fields

name
Lauren Wagner
Person

Jesse Peltan

Custom Fields

name
Jesse Peltan
Person

James Ostrowski

Custom Fields

name
James Ostrowski
Person

J. Storrs Hall

Custom Fields

name
J. Storrs Hall
Person

Orly Lobel

Custom Fields

name
Orly Lobel
Person

Jason Carman

Custom Fields

name
Jason Carman
Person

Jane Bambauer

Custom Fields

name
Jane Bambauer
Person

Jason Feifer

Custom Fields

name
Jason Feifer
Person

Chris Koopman

Custom Fields

name
Chris Koopman
Person

John Hardin

Custom Fields

name
John Hardin
Person

Richard Evans

Custom Fields

name
Richard Evans
Person

Parker Jeppesen

Custom Fields

name
Parker Jeppesen