Amid complex and contentious debates surrounding artificial intelligence (AI), a notable measure included in President Donald Trump’s expansive domestic policy bill is raising alarms across various sectors. The provision, if enacted, would bar states from enforcing regulations related to artificial intelligence for an entire decade. The prospect has sparked worry among tech advocates and industry observers increasingly aware of the profound implications of AI regulation, or the lack of it, for society.
The debate over the moratorium comes as artificial intelligence creeps into many aspects of daily life for Americans, spanning areas as crucial as healthcare, law enforcement, personal relationships, and employment. While Silicon Valley champions AI’s potential to solve complex problems, there are parallel concerns about the technology’s risks, among them job displacement for millions and the spread of misinformation that could disrupt democratic processes and societal norms.
Currently, there is no comprehensive federal law governing AI, although a recent measure, the Take It Down Act, signed by President Trump, criminalizes the non-consensual sharing of explicit images, indicating a growing recognition of the need for regulation in this sphere. Several states have taken the initiative to establish their own AI-related laws, targeting issues such as the use of deepfake technology in elections and AI-based discrimination in hiring.
Under the proposed legislation, existing state laws would likely become unenforceable if the Senate’s version of the “big, beautiful bill” reaches President Trump’s desk. Recently, Senate Commerce Committee Republicans linked the moratorium to essential federal funding for broader internet infrastructure. The tech community remains divided on whether a pause in state regulation is necessary: some argue it would prevent a fragmented regulatory landscape, while others firmly oppose the moratorium, fearing it would leave potential abuses of the technology unchecked.
Resistance to the provision has surfaced from various groups, including a bipartisan coalition of 40 attorneys general, led by North Carolina’s Jeff Jackson, urging lawmakers to abandon the moratorium. Jackson said he believes this aspect of the reconciliation bill may become a significant topic of discussion in the months ahead. In a recent interview with CNN, he expressed skepticism about Congress’s resolve to enact strict AI regulations, emphasizing both the immediate and long-term risks posed by such a moratorium.
When first alerted to the moratorium’s inclusion in the bill, Jackson was startled by its breadth and potential consequences. He described the provision as one that could effectively repeal many consumer and voter protection laws that states have established to mitigate harmful applications of AI. Specifically, Jackson warned that without regulatory safeguards, individuals could fall victim to manipulative AI tactics employed by malicious actors. As he explained, existing state laws protect voters and consumers by prohibiting harmful practices such as the unauthorized duplication of a person’s likeness using AI.
Furthermore, Jackson conveyed his fears about the future implications of such a regulatory pause, predicting an uptick in misinformation, particularly through deepfakes designed to deceive a wide audience. He expressed concern that the lack of legal recourse against AI-generated malicious content could leave citizens unprotected in the face of manipulated information campaigns.
In highlighting the stakes, Jackson noted that while he has local criminal laws to uphold, his larger concern is the legislative climate around AI regulation and the prospect that states would be unable to address emerging challenges. He expressed deep skepticism about Congress’s ability to regulate proactively: given lawmakers’ record of inaction on the internet and social media, Jackson fears the pattern will repeat with a technology as multifaceted and transformative as AI.
Amid debate over whether states should adopt their own regulations or defer to a unified national standard, Jackson stood firm in his belief that states’ autonomy to safeguard their constituents against AI misuse must not be sacrificed for the sake of convenience. He warned that accepting a decade of inaction could put citizens at greater risk than a patchwork of protective state laws would. As public discussion of AI grows more critical, the balance between innovation and regulation remains a paramount concern, prompting vigorous advocacy from governmental and non-governmental stakeholders alike.