How States Are Setting Boundaries for AI Amid Limited Federal Oversight

With little federal regulation in place, U.S. state legislatures have become the primary arenas for establishing safeguards around artificial intelligence. The clear rejection in Congress of a proposed pause on state-level AI laws has effectively left states free to keep addressing the regulatory void.

Several states have already enacted laws governing AI, and legislatures in all 50 states introduced AI-related bills in 2025.

From a regulatory standpoint, four key areas are drawing the most attention: government use of AI, healthcare applications, facial recognition, and generative AI.

Government Use of AI
Ensuring oversight and accountability is particularly important when AI is used in the public sector. Predictive AI—used for analyzing data to forecast outcomes—has reshaped how governments operate, influencing everything from social service eligibility to criminal sentencing and parole decisions.

However, relying heavily on algorithmic decision-making can introduce significant, often unseen, risks. These include racial and gender bias in systems used to deliver government services.

To address these concerns, many state legislatures have introduced bills aimed at regulating public sector AI use. These proposals focus on transparency, protecting consumers, and identifying potential risks tied to AI deployment.

Multiple states now mandate that AI developers disclose potential risks associated with their systems. Colorado’s Artificial Intelligence Act, for example, requires transparency and risk disclosure from both developers and deployers of AI used in consequential decision-making.

Montana’s recently passed “Right to Compute” law calls for AI developers working on systems tied to critical infrastructure to implement risk management frameworks that address privacy and security during development. Additionally, some states—like New York through its SB 8755 bill—are creating regulatory bodies to oversee and enforce AI-related rules.

Artificial Intelligence in Healthcare

In the first six months of 2025, lawmakers in 34 states proposed more than 250 health-related AI bills. These legislative efforts typically fall into four main categories: transparency, consumer protection, use of AI by insurers, and use of AI by healthcare providers.

Transparency bills establish what information AI developers and deploying organizations must disclose.

Consumer protection measures focus on preventing bias in AI systems and ensuring that people can challenge decisions made by the technology.

Legislation related to insurers addresses how insurance companies use AI to determine healthcare coverage and payments.

Bills concerning clinical applications regulate how healthcare professionals use AI for diagnosis and treatment.

Biometric Identification and Monitoring Systems

In the U.S., privacy protections have traditionally focused on safeguarding individual autonomy from government intrusion—a principle that also applies to facial surveillance. Within this framework, facial recognition technologies raise serious privacy concerns and introduce risks related to algorithmic bias.

Frequently used in predictive policing and national security, facial recognition systems have been shown to disproportionately misidentify people of color, raising red flags for civil rights advocates. Groundbreaking research by computer scientists Joy Buolamwini and Timnit Gebru revealed that these systems struggle to accurately recognize darker-skinned individuals, highlighting their potential to harm marginalized groups.

Bias can also stem from the training data used to develop these algorithms, particularly when development teams lack racial and gender diversity.

By the close of 2024, 15 U.S. states had passed legislation aimed at curbing the harms of facial recognition. These laws often include requirements for vendors to disclose bias testing results, outline their data handling practices, and incorporate human oversight when deploying these systems.

Generative AI and Foundation Models

The rapid adoption of generative AI has raised concerns among lawmakers across various states. In Utah, the Artificial Intelligence Policy Act requires individuals and organizations to disclose, when asked, that they are using generative AI in an interaction. Lawmakers later narrowed that requirement to higher-risk contexts, such as interactions that involve giving advice or collecting sensitive information.

In 2024, California enacted AB 2013, a generative AI law requiring developers to publicly share details on their websites about the data used to train their AI systems, including foundation models. Foundation models are trained on massive datasets and can be adapted to many different tasks with little or no additional training.

Historically, AI developers have not been transparent about their training data. Laws like these aim to improve visibility and potentially assist copyright holders in addressing how their content is used during AI training.

Efforts to Bridge the Gap

In the absence of a unified federal framework, many states have stepped in with their own AI legislation to address the regulatory gap. Although this state-by-state approach may create compliance challenges for AI developers, it also offers valuable oversight in areas like privacy, civil rights, and consumer protection.

On July 23, 2025, the Trump administration released its AI Action Plan, declaring that the federal government would withhold AI-related funding from states that impose what it considers burdensome AI regulations.

This policy could discourage states from enacting stronger AI regulations if doing so risks losing access to essential federal funding, effectively limiting their ability to govern AI technologies within their jurisdictions.


Read the original article on: Tech Xplore
