Vice President JD Vance recently urged Europe to scale back its AI regulations, but efforts to impose similarly restrictive rules are underway in the United States and could stifle innovation here. Despite President Trump's pro-innovation executive order replacing the Biden administration's restrictive AI framework, some organizations are pushing for state-level regulations that mirror the previous administration's approach.
The Future of Privacy Forum (FPF), a non-profit that receives significant federal funding, is reportedly drafting state bills built around vague concepts like "algorithmic discrimination" and "high-risk" systems. Such broad terms could hand regulators excessive discretion, deterring startups and growing tech companies alike.

Venture capitalist Marc Andreessen has voiced similar concerns about overly restrictive AI regulations. Meanwhile, progressive groups, having spent years building organizations focused on AI "safety," are poised to staff and influence any new regulatory bodies, potentially expanding government control over the industry.
This push for regulation ignores Hayek's "knowledge problem": central authorities lack the dispersed, local knowledge needed to manage complex, evolving systems effectively. Sweeping regulations with unclear mandates invite cronyism, favoring established companies that can afford compliance while hindering smaller innovators. Even exemptions for startups can solidify the dominance of tech giants, since compliance costs kick in just as a company begins to scale.

While proponents argue these measures address "algorithmic harms," existing laws already cover issues like defamation and fraud, whether or not an algorithm is involved. States can update their legal codes to close genuine gaps without creating new bureaucracies. Targeted interventions are preferable to broad frameworks that could benefit Big Tech at the expense of smaller competitors.
States should avoid creating new regulatory bodies that could hinder innovation. Instead, they should focus on fostering a thriving entrepreneurial ecosystem that can drive American leadership in AI.