SACRAMENTO, the United States, Jan. 22 (Xinhua) -- On his first day back in office, U.S. President Donald Trump revoked a landmark artificial intelligence (AI) executive order signed by former U.S. President Joe Biden, igniting widespread concern about the risks of allowing AI development to proceed unchecked.
The order, originally signed by Biden in October 2023, laid out comprehensive guidelines to ensure the safe and responsible rollout of AI across sectors such as healthcare, national security, and the economy.
With a single stroke of the pen, Trump effectively eliminated the requirement that AI developers submit safety test results to the government, throwing into doubt the future of the U.S. AI Safety Institute, which the U.S. Department of Commerce established under Biden's order.
Critics argue that removing federal oversight could accelerate the deployment of risky AI systems lacking rigorous vetting. Jennifer Everett, a partner in Alston & Bird's technology and privacy group, noted that while the federal government appears to be loosening its regulatory grip, individual states may step forward to fill the void.
Alexandra Reeve Givens, CEO of the Center for Democracy and Technology, also criticized the repeal, noting that Biden's executive order was designed to promote the safe and responsible use of AI tools. With important guardrails now removed, Americans could be exposed to greater risks, especially as tech companies race to push the boundaries of innovation.
Trump's decision aligns with the 2024 Republican Party platform, which pledged to rescind Biden's order amid claims that it stifled innovation. Seen as part of a broader deregulatory push designed to spur economic growth and maintain America's technological edge, the move nonetheless raises serious alarms about ethical standards, accountability, and the broader societal impacts of AI.
IMPACTS TO COME
Without a formal requirement for safety assessments, experts warn, AI systems could be released without proper evaluation of their bias, reliability, or security.
One particular concern is the ethical minefield of generative AI models, which can already manufacture convincing misinformation and deepfakes -- tools that, if left unregulated, could stoke social unrest or be weaponized for disinformation campaigns.
The absence of federal oversight leaves the question of accountability in limbo. Under Biden's framework, government-led safety testing provided a level of public assurance and held developers to more stringent standards. Now, this responsibility shifts entirely to private companies, many of which may prioritize profitability over ethics.
Trust could erode as a result, especially in critical sectors like healthcare and finance where public confidence is paramount. Under Biden's executive order, the U.S. Department of Health and Human Services was already in the process of creating a safety program for AI deployments in medical and public health services -- progress that now hangs in the balance.
A further complication is the likely emergence of a fragmented regulatory landscape, as individual states craft their own rules in the vacuum left by the federal government. This disjointed patchwork could spawn compliance challenges for AI developers operating across multiple jurisdictions, potentially driving up costs and dampening the very competitiveness Trump's deregulation purportedly aims to bolster.
Moreover, withdrawing federal leadership on AI governance could weaken America's influence on the global stage, particularly in negotiations around international AI standards. Other nations are forging ahead with cohesive frameworks, and the United States now risks ceding its leadership role to them.
The disbanding of the U.S. AI Safety Institute would be especially consequential. The institute was established to coordinate research, set safety benchmarks, and foster collaboration between public agencies and private industry; its dissolution removes a vital cornerstone for managing the complexities of advanced AI.
Without it, the government may be ill-equipped to mitigate emerging threats, ranging from AI-driven cyberattacks to privacy breaches via large-scale data collection.