Expert: Biden’s AI Order Has ‘Wrong Priorities’ Despite Some Positive Reviews

President Joe Biden signed a “landmark” executive order (EO) on artificial intelligence, drawing mixed reviews from experts in the quickly developing technology.

“One key area the Biden AI (executive order) is focused on includes the provision of ‘testing data’ for review by the federal government. If this provision allows the federal government a way to examine the ‘black box’ algorithms that could lead to a biased AI algorithm, it could be helpful,” said Christopher Alexander, chief analytics officer of Pioneer Development Group.

“Since core algorithms are proprietary, there really is no other way to provide oversight and commercial protections,” added Alexander. “At the same time, this needs to be a bipartisan, technocratic effort that checks political ideology at the door, or this will likely make the threat of AI worse rather than mitigate it.”

Alexander’s comments follow Biden’s unveiling of a long-anticipated executive order containing new regulations for AI, which the president hailed as the “most sweeping actions ever taken to protect Americans from the potential risks of AI systems.”

The executive order requires AI developers to share the results of safety tests with the government, erects guardrails meant to protect Americans’ privacy as AI technology grows rapidly and creates standards to monitor and ensure the safety of AI.

“AI is all around us,” said Biden before signing the order, according to a report by The Associated Press. “To realize the promise of AI and avoid the risk, we need to govern this technology.”

Jon Schweppe, policy director of the American Principles Project, said the concerns about AI that led to the executive order are “warranted” and complimented some details of the order, but argued that parts of it focus “on the wrong priorities.”

“There’s a role for direct government oversight over AI, especially when it comes to scientific research and homeland security,” said Schweppe. “But ultimately, we don’t need government bureaucrats micromanaging all facets of the issue. Certainly, we shouldn’t want a Bureau of Artificial Intelligence running around conducting investigations into whether a company’s AI algorithm is adequately ‘woke.'”

Schweppe argued there is a role for “private oversight” of the growing technology while noting AI developers should be exposed to “significant liability.”

“AI companies and their creators should be held liable for everything their AI does, and Congress should create a private right of action giving citizens their day in court when AI harms them in a material way,” said Schweppe. “This fear of liability would lead to self-correction in the marketplace — we wouldn’t need government-approved authentication badges because private companies would already be going out of their way to protect themselves from being sued.”

Order was designed to build on voluntary commitments by some large tech companies

The order was designed to build on voluntary commitments President Biden helped broker with some large technology companies earlier in the year, which require the firms to share data about the safety of AI with the government.

Ziven Havens, policy director of the Bull Moose Project, said Biden’s order is a “decent first attempt at AI policy.”

“A significant portion of the EO is setting expectations for guidelines and regulations for topics including watermarks, workforce impact, and national security,” said Havens. “All of which are crucial in the future of this new technology.”

Havens cautioned, however, that concerns remain about “how long it will take to develop this guidance.”

“Falling behind in the AI race due to a slow and inefficient bureaucracy will amount to total failure,” said Havens.

Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation, said Biden’s order was “thorough” but questioned whether it attempted to “take on too much.”

Siegel argued there are “four pillars to AI regulation”: protecting vulnerable populations, such as the elderly and children; developing laws that “take into account the scope of AI”; ensuring algorithms are fair by eliminating bias; and ensuring safety and trust in algorithms.

“I would give the EO high marks on (pillars) three and four and more of an incomplete on one and two,” said Siegel. “Sadly, there is only so much that can be done in (an) EO anyway, and it is necessary for Congress to engage with the White House to make some of this into law.”