AI ‘Action Plan’ paves way for new controls on semiconductor subcomponents
The US government will explore new export controls on semiconductor manufacturing subsystems and create a ‘strategic plan’ to ‘induce allies to adopt…complementary export controls across the chain,’ the White House said in its new artificial intelligence strategy document, ‘Winning the Race – America’s AI Action Plan,’ published 23 July.
But the report is unflattering in its treatment of non-US attempts at regulation.
‘Too many’ efforts by international bodies, such as the G7, OECD and others, to establish AI governance frameworks and development strategies have ‘advocated for burdensome regulations, vague “codes of conduct” that promote cultural agendas that do not align with American values,’ or have been ‘influenced by Chinese companies attempting to shape standards for facial recognition and surveillance.’
Under Pillar III of the plan, the White House vows to ‘lead in international AI diplomacy and security’, driving the adoption of American AI systems and standards. The United States, it says, must meet global demand for AI because failing to do so would constitute an ‘unforced error’ that would cause allies to turn to rivals for their AI needs.
But, it says, the United States must deny foreign adversaries access to 'advanced AI compute [sic]' as a matter of both geostrategic competition and national security.
Amongst specific next steps, the plan says the administration will look to expand new initiatives promoting 'plurilateral controls for the AI tech stack, avoiding the sole reliance on multilateral treaty bodies to accomplish this objective, while also encompassing existing U.S. controls and all future controls to level the playing field between U.S. and allied controls,' and to
'[c]oordinate with allies to ensure that they adopt U.S. export controls, work together with the U.S. to develop new controls, and prohibit U.S. adversaries from supplying their defense-industrial base or acquiring controlling stakes in defense suppliers.'
‘Pain points’
Writing for the Atlantic Council's website in response to the Plan, Mark Scott, senior resident fellow at the Digital Forensic Research Lab's Democracy + Tech Initiative within the Atlantic Council Technology Programs, says that while much of it aligns with proposed EU policy,
‘[W]here problems likely will arise is how Washington seeks to promote a “Make America Great Again” approach to the export of US AI technologies to allies and the wider world.’
'Much of that focuses on prioritizing US interests, primarily against the rise of China and its indigenous AI industry, in multinational standards bodies and other global fora—at a time when the White House has significantly pulled back from previously bipartisan issues like the maintenance of an open and interoperable internet,' writes Scott. That dichotomy – 'where the United States and EU agree on separate domestic-focused AI industrial policy agendas but disagree on how those approaches are scaled internationally' – will, he adds, likely become a 'central pain point' in the EU-US technology relationship, and one that is unlikely to be eased at a time when 'EU and US officials are threatening tariffs against each other.'
Hallucinating Vikings?
The Trump administration's related executive order ('E.O.'), in which the President vows to 'prevent "Woke AI" in Federal Government,' signals a different kind of misalignment between the United States and Europe, which analysts say may also hinder take-up from allies who have traditionally shared values with the US.
The E.O. takes especial exception to 'so-called "diversity, equity, and inclusion" (DEI)', which, said Trump, 'poses an existential threat to reliable AI' by suppressing or distorting 'factual information about race or sex.'
By way of illustration, the E.O. says, 'One major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy.'
Another, he said, ‘refused to produce images celebrating the achievements of white people, even while complying with the same request for people of other races. In yet another case, an AI model asserted that a user should not “misgender” another person even if necessary to stop a nuclear apocalypse.’
The E.O. said that while the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, ‘in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas.’