The threat of on-chip AI hardware controls

Jai Vipra | CyberBRICS Fellow, Center for Technology and Society, FGV Direito Rio

Introduction

Computational infrastructure is one of the most important constraints on AI development. Chips used to train and run AI models are expensive to produce and supplied by an extraordinarily concentrated market via a complex international supply chain. 

Recently, governments have become concerned about the strategic value of AI. This strategic value has two dimensions: first, AI can function as a lever of innovation and economic development, creating an advantage for any country that controls it [1]; second, AI can provide a direct defence or military advantage through its use in surveillance and weapons systems [2]. Separately, security risks can arise from the malfunctioning, misuse, or malicious use of AI. These considerations have led to the United States’ restrictions on the export of the most advanced AI chips to China [3].

Against this backdrop, some analysts have recommended a policy of intervening at the design and manufacturing stages of AI chips to include hardware modifications that achieve policy goals [4]. This policy – hereafter “hardware controls” – is specifically recommended as a way for the United States to maintain its advantage in advanced chip design technology in relation to China and the rest of the world. It is also discussed as a way to ensure that ultimate control over AI development remains with a government (de facto the US government) [5].

The fact that all the most advanced AI chips today use American technology at some point in their value chain would allow the US government to unilaterally impose hardware controls by making them a condition of export. Existing export controls on semiconductor technology follow this route as well, invoking the foreign direct product rule (a regulation administered by the US Department of Commerce) to prohibit semiconductor manufacturing companies anywhere in the world from making AI chips for Chinese design companies [6]. China has already initiated a dispute at the WTO against these export controls, accusing the United States of using export controls meant for global security and non-proliferation in the service of commercial technological leadership [7].

Hardware controls are conceived as an extension of these export controls that would include the ability to monitor and modify the use to which AI chips are put. They can allow a third party to monitor the kind and level of usage of a given chip or cluster of chips; they can also allow that party to remotely shut down the functioning of chips, and therefore of an AI system [8]. In a sense, hardware controls are a kind of technological protection measure (TPM) [9], but unlike well-known applications of TPMs such as paywalls, read-only controls, and watermarks, they are not being proposed (at least overtly) primarily for the protection of commercial interests through intellectual property. As we shall see, they might end up primarily protecting intellectual property for commercial reasons anyway.

The primary justifications provided for hardware controls include making chips “secure” and “governable” [10]. Sastry et al. (2024) acknowledge some risks related to hardware controls, in particular risks to security and privacy and the risk of abuse of power [11]. They also contend that some risks, especially those affecting national security, might be so large as to call for ex-ante measures like hardware controls. This article challenges the primacy of national security in that argument and analyses the problems posed by hardware controls even if they could be safeguarded against data security and privacy harms.

Problems posed by hardware controls

Undue surveillance and government control

It is well known that government control over certain aspects of technology can lead to function creep – the extension of a law, rule, or technology beyond its intended purpose [12]. For instance, facial recognition technology can be introduced to police departments as a tool to find missing children, but soon ends up being used for mass surveillance [13]. Effective hardware controls would make it possible, and perhaps even easy, for governments to monitor and shut down any computing activity considered inconvenient or dangerous. Under a state that favours certain corporations over others, this can include even normal commercial activity.

Commercial control

The latter concern is one of over-regulation of commercial computing activity. A certain degree of openness in both software and hardware has been, and continues to be, crucial to computational progress. Bob Young, then CEO of open-source software provider Red Hat, noted in the introduction to ardent open-source advocate Eric Raymond’s influential 1999 book The Cathedral and the Bazaar that the computer hardware industry innovated much faster than the computer software industry because hardware was freer [14]. In contrast, software controls like digital rights management (DRM) have locked consumers in to certain providers by making a move to another provider onerous, concentrated various markets (especially in entertainment), and led to obsolescence and the impossibility of preserving “protected” intellectual material [15].

Regulation is ill-suited to determining where exactly in the supply chain open source might add the most value. The concepts of hardware freedom and free and open source hardware (FOSH) have served as a rallying cry against government overreach and the inappropriate propertization of hardware technology [16]. Some developers have coalesced around similar ideas in the realm of chips, under the Free and Open Source Silicon Foundation [17]. The “right to repair” movement likewise responds to the closedness of devices and the loss of consumer control over their use [18].

This is not to say that the opposite of hardware controls is open hardware. Central to the vision of proponents of hardware controls, however, is a system of temporary licences that allows the hardware provider, or a third party, to determine the purpose and manner of hardware use.

Efforts to limit the use of products once they are sold have been controversial. In The End of Ownership (2016), law professors Aaron Perzanowski and Jason Schultz describe the erosion of personal property in the digital age, with providers of goods and services arguing that licensing allows them to arbitrarily delete or control what they provide [19]. Instituting hardware controls means that chip designers, fabricators, and/or the US government will be able to determine the purposes for which, and the manner in which, advanced AI chips are used. This places arbitrary restrictions on the ability of commercial and even academic entities to train AI models in the direction they choose. In the context of a very concentrated AI market, such controls do not bode well for new competitors, and consequently for innovation.

Distrust in international relations

Importantly, some researchers consider the concentration in hardware markets desirable because it makes these markets easier to regulate, prevents a reckless race in hardware development, and preserves US hegemony over advanced computation. Certainly not everyone shares this desire for US hegemony, and although the implications of one government determining what the entire world can do in advanced AI training are rarely discussed, those implications are wide-ranging [20]. For this reason alone it is important to counter these efforts. However, the same researchers also claim that hardware controls can “widen the space of possible [international] agreements and policies by providing a trustworthy verification platform” [21]. “Trustworthy” here is used in a technical sense: countries could mutually monitor AI training activity in one another’s territory and use hardware controls to “trust” their own monitoring.

The larger question of trust is ignored in this hypothetical. If the United States can unilaterally impose hardware controls on advanced chips, there is no way to verify that it does not itself manufacture (or allow to be manufactured) chips that cannot be monitored, or that it does not create monitoring backdoors known only to itself. There is an even larger question of trust in international relations: in an environment of intense rivalry between the US and China, and plummeting faith in the rules-based international order in the Global South, unilateral actions aimed at depriving other countries of advanced general-purpose technology can hardly be expected to increase trust. International relations scholars have shown that trust is a precondition for international agreements rather than a result of them [22]. A cycle of misperceptions about the motives and actions of rivals in AI governance is already underway, threatening strategic stability [23]. Trust-building measures, in both the rationally calculated and emotive senses [24], must occur before an international agreement on AI governance can be arrived at.

Industrial policy is a legitimate goal, but US export controls on AI chips, as well as proposed hardware controls, conflate national security goals with industrial policy goals. This is a general characteristic of US AI policy so far, which has not followed the US government’s usual approach of distancing industrial policy from national champions [25].

The promotion of delinking

One consequence of growing mistrust in international relations is the delinking of technological systems. Biden’s February 2024 Executive Order to “protect Americans’ sensitive personal data” explicitly targets China’s AI development [26]. Russia and China already have independent digital markets and infrastructure, and the United States, Russia, and China have each banned the use of some types of foreign technology in government work. After the US implemented export controls on advanced chips, China accused it of “unilateral bullying” [27]. While US Secretary of State Antony Blinken later stated that the export control measures were not meant to impede China’s progress, China has taken these measures as an impetus to develop its own advanced chip technology. It has since taken several steps to build domestic capability for chip design and production, and has succeeded in important respects [28]. This success has come despite expert predictions that it would be nearly impossible under escalating export controls [29].

In the US, China’s successes in overcoming some aspects of export controls are used by many commentators to argue for further export controls, and the call for hardware controls is an example of such a push. Here, we ought to examine an assumption that drives these calls: that AI is economically and strategically useful, and so the US must limit China’s chip development. But if AI is indeed that economically and strategically useful, China will have extremely strong incentives to dramatically increase investment in chip development if the US tries to restrict chip technology exports to China from all countries. In trying to restrict China’s access to advanced chips, the US may be spurring on China’s own delinked chip industry.

In addition, other countries are learning from this example and investing in their own semiconductor industries and in international projects to stave off geopolitical risk at least partially. Various countries including the US have been active in developing RISC-V, an open-source instruction set architecture (ISA) – the interface between hardware and software used by chip designers. Using an open-source ISA reduces dependence on licensed ISA providers such as Arm and Intel, which is why India, China, and the EU have been trying to advance its development [30]. All these actors consider such open-source projects important for digital sovereignty.

Delinking is not objectively undesirable. Paris Marx (2024) envisions a collective splinternet, without massive global platforms and with interoperability and open protocols [31]. Such a system would make regulation more straightforward because it does not pit single nation-states against global platforms. Parallel AI development in various parts of the world might prevent the rise and entrenchment of global corporate AI giants, make regulation easier for governments, and provide choices for consumers. Hardware controls are intended to preclude such possibilities entirely, leaving only US AI giants able to develop advanced AI.

In any case, as argued earlier, hardware controls might render themselves ineffective by increasing incentives for delinking and for growth in the semiconductor industries of target countries. Even if their primary purpose goes unfulfilled, their function creep will remain – the increased ability of governments to surveil individuals and corporations, and the increased ability of corporations to control hardware.

Resisting hardware controls 

We have seen that there are compelling reasons to resist hardware controls. Unlike with open-source projects of the past, it is difficult for such resistance to arise from a dissident group of programmers: the semiconductor industry is highly concentrated, its intellectual property is heavily guarded, and chip production requires enormous amounts of capital [32]. Even the most successful open-source projects of the past have faced legal challenges [33]. It is therefore incumbent upon policymakers and civil society to resist the implementation of hardware controls in the first place.

Once implemented, hardware controls will likely fail to reach their original objectives (some of which are not necessarily desirable), and can entrench government and corporate control over hardware. AI research is already concentrated in industry rather than academia, and government interests in AI are not always aligned with public interests in AI. Policymakers must look beyond short-term and vague fears over national security and consider the broader implications of hardware controls.

Endnotes

1. Amba Kak and Sarah Myers West, ‘A Modern Industrial Strategy for AI?: Interrogating the US Approach’, AI Nationalism(s) (AI Now Institute, 12 March 2024), https://ainowinstitute.org/publication/a-modern-industrial-strategy-for-aiinterrogating-the-us-approach.

2. Vincent Boulanin et al., ‘Artificial Intelligence, Strategic Stability and Nuclear Risk’ (SIPRI, June 2020), https://www.sipri.org/publications/2020/policy-reports/artificial-intelligence-strategic-stability-and-nuclear-risk; AI Now Institute, ‘US-China AI Race: AI Policy as Industrial Policy’, 11 April 2023, https://ainowinstitute.org/publication/us-china-ai-race.

3. Gregory C. Allen, ‘Choking off China’s Access to the Future of AI’ (Center for Strategic and International Studies, 10 November 2022), https://www.csis.org/analysis/choking-chinas-access-future-ai.

4. Onni Aarne, Tim Fist, and Caleb Withers, ‘Secure, Governable Chips’ (Center for a New American Security, 8 January 2024), https://www.cnas.org/publications/reports/secure-governable-chips; Luke Muehlhauser, ‘12 Tentative Ideas for US AI Policy’, Open Philanthropy (blog), 17 April 2023, https://www.openphilanthropy.org/research/12-tentative-ideas-for-us-ai-policy/; Gabriel Kulp et al., ‘Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090’ (RAND Corporation, 18 January 2024), https://www.rand.org/pubs/working_papers/WRA3056-1.html.

5. Ibid.

6. Gregory C. Allen.

7. World Trade Organization, ‘China Initiates WTO Dispute Complaint Targeting US Semiconductor Chip Measures’, 15 December 2022, https://www.wto.org/english/news_e/news22_e/ds615rfc_15dec22_e.htm.

8. Onni Aarne et al. 

9. A TPM is a technical tool that allows a provider or third party to restrict the kinds and levels of use of digital materials. Access-control TPMs include time limits, such as for digital movie rentals, among other measures, and copy-control TPMs include, for instance, blocking downloads of streaming content. For more, see: ‘Technological Protection Measures (TPM) - Fact Sheet | SFU Library’, Simon Fraser University, 2023, https://www.lib.sfu.ca/help/academic-integrity/copyright/technological-protection-measures.

10. Onni Aarne et al.

11. Girish Sastry et al., ‘Computing Power and the Governance of Artificial Intelligence’ (arXiv, 13 February 2024), https://doi.org/10.48550/arXiv.2402.08797.

12. Bert-Jaap Koops, ‘The Concept of Function Creep’, Law, Innovation and Technology 13, no. 1 (2 January 2021): 29–56, https://doi.org/10.1080/17579961.2021.1898299.

13. Jai Vipra, ‘The Use of Facial Recognition Technology for Policing in Delhi’ (New Delhi: Vidhi Centre for Legal Policy, 16 August 2021), https://vidhilegalpolicy.in/research/the-use-of-facial-recognition-technology-for-policing-in-delhi/.

14. Bob Young, ‘Introduction’, in The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, by Eric S. Raymond, revised edition (O’Reilly, 2001).

15. ‘DRM and the Secret War inside Your Devices’, in The End of Ownership: Personal Property in the Digital Economy, by Aaron Perzanowski and Jason Schultz (The MIT Press, 2018), https://doi.org/10.7551/mitpress/10524.001.0001.

16. See for instance: ‘Our Mission | FreeIO’, accessed 4 May 2024, http://freeio.org/about-freeio/our-mission/.

17. See: ‘FOSSi Foundation: The International Not for Profit Organisation Which Promotes and Protects the Open Source Silicon Chip Movement’, accessed 4 May 2024, https://fossi-foundation.org/.

18. See: ‘Learn About the Right to Repair’, The Repair Association, accessed 2 May 2024, https://www.repair.org/stand-up.

19. Aaron Perzanowski and Jason Schultz, The End of Ownership: Personal Property in the Digital Economy (The MIT Press, 2018), https://doi.org/10.7551/mitpress/10524.001.0001.

20. Meredith Whittaker, ‘Social Media, Authoritarianism, and the World As It Is’, LPE Project (blog), 28 March 2024, https://lpeproject.org/blog/social-media-authoritarianism-and-the-world-as-it-is/.

21. Onni Aarne et al.

22. Brian C. Rathbun, Trust in International Cooperation: International Security Institutions, Domestic Politics, and American Multilateralism, Cambridge Studies in International Relations 121 (Cambridge, UK ; New York: Cambridge University Press, 2012). 

23. Anna Nadibaidze and Nicolò Miotto, ‘The Impact of AI on Strategic Stability Is What States Make of It: Comparing US and Russian Discourses’, Journal for Peace and Nuclear Disarmament 6, no. 1 (2 January 2023): 47–67, https://doi.org/10.1080/25751654.2023.2205552.

24. Torsten Michel, ‘Time to Get Emotional: Phronetic Reflections on the Concept of Trust in International Relations’, European Journal of International Relations 19, no. 4 (December 2013): 869–90, https://doi.org/10.1177/1354066111428972.

25. Amba Kak and Sarah Myers West.

26. Riley Griffin and Jennifer Jacobs, ‘Biden Poised to Limit American Personal Data Going to China’, Bloomberg, 7 February 2024, https://www.bloomberg.com/news/articles/2024-02-07/biden-poised-to-restrict-americans-personal-data-going-to-china.

27. Ana Swanson, ‘U.S. Tightens China’s Access to Advanced Chips for Artificial Intelligence’, The New York Times, 17 October 2023, https://www.nytimes.com/2023/10/17/business/economy/ai-chips-china-restrictions.html.

28. Dylan Patel, Afzal Ahmad, and Myron Xie, ‘China AI & Semiconductors Rise: US Sanctions Have Failed’, SemiAnalysis (blog), 12 September 2023, https://www.semianalysis.com/p/china-ai-and-semiconductors-rise.

29. Gregory C. Allen.

30. Working group on open source hardware and software, ‘Recommendations and Roadmap for European Sovereignty on Open Source Hardware, Software and RISC-V Technologies’, 8 September 2022, https://digital-strategy.ec.europa.eu/en/library/recommendations-and-roadmap-european-sovereignty-open-source-hardware-software-and-risc-v; Press Information Bureau of India, ‘India Launches Digital India RISC-V (DIR-V) Program for Next Generation Microprocessors to Achieve Commercial Silicon & Design Wins by December 2023’, 27 April 2022, https://pib.gov.in/pib.gov.in/Pressreleaseshare.aspx?PRID=1820621; Stephen Nellis and Max A. Cherney, ‘RISC-V Technology Emerges as Battleground in US-China Tech War’, Reuters, 7 October 2023, https://www.reuters.com/technology/us-china-tech-war-risc-v-chip-technology-emerges-new-battleground-2023-10-06/.

31. Paris Marx, ‘Embrace the Splinternet’, Disconnect (blog), 2 May 2024, https://disconnect.blog/embrace-the-splinternet/.

32. Jai Vipra and Sarah Myers West, ‘Computational Power and AI’ (AI Now Institute, 27 September 2023), https://ainowinstitute.org/publication/policy/compute-and-ai.

33. Christopher Tozzi, For Fun and Profit: A History of the Free and Open Source Software Revolution (The MIT Press, 2017), https://doi.org/10.7551/mitpress/10803.001.0001.

STAIR Journal

St. Antony’s International Review (STAIR) is Oxford’s peer-reviewed Journal of International Affairs.