Global Entanglement: AI, Innovation, and Influence

A look at the global struggle for AI dominance: regulatory landscapes, the battle for hardware infrastructure, the export of regional values, and shifting geopolitical tensions.

My research embodies the geopolitical entanglement of AI and AI robotics: I am an American researcher, living within the European Union, working with a Chinese robot. These three powers have emerged as the core of AI governance and development, and each has a distinct culture that sets the priorities for its regulatory landscape. The EU is rights-based, the US is largely innovation-driven, and China is primarily state-directed. Each regulatory landscape reflects the culture of its region and shapes how AI and AI robotics develop within the confines of its policies. A state’s reputation can greatly influence how the world perceives the technology emerging from it, and trust becomes a key factor in dealing with the governments and titans of industry behind the world’s leading innovators. AI and AI robotics also require a massive underlying infrastructure of hardware and software, and control over the resources sustaining that infrastructure can be highly politicized in today’s global economy and geopolitical landscape. Whoever exerts dominion over AI may well control our epistemological future, and several powers have emerged as key competitors. The question is who will win, and how their ethos will shape our shared global future alongside AI and AI robotics. Because my own research sits at the intersection of the triad of AI policy leaders, the US, the EU and China, I’ll focus on these three key players and their regulatory philosophies.

The EU, where I am conducting my research, holds fast to a vision of AI regulation that upholds the rights of users, prioritizes data privacy, preserves human dignity and promotes democratic values. The EU has produced some of the earliest, most respected and most widespread AI policy the world has seen so far, and these works have served as a gold standard now proliferating around the globe. The GDPR, or General Data Protection Regulation, governs how a user’s data can be collected, stored and processed. It guards against government and industry overreach through protective measures; it makes users active rights-holders with legal control over how AI interacts with their data; it enshrines the idea that humans must retain agency, oversight and recourse in the face of automated systems; and it protects democratic societies by putting limits on surveillance, data retention and opaque algorithmic governance. The AI Act similarly regulates data collection, storage and processing, but goes further to protect EU users from AI-specific misuse. This includes outright bans on practices like social scoring by governments, biometric surveillance in public spaces, AI systems that exploit vulnerable communities, and subliminal or manipulative AI techniques that distort behavior. High-risk AI is tightly regulated and must undergo strict compliance, including risk management systems, data quality controls, human oversight, and transparency and documentation requirements. This highly regulated landscape has produced some of the most trustworthy AI applications, but it has also slowed innovation and created a dependency on models imported from less regulated regions.

The US, where I am from, is a global leader in AI innovation and holds commercial dominance over the field. Its regulatory landscape is influenced by the industry itself and its powerful leaders. The frameworks for safety are voluntary and are often simply ignored for the sake of faster innovation and condensed timelines as businesses compete to get to market first. The AI industry in the US claims to self-regulate, with internal and independent research teams constantly pressure-testing the newest models as they are completed, yet very little is actually required of the companies for national compliance. Legal protections generally come at the state level, with California, my state and the home of Silicon Valley, being the most proactive. California created an Office of Artificial Intelligence to oversee AI governance, and the CCPA, the California Consumer Privacy Act, was amended to include provisions for AI profiling and automated decision-making. As far back as 2023, California introduced Assembly Bill 331, which would require impact assessments and algorithmic audits for automated decision systems used in high-stakes contexts like hiring, credit and education. This level of state regulation in California stands in stark contrast to incentive-oriented states like Texas, Utah and North Carolina. These states are banking on a future in AI and have created havens for innovation, combining lax regulation with strong investments in infrastructure. Texas invested heavily in AI research centers and semiconductor infrastructure while keeping regulation light; Utah created an AI research council and funded autonomous systems innovation zones; and North Carolina passed legislation funding AI workforce development, coding in schools, and partnerships between universities and local businesses to promote the use of AI. This lack of unified protection and the varied regulatory landscape mean that AI models coming out of the US, although dominant, are not necessarily subject to consistent protective compliance.

China, where my robot was created, views AI as a tool for social cohesion and strategic national advancement. Centralized planning means that China has cohesive, well-defined national AI strategies, and the government funds research to achieve them. China set out its goal of being a world leader in AI by 2030 in its 2017 New Generation Artificial Intelligence Development Plan (AIDP). The government often selects strong companies that align with its interests and offers them funding, support and favorable regulation to ensure their success, which in turn advances China’s broader national goals. China’s strict national security laws ensure cooperation with intelligence agencies and grant surveillance permissions to the government, ensuring that the technology is developed in loyalty to the state and can be pressed into state service at any time. Companies must comply with China’s 2017 National Intelligence Law, which requires all Chinese companies and citizens to assist in intelligence work when requested. There is also a strong emphasis on data sovereignty, which takes the form of strict restrictions on data transfer, laws mandating domestic storage, and regular national audits. One example is China’s Data Security Law of 2021, which strongly encouraged the development of data “clean zones” within China’s borders. And although China set forth its own GDPR equivalent in 2021, the Personal Information Protection Law (PIPL), the individual data rights it outlines remain subordinate to state interests. China’s national architecture offers cohesive opportunities for application across its military, government and industry; however, its powerful national security apparatus directly shapes how AI must operate, and this elicits international suspicion, as those who import China’s progress in AI also import the implications of its state values.

Just as each regulatory landscape is an expression of its region’s values and shapes its contributions to innovation in the field, each power also engages in a geopolitical tango in pursuit of AI infrastructure. This tango can involve negotiations for regional dominance and, in some cases, maneuvers against competitors’ access to key infrastructure for development. One hardware pillar of this infrastructure is the AI chip. AI chips are, in principle, similar to ordinary computer chips, but they are far more efficient and offer massive parallel processing capability: they can perform enormous numbers of calculations quickly and accurately, and they are necessary for the development and use of AI. The key players in the creation of these chips are the US, China and Taiwan. As early as 2022, the US restricted sales of advanced AI chips to China. The US also joined the “Chip 4” alliance, designed to safeguard the semiconductor supply chain and counter Chinese self-sufficiency; the alliance includes the US, Taiwan, South Korea and Japan. The US further encouraged the Netherlands and Japan to restrict the sale of precision chip fabrication equipment to China. China responded by investing billions in programs such as Made in China 2025 and the National IC Fund to promote its own chip self-reliance and independence. Amid this great geopolitical chip struggle, the EU launched the European Chips Act of 2022 in an effort to develop domestic semiconductor capabilities. The EU aims to hold 20% of global chip production by 2030, but for now remains dependent on AI chips imported from allied nations. The national importance of leading the global future of AI is underscored by the intense competition to control its budding infrastructure.
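To give a feel for why parallel hardware matters so much, here is a minimal Python sketch of my own (the matrix size and the use of NumPy are illustrative choices, not tied to any particular chip): it contrasts multiplying two matrices one multiply-add at a time, the way a single sequential core would grind through the work, with handing the same product to vectorized linear-algebra routines that exploit parallelism, the principle AI accelerators push much further with thousands of specialized units.

```python
import time
import numpy as np

n = 128  # kept small so the pure-Python version finishes quickly
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def matmul_sequential(x, y):
    """Multiply two matrices one multiply-add at a time, sequentially."""
    out = np.zeros((x.shape[0], y.shape[1]))
    for i in range(x.shape[0]):
        for j in range(y.shape[1]):
            for k in range(x.shape[1]):
                out[i, j] += x[i, k] * y[k, j]
    return out

start = time.perf_counter()
matmul_sequential(a, b)
print(f"sequential loop:   {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
a @ b  # dispatched to optimized, parallel linear-algebra routines
print(f"vectorized matmul: {time.perf_counter() - start:.6f}s")
```

On a typical laptop the vectorized product finishes orders of magnitude faster than the loop, and matrix multiplication is precisely the workload that dominates modern AI models, which is why chips built around this kind of parallelism have become such contested infrastructure.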

The products that emerge from each region act as ambassadors of national ideology, and the differences in the moral architecture they export will shape our shared global future. The regulations surrounding their operation in public settings will influence how these products are developed and applied. Their content moderation guidelines will interplay with global notions of acceptable content. And the language, history and bias baked into their training data will shape the knowledge that emerges from these AI products going forward. The future of AI is a global one, but the current race is run by global powers with vastly differing priorities. The entanglement and power struggle between these nations all happen to be embodied in the robot sitting in my living room. Ideally, coordinated global regulation will someday establish universal standards of compliance for risk and safety. Until then, we will likely see the competition intensify before any real interoperability and cooperation are introduced. In the meantime, users can do their best to choose AI developed in line with their values, one that offers the balance of accuracy and privacy that suits their personal preferences.
