The State of the Art of AI Regulations

Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation. – Stephen Hawking

Introduction:

Artificial Intelligence has fascinated humans since ancient times, but modern timelines of the phenomenon, especially as related to computation, generally begin with the mathematician Alan Turing’s 1950 paper, “Computing Machinery and Intelligence.” Though Turing set aside the question of whether machines can “think” as too imprecise to answer directly, he proposed a test of a machine’s capacity to simulate human intelligence. In the ensuing decades, computers’ capacity to store and process information has grown exponentially, and, in the past few years, the technology has become usable by the wider public.

Large language models are a specific form of AI system whose capacity for processing information and creating novel content has reached levels previously only imaginable in sci-fi films. Harvard researcher Harry Law’s Substack insightfully considers AI history and its substantial consequences, and Harvard and Live Science also offer comprehensive overviews of AI’s evolution. Before reaching the mainstream through ChatGPT and other services, AI had already been implemented in many industries, such as banking, marketing, and entertainment. In his article “The History of Artificial Intelligence,” Rockwell Anyoha writes that, over the years, improvement has been more noticeable in computing power and the management of big data than at the algorithmic level. [1]

Growing alongside the discussion of what this fascinating technology is and means are questions about whether it will surpass human intelligence, or even take over the world and eventually kill us all. So far, the risk of being enslaved by AI seems low, yet the technology’s capacity to replace workers, spread misinformation, or expand ecologies of warfare raises genuine and serious concerns.

Whatever the present state of AI, there is certainly a need for balance between fostering innovation and protecting society from potential harms. Governments and international institutions are coming together to debate how to proceed with the arduous task of regulating the production and use of this technology. In contrast to their usual reactive stance towards regulation, they are trying to confront the task proactively this time. In the case of AI, proactive regulation is imperative, given the technology’s rapid development and far-reaching implications. But finding consensus is hard.

It is impossible to cover everything happening in the race to govern AI, but we will try to give an overview of the main discussions under way. This text is meant not only to inform but also to compile a set of relevant resources that have helped us better understand the regulatory ecosystem around AI and the conversations currently being held. The specifics in this article will probably become obsolete very soon, but we hope that, cumulatively, the resources remain useful as a basis for ongoing updates.

AI Advancements:

There are many ways to categorise AI models, including Reactive AI and Limited Memory machines, [2] but the ones most discussed at present are Artificial General Intelligence (AGI), Artificial Superintelligence (ASI), and Generative Artificial Intelligence (GAI), amongst others. But why are these the focus of discussion? And what lies behind these widely circulated acronyms? The following are some definitions of the component systems involved in the development of AI discourse and modelling:

  1. Limited memory-based AI can store data from past experiences temporarily.
  2. General AI, also known as “artificial general intelligence” or AGI, refers to the concept of AI systems that possess human-like intelligence, i.e. systems of machinic cognition that can effectively simulate human activities (e.g. conversation) and potentially apply machinic reasoning to novel problems independently.
  3. Artificial superintelligence (ASI) refers to a machinic intelligence with cognitive capacities beyond human capabilities.
  4. Generative AI focuses on creating new content or ideas based on existing data. GAI has specific applications and is a subset of AI that excels at particular tasks. Typically, GAI is trained on an immense amount of data in order to create text, images, and audio at near-human levels of quality. At present, however, these programmes remain unable to judge the accuracy and topicality of the information they recombine and generate.
  5. Frontier AI models [3] are highly capable foundation models whose capabilities, and thus potential dangers, can arise unexpectedly, threatening public safety and global security.

Air Street Capital’s State of AI Report:

In the past few weeks, the AI investor Nathan Benaich and his venture capital fund Air Street Capital released their yearly report on the state of AI. Their findings are summarised by the writer Michael Spencer as follows:

  1. GPT-4 is the master of all it surveys (for now), beating every other LLM
  2. Efforts are growing to try to clone or surpass proprietary performance
  3. LLMs and diffusion models continue to drive real-world breakthroughs
  4. Computation is the new oil, with the chip designer NVIDIA printing record earnings and startups scrambling for access to its GPUs
  5. GenAI is saving the VC world amid a slump in tech valuations
  6. The safety debate has exploded into the mainstream
  7. Challenges mount in evaluating state-of-the-art models

Challenges and Concerns: What is at stake?

In May 2023, senior leaders of OpenAI, the company behind ChatGPT, argued that something like an International Atomic Energy Agency (IAEA) for AI may be required in the future to safely govern more sophisticated artificial intelligence systems. [4] A paper published in July 2023, titled “Frontier AI Regulation: Managing Emerging Risks to Public Safety” and co-authored by a large group of researchers, claims that, as AI’s capabilities continue to advance, new foundation models could pose severe risks to public safety, whether through misuse or accident. [5] So far, stopping a model’s capabilities from proliferating broadly (and preventing misuse) has proven not merely difficult but ultimately impossible.

The independent publisher Tech Policy Press has been reporting thoroughly on the main official attempts at regulation, interviewing relevant researchers such as Anu Bradford, the Columbia Law School scholar and author of “Digital Empires,” who notes that tech is increasingly the source of economic and geopolitical power. Geopolitical powers perceive each other as threats and competitors in the race to dominate AI, as it is “the key ingredient that determines the future of military power.” In this light, an important question, more pressing to potential regulators than the safety of users, is “who will have the supremacy.” [6]

Below are key risks that AI poses in the eyes of researchers such as Bradford and others:

  • Warfare

Geopolitical supremacy goes hand in hand with the management of military hegemony. This is a parallel “race that will overshadow any AGI or ASI prospect, as the competition between China and the US grows in intensity,” the AI writer, researcher, and art curator Michael Spencer claims. In his words: “Far more immediate than global warming and far more likely to be experienced than nuclear war. Far more threatening to our global economy, and the prospect of biotechnologies being unleashed and weaponised increases dramatically.” [7] AI-based projects by companies such as Anduril, Mach Industries, Scale AI, Palantir, and Lockheed Martin have proliferated in recent years, and their products might define the future of war, conflict, and space-tech. [8] The company Anduril, for instance, produces unmanned aerial systems (UAS), counter-UAS (CUAS), semi-portable autonomous surveillance systems, and networked command and control software, all using AI technologies.

Michael Spencer claims that behind these kinds of companies there is a venture capital mafia looking to profit from the American war machine, but that, sadly, this race won’t lead anywhere, since neither power could afford the economic damage of an armed conflict such as a war with China (in large part because of the deep interdependence of the two states’ economies).

  • Misrepresentation

In a recent report for MIT Technology Review, Tate Ryan-Mosley discusses the outcomes of the AI Insight Fora with Inioluwa Deborah Raji, a researcher at the University of California, Berkeley, and a fellow at Mozilla. [9] Raji argues that the hyperbolic risks depicted by the tech companies’ representatives divert attention from the current risks of existing technology failing and behaving in unexpected ways, often because products are deployed prematurely, before they are ready for mass-market use. These kinds of failures particularly affect people who are underrepresented or misrepresented in society. For instance, some medical AI technology is “disproportionately under prioritising black (sic) and brown patients in terms of getting a bed at a hospital; it’s disproportionately misdiagnosing them, and misinterpreting lab tests for them.” [10]

  • Copyright

In 2018, Joy Buolamwini co-wrote “Gender Shades” with AI researcher Timnit Gebru, exposing how commercial facial recognition systems often failed to recognise the faces of Black and brown people. In a recent interview with MIT Technology Review she referred to foundation models as today’s “sparkliest AI toys,” which, however enticing, remain largely useless to underserved and racialised communities. To serve as a springboard for many other AI applications, from chatbots to automated movie-making, these models scrape masses of data from the internet, including personal information and copyrighted content. This has unleashed major discontent and protests among creators, who have sued AI companies claiming breaches of intellectual property. The legal territory defining intellectual property in relation to AI remains a grey zone, however, and Buolamwini refers to its appropriation by tech companies as “data colonialism.” [11]

  • Disinformation

Disinformation has emerged as a means of political warfare. Discriminatory and inflammatory ideas can easily enter public discourse and spread internationally before counternarratives can even be formed, highlighting social differences and divisions, diminishing our collective grasp of truth and our decision-making capacity, and thereby widening the divide in beliefs and catalysing violence. [12] Furthermore, as AI develops, there is a high risk that these systems will gain further predictive capacities and, depending on the form their presentation takes, perhaps develop stronger bonds with humans. AI’s potential to sway our decisions only seems likely to increase in the near future. Since AI systems can centralise and control sensitive information, the entities controlling them could abuse this trust and, given their relatively small number, could eventually monopolise the dispersal of tailored narratives [13] and spread them on an unprecedented scale.

  • Environment

In their 2023 “Artificial Intelligence Index Report,” Stanford University researchers point to CO2 emissions in relation to AI development. They estimate that 502 tonnes of CO2 were emitted in training GPT-3 (for comparison, an average car’s lifetime emissions, including fuel, are around 63 tonnes).
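To put those two figures in perspective, here is a back-of-the-envelope comparison; it is only an illustration using the numbers cited above, not an additional estimate:

```python
# Back-of-the-envelope comparison using only the figures cited above
# (Stanford AI Index 2023), in tonnes of CO2.
gpt3_training_emissions = 502   # estimated emissions from training GPT-3
car_lifetime_emissions = 63     # average car over its lifetime, incl. fuel

ratio = gpt3_training_emissions / car_lifetime_emissions
print(f"Training GPT-3 emitted roughly {ratio:.1f}x a car's lifetime emissions")
# -> roughly 8.0x
```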

  • Cybersecurity

The cybersecurity software company Malwarebytes reports that among the main risks in relation to AI are the optimisation of cyberattacks and the production of automated malware, along with risks to physical safety and privacy, data manipulation, and the impersonation of individuals. The full report can be found here.

  • Security

In his article “AI and Leviathan: Part I,” Samuel Hammond writes: “Depending on how the offensive-defensive balance shakes out, AI has the potential to return us to a world in which security is once again invisible. You won’t have to remove your shoes and debase yourself before a TSA agent. Instead, a camera will analyse your face as you walk into the terminal, checking it against a database of known bad guys while extracting any predictive signal hidden in your facial expressions, body language, demographic profile, and social media posts.” [14]

  • Bootleggers

Marc Andreessen, the co-founder of Netscape and the venture capital firm Andreessen Horowitz, writes the following about the self-interested actors who stand to profit from AI regulation:

“Bootleggers” are the self-interested opportunists who stand to financially profit by the imposition of new restrictions, regulations, and laws that insulate them from competitors. For alcohol prohibition, these were the literal bootleggers who made a fortune selling illicit alcohol to Americans when legitimate alcohol sales were banned. For AI risk, these are CEOs who stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition – the software version of “too big to fail” banks. [15]

Further problems envisioned in the current regulatory debate include:

  • Agility & enforceability in regulatory frameworks
  • The erosion of democratic regulatory institutions amidst the rise of private regulatory services providers
  • Inadequate environmental protections throughout the AI system lifecycle (covering both its software and hardware components)
  • Inadequate IP & privacy protections in data supply chains
  • Inadequate worker protections in data supply chains & hardware supply chains
  • Inadequate worker protections against harmful applications of AI in the workplace, e.g. worker surveillance and unsafe automation solutions
  • Lack of redistributive policies to ensure that the economic gains resulting from mass AI adoption are broadly shared

Who is involved? Regulatory Bodies and Agencies:

International and governmental institutions: official bodies such as the United Nations, the US Congress, and the European Union have attempted to find consensus on how to proceed and whom to invite to participate in the regulation of AI. The G7 (via the Hiroshima AI Process), the OECD, and the Global Partnership on AI have hosted several gatherings and fora, in addition to the AI Safety Summit – of which China was also part – which took place at the time of writing.

In a report about the United Nations discussions, Time Magazine quotes Bill Drexel, an associate fellow at the Center for a New American Security, a centrist military affairs thinktank, as stating that international cooperation on AI issues is “really clunky, slow and generally inefficient.” He believes that it may take a serious AI-related incident to generate the political will required to form a substantial agreement among countries. In the meantime, he advocates for “bilateral or more limited multilateral fora to try to govern [advanced AI systems] and even to scale with the expansion of companies that might be able to train frontier models.”

In fact, a new forum has been created by Anthropic, Google, Microsoft, and OpenAI in order to self-regulate so-called frontier models. In addition, numerous non-profit organisations are carrying out research and following these developments closely. Among others, the Ada Lovelace Institute and the Centre for the Governance of AI are producing their own reports on crucial topics in the field.

Additionally, China seems ambivalent about participating in these debates, even as it has for some time maintained its own internal research into AI. In his paper for the Carnegie Endowment for International Peace, a centrist thinktank, Matt Sheehan lists the Chinese government’s motivations for regulating AI, noting that the “Chinese leadership increasingly is shaping the regulatory debate.” [16]

  • United Nations

As TIME magazine reported, in the week of the 21st of September 2023 the United Nations General Assembly gathered in New York. Secretary-General António Guterres and his envoy on technology, Amandeep Gill, have said they believe a new UN agency will be required. They are committed to producing, by December 2023, an interim report presenting “a high-level analysis of options for the international governance of artificial intelligence,” and a second report, to be submitted by 31 August 2024, including “detailed recommendations on the functions, form, and timelines for a new international agency for the governance of artificial intelligence.” [17] The UN’s High-Level Advisory Body on AI must determine which model, if any, is most suitable. Gill says that an international research effort is a “classic kind of international collaboration problem where a UN agency might play a role.” [18]

  • The US Congress

Senate Majority Leader Charles Schumer has so far convened two of nine planned fora to debate AI regulation in the US Congress as part of his “SAFE Innovation” initiative; the first was held on September 13th and the second on October 23rd.
The group of attendees at the first forum was rather homogeneous, but the following ones are to include a greater number of academics, citizens, labour unions, thinktanks, consulting firms, and venture capital bodies.

The topics discussed so far were the following:

Forum #1:

— Asking the right questions

— AI innovation

— Copyright and IP

— Use cases and risk management

— Workforce

— National security

— Guarding against doomsday scenarios

— AI’s role in our social world

— Transparency, explainability, and alignment

— Privacy and liability

Forum #2:

— “Transformational” innovation that pushes the boundaries of medicine, energy, and science

— “Sustainable” innovation that drives advances in security, accountability, and transparency in AI

— Government research and development (R&D) funding that incentivises equitable and responsible AI innovation

— Open source AI models: balancing national security concerns while recognising this existing market could be an opportunity for American innovation

— Making government datasets available to researchers

— Minimising harms, such as job loss, racial and gender biases, and economic displacement

Tech Policy Press reported on both sessions.

  • European Union

Under the EU’s AI Act, systems to be regulated are categorised as follows (a brief illustrative sketch of the tiering follows below):

  1. Unacceptable risk: systems considered to be a threat to people will be banned, e.g. cognitive behavioural manipulation of people or specific vulnerable groups, social scoring, or real-time and remote biometric identification systems, such as facial recognition
  2. High risk: systems that negatively affect safety or fundamental rights. The first group covers AI used in products falling under the EU’s product safety legislation, such as toys, aviation, cars, medical devices, and lifts. The second group covers biometric identification and categorisation of natural persons, management and operation of critical infrastructure, education, employment, access to and enjoyment of essential private and public services and benefits, law enforcement, migration, asylum and border control, and assistance in legal interpretation and application of the law
  3. Generative AI: systems such as ChatGPT would have to disclose that content was generated by AI, design the model to prevent it from generating illegal content, and publish summaries of copyrighted data used for training
  4. Limited risk: systems should comply with minimal transparency requirements that allow users to make informed decisions; after interacting with an application, the user can decide whether to continue using it. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio, or video content, for example deepfakes

The full report can be found here.
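To make the tiered logic above more concrete, the short sketch below encodes the four categories as data and maps a few example systems onto them. The tier names and obligations paraphrase the list above; the example systems and the mapping are purely illustrative assumptions, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four categories summarised above (EU AI Act proposal)."""
    UNACCEPTABLE = "banned outright"
    HIGH = "assessed before being put on the market and throughout its lifecycle"
    GENERATIVE = "must disclose AI-generated content and summarise copyrighted training data"
    LIMITED = "minimal transparency: users must know they are interacting with AI"

# Hypothetical examples only, drawn from the descriptions in the list above.
EXAMPLE_SYSTEMS = {
    "real-time facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "CV-screening tool used for hiring decisions": RiskTier.HIGH,
    "general-purpose chatbot such as ChatGPT": RiskTier.GENERATIVE,
    "app that generates deepfake videos for entertainment": RiskTier.LIMITED,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

The point of the sketch is simply that obligations scale with the tier a system falls into, which is the core design choice of the Act’s risk-based approach.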

  • Frontier Model Forum

The Frontier Model Forum is an industry body focused on ensuring the safe and responsible development of frontier AI models. [19] On October 25, 2023, Anthropic, Google, Microsoft, and OpenAI announced the appointment of Chris Meserole as the first Executive Director of the Frontier Model Forum, and the creation of a new AI Safety Fund, a more than $10 million initiative to promote research in the field of AI safety and to develop appropriate guardrails to mitigate risk. The Forum will host discussions and actions on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly.

Resources about the Frontier Model Forum can be found here.

  • Joe Biden’s Executive Order

Politico and The Wall Street Journal reported on Monday the 30th of October that US President Joe Biden had signed an executive order aiming to “step into a global regulatory vacuum over a fast-growing technology.” The order effectively bypassed the ongoing discussions in the Senate and other bodies. Amidst the slowness of that process, the President took executive action to outline a response to AI risks, seeking to mitigate harms ranging from privacy threats to job losses and to manage national security risks such as cybersecurity threats, including the race towards more powerful cyber weapons. [20] Under the order, companies have to submit reports to the federal government detailing how they train and test so-called “dual-use foundation models.”

The order addresses the following areas:

— Labour

— Copyright

— Housing

— Education

— Telecoms

— Microchip manufacturing

— Immigration

— Privacy

— Competition

— Health

— Cybersecurity

  • UK Safety Summit

Twenty-eight governments, The Guardian reports, signed up to the so-called Bletchley Declaration on the first day of the AI Safety Summit hosted by the British government. The countries agreed to work together on AI safety research, even amid signs that the US and UK are competing to take the lead in developing new regulations. The UK’s Secretary of State for Science, Innovation and Technology, Michelle Donelan, told reporters: “For the first time we now have countries agreeing that we need to look not just independently but collectively at the risks around frontier AI.” [21]

The Bletchley Declaration addresses the following topics [22]:

— Identifying AI safety risks

— Building scientific and evidence-based understandings of these risks

— Understanding the impact of AI in our societies

— Building respective risk-based policies across countries

— Collaborating while recognising regulations may differ across nations

— Increasing transparency by private actors developing frontier AI capabilities

— Building appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research

  • China

China is becoming a leader in the governance and regulation of AI. Parity with the US in AI is one of its goals because, like everyone else, it sees this technology as a critical tool for boosting its economy and national power. In his article for the Carnegie Endowment for International Peace, Sheehan claims that China’s main motivation for regulating AI is to serve the Chinese Communist Party (CCP)’s agenda of political and social stability, and especially of information control. The regulations are shaped by Chinese academics, policy analysts, journalists, and technocrats, and address, for instance, workers whose schedules and salaries are calculated by algorithm.

There are no firm deadlines for the national AI law, but a draft version could be released in late 2023 or 2024, followed by six to eighteen months dedicated to revising it. Here is an extract of the Chinese regulatory timeline:

  1. 2022: a document on science and technology ethics focuses on the internal ethics and governance mechanisms that scientists and technology developers should deploy, with AI listed as one of three areas of particular concern, along with the life sciences and medicine
  2. 2022: the deep synthesis regulation targets many AI applications used to generate text, video, and audio, prohibits the generation of “fake news”, and requires synthetically generated content to be labelled
  3. 2023: the generative AI regulation covers almost exactly the same ground as the deep synthesis regulation, but with more emphasis on text generation and training data, requiring providers to ensure that both the training data and the generated content are “true and accurate”

Read the full article here.

The Role of Regulation: Who should decide? Who should regulate?  

Buolamwini advocates for a radical rethinking of how AI systems are built. She tells MIT Technology Review that the real risk of letting technology companies pen the rules that apply to them is repeating the very mistake that has previously allowed biased and oppressive technologies to thrive. “What concerns me is … giving so many companies a free pass … we’re applauding the innovation while turning our head,” Buolamwini says. [23]

Brown University professor Suresh Venkatasubramanian, who attended the AI Insight Forum at the US Senate, claimed that the overwhelming regulatory question has become that of “what” and “how” to regulate. [24] This question is indeed important, but there are other levels to consider, such as which organisations and geopolitical powers to include in these decision-making processes, and whether to carry negotiations out in a centralised or decentralised way. AI Insight Forum attendee Alondra Nelson told Tech Policy Press that “we need the innovation to be transformational, because the status quo is not sustainable.” [25]

So, who should regulate and what should be regulated?

  • Regulate the use or the technology itself?

In his article “Congress should regulate Artificial Intelligence,” Professor Mark MacCarthy points to the fact that the government has kept its focus on the “uses of AI, not the technology itself.” [26] Some responsibility should lie with the deployer, yet Connor Dunlop, the European Public Policy Lead at the Ada Lovelace Institute in the United Kingdom and the author of the briefing “An EU AI Act That Works for People and Society,” thinks that putting the full burden of compliance on the deployer is not really feasible. [27] Some proposals would therefore require developers to obtain a license before training and deploying frontier models, yet this could “increase industry concentrations, as well as harm innovation.” Hugging Face is supportive of the open-source approach, and so is Meta, whose leadership had some disagreements with the Senators. [28]

  • Which organisation?

United Nations representatives advocate for an international, centralised regime to regulate AI. Gill says that the UN is well placed to oversee an international treaty or organisation for the governance of AI, and Bill Drexel believes that it could be prudent to undertake such an effort through the UN. [29]

In an interview with Tech Policy Press, Dunlop notes that the United States, China, and the EU are the three primary regulatory and technology powers and three distinct jurisdictions, each with a “different vision for the digital society” that has, so far, been embedded in “a different type of regulatory framework.” Dunlop believes that it makes sense to think at a global level and centralise regulation, instead of each member state regulating on its own initiative [30], an arduous task amidst the current geopolitical tensions.

  • International cooperation: What about the Global South and China?

In her article for the Centre for the Governance of AI in Oxford, Sumaya Nur Adan claims that policymakers prefer exclusive fora because limiting the number of parties can make decision-making less arduous. She argues this is not an ideal approach because “early inclusion is vital for securing future buy-in, preventing the emergence of competing coalitions, drawing on additional expertise, and avoiding the ethical problems inherent in exclusion.” [31]

More on China.

Existing AI Regulations:

There were 37 AI-related bills passed into law across different countries in 2022. Here is a link enumerating them.

Blockchain - Decentralisation - Open Source

In terms of regulatory power, an emerging cartel composed of the best-funded companies, including OpenAI, Alphabet, Anthropic, DeepMind, and Hugging Face, is growing more powerful by the day. Most of the AI bills introduced in the US by state lawmakers were written by the leading corporations. [32] In 2021 alone, the lobbying expended by tech companies reached $110. [33]

Max von Thun, the Director of Europe and Transatlantic Partnerships at the Open Markets Institute, is one of many who agree that market concentration harms innovation, consumers, and workers, and reduces aggregate investment. In another article for Tech Policy Press, von Thun describes how digital corporations have managed to gather power through the extraction of valuable data, which has allowed them to impose their terms over what information is consumed and communicated. [34] Monopolistic social media platforms are already damaging to the economy and to democracy. [35] However, despite the harm such concentration causes, von Thun thinks that the companies will continue building the technology to favour their economic interests rather than the public’s. [36]

Another issue is AI’s lack of structural transparency. In his State of AI Report, Nathan Benaich comments on the industry’s move away from openness, supposedly “amid safety and competition concerns.” The reports by OpenAI and Google were limited, and Anthropic “simply didn’t bother.” [37] In contrast, open-source models such as LLaMA seem to be gaining ground, having been downloaded millions of times on Hugging Face. [38]

This regulatory condition prompts at least two important questions: first, whether it is necessary to involve more actors in order to decentralise the power of governance; and second, what level of transparency is required to maintain democratic input. By distributing power and decision-making across a network of stakeholders, governance models could mitigate the risks of corruption and bias. Joy Buolamwini also points to the problem that not enough actors are involved in regulation beyond politicians and tech CEOs. [39] She stresses the importance of creating a global regulatory community based on principles of transparency and security for users.

One other possible avenue for secure, transparent governance is blockchain technology. Known for its decentralised structure, blockchain allows for the collective storage of data across a network of computers, making it highly resistant to tampering. This technology could enable a transparent and unchangeable ledger of AI algorithms, training data sets, and operational protocols, which would be accessible for audit by multiple stakeholders.
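As a rough illustration of how such a ledger could work - a minimal sketch under our own assumptions, not a design proposed by any of the bodies discussed here - the snippet below hash-chains a series of governance records (a model release, a training-data summary, an evaluation result) so that any later alteration of an earlier entry is detectable by anyone holding a copy of the chain:

```python
import hashlib
import json
import time

def _hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AuditLedger:
    """A toy append-only, hash-chained ledger of AI governance records."""

    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> dict:
        entry = {
            "index": len(self.entries),
            "timestamp": time.time(),
            "payload": payload,
            "prev_hash": self.entries[-1]["hash"] if self.entries else "0" * 64,
        }
        entry["hash"] = _hash({k: v for k, v in entry.items() if k != "hash"})
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; tampering with any earlier entry breaks the chain."""
        for i, entry in enumerate(self.entries):
            expected = _hash({k: v for k, v in entry.items() if k != "hash"})
            prev_ok = entry["prev_hash"] == (self.entries[i - 1]["hash"] if i else "0" * 64)
            if entry["hash"] != expected or not prev_ok:
                return False
        return True

# Hypothetical usage: stakeholders log governance events and audit them later.
ledger = AuditLedger()
ledger.append({"event": "model_release", "model": "example-llm-v1"})
ledger.append({"event": "training_data_summary", "sources": ["public web crawl"]})
ledger.append({"event": "safety_evaluation", "result": "passed"})
print(ledger.verify())  # True; altering any earlier field would make this False
```

In an actual blockchain deployment the ledger would also be replicated across many independent nodes and extended through a consensus mechanism; the sketch only captures the tamper-evident chaining that makes third-party audits possible.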

Lastly, a very interesting idea around decentralising governance is “countergovernance,” described by Blair Attard-Frost as a collective opposition or contestation, by marginalised citizens, “to sources of power that have failed to serve their needs,” for the sake of reorganising power and decision-making. [40] Attard-Frost is a Ph.D. candidate and SSHRC Joseph-Armand Bombardier Canada Graduate Scholar at the University of Toronto’s Faculty of Information.

Conclusion:

The advent of new-generation AI models, for example the GPT series, offers the potential for unprecedented gains in productivity. However, the dynamics between big corporations, technological innovation, and governmental structures present a complex evolving landscape at the dawn of AI regulation.

If big tech companies continue seeking competitive advantages and securing their quasi-monopoly over cutting-edge technologies such as advanced AI models, they will rapidly amass even more power, leading to an even more profound lack of societal influence over the way information is managed. This tendency towards centralisation and monopoly is, consequently, influencing government policy, making the need for checks and balances even more pressing.

So, what is to be done?

As discussed earlier, maintaining transparency in models, adding more stakeholders to the regulatory discussion, and developing countergovernance systems, ultimately in order to decentralise governance power, could foster fairer development. In a decentralised governance framework, no single entity has monopoly control over AI technology; decision-making authority is distributed across a network of stakeholders. This can be further expanded to include public bodies, private companies, and independent entities such as watchdog organisations. Such a system could offer a greater degree of accountability and security than is currently afforded by prevailing centralised models.

Decentralisation will not remove the need for regulation, but it can refine the focus of regulatory concern, creating a governance model that is adaptive, transparent, and more resistant to corrupting influences. In sum, a greater level of decentralisation in the regulation and management of the technology is necessary in order to safeguard society’s interests and security.

Bibliography:

EU AI Act enters final negotiations

Air Street Capital

Summary of the State of AI Report 2023

Welcome to the State of AI Report 2023

Global Push to Regulate Artificial Intelligence (plus other AI stories to read this month)

An EU AI Act that works for People and Society


Frontier AI Regulation 1

Frontier AI Regulation 2

Frontier AI Regulation 3

Frontier AI Regulation 4

The Case for Including the Global South in AI Governance Discussions.

Executive Order 30.10.2023. Politico

Executive Order 30.10.2023. WSJ

Index Report Stanford 2023

History of AI

Alignment Research

AI Safety Summit in the UK

Safety Summit Frontier AI

Governance of Superintelligence

EU Regulations

Hiroshima Process


