AI Safety Summit: What to expect as global leaders eye AI regulation

The AI Safety Summit, convened by the UK government, is the latest in a series of regional and global political initiatives to shape the role AI will play in society.

Prime Minister Rishi Sunak sees the summit as an opportunity for the UK, sidelined since its departure from the European Union, to create a role for itself alongside the US, China, and the EU in defining the future of AI.

The summit, on November 1-2, is to consider the risks posed by AI, especially “frontier” AI models such as the more advanced examples of generative AI. Its goals are to convince people of the need to take action to reduce risks, to identify measures organizations should take to increase AI safety, and to agree on processes for international collaboration on AI safety, including on research and governance standards.

If Sunak’s ambitions are realized, then the summit could lead to requirements for enterprises to take more precautions in their deployment of advanced AI technologies, and limitations on the development of such tools by software vendors.

At the same time, there already exist many regulations — guaranteeing privacy, for example, or prohibiting discrimination — that implicitly impose limits on what enterprises can or should do with AI or any other technology.

What is frontier AI?

Frontier AI, as defined by the UK government, refers to highly capable general-purpose AI models that can perform a wide variety of tasks, matching or exceeding the capabilities of the most powerful models available today.

Today’s frontier AI includes foundation models using transformer architectures such as GPT-4, its rivals and successors — although as the technology advances, views on what constitutes the frontier are likely to move, too.

Enterprises such as Unilever are already using GPT to deliver business value, although rarely in business-critical situations and almost always only to recommend courses of action for an employee to review and approve — the so-called “human in the loop” approach.
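
In deployments like these, the human in the loop is essentially a gate between the model’s suggestion and any real action. Here is a minimal sketch of that pattern in Python; `generate_recommendation` and `execute` are hypothetical stand-ins for an enterprise’s actual model call and business operation, not any particular vendor’s API:

```python
# Minimal human-in-the-loop sketch: the model only recommends;
# a person must explicitly approve before anything is executed.
# generate_recommendation() and execute() are hypothetical stand-ins.

def generate_recommendation(context: str) -> str:
    # Placeholder for a call to a generative model.
    return f"Suggested action for: {context}"

def execute(action: str) -> None:
    # Placeholder for the real business operation.
    print(f"Executing: {action}")

def human_in_the_loop(context: str) -> None:
    recommendation = generate_recommendation(context)
    print(f"Model recommends: {recommendation}")
    if input("Approve? [y/N] ").strip().lower() == "y":
        execute(recommendation)
    else:
        print("Rejected; no action taken.")

if __name__ == "__main__":
    human_in_the_loop("renegotiate supplier contract")
```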

Why is frontier AI considered unsafe?

Frontier AI models may take significant computing and financial resources to train, but once that’s done, they can be deployed to, or accessed from, almost anywhere for relatively little cost.
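
To illustrate how low the barrier to access is: once a model is trained and exposed as a service, a few lines of client code and an API key are typically all a user needs. A sketch using the OpenAI Python client, assuming the `openai` package (version 1.x) and an `OPENAI_API_KEY` environment variable; client interfaces differ across library versions:

```python
# Reaching a frontier model takes little more than an API key.
# Assumes: pip install openai (1.x) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the risks of frontier AI."}],
)
print(response.choices[0].message.content)
```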

All new technologies come with a range of risks and benefits, but people are particularly concerned about the safety of frontier AI technologies because of the speed and scale at which their impact could be felt, especially if they’re left to function autonomously without human supervision or intervention.

The potential risks identified by the UK government include threats to biosecurity, cybersecurity, and election fairness, as well as the potential loss of control over the development and operation of the foundation AI models themselves. There’s also the possibility of “unknown unknowns” arising from unpredictable leaps in the capabilities of frontier AI models as they develop.

How might the AI Safety Summit change things?

Behind the references to biosecurity and cybersecurity on the summit agenda are fears that super-powered AI could facilitate or accelerate the development of lethal bioweapons or cyberattacks that bring down the global internet, posing an existential risk to humanity as a whole, or to modern civilization.

There’s also the alignment problem to contend with: whether an AI system will pursue its programmers’ intended goals, or follow its instructions to the letter, ignoring implicit moral considerations such as the need not to harm humans. A classic thought experiment is to consider just how far an AI system might go if given a narrow goal, maximizing the output of a paper-clip factory, say, and left to pursue it to the exclusion of all else.
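
The failure mode is easy to caricature in code: an objective that counts only paper clips assigns no value to anything else, so a greedy optimizer will convert every resource it can reach. A toy sketch in Python; every name and quantity here is invented for illustration:

```python
# Toy illustration of objective misspecification: the objective counts
# only clips, so a greedy optimizer exhausts every other resource.
# All names and quantities are invented.

def objective(state: dict) -> int:
    return state["clips"]  # nothing but clips carries any value

def actions(state: dict):
    # Each available action converts one unit of a resource into a clip.
    for r in ("steel", "power", "farmland"):
        if state[r] > 0:
            successor = dict(state)
            successor[r] -= 1
            successor["clips"] += 1
            yield successor

state = {"clips": 0, "steel": 100, "power": 100, "farmland": 100}
while True:
    candidates = list(actions(state))
    if not candidates:
        break
    state = max(candidates, key=objective)  # always maximize clips

print(state)  # {'clips': 300, 'steel': 0, 'power': 0, 'farmland': 0}
```

Nothing in the objective tells the optimizer that steel, power, or farmland matter, so to the optimizer they are worthless except as clip feedstock; that omission is the alignment problem in miniature.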

The threat of something like this happening has prompted much letter-writing and hand-wringing, and even a few street protests around the world, such as those by Pause AI, which is calling for a global halt to the training of general AI systems more powerful than GPT-4 until the alignment problem is provably solved.

While no AI developer intends to create such threats, unconstrained enhancement of AI capabilities could make it possible for bad actors to misuse them or, if the alignment problem isn’t solved, for AI systems to produce unintended side effects. That’s why learning to better forecast unpredictable leaps in AI capability, and keeping AI under human control and oversight, are also on the summit agenda.

But there’s a danger, say some observers, that by focusing on the unlikely but existential risks to civilization that frontier AI may pose, longstanding concerns about algorithmic bias, fairness, transparency, and accountability will be pushed to the fringe.

What to do about those risks, both existential and everyday, is less clear.

The UK government’s first suggestion is “responsible capability scaling”: asking industry to set its own risk thresholds, assess the threats its models pose, choose less risky development paths, and specify in advance what it will do if something goes wrong.

At a national level, the UK government is suggesting it and other countries monitor what enterprises are up to, and perhaps require enterprises to obtain a license for some AI activities.

As for international collaboration and regulation, more research is needed, the UK government says. It’s inviting other countries to discuss how they can work together to identify the most urgent areas for research, and where promising ideas are already emerging.

Who is attending the AI Safety Summit?

When the UK government first announced the summit, its intention was to include “country leaders” from the world’s largest economies, alongside academics and representatives of tech companies leading AI development, with a view to setting a new global regulatory agenda.

A week or two before the summit, though, reports emerged that leaders of several countries with strong AI industries were unlikely to attend, raising doubts about how effective the summit will be.

French President Emmanuel Macron will not be there, and German Chancellor Olaf Scholz is unlikely to show up either, European political news site Politico.eu reported. US President Joe Biden will not attend either, although Vice President Kamala Harris may.

While some of the European Union’s biggest member states are disengaging from the summit, the bloc as a whole will be well-represented. European Commission President Ursula von der Leyen will be there and, according to her official engagement calendar, she plans to meet Secretary-General of the United Nations António Guterres at the event.

Meanwhile, European Commission Vice-President Věra Jourová’s calendar indicates she’ll meet South Korean Minister of Science and ICT Lee Jong-ho there.

Google DeepMind CEO Demis Hassabis is expected to be among the 100 or so attendees — a safe bet since the company was founded in London and maintains its headquarters there.

The UK government has been playing up the recent decisions of a number of other AI companies to open offices in London, including ChatGPT developer OpenAI and Anthropic, whose CEO Dario Amodei is reportedly also attending. Palantir Technologies, too, has announced plans to move its European headquarters to the UK, and is said to be sending a representative to the event. A Microsoft representative will also reportedly attend, although not its CEO.

Where else are AI directions being set?

The UK’s AI Safety Summit is far from the only venue in which governments and enterprises are attempting to influence AI policy and development.

One of the first big commitments to ethical AI in the enterprise was the Rome Call. In 2020, Microsoft and IBM signed on to a non-denominational initiative of the Vatican to promote six principles of AI development: transparency, inclusion, responsibility, impartiality, reliability, and security/privacy.

Since then, legislative, regulatory, industry, and civil society initiatives have multiplied. The European Union’s all-encompassing Artificial Intelligence Act seemed ahead of its time and full of good intentions, but has drawn criticism and calls for stronger action from civil society groups, including Statewatch and service workers’ union Uni Europa.

Also, the White House has secured voluntary commitments to AI safety standards from seven of the largest AI developers, the Cyberspace Administration of China has issued regulations on generative AI training, and New York City has set rules on the use of AI in hiring.

Even the United Nations Security Council has been debating the issue.

Software developers are joining in, too. The Frontier Model Forum is the industry’s attempt to get ahead of state or international controls by demonstrating that its members, including Microsoft, Google, Anthropic, and OpenAI, can be good global citizens through self-regulation.

All this activity puts the UK AI Safety Summit in a highly competitive environment. Legislators must balance, on the one hand, creating a safe environment for their citizens, free from the menace of opaque automated discrimination or even, if the most alarmist critics are to be believed, global extinction, and on the other, allowing businesses to innovate and benefit from the productivity gains AI may enable.

Who gets to set those regulations, and who will have to abide by them, is unlikely to be decided any time soon, much less this week.
