
Google DeepMind Establishes Dedicated AI Safety Organization

  • Post category: Blog
  • Post last modified: 22 February 2024

As artificial intelligence systems become more powerful, leading AI lab Google DeepMind has created a specialized group focused on AI safety to ensure responsible development. The Alphabet-owned company has developed its share of popular AI tools and models, including Gemini Pro, Gemini 1.5, Gemma and VideoPoet, with more on the way.

What is Google DeepMind’s AI Safety and Alignment?

San Francisco, CA – Google’s DeepMind AI research division has formed a new organization called AI Safety and Alignment committed to anticipating and addressing risks associated with advanced AI. This proactive move aims to ensure artificial intelligence continues providing extraordinary benefits to society while avoiding potential negative consequences.

“AI has huge potential to positively transform society and the economy, but like all powerful technology it can be used for good or ill,” said DeepMind CEO Demis Hassabis. “We want AI systems that are not just powerful, but also genuinely beneficial to society. We need to ensure AI is developed safely so it does not reflect human prejudices and biases or run out of control.”

The new AI safety group will focus on issues such as security, ethics and governance, as well as specialized research into artificial general intelligence (AGI) – AI with broader, more human-like capabilities. UC Berkeley professor Anca Dragan has been appointed to lead teams studying challenges unique to more broadly intelligent AI systems.

“As AI capabilities advance, we want to ensure research happens in a principled, equitable and forward-looking manner,” Dragan said. “Creating safe and beneficial AGI is crucial, and aligning AGI objectives with human values is key.”

Aligning AI Goals and Human Values

The field of AI value alignment aims to ensure intelligent systems behave according to human ethics and priorities even as AI becomes more autonomous and capable. Rather than top-down rules, the approach trains AI agents through interaction and feedback.

“Value alignment research ensures that an AI system tries to do what humans actually want it to do,” Dragan explained. “We don’t want machines optimizing flawed objectives, or going off track due to distributional shift between training and deployment conditions.”
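To make that concrete, below is a minimal, illustrative sketch of one common value-alignment technique: fitting a reward model to pairwise human preferences (a Bradley-Terry style model), so that the objective is inferred from feedback rather than hard-coded. The feature vectors, preference data and learning rate are toy assumptions for illustration, not DeepMind’s systems or code.

```python
# Toy sketch: learning a reward model from pairwise human preferences.
# All data here is simulated; this illustrates the idea, not DeepMind's code.
import numpy as np

rng = np.random.default_rng(0)

# Each candidate behaviour is described by a feature vector; hidden "human values"
# decide which of two behaviours a (simulated) annotator prefers.
true_w = np.array([2.0, -1.0, 0.5])                       # hidden preference weights
X_a = rng.normal(size=(500, 3))                           # features of behaviour A
X_b = rng.normal(size=(500, 3))                           # features of behaviour B
prefer_a = (X_a @ true_w > X_b @ true_w).astype(float)    # simulated human choices

# Bradley-Terry reward model: P(A preferred over B) = sigmoid(r(A) - r(B)).
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    margin = (X_a - X_b) @ w
    p = 1.0 / (1.0 + np.exp(-margin))                      # predicted preference prob.
    grad = (X_a - X_b).T @ (p - prefer_a) / len(prefer_a)  # gradient of logistic loss
    w -= lr * grad

print("learned reward weights:", np.round(w, 2))
# An agent optimised against this learned reward is steered toward behaviours
# humans preferred, rather than toward a hand-written objective.
```

In practice, alignment research combines techniques like this with evaluation, red-teaming and human oversight; the sketch only illustrates that the objective is learned from feedback.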

Dragan will collaborate with other groups studying AI ethics and representation. DeepMind believes responsible development requires identifying and resolving sources of unfairness or potential exclusion early in the design process.

“Teams focused on ethics, fairness and inclusive development are crucial for catching issues early,” said DeepMind ethics lead Azim Shariff. “AI should empower everyone regardless of gender, race, age or background.”

In addition to research, the new organization includes DeepMind’s existing AI safety engineers and the Responsibility & Safety Council, which reviews the societal impact of projects. Members see the initiative as enhancing accountability and oversight for safer innovation.

Tech Giants Collaborate on Responsible AI

While internal safety procedures are a priority, DeepMind also helped form organizations like the Partnership on AI to develop industry-wide best practices. Co-founded in 2016 by DeepMind, Amazon, Google, Facebook, IBM and Microsoft, the nonprofit works to advance public understanding of AI and address its ethical challenges.

“Technology as powerful as AI requires cooperation across organizations and domains of expertise to ensure it benefits as many people as possible,” said Partnership on AI CEO Terah Lyons.

The collaboration has grown to over 100 partners across 13 countries – uniting technologists with specialists in law, ethics, policy and social issues. Lyons sees DeepMind’s dedication to safety as setting an important precedent.

“It will take continued leadership from pioneering companies like DeepMind along with coordinated action to integrate ethics and safety best practices into the AI development life cycle,” said Lyons. She believes the investments being made today in AI safety will pay dividends down the road.

“Initiatives creating governance structures and technical standards will allow us to reap the blessings of AI while minimizing risks,” Lyons explained. “Most risks arise from well-intentioned but underprepared deployments rather than inherently unsafe AI.”

DeepMind CEO Hassabis agrees responsible innovation is crucial to realizing AI’s potential while avoiding pitfalls.

“AI offers enormous opportunities to improve lives and society as a whole,” said Hassabis, “but we cannot take those benefits for granted. We hope efforts like our safety organization provide a template for the thoughtful, ethical and collaborative development of AI.”

What are the threats from AI, and why is there a need for dedicated safety organizations?

Some of the key threats from AI that warrant an organization dedicated to AI safety include:

  1. Potential for malicious use: AI could potentially be used by bad actors for harmful purposes such as aiding terrorism, destabilizing society, or causing other intentional damage. An AI safety organization can help mitigate these risks.
  2. Unintended consequences: Even with good intentions, AI systems could cause unintentional harm through accidents, bugs, or unintended side effects. For example, an AI medical diagnosis system could amplify unfair bias and impact care. AI safety research aims to anticipate and address these risks.
  3. Lack of alignment with human values: As AI becomes more advanced, there are risks that its objectives could diverge from ethics, social norms, and human priorities. Work on “value alignment” aims to close this gap by developing techniques to ensure AI goals remain compatible with human values as systems become more autonomous.
  4. Representation and fairness issues: The data used to train AI systems may reflect societal biases and exclusion that could be amplified. Proactive analysis of fairness and representation by AI safety teams could reduce this risk; a simple example of such a check is sketched after this list.
  5. Security and control issues: More broadly capable AI could become difficult to control and contain. Work is needed to keep AI systems secure, avoid uncontrolled self-improvement cycles, and ensure human oversight.
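
As an example of the fairness analysis mentioned in point 4, the sketch below compares a model’s positive-prediction rates across two demographic groups and flags a large gap for human review. The group labels, prediction rates and 0.1 threshold are hypothetical assumptions, not a specific DeepMind tool.

```python
# Toy sketch of a demographic-parity check an AI safety team might run.
# Groups, predictions and the review threshold are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)

groups = rng.choice(["group_a", "group_b"], size=1000)   # protected attribute
probs = np.where(groups == "group_a", 0.55, 0.40)        # simulated model skew
preds = rng.binomial(1, probs)                           # model's yes/no outputs

rates = {g: preds[groups == g].mean() for g in np.unique(groups)}
gap = max(rates.values()) - min(rates.values())

print("positive-prediction rate by group:", rates)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:                                            # example review threshold
    print("Flag for review: model may be amplifying representation bias.")
```

A real audit would also compare error rates, calibration and downstream outcomes per group, but even a simple parity check like this surfaces obvious representation problems early in development.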

In general, advanced AI has the potential for great benefit but also carries risks if not developed safely and deliberately. Dedicated safety organizations can help address these unique challenges in collaboration with ethics boards, policymakers and the public. How far these measures will succeed in mitigating the dangers remains to be seen, as AI capabilities are advancing almost day by day.