Sunday, 26 May 2019

Google won't use Artificial Intelligence for weapons

Google promised not to use Artificial Intelligence in weapons that could potentially harm humans.

The tech giant Google has set out some basic principles for AI. Earlier this week, the company pledged not to use artificial intelligence in weapons or in applications that violate human rights.

The decision comes after the announcement, as Wired reported, that Google will not renew its involvement in the controversial “Project Maven” after 2019. The project, run with the US military, uses AI to analyze drone surveillance footage, but recent criticism of Google’s involvement made the company rethink its priorities and publish a set of principles.

In a text signed by CEO Sundar Pichai, Google discloses its letter of principles regarding AI. The company states that artificial intelligence should benefit society, avoid discriminating against people, and remain under human control. “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions,” wrote Pichai.


The principles are as follows. Relevant portions are quoted from their descriptions:

  1. Be socially beneficial: Take into account a broad range of social and economic factors, and proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides…while continuing to respect cultural, social, and legal norms in the countries where we operate.
  2. Avoid creating or reinforcing unfair bias: Avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
  3. Be built and tested for safety: Apply strong safety and security practices to avoid unintended results that create risks of harm.
  4. Be accountable to people: Provide appropriate opportunities for feedback, relevant explanations, and appeal.
  5. Incorporate privacy design principles: Give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
  6. Uphold high standards of scientific excellence: Work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches…responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.
  7. Be made available for uses that accord with these principles: Limit potentially harmful or abusive applications. (Scale, uniqueness, primary purpose, and Google’s role to be factors in evaluating this.)


The use of artificial intelligence has become one of the major topics of debate among the biggest tech giants, after several employees resigned from Google and more than 4,000 signed a protest letter (published by The New York Times) against partnerships with the US Army on artificial intelligence projects.

In its statement of principles for the use of artificial intelligence, Google notes that although it will no longer work with “governments and armies” on weapons, it will continue to work with them in other areas, such as cybersecurity, training, and military recruitment.
