DeepMind Technologies Limited



From Wikipedia, the free encyclopedia
DeepMind Technologies Limited
Type of business: Subsidiary
Founded: September 23, 2010 (6 years ago)[1]
Headquarters: 5 New Street Square, London EC4A 3TW, UK[2]
CEO: Demis Hassabis
Industry: Artificial intelligence
Employees: >350[3]
Parent: Independent (2010–2014); Google Inc. (2014–2015); Alphabet Inc. (2015–present)

DeepMind Technologies Limited is a British artificial intelligence company founded in September 2010. It was acquired by Google in 2014. The company has created a neural network that learns how to play video games in a fashion similar to that of humans,[4] as well as a Neural Turing Machine, or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.[5]

The company made headlines in 2016 after its AlphaGo program beat a human professional Go player for the first time.[6]


The start-up was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in 2010.[7][8] Hassabis and Legg first met at University College London's Gatsby Computational Neuroscience Unit.[9] On 26 January 2014, Google announced that it had agreed to acquire DeepMind for $500 million.[10][11][12][13][14][15]

Since then major venture capital firms Horizons Ventures and Founders Fund have invested in the company,[16] as well as entrepreneurs Scott Banister[17] and Elon Musk.[18] Jaan Tallinn was an early investor and an adviser to the company.[19] The sale to Google took place after Facebook reportedly ended negotiations with DeepMind Technologies in 2013.[20] The company was afterwards renamed Google DeepMind and kept that name for about two years.[2]

In 2014, DeepMind received the "Company of the Year" award from the Cambridge Computer Laboratory.[21]

After Google's acquisition the company established an artificial intelligence ethics board.[22] The composition of the ethics board remains a mystery, with both Google and DeepMind declining to reveal who sits on it.[23] DeepMind, together with Amazon, Google, Facebook, IBM, and Microsoft, is a member of the Partnership on AI, an organization devoted to the society-AI interface.[24]

Machine learning

DeepMind Technologies' goal is to "solve intelligence",[25] which they are trying to achieve by combining "the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms".[25] They are trying to formalize intelligence[26] not only to implement it in machines, but also to understand the human brain, as Demis Hassabis explains:

[...] attempting to distil intelligence into an algorithmic construct may prove to be the best path to understanding some of the enduring mysteries of our minds.[27]

In 2016, Google Research released a paper on AI safety and avoiding undesirable behaviour during the AI learning process.[28] DeepMind has also released several publications via its website.[29]

To date, the company has published research on computer systems that are able to play games, and on the development of these systems, ranging from strategy games such as Go[30] to arcade games. According to Shane Legg, human-level machine intelligence can be achieved "when a machine can learn to play a really wide range of games from perceptual stream input and output, and transfer understanding across games[...]."[31] Research describing an AI playing seven different Atari 2600 video games (Pong, Breakout, Space Invaders, Seaquest, Beamrider, Enduro, and Q*bert) reportedly led to the company's acquisition by Google.[4] Hassabis has mentioned the popular e-sport game StarCraft as a possible future challenge, since it requires a high level of strategic thinking and handling imperfect information.[32]

Deep reinforcement learning

As opposed to other AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within their scope, DeepMind claims that its system is not pre-programmed: it learns from experience, using only raw pixels as data input. Technically it uses deep learning on a convolutional neural network, with a novel form of Q-learning, a model-free reinforcement learning technique.[2][33] They test the system on video games, notably early arcade games such as Space Invaders or Breakout.[33][34] Without any change to the code, the AI begins to understand how to play the game, and after some time plays, in a few games (most notably Breakout), more efficiently than any human ever could.[34]
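
The model-free Q-learning update at the heart of this approach can be sketched in tabular form. This is a minimal illustration, not DeepMind's system: DQN replaces the table below with a convolutional network over raw pixels, and the five-state chain environment here is a hypothetical stand-in for a game.

```python
import random

# Minimal tabular Q-learning sketch of the model-free update that deep
# Q-learning builds on. The environment is a toy 5-state chain: the agent
# starts at state 0 and is rewarded for reaching the rightmost state.
N_STATES = 5
ACTIONS = (0, 1)                       # 0 = step left, 1 = step right
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.3  # discount, learning rate, exploration

def step(state, action):
    """Deterministic chain: reaching the rightmost state pays reward 1."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(500):                   # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection from the current Q estimates.
        a = (random.choice(ACTIONS) if random.random() < EPSILON
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2, r, done = step(s, a)
        # Q-learning update: only the observed transition (s, a, r, s2)
        # is used -- no model of the environment.
        best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy policy for the non-terminal states: always head right.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

The same update rule drives DQN, except that the Q-table becomes a neural network trained on pixel observations, which is what lets the approach scale to Atari games.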

For most games (Space Invaders, Ms Pacman, and Q*Bert, for example), DeepMind's AI plays below the current[when?] world record. So far[when?] it has been applied mainly to video games made in the 1970s and 1980s, with work being done on more complex 3D games such as Doom, which first appeared in the early 1990s.[34]


Main article: AlphaGo

In October 2015, a computer Go program called AlphaGo, powered by DeepMind, beat the European Go champion Fan Hui, a 2 dan (out of 9 dan possible) professional, five to zero.[35] This was the first time an artificial intelligence (AI) had defeated a professional player.[6] Previously, computers were known to have played Go only at "amateur" level.[35][36] Go is considered much more difficult for computers to win than games such as chess, because its much larger number of possibilities makes it prohibitively difficult for traditional AI methods such as brute-force search.[35][36] The announcement of the news was delayed until 27 January 2016 to coincide with the publication of a paper in the journal Nature describing the algorithms used.[35] In March 2016 it beat Lee Sedol, a 9 dan Go player and one of the highest-ranked players in the world, 4–1 in a five-game match.
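
The scale gap behind that claim can be made concrete with a rough game-tree estimate. The branching factors and game lengths below are commonly cited approximations, not exact figures:

```python
import math

# Back-of-envelope game-tree size b**d, for average branching factor b
# (legal moves per position) and typical game length d (in plies).
def tree_size_log10(branching, depth):
    """Return log10 of b**d, i.e. the order of magnitude of the tree."""
    return depth * math.log10(branching)

chess = tree_size_log10(35, 80)    # chess: ~35 moves/position, ~80 plies
go = tree_size_log10(250, 150)     # Go: ~250 moves/position, ~150 plies
print(f"chess ~10^{int(chess)}, Go ~10^{int(go)}")
# chess ~10^123, Go ~10^359
```

A search space hundreds of orders of magnitude larger is why brute-force methods that work for chess are hopeless for Go, and why AlphaGo instead combined neural networks with Monte Carlo tree search.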


In July 2016, a collaboration between DeepMind and Moorfields Eye Hospital was announced.[37] DeepMind would be applied to the analysis of anonymised eye scans, searching for early signs of diseases leading to blindness.