From the start, the enemy aliens are making kills — three times they destroy the defending laser cannon within seconds. Half an hour in, the hesitant player starts to feel the game’s rhythm, learning when to fire back and when to hide. Finally, after playing ceaselessly through an entire night, the player is not wasting a single bullet, casually picking off the high-scoring mothership floating overhead between demolishing one alien and the next. No one in the world can play a better game at this moment.

This player, it should be mentioned, is not human, but an algorithm on a graphics processing unit programmed by a company called DeepMind. Instructed simply to maximise the score and fed only the data stream of 30,000 pixels per frame, the algorithm — known as a deep Q-network — is then given a new challenge: an unfamiliar Pong-like game called Breakout, in which it needs to hit a ball through a rainbow-coloured brick wall. “After 30 minutes and 100 games, it’s pretty terrible, but it’s learning that it should move the bat towards the ball,” explains DeepMind’s cofounder and chief executive, a 38-year-old artificial-intelligence researcher named Demis Hassabis. “Here it is after an hour, quantitatively better but still not brilliant. But two hours in, it’s more or less mastered the game, even when the ball’s very fast. After four hours, it came up with an optimal strategy — to dig a tunnel round the side of the wall, and send the ball round the back in a superhuman accurate way. The designers of the system didn’t know that strategy.”

In February, Hassabis and colleagues including Volodymyr Mnih, Koray Kavukcuoglu and David Silver published a Nature paper on the work. They showed that their artificial agent had learned to play 49 Atari 2600 video games when given only minimal background information. The deep Q-network had mastered everything from a martial-arts game to boxing and 3D car-racing games, often outscoring a professional (human) games tester. “This is just games, but it could be stockmarket data,” Hassabis says. “DeepMind has been combining two promising areas of research — a deep neural network and a reinforcement-learning algorithm — in a really fundamental way. We’re interested in algorithms that can use their learning from one domain and apply that knowledge to a new domain.”
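The combination Hassabis describes can be made concrete with a toy version of its reinforcement-learning half. The sketch below is tabular Q-learning, the update rule at the core of a deep Q-network, with the neural network replaced by a plain lookup table; the five-cell corridor environment and the hyperparameters are hypothetical, chosen only to keep the example self-contained, and are not DeepMind's actual setup.

```python
import random

# Toy Q-learning sketch (illustrative only): an agent in a five-cell corridor
# earns a reward of 1 only for reaching the right-hand end. The table `q`
# stands in for the deep neural network that, in DeepMind's agent, estimates
# these action values directly from raw pixels.

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration
N_STATES, ACTIONS = 5, (0, 1)           # corridor cells; action 0 = left, 1 = right

def step(state, action):
    """Deterministic toy environment: reward only at the rightmost cell."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                    # training episodes
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:   # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning target: reward plus discounted best achievable future value
        best_next = 0.0 if done else max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# The greedy policy learned from the table: which action each state prefers.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
```

In the full deep Q-network the table is replaced by a network trained on the same target, reward plus the discounted best future value, which is what lets the identical learning rule scale from this corridor to Space Invaders.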

DeepMind has not, admittedly, launched any products — nor found a way to turn its machine gameplay into a revenue stream. Still, such details didn’t stop Google buying the London company — backed by investors such as Elon Musk, Peter Thiel and Li Ka-shing — last January in its biggest European acquisition. It paid £400 million.

DeepMind: inside Google’s super-brain