DeepMind, the artificial intelligence firm acquired by Google in 2014 and now part of the Alphabet umbrella, has developed a computer that can refer to its own memory to learn facts and use that knowledge to answer questions.
That’s huge, because it means that future AI could respond to queries from humans without being taught every possible correct answer.
DeepMind says its new AI model, called a differentiable neural computer (DNC), can be fed things like a family tree or a map of the London Underground network, and can then answer complex questions about the relationships between items in those data structures.
For example, you could get responses to questions like, “Starting at Bond street, and taking the Central line in a direction one stop, the Circle line in a direction for four stops, and the Jubilee line in a direction for two stops, at what stop do you wind up?” DeepMind says its DNC could also help you plan an efficient route from Moorgate to Piccadilly Circus.
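To make the routing task concrete, here is a conventional-code sketch (not the DNC itself, which learns this behavior rather than being programmed with it) of finding a fewest-stops route. The station connections below are a tiny hypothetical slice of the network, not an accurate Tube map.

```python
from collections import deque

# Hypothetical adjacency list: station -> directly connected stations.
# Illustrative only; real Tube connectivity differs.
edges = {
    "Moorgate": ["Bank", "Old Street"],
    "Bank": ["Moorgate", "Holborn"],
    "Holborn": ["Bank", "Piccadilly Circus"],
    "Old Street": ["Moorgate"],
    "Piccadilly Circus": ["Holborn"],
}

def shortest_route(start, goal):
    """Breadth-first search: returns a fewest-stops route, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route("Moorgate", "Piccadilly Circus"))
# -> ['Moorgate', 'Bank', 'Holborn', 'Piccadilly Circus']
```

The interesting part of DeepMind's result is that the DNC was never given an algorithm like this; it learned to store the graph in its memory and traverse it on its own.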
Similarly, it could understand and answer questions about the relationships between members of a large family, like, “Who is Freya’s maternal great uncle?”
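The family-tree query is the same kind of relational lookup. A conventional-code sketch of it might look like the following; the names and the tiny tree are hypothetical, and "maternal great uncle" is read here as a sibling of the maternal grandmother (sex isn't modeled).

```python
# Hypothetical family facts: child -> (mother, father).
parents = {
    "Freya":  ("Ida", "Olaf"),
    "Ida":    ("Greta", "Hans"),
    "Greta":  ("Astrid", "Bjorn"),
    "Magnus": ("Astrid", "Bjorn"),  # Greta's sibling
}

def siblings(person):
    """Everyone who shares both parents with `person`."""
    return [c for c, p in parents.items()
            if c != person and p == parents.get(person)]

def maternal_great_uncles(person):
    """Siblings of the maternal grandmother (one common reading)."""
    mother = parents[person][0]
    grandmother = parents[mother][0]
    return siblings(grandmother)

print(maternal_great_uncles("Freya"))  # -> ['Magnus']
```

Again, the point of the DNC is that it answers such questions without anyone writing the `maternal_great_uncles` rule: it learns to chain the parent and sibling relations itself.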
The DNC builds on the concept of neural networks, which loosely mimic the way the human brain works. They are great for machine-learning applications where you want a computer to learn to do things by recognizing patterns.
It’s these networks that helped DeepMind’s AlphaGo AI defeat world champions at the complex game of Go. But AlphaGo had to be trained on data from some 30 million moves drawn from historical games. An AI augmented with the ability to learn from its own memory will likely be able to complete far more complex tasks on its own.
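One ingredient that makes the DNC's memory usable by a neural network is content-based addressing: the controller emits a query key, and a "read" is a soft, similarity-weighted average over memory rows, so the whole operation stays differentiable and trainable. The toy NumPy sketch below illustrates that idea with arbitrary sizes and values; it is not DeepMind's actual model.

```python
import numpy as np

np.random.seed(0)                 # fixed values for a reproducible demo
N, W = 4, 3                       # memory rows, word width (arbitrary)
memory = np.random.randn(N, W)    # external memory matrix
key = memory[2] + 0.05 * np.random.randn(W)  # noisy query for row 2

def cosine(a, b, eps=1e-8):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

beta = 10.0                                        # key strength (sharpness)
scores = np.array([cosine(key, row) for row in memory])
weights = np.exp(beta * scores)
weights /= weights.sum()                           # softmax read weighting
read_vector = weights @ memory                     # differentiable "read"

print(weights.argmax())  # -> 2: the row the key was aimed at
```

Because the read is a weighted sum rather than a hard lookup, gradients can flow through it, which is what lets the network learn what to store and when to retrieve it.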
Read more here: thenextweb.com/artificial-intelligence/2016/10/13/deepminds-new-computer-can-learn-from-its-own-memory/