Dr Raia Hadsell - Artificial Intelligence researcher
Pint of Science

Dr. Raia Hadsell is a senior research scientist at the world-renowned artificial intelligence company DeepMind. Originally from California, she transitioned from an undergraduate degree in religion and philosophy to computer science at the Ph.D. level.

AI originally referred to any computer program meant to emulate the decision processes of a human. DeepMind began as a startup; it was bought by Google in 2014, but it still operates as a separate, independent research group. Its goal is to solve the problem of intelligence, and then to figure out what new problems to solve once intelligence has been figured out.

Go is an ancient, simple game in which two players put down black or white stones to capture territory. It's popularly played in Korea. There is an enormous number of ways the game can unfold. DeepMind learned the game from scratch with an AGI approach: a neural network that learns from the visible board and from the experience of playing the game.

Dr. Hadsell says that, unfortunately, it's nowhere near as exciting as in the movies. It looks like a set of graphs and plots over time that reflect the AI's expectation of winning, that is, the probabilities of winning and losing at any point. You can also see its prediction of what the opponent will play next.

Dr. Hadsell says this would be a hard problem for AI. The algorithms would have a tough time in Minecraft's creative mode because it's an open world with no objectives.

It's difficult to do. There are algorithms that try to mimic curiosity: when an AI agent playing Minecraft's creative mode comes across something it didn't predict, it is surprised, and that surprise gives it a small bonus reward, so it naturally learns to seek out new areas. Those kinds of algorithms aren't as well developed as ones that maximize an objective or a win rate.
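
To make that idea concrete, here is a minimal sketch of a prediction-error curiosity bonus, assuming a hypothetical agent with a simple learned forward model; the class, names, and linear model are illustrative stand-ins, not the algorithms discussed in the talk.

```python
import numpy as np

class CuriousAgent:
    """Toy agent that adds a 'surprise' bonus to its reward.

    A forward model tries to predict the next observation; the bigger the
    prediction error, the bigger the intrinsic bonus. Hypothetical sketch,
    not any real DeepMind implementation.
    """

    def __init__(self, obs_dim, lr=0.01, bonus_scale=0.1):
        self.W = np.zeros((obs_dim, obs_dim))  # linear forward model
        self.lr = lr
        self.bonus_scale = bonus_scale

    def intrinsic_bonus(self, obs, next_obs):
        predicted = self.W @ obs
        error = next_obs - predicted
        # Surprise = prediction error; reward the agent for finding it.
        bonus = self.bonus_scale * float(np.mean(error ** 2))
        # Update the model so familiar transitions stop being surprising.
        self.W += self.lr * np.outer(error, obs)
        return bonus

# Usage: total reward = environment reward + curiosity bonus.
agent = CuriousAgent(obs_dim=4)
obs = np.ones(4)
next_obs = np.array([1.0, 2.0, 0.5, 0.0])
total_reward = 0.0 + agent.intrinsic_bonus(obs, next_obs)
```

Because the forward model improves as the agent sees more of the world, the bonus fades for familiar places and stays high for new ones, which is what nudges the agent to explore.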

Deep reinforcement learning brings artificial neural networks together with reinforcement learning. It uses a neural network that can have millions of neurons, which are adapted and changed based on the inputs and outputs of the network, and the network's output is used to take an action in the world.
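
As a rough illustration of that description, the sketch below pushes an observation through a tiny two-layer network to pick an action; the layer sizes, the four actions, and the random observation are made-up placeholders, not the systems Dr. Hadsell works on.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny policy network: observation in, action probabilities out.
# Real deep RL networks have millions of weights; these are stand-ins.
W1 = rng.normal(scale=0.1, size=(32, 8))   # hidden-layer weights
W2 = rng.normal(scale=0.1, size=(4, 32))   # output-layer weights (4 actions)

def act(observation):
    hidden = np.tanh(W1 @ observation)             # nonlinear "neurons"
    logits = W2 @ hidden
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over actions
    return rng.choice(4, p=probs)

# The chosen action is taken in the world; the reward that comes back is
# what a learning rule would use to adapt W1 and W2.
observation = rng.normal(size=8)
action = act(observation)
```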

The human brain and other nervous systems in the biological world are the only examples we have of general intelligence. Dr. Hadsell and her team take inspiration from neuroscience, but they want their systems to run efficiently on modern computers, which means the result looks different from the brain.

There has been a rise in research showing that, when trained in a particular way, neural networks develop units that look and act like grid cells, which has opened up new avenues for exploration and research.

It depends on how you define AI. Realistically, it shows up in anything that involves media or understanding, tasks scientists couldn't tackle before large neural networks could solve them, such as summarizing or captioning what's happening in a video. We are also seeing changes in the medical world, where medical scans can be analyzed and read automatically.

Having a curated curriculum for each student could be really valuable, since each student learns differently. Dr. Hadsell thinks it could be possible, and that the role of human teachers remains very important. Humans are much better and more efficient at learning a curriculum of knowledge, applying it, and putting it all together. There are no machines that can learn that way, as of now.

They find out through careful analysis and additional experiments, in order to understand what's happening.

When writing code at Google, it's hard not to use a lot of proprietary Google tools. Dr. Hadsell's team open-sources things that provide a lot of value for others. You want to share the code so you can build a community of people working toward the same goal, and you want other labs to try to reproduce your results. When results disagree, you learn.

Dr. Hadsell believes the former rather than the latter: they are trying to develop AI as a general problem-solving approach. Along the way, they take a lot of inspiration from human intelligence. Right now, mouse-level AI is about what they can successfully achieve.

The most challenging cases are when humans are making decisions that are abstract, vague, or based on inconsistent information, when they are trying to come up with a decision that isn't black and white. Anything that involves a lot of human interaction and bringing together uncertain information remains a complex challenge for AI.

Continual learning, carrying knowledge forward over time, is still far from what scientists can do with neural networks. Learning from a data set is different from how humans learn. It's critical to learn in stages and to grow the knowledge and capability of the system, educating the AI by starting with small problems first.
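
As a toy illustration of learning in stages, the sketch below trains one model on a made-up summation task, starting with short lists and moving to longer ones while keeping the weights already learned; the task and the linear model are assumptions for the example, not how the systems discussed in the talk are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(length, size=64):
    """Lists of `length` random numbers; the target is their sum."""
    x = rng.normal(size=(size, length))
    return x, x.sum(axis=1)

# One weight vector grown across stages, so knowledge from the easy stage
# (short lists) carries into the harder stages (longer lists).
weights = np.zeros(0)
for stage_length in (2, 5, 10):
    weights = np.concatenate([weights, np.zeros(stage_length - len(weights))])
    for _ in range(200):
        x, y = make_batch(stage_length)
        pred = x @ weights
        grad = x.T @ (pred - y) / len(y)   # gradient direction for squared error
        weights -= 0.1 * grad
    print(f"after stage {stage_length}: weights ~= {weights.round(2)}")
```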


