You can access the full course here: The Complete Artificial Neural Networks Developer Course
Why do we even have artificial intelligence? Computers are really dumb machines! When we write code and programs, we give the computer a very explicit set of instructions that it isn’t allowed to deviate from. Inside of our program, we must handle every single point of error or user input. Humans, on the other hand, have the capability to learn and generalize. For example, if you were handed a pen from a brand new pen company, you would immediately recognize what it was and how to use it. In other words, you didn’t need to be taught how to use that particular pen; you generalized your knowledge of pens.
Good old-fashioned AI (GOFAI) began in the 1960s at the Massachusetts Institute of Technology (MIT), led by the pioneer of artificial intelligence, Marvin Minsky. All of these AI algorithms were based on search, not actual learning. During this era, your goal was to rephrase your AI problem as a search problem, then use standard search algorithms, such as breadth-first/depth-first search, uniform cost search, A*, etc., to solve your problem and arrive at an answer. For game AI, adversarial search algorithms, such as minimax, were used. Knowledge-based systems, such as MYCIN, used a series of questions and logic rules to solve the problem. In all of these cases, there was no learning: they either looked through the entire search space or used pre-defined rules.
The modern approach to AI is focused on learning instead of search. Our AI algorithms are data-driven: we provide them with many examples to show them how to perform our task or solve our problem. What we’re learning are the parameters of our AI algorithm. We can use these parameters to generalize our algorithm to new data that it hasn’t seen before.
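To make “learning the parameters of our AI algorithm” concrete, here is a minimal hypothetical sketch (not code from the course): we learn the slope and intercept of a line from example data points, then use those learned parameters to make predictions on an input the algorithm has never seen. All numbers are made up for illustration.

```python
# A toy example of "learning parameters from examples":
# fit the slope w and intercept b of a line y = w*x + b
# from example (x, y) pairs, then generalize to unseen x.

def fit_line(examples, lr=0.01, steps=5000):
    """Learn w and b by gradient descent on the mean squared error."""
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in examples:
            err = (w * x + b) - y       # prediction error on this example
            grad_w += 2 * err * x / n   # gradient of the loss w.r.t. w
            grad_b += 2 * err / n       # gradient of the loss w.r.t. b
        w -= lr * grad_w                # nudge parameters downhill
        b -= lr * grad_b
    return w, b

# Training examples drawn from the underlying rule y = 2x + 1
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = fit_line(data)

# Generalize: predict for an x the model never saw during training
print(round(w * 10 + b))  # → 21
```

The point is that the data, not the programmer, determines the values of `w` and `b`; this is the learning that GOFAI search lacked.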
One example of machine learning includes image captioning. The goal of this task is to generate a novel, i.e., not retrieved/fetched, caption given an image. In this particular case, we’re merging image and text modalities into a single AI algorithm.
We can even perform the opposite task and generate an image given a text description!
In another example, we can use AI to “re-paint” provided images in the style of a particular artist. This is called Neural Style. For example, we provide any image to this algorithm and then a reference image of an artist, such as Van Gogh’s Starry Night, and the algorithm will modify our image to look like it was painted in the style of Starry Night!
In another example of merging images and text, we can train a model to perform visual question answering. In other words, we can provide an image and submit text queries to it, and the model will attempt to answer the query. For example, if we gave an image of giraffes, we can ask the model “how many giraffes are in the image?” and it would attempt to answer by returning a number.
There are many more examples of the kinds of amazing tasks that AI and neural networks can accomplish, and we’ll be learning about the fundamentals that allow us to train neural networks so they can perform these tasks.
In this video, I just want to introduce the concepts of artificial intelligence and machine learning to give some background information. So, first of all, what is artificial intelligence, and why do we actually need it?
To think about this, you have to accept the fact that computers are actually really dumb machines. If you think about it, whenever you write a program, you’re giving the computer an explicit set of instructions that it must follow until the end of the program, and these instructions are very, very explicit. You say something like “let x equal four,” and so the instructions that you’re giving it can’t be vague in any sense; they have to be very clear.
Another problem with this is that you have to account for every possible scenario in your code. Think of this as tons of switch cases or tons of else blocks: when the user gives some kind of input, you have to do lots of checking to make sure that it’s within the parameters of your program, and so on. You really have to account for all of these cases when you’re dealing with these programs.
Again, this behavior isn’t really human-like. Humans can deal with vagueness; they have so much more computing power mentally, and they’re able to handle a lot of these cases. They don’t have to think about every possible case in order to perform a task, and they don’t have to be given incredibly explicit instructions. Sure, it helps, but humans can actually handle vagueness quite well. So that’s the goal that we’re trying to get to with artificial intelligence: can you build something that can think and reason similarly to humans?
To begin our story, we first have to talk about good old-fashioned AI. This actually started during the 1960s, led by Marvin Minsky at MIT, where he founded MIT’s Artificial Intelligence group. The thing with good old-fashioned AI is that it isn’t really learning of any kind. The whole point of good old-fashioned AI is that it was all about search, and by search I mean things like graph searching; I’ll share an example of that in the next slide.
The question with good old-fashioned AI was: could you formulate your artificial intelligence problem as a search problem? If you could, then you could use good old-fashioned AI to solve it. There’s really no learning going on, because in these good old-fashioned AI systems you’re just enumerating all possible states and then searching through them, and searching isn’t really learning. We’ll see this in the next slide. So there were different kinds of graph search, and then there was also adversarial search.
Adversarial search was particularly useful for playing games like Pac-Man. If you remember Pac-Man, there’s a little Pac-Man character who has to eat all the pellets while avoiding the ghosts. This is an example of an adversarial search problem because your goal is not only to eat all the pellets but also to avoid the ghosts, so you can frame it as an adversarial search problem. Then there were other kinds of knowledge-based systems, where the computer asks you a ton of questions that you answer, and it finds the answer from your responses. But the whole point is that there’s really no learning going on, even though this is still technically under the branch of artificial intelligence.
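The core idea of adversarial search like minimax can be sketched very compactly. This is a hypothetical toy example (the game tree and payoff numbers are made up, not from the course): the maximizing player picks the move with the highest value, while the minimizing opponent picks the lowest, alternating down the game tree.

```python
# A minimal minimax sketch for a two-player game tree.
# Leaves are payoffs for the maximizing player; inner lists are choice points.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf node: return its payoff
        return node
    # Recurse on children, with the turn flipping to the other player
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Tiny made-up game tree: max moves first, then min picks among the leaves
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # → 3
```

Note there is still no learning here: the algorithm simply explores every branch of the tree, which is exactly the GOFAI pattern described above.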
Anyway, let’s look at one of these search problems. Here is just a little crudely drawn map, and suppose you’re in Seattle, you want to get to New York City, and you’re booking flights. The edges between these different cities represent some kind of cost: maybe it’s airfare, or, if you want to drive there for some reason, distances or some other cost metric.
This is actually a pretty classic search problem: you want to go from one city to another while minimizing the total path cost. You might think, “Well, I can just go from Seattle to Washington, D.C., and then to New York City; that’s only two edges.” But that might not be the optimal path. Instead, maybe we can go from Seattle to Chicago to New York City, or from Seattle to Chicago to Columbus to New York City, and it ends up costing us less overall than if we went through Washington, D.C.
This is an example of a kind of problem that we can frame as a search problem and then solve with good old-fashioned AI using different search algorithms: depth-first search, breadth-first search, A*, and so on.
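The city map above can be sketched as a small uniform-cost search (Dijkstra’s algorithm). This is a hypothetical illustration: the edge costs below are made-up numbers, chosen so that the route through Chicago and Columbus beats the two-edge route through Washington, D.C., as in the example.

```python
import heapq

# Toy flight graph; edge costs are invented for illustration.
graph = {
    "Seattle":         {"Chicago": 300, "Washington D.C.": 500},
    "Chicago":         {"Columbus": 100, "New York City": 250},
    "Columbus":        {"New York City": 120},
    "Washington D.C.": {"New York City": 200},
    "New York City":   {},
}

def cheapest_path(start, goal):
    """Uniform-cost search: return (total cost, path) with minimal summed edge cost."""
    frontier = [(0, start, [start])]  # priority queue ordered by path cost so far
    visited = set()
    while frontier:
        cost, city, path = heapq.heappop(frontier)
        if city == goal:
            return cost, path         # first time we pop the goal, the path is optimal
        if city in visited:
            continue
        visited.add(city)
        for neighbor, edge in graph[city].items():
            heapq.heappush(frontier, (cost + edge, neighbor, path + [neighbor]))
    return float("inf"), []           # goal unreachable

cost, path = cheapest_path("Seattle", "New York City")
print(cost, path)  # → 520 ['Seattle', 'Chicago', 'Columbus', 'New York City']
```

With these costs, the direct-looking route (Seattle to Washington, D.C., to New York City) totals 700, while the longer-looking route through Chicago and Columbus totals 520, which is exactly the point of the example: fewer edges does not mean lower cost.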
So anyway, what we’ve discussed so far is good old-fashioned AI, and as I mentioned, it’s focused more on search than on actual learning. Now machine learning comes into the picture, and this is where we’re actually taking in examples and learning from those examples. Machine learning is technically a subfield of AI, probably the largest subfield, and some people have gone on to call it true AI, or pure AI. This is the modern approach that people have been taking for at least the past few decades: you take examples, you learn from those examples, and that way you can generalize better.
I’m going to give you lots of examples of this. As I mentioned, there are parameters that you learn, and you use these parameters so that when you get new data, you can perform whatever task you want, whether that’s classification or regression; we’ll talk about those later. But let’s actually look at some of the cool tasks that you can do. I’ve selected a few research papers that show the different cool things you can do with AI.
Here’s an example from the Vision Group at Stanford. This was published a while back. What we can actually do is train a network to generate a caption given an image, and note that these captions aren’t just being retrieved; new captions are actually being generated. This is a really cool example of taking two different modalities of data, images and text, and blending them together so that you can automatically caption images.
Here’s another popular paper, colloquially known as Neural Style. You can take a painting by a famous artist, like Van Gogh’s Starry Night, along with a picture that you’ve taken, and using this Neural Style approach, the algorithm will actually change your image so that it looks as if it was painted by that particular artist. You can see the picture here, and then, based on these different reference paintings, we can repaint it. This is another really cool example of artificial intelligence.
Here’s also a really neat example, published fairly recently, called StackGAN; GAN stands for Generative Adversarial Network. This is doing the inverse of what we saw a couple of slides ago: instead of generating text from an image, it’s actually generating an image from text. You give StackGAN a text description, and it will generate a picture. You can see some of the text descriptions that people have given and some of the images that have been generated; this last row shows StackGAN’s results, and they actually look pretty realistic. Again, this is one of the cool things you can do with artificial intelligence.
Another task is called visual question answering, where you have an image, you can ask a question about it, and it will produce a text answer. They actually have a web demo where you can upload your own images. For example, I asked “How many giraffes are in this image?” and it answered “three.” Another thing I can ask is “What is the cat sleeping on?” and it produces the right text, and so on.
This is another example from Google, specifically from their Google Neural Machine Translation group, and it’s really cool because you can translate between two languages, say Japanese and Korean, without having explicit training examples for that pair; instead, the system uses a kind of intermediary representation. It’s more complicated than this; I’m just trying to give a little top-level overview.
Okay, and this is probably fairly famous; it was on the news. This is AlphaGo, where you train a reinforcement learning agent to learn to play Go, and it turns out it actually plays really well, as you’ve probably seen in the news.
Okay, so here are all of the technologies that were used in the examples I discussed. Neural networks are obviously a big portion, and there were tons of networks used. All the image-based tasks used Convolutional Neural Networks, and all the text-based tasks used Recurrent Neural Networks. I also made a small reference to Generative Adversarial Networks.
There are a lot of technologies involved, but all of them center around this concept of neural networks, so if you have a really good understanding of neural networks, you should be able to gain at least a cursory understanding of the other kinds of networks fairly easily, because they all rely on the same principles. That’s where I’m going to stop this video. To recap, we discussed a little bit about artificial intelligence: the old-fashioned way of doing it using just search, as well as the new way of doing it using learning. Next, we’re going to get much more in-depth into neural networks.
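As a small preview of the building block that all of those networks share, here is a hypothetical sketch of a single artificial neuron (the weights and inputs are made-up numbers, not from the course): it computes a weighted sum of its inputs plus a bias, then passes the result through an activation function.

```python
import math

# A single artificial neuron: weighted sum of inputs plus a bias,
# squashed through a sigmoid activation into the range (0, 1).

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Made-up inputs and weights for illustration
out = neuron([1.0, 0.5], weights=[0.4, -0.2], bias=0.1)
print(round(out, 3))  # → 0.599
```

Convolutional, recurrent, and adversarial networks all build on this same unit; training a network means learning these weights and biases from examples, which is what the rest of the course covers.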
Interested in continuing? Check out the full The Complete Artificial Neural Networks Developer Course, which is part of our Machine Learning Mini-Degree.