Artificial Intelligence: A Modern Approach (3rd Edition)

Category: Computer Science
Author: Stuart Jonathan Russell, Peter Norvig

About This Book

Dr. Peter Norvig, contributing Artificial Intelligence author, and Professor Sebastian Thrun, a Pearson author, are offering a free online course on artificial intelligence at Stanford University.

According to an article in The New York Times, the course on artificial intelligence is “one of three being offered experimentally by the Stanford computer science department to extend technology knowledge and skills beyond this elite campus to the entire world.” One of the other two courses, an introduction to database software, is being taught by Pearson author Dr. Jennifer Widom.


by boltzmannbrain   2019-01-03
> study textbooks. Do exercises. Treat it like academic studying

This. Highly recommend Russell & Norvig [1] for high-level intuition and motivation. Then Bishop's "Pattern Recognition and Machine Learning" [2] and Koller's PGM book [3] for the fundamentals.

Avoid MOOCs, but there are useful lecture videos, e.g. Hugo Larochelle on belief propagation [4].

FWIW this is coming from a mechanical engineer by training, but self-taught programmer and AI researcher. I've been working in industry as an AI research engineer for ~6 years.





by henning   2018-12-28
Would you enjoy something that gives a broad overview? Norvig's AI book should give you a very broad perspective of the entire field. There will be many course websites with lecture material and lectures to go along with it that you may find useful.

The book website - which might be more directly relevant to your interests.

There's also what I guess you would call "the deep learning book".

(People have different preferences for how they like to learn and as you can see I like learning from books.)

(I apologize if you already knew about these things.)

by danielmorozoff   2018-11-10
I think you misunderstood me.

I do not believe "you need to understand all these deep and hard concepts before you start to touch ML." That is a contortion of what I said.

First point: ML is not a young field - the term was coined in 1959, and the underlying ideas are much older. *

Second point: ML/'AI' relies on a slew of concepts from maths. Take any first-year textbook -- I personally like Peter Norvig's. I find the breadth of the field quite astounding.

Third point: most PhDs are specialists -- i.e., if I am getting a PhD in ML, I specialize in a concrete problem domain/subfield; I cannot specialize in all subfields. For example, I work on event detection and action recognition in video models. Before being accepted into a PhD program you must pass a qualifying exam, which ensures you understand the foundations of the field. So comparing to this is a straw-man argument.

If your definition of ML is taking a TF model and running it, then I believe we have diverging assumptions of what the point of a course in ML is. Imo the point of an undergraduate major is to become acquainted with the field and be able to perform reasonably well in it professionally.

The reason why so many companies (Google, FB, MS, etc.) are paying for this talent is that it is not easy to learn and takes time to master. Most people who just touch ML have a surface-level understanding.

I have seen people who excel at TF (applied to deep learning) without having an ML background, but even they have issues when it comes to understanding concepts in optimization, convergence, model capacity that have huge bearings on how their models perform. *

by jadedhacker   2018-08-03
For what it's worth, I feel like we already have a version of that book by Peter Norvig:

by Chad Okere   2018-03-19

Neural networks are kind of déclassé these days. Support vector machines and kernel methods are better for more classes of problems than backpropagation. Neural networks and genetic algorithms capture the imagination of people who don't know much about modern machine learning, but they are not state of the art.

If you want to learn more about AI and machine learning, I recommend reading Peter Norvig's Artificial Intelligence: A Modern Approach. It's a broad survey of AI and lots of modern techniques. It goes over the history and older techniques too, and will give you a more complete grounding in the basics of AI and machine learning.

Neural networks are pretty easy, though. Especially if you use a genetic algorithm to determine the weights, rather than proper backpropagation.
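To make the idea concrete, here is a minimal sketch (my own illustration, not from the comment) of evolving the weights of a tiny feed-forward network with a genetic algorithm instead of backpropagation. The network shape (2-2-1, tanh), the XOR fitness function, and all GA parameters are arbitrary choices for demonstration.

```python
import math
import random

random.seed(0)

def forward(weights, x):
    # Tiny 2-2-1 network; weights is a flat list of 9 numbers:
    # 4 input->hidden weights, 2 hidden biases, 2 hidden->output weights, 1 output bias.
    h = [math.tanh(weights[0] * x[0] + weights[1] * x[1] + weights[4]),
         math.tanh(weights[2] * x[0] + weights[3] * x[1] + weights[5])]
    return math.tanh(weights[6] * h[0] + weights[7] * h[1] + weights[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(weights):
    # Negative squared error over the XOR truth table (0 is a perfect score).
    return -sum((forward(weights, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=100, sigma=0.5):
    pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]  # truncation selection; parents survive unchanged
        pop = parents + [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

No gradients anywhere: selection plus Gaussian mutation does all the work, which is why this is simple to write but far slower to converge than backpropagation on anything non-trivial.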

by cr0sh   2018-01-07
TL;DR - read my post's "tag" and take those courses!


As you can see in my "tag" on my post - most of what I have learned came from these courses:

1. AI Class / ML Class (Stanford-sponsored, Fall 2011)

2. Udacity CS373 (2012) -

Note that it is a textbook, with textbook pricing...

Another one that I have heard is good for learning neural networks with is:

There are tons of other resources online - the problem is separating the wheat from the chaff, because some of the material is outdated or even considered non-useful. There are many research papers out there that can be bewildering. I would say, until you know which is which, take them all with a grain of salt - research papers and websites alike. There's also the problem of finding diamonds in the rough (for instance, LeNet was created in the 1990s - but that was in the middle of an AI winter, and some of the material written at the time isn't considered as useful today - yet LeNet is a foundational work of today's ML/AI practice).

Now - history: you would do yourself good to understand the history of AI and ML, the debates, the arguments, etc. The foundational work comes from McCulloch and Pitts' concept of an artificial neuron, and where that led:

Also - Alan Turing anticipated neural networks of a kind that wasn't seen until much later:

...I don't know if he was aware of McCulloch and Pitts' work, which came prior; they were coming at the problem from the physiological side of things - a classic case where interdisciplinary work might have benefited all (?).

You might want to also look into the philosophical side of things - theory of mind stuff, and some of the "greats" there (Minsky, Searle, etc.); also look into the books written and edited by Douglas Hofstadter (e.g., Gödel, Escher, Bach).

There's also the "lesser known" or "controversial" historical people:

* Hugo De Garis (CAM-Brain Machine)

* Igor Aleksander

* Donald Michie (MENACE)

...among others. It's interesting - De Garis was a very controversial figure, and most of his work, for whatever it is worth, has kinda been swept under the rug. He built a few computers that were FPGA-based hardware neural network machines that used cellular-automata artificial life to "evolve" neural networks. Only a handful of these machines were made; aesthetically, their designs were as "sexy" as the old Cray computers (seriously).

Donald Michie's MENACE - interestingly enough - was a "learning computer" made of matchboxes and beads. It essentially implemented a simple trial-and-error learning scheme that learned how to play (and win at) noughts and crosses (tic-tac-toe). All in a physically (by hand) manipulated "machine".
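The bead mechanism is simple enough to sketch in a few lines. This is my own toy illustration, not Michie's full noughts-and-crosses player: each matchbox corresponds to one board state and holds beads for each legal move; a move is drawn in proportion to its bead count, and beads are added after a win or removed after a loss.

```python
import random

random.seed(1)

class Matchbox:
    def __init__(self, moves, initial_beads=3):
        self.beads = {m: initial_beads for m in moves}

    def draw(self):
        # Pick a move with probability proportional to its bead count.
        moves = list(self.beads)
        weights = [self.beads[m] for m in moves]
        return random.choices(moves, weights=weights)[0]

    def reinforce(self, move, delta):
        # Win: add beads for the move played; loss: remove beads (never below 1,
        # so a move is never eliminated entirely).
        self.beads[move] = max(1, self.beads[move] + delta)

# Toy single-state environment: move "a" wins 80% of the time, "b" only 20%.
# After enough plays, the box should hold far more "a" beads than "b" beads.
box = Matchbox(["a", "b"])
for _ in range(500):
    move = box.draw()
    win = random.random() < (0.8 if move == "a" else 0.2)
    box.reinforce(move, +1 if win else -1)

print(box.beads)
```

The real MENACE used one matchbox per reachable board position and reinforced the whole sequence of moves after each game, but the per-box update is exactly this add/remove-beads rule.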

Then there is one guy, who is "reviled" in the old-school AI community on the internet (take a look at some of the old newsgroup archives, among others). His nom-de-plume is "Mentifex" and he wrote something called "MIND.Forth" (and translated it to a ton of other languages), that he claimed was a real learning system/program/whatever. His real name is "Arthur T. Murray" - and he is widely considered to be one of the earliest "cranks" on the internet:

Heck - just by posting this I might be summoning him here! Seriously - this guy gets around.

Even so - I'm of the opinion that it might be useful for people to know about him, so they don't go too far down his rabbit hole; at the same time, I have a small feeling that there might be a gem or two hidden inside his system or elsewhere. Maybe not, but I like to keep a somewhat open mind about these kinds of things, and not just dismiss them out of hand (while still keeping in mind the opinions of those more learned and experienced than me).

EDIT: formatting

by mindcrime   2017-10-31
What? No. Why in the world do people even ask this kind of question? To a first approximation, the answer to an "is it too late to get started with..." question is always "no".

If no, what are the great resources for starters?

The videos / slides / assignments from here:

This book:

This book:

These books:

This book:

These subreddits:

These journals:

This site:

Any tips before I get this journey going?

Depending on your maths background, you may need to refresh some math skills, or learn some new ones. The basic maths you need includes calculus (including multi-variable calc / partial derivatives), probability / statistics, and linear algebra. For a much deeper discussion of this topic, see this recent HN thread:

Luckily there are tons of free resources available online for learning various maths topics. Khan Academy isn't a bad place to start if you need that. There are also tons of good videos on Youtube from Gilbert Strang, Professor Leonard, 3blue1brown, etc.

Also, check out Kaggle - doing Kaggle contests can be a good way to get your feet wet.

And the various Wikipedia pages on AI/ML topics can be pretty useful as well.

by anonymous   2017-09-24

In the Artificial Intelligence book, under the "solving problems by searching" topic, we discussed partial search and complete search algorithms, but I can't really understand the difference between partial search and complete search. I'd like an explanation.

by anonymous   2017-08-20

There are numerous classical books:

The first two are the easiest, and the second one covers more than machine learning. However, there is little "pragmatic" or "engineering" material in them, and the math is quite demanding - but so is the whole field. I guess you will do best with O'Reilly's Programming Collective Intelligence because it has its focus on programming.

by anonymous   2017-08-20

You ask a very broad question. There are many implementations of inference engines, but they all rely on natural language processing and search algorithms at their core, so I would focus on those.

Try the book Artificial Intelligence: A Modern Approach. It has sections on both NLP and search and is very good.

by anonymous   2017-08-20

Almost any learning algorithm would be better than neural networks. However it is a very deep topic -- what you need is a book and a few months, not a quick answer on SO.

So I'll recommend a few:

I don't recommend neural networks for this because they take a very long time to learn, even with the most modern learning-optimization techniques (which I don't think you're going to find in any book yet anyway). You want these invaders to learn on the fly, so you need something much more responsive.

I would probably use a decision tree, and keep a limited-length memory vector so that the critters can adapt quickly to changes in the player's strategy.
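As a rough illustration of that suggestion (the feature names, moves, and window length are all made up), here is a tiny decision-tree learner retrained over a fixed-length memory of recent observations. Because old observations fall out of the window automatically, the predictor tracks a change in the player's behaviour within a handful of turns.

```python
from collections import Counter, deque

def build_tree(rows, labels, features):
    # Minimal ID3-style tree over discrete features; rows are dicts of feature -> value.
    counts = Counter(labels)
    majority = counts.most_common(1)[0][0]
    if len(counts) == 1 or not features:
        return majority  # leaf node: just a label
    feat = max(features, key=lambda f: _purity(rows, labels, f))
    branches = {}
    for value in {r[feat] for r in rows}:
        idx = [i for i, r in enumerate(rows) if r[feat] == value]
        branches[value] = build_tree([rows[i] for i in idx],
                                     [labels[i] for i in idx],
                                     [f for f in features if f != feat])
    return (feat, branches, majority)

def _purity(rows, labels, feat):
    # Score a split by how many majority-class examples each branch isolates.
    score = 0
    for value in {r[feat] for r in rows}:
        sub = [labels[i] for i, r in enumerate(rows) if r[feat] == value]
        score += Counter(sub).most_common(1)[0][1]
    return score

def predict(tree, row):
    while isinstance(tree, tuple):
        feat, branches, majority = tree
        tree = branches.get(row[feat], majority)  # unseen value: fall back to majority
    return tree

# Limited-length memory: only the 20 most recent observations are kept.
memory = deque(maxlen=20)

# Simulated player who dodges left when shot at and advances otherwise.
for i in range(30):
    shot = (i % 2 == 0)
    memory.append(({"shot_at": shot}, "dodge_left" if shot else "advance"))

rows = [obs for obs, _ in memory]
labels = [move for _, move in memory]
tree = build_tree(rows, labels, ["shot_at"])
print(predict(tree, {"shot_at": True}))
```

Retraining on every turn is cheap here because the window is tiny, which is exactly the responsiveness the comment is after.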

by austinl   2017-08-19
I recently implemented depth-first iterative deepening in an Artificial Intelligence class project to solve the classic missionaries and cannibals problem [0]. The professor remarked that while there have been some optimizations over the last few decades, using them can be quite messy – to the point where the combination of A* and iterative deepening is still commonly used in the field.

I'm fairly certain that the claim in the introduction – "Unfortunately, current AI texts either fail to mention this algorithm or refer to it only in the context of two-person game searches" – is no longer true.

From my current textbook (Artificial Intelligence: A Modern Approach [1]):

"Iterative deepening search (or iterative deepening depth-first search) is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit. It does this by gradually increasing the limit — first 0, then 1, then 2, and so on — until a goal is found... In general, iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known."
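The strategy the passage describes fits in a few lines. Here is a sketch on a toy graph (the graph and goal are made up for illustration): run a depth-limited DFS with limit 0, then 1, then 2, and so on, until the goal is found.

```python
def depth_limited_search(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None  # cut off: depth budget exhausted
    for child in graph.get(node, []):
        if child not in path:  # avoid cycles along the current path
            result = depth_limited_search(graph, child, goal, limit - 1,
                                          path + [child])
            if result is not None:
                return result
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    # Gradually increase the depth limit: 0, 1, 2, ...
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(iterative_deepening(graph, "A", "F"))  # prints ['A', 'B', 'D', 'F']
```

The early iterations re-expand the shallow nodes, but since the frontier of a search tree dominates its interior, the repeated work costs only a constant factor - which is why the textbook can recommend it for large spaces of unknown solution depth.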



by nlawalker   2017-08-19
To anyone who's never done the Pacman projects: I highly recommend them[1]. They are an absolute blast and incredibly satisfying. Plus, if you don't know Python, they are a great way to learn.

The course I took used the Norvig text[2] as a textbook, which I also recommend.

[1] Note that the poor reviews center on the price, the digital/Kindle edition and the fact that the new editions don't differ greatly from the older ones. If you've never read it and you have the $$, a hardbound copy makes a great learning and reference text, and it's the kind of content that's not going to go out of date.