Has plenty of examples in Python. You can also look at different Udacity courses. They have a couple dealing with ML with Python.
The course I took used the Norvig text as a textbook, which I also recommend.
http://www.amazon.com/Artificial-Intelligence-Modern-Approac... Note that the poor reviews center on the price, the digital/Kindle edition and the fact that the new editions don't differ greatly from the older ones. If you've never read it and you have the $$, a hardbound copy makes a great learning and reference text, and it's the kind of content that's not going to go out of date.
I'm fairly certain that the claim in the introduction – "Unfortunately, current AI texts either fail to mention this
algorithm or refer to it only in the context of two-person game searches" – is no longer true.
From my current textbook (Artificial Intelligence: A Modern Approach):
"Iterative deepening search (or iterative deepening depth-first search) is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit. It does this by gradually increasing the limit — first 0, then 1, then 2, and so on — until a goal is found... In general, iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known."
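The strategy described in that quote is short enough to sketch directly. Here's a minimal Python version, assuming an adjacency-dict graph; the graph and node names are made up for illustration:

```python
# A minimal sketch of iterative deepening depth-first search (IDDFS).
# The adjacency-dict graph and node names are illustrative assumptions.

def depth_limited_search(graph, node, goal, limit):
    """Depth-first search that gives up once the depth limit is exhausted."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        path = depth_limited_search(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening_search(graph, start, goal, max_depth=20):
    """Re-run depth-limited search with limit 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        path = depth_limited_search(graph, start, goal, limit)
        if path is not None:
            return path
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"], "F": []}
print(iterative_deepening_search(graph, "A", "F"))  # ['A', 'C', 'E', 'F']
```

Because each pass is plain DFS, memory stays linear in the depth, yet the first solution found is a shallowest one — the property the textbook is praising.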
Almost any learning algorithm would be better than neural networks. However, it is a very deep topic -- what you need is a book and a few months, not a quick answer on SO.
So I'll recommend a few:
I don't recommend neural networks for this because they take a very long time to learn, even with the most modern learning-optimization techniques (which I don't think you're going to find in any book yet anyway). You want these invaders to learn on the fly, so you need something much more responsive.
I would probably use a decision tree, and keep a limited-length memory vector so that the critters can adapt quickly to changes in the player's strategy.
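To make the "limited-length memory" idea concrete: the critter only remembers the player's N most recent actions, so old behavior falls out of scope and it adapts quickly when the player changes strategy. This sketch uses a simple most-frequent-recent-action predictor as a stand-in for a full decision tree; the class, action names, and window size are all invented for illustration:

```python
# A rough sketch of the "limited-length memory vector" idea: keep only the
# player's N most recent actions, so the critter adapts quickly to strategy
# changes. The names and window size are illustrative assumptions.
from collections import Counter, deque

class AdaptiveCritter:
    def __init__(self, memory_len=10):
        # deque with maxlen silently drops the oldest observation when full
        self.memory = deque(maxlen=memory_len)

    def observe(self, player_action):
        self.memory.append(player_action)

    def predict_player(self):
        """Guess the player's next action as the most frequent recent one."""
        if not self.memory:
            return None
        return Counter(self.memory).most_common(1)[0][0]

critter = AdaptiveCritter(memory_len=5)
for action in ["dodge_left"] * 8:
    critter.observe(action)
print(critter.predict_player())  # dodge_left

# The player switches strategy; the short memory lets the critter catch up
# after only a few observations.
for action in ["dodge_right"] * 5:
    critter.observe(action)
print(critter.predict_player())  # dodge_right
```

In a real game you'd feed the same sliding window into a decision tree (e.g. retrained every few frames), but the responsiveness comes from the bounded memory, not from the classifier.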
You ask a very broad question. There are many implementations of inference engines, but they would all rely on natural language processing and searching algorithms at their core so I would focus on that.
Try the book Artificial Intelligence: A Modern Approach. It has sections on both NLP and search and is very good.
There are numerous classic books:
The first two are the easiest; the second covers more than just machine learning. However, there is little "pragmatic" or "engineering" material in them, and the math is quite demanding -- but so is the whole field. I guess you will do best with O'Reilly's Programming Collective Intelligence because of its focus on programming.
In the Artificial Intelligence book, under the "finding a solution by searching" topic, we discussed partial searching and complete searching algorithms. But I really can't understand the difference between partial searching and complete searching. I'd like an explanation.
If no, what are the great resources for starters?
The videos / slides / assignments from here:
Any tips before I get this journey going?
Depending on your maths background, you may need to refresh some math skills, or learn some new ones. The basic maths you need includes calculus (including multi-variable calc / partial derivatives), probability / statistics, and linear algebra. For a much deeper discussion of this topic, see this recent HN thread:
Luckily there are tons of free resources available online for learning various maths topics. Khan Academy isn't a bad place to start if you need that. There are also tons of good videos on YouTube from Gilbert Strang, Professor Leonard, 3blue1brown, etc.
Also, check out Kaggle.com. Doing Kaggle contests can be a good way to get your feet wet.
And the various Wikipedia pages on AI/ML topics can be pretty useful as well.
As you can see in my "tag" on my post - most of what I have learned came from these courses:
1. AI Class / ML Class (Stanford-sponsored, Fall 2011)
2. Udacity CS373 (2012) - https://www.amazon.com/Artificial-Intelligence-Modern-Approa...
Note that it is a textbook, with textbook pricing...
Another one that I have heard is good for learning neural networks with is:
There are tons of other resources online - the problem is separating the wheat from the chaff, because some of the material is outdated or even considered non-useful now. There are many research papers out there that can be bewildering; until you know which is which, take them all with a grain of salt - research papers and websites alike. There's also the problem of finding diamonds in the rough: for instance, LeNet was created in the 1990s, in the middle of an AI winter, and while some of what was written at the time isn't considered useful today, LeNet itself is a foundational work of today's ML/AI practice.
Now - history: You would do yourself good to understand the history of AI and ML, the debates, the arguments, etc. The foundational work comes from McCulloch and Pitts' concept of an artificial neuron, and where that led:
Also - Alan Turing anticipated neural networks of a kind that wasn't seen until much later:
...I don't know if he was aware of McCulloch and Pitts' work, which came prior - they were coming at the problem from the physiological side of things; a classic case where inter-disciplinary work might have benefited all (?).
You might want to also look into the philosophical side of things - theory of mind stuff, and some of the "greats" there (Minsky, Searle, etc); also look into the books written and edited by Douglas Hofstadter:
There's also the "lesser known" or "controversial" historical people:
* Hugo De Garis (CAM-Brain Machine)
* Igor Aleksander
* Donald Michie (MENACE)
...among others. It's interesting - De Garis was a very controversial figure, and most of his work, for whatever it is worth, has kinda been swept under the rug. He built a few computers that were FPGA-based hardware neural network machines that used cellular-automata a-life to "evolve" neural networks. Only a handful of these machines were made; aesthetically, their designs were as "sexy" as the old Cray computers (seriously).
Donald Michie's MENACE - interestingly enough - was a "learning computer" made of matchboxes and beads. It essentially implemented a simple reinforcement-learning system (not a neural network) that learned how to play (and win at) noughts and crosses (tic-tac-toe). All in a physically (by hand) manipulated "machine".
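The bead mechanism is simple enough to sketch in Python. To be clear, everything below - the board encoding, the initial bead counts, and the reward/punishment sizes - is a loose approximation for illustration, not Michie's exact scheme:

```python
# A toy sketch of MENACE's matchbox-and-beads idea. Each "box" is a board
# state holding a bead count per legal move; a move is drawn in proportion
# to its beads, and beads are added or removed after a win or loss.
# Encoding and reward sizes are illustrative assumptions, not Michie's.
import random

random.seed(0)

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

boxes = {}  # board-state string -> {move index: bead count}

def menace_move(board):
    state = "".join(board)
    if state not in boxes:
        boxes[state] = {i: 3 for i, c in enumerate(board) if c == "."}
    beads = boxes[state]
    moves, weights = zip(*beads.items())
    return state, random.choices(moves, weights=weights)[0]

def play_game():
    """MENACE plays X against a uniformly random O opponent."""
    board, history, player = ["."] * 9, [], "X"
    while True:
        if player == "X":
            state, move = menace_move(board)
            history.append((state, move))
        else:
            move = random.choice([i for i, c in enumerate(board) if c == "."])
        board[move] = player
        w = winner(board)
        if w or "." not in board:
            return w, history
        player = "O" if player == "X" else "X"

def reinforce(result, history):
    for state, move in history:
        if result == "X":
            boxes[state][move] += 3  # reward: add beads to each move played
        elif result == "O":
            boxes[state][move] = max(1, boxes[state][move] - 1)  # punish

late_wins = 0
for g in range(2000):
    result, history = play_game()
    reinforce(result, history)
    if g >= 1500 and result == "X":
        late_wins += 1
print(late_wins)  # MENACE wins the large majority of the last 500 games
```

The real MENACE also varied bead counts by move number and could run a box out of beads entirely ("resigning"); this sketch clamps counts at 1 to keep every move playable.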
Then there is one guy, who is "reviled" in the old-school AI community on the internet (take a look at some of the old comp.ai newsgroup archives, among others). His nom-de-plume is "Mentifex" and he wrote something called "MIND.Forth" (and translated it to a ton of other languages), that he claimed was a real learning system/program/whatever. His real name is "Arthur T. Murray" - and he is widely considered to be one of the earliest "cranks" on the internet:
Heck - just by posting this I might be summoning him here! Seriously - this guy gets around.
Even so - I'm of the opinion that it might be useful for people to know about him, so they don't go too far down his rabbit-hole; at the same time, I have a small feeling that there might be a gem or two hidden inside his system or elsewhere. Maybe not, but I like to keep a somewhat open mind about these kinds of things, and not just dismiss them out of hand (while still keeping in mind the opinions of those more learned and experienced than me).
Neural networks are kind of déclassé these days. Support vector machines and kernel methods are better for more classes of problems than backpropagation. Neural networks and genetic algorithms capture the imagination of people who don't know much about modern machine learning, but they are not state of the art.
If you want to learn more about AI and machine learning, I recommend reading Russell and Norvig's Artificial Intelligence: A Modern Approach. It's a broad survey of AI and lots of modern techniques. It goes over the history and older techniques too, and will give you a more complete grounding in the basics of AI and machine learning.
Neural networks are pretty easy, though. Especially if you use a genetic algorithm to determine the weights, rather than proper backpropagation.
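To illustrate that last point: here's a tiny sketch of evolving a single neuron's weights with a genetic algorithm instead of backpropagation. The task (learning the AND function) and all the GA parameters are arbitrary choices for illustration:

```python
# A minimal sketch of GA-style weight search for one neuron: keep the
# fittest half of a small population, refill with Gaussian mutants.
# The task (AND) and all parameters are illustrative assumptions.
import random

random.seed(42)

DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND function

def neuron(weights, inputs):
    w1, w2, bias = weights
    return 1 if w1 * inputs[0] + w2 * inputs[1] + bias > 0 else 0

def fitness(weights):
    """Number of the four training cases classified correctly."""
    return sum(1 for inputs, target in DATA if neuron(weights, inputs) == target)

def mutate(weights):
    return tuple(w + random.gauss(0, 0.5) for w in weights)

population = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(DATA):
        break  # a perfect classifier evolved
    # selection + mutation: keep the top 10, fill the rest with their mutants
    population = population[:10] + [mutate(random.choice(population[:10])) for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best))  # cases correct, out of 4
```

No gradients anywhere - which is exactly why it's "easy", and also why it scales poorly compared to backpropagation once the weight vector gets large.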