As the name suggests, a broad introduction to the field of AI.
- Intelligent agents
- Uninformed and informed (heuristic) search
- Local search, adversarial search
- Constraint satisfaction
- Knowledge representation (unification, resolution principle)
- Introductory machine learning (decision trees, PAC, neural networks)

uninformed search: BFS, DFS, depth-limited search, iterative deepening depth-first search. About a day or two’s worth of learning if you’ve taken 2110.
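to illustrate, here’s a minimal BFS sketch (my own toy example, not course material) that finds a shortest path in an unweighted graph:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: returns a shortest path (fewest edges) or None."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# made-up graph for illustration
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

DFS is the same sketch with a stack (`pop()` instead of `popleft()`), and iterative deepening is just depth-limited DFS in a loop with an increasing depth limit.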
informed search: greedy best-first search, uniform cost search, A*. These are useful, but they’re all more or less slight variations on Dijkstra’s algorithm; if you know Dijkstra, they’re not too much work to learn either.
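the Dijkstra connection is easiest to see in code. here’s a minimal A* sketch (hypothetical grid world, my own example, not from the course): drop the heuristic term and it *is* uniform cost search, i.e. Dijkstra.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid of 0 = free, 1 = wall; Manhattan heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]          # (f = g + h, g, position)
    best_g = {start: 0}
    while frontier:
        _, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g                           # cost of a cheapest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6
```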
stochastic search: hill climbing, simulated annealing, genetic algorithms. this part of the course is really cool IMO. look these up on Wikipedia; they’re very practical in AI.
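for a taste, here’s a simulated annealing sketch on a made-up one-dimensional objective (my own toy, not course material): accept worse moves with probability exp(-delta/T), and cool T over time so the search settles down.

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.95, steps=2000):
    """Minimize cost(); worse moves are accepted with probability exp(-delta / T)."""
    x, best, t = x0, x0, t0
    for _ in range(steps):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand                  # accept (always if better, sometimes if worse)
        if cost(x) < cost(best):
            best = x                  # remember the best state ever visited
        t = max(t * cooling, 1e-6)    # cool the temperature
    return best

# toy objective: minimize (x - 3)^2 with random +-1 steps
random.seed(0)
result = simulated_annealing(lambda x: (x - 3) ** 2,
                             lambda x: x + random.uniform(-1, 1),
                             x0=0.0)
print(round(result, 2))  # close to 3.0
```

setting the cooling rate to 1.0 makes this a pure random walk; setting t0 near zero makes it plain hill climbing.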
minimax search: used to make a decision in a two-player game by playing out all possible turn sequences in advance and picking the move with the best guaranteed outcome.
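in code it’s just a small recursion. a minimal sketch on a made-up two-ply game tree (my own example):

```python
def minimax(state, maximizing, moves, evaluate):
    """Exhaustive minimax: best achievable value for the player to move."""
    options = moves(state)
    if not options:                     # terminal state: score it
        return evaluate(state)
    values = [minimax(s, not maximizing, moves, evaluate) for s in options]
    return max(values) if maximizing else min(values)

# toy tree: a state is a list of child states, or a leaf score
moves = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
print(minimax([[3, 5], [2, 9]], True, moves, evaluate))  # 3
```

the maximizer picks the subtree whose worst case is best: min(3, 5) = 3 beats min(2, 9) = 2.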
knowledge representation: we spent a LOT of time converting propositional logic sentences to CNF. i’m not sure why; it never felt like a particularly useful or helpful skill.
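for what it’s worth, CNF is the form the resolution rule operates on, which is presumably why the course dwelt on it. a tiny sketch of a single resolution step (clauses as sets of literal strings, my own encoding):

```python
def resolve(c1, c2):
    """All resolvents of two CNF clauses; literals are strings, '~P' negates P."""
    resolvents = set()
    for lit in c1:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in c2:
            # combine the clauses, dropping the complementary pair
            resolvents.add(frozenset((c1 - {lit}) | (c2 - {neg})))
    return resolvents

# (P or Q) resolved with (~P or R) on P gives (Q or R)
print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"})))
```

a resolution prover just applies this step repeatedly until it derives the empty clause (a contradiction) or runs out of new resolvents.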
the topics below were crammed into a few weeks’ time; they seem useful and deserved more extensive coverage.
decision trees and entropy: if you already know Shannon entropy, there’s not much new material. learning how to build a decision tree is useful, though.
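building a tree mostly comes down to computing information gain (entropy before a split minus expected entropy after) and greedily splitting on the best attribute. a quick sketch with made-up data:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting the examples on attribute `attr`."""
    n = len(labels)
    split = {}
    for row, y in zip(rows, labels):
        split.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

# made-up toy data: "outlook" perfectly predicts the label
rows = [{"outlook": "sun"}, {"outlook": "sun"}, {"outlook": "rain"}, {"outlook": "rain"}]
labels = ["yes", "yes", "no", "no"]
print(information_gain(rows, labels, "outlook"))  # 1.0
```

an ID3-style builder then just picks the attribute with the highest gain and recurses on each branch.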
PAC (probably approximately correct learning): honestly not sure what this is, even after taking the class.
neural networks: very cool stuff, but we didn’t learn how to apply them, just how to “solve” them by hand.
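for the curious, “solving” a network by hand is just running the forward pass. a sketch with hand-picked weights (my own toy example, not from the course) that computes XOR through one hidden layer:

```python
import math

def forward(x, w1, b1, w2, b2):
    """Forward pass through one sigmoid hidden layer and a sigmoid output unit."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(ws, x)) + b)
              for ws, b in zip(w1, b1)]
    return sigmoid(sum(wi * hi for wi, hi in zip(w2, hidden)) + b2)

# hand-picked weights: hidden units act as OR and NAND, output as AND -> XOR
w1, b1 = [[20, 20], [-20, -20]], [-10, 30]
w2, b2 = [20, 20], -30
print(round(forward([0, 1], w1, b1, w2, b2)))  # 1
print(round(forward([1, 1], w1, b1, w2, b2)))  # 0
```

applying them in practice means *learning* these weights (e.g. by backpropagation) instead of picking them by hand, which is the part the course skipped.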
1 prelim (supposed to be evening, but ended up being in-class). 1 final.
4 homeworks, due sporadically.
Uses Piazza, at least supposedly. Average response time was more than 3 hours.
If you’re pursuing the AI vector, it’s required. Otherwise, you might reconsider (see below).
This course suffers from an astonishing lack of content. All that stuff up there could probably be taught within 2 months, but was somehow stretched out to a 4-month course. That in itself wouldn’t be terrible, except that Prof. Selman manages to make AI incredibly boring. The textbook, however, is the classic book by Russell and Norvig; reading that carefully is more than enough to get you through the class. The prelim was easy, and the final was an unending monstrosity worth 200 points.
The homeworks were rather long, but there were only four. They were also assigned at random times – the delay between successive homeworks, in order, was 3 weeks, 1 month, and 1 week. The first two homeworks (on agents and local search) were tedious. The last two (on logic and learning) were actually rather interesting, though that may just be due to this writer’s taste.
Annoyingly, the staff purposefully hid the homework and prelim means on CMS, but they hovered around 80%. The mean on the final was about 67%.
The class was better when Hod Lipson taught it.
| Semester | Time | Professor | Median Grade | Course Page |
|----------|------|-----------|--------------|-------------|
| Fall 2012 | MWF 11:15 - 12:05 | Bart Selman | B+ | http://www.cs.cornell.edu/Courses/cs4700/2012fa/ |
| Fall 2013 | MWF 11:15 - 12:05 | Bart Selman | B+ | http://www.cs.cornell.edu/Courses/cs4700/2013fa/ |
The textbook is Artificial Intelligence: A Modern Approach, 3rd Edition, by Russell and Norvig (http://aima.cs.berkeley.edu/). If you’re really into AI, then I would recommend owning it, as it’s quite a good book.