This post turned out to be mostly random ramblings about what I learned about the current state of AI this past year; sorry if it's not really going anywhere.
I believe we're really, really, really far from general-purpose AI. I read dozens of machine learning papers on Arxiv and, while there are great ideas and huge improvements, much of what we have is just newer and newer tricks to teach networks different kinds of pattern recognition, regression, and generation problems faster and faster, from less and less data, with deeper and deeper networks, for smaller and smaller improvements. If we're looking at it from the point of view of developing a Super AI, these are some of the very early steps of preparation, gathering the hay for the bricks for the castle.
The most lifelike application is reinforcement learning, where the rewards are more separated from the actions (just like in reality) than in more direct ways of learning, where the feedback is immediate. There are really cool improvements here, especially from last year, and when you see a computer playing, e.g., an FPS like a pro, that's kinda creepy. Still, there is no real model of "thinking" in the conscious human sense: having aspirations, aesthetics, goals.
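To make the "separated rewards" point concrete, here's a tiny sketch (my own toy example, not from any paper, and the function name and gamma value are just placeholders): a supervised example carries its own label, while in RL a single reward at the end of an episode has to be spread back over all the earlier actions, e.g. with plain discounting.

```python
# Minimal sketch of the "delayed reward" idea: in supervised learning every
# input comes with an immediate label, while in RL the single reward at the
# end has to be spread back over all the actions that led to it.

def discounted_returns(rewards, gamma=0.99):
    """Work backwards so each step gets credit for what came later."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# An episode of 5 actions where only the last one is rewarded:
episode_rewards = [0, 0, 0, 0, 1]
print(discounted_returns(episode_rewards))
# -> roughly [0.96, 0.97, 0.98, 0.99, 1.0]: earlier actions still get a
#    (discounted) share of the credit, unlike a supervised label that only
#    ever describes the current example.
```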
Another huge area that's missing (though work is being done) is longer-term planning. Right now, things that require hierarchical plans have to use some way to store state outside the network. It's not necessarily a problem, but we humans can somehow manage to do complex stuff without jotting down every step in a notebook, so this is clearly an area for improvement.
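To show what I mean by "outside the network", here's a toy sketch (entirely made up; `pick_action` is a stand-in for a policy network, not any real model): the hierarchical plan lives in an ordinary Python stack, and only primitive actions ever reach the "network".

```python
# Toy sketch: the plan is explicit external state (a stack), and the
# hypothetical policy network only ever sees the current primitive subgoal.

def pick_action(subgoal, observation):
    # stand-in for a policy network call; here it just echoes the subgoal
    return f"do-{subgoal}"

plan_stack = ["make-coffee"]  # high-level goal
subtasks = {"make-coffee": ["boil-water", "grind-beans", "pour"]}

while plan_stack:
    goal = plan_stack.pop()
    if goal in subtasks:                       # expand abstract goals...
        plan_stack.extend(reversed(subtasks[goal]))
    else:                                      # ...act on primitive ones
        print(pick_action(goal, observation=None))
# prints: do-boil-water, do-grind-beans, do-pour
```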
By the way, we don't even have a real mathematical framework for artificial neural networks. I'm not talking about the basic matrix calculus, of course, but many of the algorithms and practices that work do so for no apparent reason, or even against our intuitions. I read/heard Geoff Hinton say things like (and I'm paraphrasing) "I believe max pooling is a huge mistake; it's a shame it works so well" and "everybody thinks we had a good reason to use the sigmoid for the nonlinearity, but really it just looked like a good idea." Similarly, Ian Goodfellow wrote (I think during his Reddit AMA) that he's continuously trying countless ideas in parallel (i.e. even while answering the questions), and then some of them happen to work better and those can be investigated.
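Just to make those two examples concrete, here's a tiny numpy sketch of what max pooling and the sigmoid actually do (nothing from Hinton's talks, just the textbook definitions): both are simple heuristics that work well in practice despite the thin theoretical justification.

```python
import numpy as np

def sigmoid(x):
    # squash activations into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def max_pool_1d(x, window=2):
    # keep only the maximum of each non-overlapping window, discard the rest
    trimmed = x[: len(x) - len(x) % window]
    return trimmed.reshape(-1, window).max(axis=1)

acts = np.array([0.1, 3.0, -1.2, 0.5, 2.0, 0.7])
print(max_pool_1d(acts))            # [3.  0.5 2. ]
print(np.round(sigmoid(acts), 2))   # [0.52 0.95 0.23 0.62 0.88 0.67]
```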