Maybe cars can teach themselves to drive in the more structured states (the MANIAC book)
I recently finished The MANIAC, a concise novelized biography of John Von Neumann bizarrely bolted onto a history of computer programs that dominate chess and go. Somehow the combination works! What I hadn’t realized was how quickly programs that play chess and go can evolve when entirely freed from human guidance. Apparently, in a matter of just a few hours, a program can go from knowing almost nothing about chess other than the basic rules to being able to beat a grandmaster.
This kind of success has famously eluded those who promised us self-driving cars. We’ve gone from failing via humans encoding rules to failing via AI-style training sets of good driving and bad driving (coded by people in India? If you’ve ever been to Delhi or Mumbai, maybe that explains the failure). Benjamin Labatut (the MANIAC author) reminds us that when the situation is sufficiently structured, computers can learn very fast indeed.
Returning from a helicopter trip from Los Angeles to Great Barrington, Maskachusetts, my copilot commented on the chaos of road markings as we entered Cambridge. “Are there three lanes here or two?” he asked. That question wouldn’t be posed in most parts of Texas or Florida, I’m pretty sure, and certainly not on the main roads of the Netherlands or Germany.

Instead of the computer promising to handle all situations, I wonder if “full self-driving” should be targeted to the states where roads are clearly structured and marked. Instead of the computer telling the human to be ready to take over at any time for any reason, the computer could promise to notify in advance (via reference to a database, updated via crowdsourcing from all of the smart cars) that the road ahead wasn’t sufficiently structured/marked and tell the human “I won’t be able to help starting in 30 seconds because your route goes through an unstructured zone.” The idea that a human will stay vigilant for months or even years, waiting for a self-driving disconnect that occurs randomly, seems impractical.

The MANIAC suggests that if we shift gears (so to speak) and redefine the problem as self-driving within a highly structured environment, a computer could become a better driver than a human in a matter of weeks (it takes longer to look at videos than at a chess or go board, so weeks rather than hours). We might not be able to predict when there will be enough structure, enough of a data set, and enough computer power for this breakthrough to occur, but maybe we can predict that it will be sudden and that the self-driving program will work far better than we had dreamed. The AI-trained chess and go systems didn’t spend years working their way up to being better than the best humans; they got there from scratch in just a few hours by playing games against themselves.
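For what it’s worth, here is a rough Python sketch of what that advance-notice logic could look like. Everything in it is an assumption for illustration, not any shipping autopilot API: the segment identifiers, the 30-second threshold, and the crowdsourced database of which stretches of road count as “structured.”

```python
# Hypothetical sketch: check the upcoming route against a crowdsourced database
# of "structured" road segments and warn the driver well before self-driving
# support ends, instead of demanding constant vigilance. Segment IDs, the
# 30-second threshold, and the database contents are invented for illustration.
from dataclasses import dataclass

WARNING_SECONDS = 30  # how much notice the driver gets before a disengagement

@dataclass
class RouteSegment:
    segment_id: str          # identifier in the (hypothetical) shared map database
    seconds_from_now: float  # estimated time until the car enters this segment

# Crowdsourced verdicts uploaded by other cars: True = clearly marked/structured.
structured_db = {
    "TX-I35-mile-210": True,
    "MA-cambridge-broadway": False,  # "are there three lanes here or two?"
}

def next_disengagement_warning(route: list[RouteSegment]) -> str | None:
    """Return a warning message if an unstructured segment is coming up soon."""
    for seg in route:
        # Unknown segments are treated as unstructured until enough cars report back.
        is_structured = structured_db.get(seg.segment_id, False)
        if not is_structured and seg.seconds_from_now <= WARNING_SECONDS:
            return (f"I won't be able to help starting in "
                    f"{int(seg.seconds_from_now)} seconds: your route goes "
                    f"through an unstructured zone ({seg.segment_id}).")
    return None

if __name__ == "__main__":
    route = [
        RouteSegment("TX-I35-mile-210", 5),
        RouteSegment("MA-cambridge-broadway", 25),
    ]
    print(next_disengagement_warning(route))
```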
Regardless of your best estimate as to when we’ll get useful assistance from our AI overlords, I recommend The MANIAC (note that the author gives Von Neumann a little too much credit for the stored-program computers that make the debate regarding self-driving possible).
Separately, based on a visit to the Harvard Book Store, here’s what’s on the minds of the world’s smartest people (according to Harvard University research)…