Maybe cars can teach themselves to drive in the more structured states (the MANIAC book)

I recently finished The MANIAC, a concise novelized biography of John von Neumann bizarrely bolted onto a history of the computer programs that dominate chess and go. Somehow the combination works! What I hadn’t realized was how quickly programs that play chess and go can improve when entirely freed from human guidance. Apparently, in a matter of just a few hours, a program can go from knowing almost nothing about chess beyond the basic rules to being able to beat a grandmaster.
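The book contains no code, but the shape of this self-play learning is easy to sketch. Below is a toy illustration of my own (not anything from The MANIAC, and far simpler than AlphaZero): tabular Q-learning that teaches itself tic-tac-toe from nothing but the rules. AlphaZero’s feat is the same loop at vastly greater scale, with a deep network and tree search in place of the lookup table.

```python
# A toy version of "learning from nothing but the rules": tabular Q-learning
# that teaches itself tic-tac-toe through self-play. (My own illustration;
# the real systems swap the table for a deep network and add tree search,
# but the loop -- play yourself, score the outcome, update -- is the same.)
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' for a win, 'draw' for a full board, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if ' ' not in board else None

Q = defaultdict(float)       # (board, move) -> value from the mover's side
ALPHA, EPSILON = 0.3, 0.1    # learning rate, exploration rate

def choose(board, moves, greedy=False):
    """Epsilon-greedy during training; set greedy=True to play for real."""
    if not greedy and random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])

def self_play_game():
    board, player, history = ' ' * 9, 'X', []
    while True:
        moves = [i for i, c in enumerate(board) if c == ' ']
        move = choose(board, moves)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        result = winner(board)
        if result:
            for state, m, p in history:  # +1 win, -1 loss, 0 draw (mover's view)
                r = 0.0 if result == 'draw' else (1.0 if result == p else -1.0)
                Q[(state, m)] += ALPHA * (r - Q[(state, m)])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(200_000):     # a minute or so of CPU; no human data involved
    self_play_game()
```

The point of the toy is the absence of human knowledge: nothing in the program says what a center square or a fork is worth. The value estimates emerge entirely from games the program plays against itself.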

This kind of success has famously eluded those who promised us self-driving cars. We’ve gone from failing via humans encoding rules to failing via AI-style training sets of good and bad driving (coded by people in India? If you’ve ever been to Delhi or Mumbai, maybe that explains the failure). Benjamin Labatut (The MANIAC’s author) reminds us that when the situation is sufficiently structured, computers can learn very fast indeed.

Returning from a helicopter trip from Los Angeles to Great Barrington, Maskachusetts, my copilot commented on the chaos of road markings as we entered Cambridge. “Are there three lanes here or two?” he asked. This is a question that wouldn’t be posed in most parts of Texas or Florida, I’m pretty sure, and certainly not on the main roads of the Netherlands or Germany.

Instead of the computer promising to handle all situations, I wonder if “full self-driving” should be targeted at the states where roads are clearly structured and marked. Instead of the computer telling the human to be ready to take over at any time for any reason, the computer could promise advance notice, via reference to a database crowdsourced from all of the smart cars, that the road ahead isn’t sufficiently structured/marked: “I won’t be able to help starting in 30 seconds because your route goes through an unstructured zone.” The idea that a human will remain vigilant for months or even years, waiting for a self-driving disconnect that occurs randomly, seems impractical.

The MANIAC suggests that if we shift gears (so to speak) and redefine the problem as self-driving within a highly structured environment, a computer could become a better driver than a human in a matter of weeks (it takes longer to look at videos than at a chess or go board, so weeks rather than hours). We might not be able to predict when there will be enough structure, enough of a data set, and enough computing power for this breakthrough to occur, but maybe we can predict that it will be sudden and that the self-driving program will work far better than we had dreamed. The AI-trained chess and go systems didn’t spend years working their way up to being better than the best humans; they got there from scratch in a few hours by playing games against themselves.
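To make the handoff idea concrete, here is a minimal sketch, with an invented data model and invented names (no shipping system works this way, as far as I know), of a car checking its planned route against a crowdsourced structure database and warning the driver before an unstructured zone:

```python
# Hypothetical sketch: warn the driver before the route enters a zone that
# the crowdsourced map marks as too unstructured for self-driving. All
# identifiers, scores, and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class RouteSegment:
    zone_id: str         # map tile / road-segment identifier
    seconds_away: float  # estimated travel time from the current position

# Crowdsourced structure scores, 0.0 (chaotic Cambridge paint) to 1.0
# (freshly striped Texas interstate), continuously refreshed by the fleet.
STRUCTURE_SCORES = {"i-90-exit-11": 0.94, "cambridge-mass-ave-3": 0.31}

MIN_STRUCTURE = 0.8  # below this, hand control back to the human
WARN_SECONDS = 30.0  # the promised advance notice

def next_handoff(route: list[RouteSegment]) -> str | None:
    """Return a warning if an unstructured zone is coming up on the route."""
    for seg in route:
        # An unknown zone is treated as unstructured: the conservative default.
        score = STRUCTURE_SCORES.get(seg.zone_id, 0.0)
        if score < MIN_STRUCTURE and seg.seconds_away <= WARN_SECONDS:
            return (f"I won't be able to help in {seg.seconds_away:.0f} seconds: "
                    f"{seg.zone_id} is an unstructured zone (score {score:.2f}).")
    return None

route = [RouteSegment("i-90-exit-11", 12.0),
         RouteSegment("cambridge-mass-ave-3", 28.0)]
print(next_handoff(route))
```

Treating an unknown zone as unstructured is the conservative default; the scores would presumably be refreshed as the fleet observes fresh paint, construction, and missing signage.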

Regardless of your best estimate as to when we’ll get useful assistance from our AI overlords, I recommend The MANIAC (note that the author gives von Neumann a little too much credit for the stored-program computers that make the debate regarding self-driving possible).

Separately, based on a visit to the Harvard Book Store, here’s what’s on the minds of the world’s smartest people (according to Harvard University research)…

6 thoughts on “Maybe cars can teach themselves to drive in the more structured states (the MANIAC book)”

  1. It would be pretty neat if your car could identify “unstructured zones”; maybe you could even choose to avoid driving through them.

  2. That sudden improvement in AI capabilities is called emergence. We have seen emergence in chess and in large language models. I’m not sure about self-driving cars and robotics.

  3. Four score and seven years ago… scratch that… four decades and seven years ago, chess programs running on ancient hardware played at master-candidate level, impressing youngsters. A chess program beating a grandmaster in 2024 is a natural development, the result of technology improvements, primarily in hardware, over the past four decades. The improvement is far less dramatic than the invention of the wheel or the domestication of horses; ever since those, humans have had no hope of outrunning horsemen and chariots. But chess grandmasters can still train to defeat a chess program.

  4. We cannot compare chess-playing software to car-driving software. This is not an apples-to-apples comparison.

    Software for playing chess has far, far less complexity to deal with. It is based on well-defined rules, well-understood moves, and a clear endgame/outcome/goal. Not to mention, the software runs in a sandbox with almost no external chaos to account for.

    Software for driving a car is extremely complex. While the rules of the road can be considered well defined, the software is tested in a sandbox with controlled chaos. In the open, it cannot account for the many uncontrollable and unanticipated sources of chaos. It is like solving the three-body problem.

    And can we please stop saying “AI” when talking about software? There is nothing “intelligent” in any software.

  5. Surely that’s the significance of the “A” in “Artificial”? Like the meat-free burger-alike that I tasted a couple of days ago, which nonetheless seemed to have many of the expected characteristics of a burger.

    As soon as the underlying logic of your software is not a specific algorithm for producing its result, but a learning framework that infers how to generate results from examples, all bets are off as to where “intelligence” may lie.

    But even humans, who are commonly said to have intelligence, are not safe drivers of moving vehicles. Still, it’s difficult to imagine how a networked system that deprives human drivers of practice could be safer. We’ve seen how poorly managed automation of aircraft controls can itself cause safety failures (hitting the water at 500 kts, etc.), and there the only external networking was voice-based communication with ATC.
