Invest in Estonian-style e-governance to be ready for the next plague?

Quite a few Boston-area businesses have shut down their physical offices. Employees of Amazon, for example, are working from home. Towns and cities, however, can’t close down their respective Town Halls and City Halls because the only way to access quite a few government services is to show up in person. The same enterprise of state/local government that tries, via its public health department, to get everyone to stay home, may ironically end up being one of the only information processing operations that insists that everyone show up and get within contagious distance.

Supposedly Estonia allows citizens to do almost anything that they’d do at a city hall from the disease-free safety of their own homes.

The U.S. track record for government-run IT is admittedly mixed, e.g., with the $1 billion insurance site. But if we adopted the Estonian system unmodified for state and local transactions, we might save time in non-plague periods and save lives in plague periods.

Readers: What do you think? Should people have to brave coronavirus to get (or issue) a building permit?


  • “Estonia, the Digital Republic” (New Yorker, 2017)
  • e-Estonia (Wikipedia)
  • e-governance (from Estonians themselves): “Estonia is probably the only country in the world where 99% of the public services are available online 24/7. E-services are only impossible for marriages, divorces and real-estate transactions – you still have to get out of the house for those.” (don’t get too excited about those family law transactions; they are not as lucrative as in the U.S. From a 2017 post: “In all three Baltic countries I learned that having sex with the richest person in the country would yield only about 200 euros per month in child support” (similar to nearby Sweden))
  • “Estonia: Tough campaign stop for Bernie Sanders”
Full post, including comments

Iowa Caucuses, Boeing 737 MAX, and Apollo 11: software is usually the weakest link

From Techcrunch:

A smartphone app tasked with reporting the results of the Iowa caucus has crashed, delaying the result of the first major count in nominating a Democratic candidate to run for the U.S. presidency.

Previously, I wrote about how a handful of lines of code could have prevented the Boeing 737 MAX’s software from trimming the airliner into a dive-bomber-at-Midway nose-down attitude (see “Boeing 737 MAX crash and the rejection of ridiculous data”, for example).

I recently visited the Kennedy Space Center visitor center. In the building housing one of the leftover Saturn V rockets there is a compelling “Lunar Theater” presentation explaining that software overloaded the computer system in the Lunar Module during Apollo 11, the first landing on the moon. According to the dramatic retelling, the mission was saved only because the crew hand-flew the spaceship to a successful landing. In other words, all of the civil, mechanical, electrical, and aeronautical engineering challenges were met, but the software failed.

[Update: See comments below for how the software in this case may have been blameless!]

The books for sale at the KSC do not encourage young visitors to become computer programmers…

Maybe it is time to switch to Haskell?

Also, what if the Iowa debacle had happened in some other country? Would U.S. media report it as resulting from a fundamental problem with that country’s culture and educational system? Whereas if it happens here in the U.S. it is just an unfortunate freak event?


  • Apollo 11: Mission Out of Control (WIRED): “The inside story of how Neil Armstrong and Buzz Aldrin struggled to touch down on the moon, while their guidance computer kept crashing. Again and again.”
Full post, including comments

Masters Thesis Idea: Conversational Flight Planner

In working on the slides for a flight planning section of our FAA Private Pilot Ground School at MIT (videos and slides available free online), it occurred to me that none of the fancy computer tools were as convenient or efficient as talking to a competent human.

What about a system where the input is, e.g., “I’m thinking about going from Bedford to Washington, D.C. this weekend.” (could be entered via menus; does not have to be natural language)

The Conversational Flight Planner responds after looking at the voluminous official FAA briefing and some of the long-term weather forecast products, such as MOS:

There will be a strong wind from the north so you should consider paying up to fly into Dulles and land Runway 1R rather than deal with the crosswind at KGAI.

Looks like ice is possible on Sunday evening so you’ll need to depart Sunday at noon.

It will be below freezing overnight Saturday night so you need to arrange for a hangar or a preheater plug-in.
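Under the hood, advisories like these could start as a simple rule engine before any natural-language layer is added. A minimal sketch in Python, with hypothetical forecast fields and thresholds (not real MOS output):

```python
# Hypothetical forecast inputs and thresholds -- the rule engine is the
# point here, not the meteorology.
def advise(forecast):
    """Map a simplified forecast dict to a list of plain-English advisories."""
    tips = []
    if forecast["crosswind_kt"] > 15:
        tips.append("Strong crosswind at the planned runway; consider paying "
                    "up for an airport with a better-aligned runway.")
    if forecast["icing_risk"]:
        tips.append("Icing possible later in the window; depart earlier.")
    if forecast["overnight_low_f"] < 32:
        tips.append("Below freezing overnight; arrange a hangar or preheater.")
    return tips

# One hypothetical weekend forecast triggers all three rules:
tips = advise({"crosswind_kt": 18, "icing_risk": True, "overnight_low_f": 25})
```

The interesting thesis work would be everything around this core: ingesting the FAA briefing and MOS products, and ranking which advisories a pilot actually needs to hear.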

Interesting Master’s Thesis project for a computer science or aero/astro major?

Full post, including comments

When does the great age of machine intelligence reach our desktop computers?

Turning Google Contacts into address labels for Christmas/New Year’s cards is a task that I expected to be simple. The plan was

  • export to “Google CSV” format
  • upload to the Avery site to generate a PDF for printing

This fails because the Google export process produces a CSV file with nearly 100 columns, which is too many for the Avery system to handle.

No problem, open in Microsoft Excel and cut down to about 5 columns, right?

What happens when you combine programmers from two of the world’s smartest companies? Excel is not smart enough to recognize a column of 5- or 9-digit values as ZIP codes, even if they appear right after a column of two-character state abbreviations. The leading zeros are trimmed off, turning Massachusetts ZIP codes into four-digit values, e.g., “02138” to “2138” (the ZIP code of the great minds of Harvard and Harvard Square, who will soon be tapped by President Warren to optimize our government).

What if we keep this as a Google-only process? The people who built Contacts apparently don’t talk to the people who built Sheets. There is no way to export directly from Contacts to a Google spreadsheet.

Save to the local disk and then upload, right? The behavior is exactly the same as with Excel: leading zeroes of all of the five-digit ZIP codes are trimmed off. This is the company we’re going to trust with medical diagnoses? (“The doctor will Google you now” turning into “The Google will doctor you now.”)

As with most other challenges, if you’re a skilled user of Excel the solution is straightforward: create a blank workbook and then use the Data tab to import “From Text/CSV”. Even on the fully automatic setting, it correctly infers that the ZIP column is text, not a number. But if the fully automated import works, why doesn’t simply opening the CSV file in Excel work the same way?
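The text-not-number treatment is easy to get programmatically. A minimal sketch using Python’s standard `csv` module, with a made-up three-column file standing in for the real ~100-column Google export:

```python
import csv
import io

# Hypothetical miniature export: two contacts with Massachusetts ZIP codes.
# (Column names are illustrative; a real Google CSV has nearly 100 columns.)
raw = "Name,State,Zip\nAlice,MA,02138\nBob,MA,02139\n"

# Treating the ZIP as a number, the way Excel and Sheets do on open,
# silently drops the leading zero:
naive = [int(z) for z in ("02138", "02139")]  # becomes [2138, 2139]

# Treating every field as text, which csv.DictReader does by default,
# keeps all five digits intact:
rows = list(csv.DictReader(io.StringIO(raw)))
zips = [row["Zip"] for row in rows]
```

The lesson is the one Excel’s “From Text/CSV” importer already knows: a field that merely looks numeric is not necessarily a number.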

(The whole process ended up taking way longer than if I’d simply addressed 180 envelopes by hand, of course.)

The particular challenge of wrestling with Google Contacts or generating addressed envelopes is not that interesting, but I think it is a good starting point for a discussion of how machine learning and AI can ever be integrated back into the computer systems we use day to day. Google Translate does some impressive stuff, but why isn’t it easy to enhance Google Sheets?

Separately, the Google Contacts software has a long way to go to reach the same level of quality as what Sharp was shipping with the Wizard organizer in 1989. A contact with a single street address, once exported, will appear in a CSV-file row without any street address. Why is it difficult for Google to do what Apple, Microsoft, Motorola, Nokia, and Sharp were doing successfully in the 1990s?

Full post, including comments

Feel better about the time you’ve invested in writing documentation

During a recent rental car excursion I became curious about the USB-C port in the front of the Nissan Maxima. Could one run a laptop from the car, for example? I decided to open the glovebox and read the specs from the owner’s manual. After one year and more than 20,000 miles of rental by perhaps 100 different drivers…

(still in its shrink wrap)

Full post, including comments

Is autocorrect location-aware?

Annals of our future self-driving overlords… I was at the Burlington Mall with the kids. Apple iMessage exchange:

  • where are you?
  • “in Arhaus” autocorrected to “in Arafat’s”

The Burlington Mall has pretty good WiFi and LTE so I think that the phone should have been able to figure out where it was. Does autocorrect not even try to adjust its behavior based on location?

Full post, including comments

Why wasn’t Google Glass popular for translation?

Aside from missing family and friends and finding that wearing an air pollution mask tended to fog up my glasses, one reason that I was happy to return home from China was that it was no fun being illiterate. WeChat can be used to translate a sign or menu into English, but it is somewhat cumbersome. Same deal with Google Translate, which works to turn English text into characters to show shop and restaurant personnel.

It occurred to me to wonder why systems such as Google Glass hadn’t caught on simply for the purpose of finding text in every scene and translating into the traveler’s home language. Was there simply not enough battery power to have the thing running continuously? It would have added a lot to the trip if I could have just walked around streets and museums and, without having to take any explicit action, seen English language versions of all of the surrounding writing.

Full post, including comments

The female roots of all computer science, vol 17: Barbara Liskov

“The Architect of Modern Algorithms” (Quanta) is a recently popular link among some computer nerds on Facebook (all of the sharers, when I last checked, identified as older white males):

Barbara Liskov pioneered the modern approach to writing code.

But by the late 1960s, advances in computing power had outpaced the abilities of programmers. Many computer scientists created programs without thought for design. They wrote long, incoherent algorithms riddled with “goto” statements — instructions for the machine to leap to a new part of the program if a certain condition is satisfied. Early coders relied on these statements to fix unforeseen consequences of their code, but they made programs hard to read, unpredictable and even dangerous.

When she was still a young professor at the Massachusetts Institute of Technology, she led the team that created the first programming language that did not rely on goto statements. The language, CLU (short for “cluster”), relied on an approach she invented — data abstraction — that organized code into modules. Every important programming language used today, including Java, C++ and C#, is a descendant of CLU.

Note that in the discredited male-authored history of computer nerdism, the modern programming language dates back at least to ALGOL 60, developed when Professor Liskov was 21 years old. The public war on goto was waged not by Liskov, but by the developers of ALGOL and Edsger W. Dijkstra, a Dutch curmudgeon, who wrote “Go To Statement Considered Harmful” in 1968, pointing out that “the remark about the undesirability of the go to statement is far from new” and describing some of the history since at least 1959 (criticism by Heinz Zemanek). Note that Dijkstra is also known for saying “The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense.” Liskov, for her part, was known at MIT primarily for developing and teaching the standard software engineering class, 6.170, in which the CLU language was used by students. She was a usually modest and always hard-working person who believed that properly engineered software could function perfectly: “If you find a bug in your code, you should be as embarrassed as if you found a cockroach in your kitchen,” she memorably noted (we had a lot of cockroaches in our East Campus dorm and they were regularly visible during visits to restaurants in Central Square at the time!).

[The article also notes that Liskov is concerned about the impact of the Internet:

I’m worried about the divorced couple in which the husband publishes slander about the wife, including information about where she lives. There is terrible stuff going on.

Yet if one of these two sued the other, the most common precursor to their divorced status, the lawsuit, and anything said by a party during it, as well as the mailing address where the plaintiff wants the checks sent, was already public information, available to anyone who wanted to go down to the courthouse, decades before women developed microprocessors and TCP/IP. (see this study on Massachusetts court records, though records of litigation following out-of-wedlock sex are sealed) Reporters were covering divorce litigation in newspaper stories prior to the computer age, e.g., a November 11, 1939 piece in the NYT describing an allegation of “cruelty”, and one from December 2, 1934, “a charge of extreme cruelty won a divorce here today for Mrs. Edith Crocker Sanger from Prentice Sanger of New York City.” Divorce was apparently a good business even in the Depression. From September 24, 1931: “More than $1,000,000 was handed to Mrs. Eunice Essig Brach of Winnetka today with a divorce from her husband, Frank V. Brach, president of a candy company.” Certainly someone launching a divorce lawsuit and obtaining a profitable judgment in 2019 gets a lot less publicity than he or she would have prior to the Internet.]

Readers: What will the next edition in the “female roots of all computer science” saga be? What other fundamental technologies can be plausibly attributed to a person who identified as a “woman”? My vote: find a woman to replace William Shockley as developer of the semiconductor transistor and Silicon Valley. How can it be done? Here’s a National Public Radio story that credits Hedy Lamarr with having invented frequency hopping. Wikipedia contradicts this story to some extent and the actual patent to Ms. Lamarr and George Antheil reveals that they narrowly claimed a specific piano roll-style mechanism for controlling frequency hopping, not the broad invention of frequency hopping. So we need to find an early patent on a specific application of semiconductor transistors in which one of the inventors has a female-sounding name. Then we can discover the female roots of the modern transistor and rely on the fact that reporters won’t read the patent claims to see that they narrowly cover an application of transistors, not the transistor itself.

Also, will this article on Barbara Liskov and the promotion of the article by “allies” have the desired effect of getting more people who identify as “women” into computer nerdism? The article reveals that Barbara Liskov, despite having invented essentially all of practical programming technology, was not popularly recognized until she reached the age of 80. Moreover, she describes having to struggle as a result of her identification as a “woman” (see also a 2008 interview, in which she notes that “there were a large percentage of women” at her first programming job at MITRE in the early 1960s, at which she learned FORTRAN (several years after inventing ALGOL?) and then got a PhD working with John McCarthy, credited for now at least with the development of Lisp, and then met Dijkstra in 1969 (giving him the idea to write his 1968 screed against goto?)). Compare to Britney Spears, a top-of-the-charts success at age 17 who has never described being a cisgender female as a career handicap in her industry. Why wouldn’t a cisgender female aware of both Liskov and Spears conclude that computer science should be a last resort?


Full post, including comments

Give thanks that we don’t live in the early PC age

Happy Thanksgiving! (Or National Day of Mourning, depending on your perspective/ethnicity.)

Here’s a friend’s nostalgia shelf:

I hope that we can all agree to give thanks that we’ve moved on from this phase of personal computing!

Separately, with no Thanksgiving to slow them down, China can concentrate fully on Christmas decoration weeks earlier than Americans. “There’s Snow Place Like Shanghai Disney Resort” shirts in a city where November high temps had fallen to around 70 degrees…


Full post, including comments

Why isn’t Apple Wallet smart enough to delete old boarding passes?

Software is so smart that it will soon be driving us around, recognizing and dodging pedestrians. WIRED and similar cheerleaders for technology assure us that our software pals will also diagnose us based on medical imaging and other tests.

Apple Wallet is an app from one of the world’s leading software companies. Every time I open it there is a boarding pass from a flight that occurred a month or two ago. The software is neither smart enough to delete the pass a month after the flight (and after the location subsystem shows that the flight was actually taken) nor pop up a “would you like to delete these old boarding passes?” offer.
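The pruning rule the post asks for is nearly a one-liner. A hedged sketch in Python, with stand-in flight dates rather than Apple’s actual PassKit data:

```python
from datetime import datetime, timedelta

# Stand-in pass data: just the flight dates; not Apple's PassKit API.
def is_stale(flight_date, now, grace_days=30):
    """A boarding pass is stale once the flight is a month in the past."""
    return now - flight_date > timedelta(days=grace_days)

flight_dates = [datetime(2019, 10, 1), datetime(2019, 12, 20)]
now = datetime(2019, 12, 25)

# Keep only passes whose flights are recent:
kept = [d for d in flight_dates if not is_stale(d, now)]
```

Even the softer option, offering to delete rather than deleting silently, needs no more logic than this filter plus a confirmation dialog.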

If this is the best that Apple can do, how is it that we can have confidence in the hard problems being solved?

Full post, including comments