How were you CrowdStruck yesterday?

I felt sorry for myself on Thursday because Spirit was four hours late FLL to ORD (impressive considering that they had no mechanical or weather problems). On Friday, however, CrowdStrike managed to disable the entire U.S. airline industry. Can we agree that there should be a new word in English: CrowdStruck, meaning a systemic meltdown caused by a diversity, equity, and inclusion-oriented enterprise? From CrowdStrike’s web site:

It seems fair to say that they achieved their goal of “challenging the status quo” (the status quo being servers that had been up and running for years).

Considering that the U.S. Secret Service was apparently more focused on DEI than on keeping Donald Trump alive, the word could be used in the following sentence: “Donald Trump might need a new ear after being CrowdStruck in Pennsylvania.” (Loosely related… I received the photo below from a deeply closeted Trump-supporting academic.)

Readers: Please share your stories about being CrowdStruck in the comments. How did you experience the meltdown of IT services? (Except for Elon Musk’s X!)

My own CrowdStruck experience was limited to not being able to check in at the Doubletree here in Milwaukee. They couldn’t make keys for any new guests all day and had to send employees up to open doors for any guest who wanted to get into a room. They finally got their systems back by around 9 pm and will spend the weekend catching up.

Speaking of Milwaukee, here are some of the billboards that the righteous paid for on a highway leading into town:

The Third Ward and some other parts of town that we’ve seen so far are quite pleasant. I can understand why some Chicagoans are considering fleeing here (though I can’t understand why or how they’d stay through the winter!).

Full post, including comments

Which mapping app can avoid narrow roads in Europe? And which can provide walking directions that avoid dangerous neighborhoods in the U.S.?

We used Google Maps in Portugal. It made quite a few absurdly bad routing decisions. To save a theoretical minute or two it would send our Mercedes E class sedan down roads narrower than a North Carolina dentist’s driveway. We were constantly terrified that a car would appear coming the opposite direction and that we’d be forced to stop suddenly and then back up to a rare section wide enough for two cars to pass. When shown these routes, the locals said that they would never drive along those roads for transportation despite most of them having narrower cars and better driving skills than a Floridian lulled into complacency by textbook highway engineering. Below is a segment from a suggested Google Maps route for our rental car (#2 after the first E class melted down). I don’t think that our Sixt rental agreement says anything about driving up or down stairs, but the road was definitely narrower than the car:

Where was this road, you might ask? In one of my favorite towns in Portugal: Covide!

Is there a mapping app that is smarter about getting around Europe without scraping?

Related question for the U.S.: is there an app that will calculate walking directions to avoid dangerous neighborhoods? Or calculate directions and score the walk with a danger level? This tweet from a former Googler suggests that Google will never do it:

(His/her/zir/their reasoning is that sending pedestrians via a scenic route will lead to “spatial inequality” because the nicer areas tend to be richer.)

WalkSafe seems to have the crime rate information, but I’m not sure that it will provide turn-by-turn directions to a pedestrian.

Here’s a street in front of an AirBnB that we rented in Amarante, Portugal (very pleasant town!):

(The host said to navigate to a nearby parking lot and walk the rest of the way.)

I don’t have a good illustration of a crime-ridden street in Portugal because the country is one of the safest in the world and every tourist attraction seems to be in a safe area.

Full post, including comments

Why can’t we tell Gmail to delete messages from specified senders after 30 days?

After nearly 20 years, my Gmail inbox is cluttered with over 90 GB of crud, including more than 48,000 unread messages.

The majority of my non-spam email messages are irrelevant after a certain number of days, e.g., calendar notifications from Google Calendar, USPS “what’s going to be in your mailbox today” alerts, and “daily news summary” messages from a newspaper. Yet there doesn’t seem to be any convenient way to tell Gmail to delete messages from a particular sender after a specified interval, e.g., 1 to 30 days. Isn’t this an obvious feature to have added? I recognize that Google is in the business of selling storage plans, but, on the other hand, keeping thousands of TB of useless timed-out alerts in persistent storage doesn’t seem like the best business to be in. If Google wants people to burn through their storage tiers and pay more, why not offer this kind of feature and simply lower the GB thresholds?
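
The missing feature can be approximated today with a periodic script. Here’s a minimal sketch: a per-sender retention table plus a function that builds Gmail search queries using the real `older_than:` search operator. The sender addresses and the rule table are illustrative, not a real Gmail setting; the API calls in the trailing comment are the standard Gmail API methods a cron job would use.

```python
# Per-sender retention rules, enforced by a periodic script (a sketch of
# the feature Gmail lacks). Addresses below are illustrative examples.
RULES = {
    "calendar-notification@google.com": 1,  # calendar alerts: stale in a day
    "informeddelivery@usps.example": 7,     # hypothetical USPS sender address
    "daily-summary@paper.example": 30,      # hypothetical newspaper sender
}

def retention_query(sender: str, max_age_days: int) -> str:
    """Build a Gmail search query matching expired mail from one sender."""
    return f"from:{sender} older_than:{max_age_days}d"

def all_queries(rules):
    """One query per rule; a nightly cron job would run each of these."""
    return [retention_query(s, d) for s, d in sorted(rules.items())]

# With the Gmail API (google-api-python-client), the cron job would do roughly:
#   resp = service.users().messages().list(userId="me", q=query).execute()
#   for m in resp.get("messages", []):
#       service.users().messages().trash(userId="me", id=m["id"]).execute()
```

A query such as `from:calendar-notification@google.com older_than:1d` can also be pasted into the Gmail search box to preview exactly what the script would trash.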

If Google’s Gemini is so smart, in fact, why isn’t it smart enough to offer an auto-delete after a certain number of days for emails such as the one below?

How many people want to save checked bag tracking information for years?

Since the human programmers at Gmail didn’t think to add this feature, I guess this post then boils down to “Why isn’t AI smart enough to clear completely useless email messages out of our inboxes?”

A few other ideas that would help us clear out our inboxes…

  • a one-button “delete everything from this sender”
  • a system smart enough to delete every post-purchase follow-up survey (buying online is no longer efficient because there will be 5+ emails after every purchase asking the consumer to rate his/her/zir/their purchase); see below for a survey that United sent me after my first post-coronapanic commercial airline trip (I never opened it)
Full post, including comments

Oversupply of mediocre computer nerds in the midst of the AI Bubble

All previous tools that were hyped as making programmers more productive had either no effect or a positive effect on the demand for computer programmers. I would have thought that we would be in a golden age for young computer nerds as every company on the planet seeks to “add AI”, e.g., “Joe’s Drywall and Paint, now with AI”.

The Wall Street Journal, however, says that there is a glut of graduates… “Computer-Science Majors Graduate Into a World of Fewer Opportunities”:

Note the hateful depiction of a non-Black non-female not-obviously-2SLGBTQQIA+ computer wizard (NYT would never make this mistake). Also note “Those from top schools can still get jobs”. In other words, it is the mediocre computer nerds who can’t get hired. Either there has been a huge boom in the number of people who are passionate about computer nerdism or a lot of kids have gone into CS, despite a lack of interest in staring at a screen, because someone told them that it was a sure path to a solid career (this was my experience teaching Information Technology; 90 percent of the students were not even vaguely curious about the subject, e.g., not curious enough to search outside of the assigned materials):

My guess is that, due to lack of interest/passion, 70 percent of CS majors shouldn’t have majored in CS and won’t have lasting careers in CS. They are at best mediocre now and will just get worse as they forget what they were supposed to have learned.

Almost all of the news in the article is bad:

To be sure, comp-sci majors from top-tier schools can still get jobs. Pay, projected to be at about $75,000, is at the high end of majors reviewed by the National Association of Colleges and Employers, or NACE. They are just not all going to Facebook or Google.

“Job seekers need to reset their expectations,” said Tim Herbert, chief research officer at CompTIA, a trade group that follows the tech sector. “New grads may need to adjust where they’re willing to work, in some cases what salary, perks or signing bonus they’ll receive, and the type of firm they’ll work for.”

And while big tech companies are hiring for AI-related jobs, Herbert said, many of those positions require more experience than a new grad would have.

Salaries for this year’s graduates in computer science are expected to be 2.7% higher than last year’s, the smallest increase of eight fields reviewed by NACE.

In the past 18 months, job growth has remained flat for software publishers, a group of employers that includes software developers, according to the Labor Department. On the student jobs platform Handshake, the number of full-time jobs recently posted for tech companies is down 30% from the year-ago period.

$75,000/year?!?! That’s $55,000 per year after Joe Biden’s and Gavin Newsom’s shares (online calculator). About $12,000 of that after-tax $55,000 will be consumed paying for the car that is required to get to the job (AAA and CNBC). Salaries are 2.7 percent higher than a year ago? That’s a pay cut if you adjust for the inflation rate in any part of the country where (a) people want to live, and (b) there are jobs.
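
The arithmetic above, spelled out. The $75,000 salary and 2.7 percent raise are from the article; the $55,000 take-home and $12,000 car cost are the figures from the linked calculator and AAA/CNBC; the 3.5 percent inflation rate is my assumption for a high-cost metro, not a cited figure.

```python
# Take-home pay for a new grad, using the figures cited above.
gross = 75_000
take_home = 55_000        # after federal + California tax, per the calculator
car_cost = 12_000         # AAA/CNBC annual cost of owning the commute car
disposable = take_home - car_cost
print(disposable)         # 43000 left for rent, food, and student loans

# Is a 2.7% raise a real raise? The inflation rate is my assumption.
nominal_raise = 0.027
assumed_inflation = 0.035
real_change = (1 + nominal_raise) / (1 + assumed_inflation) - 1
print(round(real_change * 100, 2))  # -0.77: a real pay cut
```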

I’m wondering if the big problem is the sentence in bold. Four years of paying tuition should prepare a smart young person for almost any job, including “AI-related” ones (if not at OpenAI then at some company that is planning to use an LLM via an API to OpenAI or similar). In the late 1990s, colleges weren’t teaching “How to build an Amazon or eBay” (so we developed a class that did, and a textbook), even though it was obvious that employers wanted graduates who could build database-backed web sites. Could it be that the CS curriculum is totally stale once again? Very few of the professors have what it would take to get hired at OpenAI and, therefore, they can’t teach the students what it would take to get hired there.

I think this confirms my 2018 theory that data science is what young people should study and that data science restores the fun of computer programming that we enjoyed in the pre-bloat days.

Full post, including comments

How is Intel able to sell CPUs if they’ve already told people that the current socket is obsolete?

Here’s a question at the intersection of marketing and electronics: who is buying Intel CPUs right now after Intel has told the world that they will render the current socket, and therefore all current motherboards, obsolete before the end of 2024?

“Intel’s next-gen desktop CPUs have reportedly leaked” (Tom’s Hardware):

Arrow Lake will reside on new Intel motherboards with LGA1851 sockets and 800-series chipsets. Although the upcoming socket has 9% more pins than the existing LGA1700 socket, the dimensions didn’t change, so you might be able to recycle your existing CPU cooler.

Intel hasn’t provided details on when Arrow Lake will hit the market. But we suspect it’ll be sometime in the fourth quarter of the year since AMD’s upcoming Zen 5 Ryzen processors are on track for launch before the year is over.
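
The pin counts are right in the socket names (LGA1700 has 1700 contacts; LGA1851 has 1851), so the quoted “9% more pins” is easy to verify:

```python
# LGA socket names encode the contact count.
lga1700_pins = 1700
lga1851_pins = 1851
extra = (lga1851_pins - lga1700_pins) / lga1700_pins
print(round(extra * 100, 1))  # 8.9, which the article rounds to 9%
```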

Especially given that AMD is not rendering its socket obsolete for another few years, I am having trouble figuring out why demand for Intel desktop CPUs, at least at the high end, doesn’t fall off a cliff.

The news about the socket is actually almost a year old at this point. A July 2023 article:

I guess it is tough to keep a secret when there are so many independent motherboard manufacturers, but shouldn’t we expect a demand collapse, massive price cuts for both CPUs and motherboards, etc. as the Arrow Lake release gets closer?

Is the explanation that anyone who cares about CPU/computer performance buys AMD? I think that Intel claims that their new chips have an onboard AI-optimized GPU.

Full post, including comments

Why is the Btrfs file system as implemented by Synology so fragile?

We had a few seconds of power loss the other day. Everything in the house, including a Windows machine using NTFS, came back to life without any issues. A Synology DS720+, however, became a useless brick, claiming to have suffered unrecoverable file system damage while the underlying two hard drives and two SSDs are in perfect condition. It’s two mirrored drives using the Btrfs file system (the Synology default, though ext4 is also available as an option). Btrfs is a copy-on-write file system that is supposed to be crash-consistent, which should make this kind of corruption impossible. Yet searching the Internet reveals that Synology suicides are commonplace. Here’s one example that pins the blame on the SSDs being enabled as a read/write cache. But the SSDs are non-volatile, so why isn’t the Synology software smart enough to handle a power outage even when read/write caching (apparently the default) is enabled? The Synology web page on the subject says that you need two SSDs (which I have) for “fault tolerance” and doesn’t mention that the entire NAS can become a brick after losing power for a few seconds.

Given that Synology has only one job, i.e., the secure storage of data, this strikes me as a spectacular failure of corporate mission.

Readers: Have you seen this kind of failure before? NTFS was introduced by Microsoft in 1993 and I’ve never seen it completely destroyed by a power interruption. Oracle, IBM’s DB2, and Microsoft SQL Server use similar journaling techniques and they never become useless after a power interruption.

Separately, what are some alternatives to Synology for a home NAS? I find their admin interface to be much more complicated than it needs to be and their defaults are also unsuitable for home use, e.g., it won’t automatically restart by default after a power failure.

Finally, suppose that I decide to rebuild this Synology NAS, which will almost certainly involve wiping all of the data and starting over (I mostly use it as a backup for my Windows machine, so losing 100 percent of the data that I paid Synology to keep isn’t the end of the world). If I take the InterWeb’s advice to get a UPS that smooths out the Synology’s power supply and uses a USB connection to tell it to shut down cleanly, what is the smallest, quietest, and cheapest UPS that will do the job?

Full post, including comments

Microsoft keyboards back from the dead

For those of us whose hands and brains are accustomed to the Microsoft Sculpt ergonomic keyboard, which was discontinued in 2023, it looks as though there is hope. Microsoft has apparently made a deal with Incase, an established computer accessory company, to revive the Microsoft keyboard line (presumably coming out of the same factory in China).

If only Google would do this with Picasa! Open source it so that someone else can take care of the former customers.

The Microsoft product page is still live:

An Amazon seller has a used one for $369:

I paid $111 for this in March 2021. Adjusted for Bidenflation at the official rate, that’s supposedly about $130 today.

Full post, including comments

Who has an Apple Vision Pro VR headset coming?

Today is the day, supposedly, for Apple to begin delivering its $3,500–4,150 version of the $500 Meta Quest 3. Have any of you ordered one? Tried one? Figured out what could be done with a device that becomes a 1 lb. stone around your head once its two hours of battery are exhausted?

As with everything else from Silicon Valley, it is important to be young and Black to be an effective user:

But what if you’re not young and Black? What would you say that you do here?

Based on a quick search, it doesn’t seem as though the obvious “take a walk through every famous art museum” app is available, either for Meta’s or Apple’s headset. On the more mature Meta product, it seems as though shooter games are popular. But who is going to invest in developing great games when there are only 20 million Quest headsets out there and many have been collecting dust on shelves? A non-VR game can be sold to almost anyone on the planet (Xbox, PlayStation, PC). The problem is yet worse for the Apple VR world. Apple is planning/hoping to sell just 400,000 headsets in the first year. A $20 game that gets 10 percent market share will yield just $800,000 in revenue for a developer.
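
The developer-revenue arithmetic above, spelled out (the 10 percent attach rate and $20 price are the hypothetical from the text; Apple’s store commission is ignored):

```python
# Year-one revenue for a hypothetical $20 Vision Pro game.
year1_headsets = 400_000   # Apple's reported first-year sales plan
attach_rate = 0.10         # one in ten owners buys the game
price = 20
revenue = year1_headsets * attach_rate * price
print(int(revenue))        # 800000 gross, before Apple's commission
```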

I was wrong about Tesla’s prospects for success and wrong about Bitcoin, so I’m probably wrong about the Vision Pro. But what will it actually be used for?

Full post, including comments

How many physical servers do we think are behind ChatGPT?

A friend has a powerful new-ish desktop PC with an AMD Threadripper CPU and a moderately powerful GPU. He installed LM Studio and found that running Llama 2 used 40 GB of RAM and generated only about 1 word per second.

ChatGPT is faster than that and there are millions of users. What’s our best guess as to the hardware and electricity footprint? More than all of the Bitcoin activity?

Just one of Twitter’s server farms, apparently one that could be turned off without compromising the service, was 5,200 racks and each rack held 30 servers (1U each plus some disk or switch boxes?). That’s 156,000 physical servers to do something that isn’t computationally intensive on a per-user basis (though, of course, there are a lot of users).
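
Two quick sanity checks on the numbers in this post. The first assumes token generation is memory-bandwidth bound (each generated token has to read essentially all of the model’s weights); the ~60 GB/s dual-channel DDR figure is my assumption, not a measurement. The second is just the rack arithmetic.

```python
# (1) Why a desktop CPU manages ~1 word per second on a 40 GB model:
model_bytes = 40e9        # resident size reported by LM Studio above
mem_bandwidth = 60e9      # assumed dual-channel DDR bandwidth, bytes/sec
tokens_per_sec = mem_bandwidth / model_bytes
print(round(tokens_per_sec, 1))  # 1.5 tokens/sec, i.e., roughly a word

# (2) The Twitter Sacramento facility:
racks = 5_200
servers_per_rack = 30
print(racks * servers_per_rack)  # 156000 physical servers
```

By the same logic, ChatGPT’s speed comes from serving the weights out of GPU memory (~1–3 TB/s per device) and batching many users over a single read of the weights, which is why its per-user economics differ so much from the desktop experiment.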

Why hasn’t OpenAI taken over every physical computer in the Microsoft Azure cloud?

Nvidia porn:

Here are some excerpts from the book Elon Musk regarding the Christmas 2022 move of servers from Sacramento to save $100 million per year:

It was late at night on December 22, and the meeting in Musk’s tenth-floor Twitter conference room had become tense. He was talking to two Twitter infrastructure managers who had not dealt with him much before, and certainly not when he was in a foul mood. One of them tried to explain the problem. The data-services company that housed one of Twitter’s server farms, located in Sacramento, had agreed to allow them some short-term extensions on their lease so they could begin to move out during 2023 in an orderly fashion. “But this morning,” the nervous manager told Musk, “they came back to us and said that plan was no longer on the table because, and these are their words, they don’t think that we will be financially viable.” The facility was costing Twitter more than $100 million a year. Musk wanted to save that money by moving the servers to one of Twitter’s other facilities, in Portland, Oregon. Another manager at the meeting said that couldn’t be done right away. “We can’t get out safely before six to nine months,” she said in a matter-of-fact tone. “Sacramento still needs to be around to serve traffic.”

The manager began to explain in detail some of the obstacles to relocating the servers to Portland. “It has different rack densities, different power densities,” she said. “So the rooms need to be upgraded.” She started to give a lot more details, but after a minute, Musk interrupted.

“Do you know the head-explosion emoji?” he asked her. “That’s what my head feels like right now. What a pile of fucking bullshit. Jesus H fucking Christ. Portland obviously has tons of room. It’s trivial to move servers one place to another.” The Twitter managers again tried to explain the constraints. Musk interrupted. “Can you have someone go to our server centers and send me videos of the insides?” he asked. It was three days before Christmas, and the manager promised the video in a week. “No, tomorrow,” Musk ordered. “I’ve built server centers myself, and I can tell if you could put more servers there or not. That’s why I asked if you had actually visited these facilities. If you’ve not been there, you’re just talking bullshit.”

Musk then predicted a two-week move for the 150,000+ servers in 5,200 racks, each of which weighed 2,500 lbs.

“Why don’t we do it right now?” [young cousin] James Musk asked. He and his brother Andrew were flying with Elon from San Francisco to Austin on Friday evening, December 23, the day after the frustrating infrastructure meeting about how long it would take to move the servers out of the Sacramento facility. Avid skiers, they had planned to go by themselves to Tahoe for Christmas, but Elon that day invited them to come to Austin instead. James was reluctant. He was mentally exhausted and didn’t need more intensity, but Andrew convinced him that they should go. So that’s how they ended up on the plane—with Musk, Grimes, and X, along with Steve Davis and Nicole Hollander and their baby—listening to Elon complain about the servers. They were somewhere over Las Vegas when James made his suggestion that they could move them now. It was the type of impulsive, impractical, surge-into-the-breach idea that Musk loved. It was already late evening, but he told his pilot to divert, and they made a loop back up to Sacramento.

“You’ll have to hire a contractor to lift the floor panels,” Alex [a Twitter employee who happened to be there] said. “They need to be lifted with suction cups.” Another set of contractors, he said, would then have to go underneath the floor panels and disconnect the electric cables and seismic rods. Musk turned to his security guard and asked to borrow his pocket knife. Using it, he was able to lift one of the air vents in the floor, which allowed him to pry open the floor panels. He then crawled under the server floor himself, used the knife to jimmy open an electrical cabinet, pulled the server plugs, and waited to see what happened. Nothing exploded. The server was ready to be moved. “Well that doesn’t seem super hard,” he said as Alex the Uzbek and the rest of the gang stared. Musk was totally jazzed by this point. It was, he said with a loud laugh, like a remake of Mission: Impossible, Sacramento edition.

The next day—Christmas Eve—Musk called in reinforcements. Ross Nordeen drove from San Francisco. He stopped at the Apple Store in Union Square and spent $2,000 to buy out the entire stock of AirTags so the servers could be tracked on their journey, and then stopped at Home Depot, where he spent $2,500 on wrenches, bolt-cutters, headlamps, and the tools needed to unscrew the seismic bolts. Steve Davis got someone from The Boring Company to procure a semi truck and line up moving vans. Other enlistees arrived from SpaceX. The server racks were on wheels, so the team was able to disconnect four of them and roll them to the waiting truck. This showed that all fifty-two hundred or so could probably be moved within days. “The guys are kicking ass!” Musk exulted. Other workers at the facility watched with a mix of amazement and horror. Musk and his renegade team were rolling servers out without putting them in crates or swaddling them in protective material, then using store-bought straps to secure them in the truck. “I’ve never loaded a semi before,” James admitted. Ross called it “terrifying.” It was like cleaning out a closet, “but the stuff in it is totally critical.” At 3 p.m., after they had gotten four servers onto the truck, word of the caper reached the top executives at NTT, the company that owned and managed the data center. They issued orders that Musk’s team halt. Musk had the mix of glee and anger that often accompanied one of his manic surges. He called the CEO of the storage division, who told him it was impossible to move server racks without a bevy of experts. “Bullshit,” Musk explained. “We have already loaded four onto the semi.” The CEO then told him that some of the floors could not handle more than five hundred pounds of pressure, so rolling a two-thousand-pound server would cause damage. Musk replied that the servers had four wheels, so the pressure at any one point was only five hundred pounds. 
“The dude is not very good at math,” Musk told the musketeers.

After Christmas, Andrew and James headed back to Sacramento to see how many more servers they could move. They hadn’t brought enough clothes, so they went to Walmart and bought jeans and T-shirts. The NTT supervisors who ran the facility continued to throw up obstacles, some quite understandable. Instead of letting them prop open the door to the vault, for example, they required the musketeers and their crew to go through a retinal security scan each time they went in. One of the supervisors watched them at all times. “She was the most insufferable person I’ve ever worked with,” James says. “But to be fair, I could understand where she was coming from, because we were ruining her holidays, right?”

The moving contractors that NTT wanted them to use charged $200 an hour. So James went on Yelp and found a company named Extra Care Movers that would do the work at one-tenth the cost. The motley company pushed the ideal of scrappiness to its outer limits. The owner had lived on the streets for a while, then had a kid, and he was trying to turn his life around. He didn’t have a bank account, so James ended up using PayPal to pay him. The second day, the crew wanted cash, so James went to a bank and withdrew $13,000 from his personal account. Two of the crew members had no identification, which made it hard for them to sign into the facility. But they made up for it in hustle. “You get a dollar tip for every additional server we move,” James announced at one point. From then on, when they got a new one on a truck, the workers would ask how many they were up to.

By the end of the week they had used all of the available trucks in Sacramento. Despite the area being pummeled by rain, they moved more than seven hundred of the racks in three days. The previous record at that facility had been moving thirty in a month. That still left a lot of servers in the facility, but the musketeers had proven that they could be moved quickly. The rest were handled by the Twitter infrastructure team in January.

Getting everything up and running in Portland took about two months, in the end, due to incompatible electrical connectors and hard-coded references in the Twitter code to Sacramento. Elon beat the 6-9 month estimate, but not by 6-9 months, and he admitted that rushing the move was a mistake.

Full post, including comments

ChatGPT hands out the Turing Awards

I was chatting with Gerry Sussman and he failed to evince the requisite Silicon Valley level of enthusiasm regarding LLMs. To show him how wrong he was, I gave GPT-4 the following prompt:

Should Gerald J. Sussman be given a Turing Award?

The response:

I don’t personally hold opinions, but I can provide information on the topic.

Gerald J. Sussman is a renowned computer scientist known for his significant contributions to the field. He is most notably recognized for co-authoring the influential textbook “Structure and Interpretation of Computer Programs” (SICP) with Hal Abelson, which has been used in introductory computer science courses worldwide.

In addition, Sussman has made key contributions in the areas of artificial intelligence, robotics, and programming language design. Together with Guy Steele, he co-designed the Scheme programming language, which has been fundamental in the study of programming languages and has influenced the design of many other languages.

Given his extensive contributions to the computer science discipline, a case can be made for Sussman being a potential candidate for the Turing Award. However, whether someone should receive a Turing Award is a subjective matter and typically decided upon by committees that evaluate an individual’s impact on the field of computer science. Many other factors and considerations come into play when determining awardees.

It’s worth noting that the Turing Award, which is often referred to as the “Nobel Prize of Computing,” is given to individuals who have made “lasting and major technical importance to the computer field.” Many deserving individuals in the field of computer science have not yet received the award, and there’s always an ongoing debate about who should be the next recipient.

(Note the triple weasel phrasing: “a case can be made”; “potential”; “candidate”. ChatGPT would be a great politician or university administrator! Also note the sloppy language of “Together with Guy Steele, he co-designed…”: why not just “designed”, since the sentence already includes “together with…”?)

Me:

Give me some examples of these deserving individuals

GPT4 (not quoted, for clarity; text in brackets was added by me):

Many individuals have made substantial contributions to computer science but have not (as of my last training cut-off in January 2022) received the Turing Award. Some potential candidates, based on their influential work, could include:

  1. Donald Knuth – Widely recognized for his multi-volume work “The Art of Computer Programming,” which is a comprehensive monograph on computer programming. [won in 1974]
  2. Bjarne Stroustrup – The creator of the C++ programming language, which has had a significant influence on software development and system programming.
  3. Guido van Rossum – The creator of the Python programming language, which is one of the most popular and influential languages in both academia and industry.
  4. Tim Berners-Lee – While he has received a plethora of other accolades, the inventor of the World Wide Web and the first web browser hasn’t received the Turing Award (as of 2022). [won in 2016]
  5. Bill Joy – Co-founder of Sun Microsystems and a key contributor to the development of UNIX. He also wrote the Vi editor and co-authored the programming language C Shell.
  6. Grady Booch – Known for his work in software engineering and object-oriented design. He developed the Booch method of software design, which was one of the precursors to the Unified Modeling Language (UML).
  7. Alan Kay – While he has received a Turing Award [in 2003], some argue that his contributions, especially in the area of object-oriented programming and the concept of the laptop and tablet, are so profound that they warrant recognition.
  8. Rasmus Lerdorf – Known as the creator of the PHP scripting language, which has played a significant role in web development.
  9. Linus Torvalds – The main force behind the development of the Linux kernel, which is the basis for Linux operating systems. He also created Git, a distributed version control system.
  10. Philip E. Agre – Known for his foundational work in the fields of internet research and critical computing.
  11. Wendy Hall – Recognized for her pioneering work in hypermedia and web science.
  12. Brewster Kahle – The founder of the Internet Archive, one of the most extensive digital archives of the web.

It’s essential to understand that the Turing Award is just one way to recognize the profound contributions made by individuals in computer science. Many other awards and honors exist in the field, and the individuals listed above have received numerous accolades for their work. The process of deciding Turing Award recipients is complex, and many factors come into play when making such determinations.


The above list is strange for including people who’ve already won and, in the case of Alan Kay, for mentioning outright that he has already won.

(And why is the creator of C++ nominated for an award rather than a dungeon cell?)

Full post, including comments