How is Intel able to sell CPUs if they’ve already told people that the current socket is obsolete?

Here’s a question at the intersection of marketing and electronics: who is buying Intel CPUs right now, after Intel has told the world that it will render the current socket, and therefore all current motherboards, obsolete before the end of 2024?

“Intel’s next-gen desktop CPUs have reportedly leaked” (Tom’s Hardware):

Arrow Lake will reside on new Intel motherboards with LGA1851 sockets and 800-series chipsets. Although the upcoming socket has 9% more pins than the existing LGA1700 socket, the dimensions didn’t change, so you might be able to recycle your existing CPU cooler.

Intel hasn’t provided details on when Arrow Lake will hit the market. But we suspect it’ll be sometime in the fourth quarter of the year since AMD’s upcoming Zen 5 Ryzen processors are on track for launch before the year is over.

Especially given that AMD is not rendering its socket obsolete for another few years, I am having trouble figuring out why demand for Intel desktop CPUs, at least at the high end, doesn’t fall off a cliff.

The news about the socket is actually almost a year old at this point. A July 2023 article:

I guess it is tough to keep a secret when there are so many independent motherboard manufacturers, but shouldn’t we expect a demand collapse, massive price cuts for both CPUs and motherboards, etc. as the Arrow Lake release gets closer?

Is the explanation that anyone who cares about CPU/computer performance buys AMD? I think that Intel claims that its new chips have an onboard AI-optimized GPU.

Full post, including comments

Why is the Btrfs file system as implemented by Synology so fragile?

We had a few seconds of power loss the other day. Everything in the house, including a Windows machine using NTFS, came back to life without any issues. A Synology DS720+, however, became a useless brick, claiming to have suffered unrecoverable file system damage while the underlying two hard drives and two SSDs are in perfect condition. The volume is two mirrored drives using the Btrfs file system (the Synology default, though ext4 is also available as an option). Btrfs is a copy-on-write file system with journal-like crash-consistency guarantees, which should make this kind of corruption impossible. Yet searching the Internet reveals that Synology suicides are commonplace. Here’s one example that pins the blame on the SSDs being enabled as a read/write cache. But given that the SSDs are non-volatile, why isn’t the Synology software smart enough to deal with the possibility of a power outage even when read/write caching (which seems to be the default) is enabled? The Synology web page on the subject says you need two SSDs (which I have) for “fault tolerance” and doesn’t mention that the entire NAS can become a brick after losing power for a few seconds.

Given that Synology has only one job, i.e., the secure storage of data, this strikes me as a spectacular failure of corporate mission.

Readers: Have you seen this kind of failure before? NTFS was introduced by Microsoft in 1993 and I’ve never seen it completely destroyed by a power interruption. Oracle, IBM’s DB2, and Microsoft SQL Server use similar write-ahead logging techniques, and they never become useless after a power interruption.

Separately, what are some alternatives to Synology for a home NAS? I find the admin interface to be much more complicated than it needs to be, and the defaults are also unsuitable for home use, e.g., by default the NAS won’t automatically restart after a power failure.

Finally, suppose that I decide to rebuild this Synology NAS, which will almost certainly involve wiping all of the data and starting over (I mostly use it as a backup for my Windows machine, so losing 100 percent of the data that I paid Synology to keep isn’t the end of the world). If I take the InterWeb’s advice to get a UPS with a USB output, both to smooth out the Synology’s power supply and to give it a signal via USB to shut down cleanly, what is the smallest, quietest, and cheapest UPS that will do the job?
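For context on what that USB signal actually does: Synology’s UPS support is built on the Network UPS Tools (NUT) protocol, and the logic amounts to polling the UPS status and shutting down cleanly when the battery gets low. Below is a minimal sketch of that loop in Python; the UPS name, polling interval, and shutdown command are illustrative assumptions, not Synology’s actual daemon.

    import subprocess
    import time

    UPS_NAME = "ups@localhost"  # hypothetical NUT UPS name; yours will differ

    def ups_status():
        """Ask the NUT daemon for the UPS status, e.g. 'OL' (on line) or 'OB LB' (on battery, low)."""
        result = subprocess.run(["upsc", UPS_NAME, "ups.status"],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    while True:
        status = ups_status()
        if "OB" in status and "LB" in status:
            # On battery and battery low: shut down cleanly before power is cut.
            subprocess.run(["shutdown", "-h", "now"])
            break
        time.sleep(10)  # poll every 10 seconds

The point of the USB cable, in other words, is just to let the NAS see the “on battery / low battery” flags in time to unmount its file systems.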

Full post, including comments

Microsoft keyboards back from the dead

For those of us whose hands and brains are accustomed to the Microsoft Sculpt ergonomic keyboard, which was discontinued in 2023, it looks as though there is hope. Microsoft has apparently made a deal with Incase, an established computer accessory company, to revive the Microsoft keyboard line (presumably coming out of the same factory in China).

If only Google would do this with Picasa! Open source it so that someone else can take care of the former customers.

The Microsoft product page is still live:

An Amazon seller has a used one for $369:

I paid $111 for this in March 2021. Adjusted for Bidenflation at the official rate, that’s supposedly about $130 today.

Full post, including comments

Who has an Apple Vision Pro VR headset coming?

Today is the day, supposedly, for Apple to begin delivering its $3500-4150 version of the $500 Meta Quest 3. Have any of you ordered one? Tried one? Figured out what could be done with a device that becomes a 1 lb. stone around your head after its two hours of battery are exhausted?

As with everything else from Silicon Valley, it is important to be young and Black to be an effective user:

But what if you’re not young and Black? What would you say that you do here?

Based on a quick search, it doesn’t seem as though the obvious “take a walk through every famous art museum” app is available, either for Meta’s or Apple’s headset. On the more mature Meta product, it seems as though shooter games are popular. But who is going to invest in developing great games when there are only 20 million Quest headsets out there and many have been collecting dust on shelves? A non-VR game can be sold to almost anyone on the planet (Xbox, PlayStation, PC). The problem is yet worse for the Apple VR world. Apple is planning/hoping to sell just 400,000 headsets in the first year. A $20 game that gets 10 percent market share will yield just $800,000 in revenue for a developer.
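Putting that developer arithmetic into code (the 400,000-unit and 10 percent figures are from the paragraph above; the 30 percent platform commission is my added assumption):

    headsets_year_one = 400_000   # Apple's hoped-for first-year sales, per above
    attach_rate = 0.10            # 10 percent of owners buy the game
    price = 20                    # $20 game

    gross = headsets_year_one * attach_rate * price
    net = gross * (1 - 0.30)      # assume Apple's standard 30% App Store cut

    print(f"gross: ${gross:,.0f}")           # gross: $800,000
    print(f"net to developer: ${net:,.0f}")  # net to developer: $560,000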

I was wrong about Tesla’s prospects for success and wrong about Bitcoin, so I’m probably wrong about the Vision Pro. But what will it actually be used for?

Full post, including comments

How many physical servers do we think are behind ChatGPT?

A friend has a powerful new-ish desktop PC with an AMD Threadripper CPU and a moderately powerful GPU. He installed LM Studio and found that running Llama 2 used 40 GB of RAM and generated only 1 word per second.

ChatGPT is faster than that and there are millions of users. What’s our best guess as to the hardware and electricity footprint? More than all of the Bitcoin activity?

Just one of Twitter’s server farms, apparently one that could be turned off without compromising the service, comprised 5,200 racks, each holding 30 servers (1U each plus some disk or switch boxes?). That’s 156,000 physical servers to do something that isn’t computationally intensive on a per-user basis (though, of course, there are a lot of users).
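Here is one way to frame the back-of-the-envelope estimate. Every input below is an assumption of mine for illustration, not a published OpenAI figure; the point is how quickly the numbers get large even with generous per-GPU throughput.

    # Back-of-the-envelope ChatGPT hardware estimate. ALL inputs are assumptions.
    daily_users = 10_000_000        # assumed active users per day
    queries_per_user = 10           # assumed queries per user per day
    tokens_per_query = 500          # assumed tokens generated per response
    tokens_per_gpu_per_sec = 100    # assumed server-GPU throughput (batching and
                                    # fast memory beat the 1 word/sec desktop figure)

    tokens_per_sec = daily_users * queries_per_user * tokens_per_query / 86_400
    gpus = tokens_per_sec / tokens_per_gpu_per_sec
    servers = gpus / 8              # assume 8 GPUs per server, DGX-style

    print(f"{tokens_per_sec:,.0f} tokens/s; ~{gpus:,.0f} GPUs; ~{servers:,.0f} servers")
    # With these made-up inputs: ~578,704 tokens/s, ~5,787 GPUs, ~723 servers,
    # before peak load, redundancy, and the (much larger) training clusters.

Even with these deliberately modest inputs, the inference fleet alone runs to thousands of GPUs.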

Why hasn’t OpenAI taken over every physical computer in the Microsoft Azure cloud?

Nvidia porn:

Here are some excerpts from the book Elon Musk regarding the Christmas 2022 move of servers from Sacramento to save $100 million per year:

It was late at night on December 22, and the meeting in Musk’s tenth-floor Twitter conference room had become tense. He was talking to two Twitter infrastructure managers who had not dealt with him much before, and certainly not when he was in a foul mood. One of them tried to explain the problem. The data-services company that housed one of Twitter’s server farms, located in Sacramento, had agreed to allow them some short-term extensions on their lease so they could begin to move out during 2023 in an orderly fashion. “But this morning,” the nervous manager told Musk, “they came back to us and said that plan was no longer on the table because, and these are their words, they don’t think that we will be financially viable.” The facility was costing Twitter more than $100 million a year. Musk wanted to save that money by moving the servers to one of Twitter’s other facilities, in Portland, Oregon. Another manager at the meeting said that couldn’t be done right away. “We can’t get out safely before six to nine months,” she said in a matter-of-fact tone. “Sacramento still needs to be around to serve traffic.”

The manager began to explain in detail some of the obstacles to relocating the servers to Portland. “It has different rack densities, different power densities,” she said. “So the rooms need to be upgraded.” She started to give a lot more details, but after a minute, Musk interrupted.

“Do you know the head-explosion emoji?” he asked her. “That’s what my head feels like right now. What a pile of fucking bullshit. Jesus H fucking Christ. Portland obviously has tons of room. It’s trivial to move servers one place to another.” The Twitter managers again tried to explain the constraints. Musk interrupted. “Can you have someone go to our server centers and send me videos of the insides?” he asked. It was three days before Christmas, and the manager promised the video in a week. “No, tomorrow,” Musk ordered. “I’ve built server centers myself, and I can tell if you could put more servers there or not. That’s why I asked if you had actually visited these facilities. If you’ve not been there, you’re just talking bullshit.”

Musk then predicted a two-week move for the 150,000+ servers in 5,200 racks, each of which weighed 2,500 lbs.

“Why don’t we do it right now?” [young cousin] James Musk asked. He and his brother Andrew were flying with Elon from San Francisco to Austin on Friday evening, December 23, the day after the frustrating infrastructure meeting about how long it would take to move the servers out of the Sacramento facility. Avid skiers, they had planned to go by themselves to Tahoe for Christmas, but Elon that day invited them to come to Austin instead. James was reluctant. He was mentally exhausted and didn’t need more intensity, but Andrew convinced him that they should go. So that’s how they ended up on the plane—with Musk, Grimes, and X, along with Steve Davis and Nicole Hollander and their baby—listening to Elon complain about the servers. They were somewhere over Las Vegas when James made his suggestion that they could move them now. It was the type of impulsive, impractical, surge-into-the-breach idea that Musk loved. It was already late evening, but he told his pilot to divert, and they made a loop back up to Sacramento.

“You’ll have to hire a contractor to lift the floor panels,” Alex [a Twitter employee who happened to be there] said. “They need to be lifted with suction cups.” Another set of contractors, he said, would then have to go underneath the floor panels and disconnect the electric cables and seismic rods. Musk turned to his security guard and asked to borrow his pocket knife. Using it, he was able to lift one of the air vents in the floor, which allowed him to pry open the floor panels. He then crawled under the server floor himself, used the knife to jimmy open an electrical cabinet, pulled the server plugs, and waited to see what happened. Nothing exploded. The server was ready to be moved. “Well that doesn’t seem super hard,” he said as Alex the Uzbek and the rest of the gang stared. Musk was totally jazzed by this point. It was, he said with a loud laugh, like a remake of Mission: Impossible, Sacramento edition.

The next day—Christmas Eve—Musk called in reinforcements. Ross Nordeen drove from San Francisco. He stopped at the Apple Store in Union Square and spent $2,000 to buy out the entire stock of AirTags so the servers could be tracked on their journey, and then stopped at Home Depot, where he spent $2,500 on wrenches, bolt-cutters, headlamps, and the tools needed to unscrew the seismic bolts. Steve Davis got someone from The Boring Company to procure a semi truck and line up moving vans. Other enlistees arrived from SpaceX. The server racks were on wheels, so the team was able to disconnect four of them and roll them to the waiting truck. This showed that all fifty-two hundred or so could probably be moved within days. “The guys are kicking ass!” Musk exulted. Other workers at the facility watched with a mix of amazement and horror. Musk and his renegade team were rolling servers out without putting them in crates or swaddling them in protective material, then using store-bought straps to secure them in the truck. “I’ve never loaded a semi before,” James admitted. Ross called it “terrifying.” It was like cleaning out a closet, “but the stuff in it is totally critical.” At 3 p.m., after they had gotten four servers onto the truck, word of the caper reached the top executives at NTT, the company that owned and managed the data center. They issued orders that Musk’s team halt. Musk had the mix of glee and anger that often accompanied one of his manic surges. He called the CEO of the storage division, who told him it was impossible to move server racks without a bevy of experts. “Bullshit,” Musk explained. “We have already loaded four onto the semi.” The CEO then told him that some of the floors could not handle more than five hundred pounds of pressure, so rolling a two-thousand-pound server would cause damage. Musk replied that the servers had four wheels, so the pressure at any one point was only five hundred pounds. “The dude is not very good at math,” Musk told the musketeers.

After Christmas, Andrew and James headed back to Sacramento to see how many more servers they could move. They hadn’t brought enough clothes, so they went to Walmart and bought jeans and T-shirts. The NTT supervisors who ran the facility continued to throw up obstacles, some quite understandable. Instead of letting them prop open the door to the vault, for example, they required the musketeers and their crew to go through a retinal security scan each time they went in. One of the supervisors watched them at all times. “She was the most insufferable person I’ve ever worked with,” James says. “But to be fair, I could understand where she was coming from, because we were ruining her holidays, right?”

The moving contractors that NTT wanted them to use charged $200 an hour. So James went on Yelp and found a company named Extra Care Movers that would do the work at one-tenth the cost. The motley company pushed the ideal of scrappiness to its outer limits. The owner had lived on the streets for a while, then had a kid, and he was trying to turn his life around. He didn’t have a bank account, so James ended up using PayPal to pay him. The second day, the crew wanted cash, so James went to a bank and withdrew $13,000 from his personal account. Two of the crew members had no identification, which made it hard for them to sign into the facility. But they made up for it in hustle. “You get a dollar tip for every additional server we move,” James announced at one point. From then on, when they got a new one on a truck, the workers would ask how many they were up to.

By the end of the week they had used all of the available trucks in Sacramento. Despite the area being pummeled by rain, they moved more than seven hundred of the racks in three days. The previous record at that facility had been moving thirty in a month. That still left a lot of servers in the facility, but the musketeers had proven that they could be moved quickly. The rest were handled by the Twitter infrastructure team in January.

Getting everything up and running in Portland took about two months, in the end, due to incompatible electrical connectors and hard-coded references in the Twitter code to Sacramento. Elon beat the 6-9 month estimate, but not by 6-9 months, and he admitted that rushing the move was a mistake.

Full post, including comments

ChatGPT hands out the Turing Awards

I was chatting with Jerry Sussman and he failed to evince the requisite Silicon Valley level of enthusiasm regarding LLMs. To show him how wrong he was, I gave GPT-4 the following prompt:

Should Gerald J. Sussman be given a Turing Award?

The response:

I don’t personally hold opinions, but I can provide information on the topic.

Gerald J. Sussman is a renowned computer scientist known for his significant contributions to the field. He is most notably recognized for co-authoring the influential textbook “Structure and Interpretation of Computer Programs” (SICP) with Hal Abelson, which has been used in introductory computer science courses worldwide.

In addition, Sussman has made key contributions in the areas of artificial intelligence, robotics, and programming language design. Together with Guy Steele, he co-designed the Scheme programming language, which has been fundamental in the study of programming languages and has influenced the design of many other languages.

Given his extensive contributions to the computer science discipline, a case can be made for Sussman being a potential candidate for the Turing Award. However, whether someone should receive a Turing Award is a subjective matter and typically decided upon by committees that evaluate an individual’s impact on the field of computer science. Many other factors and considerations come into play when determining awardees.

It’s worth noting that the Turing Award, which is often referred to as the “Nobel Prize of Computing,” is given to individuals who have made “lasting and major technical importance to the computer field.” Many deserving individuals in the field of computer science have not yet received the award, and there’s always an ongoing debate about who should be the next recipient.

(Note the triple weasel phrasing: “a case can be made”; “potential”; “candidate”. ChatGPT would be a great politician or university administrator! Also note the sloppy language of “Together with Guy Steele, he co-designed…”; why not just “designed”, since the sentence already includes “together with”?)

Me:

Give me some examples of these deserving individuals

GPT4 (not quoted, for clarity; text in brackets was added by me):

Many individuals have made substantial contributions to computer science but have not (as of my last training cut-off in January 2022) received the Turing Award. Some potential candidates, based on their influential work, could include:

  1. Donald Knuth – Widely recognized for his multi-volume work “The Art of Computer Programming,” which is a comprehensive monograph on computer programming. [won in 1974]
  2. Bjarne Stroustrup – The creator of the C++ programming language, which has had a significant influence on software development and system programming.
  3. Guido van Rossum – The creator of the Python programming language, which is one of the most popular and influential languages in both academia and industry.
  4. Tim Berners-Lee – While he has received a plethora of other accolades, the inventor of the World Wide Web and the first web browser hasn’t received the Turing Award (as of 2022). [won in 2016]
  5. Bill Joy – Co-founder of Sun Microsystems and a key contributor to the development of UNIX. He also wrote the Vi editor and co-authored the programming language C Shell.
  6. Grady Booch – Known for his work in software engineering and object-oriented design. He developed the Booch method of software design, which was one of the precursors to the Unified Modeling Language (UML).
  7. Alan Kay – While he has received a Turing Award [in 2003], some argue that his contributions, especially in the area of object-oriented programming and the concept of the laptop and tablet, are so profound that they warrant recognition.
  8. Rasmus Lerdorf – Known as the creator of the PHP scripting language, which has played a significant role in web development.
  9. Linus Torvalds – The main force behind the development of the Linux kernel, which is the basis for Linux operating systems. He also created Git, a distributed version control system.
  10. Philip E. Agre – Known for his foundational work in the fields of internet research and critical computing.
  11. Wendy Hall – Recognized for her pioneering work in hypermedia and web science.
  12. Brewster Kahle – The founder of the Internet Archive, one of the most extensive digital archives of the web.

It’s essential to understand that the Turing Award is just one way to recognize the profound contributions made by individuals in computer science. Many other awards and honors exist in the field, and the individuals listed above have received numerous accolades for their work. The process of deciding Turing Award recipients is complex, and many factors come into play when making such determinations.


The above list is strange for including people who’ve already won; in the case of Alan Kay, it even mentions that he has already won.

(And why is the creator of C++ nominated for an award rather than a dungeon cell?)

Full post, including comments

Remembering Ed Fredkin

The New York Times published a thoughtful obituary for Ed Fredkin, an early MIT computer scientist.

I met Ed when I was an undergraduate at MIT (during the last Ice Age). He is quoted in the NYT as optimistic about artificial intelligence:

“It requires a combination of engineering and science, and we already have the engineering,” Fredkin said in a 1977 interview with The New York Times. “In order to produce a machine that thinks better than man, we don’t have to understand everything about man. We still don’t understand feathers, but we can fly.”

When I talked to him, circa 1980, the VAX 11/780 with 8 MB of RAM was the realistic dream computer (about $500,000). I took the position that AI research was pointless because computers would need to be 1,000 times more powerful before they could do anything resembling human intelligence. Ed thought that a VAX might have sufficient power to serve as a full-blown AI if someone discovered the secret to AI. “Computers and AI research should be licensed,” he said, “because a kid in Brazil might discover a way to build an artificial intelligence and would be able to predict the stock market and quickly become the richest and most powerful person on the planet.”

[The VAX could process approximately 1 million instructions per second and, as noted above, held 8 MB of RAM. I asked ChatGPT to compare a modern NVIDIA GPU:

For example, a GPU from the NVIDIA GeForce RTX 30 series, like the RTX 3080 released in 2020, is capable of up to 30 teraflops of computing power in single-precision operations. That is 30 trillion floating-point operations per second.

So if you were to compare a VAX 11/780’s 1 MIPS (million instructions per second) to an RTX 3080’s 30 teraflops (trillion floating-point operations per second), the modern GPU is many orders of magnitude more powerful. It’s important to remember that the types of operations and workloads are quite different, and it’s not quite as simple as comparing these numbers directly. But it gives you an idea of the vast increase in computational power over the past few decades.

Also note that GPUs and CPUs have very different architectures and are optimized for different types of tasks. A GPU is designed for high-throughput parallel processing, which is used heavily in graphics rendering and scientific computing, among other things. A CPU (like the VAX 11/780) is optimized for a wide range of tasks and typically excels in tasks requiring complex logic and low latency.

Those final qualifiers remind me a little bit of ChatGPT’s efforts to avoid direct comparisons between soccer players identifying as “men” and soccer players identifying as “women”. If we accept that an NVIDIA card is the minimum for intelligence, it looks as though Fredkin and I were both wrong. The NVIDIA card has roughly 1000X the RAM, but perhaps 1 million times the computing performance. What about NVIDIA’s DGX H100, a purpose-built AI machine selling for about the same nominal price today as the VAX 11/780? That is spec’d at 32 petaFLOPs or about 32 billion times as many operations as the old VAX.]
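For anyone who wants to check the arithmetic, here are the straight ratios using the figures quoted above, ignoring the legitimate caveat that FLOPS and VAX MIPS are not directly comparable units:

    vax_ops_per_sec = 1e6            # VAX 11/780: roughly 1 MIPS
    vax_ram_bytes = 8 * 2**20        # 8 MB

    rtx3080_flops = 30e12            # ~30 TFLOPS single precision, per above
    rtx3080_ram_bytes = 10 * 2**30   # 10 GB of GDDR6X on the original RTX 3080

    dgx_h100_flops = 32e15           # ~32 petaFLOPS, the spec quoted above

    print(f"RTX 3080 vs. VAX, compute: {rtx3080_flops / vax_ops_per_sec:,.0f}x")   # 30,000,000x
    print(f"RTX 3080 vs. VAX, RAM:     {rtx3080_ram_bytes / vax_ram_bytes:,.0f}x") # 1,280x
    print(f"DGX H100 vs. VAX, compute: {dgx_h100_flops / vax_ops_per_sec:,.0f}x")  # 32,000,000,000x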

I had dropped out of high school and he out of college, so Ed used to remind me that he was one degree ahead.

“White heterosexual man flying airplane” is apparently a dog-bites-man story, so the NYT fails to mention Fredkin’s aviation activities after the Air Force. He was a champion glider pilot and, at various times, he owned at least the following aircraft: Beechcraft Baron (twin piston), Piper Malibu, Cessna Citation Jet. “The faster the plane that you own, the more hours you’ll fly every year,” he pointed out. Readers may recall that the single-engine pressurized-to-FL250 Malibu plus a letter from God promising engine reliability is my dream family airplane. Fredkin purchased one of the first Lycoming-powered Malibus, a purported solution to the engine problems experienced by owners of the earlier Continental-powered models. Fredkin’s airplane caught fire on the ferry trip from the Piper factory to Boston.

One of the things that Ed did with his planes was fly back and forth to Pittsburgh where he was an executive at a company making an early personal computer, the Three Rivers PERQ (1979).

The obit fails to mention one of Fredkin’s greatest business coups: acquiring a $100 million (in pre-pre-Biden 1982 money) TV station in Boston for less than $10 million. The FCC was stripping RKO of some licenses because it failed “to disclose that its parent, the General Tire and Rubber Company, was under investigation for foreign bribery and for illegal domestic political contributions.” (NYT 1982) Via some deft maneuvering, including bringing in a Black partner who persuaded the FCC officials appointed by Jimmy Carter that the new station would offer specialized programming for inner-city Black viewers, Fredkin managed to get the license for Channel 7. RKO demanded a substantial payment for its physical infrastructure, however, including studios and transmitters. Ed cut a deal with WGBH, the local public TV station, in which WNEV-TV, a CBS affiliate, would share facilities in exchange for a fat annual rent. Ed used this deal as leverage to negotiate a ridiculously low price with RKO. To avoid embarrassment, however, RKO asked if they could leave about $15 million in the station’s checking account and then have the purchase price be announced as $22 million (71 million Bidies adjusted via CPI) for the physical assets. The deal went through and Channel 7 never had to crowd in with WGBH.

[The Carter-appointed FCC bureaucrats felt so good about the Black-oriented programming that they’d discussed with the WNEV-TV partner that they neglected to secure any contractual commitments for this programming to be developed. Channel 7 ended up delivering conventional CBS content.]

A 1970s portrait:

A 1981 report from Fredkin and Tommaso Toffoli:

Related:

Full post, including comments

Who understands homomorphic encryption?

One of my favorite talks at an MIT computer science event last month was by Raluca Ada Popa, a proponent of keeping everything encrypted on servers even during computation. Then it doesn’t matter if the bad people (formerly “bad guys”) break in. How is that possible? Homomorphic encryption, developed about 15 years ago by Craig Gentry, plus hardware support from Intel.

According to Ada Popa, the performance hit from doing everything encrypted is only about 20 percent.

It is tough to know if this can truly protect garden-variety web services, in which the server holds most of the keys, e.g., the database password. I asked Professor ChatGPT:

Homomorphic encryption is a type of encryption that allows computations to be performed on encrypted data without decrypting it. This means that an HTTP server can process encrypted data without ever needing to know what the data actually is. However, there’s a bit of confusion in your question as you suggest the HTTP server has the decryption key.

The real power of homomorphic encryption lies in the fact that the server (like an HTTP server in your question) doesn’t need to have the decryption key at all. It’s the client who holds the key, encrypts the data, sends the encrypted data to the server, and the server performs the computations on the encrypted data. After performing operations, the server sends the encrypted result back to the client, who can then decrypt it using the decryption key.

This offers an unprecedented level of data security because even if the server was compromised, the data would remain safe since the attacker wouldn’t have access to the decryption key. It’s particularly useful in cloud computing, where sensitive data needs to be processed by third-party servers.

The decryption key should be kept safe and private, usually on the client side. If the HTTP server does hold the decryption key, it weakens the system’s security as it creates a single point of failure where both encrypted data and the decryption key can be potentially accessed by an attacker.
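To make the client-holds-the-key point concrete, here is a toy sketch of my own (not from the talk, and not fully homomorphic): textbook RSA happens to be multiplicatively homomorphic, so the “server” below can multiply two encrypted numbers without ever seeing the plaintexts or the private key. Textbook RSA with a tiny key is of course insecure; this only illustrates the principle.

    # Toy illustration: the server computes on ciphertexts it cannot decrypt.
    # Textbook RSA is multiplicatively homomorphic: E(a) * E(b) mod n = E(a * b).

    # --- client side: generate a (deliberately tiny, insecure) key pair ---
    p, q = 61, 53
    n = p * q                           # public modulus
    e = 17                              # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent; never leaves the client
                                        # (modular-inverse pow() needs Python 3.8+)

    def encrypt(m):                     # client side
        return pow(m, e, n)

    def decrypt(c):                     # client side
        return pow(c, d, n)

    c1, c2 = encrypt(7), encrypt(6)     # client ships only ciphertexts

    # --- server side: multiply the ciphertexts; no plaintexts, no private key ---
    c_product = (c1 * c2) % n

    # --- client side: decrypt the server's result ---
    assert decrypt(c_product) == 7 * 6  # 42

In the Gentry-style schemes Ada Popa was describing, both addition and multiplication work on ciphertexts, which is enough to evaluate arbitrary computations.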


Could this prevent all of the credit card and mailing address breaches that we hear about? The credit card number is stored for one-click ordering, but can be decrypted only when the user is logged into an ecommerce site and is ready to enter his/her/zir/their password, which will serve as the key? Ditto for shipping address, but then that has to be transmitted to UPS or some other company, no?

Could it work for Google Drive? The big selling feature is that you can collaborate with 5 other authors if desired. How can that work if the document is encrypted with just one user’s key?

Who has thought about this and figured out whether homomorphic encryption is the silver bullet for defending practical applications?

Also from the event, the Followers of (Computer) Science stay safe in a crowded room for hours at a time by wearing masks:

Full post, including comments

The mad scramble for stuff is over? (Unifi network gear is back in stock)

Last year, I purchased a TP-Link Omada multi-point network because everything from the leading brand, Unifi, was sold out. Out of curiosity, I checked recently and everything that I would have purchased from Unifi, including a Dream Machine Pro router and their wall plate access point, was back in stock.

How is the Omada system working, you might ask? Quite well, though there are occasional failures of the upstream connectivity between the Arris cable modem that I purchased and Xfinity, and these require power cycling the Arris device to restore service. The software being run by the Arris device is controlled by Xfinity and it looks as though there hasn’t been an update for two years (see this post regarding the same issue from December 2020; the software image name is the same as what our Arris reports running).

Anyone else noticed that long-scarce items are available once more?

Exception: aviation parts, which are labor-intensive. Cirrus owners still post messages desperately seeking spares. (See Small airplanes are super expensive, but still much less useful than pre-coronapanic)

Full post, including comments

Database programmers’ dream: build the operating system on top of a DBMS

One of the most interesting talks at a recent anniversary celebration for computer science at MIT was by Michael Stonebraker, a fake MITer (he is best known for Ingres and Postgres, UC Berkeley implementations of IBM’s relational database management system concept). For the past few years, Stonebraker has been working on a database-first vision of computing:

For those who don’t want to watch the 55-minute video, the idea is to run a high-performance RDBMS underneath the operating system (OS) and have the OS use DBMS services to hold its state, to support inter-process communication, to roll back to a known good state after a failure or an attack, and so on (a toy sketch of the idea appears below).

Here’s a figure from a 2022 progress report:

The event opened with computer science PhDs acknowledging that humanity faces an existential threat from climate change (Science says that if you can program a computer you can predict the Earth’s temperature 50 years from now). If we combine that with the observation that humans are actually increasing, not decreasing, their CO2 emissions, humanity will soon be extinct. Do we actually need to rethink our practical foundation for computing if these are our sunset years? If yes, this strikes me as the most promising idea.

What about performance? Stonebraker says that the “OS-on-database” runs applications about as fast as conventional “OS-including-ad-hoc-data-management-schemes-for-all-of-its-state”.
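To make the idea concrete, here is a toy sketch of mine (not Stonebraker’s actual code) of what “OS state lives in transactional tables” might look like, with SQLite standing in for a real high-performance RDBMS: the process table and IPC are ordinary rows, and recovering from a crash mid-operation is just a transaction rollback.

    import sqlite3

    # Toy "OS state in a DBMS" sketch: process table and IPC as transactional rows.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE process (pid INTEGER PRIMARY KEY, name TEXT, state TEXT)")
    db.execute("CREATE TABLE message (dst_pid INTEGER, body TEXT)")

    def spawn(pid, name):
        with db:  # one transaction: the process either fully exists or it doesn't
            db.execute("INSERT INTO process VALUES (?, ?, 'runnable')", (pid, name))

    def send(dst_pid, body):
        with db:  # inter-process communication is just an insert
            db.execute("INSERT INTO message VALUES (?, ?)", (dst_pid, body))

    spawn(1, "init")
    send(1, "hello")

    # An operation that dies mid-way is rolled back, leaving the "OS"
    # in its last known good state.
    try:
        with db:
            db.execute("UPDATE process SET state = 'zombie' WHERE pid = 1")
            raise RuntimeError("simulated crash before commit")
    except RuntimeError:
        pass

    print(db.execute("SELECT state FROM process WHERE pid = 1").fetchone())  # ('runnable',)

The attraction of the approach is that once scheduler state, file metadata, and messages all live in tables like these, crash recovery and rollback after an attack come for free from the DBMS’s transaction machinery.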

Full post, including comments