One year until many of the world’s PCs can be taken over by a 10-year-old?

As the righteous in Tim Walz’s Minnesota, California, and Maskachusetts observe Indigenous Peoples’ Day and deplore the time when a hated Jew inflicted a Nakba on the entirely peaceful Native Americans, let’s look at the 21st century way to conquer a big part of the world.

From Microsoft, back in June:

I think this is because my desktop computer was running a CPU/chipset/motherboard configuration from 2015 that lacks some modern security features, such as a Trusted Platform Module.

I wonder if a lot of people won’t upgrade despite Microsoft’s threats. After all, improvements in computer hardware have slowed to a crawl (see GPU performance improvements since 2015 (and why not just use motherboard graphics?), in which we learn that the 2015 CPU had reasonable performance by 2022 standards). With Microsoft not bothering to continue with security updates and nearly every PC connected to the Internet, will a smart 10-year-old be able to take over a substantial fraction of the world’s computers? Windows is not an open-source computer program that folks other than Microsoft can debug and patch.

Circling back to Indigenous Peoples’ Day… remember that immigration wasn’t the best thing that ever happened to Native Americans, but Science proves that immigration will be the best thing that has ever happened to Native-Born Americans.

Full post, including comments

Why not a simple web site or phone app to determine whether one must evacuate?

An American faced with hazardous weather who wants to know whether to evacuate his/her/zir/their house or apartment must first do a web search to find a site that maps flood or evacuation zones, typically A through E. Then the citizen, documented immigrant, temporary protected status migrant, or undocumented migrant must scour various state and county web sites to try to figure out what the latest evacuation orders are by city, county, or state. Here’s part of a story from our local newspaper:

There are many ways for the above process to go wrong. Why not a phone app that gets GPS data from the phone hardware and operating system and does all of the above work reliably? The server just needs a database of evacuation and flood zones and a canonical, up-to-date list of evacuation orders. Why is it a human’s job to do something that can be done much more reliably by a computer?
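
For concreteness, here’s a minimal sketch of the server-side lookup (Python with the Shapely library); the zone polygons, coordinates, and order list below are made-up stand-ins for official county GIS data:

```python
# Hypothetical sketch of the lookup the post describes. The polygons and
# active-order table are illustrative; a real system would load county GIS data.
from shapely.geometry import Point, Polygon

ZONES = {  # toy evacuation-zone polygons keyed by letter, in (lon, lat)
    "A": Polygon([(-82.48, 27.90), (-82.40, 27.90), (-82.40, 27.97), (-82.48, 27.97)]),
    "B": Polygon([(-82.48, 27.97), (-82.40, 27.97), (-82.40, 28.02), (-82.48, 28.02)]),
}
ACTIVE_ORDERS = {"Hillsborough": {"A", "B"}}  # the canonical, up-to-date order list

def must_evacuate(lon: float, lat: float, county: str) -> str:
    """Map a GPS fix to a zone letter and check it against active orders."""
    point = Point(lon, lat)
    zone = next((z for z, poly in ZONES.items() if poly.contains(point)), None)
    if zone is None:
        return "No evacuation zone applies to your location."
    if zone in ACTIVE_ORDERS.get(county, set()):
        return f"Zone {zone}: evacuation ORDERED. Head to your assigned shelter."
    return f"Zone {zone}: no active order."

print(must_evacuate(-82.46, 27.95, "Hillsborough"))  # phone supplies GPS + county
```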

For Floridians during hurricane season the app could run continuously in the background and send alerts as necessary.

One wrinkle is that people who live in mobile homes are often ordered to evacuate even if they aren’t in a surge-prone zone. The ideal app, therefore, would know about trailer parks and maybe get loaded with a database from Zillow or similar regarding the housing type at a given address.

What about people who aren’t competent users of smartphones? Nearly all of them have an app-capable TV and I think those TVs can and do run software when the TV appears to be off. Some code could be built into TVs to connect to the same server that the phone apps connect to. In the event of an applicable evacuation order, the TV would wake up and display/speak “Time to evacuate!”. This would be a little more complex to set up because TVs don’t include GPS receivers and the street address of the TV might have to be entered.

As an added bonus to this app infrastructure, a resident of the U.S. could register his/her/zir/their address and phone/email with the server. The server could then put the registrants into a geospatially indexed database and query to find those affected by a newly issued alert and then email/text the relevant subscribers: “If you’re at 1141 George Perry Floyd Memorial Boulevard right now, which you said was your home address, your county has issued an evacuation order covering your neighborhood. Click here for more information, including a list of county-run shelters.” No matter how fast the U.S. population grows via open borders the computational capability of server CPUs should grow yet faster and, therefore, it would never be impractical to issue personalized alerts to every resident of the U.S.
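
A sketch of that fan-out step, again with made-up registrants and an invented alert polygon; a production system would push this query into a geospatially indexed table (e.g., PostGIS) rather than loop in Python:

```python
# Hypothetical fan-out for the registration idea above; data is invented.
from shapely.geometry import Point, Polygon

REGISTRANTS = [  # address, contact, and home coordinates supplied at sign-up
    {"address": "1141 George Perry Floyd Memorial Boulevard",
     "sms": "+1-813-555-0101", "lon": -82.46, "lat": 27.95},
]

def fan_out(alert_polygon: Polygon, county: str) -> list[str]:
    """Return one personalized message per registrant inside the alert area."""
    messages = []
    for r in REGISTRANTS:  # a real system would use a spatial index, not a loop
        if alert_polygon.contains(Point(r["lon"], r["lat"])):
            messages.append(
                f"To {r['sms']}: If you're at {r['address']} right now, "
                f"{county} has issued an evacuation order covering your "
                "neighborhood. Reply SHELTER for a list of county-run shelters."
            )
    return messages

alert = Polygon([(-82.48, 27.90), (-82.40, 27.90), (-82.40, 27.97), (-82.48, 27.97)])
for m in fan_out(alert, "Hillsborough County"):
    print(m)
```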

With all of the hundreds of billions of dollars spent by the federal government on disaster-related projects over the years, why hasn’t something like this been built by the government? Google, Apple, or Amazon could probably build it pretty easily given that those companies already know our addresses, phone numbers, and email addresses. If the above capabilities were built into Android and iOS that would cover almost everyone. Maybe these big companies wouldn’t want to implement this capability, though, due to fear of liability in case they happen to miss an evacuation order. (Maybe they could be protected from liability as the COVID-19 vaccine manufacturers were?)

Here’s a concrete example from Tampa (wiped out in 1848 and hit badly again in 1921), starting with the “evacuation zone map” for Hillsborough County:

The official evacuation order says “Hillsborough County has issued a mandatory evacuation order for Evacuation Zones A and B…”, but the legend doesn’t mention “zones”. The legend refers to an “evacuation level” of either A or B:

If we look at a satellite view of the city we can see that a lot of people shouldn’t have to run away:

My favorite steakhouse, Bern’s, is in the center of the city and in Zone C. Same deal for Columbia Restaurant in Ybor City. The art museum, on the other hand, is in Zone A. Need to go to the hijab store in Brandon, Florida (suburban Tampa)? That’s not in any evacuation zone (i.e., the hijab inventory should be safe). The Tampa Zoo, however, seems to be in Zone A, which is not great news for the animals. Busch Gardens is not in any evacuation zone. The big airport? Zone A.

During the Tampa evacuation it seems that some people ran away who didn’t need to and some people stayed despite an order to evacuate because they didn’t know what zone they were in. Once on the road, things got more chaotic, with shelters that filled up and traffic jams. Officials were saying “You don’t have to go more than 10 or 20 miles”, but residents didn’t know which shelter was the most sensible destination, so some folks might have driven 100+ miles away to a hotel or relative’s house. Ships always have muster stations so that people know where to go in the event that the whistle blows 7 times and then there is a long horn sound. Maybe the app could have a preplanned assignment of which shelter people in which blocks of a city should go to first, adjusted for the pet ownership status of the app user (evacuating with a pet is more complex than one might think: only some shelters are pet-friendly, the owner is required to have and bring a crate big enough for the pet, and the owner can’t stay with the pet while in the shelter). This could be refined as information is received that a shelter is full and turning people away.

What about after the hurricane arrives? The app/server combo could send an SMS or push notification reminding people to put their phones into low-power mode. The software could then notify people when it was safe to return to their individual neighborhoods (this can be complicated after a hurricane because sometimes bridges to barrier islands are destroyed and/or roads are blocked by trees). Using data from poweroutage.us, the software could include SMS information about whether power was likely to be available at a user’s home (maybe someone would choose to remain with friends or relatives until power was likely back).

Separately, here were our neighbors’ Hurricane Milton preparations as of yesterday, which may or may not meet FEMA standards:

Related:

  • “NY governor slammed for saying black children don’t know what computers are” (BBC). If Democrats don’t think that Black people can use computers and Democrats run the U.S. (which they do right now), why hasn’t the above-described app already been built and released by FEMA?
  • “FEMA Scrambles to Confront Two Storms—and Misinformation” (WSJ): “Instead, federal officials’ efforts to save lives are being complicated by an unusual level of politically charged misinformation, which authorities say risks leading people to disregard evacuation orders…” (The authorities are sure that the problem is that Americans are allowed to speak their minds on Twitter, not that people in a country where IQ is falling might lack the brainpower and diligence to get through the multiple web sites that are required to make an evacuation decision. If the “authorities” are correct, maybe Twitter and Facebook need to be shut down any time that an emergency has been declared? If “misinformation” is killing people and saving lives from COVID-19 justified suspending the First Amendment right to assemble, then surely it would make sense to suspend the First Amendment as a hurricane approaches the U.S.)
Full post, including comments

Checking on the Snowflake stock price

On April 1, 2021 I began questioning the value of Snowflake (SNOW) stock relative to Oracle (ORCL). I did a follow-up two years ago, How is Snowflake stock doing?:

SNOW is down nearly 30 percent while the S&P 500, thanks to Joe Biden’s careful stewardship of the U.S. economy, is down 10 percent (but actually that 10 percent over 1.5 years is more like 25 percent once inflation is factored in, a stunning loss of wealth for Americans).

In April 2021, SNOW was valued at roughly 30 percent of the value of Oracle (ORCL), the backbone of business data processing. What is the company’s market cap today, as a percentage of Oracle’s market cap? SNOW is worth $54 billion. Oracle is worth $465 billion, so the ratio has fallen to roughly 12 percent. So I think the philip.greenspun.com fact checking department must rate my April 2021 claim as #False. SNOW turned out to be a loser for an investor, but not because 30 percent of Oracle’s valuation was absurd.

Let’s try to figure out how an investor who shorted SNOW on April 1, 2021 to buy ORCL would have done. It’s a touch tricky because Oracle has paid investors a dividend of $1.28-$1.60 per year during this period while Snowflake hasn’t paid dividends (the company has been losing money every quarter so where would the funds for a dividend come from?). It seems that Google Finance and Yahoo! Finance have been stripped of features so I can’t figure out how to get a custom-date-range chart out of either. If we look at the five-year chart on Google, though, we can see that Snowflake has gone down from $236 to about $111 (Sept 26 price) while Oracle has more than doubled (in nominal dollars) from $72 to $168. In other words, Oracle stock has kept pace with inflation in the cost of buying a house (house prices up 50 percent and mortgage rates up to the point that a monthly mortgage payment has roughly doubled) while Snowflake stock, um, hasn’t.
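
For anyone who wants to check the arithmetic, here’s the back-of-the-envelope version using the share prices quoted above (dividends, borrow fees, and taxes ignored):

```python
# Rough check of the short-SNOW / long-ORCL trade, prices from the text above.
snow_then, snow_now = 236.0, 111.0   # April 2021 -> late Sept 2024
orcl_then, orcl_now = 72.0, 168.0

short_snow_gain = (snow_then - snow_now) / snow_then  # gain on the short leg
long_orcl_gain = (orcl_now - orcl_then) / orcl_then   # gain on the long leg

print(f"Short SNOW: {short_snow_gain:+.0%}")  # +53%
print(f"Long ORCL:  {long_orcl_gain:+.0%}")   # +133%
# A dollar-neutral pairing (short $1 of SNOW per $1 of ORCL) would have
# returned roughly the sum of the two legs: about +186% before dividends.
```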

Since there is more to corporate life than delivering profits to shareholders, let’s also check in on “Snowflake’s commitment to diversity, equity and inclusion” (from the apparently white male CEO):

“While diversity, equity and inclusion has long been a focus for Snowflake, we are committed to doing more. We have the responsibility to lead, and we will do so. Snowflake, under my personal leadership, will undertake a comprehensive review across our company of all of our diversity, equity and inclusion efforts to help ensure that we are taking appropriate steps. We have a Diversity, Equity and Inclusion council at Snowflake, and I am proud of the work they have done.”

“Diversity, equity and inclusion are not causes — they are important pillars that are central in what we do as a company. This important effort continues, and we will do our part to lead.”

So… Snowflake is a leader in diversity, but was not a great stock to buy if you wanted to preserve your purchasing power.

For completeness, let’s also look at the S&P 500 on the five-year chart:

The S&P 500 has gone up from about 4,000 on April 1, 2021 to 5,750 today. In other words, someone who bought and held the S&P 500 would have experienced an erosion of purchasing power during this period (up in nominal dollars; down in real dollars adjusted for the cost of buying a house). The erosion is more severe when one considers that the S&P 500 investor owes capital gains taxes (24 percent federal plus up to 13.3 percent California state tax) on what are entirely fictitious gains (due solely to inflation).
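
The arithmetic, using the index levels above and the assumption (stated earlier) that a monthly mortgage payment roughly doubled over the period:

```python
# Purchasing-power arithmetic behind the claim above; housing is the deflator.
sp_then, sp_now = 4000.0, 5750.0
nominal_gain = sp_now / sp_then - 1            # gain in nominal dollars
housing_cost_multiple = 2.0                    # assumption: payment roughly doubled
real_multiple = (sp_now / sp_then) / housing_cost_multiple

print(f"Nominal gain: {nominal_gain:+.0%}")           # +44%
print(f"House-priced value: {real_multiple:.2f}x")    # ~0.72x: purchasing power down
# The IRS still taxes the nominal +44%: up to 24% federal + 13.3% CA state.
tax = nominal_gain * (0.24 + 0.133)
print(f"Tax owed per dollar invested: ${tax:.2f}")    # ~$0.16 on a 'gain' that is a loss
```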

Full post, including comments

What good are the AI coprocessors in the latest desktop CPUs for users who have standard graphics cards?

Intel is supposedly putting an AI coprocessor into its latest Arrow Lake desktop CPUs, but these don’t meet the 40 trillion operations per second (TOPS) minimum performance required to run Windows 11 Copilot+. Why is valuable chip real estate being taken up by this mental midget, relative to a standard graphics card?

“Intel’s Arrow Lake-S won’t be an AI powerhouse — 13 TOPS NPU is only slightly better than Meteor Lake, much less than Lunar Lake” (Tom’s Hardware, July 9, 2024):

Arrow Lake-S will be the first Intel desktop architecture with a neural processing unit (NPU), but it won’t be as fast as people might expect. @Jaykihn on X reports that Arrow Lake-S will include an NPU that is only slightly more powerful than Meteor Lake’s NPU, featuring just 13 TOPS of AI performance.

Having an NPU in a desktop environment is virtually useless; the main job of an NPU is to provide ultra-high AI performance with a low impact on laptop battery life. Desktops can also be used more often than laptops in conjunction with discrete GPUs, which provide substantially more AI performance than the best NPUs from Intel, AMD, or Qualcomm. For instance, Nvidia’s RTX 40 series graphics cards are capable of up to 1,300 TOPS of AI performance.

The bottom-of-the-line Nvidia RTX 4060 has a claimed performance of “242 AI TOPS” and is available on a card for less than 300 Bidies. Is the idea that a lot of desktop machines are sold without a GPU and that Microsoft and others will eventually find a way to “do AI” with however much NPU power is available within the Arrow Lake CPU? (Software that evolved to require less hardware would be a historic first!)

AMD already has a desktop CPU with distinct NPU and GPU sections, the Ryzen 8000G.

AMD Ryzen 8000G Series processors bring together some of the best, cutting-edge AMD technologies into one unique package; high-performance processing power, intense graphics capabilities, and the first neural processing unit (NPU) on a desktop PC processor.

Based on the powerful “Zen 4” architecture, these new processors offer up to eight cores and 16 threads, 24MB of total cache, and AMD Radeon™ 700M Series graphics. Combining all of this into one chip enables new possibilities for customers, in gaming, work, and much more; without the need to purchase a discrete processor and graphics card, customers can keep their budget lower, while enjoying outstanding performance.

“The Ryzen 7 8700G leads the pack …The processor has a combined AI throughput of 39 TOPS, with 16 TOPS from the NPU.” (source) If the 39 TOPS number is correct, it seems unfortunate given the Windows 11 Copilot+ demand for 40 TOPS.

Why not just build more GPU power and let it be used for graphics or AI depending on what programs are running? The big advantage of the NPU seems to be in power efficiency (source), but why does that matter for a desktop computer? Even at California or Maskachusetts electricity rates, the savings converted to dollars can’t be significant.
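
Some assumed numbers to make the point: suppose the NPU saves 30 watts versus doing the same work on a GPU, the desktop does AI work 8 hours/day, and electricity costs $0.30/kWh (roughly California/Maskachusetts retail rates):

```python
# Sanity check on the power-efficiency argument; all three inputs are assumptions.
watts_saved = 30        # assumed NPU-vs-GPU power difference
hours_per_day = 8       # assumed daily AI workload
rate_per_kwh = 0.30     # assumed CA/MA-style retail electricity rate

kwh_per_year = watts_saved / 1000 * hours_per_day * 365
dollars_per_year = kwh_per_year * rate_per_kwh
print(f"{kwh_per_year:.0f} kWh/year saved -> ${dollars_per_year:.0f}/year")
# ~88 kWh -> ~$26/year: hard to justify the die area on a desktop part.
```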

Full post, including comments

Why doesn’t anyone want to buy Intel’s Gaudi AI processors, supposedly cheaper than Nvidia’s H100?

Intel claims to have a faster and more cost-effective AI system than Nvidia’s H100. It is called “Gaudi”. First, does the name make sense? Antoni Gaudí was famous for doing idiosyncratic creative organic designs. The whole point of Gaudí was that he was the only designer of Gaudí-like buildings. Why would you ever name something that will be mass-produced after this individual outlier? Maybe the name comes from the Israelis from whom Intel acquired the product line (an acquisition that should have been an incredible slam-dunk considering that it was done just before coronapanic set in and a few years before the LLM revolution)?

Intel claims that their Gaudi 3-based systems are faster and more efficient per dollar and per watt than Nvidia’s H100. Yet the sales are insignificant (nextplatform):

Intel said last October that it has a $2 billion pipeline for Gaudi accelerator sales, and added in April this year that it expected to do $500 million in sales of Gaudi accelerators in 2024. That’s nothing compared to the $4 billion in GPU sales AMD is expecting this year (which we think is a low-ball number and $5 billion is more likely) or to the $100 billion or more that Nvidia could take down in datacenter compute – just datacenter GPUs, no networking, no DPUs – this year.

Nvidia’s tools are great, no doubt, but if Intel is truly delivering 2x the performance per dollar, shouldn’t that yield a market share of more than 0.5 percent?

Here’s an article from April 2024 (IEEE Spectrum)… “Intel’s Gaudi 3 Goes After Nvidia: The company predicts victory over H100 in LLMs”:

One more point of comparison is that Gaudi 3 is made using TSMC’s N5 (sometimes called 5-nanometer) process technology. Intel has basically been a process node behind Nvidia for generations of Gaudi, so it’s been stuck comparing its latest chip to one that was at least one rung higher on the Moore’s Law ladder. With Gaudi 3, that part of the race is narrowing slightly. The new chip uses the same process as H100 and H200.

If the Gaudi chips work as claimed, how is Intel getting beaten so badly in the marketplace? I feel as though I turned around for five minutes and a whole forest of oak trees had been toppled by a wind that nobody remarked on. Intel is now the General Motors circa 2009 of the chip world? Or is the better comparison to a zombie movie where someone returns from a two-week vacation to find that his/her/zir/their home town has been taken over? Speaking of zombies, what happens if zombies take over Taiwan? Humanity will have to make do with existing devices because nobody else can make acceptable chips?

Related:

Full post, including comments

Six months with the Apple Vision Pro augmented reality headset

A friend was one of the first to order and receive an Apple Vision Pro headset. He’s had it for about six months. He’s a great programmer and a sophisticated user of technology. I asked him what he’s done with the $3500 device. “I use it to watch streaming movies,” he responded. Does it have a full two hours of battery life? “I don’t know,” he said, “because I always use it plugged in.”

AR is the technology of the future and always will be? Apple claims to be the company that makes everything useful. (They’re bringing us AI next, which is upsetting when you reflect on the fact that the iPhone isn’t smart enough to correctly orient a picture of an English-language museum sign, nor can it fill out an online shopping form with the owner’s name and address, despite having seen hundreds of similar forms that all get filled in with the same info.)

Readers: Have you figured out what to do with one of these?

One possibility: ForeFlight Voyager, a free “playground for aviation enthusiasts” from the flight planning nerds who were acquired by Boeing. It includes real-time traffic. This was purportedly being demoed in the Boeing pavilion at Oshkosh, but I didn’t see anyone with the headset on. The ForeFlight folks were happy to talk about it, but didn’t offer to demonstrate it. I wonder if it is too cumbersome to get a new user into and out of a Vision Pro. Or maybe people throw up as soon as they are in the VR world?

Full post, including comments

How were you CrowdStruck yesterday?

I felt sorry for myself on Thursday because Spirit was four hours late FLL to ORD (impressive considering that they had no mechanical or weather problems). On Friday, however, CrowdStrike managed to disable the entire U.S. airline industry. Can we agree that there should be a new word in English: CrowdStruck, meaning a systemic meltdown caused by a diversity, equity, and inclusion-oriented enterprise? From CrowdStrike’s web site:

It seems fair to say that they achieved their goal of “challenging the status quo” (the status quo being servers that had been up and running for years).

Considering that the U.S. Secret Service was apparently more focused on DEI than on keeping Donald Trump alive, the word could be used in the following sentence: “Donald Trump might need a new ear after being CrowdStruck in Pennsylvania.” (Loosely related… I received the photo below from a deeply closeted Trump-supporting academic.)

Readers: Please share your stories about being CrowdStruck in the comments. How did you experience the meltdown of IT services (except for Elon Musk’s X!)?

My own CrowdStruck experience was limited to not being able to check in at the Doubletree here in Milwaukee. They couldn’t make keys for any new guests all day and had to send employees up to open doors for any guest who wanted to get into a room. They finally got their systems back by around 9 pm and will spend the weekend catching up.

Speaking of Milwaukee, here are some of the billboards that the righteous paid for on a highway leading into town:

The Third Ward and some other parts of town that we’ve seen so far are quite pleasant. I can understand why some Chicagoans are considering fleeing here (though I can’t understand why or how they’d stay through the winter!).

Full post, including comments

Which mapping app can avoid narrow roads in Europe? And which can provide walking directions that avoid dangerous neighborhoods in the U.S.?

We used Google Maps in Portugal. It made quite a few absurdly bad routing decisions. To save a theoretical minute or two it would send our Mercedes E class sedan down roads narrower than a North Carolina dentist’s driveway. We were constantly terrified that a car would appear coming the opposite direction and that we’d be forced to stop suddenly and then back up to a rare section wide enough for two cars to pass. When shown these routes, the locals said that they would never drive along those roads for transportation despite most of them having narrower cars and better driving skills than a Floridian lulled into complacency by textbook highway engineering. Below is a segment from a suggested Google Maps route for our rental car (#2 after the first E class melted down). I don’t think that our Sixt rental agreement says anything about driving up or down stairs, but the road was definitely narrower than the car:

Where was this road, you might ask? In one of my favorite towns in Portugal: Covide!

Is there a mapping app that is smarter about getting around Europe without scraping?

Related question for the U.S.: is there an app that will calculate walking directions to avoid dangerous neighborhoods? Or calculate directions and score the walk with a danger level? This tweet from a former Googler suggests that Google will never do it:

(His/her/zir/their reasoning is that sending pedestrians via a scenic route will lead to “spatial inequality” because the nicer areas tend to be richer.)

WalkSafe seems to have the crime rate information, but I’m not sure that it will provide turn-by-turn directions to a pedestrian.
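
For the programmers: crime-aware routing is conceptually just shortest-path with penalized edge weights. A toy sketch (not WalkSafe’s actual method; the streets and danger scores are invented):

```python
# Illustrative sketch: inflate each street segment's walking time by a danger
# score so that safer detours win when crime data is available.
import networkx as nx

G = nx.Graph()
# add_edge(u, v, minutes=walking time, danger=per-segment crime score 0..1)
G.add_edge("home", "park", minutes=5, danger=0.8)  # short but sketchy
G.add_edge("home", "cafe", minutes=4, danger=0.1)
G.add_edge("cafe", "park", minutes=4, danger=0.1)  # longer but safer

PENALTY = 10  # extra minutes a user will accept to avoid maximum danger

def cost(u, v, attrs):
    """Effective edge weight: walking time plus a danger penalty."""
    return attrs["minutes"] + PENALTY * attrs["danger"]

route = nx.shortest_path(G, "home", "park", weight=cost)
print(route)  # ['home', 'cafe', 'park']: the 8-minute safe walk beats the 5-minute risky one
```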

Here’s a street in front of an AirBnB that we rented in Amarante, Portugal (very pleasant town!):

(The host said to navigate to a nearby parking lot and walk the rest of the way.)

I don’t have a good illustration of a crime-ridden street in Portugal because the country is one of the safest in the world and every tourist attraction seems to be in a safe area.

Full post, including comments

Why can’t we tell Gmail to delete messages from specified senders after 30 days?

After nearly 20 years, my Gmail inbox is cluttered with over 90 GB of crud, including more than 48,000 unread messages.

The majority of my non-spam email messages are irrelevant after a certain number of days, e.g., calendar notifications from Google Calendar, USPS “what’s going to be in your mailbox today”, “daily news summary” from a newspaper. Yet there doesn’t seem to be any convenient way to tell Gmail to delete messages from a particular sender after between 1 and 30 days. Isn’t this an obvious feature to have added? I recognize that Google is in the business of selling storage plans, but on the other hand keeping thousands of TB of useless timed-out alerts in persistent storage doesn’t seem like the best business to be in. If Google wants people to burn through their storage tiers and pay more, why not have this kind of feature and simply lower the thresholds in GB?
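
The feature Google won’t build is scriptable today through the public Gmail API. A sketch, assuming the standard OAuth setup and a `service` object from google-api-python-client (the sender address below is hypothetical):

```python
# Sketch of the missing feature via the public Gmail API.
#   pip install google-api-python-client google-auth-oauthlib
from googleapiclient.discovery import build

def delete_old_messages(service, sender: str, days: int = 30) -> int:
    """Permanently delete messages from `sender` older than `days` days."""
    query = f"from:{sender} older_than:{days}d"  # Gmail's native search syntax
    resp = service.users().messages().list(userId="me", q=query).execute()
    ids = [m["id"] for m in resp.get("messages", [])]  # first page only; loop on nextPageToken for more
    if ids:  # batchDelete requires the full https://mail.google.com/ scope
        service.users().messages().batchDelete(userId="me", body={"ids": ids}).execute()
    return len(ids)

# service = build("gmail", "v1", credentials=creds)  # creds from the OAuth flow
# delete_old_messages(service, "alerts@example-airline.com", days=30)  # hypothetical sender
```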

If Google’s Gemini is so smart, in fact, why isn’t it smart enough to offer an auto-delete after a certain number of days for emails such as the one below?

How many people want to save checked bag tracking information for years?

Since the human programmers at Gmail didn’t think to add this feature, I guess this post boils down to “Why isn’t AI smart enough to clear completely useless email messages out of our inboxes?”

A few other ideas that would help us clear out our inboxes…

  • a one-button “delete everything from this sender”
  • a system smart enough to delete every post-purchase follow-up survey (buying online is no longer efficient because there will be 5+ emails after every purchase asking the consumer to rate his/her/zir/their purchase); see below for a survey that United sent me after my first post-coronapanic commercial airline trip (I never opened it)
Full post, including comments

Oversupply of mediocre computer nerds in the midst of the AI Bubble

All previous tools that were hyped as making programmers more productive had no effect or a positive effect on the demand for computer programmers. I would have thought that we would be in a golden age for young computer nerds as every company on the planet seeks to “add AI”, e.g., “Joe’s Drywall and Paint, now with AI”.

The Wall Street Journal, however, says that there is a glut of graduates… “Computer-Science Majors Graduate Into a World of Fewer Opportunities”:

Note the hateful depiction of a non-Black non-female not-obviously-2SLGBTQQIA+ computer wizard (NYT would never make this mistake). Also note “Those from top schools can still get job”. In other words, it is the mediocre computer nerds who can’t get hired. Either there has been a huge boom in the number of people who are passionate about computer nerdism or a lot of kids have gone into CS, despite a lack of interest in staring at a screen, because someone told them that it was a sure path to a solid career (this was my experience teaching Information Technology; 90 percent of the students were not even vaguely curious about the subject, e.g., curious enough to search outside of the materials assigned):

My guess is that, due to lack of interest/passion, 70 percent of CS majors shouldn’t have majored in CS and won’t have lasting careers in CS. They are at best mediocre now and will just get worse as they forget what they were supposed to have learned.

Almost all of the news in the article is bad:

To be sure, comp-sci majors from top-tier schools can still get jobs. Pay, projected to be at about $75,000, is at the high end of majors reviewed by the National Association of Colleges and Employers, or NACE. They are just not all going to Facebook or Google.

“Job seekers need to reset their expectations,” said Tim Herbert, chief research officer at CompTIA, a trade group that follows the tech sector. “New grads may need to adjust where they’re willing to work, in some cases what salary, perks or signing bonus they’ll receive, and the type of firm they’ll work for.”

And while big tech companies are hiring for AI-related jobs, Herbert said, many of those positions require more experience than a new grad would have.

Salaries for this year’s graduates in computer science are expected to be 2.7% higher than last year’s, the smallest increase of eight fields reviewed by NACE.

In the past 18 months, job growth has remained flat for software publishers, a group of employers that includes software developers, according to the Labor Department. On the student jobs platform Handshake, the number of full-time jobs recently posted for tech companies is down 30% from the year-ago period.

$75,000/year?!?! That’s $55,000 per year after Joe Biden’s and Gavin Newsom’s shares (online calculator). About $12,000 of that after-tax $55,000 will be consumed paying for the car that is required to get to the job (AAA and CNBC). Salaries are 2.7 percent higher than a year ago? That’s a pay cut if you adjust for the inflation rate in any part of the country where (a) people want to live, and (b) there are jobs.

I’m wondering if the big problem is in bold. Four years of paying tuition should prepare a smart young person for almost any job, including “AI-related” (if not at OpenAI then at some company that is planning to use an LLM via an API to OpenAI or similar). In the late 1990s, colleges weren’t teaching “How to build an Amazon or eBay” (so we developed a class that did, and a textbook) even though it was obvious that employers wanted graduates who could build database-backed web sites. Could it be that the CS curriculum is totally stale once again? Very few of the professors have what it would take to get hired at OpenAI and, therefore, they can’t teach the students what it would take to get hired at OpenAI.
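
For perspective, here’s roughly all the code it takes to “use an LLM via an API” (a sketch with the openai Python package; the model name and prompt are placeholders):

```python
# The kind of glue code "add AI" products are built from; model/prompt are placeholders.
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()

def summarize(ticket_text: str) -> str:
    """Summarize a customer-support ticket in one sentence."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; pick whatever model is current
        messages=[
            {"role": "system", "content": "Summarize customer tickets in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

# print(summarize("My checked bag went to ORD while I went to FLL..."))
```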

I think this confirms my 2018 theory that data science is what young people should study and that data science restores the fun of computer programming that we enjoyed in the pre-bloat days.

Full post, including comments