Our AI Overlords at the railroad crossing

On my way back downtown from the Fort Worth Stockyards, Uber pinged me. The company’s giant AI brain was concerned that I had been stopped for a few minutes. Our GPS position showed that we were in the middle of a road… at a railroad crossing.

Speaking of AI, we have a GE Monogram built-in microwave that has been enhanced with WiFi connectivity and an app. If you request 30 seconds of microwaving and remain within Bluetooth range of the oven, your phone will loudly announce that the cooking process has completed.

Full post, including comments

Could robots weave better tapestries than humans ever have?

One of America’s greatest art museums, the Kimbell in Fort Worth, is showing seven enormous tapestries right now. These depict the Battle of Pavia (1525) and were made roughly 500 years ago from wool, silk, gold, and silver thread. Each one is about 30′ wide and 14′ high, perfect for the Palm Beach County starter home. The curators praise the human artists behind these works, but I’m wondering whether robots couldn’t do a better job in many ways and thus revive this form of art.

(If you miss them in Fort Worth, you can see them while stocking up on fentanyl in San Francisco beginning in October and eventually back at their home in Naples, Italy (leave everything that you value in the hotel safe!)). Here are a few photos to give you a sense of the scale and detail:

Wouldn’t we all rather have walls like these than imaginative answers to simple household questions?

To revive the art form, a computer program would need to be able to take in multiple photographs (the typical tapestry shows multiple scenes), come up with a cartoon, and then pick fabric to match the colors in the underlying photographs. How could robots do a better job than humans? Robots have more patience than humans and could perhaps work at a higher resolution. Modern dyes offer a broader range of colors, and plastic thread could be added to the palette.
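
To make the color-matching step concrete, here is a minimal sketch (Python; the thread palette, its RGB values, and the file names are invented for illustration) that quantizes a photograph to the nearest available thread color, a crude first pass at a weavable cartoon:

```python
# Quantize a photograph to a fixed palette of (hypothetical) thread colors.
from PIL import Image
import numpy as np

THREADS = {                      # (R, G, B) of imagined dyed threads
    "wool_crimson": (140, 30, 40),
    "wool_umber": (90, 60, 30),
    "silk_ivory": (235, 225, 200),
    "silk_azure": (60, 90, 150),
    "gold_thread": (200, 170, 80),
    "silver_thread": (190, 190, 200),
}

img = np.asarray(Image.open("battle_scene.jpg").convert("RGB"), dtype=float)
palette = np.array(list(THREADS.values()), dtype=float)

# For each pixel, pick the thread whose color is nearest in RGB space.
dists = ((img[:, :, None, :] - palette[None, None, :, :]) ** 2).sum(axis=-1)
cartoon = palette[dists.argmin(axis=-1)].astype(np.uint8)

Image.fromarray(cartoon).save("cartoon.png")
```

A serious version would match in a perceptual space such as CIELAB and add dithering or hatching to approximate the blended shading that the Pavia weavers got by interleaving threads.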

There are some companies that purport to make tapestry-like art from photographs, but they do it by printing rather than weaving.

What else did I see at the Kimbell? Readers would be disappointed if I didn’t provide a gift shop tour…

The building itself is a Louis Kahn-designed landmark:

The lighting was a bit dim, but I managed to capture a Follower of Science (concerned enough about SARS-CoV-2 to wear a mask, but not concerned enough about SARS-CoV-2 to shave his/her/zir/their beard):

The modern art museum across the street is also worthwhile and provides clear instructions for making your own $1 million artwork at home:

The Amon Carter Museum, famous for its collection of Remington and Russell, is a 5-minute walk away (might feel longer in the 100+ degree heat).

Texas is not as rich a location for the masketologist as California, New York, or Massachusetts, but I still managed to find people who have elected to do jobs that inevitably expose them to thousands of potentially infected humans per day and who attempt to avoid contracting a respiratory virus by wearing simple masks:

A sticker for sale at DFW:

Full post, including comments

ChatGPT 4o tackles the challenge of AC ducts sweating in an attic

The latest and greatest Florida houses are designed with closed-cell spray foam insulation underneath the roof. This has the disadvantage that roof leaks are difficult to pinpoint, since the foam prevents water from dripping down directly beneath the failed section of roof, but it seals the house against humidity: any attic space essentially becomes part of the air-conditioned and dehumidified interior of the house.

Our house, sadly, dates to 2003. We thus have AC ducts in unconditioned attic spaces, an arrangement that seems to work fine down on the ground floor, but when we poke our heads up into the attic we find that it is moderately hot (85F) and extremely humid (80-85 percent relative humidity). The attic has soffit vents all around, which seem to do a good job of preventing super hot temps from developing, but they also allow humidity to intrude.
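
A duct “sweats” when its outside surface falls below the dew point of the surrounding air, so it is worth computing what that dew point actually is. A quick sketch (Python, using the Magnus approximation; the 82 percent relative humidity input is just the middle of the measured range):

```python
# Dew point of the attic air via the Magnus approximation.
# Inputs assumed: 85F and 82% RH, roughly the conditions measured above.
import math

def dew_point_f(temp_f, rh_percent):
    t_c = (temp_f - 32) / 1.8            # convert to Celsius
    a, b = 17.62, 243.12                 # Magnus coefficients
    gamma = math.log(rh_percent / 100) + a * t_c / (b + t_c)
    return (b * gamma / (a - gamma)) * 1.8 + 32

print(f"{dew_point_f(85, 82):.1f}F")     # ~78.9F
```

Anything in that attic colder than about 79F, e.g., a 50-something-degree duct with thin insulation, will condense water.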

Current Florida code requires ducts insulated to R-8 inside unconditioned spaces. We have R-6 ducts. The air coming out of an air handler is typically about 20 degrees colder than the thermostat setting and we’ve measured about 52 degrees at a ceiling register. So let’s say that the duct temp is 50 degrees.

The Interweb doesn’t seem to have a simple formula for determining the outside temp of an R-6 duct given the inside temp. ChatGPT 4o, however, comes up with one:

Notice that, with this formula, the outside of the duct gets colder and colder with increased R value. A perfectly insulated duct, for example, would have an outside temperature exactly equal to the inside temp, a very curious result!
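
For reference, a back-of-the-envelope steady-state model treats the duct wall as two thermal resistances in series: the insulation (R_duct) plus a still-air film on the outside surface (R_film, about 0.68 in US units per standard handbook values). Equating the heat flux through both layers gives

$$T_{surface} = T_{attic} - \frac{R_{film}}{R_{film} + R_{duct}}\,(T_{attic} - T_{duct})$$

whose limits run opposite to ChatGPT’s: as R_duct grows, the outside surface approaches attic ambient, and it is the bare duct (R_duct → 0) whose surface sits at the cold inside temperature.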

New prompt:

What if we increase the duct insulation to R-30? What would the outside temperature of the duct be? (the air inside the duct is still at 50 degrees)

Sheetrock has an R value of about 0.5, supposedly. Let’s see what happens when we plug that in:

What if we reduce the duct insulation to an R value of 0.5? What would the outside surface temperature of the duct then be?

Our future AI overlord has determined that putting cold air inside a duct will raise the temperature of the outside of the duct above the ambient temperature of the attic.
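
For what it’s worth, plugging the post’s numbers into the series-resistance model (attic at 85F, duct air at 50F, an assumed R-0.68 outside air film, and the roughly 79F dew point computed earlier):

```python
# Outside-surface temperature of the duct at various insulation levels,
# using two thermal resistances in series (assumed R-0.68 still-air film).
DEW_POINT_F = 79.0   # attic dew point from the Magnus sketch above

def surface_temp_f(t_attic, t_duct, r_duct, r_film=0.68):
    """Steady state: the same heat flux crosses the film and the insulation."""
    return t_attic - (t_attic - t_duct) * r_film / (r_film + r_duct)

for r_duct in (0.5, 6, 8, 30):
    ts = surface_temp_f(85.0, 50.0, r_duct)
    verdict = "sweats" if ts < DEW_POINT_F else "stays dry"
    print(f"R-{r_duct:>4}: surface {ts:.1f}F ({verdict})")
```

On these assumptions an R-0.5 duct surface sits around 65F (soaked), while R-6 keeps the surface near 81F, only a couple of degrees above the dew point, which is why any spot with compressed insulation or a torn vapor jacket can still drip.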

What is the solution to the sweating duct problem, you might ask? A quarter-baked approach, from the energy and building envelope expert who did our Manual J calculation:

You probably have too much ventilation in your attic. I had a similar problem in my house. I blocked off more than half of the soffit vents. The temperature in the attic went up a few degrees while the humidity came down dramatically because not as much humid air was coming in from the outside.

(He didn’t say this explicitly, but I am guessing that the attic then stays relatively dry because it is mostly exposed to dried-out, cooled-off air leaking up from the conditioned space below.)

The half-baked approach:

Install a dehumidifier with a fresh air inlet on one or more of the house AC systems. Each dehumidifier can bring in at least 100 cfm of fresh air, thus creating a positive pressure within the conditioned spaces of the house. The result will be conditioned air being pushed up into the attic and, eventually, being exhausted through the soffit vents. Expensively dehumidified air goes out of the house via the attic instead of humid air coming into the attic. A dehumidifier consumes about 700 watts of power, so running continuously it will use 0.7 kW × 8,760 hours ≈ 6,100 kWh, i.e., roughly $1,000 per year in electricity at 15 cents/kWh (per dehumidifier).

The fully-baked approach:

  1. Remove all ceilings on the top floor of the house and the fiberglass insulation on top of those ceilings.
  2. Bring in a spray foam company to block off the soffit vents and then spray foam over them and the entire underside of the roof.
  3. Have the AC contractor put some small supplies and a return into the attic so that there is nowhere for humid air to hide (humid air is lighter than dry air, so it will tend to rise to the top of a house).
  4. Have a drywall contractor come back to put new ceilings up.
  5. Paint.

Circling back to artificial intelligence, as embodied by the latest paid ChatGPT model (4o)… I’m impressed with how confident and erudite the machine sounds when making these simple physics calculations!

Full post, including comments

What good are the AI coprocessors in the latest desktop CPUs for users who have standard graphics cards?

Intel is supposedly putting an AI coprocessor into its latest Arrow Lake desktop CPUs, but these don’t meet the 40 trillion operations per second (TOPS) minimum performance required to run Windows 11 Copilot+. Why is valuable chip real estate being taken up by this mental midget, relative to a standard graphics card?

“Intel’s Arrow Lake-S won’t be an AI powerhouse — 13 TOPS NPU is only slightly better than Meteor Lake, much less than Lunar Lake” (Tom’s Hardware, July 9, 2024):

Arrow Lake-S will be the first Intel desktop architecture with a neural processing unit (NPU), but it won’t be as fast as people might expect. @Jaykihn on X reports that Arrow Lake-S will include an NPU that is only slightly more powerful than Meteor Lake’s NPU, featuring just 13 TOPS of AI performance.

Having an NPU in a desktop environment is virtually useless; the main job of an NPU is to provide ultra-high AI performance with a low impact on laptop battery life. Desktops can also be used more often than laptops in conjunction with discrete GPUs, which provide substantially more AI performance than the best NPUs from Intel, AMD, or Qualcomm. For instance, Nvidia’s RTX 40 series graphics cards are capable of up to 1,300 TOPS of AI performance.

The bottom-of-the-line Nvidia RTX 4060 has a claimed performance of “242 AI TOPS” and is available on a card for less than 300 Bidies. Is the idea that a lot of desktop machines are sold without a GPU and that Microsoft and others will eventually find a way to “do AI” with however much NPU power is available within the Arrow Lake CPU? (Software that evolved to require less hardware would be a historic first!)

AMD already has a desktop CPU with distinct NPU and GPU sections, the Ryzen 8000G.

AMD Ryzen 8000G Series processors bring together some of the best, cutting-edge AMD technologies into one unique package; high-performance processing power, intense graphics capabilities, and the first neural processing unit (NPU) on a desktop PC processor.

Based on the powerful “Zen 4” architecture, these new processors offer up to eight cores and 16 threads, 24MB of total cache, and AMD Radeon™ 700M Series graphics. Combining all of this into one chip enables new possibilities for customers, in gaming, work, and much more; without the need to purchase a discrete processor and graphics card, customers can keep their budget lower, while enjoying outstanding performance.

“The Ryzen 7 8700G leads the pack … The processor has a combined AI throughput of 39 TOPS, with 16 TOPS from the NPU.” (source) If the 39 TOPS number is correct, it seems unfortunate given the Windows 11 Copilot+ demand for 40 TOPS.

Why not just build more GPU power and let it be used for graphics or AI depending on what programs are running? The big advantage of the NPU seems to be in power efficiency (source), but why does that matter for a desktop computer? Even at California or Maskachusetts electricity rates, the savings converted to dollars can’t be significant.
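
To put rough numbers on that, here is a sketch in which every input is assumed: say the NPU saves 30 watts versus running the same light inference load on a GPU, for two hours of AI workload per day, at a high-end residential rate of 30 cents/kWh:

```python
# Rough annual dollar value of an NPU's power efficiency on a desktop.
# All three inputs are assumptions for illustration.
watts_saved = 30        # NPU vs. GPU for the same light inference load
hours_per_day = 2       # daily AI workload on the desktop
rate_per_kwh = 0.30     # California-ish residential electricity rate

kwh_per_year = watts_saved / 1000 * hours_per_day * 365
print(f"${kwh_per_year * rate_per_kwh:.2f} per year")   # ~$6.57
```

Even tripling the wattage and usage inputs leaves the savings well under $100 per year, i.e., not a meaningful line item for a desktop owner.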

Full post, including comments

If we’re on the cusp of the AI golden age, why can’t web browsers fill out forms for us?

We are informed that AI is going to transform our daily lives. Web browsers are made by companies that are supposedly at the forefront of AI/LLM research and development. Why isn’t a browser smart enough to fill out the entire form below? It has seen fields with similar labels filled in hundreds or thousands of times. Why doesn’t the browser fill it out automatically and then invite the user to edit or choose “fill it out with my office address instead”?

Google Chrome, at least, will suggest values for individual fields. Why won’t it take the next step? Even the least competent human assistant should be able to fill in the above form on behalf of a boss. Why can’t AIs in which $billions have been invested do it?
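
The mechanics don’t seem hard. Here is a toy sketch of whole-form autofill, in which the saved profile, the keyword table (standing in for an LLM’s label matching), and all names are hypothetical:

```python
# Toy whole-form autofill: propose a value for every field label, then let
# the user edit before submitting. A real browser would presumably use an
# LLM for the label matching; a keyword table stands in for it here.
PROFILE = {
    "first_name": "Jane", "last_name": "Doe",
    "street": "123 Main St", "city": "Jupiter", "state": "FL", "zip": "33458",
    "email": "jane@example.com", "phone": "561-555-0100",
}

LABEL_HINTS = {   # keyword found in the field label -> profile key
    "first": "first_name", "last": "last_name", "street": "street",
    "address": "street", "city": "city", "state": "state",
    "zip": "zip", "postal": "zip", "email": "email", "phone": "phone",
}

def suggest(form_labels):
    out = {}
    for label in form_labels:
        key = next((v for k, v in LABEL_HINTS.items() if k in label.lower()), None)
        out[label] = PROFILE.get(key, "")
    return out

print(suggest(["First Name", "Last Name", "Street Address", "ZIP / Postal Code"]))
```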

Full post, including comments

Why doesn’t anyone want to buy Intel’s Gaudi AI processors, supposedly cheaper than Nvidia’s H100?

Intel claims to have a faster and more cost-effective AI system than Nvidia’s H100. It is called “Gaudi”. First, does the name make sense? Antoni Gaudí was famous for doing idiosyncratic creative organic designs. The whole point of Gaudí was that he was the only designer of Gaudí-like buildings. Why would you ever name something that will be mass-produced after this individual outlier? Maybe the name comes from the Israelis from whom Intel acquired the product line (an acquisition that should have been an incredible slam-dunk considering that it was done just before coronapanic set in and a few years before the LLM revolution)?

Intel claims that their Gaudi 3-based systems are faster and more efficient per dollar and per watt than Nvidia’s H100. Yet the sales are insignificant (nextplatform):

Intel said last October that it has a $2 billion pipeline for Gaudi accelerator sales, and added in April this year that it expected to do $500 million in sales of Gaudi accelerators in 2024. That’s nothing compared to the $4 billion in GPU sales AMD is expecting this year (which we think is a low-ball number and $5 billion is more likely) or to the $100 billion or more that Nvidia could take down in datacenter compute – just datacenter GPUs, no networking, no DPUs – this year.

Nvidia’s tools are great, no doubt, but if Intel is truly delivering 2x the performance per dollar, shouldn’t that yield a market share of more than 0.5 percent?

Here’s an article from April 2024 (IEEE Spectrum)… “Intel’s Gaudi 3 Goes After Nvidia: The company predicts victory over H100 in LLMs”:

One more point of comparison is that Gaudi 3 is made using TSMC’s N5 (sometimes called 5-nanometer) process technology. Intel has basically been a process node behind Nvidia for generations of Gaudi, so it’s been stuck comparing its latest chip to one that was at least one rung higher on the Moore’s Law ladder. With Gaudi 3, that part of the race is narrowing slightly. The new chip uses the same process as H100 and H200.

If the Gaudi chips work as claimed, how is Intel getting beaten so badly in the marketplace? I feel as though I turned around for five minutes and a whole forest of oak trees had been toppled by a wind that nobody remarked on. Intel is now the General Motors circa 2009 of the chip world? Or is the better comparison to a zombie movie where someone returns from a two-week vacation to find that his/her/zir/their home town has been taken over? Speaking of zombies, what happens if zombies take over Taiwan? Humanity will have to make do with existing devices because nobody else can make acceptable chips?


Full post, including comments

How will NVIDIA avoid a Google-style Vesting in Peace syndrome?

NVIDIA is the world’s most valuable company (P/E ratio of 75; compare to less than 40 for Microsoft), which also means that nearly every NVIDIA employee is rich. A lot of people slack off when they become rich. Google ended up with quite a few “vesting in peace” workers who didn’t contribute much. It didn’t matter because it was too challenging for anyone else to break into the search and advertising businesses. But suppose that another tech company assembles a group of not-yet-rich hardware and software people. Hungry for success, these people build some competitive GPUs and the biggest NVIDIA customers merely have to recompile their software in order to use the alternative GPUs that are marketed at a much lower price.

How can NVIDIA’s spectacular success not lead to marketplace slippage due to an excessively rich and complacent workforce? Is the secret that NVIDIA can get money at such a low cost compared to competitors that it can afford to spend 2-3X as much on the next GPU and still make crazy profits? I find it tough to understand how Intel, which for years has made GPUs inside its CPUs, can’t develop something that AI companies want to buy. Intel has a nice web page explaining how great their data center GPUs are for AI:

Why can’t Intel sell these? Are the designs so bad that they couldn’t compete with NVIDIA even if sold at Intel’s cost?

Full post, including comments

Bachelor’s in AI Gold Rush degree program

A college degree is purportedly important preparation for multiple aspects of life. Universities, therefore, require students to take classes well outside their major. Extracurricular activities are encouraged, such as sports, pro-Hamas demonstrations, drinking alcohol (how is that supposed to make immigrants from Gaza feel welcome?), casual sex, theater, etc. Students are forced to take about half the year off because the faculty and staff don’t want to work summers (defined as May through early September), January, or anywhere near various holidays. There is no urgency to earning a degree, so why not stretch it out for four years?

What if there were urgency to getting into the workforce? Here’s the company that sold shovels to the crypto miners and now sells shovels to the AI miners (May 23):

It was a lot better to start work at NVIDIA in June 2022 than in June 2024. Consider a Stanford graduate who could have finished in 2022, but instead didn’t finish until 2024. He/she/ze/they took Gender and Gender Inequality, Intersectionality: Theory, Methods & Research, and Race and Ethnicity Around the World from Professor Saperstein to round out his/her/zir/their engineering education. Was that worth the $5 million that would have been earned by starting work at NVIDIA in 2022 rather than in 2024 (two years of salary, stock options at $175 instead of at $1000, etc.)?
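
The $5 million figure pencils out under plausible assumptions. In the sketch below, the salary and share count are pure guesses; only the $175 and $1,000 share prices come from the post:

```python
# One way the forgone "$5 million" could pencil out (salary and share
# count assumed; per-share prices are the ones cited above).
salary = 250_000                      # assumed annual cash compensation
shares = 5_500                        # assumed RSUs vesting over two years
price_2022, price_2024 = 175, 1_000   # share prices cited in the post

forgone = 2 * salary + shares * (price_2024 - price_2022)
print(f"${forgone:,}")                # $5,037,500 -- about $5 million
```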

How about a “Bachelor’s in AI Gold Rush” degree program that would prepare students to build and use LLMs? It would be a 2-year program with no breaks so that people could graduate and start their jobs at OpenAI. There would be no requirement to take comparative victimhood classes (i.e., humanities). There would be no foundational math or science unless directly related to LLM construction (a lot of linear algebra?). There would be no pretense of preparing students for anything other than working at OpenAI or a similar enterprise.

Students will graduate at age 20. What if the AI gold rush is over when they turn 28? (Maybe not because AI turns out to be useless or even over-hyped, but only because the industry matures or the LLMs start building new LLMs all by themselves.) They can go back to college and take all of that “might be useful” foundational stuff that they missed, e.g., back to Harvard to study Queering the South:

(A friend’s daughter actually took the above class; she was most recently living in Harvard’s pro-Hamas encampment.) As a follow-on:

If the 28-year-old made so much money in the AI gold rush that he/she/ze/they wants to “give back” by becoming a school teacher, he/she/ze/they can get a Master’s in Education at Harvard and take “Queering Education”:

By the end of the module, students should be able to: (1) Talk comfortably about queer theory and how it can inform our understanding of schools and schooling; (2) identify specific strategies that educators at various levels might use to support students in negotiating gender and sexuality norms; (3) identify tools that schools can use to build positive, nurturing environments, which open up possibilities for complex gender and sexual identity development; and (4) analyze and evaluate a variety of school practices, curricula, programs, and policies that seek to support healthy gender and sexual identity development for U.S. children and adolescents.


Full post, including comments

Where’s the AI customer service dividend?

ChatGPT (launched November 2022) and similar LLMs were supposed to make customer service agents more efficient. Has this happened? From what I can tell, the opposite has occurred. If I call a company that is supposed to be providing service, the inevitable greeting is “we are experiencing higher than normal call volume” (i.e., demand for service exceeds agent capacity, despite the agents now being augmented with AI). When an agent does pick up, he/she/ze/they immediately asks, “What is your phone number?” In other words, the smartest computer systems ever devised cannot use caller ID.

(If Trump gets elected this fall and then, as predicted by the New York Times and CNN, ends American democracy, I hope that he will issue a decree that companies aren’t allowed to announce “we are experiencing higher than normal call volume” more than 5 percent of the time.)

My favorite company for customer service is Hertz. They recently hit my credit card for $262.41 for a 24-hour 29-mile rental of a compact Ford Edge in El Paso. I never signed anything agreeing to pay $262 and their app was quoting $76 including all fees (I picked up the car at an FBO so there wasn’t the full array of Hertz computer systems on site). When I called Hertz to try to figure out why they charged so much I learned that they’ve eliminated the option of talking to a human regarding any bill. A human will be happy to make a reservation, but not to answer questions about what could be a substantial credit card charge. Hertz funnels all questions about past rentals to a web form, which they say they will respond to within a few days. Of course, my first inquiry about the bill yielded no response. My second inquiry, a week later, yielded an “everything was done correctly” response. I finally pinged them on Twitter private message. They admitted that they had no signed paperwork with an agreement to pay $262 and issued a refund of about half the money.

Circling back to AI… if LLMs make customer service agents more efficient, why has Hertz needed to shut down phone customer service? And if LLMs are brilliant at handling text why isn’t Hertz able to respond to contact form inquiries quickly?

Here’s an example pitch from the AI hucksters:

Full post, including comments

Oversupply of mediocre computer nerds in the midst of the AI Bubble

All previous tools that were hyped as making programmers more productive had no effect or a positive effect on the demand for computer programmers. I would have thought that we would be in a golden age for young computer nerds as every company on the planet seeks to “add AI”, e.g., “Joe’s Drywall and Paint, now with AI”.

The Wall Street Journal, however, says that there is a glut of graduates… “Computer-Science Majors Graduate Into a World of Fewer Opportunities”:

Note the hateful depiction of a non-Black non-female not-obviously-2SLGBTQQIA+ computer wizard (NYT would never make this mistake). Also note “Those from top schools can still get jobs”. In other words, it is the mediocre computer nerds who can’t get hired. Either there has been a huge boom in the number of people who are passionate about computer nerdism or a lot of kids have gone into CS, despite a lack of interest in staring at a screen, because someone told them that it was a sure path to a solid career (this was my experience teaching Information Technology; 90 percent of the students were not even vaguely curious about the subject, i.e., not even curious enough to search outside of the assigned materials):

My guess is that, due to lack of interest/passion, 70 percent of CS majors shouldn’t have majored in CS and won’t have lasting careers in CS. They are at best mediocre now and will just get worse as they forget what they were supposed to have learned.

Almost all of the news in the article is bad:

To be sure, comp-sci majors from top-tier schools can still get jobs. Pay, projected to be at about $75,000, is at the high end of majors reviewed by the National Association of Colleges and Employers, or NACE. They are just not all going to Facebook or Google.

“Job seekers need to reset their expectations,” said Tim Herbert, chief research officer at CompTIA, a trade group that follows the tech sector. “New grads may need to adjust where they’re willing to work, in some cases what salary, perks or signing bonus they’ll receive, and the type of firm they’ll work for.”

And while big tech companies are hiring for AI-related jobs, Herbert said, many of those positions require more experience than a new grad would have.

Salaries for this year’s graduates in computer science are expected to be 2.7% higher than last year’s, the smallest increase of eight fields reviewed by NACE.

In the past 18 months, job growth has remained flat for software publishers, a group of employers that includes software developers, according to the Labor Department. On the student jobs platform Handshake, the number of full-time jobs recently posted for tech companies is down 30% from the year-ago period.

$75,000/year?!?! That’s $55,000 per year after Joe Biden’s and Gavin Newsom’s shares (online calculator). About $12,000 of that after-tax $55,000 will be consumed paying for the car that is required to get to the job (AAA and CNBC). Salaries are 2.7 percent higher than a year ago? That’s a pay cut if you adjust for the inflation rate in any part of the country where (a) people want to live, and (b) there are jobs.
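
The pay-cut arithmetic is simple enough to check, assuming something like 3.5 percent inflation in the metros where the jobs are:

```python
# Real change in starting salary: 2.7% nominal raise vs. assumed 3.5% inflation.
nominal_raise, inflation = 0.027, 0.035
real_change = (1 + nominal_raise) / (1 + inflation) - 1
print(f"{real_change:+.2%}")   # -0.77%: a real pay cut
```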

I’m wondering if the big problem is Herbert’s point above: AI-related positions require more experience than a new grad would have. Four years of paying tuition should prepare a smart young person for almost any job, including “AI-related” (if not at OpenAI then at some company that is planning to use an LLM via an API to OpenAI or similar). In the late 1990s, colleges weren’t teaching “How to build an Amazon or eBay” (so we developed a class that did, and a textbook) even though it was obvious that employers wanted graduates who could build database-backed web sites. Could it be that the CS curriculum is totally stale once again? Very few of the professors have what it would take to get hired at OpenAI and, therefore, they can’t teach the students what it would take to get hired at OpenAI.

I think this confirms my 2018 theory that data science is what young people should study and that data science restores the fun of computer programming that we enjoyed in the pre-bloat days.

Full post, including comments