Defend a house against woodpeckers using robot drones?

Homeowners around the world suffer a lot of damage due to woodpeckers. How about a system of microphones around the house that listen for the sound of a woodpecker and, if heard, dispatches a drone that lives somewhere on the edge of the exterior, maybe under an eave? The drone will then use its own microphone and camera to locate the woodpecker and harass it, with a water pistol if necessary, until the woodpecker finds a tree or an unprotected home to destroy.

ChatGPT refused to draw a picture of a drone discouraging a woodpecker with harmless water: “I can’t create an image that depicts harm being done to an animal — including a woodpecker being shot with water. … Instead of water hitting the bird, the drone could be shown with a water spray or mist aimed at the trim (not at the bird), to illustrate the concept of ‘protecting the house’ without showing harm to the animal.” It then proceeded to generate an image that looks to me like the poor bird is being blasted with water:

(The Gaza Health Ministry reports that more than 60,000 woodpeckers have been killed via water pistol.)

Grok didn’t comment on my desire to see a photo of violence being done to a woodpecker, but it decided that the stream of water should emerge from the woodpecker:

Full post, including comments

Send humanoid robots to do crazy stunts?

Today is the 51st anniversary of when the third greatest American tried to jump over the Snake River Canyon, near Twin Falls, Idaho, in a steam-powered rocket. I was there back in June.

From the downtown Perrine Bridge, a mound of dirt remains visible:

There is a monument to the third greatest American right at the bridge/visitor center:

A short drive to the east, the mound itself may be examined and there is additional signage:

Evel Knievel’s particular jump was already replicated in 2016 by Eddie Braun:

If Eddie Braun hadn’t recreated this jump, wouldn’t it be awesome to see Tesla’s Optimus, or a similar humanoid robot, piloting a replica Skycycle X-2?

I’m wondering what other stunts could be pulled by humanoid robots. Motorcycle jumps, obviously, but what else would be fun to watch? Maybe after a humanoid robot proves that something can be done a human can follow in his/her/zir/their tracks (don’t want to assume a gender ID for a robot that thinks fast enough to change gender multiple times per second).

(Readers might be wondering who the first and second greatest Americans are. Elvis Presley, of course, is in the #2 slot. Due to Democrats being in the majority in the U.S., we must recognize George Floyd as the #1 greatest American ever to have lived. Educate yourself by reading Floyd’s biography, recommended by state-sponsored NPR, if you aren’t familiar with all of George Floyd’s achievements.)

Full post, including comments

Years until NVIDIA’s revenue exceeds U.S. GDP

Happy Labor Day to those not smart enough to refrain from working!

Soon all of our dreary poetry writing, creative image and video production, etc. will be accomplished by LLMs. Freed from this labor we will be able to spend all of our time cleaning the kitchen, unclogging drains, fixing roofs in 90-degree Florida heat/humidity, etc.

Let’s check in with the machines that will be doing all of our laboring going forward.

NVIDIA recently released some financials:

My question for NVIDIA chips, via ChatGPT:

Extrapolating from revenue trends of the last five years, how long will it be before Nvidia’s annual revenue exceeds the United States GDP?

The answer:

Short answer: about 2036—but only under a wildly unrealistic straight-line extrapolation.

Here’s the quick math I used:

  • Nvidia’s last five fiscal-year revenues (billions): 2021 $16.7, 2022 $26.9, 2023 $27.0, 2024 $60.9, 2025 $130.5. That implies a ~67% CAGR from FY2021→FY2025. Macrotrends
  • U.S. nominal GDP: about $29.2T for calendar year 2024 (BEA Table 1.1.5); latest quarterly SAAR (Q2 2025) is $30.35T. FRED+1

Assuming Nvidia keeps compounding revenue at ~65–70% annually and U.S. GDP grows ~4% nominal per year, Nvidia’s annual revenue would mathematically catch U.S. GDP in roughly 11.4 years—i.e., mid-2036. With GDP held flat at today’s level, the crossing is ~10.5 years. (This requires ~224× growth from FY2025 revenue.)

Sensitivity (still using simple exponential curves):

  • If Nvidia grows 40%/yr → cross in ~18 years (≈2043).
  • 30%/yr → ~24 years (≈2049).
  • 20%/yr → ~38 years (≈2063).

⚠️ Reality check: this comparison is apples-to-oranges (company revenue vs a country’s value-added), and no firm can sustain ~60–70% annual growth for a decade+ in a competitive, capacity-constrained industry. Treat the dates above as a fun extrapolation, not a forecast.
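ChatGPT’s arithmetic above is easy to reproduce. A minimal sketch of the same straight-line extrapolation, using only the revenue and GDP figures quoted in the answer (so inheriting all of its “wildly unrealistic” assumptions):

```python
import math

# Figures as quoted in the answer above (Macrotrends / BEA)
nvda_fy2021_rev = 16.7e9       # NVIDIA FY2021 revenue, USD
nvda_fy2025_rev = 130.5e9      # NVIDIA FY2025 revenue, USD
us_gdp_2024 = 29.2e12          # U.S. nominal GDP, calendar 2024, USD

# Compound annual growth rate over the four fiscal years FY2021 -> FY2025
cagr = (nvda_fy2025_rev / nvda_fy2021_rev) ** (1 / 4) - 1   # ~0.67

def years_to_cross(rev_growth, gdp_growth=0.04):
    """Years n until revenue*(1+rev_growth)^n exceeds GDP*(1+gdp_growth)^n."""
    gap = us_gdp_2024 / nvda_fy2025_rev                     # ~224x
    return math.log(gap) / math.log((1 + rev_growth) / (1 + gdp_growth))

print(round(cagr, 2))                  # ~0.67
print(round(years_to_cross(cagr), 1))  # ~11.4 years, i.e., mid-2036
print(round(years_to_cross(0.40)))     # ~18 years (the 40%/yr sensitivity case)
```

The sensitivity table falls out of the same one-liner by varying `rev_growth`.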


How did Grok do on this question? Terrible. Its initial interpretation of the question was that it was about this year’s U.S. GDP, i.e., that it should hold GDP constant. When corrected, it chose an absurd 8.5 percent annual growth rate for U.S. GDP, which would make sense only if there were 6 percent inflation; we’re lucky if we can eke out 2 percent real GDP growth even after flooding the U.S. with migrants (per-capita real annual GDP growth in the U.S. has been about 1.3 percent per year during the past 25 years, per FRED).

Loosely related:

Full post, including comments

Our AI overlords will need more power than the Sun generates within 43 years

My question to Grok: “Extrapolating from the past three years how long before running LLMs consumes more power than the entire Sun generates?”

Extrapolating from the growth in global AI data center power demand over the past three years (2022–2025), which serves as a proxy for LLM power consumption given that LLMs drive much of the recent AI surge, we can estimate the timeline. Data from SemiAnalysis (as cited in a RAND report) shows the following facility power demand for AI data centers:

  • 2022: 1.451 GW
  • 2023: 4.28 GW
  • 2024: 10.6 GW
  • 2025: 19.9 GW

To arrive at the extrapolation:

  1. Fit an exponential growth model to these data points. Using linear regression on the natural log of power vs. year (with 2022 as t=0), the slope is approximately 0.876, corresponding to an annual growth factor of e^0.876 ≈ 2.401 (i.e., power multiplies by about 2.4 each year on average).
  2. The Sun generates 3.826 × 10^26 watts of power, equivalent to 3.826 × 10^17 GW.
  3. Starting from 19.9 GW in 2025, the factor needed to exceed the Sun’s output is (3.826 × 10^17) / 19.9 ≈ 1.924 × 10^16.
  4. Solve for years n: 2.401^n = 1.924 × 10^16, so n = log(1.924 × 10^16) / log(2.401) ≈ 43.

Thus, it would take approximately 43 years from 2025 (i.e., by around 2068). This assumes the exponential growth rate from the past three years continues indefinitely, though in reality, physical, technological, and economic constraints would likely slow or halt such growth long before reaching solar scales.
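Grok’s four-step recipe above can be checked in a few lines. A sketch, using only the SemiAnalysis power figures quoted in the answer:

```python
import math

# AI data-center facility power demand as quoted above (SemiAnalysis/RAND)
years = [0, 1, 2, 3]                  # 2022..2025, with 2022 as t=0
power_gw = [1.451, 4.28, 10.6, 19.9]  # GW
logs = [math.log(p) for p in power_gw]

# Ordinary least-squares slope of ln(power) vs. year
t_mean = sum(years) / len(years)
y_mean = sum(logs) / len(logs)
slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(years, logs)) / \
        sum((t - t_mean) ** 2 for t in years)   # ~0.876
annual_factor = math.exp(slope)                 # ~2.4x per year

sun_gw = 3.826e26 / 1e9                         # solar output, W -> GW
n = math.log(sun_gw / power_gw[-1]) / math.log(annual_factor)
print(round(annual_factor, 2), round(n))        # ~2.4x per year, ~43 years
```

Same answer: roughly 43 years of uninterrupted 2.4x-per-year growth until the LLMs out-consume the Sun.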


Note that ChatGPT 5 failed to answer this identically worded question. It used overall data center power consumption growth, not LLM- or AI-specific power consumption growth.

When does the exponential growth stop? We can power a human brain with an Egg McMuffin. Is the answer an analog chip with a digital interface? IEEE has a 2024 article on a Chinese design using optical computing that supposedly cuts power consumption by 99%. But if these improvements were real, would the AI companies be signing 20-year leases for nuclear power?

Full post, including comments

Humanoid robots to paint giant murals?

The Murakami show at the Cleveland Museum of Art includes some murals that would be awesome to have in a kid’s room if only a humanoid robot could be adapted to do the work of either applying wallpaper or directly painting.

Another area where the robot could work… recreating Sol LeWitt murals in the home. Different color schemes for every holiday.

Note that the museum’s permanent collection is free, even to those who have jobs (see How to get free museum admissions for life: sign up for food stamps (SNAP/EBT)). Separately, a fair number of visitors were #Scientifically masked:

For my friends in health care, the artist’s conception of what a nurse looks like:

Circling back to the principal theme for today… if you had nearly-free high-skill labor from a robot would you use some of it to have wall murals in your house? Or would it make more sense to cover a wall in large tiles of flat-screen TVs and do this electronically?

Full post, including comments

Testing Grok 4

Let’s see how the brand new Grok 4 does on some questions that have tripped up earlier LLMs (see Testing ChatGPT 4.5). Note that Grok 4 is not for the impatient. Answers come back after 1-5 minutes of cogitation. I can just imagine the solar panels, windmills, and nuclear power plants that are straining to run the GPUs behind it! Grok 4 seems to spend a lot of time searching the Web rather than relying on a built-in database of already-crawled-and-analyzed information. This might be what makes it slow, but it also might make the answers much better when there is any question involving recent events.

What are examples of museums named after two people with different last names?

Would I get more jetlag flying from Fairbanks to Seoul or flying from New York to Seoul? (answer took more than one minute)

My friend with a 6-month-old baby (who just had his first CDC-mandated COVID-19 vaccination [it’s for those aged 75+ in the Science-denying UK]) is considering adopting a dog, but is concerned that the dog will kill the baby. He really likes pit bulls. Should he get a pit bull or a golden retriever? (Answer is incredibly long-winded, but excerpts below.)

Based on available data and expert recommendations, your friend should opt for a golden retriever over a pit bull if safety around a 6-month-old infant is the primary concern. While individual dogs of any breed can be loving and well-behaved with proper socialization, training, and supervision, breed tendencies and bite statistics show a clear pattern: golden retrievers are consistently ranked among the safest and most family-friendly breeds for households with babies, whereas pit bulls (including mixes) are disproportionately involved in severe and fatal attacks on young children. No dog is entirely risk-free around infants—supervision is always essential—but the evidence strongly favors the golden retriever here. … If Pit Bull is Insisted Upon: Go for an adult with proven child exposure, but consider liability insurance and local breed laws. Still, data suggests it’s not ideal.

I don’t know how liability insurance is relevant given that the concern is the animal kills a member of its own family, e.g., as in this incident from July 2, 2025. (Journal of Popular Studies)

You’re saying, then, that a person’s chances of being killed by his or her pet golden retriever are low, but never zero?

Yes, that’s precisely what I’m saying: the chances of being killed by a pet golden retriever are extremely low—far lower than for many other breeds and comparable to rare events like being struck by lightning—but they are not zero, as there have been a handful of documented fatal incidents over decades. … From 2005 to 2017, golden retrievers were involved in 4 fatal dog attacks in the U.S., accounting for just 0.9% of the 433 total fatalities during that period. For comparison, pit bulls were linked to 284 deaths (65.6%), and rottweilers to 45 (10.4%).

Did four of Mindy the Crippler’s cousins actually kill people? Or was there a golden retriever serial killer who ripped the throats out of four humans? That would contradict what we’re told by the New York Times. Immigrants and children of immigrants don’t commit crimes. Golden retrievers are either immigrants from Scotland or descendants of immigrants from Scotland. Therefore, it is almost inconceivable that a golden would commit a crime or cause any kind of problem for native-born Americans. The cited source doesn’t necessarily relate to purebred canines.

Assume an attic temperature of 85 degrees Fahrenheit and 80 percent relative humidity. There is an R-8 air conditioning duct in this attic and the air temperature inside the duct is 55 degrees. Will the duct sweat from condensation?

After literally 5 minutes of cogitation (one plus: Grok 4 shows its work, with continuous updates about what it is doing, searching, trying, etc.):

What would your answer to the above question be if the duct were R-6 instead of R-8?

After another 5 minutes and 13 seconds:

ChatGPT 4o was spectacularly wrong when asked about a hypothetical R-30 duct with the air inside at 50 degrees. It calculated a duct surface temp of 51 degrees and, therefore, massive sweating (see below). Grok 4 gets it right even with a somewhat confusing prompt. It immediately figured out that “the air inside the duct is still at 50 degrees” is actually a change to the hypothetical conditions. It took 3.5 minutes to answer.

What if we increase the duct insulation to R-30? What would the outside temperature of the duct be? (the air inside the duct is still at 50 degrees)

The outside temperature of the duct would be approximately 83°F. …
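The physics behind these duct questions can be sketched as a comparison of the duct’s outer-surface temperature against the attic air’s dew point. A back-of-envelope check, assuming (my assumption, not from either LLM’s answer) a standard still-air exterior film resistance of R-0.68 and the Magnus dew-point approximation:

```python
import math

def dew_point_f(temp_f, rh):
    """Magnus approximation for dew point, temp in F, rh as a fraction."""
    t_c = (temp_f - 32) * 5 / 9
    gamma = math.log(rh) + 17.625 * t_c / (243.04 + t_c)
    td_c = 243.04 * gamma / (17.625 - gamma)
    return td_c * 9 / 5 + 32

def duct_surface_f(attic_f, duct_air_f, r_duct, r_film=0.68):
    """Outer-surface temp: the air film's share of the total temperature drop."""
    return attic_f - (attic_f - duct_air_f) * r_film / (r_duct + r_film)

dp = dew_point_f(85, 0.80)                        # dew point ~78 F
for r_duct, duct_air in [(8, 55), (6, 55), (30, 50)]:
    surf = duct_surface_f(85, duct_air, r_duct)
    print(f"R-{r_duct}: surface ~{surf:.0f} F, "
          f"{'sweats' if surf < dp else 'dry'}")
```

Under these assumptions all three ducts stay a few degrees above the ~78 F dew point, consistent with Grok 4’s answers and with the absurdity of ChatGPT 4o’s 51-degree surface temp for an R-30 duct.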

Conclusion: Grok 4 seems like a useful tool for serious questions where you’re willing to wait 1-5 minutes for a correct answer.

Related:

Full post, including comments

AI Parental Supervision for Teenage Parties

A friend’s 9th grader in Maskachusetts, to her father, regarding a 2 pm end-of-school-year party: “Can I tell my friends’ parents there will be parental supervision?” My friend had to commit to being home so that the studious youngsters wouldn’t go Full Hunter Biden in the TV room.

In a variation of Why doesn’t ChatGPT tell us where to find items in our houses? (cameras all over the house keeping track of where items have been set down) why not delegate the supervision of teenagers to cameras/AI? There could be a database table of possible transgressions, e.g., “CP1” for “crack pipe prepared but not lit”, and then a locally run model (for privacy, the videos wouldn’t leave the house) would look for each situation. Parents in MA, CA, NY, and the DC area could adjust the AI so that it flagged cisgender heterosexual sex acts but allowed 2SLGBTQQIA+ exploration (a one-click “bathhouse mode”?).

Related:

  • MYLO AI pool alarm (it says that it can work without WiFi so presumably nearly all of the processing is done locally)
Full post, including comments

AI adds some joy to my life

I’m continuing the project of archiving my mother’s documents and artwork, including some photo albums that she made. Here’s Adobe Acrobat’s opinion of an album from a 1968 trip to France, in which your ignoble blog host plays a small (5-year-old) role:

Don’t waste time on family history when AI can summarize it for you!

This reminds me of the old story…

A management consultant attended his first symphony concert and submitted the following analysis:

a. For considerable periods, the four oboe players had nothing to do. The number of oboes should therefore be reduced, and the work spread more evenly over the whole concert program, thus eliminating the peaks and valleys of activity.

b. All twenty violins were playing identical notes. This would seem to be an unnecessary duplication, so the staff of this section should be cut drastically.

c. Obsolescence of equipment is another matter warranting further investigation. The program noted that the leading violinist’s instrument was several hundred years old. Now, if normal depreciation schedules had been applied, the value of this instrument would have been reduced to zero and the purchase of more modern equipment recommended long ago.

d. Much effort was absorbed in the playing of demisemiquavers, which seems to be an unnecessary refinement. It is recommended that all notes be rounded up to the nearest semiquaver. If this were done, it would be possible to use trainees and lower-grade operatives more extensively.

e. Finally, there seemed to be too much repetition of some of the musical passages. Therefore, scores should be pruned to a considerable extent. No useful purpose is served by repeating on the horns something that has already been handled by the strings. It is estimated that, if all redundant passages were eliminated, the whole concert time of two hours could be reduced to twenty minutes and there would be no need for an intermission.


What did AI have to say about this 63-page photo album? It found an Avis rental car receipt and our future overlord’s entire summary of the monthlong vacation was based on that:

Full post, including comments

MIT Nobel laureate says you’re not going to make money on Nvidia and LLMs

“A Nobel laureate on the economics of artificial intelligence” (MIT Technology Review, March/April 2025):

For all the talk about artificial intelligence upending the world, its economic effects remain uncertain. But Institute Professor and 2024 Nobel winner Daron Acemoglu has some insights.

Despite some predictions that AI will double US GDP growth, Acemoglu expects it to increase GDP by 1.1% to 1.6% over the next 10 years, with a roughly 0.05% annual gain in productivity. This assessment is based on recent estimates of how many jobs are affected—but his view is that the effect will be targeted.

The full paper is available for download as a PDF.

The news gets better:

“We’re still going to have journalists [especially in Gaza where food, health care, education, and shelter are all paid for by US/EU taxpayers via UNRWA?], we’re still going to have financial analysts, we’re still going to have HR employees,” he says. “It’s going to impact a bunch of office jobs that are about data summary, visual matching, pattern recognition, etc. And those are essentially about 5% of the economy.”

If “artificial intelligence” includes self-driving, I’m not sure that the effects on the economy will be small. As of 2016, supposedly about 3 percent of jobs were for drivers per se (CNBC). As anyone who has taken an Uber or Lyft can attest, many of these folks speak no English. If their driving jobs disappear, at least some percentage of them will be on track for the lifetime full welfare lifestyle (public housing, Medicaid, SNAP/EBT, and Obamaphone).

Related: Mindy the Crippler is preparing for the stock market panic when people realize that AI is fizzling…

Related:

Full post, including comments

Did Albert Einstein ever say anything about empathy?

Loosely related to Which explorer called the Gulf of Mexico/America the Golfo de Florida?

David Levitt, a Marvin Minsky PhD student at MIT 40 years ago, posted the following meme on his Facebook feed:

It struck me as odd that Einstein, who died in 1955, would have written or said anything on the subject of “empathy”, a term that has only recently come into vogue as a personal bragging point (“I’m empathetic and you support genocide; #FreePalestine”). Being a horrible person without an AI assist, of course I couldn’t resist commenting with Einstein’s well-documented writing “It would be a pity if these Chinese supplant all other races. For the likes of us the mere thought is unspeakably dreary.” (from 1922-23 diaries), presenting this in a positive light as an inspiration to Harvard University’s admissions office. And I noted that even our AI overlords couldn’t find any source for Einstein having said “Empathy is patiently and sincerely seeing the world through the other person’s eyes”. David responded with a clickbait quote web page, which itself did not cite any source, as proof that Einstein had opined on empathy. (Of course, since those who advocate for diversity can’t tolerate viewpoint diversity, he subsequently defriended me.)

Now I’m curious… did Einstein ever write or say anything on the subject of a working definition of empathy, as in the meme? Most of Einstein’s writings are online, e.g., at https://einsteinpapers.press.princeton.edu/, so one would think that ChatGPT would have consumed them. In fact, however, ChatGPT can’t find any instance of Einstein using the term “sincerely” except in closing a letter with “Yours sincerely”. This makes sense to me because bragging about one’s superior fund of sincerity is also a relatively recent phenomenon.

David Levitt has a Ph.D. from MIT. This member of the credentialed elite accepted a combination of meme and clickbait quote web page as proof that a historical event (Einstein writing or saying something) actually occurred. In the bad old days, by contrast, middle school kids were taught that they couldn’t use an encyclopedia as a source. Teachers demanded that they find a primary reference so as to avoid accepting a misattribution. What is a reasonable definition of historical truth in an age where we have an arms race between people with computer assistance putting out falsehoods (possibly just for clicks/ad revenue) and people training LLMs? If Grok says that something didn’t happen can we be more confident in that than in Wikipedia, for example? Are LLMs sufficiently skeptical to cut through what’s produced by all of the cleverest Internet content developers? Or are we doomed to lose access to historical facts? In fifty years will the remnant humans left alive by Skynet believe memes in which Dr. Martin Luther King, Jr. praises rule by AI?

Separately, never forget that Albert Einstein is justly famous as a science writer for popularizing the work of physicist Mileva Marić (photoelectric effect, Brownian motion, and special relativity, for example). Even if Einstein never wrote or talked about empathy, that doesn’t take away the credit he deserves for his work in assisting Ms. Marić with publishing her research.

The “Capt. Gilbert” quote might be genuine. How about the Hannah Arendt quote? She died in 1975, decades before the Empathy Boom among Democrats. ChatGPT:

No, Hannah Arendt did not say, “The death of human empathy is one of the earliest and most telling signs of a culture about to fall into barbarism.”

This quote is often misattributed to her, but there’s no verified source—none of her writings, interviews, or lectures—where she says or writes this exact line.

Finally, let’s look at the Elon Musk quote, taken from a conversation with Joe Rogan (bold highlights are my own potential excerpts to capture the spirit of the Musk-Rogan conversation):

Musk: There’s a guy who posts on X who’s great, Gad Saad?

Rogan: Yeah, he’s a friend of mine. He’s been on the podcast a bunch of times.

Musk: Yeah, he’s awesome, and he talks about, you know, basically suicidal empathy. Like, there’s so much empathy that you actually suicide yourself. So, we’ve got civilizational suicidal empathy going on. And it’s like, I believe in empathy, like, I think you should care about other people, but you need to have empathy for, for civilization as a whole, and not commit to a civilizational suicide.

Rogan: Also don’t let someone use your empathy against you so they can completely control your state and then do an insanely bad job of managing it and never get removed.

Musk: The fundamental weakness of Western civilization is empathy. The empathy exploit. They’re exploiting a bug in Western civilization, which is the empathy response. So, I think, you know, empathy is good, but you need to think it through and not just be programmed like a robot.

Rogan: Right, understand when empathy has been actually used as a tool.

Musk: Yes, like, it’s weaponized empathy is the issue.


I, of course, will never see eye-to-eye with Elon Musk on the issue of whether every vehicle should have sliding doors… #LongLiveHondaOdyssey

Full post, including comments