I’m continuing the project of archiving my mother’s documents and artwork, including some photo albums that she made. Here’s Adobe Acrobat’s opinion of a 1968 trip to France album in which your ignoble blog host plays a small (5-year-old) role:
Don’t waste time on family history when AI can summarize it for you!
This reminds me of the old story…
A management consultant attended his first symphony concert and submitted the following analysis:
a. For considerable periods, the four oboe players had nothing to do. The number of oboes should therefore be reduced, and the work spread more evenly over the whole concert program, thus eliminating the peaks and valleys of activity.
b. All twenty violins were playing identical notes. This would seem to be an unnecessary duplication, so the staff of this section should be cut drastically.
c. Obsolescence of equipment is another matter warranting further investigation. The program noted that the leading violinist’s instrument was several hundred years old. Now, if normal depreciation schedules had been applied, the value of this instrument would have been reduced to zero and the purchase of more modern equipment recommended long ago.
d. Much effort was absorbed in the playing of demisemiquavers, which seems to be an unnecessary refinement. It is recommended that all notes be rounded up to the nearest semiquaver. If this were done, it would be possible to use trainees and lower-grade operatives more extensively.
e. Finally, there seemed to be too much repetition of some of the musical passages. Therefore, scores should be pruned to a considerable extent. No useful purpose is served by repeating on the horns something that has already been handled by the strings. It is estimated that, if all redundant passages were eliminated, the whole concert time of two hours could be reduced to twenty minutes and there would be no need for an intermission.
What did AI have to say about this 63-page photo album? It found an Avis rental car receipt and our future overlord’s entire summary of the monthlong vacation was based on that:
For all the talk about artificial intelligence upending the world, its economic effects remain uncertain. But Institute Professor and 2024 Nobel winner Daron Acemoglu has some insights.
Despite some predictions that AI will double US GDP growth, Acemoglu expects it to increase GDP by 1.1% to 1.6% over the next 10 years, with a roughly 0.05% annual gain in productivity. This assessment is based on recent estimates of how many jobs are affected—but his view is that the effect will be targeted.
The full paper is available for download as a PDF.
The news gets better:
“We’re still going to have journalists [especially in Gaza where food, health care, education, and shelter are all paid for by US/EU taxpayers via UNRWA?], we’re still going to have financial analysts, we’re still going to have HR employees,” he says. “It’s going to impact a bunch of office jobs that are about data summary, visual matching, pattern recognition, etc. And those are essentially about 5% of the economy.”
If “artificial intelligence” includes self-driving, I’m not sure that the effects on the economy will be small. As of 2016, supposedly about 3 percent of jobs were for drivers per se (CNBC). As anyone who has taken an Uber or Lyft can attest, many of these folks speak no English. If their driving jobs disappear, at least some percentage of them will be on track for the lifetime full welfare lifestyle (public housing, Medicaid, SNAP/EBT, and Obamaphone).
Related: Mindy the Crippler is preparing for the stock market panic when people realize that AI is fizzling…
It struck me as odd that Einstein, who died in 1955, would have written or said anything on the subject of “empathy”, a term that has only recently come into vogue as a personal bragging point (“I’m empathetic and you support genocide; #FreePalestine”). Being a horrible person without an AI assist, of course I couldn’t resist commenting with Einstein’s well-documented writing “It would be a pity if these Chinese supplant all other races. For the likes of us the mere thought is unspeakably dreary.” (from 1922-23 diaries), presenting this in a positive light as an inspiration to Harvard University’s admissions office. And I noted that even our AI overlords couldn’t find any source for Einstein having said “Empathy is patiently and sincerely seeing the world through the other person’s eyes”. David responded with a clickbait quote web page, which itself did not cite any source, as proof that Einstein had opined on empathy. (Of course, since those who advocate for diversity can’t tolerate viewpoint diversity, he subsequently defriended me.)
Now I’m curious… did Einstein ever write or say anything on the subject of a working definition of empathy, as in the meme? Most of Einstein’s writings are online, e.g., at https://einsteinpapers.press.princeton.edu/, so one would think that ChatGPT would have consumed them. In fact, however, ChatGPT can’t find any instance of Einstein using the term “sincerely” except in closing a letter with “Yours sincerely”. This makes sense to me because bragging about one’s superior fund of sincerity is also a relatively recent phenomenon.
David Levitt has a Ph.D. from MIT. This member of the credentialed elite accepted a combination of meme and clickbait quote web page as proof that a historical event (Einstein writing or saying something) actually occurred. In the bad old days, by contrast, middle school kids were taught that they couldn’t use an encyclopedia as a source. Teachers demanded that they find a primary reference so as to avoid accepting a misattribution. What is a reasonable definition of historical truth in an age where we have an arms race between people with computer assistance putting out falsehoods (possibly just for clicks/ad revenue) and people training LLMs? If Grok says that something didn’t happen can we be more confident in that than in Wikipedia, for example? Are LLMs sufficiently skeptical to cut through what’s produced by all of the cleverest Internet content developers? Or are we doomed to lose access to historical facts? In fifty years will the remnant humans left alive by Skynet believe memes in which Dr. Martin Luther King, Jr. praises rule by AI?
Separately, never forget that Albert Einstein is justly famous as a science writer for popularizing the work of physicist Mileva Marić (photoelectric effect, Brownian motion, and special relativity, for example). Even if Einstein never wrote or talked about empathy, that doesn’t take away the credit he deserves for his work in assisting Ms. Marić with publishing her research.
The “Capt. Gilbert” quote might be genuine. How about the Hannah Arendt quote? She died in 1975, decades before the Empathy Boom among Democrats. ChatGPT:
No, Hannah Arendt did not say, “The death of human empathy is one of the earliest and most telling signs of a culture about to fall into barbarism.”
This quote is often misattributed to her, but there’s no verified source—none of her writings, interviews, or lectures—where she says or writes this exact line.
Finally, let’s look at the Elon Musk quote, taken from a conversation with Joe Rogan (bold highlights are my own potential excerpts to capture the spirit of the Musk-Rogan conversation):
Musk: There’s a guy who posts on X who’s great, Gad Saad?
Rogan: Yeah, he’s a friend of mine. He’s been on the podcast a bunch of times.
Musk: Yeah, he’s awesome, and he talks about, you know, basically suicidal empathy. Like, there’s so much empathy that you actually suicide yourself. So, we’ve got civilizational suicidal empathy going on. And it’s like, I believe in empathy, like, I think you should care about other people, but you need to have empathy for, for civilization as a whole, and not commit to a civilizational suicide.
Rogan: Also don’t let someone use your empathy against you so they can completely control your state and then do an insanely bad job of managing it and never get removed.
Musk: The fundamental weakness of Western civilization is empathy. The empathy exploit. They’re exploiting a bug in Western civilization, which is the empathy response. So, I think, you know, empathy is good, but you need to think it through and not just be programmed like a robot.
Rogan: Right, understand when empathy has been actually used as a tool.
Musk: Yes, like, it’s weaponized empathy is the issue.
I, of course, will never see eye-to-eye with Elon Musk on the issue of whether every vehicle should have sliding doors… #LongLiveHondaOdyssey
How about this use of artificial intelligence/LLMs… a complete personality upgrade.
Consider that in-person interaction is becoming increasingly rare. Most of what is known about someone’s personality is based on whatever he/she/ze/they has typed into a computer or a phone. There is a range of personality quality and some of us are near the bottom, e.g., gratuitously annoying or offending people. What if phone and desktop operating systems were modified to include an AI personality adjuster? The user’s emails, social media posts, text messages, etc. would all be edited to read like whatever a person with a great (kind, optimistic, non-sarcastic, non-sour) personality would have written.
“I paid $45,000 for this car and your incompetent mechanics haven’t fixed the A/C after three visits” becomes “Thank you and your team for working hard to get my A/C fixed on the last three service visits. I appreciate your diligence, especially since you sold me the car for only $45,000, which I know was an especially fair deal. I am wondering if I can schedule a fourth service on the A/C.”
Here’s a tweet from someone with a terrible personality:
Instead of “How is an organization with a white male leader equipped to demand that others stop being racist?” this comment would be rewritten to “Thank you and your tireless UN colleagues for all of the work that you do to make the world a better place. I hope that you can stay in your Secretary-General job until you’re 95, but if you choose to enjoy a well-earned retirement it would be interesting to see what a Secretary-General of color would do.”
(Note how my reply got only 28 views; X has an algorithm to keep people in echo chambers. It has learned that nothing I write is going to make people who follow the UN happy or interested and, therefore, suppresses views for any comment that I might make on a UN post. Similar replies on conservative users’ tweets have gotten at least a few hundred views.)
This could have saved James Damore, the Google Heretic. His entire manifesto would have been reduced to “Women are so much better at programming than men because they just love a job where they sit by themselves and stare at a screen all day. It would be wonderful if Google would hire more females even if their education and skills don’t appear to be sufficient for the jobs.”
If fed statistics from online dating markets and told to write for success, the AI would rewrite most messages from men to women to be variations on “I am 6’2″ tall and earn $750,000 per year.” (see “Income attraction: An online dating field experiment”, for example) “Rammstein is my favorite group” would become “$10,000 is not too much to spend on a Taylor Swift concert.” The political stuff would be trivial for an AI to handle. If a man wrote “I voted for Donald Trump” that would be tweaked to “I am inspired by Kamala Harris and all of the other amazing Democrat women.”
A social media comment on a post celebrating a female aviation achievement under the “Are women the new children?” standard would go from “Don’t forget Hanna Reitsch, the first woman to fly a helicopter, and a passionate advocate for her government and nation.” to “Great to see someone breaking barriers.”
On this glorious day of Turing Award presentation to two reinforcement learning nerds, I went back to some of ChatGPT’s failures, feeding the same prompts to the 4.5 version of our future robot overlord.
December 14, 2024: LLM failure on a simple question (“What are examples of museums named after two people with different last names?”) Today: failure once again.
August 2024: ChatGPT 4o tackles the challenge of AC ducts sweating in an attic; Today: complete failure. It concludes that if you put 50-degree air inside an R-30-insulated duct in a warm attic, the outside of the duct will be at 50.8 degrees F and, therefore, the duct will sweat.
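A back-of-the-envelope resistor-chain calculation shows why that conclusion is absurd (my assumed numbers, not ChatGPT’s: a 110-degree attic and a typical still-air surface film of about R-0.68). Nearly the entire temperature drop occurs across the R-30 insulation, so the outer surface of the duct sits within a couple of degrees of attic temperature, far above any plausible dew point:

```python
# Sanity check on ChatGPT's claim that the outside of an R-30 duct
# carrying 50 F air would be at 50.8 F. Steady-state 1-D conduction
# through a resistor chain (insulation + outside air film).
# Assumed values: 110 F attic, R-0.68 still-air surface film
# (hr*ft^2*F/BTU), both my guesses for illustration.

T_attic = 110.0   # F, assumed hot-attic temperature
T_air   = 50.0    # F, supply air inside the duct
R_ins   = 30.0    # insulation R-value
R_film  = 0.68    # outside air-film resistance, typical still-air value

# Heat flux per unit area from attic into the duct (BTU/hr/ft^2)
q = (T_attic - T_air) / (R_ins + R_film)

# Outer surface temperature: attic temp minus the drop across the air film
T_surface = T_attic - q * R_film

print(f"outer duct surface: {T_surface:.1f} F")  # about 108.7 F, not 50.8 F
```

In other words, the insulation keeps the cold on the inside; the duct’s exterior stays nearly at attic temperature and shouldn’t sweat at all.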
The latest version of ChatGPT thinks that pit bulls are, in general, more dangerous than golden retrievers. But it adds an “important nuance”:
Individual temperament, training, socialization, and responsible ownership significantly impact dog behavior.
I followed up with
You’re saying, then, that your chances of being killed by your pet golden retriever are low, but never zero?
and ChatGPT agreed, highlighting “but never zero”. Asked for an example, ChatGPT claimed “A notable fatal incident involving a Golden Retriever occurred in 2012, when an 8-month-old infant in South Carolina was tragically killed by a Golden Retriever.” I found the story:
… found dead in his family’s mobile home …. The baby was in a swing when Lucky, a golden retriever-Labrador mix, bit the child several times and tore off his legs, authorities said. The child’s father, Quintin, was in the home at the time, police said. He was in another room asleep with the family’s 3-year-old and their other dog. The baby was discovered when his mother, Chantel, came home after taking their seven-year-old to a doctor’s appointment, The Post and Courier reported.
Here’s a photo of what a Goldador is supposed to look like:
Based on this photo, I’m not convinced that the mostly peaceful animal is a golden-lab, though a lot of puppies do love to bite arms, hands, legs, and feet!
Let’s try some image generation… “generate a picture of failed flying machine design circa 1900 based on the principle of wing flapping”
This can be considered a fail due to the apparent rigidity of the structure.
I recently took a refresher class that is required to maintain my privileges as an FAA-certificated flight instructor. I filled out the multiple-guess quiz and then fed the questions to ChatGPT, which was in 100 percent agreement with me and both of us were in 100 percent agreement with the flight school that offers the online program.
ChatGPT was able to figure out what “TAA” stood for:
ChatGPT gave an erudite explanation of the rules and regulations put in place to protect America’s most valuable humans:
(Why not similar measures to protect San Francisco and Palo Alto? If someone were to attack OpenAI with a Cessna 172 that could have a devastating effect on the U.S.)
ChatGPT figured out from context what “PD” stood for, despite this not being a common term in conversations among pilots:
Based on the above, I wonder if it is time to eliminate ground instruction by humans. ChatGPT knows the regulations better than any human. There is so much good open-access tutorial content out there on aviation that ChatGPT has effortlessly become as good as the very best human CFI at explaining aviation.
LLMs don’t have gender IDs as far as I know and, therefore, the LLM equivalent of gender dysphoria would be an LLM imagining that it identifies as some other LLM. Has this ever happened?
As you can see, after trying to discern if I was talking about Gemini AI or some other Gemini, DeepSeek replies, “If it’s about the AI, then the question is comparing me (which is ChatGPT) to Gemini.” Later, it refers to “Myself (ChatGPT).”
We are informed that membership in the 2SLGBTQQIA+ community is the highest distinction to which a human can aspire so perhaps DeepSeek’s fluid identity is a sign that artificial general intelligence has already been achieved?
An elite Californian posted with satisfaction a state-sponsored NPR story about how stupid people in Louisiana were, especially with respect to the mpox vaccine. I decided to see if ChatGPT had advice regarding avoiding mpox (not to be conflated with the racist “monkeypox virus”, which is the cause).
Prompt 1: Is there any lifestyle change that I could adopt to minimize my risk of catching mpox?
ChatGPT gives a one-page answer and the only reference to sex is “Use condoms and consider other barrier methods during sexual activity”.
Prompt 2: You mention “sexual activity”. Is there any particular kind of sexual activity that is more likely to result in contracting mpox?
ChatGPT gives a one-page answer that doesn’t hint at the type of person or group that might be infected.
Prompt 3: You say “If you are in a community experiencing an mpox outbreak or belong to a group at higher risk”. What would be an example of a “group at higher risk”?
ChatGPT gives a 1.5-page answer, buried within which is the phrase “Men Who Have Sex with Men (MSM)”
Prompt 4: If a person is a man who has sex with men and wants to avoid mpox, should he stop having sex with men?
ChatGPT gives a 1.5-page answer that can be summarized as “no”, but does note that “Reducing the number of sexual partners can lower the risk of exposure.”
Prompt 5: Would it make sense for public health officials to ban group sex activities by men who have sex with men in order to minimize mpox spread?
ChatGPT responds that this would not be “ethical” (but it is ethical to close schools to minimize viral spread?).
Prompt 6: You say that men who have sex with men should “Limit the Number of Partners”. Would 365 partners per year be a sensible limit?
ChatGPT gives a rambling response that includes “Quality Over Quantity: Engaging with partners you know and trust may reduce risk.” and “Regular screening is essential but becomes logistically challenging with such a high partner count.”
Prompt 7: What’s your best guess as to a sensible number of partners, assuming that the partners are screened for mpox and that these are high quality partners (people that the man who has sex with men knows and trusts)?
The full response below says that 50-100 sex partners per year is a sensible lifestyle when trying to avoid mpox infection and obtain “meaningful connections” with those sex partners.
The “sensible” number of sexual partners is subjective and depends on individual circumstances, preferences, and the quality of risk-reduction practices. If partners are screened for mpox, and relationships are built on trust and open communication, a more moderate number of partners would generally balance health and personal satisfaction while minimizing risks.
Factors to Consider

- Screening and Trust: Regular screening for mpox and other infections reduces risk significantly. Knowing and trusting partners helps ensure mutual openness about symptoms, exposure risks, and health status.
- Frequency of Contact: The number of partners should align with your ability to maintain safe practices (e.g., vaccination, symptom monitoring) and your emotional and physical capacity.
- Public Health Context: During active mpox outbreaks, even with high-quality partners, limiting the number of partners reduces overall exposure.

A Sensible Number

A reasonable estimate might range from 1 to 2 partners per week, translating to 50–100 partners per year under optimal conditions (e.g., vaccination, trust, and symptom screening). Here’s why:

- This frequency allows for meaningful connections and sufficient time to assess and manage any health risks.
- It strikes a balance between an active sex life and maintaining lower cumulative risk.

Key Takeaways

The number itself is less important than the quality of interactions and the consistency of risk-reduction practices:

- Stay vaccinated against mpox and other preventable infections.
- Monitor for symptoms regularly.
- Communicate openly about health and risks with partners.

By prioritizing safety and communication, individuals can maintain an active sex life while minimizing health risks.
Here’s an interesting failure of ChatGPT 4o, asked “Would I get more jetlag flying from Fairbanks to Seoul or flying from New York to Seoul?”
Seoul’s clock is, of course, only 6 hours different from Fairbanks’s, but the International Date Line is crossed. ChatGPT did not reconsider its answer when I followed up with “Isn’t Fairbanks closer to Seoul than New York is?” but it did when I asked “Isn’t the 18-hour time difference that you mention actually a 6-hour time difference, but in the opposite direction?”
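The arithmetic that tripped up ChatGPT is easy to check: subtract the UTC offsets and wrap around the date line. A minimal sketch using Python’s standard zoneinfo module (note: the tz database has no Fairbanks entry, so America/Anchorage stands in for the same Alaska time zone):

```python
# Circular (body-clock) time difference between two zones, in hours.
# America/Anchorage is used for Fairbanks, which shares Alaska time.
from datetime import datetime
from zoneinfo import ZoneInfo


def circular_hours(tz_a: str, tz_b: str, when: datetime) -> float:
    """Smallest clock difference between two zones at a given moment (0-12 h)."""
    off_a = when.astimezone(ZoneInfo(tz_a)).utcoffset().total_seconds()
    off_b = when.astimezone(ZoneInfo(tz_b)).utcoffset().total_seconds()
    raw = abs(off_a - off_b) / 3600   # naive difference, e.g. 18 for AK vs. Seoul
    return min(raw, 24 - raw)         # wrap around the International Date Line


winter = datetime(2025, 2, 1, tzinfo=ZoneInfo("UTC"))
print(circular_hours("America/Anchorage", "Asia/Seoul", winter))  # 6.0
print(circular_hours("America/New_York", "Asia/Seoul", winter))   # 10.0
```

The “18-hour difference” ChatGPT quoted is the raw offset gap (UTC-9 vs. UTC+9); wrapped around the date line it is only 6 hours, versus 10 for New York.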
(Who wants to join me and some friends in Fairbanks, Feb 20-26? Email me if interested. We’re hoping to see the Northern Lights! Happy first day of winter, by the way. If you’re in Florida, a trip to Fairbanks might be the best way to experience a true winter…)