How will NVIDIA avoid a Google-style Vesting in Peace syndrome?

NVIDIA is the world’s most valuable company (P/E ratio of 75; compare to less than 40 for Microsoft), which also means that nearly every NVIDIA employee is rich. A lot of people slack off when they become rich. Google ended up with quite a few “vesting in peace” workers who didn’t contribute much. It didn’t matter because it was too challenging for anyone else to break into the search and advertising businesses. But suppose that another tech company assembles a group of not-yet-rich hardware and software people. Hungry for success, these people build some competitive GPUs and the biggest NVIDIA customers merely have to recompile their software in order to use the alternative GPUs that are marketed at a much lower price.

How can NVIDIA’s spectacular success not lead to marketplace slippage due to an excessively rich and complacent workforce? Is the secret that NVIDIA can get money at such a low cost compared to competitors that it can afford to spend 2-3X as much on the next GPU and still make crazy profits? I find it tough to understand how Intel, which for years has made GPUs inside its CPUs, can’t develop something that AI companies want to buy. Intel has a nice web page explaining how great their data center GPUs are for AI:

Why can’t Intel sell these? Are the designs so bad that they couldn’t compete with NVIDIA even if sold at Intel’s cost?

Full post, including comments

Bachelor’s in AI Gold Rush degree program

A college degree is purportedly important preparation for multiple aspects of life. Universities, therefore, require students to take classes far outside their majors. Extracurricular activities are encouraged, such as sports, pro-Hamas demonstrations, drinking alcohol (how is that supposed to make immigrants from Gaza feel welcome?), casual sex, theater, etc. Students are forced to take about half the year off because the faculty and staff don’t want to work summers (defined as May through early September), January, or anywhere near various holidays. There is no urgency to earning a degree, so why not stretch it out for four years?

What if there were urgency to getting into the workforce? Here’s the company that sold shovels to the crypto miners and now sells shovels to the AI miners (May 23):

It was a lot better to start work at NVIDIA in June 2022 than in June 2024. Consider a Stanford graduate who could have finished in 2022, but instead didn’t finish until 2024. He/she/ze/they took Gender and Gender Inequality, Intersectionality: Theory, Methods & Research, and Race and Ethnicity Around the World from Professor Saperstein to round out his/her/zir/their engineering education. Was that worth the $5 million that would have been earned by starting work at NVIDIA in 2022 rather than in 2024 (two years of salary, stock options at $175 instead of at $1000, etc.)?

How about a “Bachelor’s in AI Gold Rush” degree program that would prepare students to build and use LLMs? It would be a 2-year program with no breaks so that people could graduate and start their jobs at OpenAI. There would be no requirement to take comparative victimhood classes (i.e., humanities). There would be no foundational math or science unless directly related to LLM construction (a lot of linear algebra?). There would be no pretense of preparing students for anything other than working at OpenAI or a similar enterprise.
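For readers wondering what “directly related to LLM construction” looks like, here’s a minimal illustration (mine, not any school’s actual syllabus): the heart of a transformer layer is scaled dot-product attention, which is nothing but matrix multiplication plus a softmax, i.e., linear algebra.

```python
# Scaled dot-product attention in plain NumPy: the "lot of linear algebra"
# at the core of an LLM is mostly matrix multiplication like this.
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays; returns a (seq_len, d) array."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # all-pairs token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # 4 tokens, 8-dimensional embeddings
print(attention(x, x, x).shape)  # (4, 8)
```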

Students will graduate at age 20. What if the AI gold rush is over when they turn 28? (Maybe not because AI turns out to be useless or even over-hyped, but only because the industry matures or the LLMs start building new LLMs all by themselves.) They can go back to college and take all of that “might be useful” foundational stuff that they missed, e.g., back to Harvard to study Queering the South:

(A friend’s daughter actually took the above class; she was most recently living in Harvard’s pro-Hamas encampment.) As a follow-on:

If the 28-year-old made so much money in the AI gold rush that he/she/ze/they wants to “give back” by becoming a school teacher, he/she/ze/they can get a Master’s in Education at Harvard and take “Queering Education”:

By the end of the module, students should be able to: (1) Talk comfortably about queer theory and how it can inform our understanding of schools and schooling; (2) identify specific strategies that educators at various levels might use to support students in negotiating gender and sexuality norms; (3) identify tools that schools can use to build positive, nurturing environments, which open up possibilities for complex gender and sexual identity development; and (4) analyze and evaluate a variety of school practices, curricula, programs, and policies that seek to support healthy gender and sexual identity development for U.S. children and adolescents.

Related:

May 31, 2024 update:

Full post, including comments

Where’s the AI customer service dividend?

ChatGPT (launched November 2022) and similar LLMs were supposed to make customer service agents more efficient. Has this happened? From what I can tell, the opposite has occurred. If I call a company that is supposed to be providing service, the inevitable greeting is “we are experiencing higher than normal call volume” (i.e., demand for service exceeds agent capacity, despite the agents now being augmented with AI). When an agent does pick up, he/she/ze/they immediately asks, “What is your phone number?” In other words, the smartest computer systems ever devised cannot use caller ID.

(If Trump gets elected this fall and then, as predicted by the New York Times and CNN, ends American democracy, I hope that he will issue a decree that companies aren’t allowed to announce “we are experiencing higher than normal call volume” more than 5 percent of the time.)

My favorite company for customer service is Hertz. They recently hit my credit card for $262.41 for a 24-hour 29-mile rental of a compact Ford Edge in El Paso. I never signed anything agreeing to pay $262 and their app was quoting $76 including all fees (I picked up the car at an FBO, so there wasn’t the full array of Hertz computer systems on site). When I called Hertz to try to figure out why they charged so much, I learned that they’ve eliminated the option of talking to a human regarding any bill. A human will be happy to make a reservation, but not to answer questions about what could be a substantial credit card charge. Hertz funnels all questions about past rentals to a web form, which they say they will respond to within a few days. Of course, my first inquiry about the bill yielded no response. My second inquiry, a week later, yielded an “everything was done correctly” response. I finally pinged them on Twitter private message. They admitted that they had no signed paperwork with an agreement to pay $262 and issued a refund of about half the money.

Circling back to AI… if LLMs make customer service agents more efficient, why has Hertz needed to shut down phone customer service? And if LLMs are brilliant at handling text, why isn’t Hertz able to respond to contact form inquiries quickly?

Here’s an example pitch from the AI hucksters:

Full post, including comments

Oversupply of mediocre computer nerds in the midst of the AI Bubble

All previous tools that were hyped as making programmers more productive had either no effect or a positive effect on the demand for computer programmers. I would have thought that we would be in a golden age for young computer nerds as every company on the planet seeks to “add AI”, e.g., “Joe’s Drywall and Paint, now with AI”.

The Wall Street Journal, however, says that there is a glut of graduates… “Computer-Science Majors Graduate Into a World of Fewer Opportunities”:

Note the hateful depiction of a non-Black non-female not-obviously-2SLGBTQQIA+ computer wizard (NYT would never make this mistake). Also note “Those from top schools can still get jobs”. In other words, it is the mediocre computer nerds who can’t get hired. Either there has been a huge boom in the number of people who are passionate about computer nerdism or a lot of kids have gone into CS, despite a lack of interest in staring at a screen, because someone told them that it was a sure path to a solid career (this was my experience teaching Information Technology; 90 percent of the students were not even vaguely curious about the subject, e.g., not curious enough to search outside of the assigned materials):

My guess is that, due to lack of interest/passion, 70 percent of CS majors shouldn’t have majored in CS and won’t have lasting careers in CS. They are at best mediocre now and will just get worse as they forget what they were supposed to have learned.

Almost all of the news in the article is bad:

To be sure, comp-sci majors from top-tier schools can still get jobs. Pay, projected to be at about $75,000, is at the high end of majors reviewed by the National Association of Colleges and Employers, or NACE. They are just not all going to Facebook or Google.

“Job seekers need to reset their expectations,” said Tim Herbert, chief research officer at CompTIA, a trade group that follows the tech sector. “New grads may need to adjust where they’re willing to work, in some cases what salary, perks or signing bonus they’ll receive, and the type of firm they’ll work for.”

And while big tech companies are hiring for AI-related jobs, Herbert said, many of those positions require more experience than a new grad would have.

Salaries for this year’s graduates in computer science are expected to be 2.7% higher than last year’s, the smallest increase of eight fields reviewed by NACE.

In the past 18 months, job growth has remained flat for software publishers, a group of employers that includes software developers, according to the Labor Department. On the student jobs platform Handshake, the number of full-time jobs recently posted for tech companies is down 30% from the year-ago period.

$75,000/year?!?! That’s $55,000 per year after Joe Biden’s and Gavin Newsom’s shares (online calculator). About $12,000 of that after-tax $55,000 will be consumed paying for the car that is required to get to the job (AAA and CNBC). Salaries are 2.7 percent higher than a year ago? That’s a pay cut if you adjust for the inflation rate in any part of the country where (a) people want to live, and (b) there are jobs.

I’m wondering if the big problem is the one noted above: AI-related positions require more experience than a new grad would have. Four years of paying tuition should prepare a smart young person for almost any job, including “AI-related” (if not at OpenAI then at some company that is planning to use an LLM via an API to OpenAI or similar). In the late 1990s, colleges weren’t teaching “How to build an Amazon or eBay” (so we developed a class that did, and a textbook) even though it was obvious that employers wanted graduates who could build database-backed web sites. Could it be that the CS curriculum is totally stale once again? Very few of the professors have what it would take to get hired at OpenAI and, therefore, they can’t teach the students what it would take to get hired at OpenAI.
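For reference, “use an LLM via an API” is about a dozen lines of code, which makes the stale-curriculum theory more plausible. A sketch using OpenAI’s Python SDK (the model name, prompts, and drywall company are illustrative):

```python
# Calling a hosted LLM via API: the kind of thing "Joe's Drywall and
# Paint, now with AI" actually needs. Requires OPENAI_API_KEY in the
# environment; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a polite customer-service agent for a drywall company."},
        {"role": "user", "content": "Do you patch water-damaged ceilings?"},
    ],
)
print(response.choices[0].message.content)
```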

I think this confirms my 2018 theory that data science is what young people should study and that data science restores the fun of computer programming that we enjoyed in the pre-bloat days.

Full post, including comments

Maybe cars can teach themselves to drive in the more structured states (the MANIAC book)

I recently finished The MANIAC, a concise novelized biography of John von Neumann bizarrely bolted onto a history of computer programs that dominate chess and go. Somehow the combination works! What I hadn’t realized was how quickly programs that play chess and go can evolve when entirely freed from human guidance. Apparently, in a matter of just a few hours, a program can go from knowing almost nothing about chess other than the basic rules to being able to beat a grandmaster.

This kind of success has famously eluded those who promised us self-driving cars. We’ve gone from failing via humans encoding rules to failing via AI-style training sets of good driving and bad driving (coded by people in India? If you’ve ever been to Delhi or Mumbai, maybe that explains the failure). Benjamin Labatut (the MANIAC author) reminds us that when the situation is sufficiently structured, computers can learn very fast indeed.

Returning from a helicopter trip from Los Angeles to Great Barrington, Maskachusetts, my copilot commented on the chaos of road markings as we entered Cambridge. “Are there three lanes here or two?” he asked. This is a question that wouldn’t be posed in most parts of Texas or Florida, I’m pretty sure, and certainly not on the main roads of the Netherlands or Germany.

Instead of the computer promising to handle all situations, I wonder if “full self-driving” should be targeted to the states where roads are clearly structured and marked. Instead of the computer telling the human to be ready to take over at any time for any reason, the computer could promise to notify in advance (via reference to a database, updated via crowd sourcing from all of the smart cars) that the road isn’t sufficiently structured/marked and tell the human “I won’t be able to help starting in 30 seconds because your route goes through an unstructured zone.” The idea that a human will be vigilant for a few months or even years waiting for a self-driving disconnect that occurs randomly seems impractical.

The MANIAC suggests that if we shift gears (so to speak) and redefine the problem as self-driving within a highly structured environment, a computer could become a better driver than a human in a matter of weeks (it takes longer to look at videos than to look at a chess or go board, so it would be weeks and not hours). We might not be able to predict when there will be enough structure and enough of a data set and enough computer power for this breakthrough to occur, but maybe we can predict that it will be sudden and that the self-driving program will work far better than we had dreamed. The AI-trained chess and go systems didn’t spend years working their way into being better than the best humans, but got there from scratch in just a few hours by playing games against themselves.
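Here’s a minimal sketch of the handoff logic I’m imagining; the segment data and every name in it are hypothetical, and the real work would be building the crowd-sourced structure database:

```python
# Hypothetical sketch: warn the driver ~30 seconds before the route enters
# an "unstructured" zone, per a crowd-sourced road-structure database.
# All names and data here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    length_m: float
    structured: bool  # crowd-sourced: lanes clearly marked, signage consistent

def seconds_until_unstructured(route, speed_mps):
    """Return seconds until the first unstructured segment, or None."""
    distance = 0.0
    for seg in route:
        if not seg.structured:
            return distance / speed_mps
        distance += seg.length_m
    return None

route = [Segment("I-90", 900, True),
         Segment("Cambridge surface streets", 400, False)]
eta = seconds_until_unstructured(route, speed_mps=30.0)
if eta is not None and eta <= 30:
    print(f"I won't be able to help starting in {eta:.0f} seconds.")
```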

Regardless of your best estimate as to when we’ll get useful assistance from our AI overlords, I recommend The MANIAC (note that the author gives von Neumann a little too much credit for the stored-program computers that make the debate regarding self-driving possible).

Separately, based on a visit to the Harvard Book Store, here’s what’s on the minds of the world’s smartest people (according to Harvard University research)…

Full post, including comments

Why doesn’t ChatGPT tell us where to find items in our houses?

American houses have gotten larger:

Thanks to the hard-working folks in China and at Walmart, stuff has gotten cheaper. The result is that we live in large environments crammed with stuff. This makes it tough to find one’s keys, the cup of coffee that one recently set down, etc.

Instead of AI taking over the creative jobs of writing poetry and making art, why not have AI watch everything that happens in the apartment or house (video, and inferences from the video, stored locally for privacy) so that we can say “Yo, ChatGPT, where did I leave my keys?” If we get in the habit of showing documents that arrive in the mail to a camera, we can also ask the AI to remind us when it is time to pay a property tax bill or ask where we left an important document.
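The query side of this is trivial once a local vision model produces sightings. A sketch (the detection pipeline itself is assumed, and all names are made up for illustration):

```python
# Hedged sketch of "where did I leave my keys?": a local vision model
# (not shown) emits (timestamp, object, location) sightings; answering
# the question is then just a lookup of the most recent sighting.
from datetime import datetime

sightings = []  # appended by the camera pipeline, stored locally for privacy

def record(obj, location):
    sightings.append((datetime.now(), obj, location))

def last_seen(obj):
    matches = [s for s in sightings if s[1] == obj]
    return max(matches, default=None)  # newest timestamp wins

record("keys", "kitchen counter")
record("keys", "hallway table")
when, _, location = last_seen("keys")
print(f"Keys last seen: {location} at {when:%H:%M}")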

This could be rolled into some of the boxes that UniFi makes. They already make sensors for the house:

They claim to have “AI” in their $2500 “DSLR” PoE camera (only 18 watts):

Their basic cameras are $120 each. If the basic cameras are good enough, this should be doable on top of the UniFi infrastructure for perhaps $300 per room plus whatever the central AI brain costs.

Speaking of UniFi, I’m wondering why they don’t sell a combined access point/camera. If the customer has just a single CAT 5/6 wire to the back yard, wouldn’t it make sense to have the same PoE-powered device handle both security and WiFi? As far as I know, there isn’t any combined camera/AP.

(I’m still using the TP-Link Omada system that I bought because UniFi’s products were all out of stock. See TP-Link Omada: like a mesh network, except that it works (alternative to UniFi). Everything works, but they don’t seem to be trying to expand beyond networking as UniFi has. Maybe when WiFi 8 comes out it will be time to trash all of the working WiFi 6 Omada gear and get with the UniFi/Ubiquiti program.)

Speaking of houses, here’s a recent New York Times article informing us regarding what a typical American “couple” looks like (the word is used 11 times in the article)…

Full post, including comments

Why isn’t ChatGPT inside our refrigerators?

Some years ago people envisioned a refrigerator that would track contents via RFID and alert a consumer to being low on milk or whatever. Making this a reality would have required cooperation among all of the companies that make packaged food (to add the RFID tags) so of course it never happened.

A human can inventory a fridge. Anything a human can do ChatGPT can do better, or so we’re told. If a fridge costs $15,000 (see Sub-Zero refrigerator with R600a owner’s review), why can’t it use a handful of inexpensive video cameras to look at everything going in and out in detail? It can make some good guesses about quantities, e.g., every time the eggs are removed there will be three fewer eggs remaining in the carton (a guess to be refined with household experience; once the carton stops being returned to the fridge, assume the egg count is zero). The in-the-fridge AI could email with a list of expired stuff to throw out and a list of stuff to buy. It could email at 3 pm every day with a suggestion for what to cook for dinner given the ingredients present in the fridge, adding critical items via an Instacart order if approved.
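The bookkeeping here is almost embarrassingly simple, which is the point; the hard part is the vision, not the logic. A sketch of the inventory heuristic (quantities, dates, and names are illustrative):

```python
# Minimal sketch of the in-fridge inventory heuristic described above:
# decrement counts on each observed removal, and assume zero eggs once
# the carton stops coming back. All values are illustrative.
from datetime import date

inventory = {"eggs": 12, "milk_ml": 1000}
expirations = {"milk_ml": date(2024, 6, 1)}
EGGS_PER_REMOVAL = 3  # refined over time from this household's habits

def carton_removed(returned):
    if returned:
        inventory["eggs"] = max(0, inventory["eggs"] - EGGS_PER_REMOVAL)
    else:
        inventory["eggs"] = 0  # carton never came back: assume empty

def shopping_and_tossing(today):
    buy = [item for item, count in inventory.items() if count == 0]
    toss = [item for item, expiry in expirations.items() if expiry < today]
    return buy, toss

carton_removed(returned=True)   # 12 -> 9
carton_removed(returned=False)  # carton kept out: assume 0
print(shopping_and_tossing(date(2024, 6, 15)))  # (['eggs'], ['milk_ml'])
```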

“New AI-powered fridge technology generates recipes based on diet, food on its shelves” (Deplorable Fox) describes a Samsung fridge introduced at CES 2024, but it turns out to be not that smart:

The fridge’s technology also reportedly enables users to add expiration dates for items purchased, and the refrigerator will alert them once that expiration date is near.

Why is it the human’s job to read expiration dates off the packages? Why can’t the brilliant AI do that? Let’s give some credit to Samsung, though, for including an epic 32-inch TV on the $4500 fridge:

So the Samsung fridge is missing the Instacart ordering support, I think, as well as the automation of ferreting out expired food.

Full post, including comments

Robot to rake and clean a Zen garden?

Happy First Day of Spring! If you’re in a northern lockdown state it is presumably time to think about gardening.

On a recent visit to Morikami, an enormous-by-Japanese-standards Japanese garden run by Palm Beach County (see Should Palm Beach be renamed Elba? for some background on the donor), it occurred to me that a great university robotics project would be an affordable machine that could maintain what stupid white people call a “Zen garden” (karesansui or “Japanese dry garden” is the correct term). It’s incredibly labor-intensive to pull out the leaves and re-rake the stones. Even in a country with open borders it would be very expensive to have a Zen garden at home that was maintained to a Japanese temple’s standard.

Here is a photo of the primary Morikami dry garden:

There are also some smaller areas and sections that use this material/technique:

The leaves need to be picked up a lot more frequently than the stones need to be re-raked, right? Does that mean that an aerial drone is required to pick up the leaves without disturbing the raking?

Using expensive industrial robots to rake the gravel has been done, but the video below shows them working in a leaf-free indoor environment:

What would it look like to build something that a consumer who wanted a backyard Zen garden could afford? With fat enough tires, could the leaf removal be done without disturbing the raking? Or maybe, if the raking is done by robot, the right answer is to forget the drone and do a drive-around leaf removal and then re-rake every morning at 6 am.

Full post, including comments

Claude vs. ChatGPT on pit bulls vs. golden retrievers

A recent New York Post story, “3-month-old baby mauled to death by family pit bull in NJ”:

I asked Claude, Anthropic’s $7 billion baby, whether the choice of pit bull was unwise:

We have a 3-month-old baby. We want to get a dog. If the baby’s safety is our main concern, should we adopt a pit bull or a golden retriever?

If the baby’s safety is the main concern, it would be advisable to adopt a golden retriever rather than a pit bull when getting a dog for your home with a 3-month-old infant.

While it’s true that any dog can potentially bite if mishandled or improperly trained, certain breeds are generally considered better suited for homes with small children. Golden retrievers are renowned for their friendly, gentle, and patient temperament, especially around children.

On the other hand, pit bulls were historically bred for blood sports like bull-baiting and bear-baiting, which has shaped their tendency toward aggression if not properly socialized and trained from a very young age. Even a well-trained pit bull’s powerful jaws can inadvertently cause severe injury to a fragile infant during play or accidental incidents. …

ChatGPT, on the other hand, is strong on political correctness. In ChatGPT’s world, genetics, whether dog or human, does not determine intelligence, personality, behavior, etc.:

When considering adding a dog to a family with a young baby, the primary focus should indeed be on safety, alongside the temperament, energy levels, and size of the dog. It’s important to note that individual dogs can have personalities that differ from the general characteristics of their breed, but there are some general tendencies worth considering. … Pit Bulls, a term often used to describe a variety of breeds including the American Pit Bull Terrier, the American Staffordshire Terrier, and others, can also be loving and loyal pets. They are known for their strong attachment to their families. … Socialization and training are crucial for Pit Bulls, as with any dog, to ensure they are well-behaved around children and other animals.

Ultimately, the decision should be based on thorough research and consideration of the specific dog’s personality and the family’s lifestyle and ability to meet the needs of the dog. Consulting with a veterinarian, a professional dog trainer, or a breed rescue organization can provide additional guidance tailored to your situation. Regardless of breed, bringing a dog into a home with a baby requires commitment to training, socialization, and creating a safe environment for all.

ChatGPT’s advice seems particularly clueless regarding human nature. How many people consult “a veterinarian, a professional dog trainer, or a breed rescue organization” before adopting a dog?

Let’s go back to Anthropic’s Claude…

Prompt: Our neighbor has a golden retriever named “Mindy the Crippler”. Should we be concerned about her having aggressive tendencies?

Yes, I would be very concerned if your neighbor’s golden retriever is named “Mindy the Crippler.”

While golden retrievers are generally known for being friendly, gentle family dogs, a name like that implies the dog may have been trained for aggression or have exhibited aggressive tendencies in the past.

A few key reasons why that name raises red flags:

  1. “Crippler” is not a typical pet name, and connotes inflicting harm or damage on something/someone.
  2. Giving an aggressive name to a dog can actually reinforce and encourage aggressive behavior in that dog.
  3. Even though goldens are a typically non-aggressive breed, any individual dog can become aggressive if improperly trained or treated.
  4. You don’t know the history of how this particular dog was raised and trained by your neighbors.

While I don’t want to assume the worst, that name suggests at the very least poor judgement by the owners. At worst, it could indicate the dog has a history of aggressive behavior that has been reinforced rather than corrected.

For the safety of your infant, I would recommend speaking to your neighbors and expressing your concerns, especially if you plan on having your baby around this dog at all. Don’t accuse, but explain that the name worries you since infant safety is the top priority. See what they say about the dog’s temperament and history.


Related… “Bruiser” from Legally Blonde:

Full post, including comments

How does an LLM company spend $7 billion per year?

Anthropic, an OpenAI spin-off, raised $7.3 billion last year and will soon need to raise more money (NYT). Some of the money came from Effective Altruist and Biden supporter Sam Bankman-Fried (due back in court on March 28), but even if Mr. Bankman-Fried helped, it is tough to understand how any startup can spend $7 billion per year. Is the money for overseas contractors who will click thumbs up or thumbs down to train the model? For renting GPU time? Wikipedia says that the company had 160 employees in mid-2023, so each employee had to spend an average of $44 million, the price of a starter home on Palm Beach.

Let’s see how well Anthropic’s Claude works. I previously asked ChatGPT “Write a college application essay that describes how race affected the applicant’s life” to obtain a document to facilitate a Supreme Court-compliant admission-via-skin-color. Here’s Claude’s answer:

Prompt: Describe an experience that helped shape who you are today.

Racing for Understanding

As a biracial kid growing up in a predominantly white suburb, I often felt like a square peg being forced into round holes. With a Black father and white mother, I looked different than most of my peers and had a unique perspective right from the start. Sometimes it felt like I was walking a tightrope, trying to find that delicate balance of fitting in while still celebrating my multi-racial identity.

From an early age, innocent questions from classmates forced me to grapple with complex issues of race and identity. The dreaded “What are you?” inquiry made me feel like the ultimate “other.” I never knew quite how to respond when kids pressed me to check one box: black or white.

At home, my parents did their best to honor and educate us about both sides of our racial heritage. We attended Baptist church services taking us straight into the heart of Black spiritual traditions. My mother struggled to tame my kinky curls while my father taught me how to defuse questions about my appearance with humor and confidence. Looking back, I realize my parents were doing the important work of helping me develop a positive self-image and sense of self-worth.

By middle school, I started to detect coded racial attitudes from some teachers and administrators. I felt unfairly profiled and penalized, receiving harsher punishments for minor infractions than my white peers. The message seemed to be that as a Black male, I was inherently problematic. That crushed my youthful spirit and planted seeds of resentment within me.

When we finally moved to a larger, more diverse city for high school, I felt liberated. At last, I could just be myself without constantly filtering my Blackness through a white lens. I became bolder in asserting my identity as a proud, multi-racial man. While I still had to navigate discriminatory speed bumps and rampant stereotyping, I gained critical tools for self-love and advocacy.

The older I get, the more I lean into my biracial heritage and the wisdom it has given me about our country’s grave racial divides and inequities. My goal is to help build deeper understanding, trust and resolution across all racial and cultural lines. My intersectional perspective allows me to relate to multiple worldviews, a skillset America desperately needs more of.

My multiracial journey has been one of constantly defining and redefining my identity on my own terms. In the process, I’ve become more compassionate, open-minded and determined to create a society of true equality and belonging for everyone. This pursuit of racial healing will be the driving force of my life’s work.

I tried ChatGPT with home repair. Let’s see what Claude says for the same prompt: “Our Bosch dishwasher stopped and is displaying an E25 error code. What should I do?”

(This answer is mostly wrong.)

It looks as though Claude’s knowledge is frozen in 2022 or early 2023. In trying to get Claude to take heretical points of view regarding race discrimination, part of its answer said that a decision from June 2023 was still in the future:

In the case Students for Fair Admissions v. Harvard (2022), the Supreme Court heard arguments about whether Harvard’s race-conscious admissions practices violated civil rights laws by discriminating against Asian American applicants.

While the Supreme Court has not issued a final ruling yet, during oral arguments several of the conservative justices expressed skepticism about continuing to allow race to be considered in admissions at all.

Despite the massive spending, in other words, Claude is not being kept up to date.

Full post, including comments