Predicting the effects of AI on labor markets by looking at container shipping (a book about the Box called The Box)

I hope that everyone who celebrates is having a meaningful National Supply Chain Integrity Month.

I recently listened to The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger by Marc Levinson, against whom I was somewhat prejudiced because he has worked as an editor for The Economist, a magazine written and edited by experts who never met a human.

It’s a surprisingly interesting book on an uninteresting subject. The author is an economist, so he is especially weak on the engineering challenges of building ever-larger container ships and cranes. Nonetheless, we do learn about some of the engineering that went into designing the containers themselves, e.g., the corner connectors and figuring out how to support the weight of additional containers piled on top.

Like AI today, container shipping was perceived in its early days (1960s) as potentially saving a huge amount of human labor, especially dockside. Breakbulk shipping required large crews of longshoremen working for days to pack items into cargo holds, thus giving the mariners a relaxed week in port. Because longshoremen were usually unionized and had the power to shut down ports completely, they were able to negotiate the transfer of a massive share of the expected profits from container shipping to their members, either for not working at all or for working part-time. Non-unionized workers in the breakbulk shipping industry were completely out of luck. Future workers were also out of luck. As members of longshoremen unions died, the benefits of the union contract flowed only to those who were still alive and/or working. Crane operators in Los Angeles can make over $300,000 per year, but there aren’t many of them.

Practical advice for young people: Get a union job now and in a union that can shut down something important to the rest of the economy and/or the public. If containerization is any guide, unionized schoolteachers will be able to keep their wages even if Optimus can teach better. It would be ideal if one could think of a union that can shut down all AI data centers, but I am not sure there is one. Maybe the people who handle cooling? Even then, however, the data centers theoretically have the right to hire replacement workers during a strike. (School districts have this right too and it would be trivial to hire some adults to take over teaching/daycare responsibilities, but they don’t do it because, I guess, the union and the people who run the city are part of the same political party.)

Container shipping caused a massive shift in employment. Docks and their associated jobs in Manhattan and Brooklyn disappeared. So did factories that had been close to the docks in order to facilitate shipping to Europe. The replacement was Port Elizabeth in New Jersey, set up in 1963 to handle containers for Malcom McLean’s Sea-Land. The factories moved to Upstate New York, Pennsylvania, and Connecticut because with container shipping they just needed to be able to put a box on a railroad car headed for Port Elizabeth.

The advent of container shipping did not highlight the merits of technocratic government or credentialed experts. Governments, armed with expert advice and forecasts, were investing huge quantities of tax dollars in wharves for breakbulk ships just as the container boom was becoming established. Experts predicted minimal savings and disruption from containerization, perhaps partly due to government regulations that stifled the growth of the industry. Until President Gerald Ford kicked off the deregulation trend in the U.S., rates for shipping via rail and truck were set by a central planning agency (the ICC). International shipping rates over water were similarly regulated by a combination of bilateral agreements, cartels among shipping lines, etc. The rate to ship a load of refrigerators, for example, might be the same whether they were in a container or not. It wasn’t until the 1980s that the full benefits of containerization began to be experienced by shippers and consumers. In other words, government intervention in the market delayed the benefits of the technology by 15-20 years.

As with AI, which has made receptionists at NVIDIA richer than 99% of the people who live in Michigan (Detroit was once the richest city in the US and maybe the world!), the benefits of container shipping haven’t been equally distributed. A privately-owned non-union (at the time) port in Felixstowe took away thousands of jobs from unionized government-owned ports in other parts of the UK. Intelligent and efficient countries got dramatically richer, e.g., Singapore and the Netherlands, while countries that couldn’t get organized were left much farther behind than in the breakbulk days, when everyone was inefficient. It’s almost free to ship cargo among the world’s leading container ports and expensive/slow to ship cargo to places that aren’t regularly visited by big ships. The cycle tends to be a virtuous one: because Panama has a busy container port, it’s also the logical place to put factories that divide up and repackage pharmaceuticals for re-export to other Latin American countries. Being a landlocked country was already bad, but the penalty increased with containerization. (Our family experienced this with roofing tile. We got $30,000 of clay tiles from Spain, including container shipping and a truck ride up from Miami. It was going to cost $16,000 for tiles from Ohio…. just for the shipping.)

Consider Haiti, one of the world’s most violent and dysfunctional societies (which is why the U.S. is eager to import as many people from this society as possible?). It also has a violent and dysfunctional container port. UNICEF:

Armed groups breached the city’s main port a week ago, severing one of the capital’s last remaining lifelines for food and supplies as the country edges closer to collapse. Currently, over 260 humanitarian-owned containers are controlled by armed groups at the port.

Even if the rest of Haiti weren’t violent and dysfunctional, no factory could be set up profitably given the violent and dysfunctional nature of the port.

Moving over to the most functional country in Sub-Saharan Africa… “South African ports still rank among worst in the world” (BusinessTech 2025):

State-owned Transnet Port Terminals is pouring investment into cranes and new equipment after years of corruption and mismanagement that eroded the quality of its operations. The Container Port Performance Index from 2020 to 2024 took note of the upgrades and measures, including better weather forecasting at two local facilities. … Still, Cape Town was 400th in the survey, with Coega and Durban the penultimate and last of the 403 ports ranked.

The countries with high-ranked container ports are likely to be more advantageous spots for factories, at least the parts of those countries connected by good rail links to the efficient ports. Note that even today it can cost more to ship a container a few hundred miles by rail than thousands of miles by ship.

Also interesting from the above-cited report, what happens when you compare the best that Americans can do, considering all union and cultural factors, to our brothers, sisters, and binary-resisters in Asia?

(Note that some of this inferior performance might be robot vs. human. American unions have had a lot of success in obstructing the installation of automation at our ports.)

So… if the AI revolution turns out to have dramatic economic effects, as predicted, the benefits will be radically unequal. Maybe Californians won’t complain so much about inequality if it turns out that nearly all of the wealth of the U.S. ends up in California as a result of the AI economy? Will they be eager for federal tax policy that plucks wealth from California AI Achievers and pays it out to Left-Behind Mainers and Michiganders?

Can we predict the people and places that the AI boom will enrich the most? I hope that SE Florida will be fine, even if money is earned elsewhere in the U.S., thanks to the spectacular mismanagement and consequent high taxes of a lot of other parts of the U.S. (Maybe Jensen Huang will eventually retire and bring his personal $trillions to tax-free Florida?) California is an obvious candidate for a place where a lot of individuals will keep getting richer, but mostly the rich AI nerds will leave the other 40 million Californians in the dust.

Maybe the answer is that AI is most useful to the smartest humans and, therefore, the big winners from AI will be the smartest humans and places where smart humans cluster. This was the core point of the book The Bell Curve, improperly characterized as a book about IQ as a function of race. In fact, the main point is that, unlike in medieval times, the modern industrial economy delivers enormous rewards to the smartest people. A potato-picking peasant in 1500 who happened to have an IQ of 130 wasn’t going to earn a lot more than his counterpart with an IQ of 100. If AI accelerates the trend identified by The Bell Curve then maybe Korea, China, Japan, Singapore, and Taiwan will be the ultimate winners. (Average IQ in the U.S. started to fall shortly after our post-1965 opening of our borders to immigrants from countries with lower-than-100 IQs.)

Containerization was invented by an American. The first purpose-built container ships were built in U.S. shipyards. All of the early leaders in container shipping were American companies. One of the biggest early adopters of container shipping was the U.S. military (to support our ultimately futile efforts in South Vietnam). Today, however, the U.S. is insignificant in building and operating container ships. Merely because the world’s current AI leaders are in the U.S. we shouldn’t be complacent!

Readers: Who wants to make some predictions?

Fresh on X today, from the Financial Times, about how AI makes the cognitive elite more elite (i.e., another reason why the majority of Americans will eventually vote for everything that Bernie Sanders, Elizabeth Warren, and AOC propose):

(Photos from Panama, 2023)

Full post, including comments

What do we tell the AI-fearful?

A lady here in Florida asked me to reassure her that AI wouldn’t kill all humans. “I’ve seen the movies about Skynet, so I know what could happen,” she said. I reflected that an AI fed on a diet of the New York Times and Greta Thunberg could easily come to the conclusion that humans were destroying the planet via CO2 emission and, therefore, the best course of action would be to kill all humans. She responded, “You’re not making me feel better.”

What is the correct answer? If AI is embedded in networked robots and at least one robot is walking near every group of humans, what stops the robots from killing us in a coordinated attack? Maybe the AI will study the Lebanese Civil War in which 150,000 people were killed by their neighbors due to “religious diversity” and say “I can do a more thorough job than the Lebanese did with their neighbors back in 1990.”

Full post, including comments

AI catastrophists will feel better if they become climate catastrophists?

A friend in San Francisco is an AI catastrophist, at least as far as the economy is concerned. He’s not worried about robots taking over and, after reflecting on the damage that humans say that humans have caused to Mother Earth, killing all of the humans. He’s concerned about the value of his three-unit building in San Francisco. I said, “Why don’t you become a climate change catastrophist? You won’t have to worry about the trajectory of the U.S. economy if all cities except Denver are inundated by meltwater from Greenland and Antarctica.”

As a starting point towards transitioning (always a beautiful process!) from AI Doomer to Climate Doomer, here’s a 2015 article on how even Orlando (100′ above sea level) is doomed once Greenland and Antarctica melt:

Full post, including comments

AI economy terminates as petrostate?

A friend owns a three-unit building in San Francisco (rents carefully controlled by qualified government officials!) and is concerned about the value of his place if AI eliminates most programming jobs. My response was to consider the extreme case that 100 percent of GDP is generated by AI and robots. In that case, the economy becomes like a petrostate, e.g., Kuwait. The efforts of human residents aren’t significant economically compared to oil flowing or NVIDIA chips cogitating. In petrostates, however, the rulers don’t expel most or all of the citizens (like Bhutan did!), but instead use whatever money isn’t stolen by elites to house and feed everyone. Thus, in the typical petrostate, real estate still has substantial value. “Maybe everyone in San Francisco will be on Section 8 and the rent will be paid by the government instead of individuals,” I said, “but you’ll still get rent.”

How will the government get revenue? Petrostates often nationalize the oil industry, as Venezuela did in 1976 and 2007 (Hugo Chavez for the win!). If most of the wealth and income of the U.S. ends up in the hands of the owners of NVIDIA, Anthropic, et al., the government can simply nationalize the top 20 most successful AI-related companies. (We can see a half measure of this right now with Bernie Sanders and Ro Khanna proposing to harvest 5 percent of billionaire wealth every year.)

In other words, it doesn’t make sense to be an AI Doomer on economic grounds because being a citizen of a typical petrostate isn’t terrible (let’s ignore Venezuela for the moment!).

Let’s check in on a petrostate that has been shooting down U.S. fighter jets recently. Kuwait is rich, though not quite as rich per capita as it once was. It looks like they’ve grown the denominator via population growth and, thus, each individual’s share of the oil income has been reduced.

(Note that U.S. politicians, beginning with Lyndon Johnson in the mid-1960s, have been working desperately to grow U.S. population via immigration, exactly the opposite of what makes sense if our destiny is that most wealth comes from something that functions like an oil well.)

Full post, including comments

AI and Coding class at MIT: What should undergraduates be taught?

Back in January, I attended an “AI and Coding” class:

There were a couple of videographers present so I’m hopeful that eventually the lectures will appear on YouTube as some previous events in this “Expanding Horizons in Computing” series have.

During the intro, we were reminded that the first thing computer nerds want to do is get rid of computer nerds:

Inevitably, though, there have been haters. Alan Perlis:

Tim Kraska was the speaker who’d done the most to determine what LLMs can do. His grad students spent 2.5 months and about $100,000 in Claude API fees replicating the capabilities of the 500,000-line DuckDB embedded database management system but in a different language (I forget which! Sadly, not Lisp). It seems that for a complex project like this, the only people who can tell AI what to do are those who could do it themselves if they had to.

Continuing Carnegie Mellon’s tradition as “the useful place in CS” (CMU gave us the Mach kernel, for example, which is inside nearly all Apple products), Graham Neubig talked about his experience building and using OpenHands, a system a little like Google’s Antigravity in that you can tell your “agents” to write software for you and the editor connects to the LLM of your choice. Prof. Neubig demonstrated using OpenHands to build a web site for the MIT event and the results were impressive!

How good are LLMs in practice? Contrary to my own experience where LLMs are amazing at diving into huge legacy codebases and telling the human “these are the relevant files”, AI felt good but actually slowed human programmers down:

AIs cut and paste like crazy, eventually producing code with so many duplicate blocks that only an AI will be able to make a change consistently through a real-world system:

I enjoyed the bathroom break. The smartest humans on the planet need a lot of coaching for the operation of sophisticated machinery:

Based on the period products in what was labeled a “men’s” room, the world’s smartest people are going to struggle with the “What is a woman?” question:

Speaking of bathrooms, the ground floor restroom signs are already falling apart in the nearly-new building. Fortunately, the sacred word “inclusive” hasn’t been marred.

Towards the end of the day, Varun Mohan showed up via Zoom to make the academics look like fools. While they were dithering to get a few papers published and secure a lifetime guarantee of employment at a wage that is 1/50th of what a receptionist at NVIDIA earns, an apparent teenager glued together a few open-source developer tools and added LLM integration to create Windsurf, which Google then acquired in a non-acquisition for $2.4 billion. The result is Antigravity (see Antigravity as web developer (AI in an IDE)).

What nobody could offer at the event: A clear explanation of what skills made a person a good software developer in the Age of AI and, therefore, what an undergraduate CS program should teach. On the other hand, the slides did offer a clear picture of what a typical human software engineer looks like: female and, usually, non-Asian.

For the haters who say that there is no “science” in computer science… I learned about Science starting during the walk to the event. The most prominent sign at the hardware store in Inman Square:

Not Science-related, but I love seeing Black Lives Matter signs and here a commercial property owner had devoted a huge amount of space to one. (See Replacement of Black workers by migrants in Cambridge, Massachusetts for how the city’s merchants have kept the signs and discarded the people.)

Of course, there were the sidewalk maskers:

And the cycling maskers (note the filthy snow and trash in the background):

In the MIT event space there were 6 people sitting in front of me, 3 of whom were masked for the entire day. Here are a couple of them (I’m not sure that I understand the rationale):

The next day at the Harvard Art Museum, Arthur M. Sackler section (don’t forget that before developing ties to Jeffrey Epstein, Harvard was entwined with the Sackler family), we saw an apparent couple in which one person doesn’t wear a mask while the other does. This has always mystified me. Partner 1 is protected by his/her/zir/their mask so only Partner 2 gets infected by SARS-CoV-2 at the public venue. Then they go home and, without masks, share a confined space for days during which time the virus can trivially hop from Partner 2 to Partner 1.

Also a mystery: the person who is afraid of catching a respiratory virus and chooses a job with guaranteed exposure to hundreds or thousands of strangers each day. The mask is great protection, I’m sure, but wouldn’t it be far safer to wear the mask while working in a regular office or warehouse with just a handful of other employees nearby?

Maybe one day an LLM will be able to explain these choices?

Full post, including comments

Product Idea: AI-fueled web site crawler that highlights HTML/CSS failures

In working with John Patrick Morgan to see what capabilities today’s LLMs have for doing web page design and upgrades of existing web pages/sites, it occurred to me that there is a gap in the marketplace. As far as I can tell, there isn’t a good tool that a publisher can run after every CSS tweak that will crawl the entire site and look for pages that were uglified by the tweak, broken by the tweak, never responded to the tweak because they reference some other CSS file or no CSS file, etc.

There are plenty of tools that will look for changes to pages, but they consider every change to be a mistake. If the publisher has intentionally decided on a fresh new look for the site, these tools will flag every page on the site (could be tens of thousands or hundreds of thousands for a newspaper, for example) as being defective.

Gemini says that these are the best current options:

1. The Best “Set it and Forget it” Tool: Percy (by BrowserStack)

Percy is likely the closest to what you are asking for because it integrates rendering into the crawling process.

  • How it works: You give it your sitemap. It visits every URL on both desktop and mobile (simulated iPhone/Pixel/iPad viewports).
  • The “AI” part: It uses “Smart Diffing.” It ignores “noise” (like a pixel moving 1mm due to browser rendering differences) and only alerts you to actual visual changes (e.g., your navigation bar collapsing into two rows instead of one).
  • Why for you: It handles the “entire site” requirement well if you feed it a list of URLs or a sitemap.

2. The “Smartest” AI: Applitools Eyes

Applitools is the leader in “Visual AI.” It attempts to mimic the human eye and brain.

  • The “AI” part: It is smart enough to know the difference between Content changes (a date changing from yesterday to today) and Layout disasters (text overlapping an image).
  • Unique Feature: It can sometimes detect “broken” layouts even without a baseline if you use their specific “Layout Match Level” which looks for structural integrity, though it is primarily a comparison tool.
  • Mobile: It uses the “Ultrafast Grid” to render your pages on dozens of real mobile devices in seconds.

3. The “Generic Brokenness” Finder: BrowserStack Responsive Checker

If you don’t want to set up baselines and just want a tool to crawl the site and show you screenshots of every page on an iPhone X, this is the tool.

  • How it works: You enter your URL (or a batch of URLs). It spins up real devices in the cloud, loads your page, and takes a high-res screenshot.
  • The downside: It doesn’t automatically “flag” errors (it doesn’t know that the image shouldn’t be there). You still have to scroll through the gallery of screenshots yourself, but it’s much faster than checking on your phone manually.

4. The Free / Open Source Option: BackstopJS

If you are comfortable with the command line and want a free solution you can run on your own machine.

  • How it works: You create a backstop.json file with a list of your URLs.
  • The Workflow:
    1. Run backstop reference (This crawls your current site and takes screenshots of every page).
    2. Apply your CSS changes.
    3. Run backstop test (This crawls again and compares).
  • Result: It generates an HTML report showing you “diffs” (difference highlights) for every page that changed.

My Recommendation for Your Upgrade

Since you are retrofitting a legacy site, BackstopJS (if you have dev skills) or Percy (if you want a paid service) is the ideal workflow:

  1. Run a scan NOW (with the old CSS). This is your “source of truth.”
  2. Upload the new CSS.
  3. Run the scan again.
  4. The tool will instantly show you exactly which 5 pages (out of 100) broke because they had unique HTML that the new CSS didn’t account for.
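For concreteness (my addition, not part of Gemini’s answer): a minimal backstop.json for the workflow described above might look like the following. The URLs, viewport sizes, and scenario labels are placeholders to be replaced with your own site’s pages.

```json
{
  "id": "site_redesign_check",
  "viewports": [
    { "label": "phone", "width": 375, "height": 667 },
    { "label": "desktop", "width": 1280, "height": 800 }
  ],
  "scenarios": [
    { "label": "home", "url": "https://example.com/" },
    { "label": "article", "url": "https://example.com/some-article.html" }
  ],
  "paths": { "bitmaps_reference": "backstop_data/bitmaps_reference" },
  "report": ["browser"],
  "engine": "puppeteer"
}
```

With this file in place, the workflow is backstop reference (old CSS), deploy the new CSS, then backstop test to get the per-page diff report.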

Me:

You say that Percy will highlight changes to pages, but if the goal of a site redesign is to change the look of the site won’t that result in Percy flagging every page as dramatically different? What would be useful is a tool that flags only those pages that have been broken or disfigured by the change, not those that have simply changed from one style to another.

That is a very sharp observation. You’ve hit on the main “pain point” of visual regression testing: when you intentionally change the design, every page becomes a “failure” in the eyes of a standard pixel-comparison tool.

To solve this, the industry has moved toward Visual AI and Layout-Only testing. Here is how you can use tools to find “broken” pages rather than just “different” pages.

1. Applitools “Layout Match” (The Intelligence Approach)

Applitools is currently the leader in this. Instead of looking at pixels, their AI builds a structural map of the page.

  • How it helps you: It has a “Layout” match level. This mode ignores changes in colors, fonts, and specific images. Instead, it checks if the relationships between elements are still healthy.
  • What it flags: It won’t flag that your text is now blue instead of black. It will flag if your text is now overlapping an image, if a column has dropped to the bottom of the page unexpectedly, or if a button is partially hidden.
  • The Workflow: You run it once on your old site to let the AI learn the “structure” of your content, then run it on the new site.

2. Galen Framework (The “Rules-Based” Approach)

If you want something that specifically looks for “disfigured” pages without needing a baseline at all, Galen is a powerful open-source tool.

  • How it works: You write a simple “spec” file that describes what a “correct” page looks like.
    • Example: “Image should be above text on mobile,” or “Main container should never be wider than the screen.”
  • Why it fits your goal: You aren’t comparing the new site to the old one. You are testing the new site against the rules of good design. If a page has a horizontal scrollbar or a margin that is 0px wide on a phone, Galen will fail that specific page.

3. Percy’s “Grouped Diffs” (The Efficiency Approach)

Percy knows that a CSS change will break everything. To keep you from losing your mind, it uses Diff Grouping.

  • How it helps you: If you change the global margins-and-ads.css and it shifts the header on 400 pages, Percy will group all 400 pages into a single “change set.”
  • The “Broken” Finder: You look at the first page in the group. If it looks “correct” in its new style, you hit Approve All. If 398 pages look great but 2 pages look “disfigured,” the AI often detects that those 2 pages are “outliers” (different from the rest of the group) and flags them for individual review.

4. “VisualEyes” (User Attention AI)

This is a slightly different type of AI. It doesn’t look for broken code; it looks for broken UX.

  • How it works: It generates “Attention Heatmaps” based on your new design.
  • Why it’s useful: If your CSS change accidentally makes a “Google Ad” look more prominent than your “Navigation Menu,” the AI heatmap will show a massive red blob over the ad. It flags pages where the visual hierarchy has been “disfigured” in a way that will confuse a human.

Summary Recommendation:

If you want a tool that “just knows” a page is broken:

  1. Applitools (Layout Mode) is the most “AI” way to ignore style changes but catch structural breaks.
  2. Google Search Console / Lighthouse (Free) is the simplest “No-Baseline” way. You can run a Lighthouse “Accessibility” and “Best Practices” audit on your site; it will automatically flag “Touch targets too close together” or “Content wider than screen”—the two most common ways a mobile site is “disfigured.”

[after a follow-up]

By default, Google Lighthouse does NOT look at your entire site.

If you open Chrome DevTools and click “Run Audit,” it only analyzes the single specific page you are currently looking at.

However, because you are dealing with a site-wide CSS redesign, you have three ways to handle this depending on how much “manual work” you want to do:

1. The Manual Way (Single Page)

If you use the standard Lighthouse tool (in Chrome or via PageSpeed Insights), you have to enter every URL one by one. This is useful for testing your “templates” (e.g., check one article page, one gallery page, and the home page), but it won’t find a broken layout on a random page deep in your archives.

2. The “Semi-Automatic” Way (Unlighthouse)

There is a popular open-source tool called Unlighthouse that specifically solves your problem.

  • What it does: You give it your homepage URL. It crawls your entire site, finds every link, and runs a Lighthouse audit on every single page automatically.
  • The Result: It gives you a dashboard showing a “Score” for every page. If your new CSS makes a specific page’s “Accessibility” or “Best Practices” score drop (for example, because text is now overlapping or a button is too small), it will stick out like a sore thumb in the list.

3. The “Google Search Console” Way (Field Data)

If your site is already live and verified in Google Search Console, Google is actually already auditing your site for you.

  • Under the “Experience” tab in Search Console, look at “Core Web Vitals” and “Page Experience.”
  • Instead of a lab test, this shows you real-world data from users. It will group your pages into categories like “Poor,” “Needs Improvement,” and “Good.”
  • The Catch: This data is not instant. It takes about 28 days of real user traffic to update, so it’s better for long-term monitoring than for checking a CSS change you made ten minutes ago.

Which should you use?

For your CSS project, I recommend Unlighthouse.

Since your site appears to be a collection of legacy HTML files with a shared CSS, you likely have “patterns.” Unlighthouse will let you see if your CSS change worked for the 1996 articles as well as it did for the 2007 articles by showing you a side-by-side gallery of screenshots for every page it finds.

Summary Tip: If you want a quick check without installing anything, run Lighthouse on your three most different pages (e.g., a text-heavy page like “Chapter X,” a photo-heavy page like “Architecture,” and your Home Page). If those three pass the “Mobile Friendly” test, 90% of your site is likely safe.
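As a sanity check on Gemini’s recommendation: Unlighthouse’s documented quickstart is indeed a single command, no install required beyond Node. The URL below is a placeholder.

```shell
# Crawl the site starting from this URL, run a Lighthouse audit on every
# page found, and serve an interactive per-page score dashboard locally.
npx unlighthouse --site https://example.com
```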

Full post, including comments

Antigravity as web developer on ancient and obscure technology (AI in an IDE)

A collaboration between John Morgan and Philip Greenspun.

This is a follow-up to LLMs to replace dead programmers who worked in dead languages (ChatGPT updates AOLserver Tcl API code).

We set Antigravity loose on a directory of AOLserver code behind philip.greenspun.com. The code base includes ArsDigita Community System, AOLserver configuration, and an installation of WordPress that was bolted on later. Antigravity was configured to use Gemini 3 Pro as the LLM.

Question 1

This is a directory for a web site called philip.greenspun.com. The page root is the www directory. If the / file is requested what is served is philip-home.html . What causes this file to be served instead of index.html and what functions process philip-home.html before it goes out to the browser?

Answer 1 (after 6 seconds)

Identified that index.tcl handles the root request, reads the SpecialIndexPage parameter from parameters/philip.tcl (set to philip-home.html), and serves it using ns_returnfile without content processing.

Here is what I found regarding the serving of philip-home.html:

1. Why philip-home.html is served instead of index.html: The file www/index.tcl is the default directory index file (prioritized by the server configuration DirectoryFile in config/philip.tcl). When index.tcl executes, it checks for a parameter named SpecialIndexPage. This parameter is defined in parameters/philip.tcl (line 269) as:

ns_param SpecialIndexPage /philip-home.html

Because this parameter is set, index.tcl serves this file immediately.

2. What functions process philip-home.html: The file itself is not processed by any template functions (like ADP parsing) because:

  • It is served via the ns_returnfile command in index.tcl, which bypasses the ADP parser.
  • The .html extension is not mapped to the ADP parser in config/philip.tcl (only .adp is).

The functions in index.tcl that lead to the file being served are:

  • ad_parameter (to retrieve the SpecialIndexPage value)
  • empty_string_p (to check if the parameter is set)
  • ns_info pageroot (to construct the full path for the existence check)
  • file exists (to verify the file is there)
  • ns_url2file (to get the absolute path for ns_returnfile)
  • ns_returnfile (to send the file content to the browser)

Question 2

What HTTP server is this code written for and what is likely being used for the live site?

Answer 2 (immediate)

Based on the codebase analysis, the site is written for AOLserver. … [extension explanation] … It is likely running AOLserver 4.5 (or potentially a compatible version of NaviServer) given the explicit path references in the configuration.

Question about ADP tags (a niche technology copied from Microsoft ASP)

[After browsing to /photography/learn/interior.adp] Where is the photonetheader ADP tag used in this file defined?

Answer re: ADP

Question about where to tweak

I want to modify either this function or the ad_header function that it calls to add a viewport META tag to the HEAD of every ADP page that is served. Which function should be modified and what would the modification look like?

Answer about where to tweak

I recommend modifying ad_header in d:\Backups\philip.greenspun.com\philip\tcl\2-hooks.tcl because this file already acts as a “hook” to inject content (currently a CSS file) into the header of all pages. This ensures the viewport tag is applied globally, including all ADP pages.

I have created an implementation plan to make this change.

Error: below is a fragment of the current file.

append extra_stuff_for_document_head "<LINK REL=STYLESHEET HREF=\"/margins-and-ads.css\" TYPE=\"text/css\">\n"

Unless wrapped in {}, the quotes within the META tag to be added need to be escaped with backslashes. Antigravity doesn’t mention this.
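For illustration, here is how the viewport line might be appended in 2-hooks.tcl (my own sketch, assuming the same extra_stuff_for_document_head variable as in the fragment above):

```tcl
# Inside double quotes, the attribute quotes must be backslash-escaped:
append extra_stuff_for_document_head "<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n"

# Equivalent alternative (use one or the other, not both): brace quoting
# needs no escaping, since braces suppress substitution in Tcl:
append extra_stuff_for_document_head {<meta name="viewport" content="width=device-width, initial-scale=1">} "\n"
```

Brace quoting is the tidier choice for a literal tag, since nothing inside it needs variable or command substitution.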

Question about home page rendering

I modified this file on my dev server to include a viewport META tag, but it still doesn’t render well on mobile because the image on the left is much too large. What can I do with CSS to make this look good on mobile?

Answer

Conclusion

Antigravity can be a significant time-saver when you’re groping around a code base that you don’t understand, either because you never understood it or haven’t touched it for 20 years. Much of a programmer’s time on legacy code is figuring out what files are relevant and where a reasonable place might be to insert a required modification. Antigravity is a great shortcut.

Full post, including comments

Antigravity as web developer (AI in an IDE)

As noted in previous posts, the web chat interface can be a cumbersome method of interacting with an LLM for the purpose of web development tasks such as adding a CSS file to an HTML page to make it render nicely on smartphones.

What if we try artificial intelligence as a companion to a programmer’s integrated development environment (IDE)? Google’s $2.4 billion baby, Antigravity, is the contender for this post, which is a collaboration between John Morgan and Philip Greenspun.

Installing Antigravity immediately presents the new user with a challenging question: Review-driven development vs. Agent-driven development. There is no explanation on the setup screen as to what the difference between these two might be:

Antigravity’s default user interface presents the developer with three panes: a file explorer on the left, a code editor in the middle, and an LLM chat interface on the right. We start this experiment by opening a folder we’ve created named bh-antigravity-gemini that contains the HTML for the decidedly archaic Berkshire Hathaway homepage and proceeding to write a prompt asking to generate CSS and edit the page’s HTML to make it responsive:

Antigravity offers a variety of models to choose from but for the purposes of this experiment we’ll use the default which is Gemini 3 Pro (High) in Planning mode. Planning mode causes Gemini to come up with plans before implementing them. After submitting the prompt Gemini begins to think through a plan:

As Gemini works to develop a plan Antigravity presents a Task document showing the steps to be taken:

Once Gemini has come up with its plan Antigravity presents us with an Implementation Plan document outlining the proposed changes as well as a verification plan:

Since the plan sounds sensible we instruct Gemini to implement it:

Once Gemini completes the task Antigravity presents us with a Walkthrough document with an overview of the results of completing the task:

Below the message announcing the completion of the task Antigravity presents us with a list of files that have been changed including counts for lines added and removed. Just below the list is a button to “Accept all” changes. We want to inspect the changes for ourselves first so we select the berkshire-hathaway.htm file to see what was added. Sure enough, Gemini has kept the changes to a minimum as requested and simply inserted a viewport <meta> tag and a <link> to incorporate the newly created stylesheet:

We accept the changes in the HTML and move on to examine the newly created style.css file. The CSS includes comments indicating that Gemini has tailored it to match the styling present in the original HTML while implementing the proposed changes to make the page responsive:

Satisfied with its suggestions we accept the changes and proceed to view the results. The images below are the original page as viewed on desktop followed by the new page as viewed on desktop. As promised Gemini has kept the desktop styling largely the same with the primary difference being greater spacing between links:

The images below are the original page as viewed on mobile followed by the new page as viewed on mobile. The new CSS has transformed the page such that it is no longer rendered as it would be on desktop but instead presents the links as a single column with text that is readable and easy to select without needing to zoom.

The quality of Gemini’s code generation in this experiment appears comparable to what we saw from it in our previous comparison of ChatGPT, Claude, Gemini, and Grok. This is not surprising as Antigravity is merely another interface for a developer to interact with an LLM such as Gemini.

Our conclusion is that having AI integrated tightly into the IDE saves time and leads to a more organized, less fragmented experience: the chat log is presented and persisted alongside the code, and the model can be quickly directed to analyze files within the codebase without pasting them into a chat window or manually uploading them via a web interface. We also find that the IDE’s ability to highlight proposed changes, allow us to accept or reject them, and edit our files in place achieves a level of integration into the development workflow that a web chat interface cannot match.

Full post, including comments

If AI is useful why is gold gaining in value compared to the S&P 500?

One of my faith-based beliefs is that productive assets, such as a company that makes widgets, are more valuable than rocks or metal bars. This, of course, hasn’t been true lately. Here’s the price of gold over the period of Bidenflation (we’re still in the “Bidenflation” period even without Biden, since inflation is tough to tame once it gets going, e.g., because government is nearly half the economy and many government payments are automatically indexed to inflation):

On the other hand, the S&P 500 is also way up, especially the Big Tech/AI companies.

What does ChatGPT have to say? The S&P is worth 1.7X all of the above-ground gold:

How about 10 years ago when AI wasn’t functional and productivity gains from AI weren’t baked into investor expectations? The ratio was higher: 2.35X.

So the value of productive assets, which should be enhanced by AI, has actually fallen relative to an unproductive asset whose value shouldn’t be directly affected by AI.
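Taking the two quoted ratios at face value (they came from ChatGPT, so treat them as rough figures), the relative decline works out to a bit over a quarter:

```python
# Ratios quoted above: S&P 500 market cap / value of all above-ground gold.
ratio_2015 = 2.35  # roughly 10 years ago
ratio_now = 1.7    # today

relative_change = (ratio_now - ratio_2015) / ratio_2015
print(f"stocks-to-gold ratio change: {relative_change:.0%}")  # about -28%
```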

Does this mean that markets don’t think that AI is useful? Or perhaps they think that AI will make some companies more productive, but it will render so many humans useless that taxes on the productive to fund idle lifestyles for the useless will wipe out any economic gains? Or maybe there is a simpler explanation, e.g., people love gold.

Full post, including comments

AI competition for the redesign of a site that already has CSS

In our last post, John Morgan and Philip Greenspun compared four LLMs to see how they would do on redesigning the Berkshire Hathaway home page, an HTML relic of the 1990s untainted by CSS. Today we’ll give AI a tougher challenge: redesign the philip.greenspun.com site from four sample HTML files and the CSS that is referenced by them. The contenders are ChatGPT, Grok, Gemini, and Claude.

The Prompt

I want to update the CSS on a web site so that it renders nicely on mobile and, ideally, has an improved look on desktop as well. I’d like to not make too many changes to the HTML, though I could add a viewport meta tag in the head of every HTML file, for example. I’m going to upload four sample HTML files from this site and two CSS files that are referenced (margins-and-ads.css is the one that nearly all pages on the site reference). Please send me back new CSS and any changes that you think would be worth doing on all of the HTML pages on the site.

Sample Pages on Desktop Before (Chrome)

Sample Pages on Mobile Before (iPhone 17 Pro Max, Chrome)

Comment from Philip: I’m grateful to John for not looking at these and asking, “Did you think that mobile web browsing was a fad and would go away?” (on the actual device they all are absurdly small and hard to read)

from ChatGPT 5.2

Below is a “drop-in” modernization that keeps your existing HTML working, but makes it responsive on phones and nicer on desktop—without needing structural HTML changes.

It’s designed around what I see in your samples:

  • Most pages pull in margins-and-ads.css and currently just set big left/right margins plus floats for images/ads/notes.
  • Some pages also include philg.css for paragraph spacing/indents and note backgrounds.
  • Layout is often “classic HTML”: <body bgcolor=...>, <center>, <table> for header layout, images with align/hspace/vspace, etc.
  • Photo pages use <div class="minigallery">…</div> with <br><br> breaks.
  • Some pages include custom elements like <photonetpage>/<photonetpagenav> and <product…> tags.
  • p.marginnote exists and is sometimes floated right at 25% width (bad on mobile).

One small HTML change I recommend on every page

Add this inside <head>:

<meta name="viewport" content="width=device-width, initial-scale=1">

That alone fixes a lot of “tiny text / zoomed-out” mobile behavior.

(Your pages currently don’t have this viewport meta in the sample heads.)

Optional but nice (safe even with older markup):

<meta name="color-scheme" content="light">

Why these changes help (mapped to your existing markup)

  • Big fixed margins become mobile-friendly padding: your current body {margin-left/right: 3em} can still be too wide or awkward on phones.
  • Floats collapse on mobile: images with align="right" and your .illustration floats (plus margin notes/ads) are the #1 cause of “narrow column” mobile misery.
  • minigallery becomes usable: instead of relying on <br><br> breaks, it wraps naturally and centers on small screens.
  • Layout tables stack: your header table (image + title) won’t force sideways scrolling.
  • Custom tags display correctly: <photonetpage>/<photonetpagenav> should behave like block containers.
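As an illustration of rules along these lines (my own sketch, not ChatGPT’s actual output; the selector names are taken from the descriptions above):

```css
/* Sketch only: the kinds of rules the list above describes. */
img { max-width: 100%; height: auto; }  /* images never overflow the viewport */

.minigallery {            /* wrap thumbnails instead of relying on <br><br> breaks */
  display: flex;
  flex-wrap: wrap;
  justify-content: center;
}

photonetpage, photonetpagenav { display: block; }  /* custom tags act as block containers */

table { max-width: 100%; }  /* keep the header layout table from forcing sideways scroll */

@media (max-width: 720px) {
  body { margin-left: 1em; margin-right: 1em; }              /* shrink the big fixed margins */
  .illustration, p.marginnote { float: none; width: auto; }  /* collapse floats on phones */
}
```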

If you’re willing to do one more (still small) HTML improvement later

Not required, but high value over time:

  1. Remove bgcolor/text attributes from <body> and let CSS control it. (Right now you set it in HTML on many pages.)
  2. Replace <center> blocks with CSS classes (centering via CSS is more controllable).

But again: the CSS above works even if you do none of that.


If you want, I can also generate a tiny “smoke test” checklist (pages to open + what to verify) based on these exact sample files (header table, minigallery, marginnote, and a long chapter page).

[after asking for the checklist]

After ChatGPT

A huge amount of screen space is wasted when the browser is opened at my usual width:

Maybe ChatGPT is right that this is about as wide as a column of text should be, but on a site without ads it doesn’t look right and denies the user the choice to get more content on his or her screen. My previous style sheets didn’t set a font, which enabled the browser to default to whatever the browser authors thought was best or whatever the user had chosen via customization (of course, it would be a rare user who’d take the trouble). ChatGPT has hard-coded a sans-serif font. Asked why, ChatGPT explains:

But then it backpedals and says that maybe serif could play a role:

Why serif may actually be the right choice for your site

Now the counter-argument—and this matters for your content.

Your site is:

  • long-form
  • essay-like
  • book-structured
  • historically text-forward
  • intentionally “print-ish” (paragraph indents, chapter structure)

That is exactly where serif shines, especially on desktop and tablets.

You’re not running a SaaS dashboard or news feed—you’re publishing durable prose.


My recommended compromise (best of both worlds)

Use serif for body text, sans-serif for UI and headings, with mobile-aware tuning.

Option A: Classic, conservative (very “Philip Greenspun-appropriate”)

body {
  font-family: Georgia, "Times New Roman", Times, serif;
}

Then add (already supported by the rest of the CSS):

h1, h2, h3, h4,
photonetpagenav {
  font-family: system-ui, -apple-system, "Segoe UI", Roboto, Arial, sans-serif;
}

This:

  • preserves the essay/book feel
  • keeps navigation and headings crisp
  • avoids over-modernizing the site

Option B: Modern serif optimized for screens (my favorite)

body {
  font-family: Georgia, "New York", "Times New Roman", Times, serif;
  font-size: 17px;
  line-height: 1.6;
}

Plus mobile tweak:

@media (max-width: 720px) {
  body {
    font-size: 16px;
  }
}

This gives you:

  • excellent long-form readability
  • no “tiny Times on iPhone” problem
  • minimal aesthetic change from the original site

Option C: Keep sans-serif everywhere (only if you want a stylistic reset)

This is what I initially gave you, but I would not recommend it unless you’re intentionally rebranding the site visually.


Why I didn’t ask first (and what I’d do now)

I defaulted to sans-serif because:

  • the stated priority was mobile correctness
  • you wanted minimal HTML changes
  • CSS-only fixes favor safe defaults

But given:

  • your paragraph indents
  • chapter structures
  • archival nature of the content

I would switch the body back to serif and tune size/line-height instead.


TL;DR

  • I chose sans-serif for mobile safety and cross-platform predictability.
  • For your site, serif is not only appropriate—it’s arguably better.
  • Best solution: serif body, sans-serif headings, with slightly larger text and generous line-height.

If you want, I can:

  • give you a one-line diff to revert the font choice cleanly, or
  • mock up before/after screenshots (conceptually) for desktop vs mobile so you can decide instantly.

That last offer is interesting. Maybe ChatGPT actually does have a browser in its back pocket, contrary to what we’ve seen in earlier interactions. Asked for the screenshots, it responds that the first rule of Screenshot Club is that there are no screenshots.

Great—since I can’t literally render screenshots of your pages here, the most useful thing I can do is give you faithful visual mock-ups of what you’d see, tied directly to your actual content and layout patterns. Think of this as a guided “mental screenshot” you can compare against your browser in 30 seconds. Another page:

Things actually look pretty good on mobile. These are iPhone 14 Pro Max mockups from Desktop Chrome’s developer tools:

From Grok Expert

Analysis and Key Changes

  • Mobile Responsiveness:
    • Added media queries for screens under 768px (common breakpoint for tablets/mobile). On small screens, I remove floats (e.g., stack margin notes and ads vertically), ensure images scale down, reduce body margins for more content space, and increase relative font sizes where needed (e.g., small notes).
    • Images now have max-width: 100% and height: auto to prevent overflow and maintain aspect ratios.
    • Flexible layouts: Used relative units (em, %) more consistently. No fixed pixel widths were added; existing ones (like 25% for margin notes) are adjusted or overridden on mobile.
  • Desktop Improvements:
    • Modernized typography slightly: Set a base font size (16px for readability), used a sans-serif font stack (system fonts for performance), and added subtle line-height and padding for better readability.
    • Improved spacing: Increased paragraph margins slightly for breathing room, without altering the vintage feel.
    • Kept the look simple and consistent with the original— no major redesigns like grids or flexbox (to minimize potential breakage on old HTML).
  • Minimal HTML Impact:
    • No structural changes needed (e.g., no new wrappers or classes). The existing classes (e.g., .illustration, .marginnote) are leveraged.
    • The only global HTML change I recommend is adding a viewport meta tag to every <head> (this enables proper scaling on mobile).
    • Tested mentally against the provided HTML samples: They should render well without edits beyond the meta tag. For example:
      • “Travels with Samantha Home.htm”: Centered content and image galleries stack nicely on mobile.
      • “Chapter X_ Overcharged in Katmai.htm”: Narrative text with inline images flows better; philg.css styles (e.g., indented paragraphs) are preserved but made responsive.
      • “How to Photograph Architecture (Interior).htm”: Minigalleries (floated images) stack vertically on mobile.
      • “Philip Greenspun on Photography.htm”: Lists and blockquotes remain readable.
  • Other Notes:
    • Combined some redundant styles from philg.css into margins-and-ads.css (e.g., body margins, paragraph indents). If philg.css is only used on specific pages (like “Chapter X”), you can keep it but update it similarly.
    • Assumed no JavaScript or modern frameworks—just plain CSS.
    • Performance: Kept CSS lightweight; no bloat.
    • Browser Compatibility: Should work on modern browsers (IE11+ might need tweaks, but the site is old anyway).
    • Testing: I recommend testing on real devices (e.g., via BrowserStack or your phone). If you provide a live URL, I could refine further.
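A minimal sketch of the mobile override Grok describes (my own illustration using the class names and the 768px breakpoint mentioned above, not Grok’s actual CSS):

```css
/* Sketch of the under-768px overrides described above. */
@media (max-width: 768px) {
  body { margin-left: 1em; margin-right: 1em; }  /* reclaim content space */

  p.marginnote {
    float: none;   /* stack margin notes vertically instead of floating them */
    width: auto;   /* override the 25% desktop width */
  }

  small { font-size: 0.9em; }  /* bump up small notes for readability */
}
```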

Recommended HTML Changes

Add this to the <head> of every HTML file (right after the <title> or existing meta tags):

HTML

<meta name="viewport" content="width=device-width, initial-scale=1.0">

This is the only change needed site-wide. It tells mobile browsers to use the device’s width for layout and start at 1:1 scale (no forced zooming out).

If you want to automate this (e.g., via a script or find/replace), search for </head> and insert it just before.
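A sketch of such a script (my own, hypothetical; it assumes the site is a directory tree of .htm/.html files, each with a closing </head> tag):

```python
from pathlib import Path

META = '<meta name="viewport" content="width=device-width, initial-scale=1.0">'

def add_viewport(html: str) -> str:
    """Insert the viewport tag just before </head>; skip files that already have one."""
    if 'name="viewport"' in html:
        return html
    for tag in ("</head>", "</HEAD>"):  # 1990s-era pages may use uppercase tags
        if tag in html:
            return html.replace(tag, META + "\n" + tag, 1)
    return html  # no </head> found; leave the file alone

def add_viewport_to_tree(root):
    """Apply add_viewport to every .htm/.html file under root."""
    for path in Path(root).rglob("*.htm*"):
        text = path.read_text(encoding="utf-8", errors="replace")
        updated = add_viewport(text)
        if updated != text:
            path.write_text(updated, encoding="utf-8")

# Usage (run against a backup copy first): add_viewport_to_tree("/var/www/philip")
```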

No other HTML changes are required—the CSS updates handle everything via existing selectors.

Updated CSS Files

Updated margins-and-ads.css

This is the primary CSS file (referenced by nearly all pages). I’ve incorporated relevant styles from philg.css (e.g., paragraph margins and indents) to make it self-contained. If a page references both, this won’t conflict, but you could remove redundant philg.css includes if desired.

After Grok

Not nearly the epic waste of horizontal screen space:

Based on the desktop samples: Advantage, Grok. How about mobile?

These seem fairly similar to what ChatGPT’s CSS produced.

from Gemini Pro

Here is a modern, mobile-responsive solution for your website.

1. The HTML Change (Crucial)

You must add the “viewport” meta tag to the <head> of every HTML file. Without this, mobile phones will zoom out to show the desktop view, making the text tiny and unreadable regardless of what CSS we write.

Insert this line inside the <head> section of all HTML files:

HTML

<meta name="viewport" content="width=device-width, initial-scale=1">

2. The CSS Updates

Since margins-and-ads.css is your primary stylesheet, we will transform it into a “Master” stylesheet that modernizes the look and handles mobile layouts.

Replace the entire content of margins-and-ads.css with this:

For philg.css : The old text-indent (indenting the first line of every paragraph) makes mobile text look messy and is generally outdated for web design. I recommend replacing the content of philg.css with the following, which aligns with the new modern style:

3. Summary of Changes

  1. Fixed Widths Removed: The old CSS set specific margins (3em) that squeezed phone screens. The new CSS uses max-width: 900px for desktop (comfortable reading width) and small padding for mobile.
  2. Images Tamed: Old HTML uses width="230" attributes and align="right" . The new CSS overrides this on mobile, forcing images to scale
Full post, including comments