Department of AI job security: AI writes 5X as many lines of code as a human to solve the same problem. In other words, the LLMs are smart enough to write code that only their future selves will have the patience to read. See this comparison by Peter Norvig of Google (you’d think that in an entirely unbiased comparison by a Google employee Gemini would be the clear winner, but Norvig says “The three LLMS [Gemini, Claude, and ChatGPT] seemed to be roughly equal in quality.”)
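A hypothetical illustration (not taken from Norvig’s post) of the kind of line-count gap being described: the same toy puzzle solved in a terse, helper-style one-liner versus the heavily commented, step-by-step style LLMs often default to.

```python
# Hypothetical example: count how many values in a list exceed the
# previous value (an AoC-style toy problem).

# Terse, human/helper-library style: one expression.
def count_increases(xs):
    """Count adjacent pairs where the second value is larger."""
    return sum(b > a for a, b in zip(xs, xs[1:]))

# Verbose style typical of LLM output: identical logic, ~5x the lines.
def count_increases_verbose(xs):
    """Count the number of times a value increases from the previous one."""
    # Initialize a counter to track the number of increases.
    count = 0
    # Iterate over the list starting from the second element.
    for i in range(1, len(xs)):
        # Compare the current element with the previous element.
        if xs[i] > xs[i - 1]:
            # Increment the counter when an increase is found.
            count += 1
    # Return the total number of increases.
    return count

print(count_increases([1, 2, 1, 3, 4]))  # → 3
```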
Speaking of job security, here is a white man who purports to be an expert on Swahili and Kwanzaology and somehow still has a job:
Not only the generated code but also new programming languages are going to be unintelligible to humans. We could be heading straight from text to assembly.
It’s been 30 years now of automatic HTML generators that spit out thousands of lines of garbage from a word-processor GUI. No one dares look at the raw HTML in WordPress or Blogger anymore, yet here we are, blissfully content with dog-slow loading times and 12 GB browsers.
As he noted, AoC is almost always done with helper libraries / templates for maximum speed. He should have included the line count of his hand-rolled helper library for a more accurate comparison.
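For readers unfamiliar with the practice, here is a minimal sketch of the kind of hand-rolled helper library AoC regulars keep around (the function names here are hypothetical, not Norvig’s) — the point being that the solution files look short only because lines like these live elsewhere:

```python
import re

def lines(text):
    """Split puzzle input into stripped, non-empty lines."""
    return [ln.strip() for ln in text.splitlines() if ln.strip()]

def ints(text):
    """Extract every integer (including negatives) from a string."""
    return [int(m) for m in re.findall(r"-?\d+", text)]

def grid(text):
    """Map (row, col) -> character for a rectangular text grid."""
    return {(r, c): ch
            for r, row in enumerate(lines(text))
            for c, ch in enumerate(row)}

puzzle = "pos=3, vel=-2\npos=7, vel=4"
print(ints(puzzle))  # → [3, -2, 7, 4]
```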
Norvig overall does not seem to be up to date on the use of these tools. You get what you prompt: instruction following today is generally good to excellent. (This does not take away from his own accomplishments; I’m a big fan of his body of work.)
Asking it to “write a library of AoC helpers based on the past decade of AoC” and then “use your AoC library when it makes your answer more precise, preferring a functional style with minimal comments, an entry function that takes arguments, and a docstring for each function, choosing the most compute and memory efficient approach for the specific input, and using any relevant tricks rather than focusing on generality” would likely have given him code very similar to his own.