Department of AI job security: AI writes 5X as many lines of code as a human to solve the same problem. In other words, the LLMs are smart enough to write code that only their future selves will have the patience to read. See this comparison by Peter Norvig of Google (you’d think that in an entirely unbiased comparison by a Google employee, Gemini would be the clear winner, but Norvig says “The three LLMs [Gemini, Claude, and ChatGPT] seemed to be roughly equal in quality”).
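To make the 5X concrete, here is a toy illustration (my own example, not taken from Norvig’s comparison): the same task, finding the largest sum of any pair in a list, written tersely versus in the verbose, heavily scaffolded style LLMs often produce.

```python
# Terse, human-style solution: one expression.
def max_pair_sum(xs):
    return sum(sorted(xs)[-2:])

# Verbose, LLM-flavored solution to the same problem:
# docstring, validation, explicit loops, intermediate state.
def max_pair_sum_verbose(numbers):
    """Return the largest sum obtainable by adding two distinct elements."""
    if len(numbers) < 2:
        raise ValueError("need at least two numbers")
    best = None
    for i in range(len(numbers)):
        for j in range(i + 1, len(numbers)):
            candidate = numbers[i] + numbers[j]
            if best is None or candidate > best:
                best = candidate
    return best
```

Both return the same answers; one is five times the length of the other.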
Speaking of job security, here is a white man who purports to be an expert on Swahili and Kwanzaology and somehow still has a job:
It’s not just the generated code: the new programming languages are going to be unintelligible to humans too. We could be heading straight from text to assembly.
It’s been 30 years now of automatic HTML generators that spit out thousands of lines of garbage from a word-processor GUI. No one dares to look at the raw HTML in WordPress & Blogger anymore, yet here we are, blissfully content with dog-slow modern loading times & 12GB browsers.
Straight from text to assembly? Do we have a lot of open-source assembly code for LLMs to poach, or is someone training LLMs on pairs of software requirements and the corresponding disassembled code of finished software? Why not then go directly from text to binary?
All the data centers being built in the Rust Belt seem to indicate that AI has ambitions of being pure energy. “We have no use for humans or their primitive Turing Machines any longer.”
This is exactly what I suspect the problem is. We have computers that are vastly more powerful than in the 1990s – even a 10-year-old laptop from 2015 should have no problem rendering modern webpages – yet with every passing year the browsers and webpages feel like they are crawling. Too much HTML slop. In the 2000s I could look at the HTML code of any webpage and understand it; these days? Forget about it.
@NL
AI, like any program that executes on a computer, is already pure electromagnetism, though particles of matter such as electrons are involved as well. But modern computers are digital, not analog, and require a layer of abstraction translating numbers into electrical impulses: binary executable files that represent those impulses as sequences of 0s and 1s. So whatever AI writes can be represented as 0s and 1s and analyzed by humans.
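As a concrete illustration of that last point: even when code is compiled down to low-level instructions, humans can still inspect it with standard tooling. A minimal sketch using Python’s stdlib `dis` module:

```python
import dis

def add(a, b):
    return a + b

# The function's compiled bytecode is just bytes
# (representable as 0s and 1s)...
raw = add.__code__.co_code
print(raw.hex())

# ...and a standard tool can render it back into a
# human-analyzable, instruction-by-instruction listing.
dis.dis(add)
```

The same idea applies to native binaries via disassemblers like objdump: the 0s and 1s are always recoverable into something a human can read, if not always something a human enjoys reading.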
@DMen,
In my experience browsers are much faster nowadays, and serve much more multimedia data. You can still inspect web pages using each browser’s F12 dev tools. It is rarely static HTML with a single request/response nowadays, but all the scripts, the network calls with their requests/responses, and the rendered HTML (which can be visually associated with any displayed UI element) are there.
As he noted, AoC is almost always done with helper libraries / templates for maximum speed. He should have included the lines in his home-rolled helper lib for a more accurate comparison.
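For readers unfamiliar with AoC (Advent of Code) tooling: the helper libraries in question are typically a handful of parsing utilities reused every day of the contest. A representative sketch (hypothetical names, not Norvig’s actual library):

```python
import re

def ints(text):
    """Extract all integers (including negatives) from a string."""
    return [int(m) for m in re.findall(r"-?\d+", text)]

def grid(text):
    """Parse a block of text into a dict mapping (x, y) -> character."""
    return {(x, y): ch
            for y, line in enumerate(text.splitlines())
            for x, ch in enumerate(line)}

print(ints("move 3 from -2 to 15"))  # [3, -2, 15]
```

Helpers like these absorb a lot of the boilerplate that would otherwise count against a hand-written solution’s line total, which is why the comparison is skewed if they’re excluded.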
Norvig overall does not seem to be up to date on the use of these tools. You get what you prompt: instruction following today is generally good to excellent. (This doesn’t take away from his own accomplishments; I’m a big fan of his body of work.)
Asking it to “write a library of AoC helpers based on the past decade of AoC” and then “use your AoC library when it makes your answer more precise, preferring a functional style with minimal comments, an entry function that takes arguments, and a docstring for each function, choosing the most compute and memory efficient approach for the specific input, and using any relevant tricks rather than focusing on generality” would likely have given him code very similar to his own.
> should, would
Try it and post the results.
John, for the many of us here who know nothing about software coding, please explain your thoughts in basic terms. Sounds very interesting, but you lost me.
What does Alexandria Ocasio-Cortez have to do with anything?
Lmao; AOC here is an indicator of acronym over(ab)use.
My opinion has always been that unless AI can do negative coding, it can’t replace a good software engineer, and in my opinion it never will. I wrote more about it here:
https://m-chaturvedi.github.io/blog/011/
Negative coding: McIlroy is credited with the quote “The real hero of programming is the one who writes negative code,” where negative code is understood in the sense of the famous anecdote about Bill Atkinson’s team at Apple: a change to a program’s source that decreases the number of lines of code (‘negative’ code) while its overall quality, readability, or speed improves.
https://en.wikipedia.org/wiki/Douglas_McIlroy
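A small illustration of negative coding (my own example, not from McIlroy or Atkinson): replacing a hand-rolled loop with an equivalent, shorter, clearer version. The diff removes more lines than it adds while the behavior is unchanged.

```python
# Before: six lines of manual bookkeeping.
def longest_word_before(words):
    longest = ""
    for w in words:
        if len(w) > len(longest):
            longest = w
    return longest

# After: the same behavior in two lines -- a net-negative change.
# (max with key=len keeps the first longest word, matching the loop;
# default="" matches the loop's behavior on an empty list.)
def longest_word_after(words):
    return max(words, key=len, default="")
```

The refactored version is not just shorter; it states the intent (take the maximum by length) directly instead of encoding it in control flow.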
The way AI-generated code works is predictive, relying on vast amounts of data. Even today, it’s already challenging to debug and predict the behavior of well-written code due to factors like thread utilization, third-party libraries, and external dependencies (network, database, virtual machines, etc.).
When you add AI-generated code to the mix, often copied and pasted by less experienced developers, along with tight deadlines and poorly defined or understood use cases, buggy code increases significantly. Fixes for these issues often become patches on top of existing code, and over time the entire project becomes difficult to maintain or outright unmanageable. I have already seen this happening at my BIG company.
By the way, it’s not just buggy and complex code; performance is also a concern. The growing need for faster CPUs, more RAM, and additional disk space is another issue that doesn’t seem to make the headlines.