I’ve been noticing that stuff that I could have built in three days by myself (SQL data model, HTML templates, some scripts to bridge the two) now takes a team of multiple programmers weeks or months. The result is far prettier than the clunky rendered-by-Mosaic-or-Netscape sites of the 1990s, but the function is about the same.
“The Sad State of Web Development” by Drew Hamlett is a fun read (I learned about it from Dave Winer’s comment on the article). It seems that the same forces that led the world of computer nerdism down the J2EE path in the 1990s are still at work:
The web (specifically the Javascript/Node community) has created some of the most complicated, convoluted, over engineered tools ever conceived.
At times, I think where web development is at this point is some cruel joke played on us by Ryan Dahl. You see, to get into why web development is so terrible, you have to start at Node.
You see the Node.js philosophy is to take the worst fucking language ever designed and put it on the server. Combine that with all the magpies that were using Ruby at the time, and you have the perfect fucking storm. Lets take everything that was great in Ruby and re write it in Javascript, I think was the official motto.
Most of the smart magpies have moved on to Go at this point, but the people who have stayed in the Node community have undoubtedly created the most over engineered eco system that has ever appeared. No one can create a library that does anything. Every project that creeps up is even more ambitious than the next. It all starts with a core module and 400 plugins for this module. No one will build something that actually does anything. I just don’t understand. The only thing I can think, is people are just constantly re writing Node.js apps over and over.
React: Facebook couldn’t create a notification indicator on Facebook.com, so they create this over engineered dump pile to solve that problem. Now they can tell you how many unread posts you have while downloading 15 terabytes of Javascript.
How does rewriting your interface in the latest framework get you to the next customer? Or the next 50 customers. Does it actually make your customers happier?
A [single-page application] will lock you into a framework that has the shelf life of a hamster dump. When you think you need a SPA, just stop thinking. Just don’t. Your users just don’t fucking care.
The code examples in my 1990s book on web development still run! So does the open-source software that we distributed starting in the mid-1990s. This code doesn’t respond differently to a request from a tablet or a mobile phone, but the browser software on those devices is smart enough to make all of the pages usable. I wouldn’t advise a developer building something new today to use Perl scripts linked to the Oracle C library as I did in 1994 for the Boston Children’s Hospital web-based electronic medical record system (see JAMIA paper that I co-authored unknowingly). But on the other hand I haven’t seen any new development tools that are obviously more productive.
Readers: Are tools for building straightforward web-database applications getting worse or better?
Separately, a youngish programmer friend was telling me that he thinks discriminating against older programmers is rational because programmers usually learn about new tools during evenings or while doing uncompensated side projects. He thinks that older programmers, e.g., due to family responsibilities or reduced energy levels, are less likely to build stuff without being paid and thus employers assume that they won’t know about Node.js and the other frameworks mentioned above.
2002: Rewrite all J2EE apps with PHP
2005: Rewrite all PHP apps with ruby
2010: Rewrite all the ruby apps with javascript
2012: Rewrite all the javascript apps with Go
2014: Rewrite all the Go apps with Rust
I can’t speak for other older programmers, but I am a 40-something hardware engineer who knows about node.js and other frameworks. Sounds like the young guys looking for a rationalization to discriminate.
Node.js hit the right spot for programmers looking for something that was new and modern but still rooted in an established language. It’s also nice to use the same language from front end to back end. Our company picked node.js/MySQL because it works for what we do – custom web apps that need to be fast. Maybe it’s not the right tool for everyone, but it works well for us.
Regarding discrimination against older programmers, is it always really discrimination? Several years back my programmer father was in his late 60s and needed some work. I tried to get him doing some PHP work, but I was never able to get him writing decent code and understanding what was going on. I went back to a recent college grad freelancer who had the project done quickly. I think older programmers “suffer” from an increasingly narrow focus of expertise as they become more experienced. They don’t feel the need to learn the latest and greatest flavor of tech that comes along when they’ve seen so many rise and fall over the years. This is a great point of view if they’re in a senior position, but if they’re still working at a low- to mid-level position, they don’t always have the luxury of not adapting and learning the latest tech if they want to remain employable.
As a developer, I can agree that the current situation is very strange. To me it feels like frontend web tooling is strangely neglected. People built straightforward database applications with much greater ease and speed in the 1990s, using RAD environments like Visual Basic, Borland Delphi or Microsoft Access (as programmers we’re supposed to scoff at Access of course, but it enabled barely computer-literate people to produce fully functioning applications). Almost nothing like this exists in 2016, even though demand for these straightforward database applications is at an all-time high.
At the same time there’s a strange veneration for over-engineered distributed database management systems; many web applications are designed as if they’re the next Facebook, comfortably disregarding the fact that the real Facebook didn’t tackle scalability until it was absolutely necessary.
One explanation could be that software developers are conspiring to build a kind of competitive moat around their profession, making jobs inaccessible to dilettantes and people who lack formal education.
Another is that we’ve incorrectly treated software development as an engineering discipline; as a process restrained by physical laws, where simple and cheaper solutions tend to prevail eventually.
It’s just as likely that software development is more like being a lawyer, and involves dealing with a stack of man-made complexity that will only grow as time progresses.
What’s the story behind your being an unknowing co-author?
Older web developers have it rough because the new tools clients demand suck in various ways that make them uninteresting to learn for their own sake.
It’s very different when you don’t have to interface with other systems. I do a lot of highly compensated programming to solve mathematical problems but nothing I do ever requires tools more modernized than Visual Basic, SQL, or shell scripts (occasionally LISP will make something easier). On the other hand, the cleverness and sophistication of the algorithms I create with those simple tools is worth a great deal to my clients.
So Perl and Oracle is an under engineered solution to serve up HTML? Just use CGI/C++ and flat files…That is what Yahoo and Viaweb did for years.
The advantage of node.js is having a single language on the client and server. It is overcomplicated, especially the event handling, but not difficult for anyone who has done async programming. You don’t have to use the complicated libraries if you don’t want to do complicated things.
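That point is easy to see side by side. Here is a minimal sketch (the function names and data are made up) of the same lookup written in Node’s classic error-first callback style and then wrapped as a promise for async/await:

```javascript
// Callback style: the error-first convention Node established.
function fetchUserCallback(id, done) {
  setImmediate(() => {
    if (id <= 0) return done(new Error('bad id'));
    done(null, { id: id, name: 'user' + id });
  });
}

// The same lookup wrapped as a promise, consumable with async/await.
function fetchUserPromise(id) {
  return new Promise((resolve, reject) => {
    fetchUserCallback(id, (err, user) => (err ? reject(err) : resolve(user)));
  });
}

async function main() {
  const user = await fetchUserPromise(42);
  console.log(user.name); // prints "user42"
}
main();
```

Nothing here requires a framework or any of the “complicated libraries”; the event-loop model the commenter mentions is just plain Node.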
-a 50 something developer
I actually started learning web development from your online materials back in I believe 2005 using mysql and php. Thank you!! That model still works, can be productive, and still makes sense for many use cases. If you can avoid the fray and stick with that way of doing things, great. There are a lot of use cases where there are advantages to the single-page-app model though, and if constructed properly they can provide a better and faster user experience. It’s disingenuous for some to claim that people are using these tools just because they are new and shiny (though a minority surely do).
Javascript isn’t the best language ever designed, but it makes a lot of sense if you take the time to understand it properly. It also has several very strong reasons to take it seriously:
1) It is by far the most used language, and its role on the web will likely never change. The return on investment for becoming an expert in the language is huge.
2) Because of #1, its performance on many platforms is excellent.
3) While javascript itself lacks a standard library, there is open-source code to do nearly everything.
4) The package manager (npm) is the best I have ever used. Many people disparage how many dependencies are involved in a given tool, but that’s what happens when you have a package manager that makes it easy to find, install, and pin versions of 3rd-party code! Lots of small dependencies are a symptom of a well designed system.
5) The language is getting better all the time. Check out ES2015.
6) You can use it for both client and server, possibly even sharing code.
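Point 6 is arguably the biggest practical win. A hypothetical shared validation module sketches the idea: the same rule runs in the browser form and again in the Node handler, so the two checks never drift apart:

```javascript
// validate.js — hypothetical shared module. The browser loads it for
// instant form feedback; the server re-runs the identical check before
// touching the database.
function isValidUsername(name) {
  // 3-20 characters: letters, digits, underscore.
  return /^[A-Za-z0-9_]{3,20}$/.test(name);
}

// CommonJS export so the file works under Node...
if (typeof module !== 'undefined') {
  module.exports = { isValidUsername };
}
// ...while a bundler or a plain <script> tag exposes it client-side.
```

With any other server language the rule has to be written twice, in two languages, and kept in sync by hand.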
Personally I’ll continue to keep up and yes, keep rewriting and learning new frameworks. The introduction of AJAX changed the nature of building web applications and we’re all still figuring out how to best make sense of it. It’s perfectly healthy for there to be a certain amount of “churn” in technologies but I think they are starting to converge on some good approaches.
Michiel:
“…we’ve incorrectly treated software development as an engineering discipline; as a process restrained by physical laws, where simple and cheaper solutions tend to prevail eventually.
It’s just as likely that software development is more like being a lawyer, and involves dealing with a stack of man-made complexity that will only grow as time progresses.”
This is a great way to summarize a fundamental insight. I’m stealing this quote. 🙂
I wrote an article inspired by that one, might be of interest to you:
http://intercoolerjs.org/2016/01/18/rescuing-rest.html
He’s right. Web development is a complete mess right now.
The value of any API/library now is a pretty arbitrary number of billions of dollars determined by whatever is required to keep everyone employed, more than functionality. Soon, it will take a few terabytes of javascript from thousands of startups to flip 1 bit, but everyone will be employed.
@Carson Gross
I love the term “API Winter”. Not disagreeing with your premise altogether but I think it’s too early to deem REST dead. I think it has just become so successful it’s no longer a topic of controversy and discussion. Any planning of an API these days starts with REST as a given and may be evolved to meet unique needs. GraphQL and the like are interesting but I don’t see them used a lot in practice.
I’d say the “API Winter” started and HATEOAS ideas faded because public APIs just aren’t as in vogue as they used to be in the web 2.0 days. API design matters less if it’s only used internally. The driver of that trend seems to be that people realized building on 3rd party APIs is a terrible business decision.
Yeah, I was engaged in hyperbole for effect: REST as an architecture will never die completely, but there seems to be a move away from it and towards RPC-style coding.
I really think this is a technical issue with respect to JSON APIs: even if you are able to adopt a disciplined HATEOAS-conforming convention, the clients need to deal with that convention. If the difficulties and holy wars around providing HATEOAS on the server were the initial blow, the client interpretation of whatever implementation you picked was the dagger in the heart.
The beauty of HTML and HTML-based (rather than JSON-based) applications is that they just naturally support HATEOAS, without anyone even thinking about it. If you look at intercooler examples, you won’t even notice the HATEOAS: you’ll just think “Oh, great, I can make an element issue an HTTP action in response to an event, just like links and forms!” So a full REST/HATEOAS architecture naturally falls out of your code, rather than being something you have to struggle to bring into existence.
I don’t want to hijack the comments too much here, but I do think that going back to this older model (but with a modern update) is the way forward for a lot of web development, rather than continuing down the current path.
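The claim that HTML gives you HATEOAS for free is easy to demonstrate. In this sketch (a generic Node-style render function, not intercooler itself; the order model and URLs are invented) the response itself carries the legal next actions, which is all the constraint asks for:

```javascript
// Render an order page. The markup advertises the available transitions:
// a receipt link always, and a cancel form only while cancelling is
// still possible. A JSON API would need a separate convention (and
// client support for it) to communicate the same thing.
function renderOrder(order) {
  const cancelForm = order.shipped
    ? '' // no transition offered once the order has shipped
    : '<form method="post" action="/orders/' + order.id + '/cancel">' +
      '<button>Cancel order</button></form>';
  return '<h1>Order ' + order.id + '</h1>' +
    '<a href="/orders/' + order.id + '/receipt">View receipt</a>' +
    cancelForm;
}
```

A browser (or an attribute-driven element in the intercooler style) simply follows whatever the server handed back; no out-of-band documentation of state transitions is needed.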
Web development has always been a complete mess. Let’s not allow ourselves to be blinded by nostalgia.
I’ll take the current tools along with github and AWS, and the current browsers over the 1990s era technology every day of the week. You want to go back to deploying your site by copying your files to your servers via ftp? Did you like using Microsoft SourceSafe? Did Internet Explorer do anything for you other than waste 10 years of your life hacking around standards compatibility bugs? Do you still have copies of the invoices from the ISP that hosted your giant, monolithic SQL database?
Web development technologies are not yet mature. But we don’t get through adolescence by returning to infancy.
In about 2000, I wrote some .tcl/adp and some SQL for a book publisher. It ran unmolested until December 2015. The new programmers had difficulty understanding (despite the database dump) the 5-table SQL schema and haven’t ported the functionality over yet, from what I can tell.
Michael ‘think(s) older programmers “suffer” from an increasingly narrow focus of expertise as they become more experienced’ because of his experience with his father. I recommend he look into some genetic issue that both of them share, based on the cohort sample size of two.
Without objective tests of *current* levels of competence, it is rational to discriminate by using the usual means of assessing capabilities: worked consistently the past 4 years, rigor of degree, school rank, etc.
https://lh4.googleusercontent.com/_gxYAfFM1cj0/S6hXmZ4qtjI/AAAAAAAAAUc/mBtqICfKs2w/brainage.jpg
The human brain appears to undergo a general decline of about 1/3 of a standard deviation per decade in general intellectual acumen and capabilities starting at the age of 20. The rate starts to greatly increase at around age 70, but barring that, it’s pretty consistent up to about age 65.
This means that at age 53 one is around a full 15 general IQ points lower than at age 20.
So for the very newest of work, where it’s basically how quickly one can cleverly learn new material, the older are at a large disadvantage. If one isn’t in a field where one’s experience can be easily applied (and a *lot* of CS work is discovering new algorithms almost by raw cleverness)…it’s rational.
A gap of 15 points translates to about the difference between the typical student at Cornell and those at colleges at the bottom of the top 100 universities.
http://pss.sagepub.com/content/26/4/433
Perhaps real CS work is actually like a vocabulary exercise in recognising the necessary composition of computational patterns inherent in a problem – which peaks in the late 60s or early 70s.
Or maybe it is as you imagine, mostly about how fast you can hit the backspace key.
@noko: aren’t there other benchmarks than those you link to? I recall StackOverflow answers by older developers being better-ranked – not necessarily the greatest benchmark, but it doesn’t seem to be worse than the completely artificial ones in your graphs, either. Until we can measure programmer productivity, I’m not sure we can tell if “general intelligence” (performance on an IQ-like test) trumps experience or vice versa.
Anecdotally, the average older developer appears better to me than the average younger developer. On the other hand, that ought to be at least partially explained by survivor bias (young people who’re bad enough at it quit in larger numbers early in their lives than old people who’re good at it starting out very late in their lives). Some say that older people usually went into it for the love of it while younger people more often went into it for the money, which may also explain this in part, though I’m not sure about that (love wears off more quickly than greed, and more generally it’s not clear which motivation results in better performance).
Oh. Well, I would guess that in general an older programmer *would* be more competent, as they have a good deal of knowledge and experience of how projects go, and of how specific patterns of problem solving play out in small, intermediate, and very large groups. As Greenspun noted, everyone seems to reinvent apps again and again. Someone who has made all of those before *would* have much better productivity.
Survivorship bias is *huge* and it’s one of the largest biases that people seem to have in general. I mean, at age 50+, I guess the only engineers that comment are those who survived to be industry-recognized professionals.
I still stand by the benchmarks I linked though, at least for technology that approaches paradigm-shifts and so called game-changers.
Or one way to state it: the very best general programmer in the world is probably some experienced guy in his early 50s, while the best new-tech builder is some mid-20s PhD student.
Young people always think it’s rational to discriminate against older people. That’s why countries like the US have laws against it.
“My dear, here we must run as fast as we can, just to stay in place. And if you wish to go anywhere you must run twice as fast as that.”
“If I have not seen farther, it’s because I have been standing in the footprints of giants.”
“Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.” (hey, how did that get in here?)