Are academics who teach software development methodologies teaching the lessons of failure?

I found an interesting book the other day: Designing the Requirements: Building Applications that the User Wants and Needs. My Amazon review:

I found this in the MIT engineering library. Unfortunately the people who need to read it the most are probably too busy coding to read any book. But it is rewarding and basically sensible. The book is much more informed by the reality of designing and building software than the typical academic work or textbook for undergrads (just imagine the kid who goes straight from a lecture on classical “waterfall” methodology into a job at Google or Facebook!).

I guess one way to look at the competing literature is that it reflects the lessons of failure. Software projects that were years late and 300% over budget led to a genius sitting down and writing “Whoa. If we had only done X, Y, and Z that project could have been successful.” This book, by contrast, feels like a synthesis of general principles starting from lessons of practical success.

What do readers think? The most successful software projects don’t seem to generate neat methodologies to which book authors and/or academics can give names (Wikipedia contains a partial list). This whole genre started with The Mythical Man-Month, Fred Brooks’s attempt to understand why IBM’s OS/360 project had gone off the rails:

The effort cannot be called wholly successful, however. Any OS/360 user is quickly aware of how much better it should be. The flaws in design and execution pervade especially the control program, as distinguished from the language compilers. …. Furthermore, the product was late, it took more memory than planned, the costs were several times the estimate, and it did not perform very well until several releases after the first. (page xi)

Friends who work at the most successful software development enterprises, e.g., Google, Amazon, Facebook, Apple, don’t report the use of any special project management sauce.

I posed a question to a Silicon Valley insider (not quite as far inside as Ellen Pao, of course, since he merely produces hardware and software that hundreds of millions of people have used instead of lawsuits and discussion around gender equality):

Let’s say 5 programmers are going to spend 2 months coding something at Netflix, Google, or Apple. What would you expect to be the number of pages of requirements, specifications, system architecture documents, etc. to be written before the coding starts?

His answer:

At Netflix there would be no documentation and everybody would go and be a cowboy.

At Google there would be a 3-page half-assed document (Google Doc) that looks like crap and barely scratches the surface.

At Apple anything could happen: If it was between different groups there might be meticulous API specifications that people had to sign off on (that then became immutable), but if it’s internal it would depend on the group and there could be nothing.

Maybe it is worth asking if readers think that there is any value to the official methodologies compared to basic common sense (i.e., don’t compare Waterfall or Agile to flailing away in a disorganized fashion, but instead compare to a common sense “write down whatever seems useful to write down, use version control for the code, use a bug/task tracker for change requests, etc.”) and, if so, is there a One True Methodology?

Related:

  • http://philip.greenspun.com/doc/chat (from 1999; answers at least the minimum questions that I like to see in a web-based application, such as “Why was this built at all?” and “Where is the code to be found?”)

19 thoughts on “Are academics who teach software development methodologies teaching the lessons of failure?”

  1. The “official methodologies” are useful in that they can help define project norms without requiring the team and/or its manager(s) to reinvent the wheel. However, there are such a wide variety of projects in such a wide variety of environments it is hard to see how “One True Methodology” could be suitable for all of them. Therefore, no methodology can be a substitute for actually managing the project.

  2. Bug trackers are where bugs go to die. An approach that works is not to go looking for new bugs or features until the last bug or feature is fixed. The ultimate solution is somewhere between requirements documents and iterative evolution. SpaceX has been a good role model: they probably do have extensive requirements documents, but their iteration is legendary.

  3. I think it is important to have experience with official methodologies. When you do, you can pick elements from several methodologies that will work for your particular project and company culture.

    I have worked for some US-based startups, large tech companies and now a British tech company. Jira has replaced a lot of the old style requirements and design documents. When you have many teams around the globe working on a project, as this British company does, you really need to document things. In-person communication isn’t reliable enough and seems to be forgotten more quickly than if it is written down somewhere.

  4. I’ll second Fazal’s comment.

    Taking it further, I think the importance of formal methodology is directly related to the experience, skill, and communication ability of the people on the project. People with less experience, questionable skills, and distribution across the globe with all the attendant communication issues make a formal methodology desirable, because then you don’t have to reinvent the wheel and can point to something as “this is the way we are working,” “this is what done means,” etc.

    In my experience small highly skilled teams that have been working together for a while in the same location don’t need these same controls. You can adapt to the methodology of 3rd parties you are working with and still be successful.

  5. What Fazal said. Methodology stuff is worth reading and thinking about, but success or failure is overwhelmingly due to management and staff competence and professionalism. Most of the time when an organization starts making a lot of noise about methodology they are trying to cope with fundamentally inept leadership or staff. The situation often emerged by way of misguided penny pinching on salaries, or putting non-leader material in management roles.

    BTW, I wouldn’t necessarily hold up Facebook or Google as great models for software development. They happen to sit on cash geysers and so can waste huge amounts of money yet still manage to deliver working stuff. But by many accounts they are massively overstaffed and struggling with legacy hairball software.

  6. The software industry loves to blindly reinvent wheels and then write books about its amazing discoveries (XP! Agile! Scrum!) while still delivering dismal results. Meanwhile, other industries get things done without fanfare by following the 60-year-old methodology of “Systems Engineering” (google it; if it’s good enough for Boeing and NASA, then it’s good enough for building websites and apps).

    The title of the book “Designing the Requirements: Building Applications…” makes me cringe because it seemingly mushes together three distinct tasks: 1) defining requirements, 2) designing functional solutions to the requirements, and 3) building an implementation of that functionality.

  7. Agile/Scrum was a reaction to the magically-changing-requirements approach so beloved by non-technical managers, wasn’t it? But on a longer time scale, much software seems to degenerate because of features and requirements added over time, often inimical to the existing software architecture. The end result tends to be the ‘big ball of mud’ pattern.

  8. I’m Director of Engineering at a software company in suburban Indianapolis. We follow agile-ish principles but we never use the word agile. Really, I set forth some engineering principles I believe in and I let the engineers organize around them. Our product people write some high-level outcome statements, and we elaborate on those in some level of documentation, and we build for a week and we demo to the product people and then we repeat again the next week. And our quality levels are good, right for what we’re trying to accomplish. It seems to work for us. That’s really the goal, I think: regular delivery, at the quality level your business needs.

  9. In my experience, it is the code review process and approval before check-in that counts the most.

    If you have someone on the team — s/he doesn’t have to be the lead or architect, just someone with authority — who is a pain-in-the-ass about code quality and makes sure nothing gets checked in if the quality isn’t there or if a test case doesn’t exist for the delivery, then you can sit back and be sure that the project is of good quality.

    Working with such a person can be difficult at times mainly because it can take multiple code reviews before a check-in and it can slow down a release, but it’s worth it and other team members will learn from such a person.

    At the company I work for, we have such a person on my team. Other teams don’t, and you can tell the difference in a big way. Yes, all teams use the Agile methodology to its core.
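
    The gatekeeping this commenter describes is mostly cultural, but the “no check-in without a test case” part can be automated. Here is a minimal sketch; the src/ and tests/ layout, the naming convention, and the missing_tests helper are all assumptions for illustration, not anything from the comment:

```python
# check_tests.py -- a toy gate that flags modified source files lacking tests.
# Layout and naming convention (src/foo.py -> tests/test_foo.py) are assumed.
from pathlib import Path


def missing_tests(changed_files, existing_files):
    """Return changed source files that have no matching test file.

    changed_files:  paths modified in the change set (strings).
    existing_files: all paths currently in the repository (strings).
    """
    existing = set(existing_files)
    missing = []
    for f in changed_files:
        p = Path(f)
        # Only Python files under src/ are subject to the rule.
        if p.parts[:1] == ("src",) and p.suffix == ".py":
            test_path = str(Path("tests") / f"test_{p.name}")
            if test_path not in existing:
                missing.append(f)
    return missing


if __name__ == "__main__":
    repo = ["src/billing.py", "tests/test_billing.py", "src/search.py"]
    changed = ["src/billing.py", "src/search.py", "README.md"]
    print(missing_tests(changed, repo))  # src/search.py lacks a test file
```

    A real gate would run this as a server-side hook or CI step and reject the push when the returned list is non-empty; the human reviewer described above still judges whether the tests are any good.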

  10. From the review it appears that this software book, like almost all of them, focuses mostly on *how* to build software, but not much on *what* to build. “What the User Wants and Needs” is in the subtitle, yes, but it sounds like the book is mostly concerned with the software itself, not the users. Since I’ve written my own book on the topic (http://amzn.to/1Pf9xvi) I can say from experience that very few books do more than pay lip service to the idea of considering what users would find helpful.

  11. The formal software methods taught in schools were spawned from bet-the-large-company projects to build complex software systems with stringent performance requirements. E.g. a telephone switch, an air traffic control system, a mainframe operating system, a fighter jet, etc.
    These are the projects that have the budget and headcount to spin off whole software methodologies as a byproduct, and these are the projects where you need a methodology.

    The methodology must provide a way to:
    – Collect the mountain, literally, of requirements
    – Decide which requirements to drop
    – Iterate on design options that satisfy all requirements. Don’t forget a single one.
    – Break down the system, somehow, so that teams in different regions can implement their part without constant business travel.
    – etc.

    The typical FANG project to add some feature to some app that takes 5 engineers 2 months is a speck. You can just go build it.

  12. The size of the software project is so IMPORTANT. If it needs 10 man-years or less of work, then you can build it, maintain it, and improve it any old way. Spaghetti code is fine. No need for specs.

    But if you are building BIG software systems, you have to have a documented and proven methodology, with written specs and methods that add modules and layers as the product grows and evolves. Think about how you would go about writing the software for a fighter jet, a cell-phone communications switch, or a satellite communications switch. Think about how you would add on to a system that has a lifetime of 20-30 years, thousands of improvements and changes (those customers do not know what they want), and decades of bugs. Those systems are millions of lines of code, required hundreds of man-years of work, and must have good reliability and repairability. Doing it ad hoc just does not work, and the reliability sucks.

  13. Reflecting elements of several comments above, but especially Neal’s and anon’s: I fear the eventual realization that systems analysis, the most rigorously studied organisational capability and the one able to inject sanity into the most ailing project, has been subordinated by this past century’s need to overcome technical hurdles. That need has placed in the hands of the most skilled, multi-talented programmers a level of ad-hoc logical fluidity which, by producing impenetrable software operations as a side effect, will be an increasingly inefficient use of skills. The advance of raw computing power is temporarily bound by the adoption of, and competence in, parallelism, but factors such as silicon complexity and security could accelerate the impetus for consistent programming techniques and style, if not a rush to embrace dusty Ada compilers.

    This may manifest in new jargon like, oh, “Design For Analysis,” under which macro writers may write macros only when they also provide the test code that unravels the instruction flow, exposing the macro’s action through a verbose but prescribed interface so that lesser mortals can confirm the tests. The battle royale will be that any programmer of stature can fairly argue that this kind of approximate seriousness is what they have been advocating all along, with budget, management impedance, and deadlines the true enemies. Personally, I think making any macro author simultaneously write an exposition of the function, one that colleagues can readily learn from, is an admirable goal that is attainable today. The movement I wish to see take hold in software is simply classical didacticism.

    I believe most shops already have dangerous levels of disparity in learning between the complementary skills of those managing and those being managed, and I will rejoice the day when, as in the Toyota Production System, the most junior programmer can reach up and tug the emergency line whenever they detect any possibility that the build will be inscrutable or complicated if the code in question is committed. To enable such a halt on commits, the vigilant programmer’s view needs to encompass the behaviour of code other than their own that their code interacts with. This simple thought is reasonably claimed by a number of programming methodologies, CI arguably for one, but the issues I should like to subject to inspection would be resolved both ways: each part checked against ongoing documentation, for design consistency, discovery, and diligence.

    (By diligence I mean recording the efforts undertaken to avert later difficulty, whether for the programmers who are to consume your code or for later maintainers, along with customer-dependent precautions against offering attack surfaces to hostile environments. This approaches the ISO 9001 philosophy of assurance, but I maintain that the processes currently used miss the opportunity to instill an educational and positive involvement of the whole team, in a manner that I feel confident would quickly become second nature, thereby making the cost impact of this rigor increasingly negligible.)

  14. My proposal could equally push the most advanced programming styles and language features down to the whole team. The intention is not reduction to the most imperative style or some other lowest common denominator of intelligibility, but an agreement to be able, at any time, to apply commonly understood methods so that anyone stands a chance of reviewing the whole codebase. Common-sense limitations apply, of course: the aim isn’t to pander to those unwilling to learn, or to empower the newest recruit who has yet to invest the time to read and understand the project and assimilate its idiosyncrasies. I see the first benefit as the potential elimination of idiosyncrasies introduced for any reason other than requirements or language capabilities. The ultimate result might even be the creation of a DSL. But it would at least be the preserve of no self-anointed individual or prima donna, and well described in terms any appropriately experienced programmer would find quickly comprehensible.

    I have just described more nirvana than remedy, but the method is the sum of small steps made an unexceptional part of the normal day. I hate making a song and dance about anything that I would prefer to be a given, but such occasions are unfortunately infrequent.

  15. @Mark Hurst,

    When you need to address the risks involved in or inherent to the functions you’re writing, even in a basic but formal process, the questions you have to ask to assess those risks usually align the effort with exactly what your customer wants. If you are asked to pack currency crosses onto the screen as densely as possible, but to provide a keyboard-shortcut system for hitting the bids and offers, do you want a zoom function that ensures the trader cannot accidentally confirm a trade on the wrong quote? Should it zoom an individual quote, a row, a column, or a user-selected group, or enlarge the n quotes closest to the one initially selected, so the trader can hit the volume before the tickets confirm? This is a rough example from a usually pampered customer type (unfortunately, thanks to HFT, the tools available for click trading see less investment every year), but it might equally be applied to a telesales screen in which outgoing and incoming calls are weighted according to known sales potential.

    Ultimately, isn’t this entire subject pretty much reduced to “If you can afford the rigor, the problems tend to go away”?

    What about unintended consequences, such as projects that, subjected to a new method, don’t get approval because the budget rises too high?

  16. Amazon says this was published in Oct 2015, and yours is the first and only review. Has anyone else read it? Couldn’t the author at least convince some friends to write reviews?

    What I’ve noticed after 35 years in software is we love to reinvent the wheel. Unfortunately, each new methodology doesn’t learn from the past (indeed, we feel the need to demonize the “old way”), so we end up stubbing our toes again and again.

    I’ve seen waterfall, then Rational Unified Process (RUP), and now Agile/Scrum.

    And now, wait for it, the latest silver bullet, DevOps!

    That said, for some corporate web software projects, Agile/Scrum is far superior to waterfall, in my experience.
