The Year 2000 Bug
by Philip Greenspun (last updated June 30, 1998)
What do we conclude from this? You shouldn't hire MIT nerds to write business data processing software!
Hardware engineers are brilliant. The great strides they've made have almost obscured the stagnation in the software world. Microsoft has a monopoly and Bill Gates has a lot of money. But that doesn't change the fact that Microsoft programs don't work well enough for anything that matters. The airlines run on software that was written in the 1960s (before MIT kids realized that you could make more money going to medical school or getting an MBA than by programming).
Society pays hundreds of thousands of people to keep our computing systems together with chewing gum and baling wire. Year 2000 bugs will be subsumed into the sea of bugs that these folks fight every day anyway.
How likely is this to happen? It has already happened. Tektronix makes extremely high-quality products, mostly test equipment for engineers. Allegedly, a Tek digital storage scope built in the mid-1970s failed in the mid-1980s because it could no longer store the date in its allotted 24 bits.
I'm personally more worried about these kinds of embedded computers failing than mainframe computers. There are computers in your car, in your microwave oven, in the gas pipeline, in the electric power grid. Some of these may fail or behave irrationally on January 1, 2000. Some may fail at unpredictable times related to how many seconds will fit in the computer's word size and when they started counting. All of these computers are harder to find and test than the mainframes in Corporate America's glass-walled computer rooms.
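The arithmetic behind these unpredictable failure times is simple: a counter of seconds wraps after 2^N ticks, so the failure date depends entirely on the word size and on when the counter started. A rough sketch, assuming an unsigned counter of whole seconds (a signed one wraps in half the time):

    # Rough arithmetic: how long an N-bit counter of seconds lasts
    # before it wraps around (unsigned; a signed counter lasts half as long).
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    for bits in (16, 24, 32, 64):
        years = 2 ** bits / SECONDS_PER_YEAR
        print(f"{bits}-bit counter wraps after about {years:.3g} years")

A 24-bit counter lasts about half a year; a 32-bit counter, about 136 years; when each one dies depends on when it started counting.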
Why don't I just leave the software field?
There is a guy upstairs from me at MIT: Dave Clark. He basically wrote the spec for TCP/IP, the standard by which computers talk to each other over the Internet. The Internet started out with two computers on his desk and scaled to the current 20 million machines. Dave Clark did not have to go out and rewrite TCP/IP every time another million machines were hooked up or when people started to broadcast radio sound or video clips over the Internet.
Some day I hope to design systems as robustly as Dave Clark.
More information on this topic can be found at TimeBomb 2000 and the Y2K Problem. A discussion group is at:
TimeBomb 2000 (Y2000)
-- Tracy Adams, September 6, 1998
On TCP/IP: your tale about Dave Clark is roughly accurate, but there are a few inaccuracies. Dave Clark did not by any means design TCP/IP by himself.
When the Internet was two computers, back in 1969, it did not use TCP/IP. The NCP protocol became stable sometime in the 1970s (I don't know when -- I wasn't there, and I haven't read the RFCs that far back) and had some minor problems, like 8-bit host addresses. TCP/IP in its current form was stabilized about 1980, when there were a hundred or two machines on the Internet. It broke down completely about 1986. Van Jacobson and some smart guys at Berkeley figured out how to fix this, by having TCP work a little differently and slow down when the network got too congested. This way, you would get worse service, but everybody else would get better service. Since the Berkeley guys were maintaining BSD, which was by that time the dominant OS on the Internet, they could mandate that every computer on the Internet use their congestion-avoidance algorithms.
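The fix boiled down to what's now called additive-increase/multiplicative-decrease: grow the sending window gently while acknowledgments keep coming, and cut it sharply at the first sign of loss. A toy sketch of the idea (not the actual Berkeley code):

    # Toy additive-increase / multiplicative-decrease (AIMD), the core
    # idea behind TCP congestion avoidance. A sketch, not the BSD code.
    def aimd(events, cwnd=1.0):
        history = [cwnd]
        for event in events:
            if event == "ack":
                cwnd += 1.0 / cwnd          # roughly +1 segment per round trip
            else:                           # a loss signals congestion
                cwnd = max(1.0, cwnd / 2)   # back off hard
            history.append(round(cwnd, 2))
        return history

    print(aimd(["ack"] * 6 + ["loss"] + ["ack"] * 3))

Everybody backing off this way trades a little individual throughput for a network that doesn't collapse.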
Now we're running out of host addresses again, and you can't send radio sound or video clips over TCP very well -- a single lost packet in the middle means that the whole sound or video stops for several seconds waiting for a resend -- so people are going back to UDP and not building in congestion-avoidance stuff. The heart of the Internet, the backbones, can't figure out how to route reliably, and the best solutions they've come up with so far result in every router on the backbone having to know which direction tens of thousands of networks are in. The Internet still isn't handling a fraction of the data the phone system is, which is partly because every packet is routed independently, and it's not handling it anywhere near as reliably.
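The stalling is inherent in TCP's in-order delivery: one missing segment holds back every byte behind it, which is fine for a file transfer and terrible for live audio. A miniature sketch of the effect:

    # Miniature head-of-line blocking: TCP-style in-order delivery.
    # Segment 2 is lost, so segments 3-5 sit buffered, undelivered.
    def deliver_in_order(arrivals):
        next_seq, buffered, delivered = 1, {}, []
        for seq in arrivals:
            buffered[seq] = f"segment {seq}"
            while next_seq in buffered:
                delivered.append(buffered.pop(next_seq))
                next_seq += 1
        return delivered

    print(deliver_in_order([1, 3, 4, 5]))     # ['segment 1'] -- the rest wait
    print(deliver_in_order([1, 3, 4, 5, 2]))  # a retransmitted 2 unblocks all five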
The current TCP/IP protocols don't work well for distributing the same information to many people at once. That's why Phil Greenspun has a 1300-kg HP K460 with four gigabytes of RAM, four processors, and a 250-kg 6000-watt power supply in his closet, and he doesn't have a hundredth of the number of readers that CNN does. TCP is fundamentally unicast, and IP in its original form didn't have any provision for multicasting -- where you send out one packet that goes to several other people. In fact, most routers *still* don't support multicasting.
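At the socket level, multicast sending is just one datagram addressed to a group; the hard part is getting every router in between to replicate it. A sketch (the group address and port here are invented):

    # Sketch of a multicast sender: a single sendto() can reach every
    # host that joined the group -- if the routers in between cooperate.
    import socket

    GROUP, PORT = "224.1.2.3", 5007   # invented multicast group and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)  # hop limit
    sock.sendto(b"one packet, many receivers", (GROUP, PORT))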
It's a royal pain in the neck to run a TCP/IP LAN. This is because TCP/IP was not designed for LANs -- it was designed for the Internet, which was a bunch of big multi-user mainframes and minicomputers connected by long-haul leased lines. As a result, adding a Mac to an AppleTalk LAN consists of plugging it in, but adding a computer to a TCP/IP LAN consists of plugging it in, assigning it a unique IP address, telling it its IP address, and telling it the IP addresses of the DNS servers to use; if you want to use it on the Internet, it also needs to know the local network's netmask (so it can tell which addresses are on the local network and which ones aren't) and the address of the default router. These steps are not a problem if you're setting up a new machine every year or two, but they're a pain when you're reinstalling Windows several times a day.
TCP/IP is not designed for security. Anyone in between you and someone you're talking to can see everything you send and receive, and if you're using TCP, they can insert arbitrary bytes into the bytestream; anyone on the Internet can crash your computer if its implementation of IP hasn't been updated in more than about a year, and probably even if it has; anyone who can get hold of some minimal information about any particular TCP connection you're talking on can close it at any time; anyone on the Internet can "smurf" you and flood your connection to the Internet with garbage, by making it appear that you're requesting lots of ping replies; if you're following the TCP sequence number generation as it was originally specified, then someone else can fake a TCP connection from you to somewhere else, from anywhere on the Internet. If someone can get physical access to your Ethernet, they can make everybody else on the LAN think that they are everybody else, so all data traffic goes to their machine, which can, of course, do whatever it wants with it.
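The sequence-number attack deserves a word of explanation: RFC 793 suggested deriving initial sequence numbers from a 32-bit clock ticking roughly every 4 microseconds, so one observed ISN lets an attacker guess the next one within a tiny window. A sketch of the arithmetic (the timestamps are made up):

    # Why RFC 793-style initial sequence numbers are guessable: the spec
    # suggested a 32-bit counter incremented about every 4 microseconds.
    def rfc793_isn(microseconds_since_boot):
        return (microseconds_since_boot // 4) % 2**32

    observed = rfc793_isn(1_000_000_000)             # ISN seen on one connection
    predicted = rfc793_isn(1_000_000_000 + 20_000)   # a guess 20 ms later
    print(predicted - observed)   # only 5000 apart: a trivially small search space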
The Internet Protocol version 6, or IPv6, previously known as IPng, solves most of these problems. It has 128-bit host addresses, so we'll never run out again; it has a better priority scheme to distinguish real-time traffic like sound and video from reliable traffic like telnet and the Web, and to allow routers in the middle to keep too much sound and video from stopping the telnet and Web traffic; it mandates support for multicast; it includes built-in mandatory support for encryption; and it provides a way you can plug a machine into your network and have it just work without having to configure it, just like Appletalk, only worldwide. IPv6 is currently supported in several operating systems, and there's a network called the 6bone where IPv6 is in use right now by several hundred people. There's no telling how long it'll take to switch completely. They haven't solved the routing problem or the reliability problem, as far as I can tell; http://snad.ncsl.nist.gov/~ipng/NIST-6bone-status.html seems to say that between half and three-fourths of the 6bone is unreachable from NIST as I write this.
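To get a feel for 128 bits, a little arithmetic:

    # How big is 2**128? Spread over the Earth's surface, it comes to
    # roughly 6.7e23 addresses per square meter.
    addresses = 2 ** 128
    earth_surface_m2 = 5.1e14   # about 510 million square kilometers
    print(addresses)            # 340282366920938463463374607431768211456
    print(f"{addresses / earth_surface_m2:.3g} addresses per square meter")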
So, in summary:
- TCP/IP has gone through at least two major redesigns;
- TCP/IP has a lot of scaling problems and a lot of problems being applied to new problems, like multicast, voice, operation in a hostile network, and local area networking;
- Dave Clark didn't do TCP/IP by himself.
Now, don't think I hate TCP/IP. I love it. It's the best networking protocol in the world. It's more flexible than any other networking system. It's reasonably fast (although the packet-by-packet routing means that it will be limited by router silicon speeds, instead of wire speeds, for a long time to come) and it's reasonably secure, assuming nobody can snoop on your packets. It's not hard to configure, although it's a long way from AppleTalk, and it's robust enough that it keeps on going even when there are 90% packet loss rates on the backbones.
-- Kragen (kragen@pobox.com), September 22, 1998
> Dave Clark did not by any means design TCP/IP by himself

Does anyone really design anything by themselves? This is quibbling, IMO.
> Now we're running out of host addresses again
Actually, CIDR really pushed this issue off a fair distance into the future. All it took was a little self-discipline. Of course, if things got really tight, some institutions (e.g., MIT) that have sizable allocations (i.e., an entire Class A) could always renumber and share the wealth, but that's a pain and probably unnecessary.
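CIDR's trick is that contiguous allocations collapse into a single routing-table entry. A quick sketch of the aggregation, using Python's ipaddress module:

    # CIDR aggregation: four contiguous class-C-sized networks collapse
    # into one /22 routing-table entry.
    import ipaddress

    nets = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(4)]
    print(list(ipaddress.collapse_addresses(nets)))  # [IPv4Network('192.168.0.0/22')]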
> the best solutions they've come up with so far result in every router on the backbone having to know which direction tens of thousands of networks are in
This is mainly because doing the kind of baling-wire fix you describe turned out to be ridiculously easy with the advent of better block allocation and almost-free memory. If what you have works, you don't fix it.
> The current TCP/IP protocols don't work well for distributing the same information to many people at once. That's why Phil Greenspun... doesn't have a hundredth of the number of readers that CNN does.
In my opinion (and from reading the articles, I believe it's philg's as well), this isn't a big deal. We've become accustomed to multicast services (e.g., TV and radio) because for a very long time they were the only game in town. While I agree that they have their place, there's no reason to dismiss TCP/IP just because it doesn't serve an area that already has substantial support. For information gathering and distribution, I think the peer-to-peer model is much nicer anyway.
> It's a royal pain in the neck to run a TCP/IP LAN...
The comments in this section have nothing whatsoever to do with TCP/IP. The example given regarded setting up multiple instances of Windows machines; ironically, Windows' support of DHCP is about as good as you could ask for and solves every issue discussed in this section.
> TCP/IP is not designed for security
No, it isn't. Who cares? Why insist that security be put in place at the transport level? This issue has been addressed in most critical areas (https, ssh, etc.). Anyway, the Internet is a global cooperative network. If we relied on TCP/IP to provide security, the US parts of it would have a hard time communicating with the rest of the planet.
> IPv6 has 128-bit host addresses, so we'll never run out again
Can I quote you on this in a few decades? The real answer would be to make a protocol that could accept expansion. However many addresses we specify (and yes, I'm aware that 340282366920938463463374607431768211456 is a long number), we will eventually find a way to use them up.
Just a different opinion.
-- Richard Stanford, March 17, 1999
It is possible that the IPv6 technology mentioned above may not provide an adequate number of IP addresses. However, this technology will provide enough IP addresses for a world covered by computers (today's size) stacked 500 high. Probably enough to give every leaf an IP address. If the leaves fall... would that be considered crashing?
-- Kurt Schroeder, March 22, 1999
I am quite puzzled. Who IS Dave Clark? I read the book Where Wizards Stay Up Late: The Origins of the Internet, and there is no mention of Dave Clark in it.
-- Waran Were, May 10, 1999
I was born in 1969. In 2038, when the time counters wrap around on 32-bit Unix systems, I will be ... near retirement age. Surely, forty years from now, 32-bit computers will be obsolete, and every computer that could seriously affect my life is going to have, at the very least, a 64-bit processor and an OS that uses 64-bit dates. So I have nothing to worry about. Right?
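The exact moment is easy to pin down: a signed 32-bit count of seconds since January 1, 1970 tops out at 2**31 - 1.

    # The last second a signed 32-bit time_t can represent.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00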
Maybe I should spend the next few decades stockpiling canned foods and working on my target shooting....
-- Seth Gordon, August 30, 1999
Dave Clark DID NOT write the specs for either TCP or IP. The RFC for TCP (RFC 793) was written by Jon Postel. So was the one for IP (RFC 791). In fact, a search for Postel in the authors field of RFCs at www.rfc-editor.org comes up with 231 results. The earliest RFC authored by Postel is RFC 0045.
A similar search for Clark comes up with 19 results. Of these, 4 are for Clarks other than D. The earliest RFC attributed to D. Clark is RFC 0813, which deals with implementation as opposed to design issues.
CONCLUSION: Dave Clark is not the creator of TCP/IP.
The foregoing does not mean that Postel is the creator of TCP/IP either. The good work started long before he arrived on the scene (and quite a bit earlier than Dave Clark did). There were quite a few illustrious predecessors; the name of Vint Cerf comes most easily to mind. However, Jon Postel's contributions to the Internet are immense, much more so than most.
-- Ratnakarprasad Tiwari, April 10, 2002
> Can I quote you on this in a few decades? The real answer would be to make a protocol that could accept expansion. However many addresses we specify (and yes, I'm aware that 340282366920938463463374607431768211456 is a long number), we will eventually find a way to use them up.

> It is possible that the IPv6 technology mentioned above may not provide an adequate number of IP addresses. However, this technology will provide enough IP addresses for a world covered by computers (today's size) stacked 500 high. Probably enough to give every leaf an IP address.
First, if exploration of other planets (much less other solar systems) ever gets underway, the available address space would start getting split up immediately.
Second, suppose Internet connections become simple enough (remember that many things that were considered difficult or impossible 25 or 50 years ago are considered simple today) to put a small client/server into every piece of equipment connected to a phone line or electrical line (signals can be sent over electrical lines), so that it can regularly report its status upstream. As a trivial example, suppose a light socket in hotel room 72 on the 17th floor could report a burned-out bulb; someone on the maintenance staff could come up to replace the bulb in minutes.
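The report itself could be almost nothing; here's a sketch of such a reporter, with the hostname, port, and message format all invented for illustration:

    # Sketch of a trivial embedded status reporter: one UDP datagram
    # upstream per event. Hostname, port, and format are all invented.
    import socket

    MAINTENANCE_HOST, PORT = "maintenance.example.com", 9999

    def report(device_id, status):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(f"{device_id} {status}".encode(), (MAINTENANCE_HOST, PORT))
        sock.close()

    report("floor17/room72/socket3", "bulb-burned-out")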
A single-chip FTP server has already been demonstrated.
-- Scott McNay, August 18, 2002
> Can I quote you on this in a few decades? The real answer would be to make a protocol that could accept expansion. However many addresses we specify (and yes, I'm aware that 340282366920938463463374607431768211456 is a long number), we will eventually find a way to use them up.

> However, this technology will provide enough IP addresses for a world covered by computers (today's size) stacked 500 high. Probably enough to give every leaf an IP address.
The first comment above is right on the money. The second comment above (by another correspondent) is an often-made argument that completely misses the point that rarely, if ever, do you want to assign addresses sequentially, and especially so in heterogeneous systems. Look at phone numbers, as a common example.
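Like area codes and exchanges, IP addresses are handed out in hierarchical blocks, so most of the raw space is deliberately left empty. A sketch of how fast hierarchy eats IPv6 space:

    # Hierarchical allocation "wastes" space by design: one site's /48
    # holds 65536 /64 subnets, each with 2**64 mostly-unused addresses.
    import ipaddress

    site = ipaddress.ip_network("2001:db8::/48")   # the documentation prefix
    print(site.num_addresses)                      # 2**80 addresses for one site
    print(next(site.subnets(new_prefix=64)))       # 2001:db8::/64, the first subnet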
-- Dr. J. S. Pezaris, April 10, 2003