Scalability, Three-Tiered Architectures, and Application Servers
or "Why the Netscape Application Server (Kiva) Sucks"
by Philip Greenspun for Web Tools Review
"Men are most apt to believe what they least understand."Application servers for Web publishing are generally systems that let you write database-backed Web pages in Java.
-- Michael de Montaigne
The first problem with this idea is that Java, because it must be compiled, is usually a bad choice of programming language for Web services (see my book chapter on server-side programming).
The second problem with this idea is that, if what you really want to do is write some Java code that talks to data in your database, you can execute Java right inside of your RDBMS (Oracle 8.1, Informix 9.x). Java executing inside the database server's process is always going to have faster access to table data than Java running as a client. In fact, at least with Oracle on a Unix box, you could bind Port 80 to a program that would call a Java program running in the Oracle RDBMS. You don't even need a Web server, much less an application server. It is possible that you'll get higher performance and easier development by adding a thin-layer Web server like AOLserver or Microsoft's IIS/ASP, but certainly you can't get higher reliability by adding a bunch of extra programs and computers to a system that need only rely on one program and one computer.
This document works through some of these issues in greater detail, pointing out the grievous flaws in Netscape Application Server (formerly "Kiva") and explaining the situations in which Oracle Application Server is useful.
Our hardware for this monstrously popular site? A Sun Microsystems SPARC Ultra 2 pizza box Unix machine, built in 1996. Its dual 167-MHz CPUs would be laughed at by the average Quake-playing 10-year-old. The CPUs sit idle 80% of the time. The disks sit idle most of the time, partly because I spent $4,000 on enough RAM to hold the entire 750 MB data set. Oh yes, the machine also serves a few hundred thousand hits/day for other customers of arsdigita.com and runs the street cleaning and birthday reminder services that we built.
If we tarred up the site and moved it to a mid-range Unix server such as the HP K460 that sits behind http://www.photo.net, we could probably serve at least 5 million hits/day. If we moved it to the highest-end HP server, I'd bet that we could get close to the 100-million hit/day mark that sites like Yahoo serve.
Why has "scalable" become the buzzword du jour? People get burned because they do stupid things. They connect their Web server to their RDBMS via CGI, thus forcing the machine to work 10-20 times as hard for no good reason. They run Windows NT. They run some unproven junkware/middleware that came in an attractive box. Services get wedged and they run out and buy another dozen (or thousand, as with www.microsoft.com) physical computer systems. Now that they have a whole machine room full of hardware, they know that they can't keep it all running simultaneously so they look for software to yoke it all together somehow such that the death of one machine won't be noticed.
How do my friends and I avoid scalability problems? We know that we're stupid. We run the Oracle 8 RDBMS like the rest of the world and don't try to figure out if some new competitor's hype has any relationship to reality. We talk to the RDBMS via AOLserver, which has been doing connection pooling from a Tcl API since 1995. So we get the safety and software development ease of Perl/CGI, but the computer never has to fork a CGI process and the database connections are shared among the scripts. We've served roughly 1 billion hits with AOLserver so we're pretty sure that it works. Linux and NT get magazine writers excited, but we run the same commercial versions of Unix on which the Fortune 500 relies for its enterprise computing.
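To make the no-fork, pooled-connection point concrete, here is a minimal sketch of an AOLserver .tcl page. The file name and the users table are invented for illustration, and database_to_tcl_string is the same little helper that appears in the contest example later in this article; ns_db and ns_return are part of the standard AOLserver Tcl API (details vary a bit between AOLserver versions).

# hypothetical file /web/yourservice/www/user-count.tcl
# runs inside the AOLserver process: no CGI fork, and the Oracle
# connection is borrowed from a pool shared by every script
set db [ns_db gethandle]
set n_users [database_to_tcl_string $db "select count(*) from users"]
ns_db releasehandle $db   ;# hand the connection back to the pool for the next request

ns_return 200 text/html "<html><body>We have $n_users registered users.</body></html>"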
But what's reasonable and realistic? It will cost you a fortune in extra hardware, software, and administration time to shoot for 24x7x365 uptime. And, in the end, you will never achieve it. Nobody in the history of computing has ever achieved 100% uptime. So would you rather have three Web services that are down for eight scheduled hours/year and eight unscheduled hours or one service that wasn't supposed to ever go down but in fact is unavailable for four hours/year?
Why do people think that availability is such a problem? Again, they are mostly applying band-aids to decisions that were risky on their face. If you read a Sun Microsystems marketing brochure, you might be ready to run out and buy a 64-processor E10000. But after watching one of their contract service guys try to fix a desktop Sun system, a wise person would probably think twice about relying on the latest, greatest, and most complex Sun server. Does that mean you can't buy a big fancy machine? Of course not. I'm not nervous about my personal HP K460 with its 36 disk drives on 6 SCSI chains, 4 CPUs, 4 GB of RAM, and 2 network cards. Why not? All the HP service engineers that I've met in the Boston area are wizards on both the hardware and software. How about Windows NT? Do you personally know anyone serving 10 million hits/day, every day, from an NT box parked where they don't have physical access? If not, why risk your service on NT?
Suppose I could get a couple more HP K460s and the HP ServiceGuard software and maybe some round-robin routers, all for free? Would it make my site more reliable? I don't think so. I don't have enough time or money to figure out how to install, configure, and maintain all that stuff. I wouldn't even have time to document the configuration so that if I were on vacation and the site failed, someone else could bring it back up.
Before trashing the idea of the application server, let me trash the idea of the three-tiered architecture for Web services.
A Web service usually has to go from conception to launch in less than one year, preferably closer to 6 months. If you can't do that, the idea that seemed so clever will probably already have been done by six other people. It might be nice to break everything up into abstractions, layers, objects, and protocols operating on three redundant tiers. But what if it takes you three years? How can you be sure that the publishing or business needs won't change radically six months after you launch and get some real user experience?
The end result of this process is laughably unreliable since NT isn't truly crash-proof, IIS doesn't work so well, ASP is a bit flakey, and the COM/DCOM stuff isn't reliable or fast. But the process by which the publisher got to this end result is perfectly reasonable.
America Online was enamored of the process but not of the resulting reliability. So they added "ADP" pages to AOLserver (http://www.aolserver.com). An HTML document is a legal ADP program. Magic escape codes allow the developer to insert a bit of Tcl. When the developer needs some powerful encapsulated programs, a programmer can build Tcl modules that are loaded at server start-up. If that isn't sufficient, a programmer can write a shared library in C and make it available to Tcl programs AOLserver-wide. The C code has access to the full range of services on a Unix machine.
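For readers who have never seen one, here is a rough sketch of what an ADP page looks like; the file name and the message are made up, while ns_adp_puts, ns_info, ns_httptime, and ns_time are standard AOLserver Tcl commands. Everything outside the <% %> escapes is ordinary HTML, which is why a plain HTML document is already a legal ADP program.

<!-- hypothetical file hello.adp -->
<html>
<head><title>Hello</title></head>
<body bgcolor=white>
<h2>Hello</h2>

<%
# anything between the escapes is plain Tcl, with the full AOLserver API available
ns_adp_puts "This page was served by [ns_info hostname] at [ns_httptime [ns_time]]."
%>

</body>
</html>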
In the Apache world, people achieve the same results with a mix of traditional Perl CGI scripts and plug-in modules. The approach that is closest to ASP in spirit is the PHP hypertext preprocessor (see http://www.php.net).
A cleaner system than any of the above is Meta-HTML (http://www.metahtml.com), which extends HTML syntax and semantics into a powerful programming language. This means that the developer isn't forced to constantly bounce between Tcl and HTML syntax, for example.
Curl is one of the most thoughtfully developed gentle-slope programming systems for the Web. It was built by computer systems researchers at MIT and works for both server- and client-side Web programming (see http://curl.lcs.mit.edu/curl/wwwpaper.html). Curl is currently being commercialized by a venture capital-backed company.
Almost all of the implementations of these gentle-slope languages provide the developer with a rapid-prototyping environment. To make a change to a program, the developer need only edit a file in the Unix file system. The next time a URL is requested, the new version of the program is used. Note that the earliest Web server scripting system, Perl/CGI, has this property.
Last year, I worked with a group of programmers who were rebuilding a site that had originally been knocked out by a consultant in a couple of months. It was some big, nasty Perl scripts talking to an Oracle RDBMS. Homely yet functional. The new team of programmers loved Kiva. Actually they bristled at the title "programmer" and would point out that they were in fact "software engineers." I don't mention that as a point of ridicule, but rather to illustrate their perspective. They dealt in high-level concepts on prototypes that remained prototypical right up until the VC funding ran out.
-- email from a Web technologist

This e-mail message was about the Netscape Application Server, a product originally named "Kiva Enterprise Server" that Netscape purchased in early 1998.
After you pay $35,000 per CPU, you can add a dynamic page to your Web site by following these easy steps, as outlined by one of my co-developers:
1. Write your Java code in foo.java.

2. Compile with:

/usr/local/kds/jdk1.1.5/bin/javac -g -classpath "/usr/local/kds/jdk1.1.5/lib/classes.zip:/usr/local/kds/classes/java/SWING.JAR:/usr/local/kds/classes/java/kfcjdk11.jar:/usr/local/kds/classes/java/kdsjdk11.jar:/usr/local/kds/classes/java/ktjdk11.jar::/usr/local/kds/jdk1.1.5/classes:/usr/local/kds/jdk1.1.5/lib/classes.jar:/usr/local/kds/jdk1.1.5/lib/rt.jar:/usr/local/kds/jdk1.1.5/lib/i18n.jar:/usr/local/kds/jdk1.1.5/lib/classes.zip:/usr/local/kds/jdk1.1.5/classes:/usr/local/kds/jdk1.1.5/lib/classes.jar:/usr/local/kds/jdk1.1.5/lib/rt.jar:/usr/local/kds/jdk1.1.5/lib/i18n.jar:/usr/local/kds/jdk1.1.5/lib/classes.zip::/usr/local/kds/APPS" "/usr/local/kds/APPS/yourappname/foo.java"

3. Once successfully compiled, create a GUID with /usr/local/kds/bin/kguidgen.

4. Paste a copy of this GUID into your code.

5. Edit yourappname.gxr and add an entry for foo, with the GUID.

6. Register the applogic with /usr/local/kds/bin/kreg yourappname.gxr if you like, or if for some reason this doesn't work, you can run kreg without arguments in interactive mode.

7. You can access the applogic at http://yourserver.com/cgi-bin/gx.cgi/AppLogic+foo

The logs are in /usr/local/kds/log/; I think kjsdev.log usually has the most interesting information. If your applogic fails at run time, the browser will return "document contains no data".

Following these steps, it took us two weeks to port an application that had taken a day to write in AOLserver Tcl. That's not counting the time it took to get some paper manuals FedExed because the documentation wasn't available on the Web in HTML format. The 2.0 software was almost laughably unreliable, with minor Java programming errors in a single script capable of bringing all Web services to a halt. But even if the Netscape Application Server had worked as advertised, it would have been an extremely painful development environment. Here's a Tcl string into which we are substituting the values of two variables:
Manage $domain ($pretty_name)
In Kiva's template language, you have
Manage %gx type=cell id=domain%%/gx% (%gx type=cell id=pretty_name%%/gx%)
It turns out that string assembly in any system that uses Java is
painful because Java's parser is so weak that you can't have a string
literal containing newlines. Consider the simple error message fragment
in Tcl:
append exception_text "<li>Your email address doesn't look right to us. We need your full
Internet address, something like one of the following:
<code>
<ul>
<li>Joe.Smith@att.com
<li>student73@cs.stateu.edu
<li>francois@unique.fr
</ul>
</code>
"
In Java, that turns into the following rich source of compiler complaints:
exception_text += "<li>Your email address doesn't look right to us. We need your full\n"+
"Internet address, something like one of the following:\n\n"+
"<code>\n"+
"<ul>\n"+
"<li>Joe.Smith@att.com\n"+
"<li>student73@cs.stateu.edu\n"+
"<li>francois@unique.fr\n"+
"</ul>\n"+
"</code>\n";
Passing sets of variables from the user's browser through to Oracle is
much more painful in Kiva. Here's the AOLserver Tcl script that allows
a contest administrator to add a column to the entry table. Note that
this is a 7-line program plus two "return a page" statements.
set_form_variables
set db [ns_conn db $conn]
set table_name [database_to_tcl_string $db "select entrants_table_name from contest_domains where domain = '$QQdomain'"]
set alter_sql "alter table $table_name add ($column_actual_name $column_type)"
set insert_sql "insert into contest_extra_columns (domain, column_pretty_name, column_actual_name, column_type)
values
( '$QQdomain', '$QQcolumn_pretty_name', '$QQcolumn_actual_name','$QQcolumn_type')"
if [catch { ns_db dml $db $alter_sql
ns_db dml $db $insert_sql } errmsg] {
# print error message
# ...
} else {
# database stuff went OK
# print confirm page...
}
Note that variables that came from the previous form, like $column_actual_name and $QQcolumn_pretty_name (with any apostrophes quoted), are available simply because we called the Tcl procedure set_form_variables. The Kiva code shows just how much pain this Tcl magic was saving us.
package contest;
import java.lang.*;
import java.util.*;
import java.text.*;
import java.io.*;
import com.kivasoft.applogic.*;
import com.kivasoft.types.*;
import com.kivasoft.util.*;
import com.kivasoft.*;
/*GUID {C8CCC3C0-535C-1510-AD41-0800208F129A} */
public class ContestAddCustomColumn2 extends ContestAppLogic
{
public int execute() {
// grab variables from the previous form
String domain = valIn.getValString("domain");
String column_pretty_name = valIn.getValString("column_pretty_name");
String column_actual_name = valIn.getValString("column_actual_name");
String column_type = valIn.getValString("column_type");
com.kivasoft.IDataConn conn = openDatabase();
// four lines to replace one line of Tcl (database_to_tcl_string...)
IQuery domain_info_query = createQuery();
domain_info_query.setSQL("select entrants_table_name from contest_domains where domain = '"+domain+"'");
IResultSet domain_info_rs = conn.executeQuery(0, domain_info_query, null, null);
String entrants_table_name = domain_info_rs.getValueString(domain_info_rs.getColumnOrdinal("entrants_table_name"));
String alter_sql = "alter table "+entrants_table_name+" add ("+column_actual_name+" "+column_type+")";
String insert_sql = "insert into contest_extra_columns (domain, column_pretty_name, column_actual_name, column_type) values (:domain, :cpn, :can, :ct)";
IQuery alter = createQuery();
alter.setSQL(alter_sql);
IResultSet ignore_this_rs = conn.executeQuery(0, alter, null, null);
IValList insertValList = GX.CreateValList();
// set some substitution variables; note that Kiva
// delivers garbage to Oracle if you include underscores
// in the bind variable names
insertValList.setValString(":domain",domain);
insertValList.setValString(":cpn",column_pretty_name);
insertValList.setValString(":can",column_actual_name);
insertValList.setValString(":ct",column_type);
IQuery insert = createQuery();
insert.setSQL(insert_sql);
IPreparedQuery insert_prepared_query = conn.prepareQuery(0, insert, null, null);
// here we're finally able to do the insert, something that took one
// line of Tcl
IResultSet ignore_this_rs_again = insert_prepared_query.execute(0, insertValList, null, null);
TemplateMapBasic map = new TemplateMapBasic();
map.putString("system_name",systemName());
map.putString("system_owner",systemOwner());
map.putString("alter_sql",alter_sql);
map.putString("column_actual_name",column_actual_name);
map.putString("column_pretty_name",column_pretty_name);
map.putString("entrants_table_name",entrants_table_name);
// this last bit will substitute all the preceding variables into a
// template (a file that we separately maintain)
return evalTemplate(getDocumentRoot()+"ContestAddCustomColumn2.html", (ITemplateData) null, map);
}
}
Now we have more than 40 lines of Java code. But it is so much more
reliable than the old Tcl, isn't it? Actually it is less reliable. My
Tcl program checks for errors in executing the database ALTER TABLE and
INSERT statements (note for db nerds: these are not bundled together in
a transaction because DDL statements such as ALTER TABLE cannot be
rolled back). The Java program does not. Why not? I was too worn out
from writing all the extra lines and fighting Kiva bugs.
Now that we've covered the day-to-day pain of working with Kiva/Netscape Application Server, let's look at some of its more pervasive shortcomings.
If I write a Java program using the Servlet API to talk to the Web and the JDBC API to talk to an RDBMS, then I can run my program without modification on sites backed by the following Web servers: AOLserver, Apache, Lotus Domino, Microsoft IIS, Netscape Enterprise, Sun Java Web Server, Zeus (and about 20 more, according to http://jserv.javasoft.com/products/java-server/servlets/environments.html).
If I write a Java program to the Kiva/Netscape API, then I can run my program on a system with the $35,000/CPU Netscape Application Server. Period.
Consider a query like

select * from users where user_id = 6752

Everything in the query is always the same except for the number at the end, which will change depending on which user is grabbing a page. In ancient times, the RDBMS vendors decided that the syntax for SQL with "bind variables" should be

select * from users where user_id = :1

Before your program asks the RDBMS to execute this query, it is supposed to tell the RDBMS what the value of ":1" will be. If you have a bunch of bind variables, you end up with ":17" in your SQL and it becomes rather ugly. As far back as I can remember, Oracle at least would let you use bind variables like ":user_id". And if you read A Guide to the SQL Standard (Date and Darwen 1997; formerly "the red book" but now with a blue cover) you will find bind variables like ":user_id". But Kiva decided, perhaps for superstitious reasons, to write a little translator that would map each symbolic variable name into a numeric name. However, the Kiva documentation doesn't say which bind variable names are acceptable. It turns out that underscore, though a legal character in an SQL column name, a Java variable name, or a Tcl variable name (from which I was adapting the code), does not work in this little ad hoc Kiva subsystem. My program failed. A prospective user would get "document contains no data". The Kiva error log would fill up with lines like

[03/11/98 02:10:23:4] warning: ORCL-048: select * from users where user_id = :1_id

When I changed the bind variable from ":user_id" to ":userid", the page started working again.
Another half-thought-through idea in Kiva is that data from browser cookies and data from HTML form variables should come in via the same programming interface. Unfortunately, that means if a cookie and a form variable have the same name, you'll only get the value of one of them.
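For contrast, here is a sketch of how the two sources stay separate in AOLserver Tcl (the variable names are made up; ns_conn form, ns_conn headers, and the ns_set commands are the standard API), so a cookie named user_id can never silently shadow a form variable named user_id:

# form and query-string variables arrive in one ns_set ...
set form [ns_conn form]
set user_id_from_form [ns_set get $form user_id]

# ... while cookies arrive separately, inside the Cookie request header
set headers [ns_conn headers]
set raw_cookie_header [ns_set iget $headers Cookie]  ;# parse individual cookies out of this string yourself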
I don't want to go on record as saying that Tcl and Visual Basic are the world's greatest computer languages. However, they are real computer languages whose syntax and semantics are well-understood and thoroughly documented. Ditto for Java, the Servlet API, and the JDBC API. With Kiva/Netscape Application Server, it really isn't clear what the program is even supposed to do.
Netscape Application Server isn't just a way to talk to your RDBMS, though. Rather than have the RDBMS manage a user's session state, you can ask the cluster of application servers to do it for you. Because the load-balancing features of the application server mean that a user may be bounced from one physical machine to another, all of the user's session state must be kept simultaneously up to date on every physical machine running the application server. This is the well-known problem of keeping a consistent replicated database of dynamic information. A wise Web publisher would not trust Oracle 8 to do this job. After all, Oracle Corporation only has 20 years of experience in building this kind of system (through 8 versions). They only have $7 billion in revenue and 30,000 employees dedicated to making it reliable. Banks and insurance companies rely on Oracle's expertise. But that's not good enough for your Web site. Instead, you want the 2.0 version of a product built by a small start-up company. The fact that a customer's Java syntax error can crash the entire product shouldn't dissuade you from trusting the Kiva programmers to have solved this difficult problem correctly.
Assuming you believe that the Kiva programmers figured out how to do replication correctly, remember that the servers have to communicate session state amongst themselves over the network. You have to configure and administer this communication. You have to make sure that the open ports used for this communication don't become a security risk.
How is this security risk managed in practice? My friends who run Kiva never got this far. They were never able to get Kiva to run on more than one physical machine at a time. In fact, they had to restrict Kiva on that machine to a single thread. So all of the users of their dynamic Web content must line up single-file. If one of the users were to ask for a page that required Oracle to spend 30 seconds sweeping through some big tables, all of the other users would be staring at blank screens for 30 seconds.
"Once, after nearly an hour of continuous 100 percent CPU utilization, every Sun Java VM crashed so badly we couldn't even kill them using the Solaris "kill -9" command. With the Java VMs continuing to use nearly 100 percent of the server's CPU resources, we had to reboot even the (normally) exceptionally stable Solaris system."

Despite having support from vendor experts, Dyck was never able to support more than 10 simultaneous users on Java pages in Netscape Application Server. What kind of a machine was he using? Just a little SPARC E4000 with 8 CPUs (a $200,000 box).

"The bottom line is that even a few crashes per hour aren't acceptable for mission-critical applications. After all the tests, we faced an inescapable fact: Java VMs and Java database drivers just aren't ready for the demands of high-load, production environments."
"The major Java problems we observed under both Sapphire/Web and Netscape Application Server were crashes in Java threading code (exceptions in java.lang.thread), which we solved by keeping thread counts below five per virtual machine; unstable Oracle JDBC (Java Database Connectivity) drivers, particularly when handling date and time values (which we mostly solved by getting Oracle Corp.'s as yet not publicly posted 8.0.4.2.0 JDBC drivers and sticking with Oracle's type 4 all-Java drivers, which proved more stable and virtually the same speed as the company's type 2 mixed Java/C drivers); and large memory leaks in the JavaSoft VMs that caused them to continuously consume RAM during the test interval (which we solved by equipping all servers with at least 1GB of RAM each and manually restarting all the Java VMs between each test run)."
Analogously, if you show a super-hairy transactional Web service to a bunch of folks in positions of power at a big company, they aren't going to say "I think you would get better linguistics performance if you kept a denormalized copy of the data in the Unix file system and indexed it with PLS". Nor are they likely to opine that "You should probably upgrade to Solaris 2.6 because Sun rewrote the TCP stack to better handle 100+ simultaneous threads." Neither will they look at your SQL queries and say "you could clean this up by using the Oracle tree extensions; look up CONNECT BY in the manual."
What they will do is say "I think that page should be a lighter shade of
mauve". In a Kiva-backed site, this can be done by a graphic designer
editing a template and the Java programmer need never be aware of the
change. Of course, the graphic designer will have to be fairly formally
minded and understand that chunks of the form
%gx type=cell id=domain%%/gx%
must be preserved.
Do you need to pay $35,000/CPU to get this kind of separation of "business logic" from presentation? No. You can download AOLserver for free and send your staff the following:
To: Web Developers

I want you to put all the SQL queries into Tcl functions that get loaded at server start-up time. The graphic designers are to build ADP pages that call a Tcl procedure which will set a bunch of local variables with values from the database. They are then to stick <%=$variable_name%> in the ADP page wherever they want one of the variables to appear. Alternatively, write .tcl scripts that implement the business logic and, after stuffing a bunch of local vars, call ns_adp_parse to drag in the ADP created by the graphic designer.

Personally I find ADP syntax (copied by the AOLserver guys from Microsoft's IIS/ASP) to be cleaner than Kiva's. Furthermore, if there are parts of your Web site that don't have elaborate presentation, e.g., admin pages, you can just have the programmers code them up using standard AOLserver .tcl or .adp style (where the queries are mixed in with HTML).
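Here is a rough sketch of that discipline in action, assuming AOLserver Tcl. The procedure name and the pretty_name column are invented for illustration (real code would also quote apostrophes, as the QQ variables do in the contest example above), and the uplevel trick is only one of several ways to hand values to the page; the point is that the designer's ADP file contains nothing but HTML, one procedure call, and <%=$variable%> references.

# in a Tcl file loaded at server start-up: all the SQL lives here
proc contest_domain_vars { domain } {
    set db [ns_db gethandle]
    set pretty_name [database_to_tcl_string $db \
        "select pretty_name from contest_domains where domain = '$domain'"]
    ns_db releasehandle $db
    # make the value visible as a local variable in the calling ADP page
    uplevel [list set pretty_name $pretty_name]
}

<!-- hypothetical file manage.adp, owned by the graphic designer -->
<%
contest_domain_vars [ns_set get [ns_conn form] domain]
%>
<html>
<head><title>Manage <%=$pretty_name%></title></head>
<body bgcolor=white>
<h2>Manage <%=$pretty_name%></h2>
... the designer is free to redesign everything else on this page ...
</body>
</html>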
Similarly, if you've got Windows NT you can just use Active Server Pages with a similar directive to the developers: put the SQL queries in a COM object and call it at the top of an ASP page; then reference the values returned within the HTML. Again, you save $35,000/CPU.
Finally, you can use the Web standards to separate design from content. With the 4.x browsers, it is possible to pack a surprising amount of design into a cascading style sheet. If your graphic designers are satisfied with this level of power, your dynamic pages can pump out rat-simple HTML.
From my own experience, some kind of templating discipline is useful on about 25% of the pages in a typical transactional site. Which 25%? The pages that are viewed by the public (and hence get extensively designed and redesigned) and also require at least a screen or two of procedure language (e.g., Tcl) statements or SQL. Certainly templating is merely an annoyance when building admin pages, which are almost always plain text. Maybe that's why I find Kiva so annoying; there are usually a roughly equal number of admin and user pages on the sites that I build.
Unlike Kiva/NAS, Oracle Application Server (OAS) is essentially a stateless system, thus eliminating a whole class of potential bugs.
Maybe you're having second thoughts at this point. You're going to be doing lots of transactions. Your idea is so brilliant that you're going to have half the world using your site. Maybe a simple Web service layer on top of a proven RDBMS is good enough for Greenspun and his pathetic 500,000 hit per day personal site, but you're building something serious here.
Two words of advice: "America Online".
As of this writing (July 1998), AOL has 11 million members. They have a bunch of Unix machines running a replicated Sybase. They interface this replicated Sybase to the Web via a very simple stateless program: AOLserver.
You can see for yourself what is running behind one of the Netscape/AOL properties by asking for the HTTP headers:

telnet webcenters.netscape.com 80
Trying...
Connected to webcenters.netscape.com.
Escape character is '^]'.
HEAD / HTTP/1.0

HTTP/1.0 302 Found
Location: http://home.netscape.com
MIME-Version: 1.0
Date: Thu, 04 Nov 1999 06:23:02 GMT
Server: NaviServer/2.0 AOLserver/2.3.3
Content-Type: text/html
Content-Length: 319
In explaining some of this mess to MIT students in our one-semester course in software engineering for Web applications, it occurred to me that I never explained how three-tiered architectures and application servers came to be useful in large corporations to begin with.
Imagine that a large bank has a checking account system built in the 1960s on an IBM mainframe. They also have a Visa card system built in the 1980s on a Unix machine running the Informix relational database management system. They have a call-center management system built in the 1990s running on a Unix machine running the Oracle 7.3 RDBMS. Suppose that the bank wants to offer the following service: Charlie Consumer calls up the 800 number and wants to pay his Visa bill directly from his checking account.
The bank's IT staff is extremely risk-averse and doesn't want anyone to touch the guts of any of the three existing systems. So they hire a C programmer to write a custom program that will talk to the call center's Oracle database to figure out which consumers want to transfer money, then talk to the Informix- and mainframe-based systems to actually move the funds. This was called an application server because it contained the logic necessary to enable the new application, i.e., moving money from checking to credit card.
If the bank were starting from scratch would they build things this way? Of course not! They'd put everything into one big Oracle database on a Unix machine and they wouldn't need application servers. The checking-to-credit transfer could be accomplished with a tiny PL/SQL or Java program running inside Oracle.
The strange thing is that most Web service operators are in exactly the position that all the world's large companies would love to be in. The Web services have all their relevant info collected in one big relational database management system. So they don't have the pernicious systems integration problems that Fortune 500 companies do. Yet apparently at least a few of the technology folks at Web startups are sufficiently confused that they buy willingly into the same kind of computer systems complexity that the Fortune 500 are trying desperately to escape.
I've purchased the dead trees book, and looked at the new one online, and now this page here. Still no mention of Cold Fusion.

I know you have strong opinions, Phil, and you probably have a reason for not considering or mentioning it. But as an application server it seems such a powerful yet elegantly simple environment for building just the kind of apps you so fondly describe. If you've not looked at it recently (4.0 is about to come out), please consider doing so.
Compared to the effort to do the same thing in Kiva/NAS as described above (and perhaps AOLserver), simple things can be rendered amazingly quickly, while complex things can be created quite easily.
Interested readers (and I offer this as one to another, not as a product hype) can find more at www.allaire.com.
-- charlie arehart, July 14, 1998
I may have just found part of an answer to my question, if I extrapolate from a quote in the info on setting up an ecommerce site (http://photo.net/wtr/thebook/ecommerce.html): "We looked at the vPOS system from VeriFone and rejected it immediately because it only works with Microsoft Internet Information Server (we don't know anyone who understands NT well enough to run a Web service from Windows NT)."
Many presume that CF runs only on NT, and would therefore presume it's equally susceptible. Just for info, as of 3.1, it runs on Solaris.
Also, I want to be clear before receiving a response (if one's coming) and state that while I "looked at" the new book online, I did not read it in entirety. I would like to have been able to search it for reference to CF before saying "still no mention of CF".
I appreciate your penchant for thoroughness and accuracy, and if I've missed such a reference, I apologize. But my inclination after reviewing much today (and remembering some emails we traded many months ago) is to conclude that the lean away from CF still seems to persist.
I'm just trying to share this as an alternative for the interested reader. I really have found it to be quite useful, especially as compared to other platforms I've seen. Even seemingly in comparison to the AOLserver approach outlined in the book. That's an opinion, of course.
I only wish that your site reflected a considered observation regarding CF, especially in its later versions which have addressed so many issues that may previously have been found wanting.
I hope others find the comments helpful.
-- charlie arehart, July 14, 1998
Charlie, I don't have a penchant for "thoroughness"! My goal is not to try every possible Web/db tool. My first goal is to keep my million-hits/day personal sites up and running. My second goal is to finish the new rev of my community software package so that a bunch of folks (including me) can use it. My third goal is to complete my consulting projects (i.e., get Web sites built for clients of arsdigita.com).

None of these goals is furthered by spending a few weeks experimenting with Cold Fusion. I don't think it can fundamentally do anything that my current Web tools can't, so I'd rather spend my time data modelling and writing programs.
If those Allaire guys release an Emacs mode for Cold Fusion, maybe I'll have a look :-)
-- Philip Greenspun, July 14, 1998
Thanks for that. I meant the thoroughness penchant with respect (I really did mean I "appreciated" it). Your coverage of the issues near and dear to you is exhaustive and complete. That's what I meant by thorough. I was only sparked to comment on this "application server" discussion page. It seemed a place where you might have indeed taken time to evaluate alternatives. I can appreciate (or rather, understand) if it's more just a log of experience trying a couple of things. Even so, I offer my comments both to you for consideration should you experiment again in the future, and more importantly to readers should they be considering other options.

For that, I *appreciate* the opportunity to have offered these comments. It, like many of your site's features (and you yourself), is a valuable resource. :-)
-- charlie arehart, July 14, 1998
Unfortunately, most of my consulting engagements are with corporations that already have an established architecture and guidelines, etc. These invariably include NT. I entirely agree with Philip's point of view regarding middleware, but it's hard to go simple and elegant on NT. What are some good light web server/scripting language/RDBMS combinations that work with NT (if any)? Anything besides IIS/ASP?

Also, what about Server Side Java? What would be the likelihood of using SSJ to interact with an RDBMS and then output HTML to a browser (once SSJ is ready for the task)?
-- Justin Loeber, August 7, 1998
OK, so I understand why you hate Kiva's server. My colleagues and I have done a whole bunch of work since the good ole days of Perl CGIs to try and find a reasonable server-side programming solution which has RDBMS support.

I will first say that we haven't tried CF or AOLserver, although those sound viable. My main concern with CF is the plug-in architecture - C only, as I understand it (i.e. not Perl or Java). C is so utterly crude and dangerous that I don't touch it if at all possible, so that's a big downer.
We came up with our *own* perl based template solution, which was pretty cool for a while. Then Java servlets came along and swept away all the reasons we had for coding horribly complex ASP and LiveWire stuff. LiveWire at least has a very seamless JavaScript/Java interface so we have some cool stuff that uses JS for presentation and Java for complex, high performance stuff. ASP uses ActiveX which is super horrible, even from Java.
But servlets only solve the problem for high end guys like me, and of course there is the compilation cycle. Not cool for an HTML whacker, and that's where CF and AOLServer seem to have Servlets beat. (I certainly don't think that between TCL and CFML, you can do nearly as much cool stuff as with Java, to be fair to Java.)
Well, it looks like there is finally a "no compilation required, page based programming model " sort of solution that uses Java. It's called JSP, for JavaServer Pages, and the first implementation I have seen is from Live Software (www.livesoftware.com).
Basically it's implemented as a servlet that serves all pages whose filename ends in .jsp, and it looks remarkably like ASP, except for 3 critical differences:
1) It's pure Java, so you can run it on Linux or HP/UX or Solaris or AIX or MacOS, not just NT.

2) The embedded language is Java, so although you may be twitching in horror at having to use a strict language, you have the benefit of a language without giant syntactical BS like VBScript and Java have, plus your knowledge of the "reusable objects" language can help you train your "scripted pages" people, and you can share books, training classes, etc. etc.

3) The interface from scripting-land to reusable-objects-land is practically nonexistent. Since the JSP parser is a servlet, it is written in Java, hence you already have a JRE and a classpath going. Just compile your classes and put them where the servlet engine can find them and you're done. If necessary you can hack up a servlet just to test one object, or you can hack up a command-line Runnable class that can do the same thing - do that with COM. If you like you can make the return value of your Clever_DB class be a Java object - also impossible with COM, unless it too is a COM object, and then you have to do exceptional syntactical stuff to make that work. And in LiveWire you have to account for the difference between JavaScript variables and Java String objects, which is not needed here.
The reason this is not taking over the universe is that, like much of the really exciting Java stuff, it's not really "there" yet. You can get it in a beta version but most definitely it isn't as tried and true as AOLServer is for Mr. Greenspun...
Anyway, it's just something to watch...
-- Jamie Flournoy, August 19, 1998
Today the DOW dropped about 500 points so I thought it might be a good time to get into the market. I went over to E*Trade to open an account and saw URLs like the following:

https://trading.etrade.com/cgi-bin/gx.cgi/Applogic+Home?gxml=hpa_welcome_c_t.html&SOURCE=YAHOO
From the URL, it's apparent that they're using Netscape Application Server (aka Kiva). Given Kiva's well-documented reliability and usability issues, perhaps I'll reconsider trusting E*Trade with the management of my investments...
-- Gideon Glass, September 1, 1998
Actual E*Trade horror story: The company I work for recently switched over to E*Trade's "Optionslink" captive broker program. They handle our stock option and ESPP (employee stock purchase program) shares. The day after our last ESPP purchase, the first under the new regime, hundreds of employees were unable to connect to the Optionslink site. As far as I could tell they were down for the entire day, having crashed early in the morning due to high load. People wanting to sell their shares were forced to discover and use the touchtone telephone access line (which costs more). E*Trade subsequently refunded all fees for trades executed that day.
That was enough to convince me to choose another service to do my personal online trading.
-- Ben Jackson, October 24, 1998
It's a very interesting perspective which you bring to this. The comments on scalability and availability are very apt, and nice to hear in the face of the marketing deluge from the app server companies.

However, there are a few points where I see things differently. I've been using Art Technology Group's Dynamo, and I've been quite happy with it. As with Kiva, it is somewhat complicated to do just about anything. I have found, however, that it makes the hard things significantly more achievable.
One of my main experiences of the dynamic sites I wrote before using Dynamo (in Perl or in PHP), was that I was often solving the same problems over and over. And, just as often, under time pressure, I was solving them badly. Maintaining a user's session, templating (and sub-templating) pieces of HTML, pooling database connections, inserting queues for performance, logging errors and events accurately, etc. With Dynamo, all of these things are supported in very flexible, very well-implemented ways.
So that's something I like: the services provided by a good application server.
If I were just writing bare servlets (given that I do like Java as a server-side programming language), or Java Server Pages, I would have to come up with a lot of that functionality on my own.
Just another perspective...
-- Dan Milstein, November 19, 1998
An interesting perspective. Though I don't share all the statements in the article, one thing is obvious: most of the tasks for which application servers are being advertised could be solved by employing much more cost-effective solutions. Presently, just by using a web server as a simple proxy to so-called application servers, people (1) underutilize their front ends, (2) are unable to take full advantage of HTTP/1.1 features, and (3) end up with almost unacceptable latency. A decent high-performance web server, such as Netscape Enterprise Server or IIS, plus a hardware load-balancing solution such as Cisco Local Director (which one would want to have anyway), would do just fine in many cases.
-- Ruslan Belkin, February 9, 1999
I am currently involved in the evaluation phase of application servers and alternative technical routes that achieve a core goal: "An architecture that supports a component based development framework for reusability and quick development cycles."
One of the choices not mentioned is the use of client-side scripting with server-side XML. I am experimenting with using java servlets to serve XML mappings of database queries directly to a browser. A set of javascript routines facilitate the actual event of hitting the web-server with query parameters then capturing and parsing the XML data.
This way the user interaction is 'seamless', minimal data is transferred, and the servlet is 100% free of any presentation rules.
The drawback is that this is NOT going to work in Lynx. But it achieves separation of data from presentation. It also lets application developers work with a single rapid development tool such as Javascript/HTML and data access component developers work with their tool of choice (for us, Java).
-- Mitch Coopet, February 27, 1999
Hi, you have raved about Oracle database reliability, scalability, etc., but it's still hard to configure and maintain without database tools like those provided by Quest Software (http://www.quests.com/). Why can't Oracle provide a GUI interface for configuring the database that is at least half as decent as SQL Server or MS Access (sic)?
-- Dragon Heart, March 26, 1999
Interesting perspective on app servers!

"Java as a language is mediocre for web applications"

Mr Greenspun's main points are: Java requires a lot of typing (True), and Java is compiled, which adds steps to the development process (True).
Although, as with any reasonably rich language, a skilled programmer will quickly develop worker classes/methods to shorten common tasks - such as string handling.
Also, Java is technically interpreted and capable of dynamically loading classes, so many of the advantages of on-the-fly code changes can be had by designing an application to pick up new extensions dynamically.
Anyway, I wouldn't spend much time debating this-vs-that language. It ranks with the this-vs-that editor debate. Everyone has their preferences and no one will agree once they've chosen their favorite...
"Application server comments - et. al."
Now here are where some compelling statements and examples are given and are worthy of discussion.
"There is no scalability problem"
Of course there is a problem. The trick is to know when it needs a solution. I agree heartily that keeping things simple at the beginning is a key strategy. I also agree that many folks, rather than think of how their applications might scale, simply throw more plumbing at the problem.
In my experience, the largest impact on performance of any application is the, shock, *design* of that application. IOW, the developer is most often their own worst enemy.
So, assuming many shops are throwing additional complexities over an application that basically still "sucks", it will most likely just add misery to the situation.
Assuming that the application has been focused upon, works great and is easy to maintain, it should be a lot clearer that performance issues are most likely related to shortcomings in the delivery platform.
This makes it much easier to pinpoint what needs to be added to the delivery platform to enhance performance. Again, not throwing the kitchen sink at the problem, but incrementally adding things where they fix a particular, identified problem.
"Becoming a prisoner"
Mr. Greenspun points out that using essentially open APIs and object libraries (Servlets, JDBC, Java Pages, open source connection pooling software, etc.) gives you a lot of portability and even reduces your cost of deployment.
I find little to argue with here. In fact, I think this is *the* "dirty little secret" of web development. There's opportunity here. There are already companies using nothing but open source tools, applications and operating systems, to develop turnkey solutions that kick ass, and they are making money - $5-$20 million in income a year.
However, as Mr. Greenspun points out, Fortune 500 companies have their own strange psychology about how to purchase and develop their computing tools and infrastructure. This makes them very nervous about using the above kinds of sources for their solutions. They'd rather have a vendor they can call on the carpet. They'd also like to have vendors with enough cash, i.e. something to lose, to compensate for problems.
I believe this is changing rapidly - that many companies are sitting up and taking notice of what's going on in the open source arena. A few more large success stories and orders of magnitude less cost are going to shift things very quickly - perhaps in the next 2-3 years - in favor of more or less "open" solutions.
Do I think that the current suite of "application server" products are insanely priced and needlessly complex? You bet your ass I do.
If most of the customers I met jumped up and shook my hand for the deal when hearing I could whip up their enterprise-class web solution using nothing but AOLServer and TCL scripts, I'd be more than pleased.
Until that becomes the norm, we're stuck with using Edsels to run the Indianapolis 500...
- Mike O'Shea Puget Sound Systems Group mikeo@pssg.com
-- Michael OShea, April 30, 1999
The address "http//www.meta-html.com/" (Meta-HTML extension to HTML syntax) doesn't exist anymore (02-May-99).
-- Agon Buchholz, May 2, 1999
I always enjoy reading Mr. Greenspun's pragmatic and realistic views of developing web database sites. His thoughtful comments push developers to think. However, in terms of measuring scalability with web applications, I think he is giving the public at large a misconception of what web scalability means with respect to price/performance and application servers. Before I make any comments on web server scalability, let me say that I look at it from the kernel up, as opposed to Mr. Greenspun, who looks at it from the higher level down. And for information purposes, yes, I do work for Sun.

First, one of the major factors that affect application transaction throughput of a server, given your database backend, is whether your RDBMS is correctly thread-modeled to the OS; i.e., Oracle is not a truly threaded RDBMS in the two-level thread model SMP architecture, while Informix is a more truly threaded database. However, one has to match the user-level thread architecture, such as Java or Tcl or PHP3 or Perl, to the underlying OS. In the case of Solaris this is more difficult to accomplish; Java and Perl come the closest. Both must be proficient in matching their thread VM models to the OS. Java 1.2 comes extremely close to this, though commercially it is in its infancy. Perl threading with respect to a two-level thread model SMP architecture is second best, but by no means close. The key in Java would be controlling garbage collection on large image allocation. This is typically not a concern with other web APIs, where the developer has to manage it themselves.
So where does that leave us for a billion web transactions a day? That is about 12K a second, of which static pages can be supplied at 10K a second according to SPECweb on a Solaris 4-CPU Xeon box, and let's grossly say 20 percent of that is transactions. If I can get 3.3K messages a second on a (dual) Xeon box running Solaris with the Volano Java benchmark, I would be concerned only with what effect Java or Perl, Tcl, etc. have on CPU loading in user space. Yes, I firmly expect it to scale (Volano marks) on a quad.
The model of the web server is extremely important in this case. For example, using Apache or any other fork/prefork model would be inappropriate to draw response-time conclusions from in the Solaris SMP model; Netscape or the Sun Webserver 2.x is more closely matched. By this I mean that Java servlets in the Sun Webserver are dropped down through an RPC door call and effectively reduce overall latency by more closely matching the thread model of the overall architecture. Man, if that isn't a mouthful of marbles.
So what's my point, and what is all this gobbledygook getting at? One, according to the web sizing and calculations in Capacity Planning for Web Performance by Daniel Menasce and Virgilio Almeida, Philip is mostly correct about the cost/performance justifications of application servers in general. But the most important part is tracking the latency of the model; at least I think that is more of a concern. God gave us bandwidth; humans cause latency. Always did like that one.
As for price/performance, shop till you drop. And Open Source solutions are worth as much time as you have to fix the bugs and performance issues that you encounter. And one of the best lines I ever heard is: who in the hell told you OO is supposed to perform well?
---Bob palowoda@fiver.net
-- Bob Palowoda, May 4, 1999
The NAS perspective is very helpful to me, as I have just become involved with a Fortune 500 company that is insisting on a Kiva solution. The Netscape NAS information is so loaded with marketing jive that I can't get a lot of useful information out of it. I am hoping to comply with the letter of the requirement and violate its spirit by using AppLogics as minimally as possible, passing request/response strings back and forth to servlets with no intervention of any kind, but, you know, I am afraid I am going to get trashed for the additional overhead, coming as it does with no benefit whatsoever. Thanks for alerting me to this dangerous product.
-- John Burns, May 18, 1999
ColdFusion: This is a viable solution for NT. It scales and performs reasonably well for probably all but the very high-end site. I would highly recommend against using it on Solaris until their next release, supposedly due out this Fall. Reasoning? The code isn't actually ported over from Win32 -- it's running on top of Win32-like libraries (from Bristol) on Solaris, and performance ends up being similar to NT (bang for buck). The folks at Allaire are rewriting the entire code base to make it portable and more modular, and hopefully better performing. (Note: I used to work for Allaire, so I don't know if that matters at all in terms of what that means for this opinion.)

Kiva/App Servers: I have to agree with Philip (sorry, I misspelled it earlier) on this one. They are typically more trouble than they are worth. Again, unless external forces dictate otherwise, stick with tried and true solutions like the ACS stuff, or ColdFusion (sorry... I just know it works well in lots of situations).

N.B.: Philip, I was thinking of writing a CF-mode for Emacs, but just never got around to it. I hear ya. ;-)
-- J E, May 29, 1999
Meta HTML has relocated their website. It can now be found at www.metahtml.com. There are GNU and commercial versions of Meta HTML.
-- Brian Jones, June 15, 1999
Besides Kiva, what other app servers should I be avoiding?
-- Cuong Tran, July 10, 1999
I think this should be compulsory reading for all web developers. I was just trying to work out what kind of systems it would take to support the kind of load I was anticipating. Now I know!
What you say is common sense to anybody; the only point most people don't realise is whether or not the simple idea can hack the load.
Now we know it can.
Thanks very much.
Sam
-- Samuel Liddicott, July 16, 1999
Although I agree with a lot of the comments about KIVA/NAS, I do not feel the same way about application servers in general. I've been working on the web for a relatively long time, and in the past 3 years I've been working with application server technologies.

We've had a lot of success with Sapphire/Web and all of its components. I've seen people that didn't come from a software background (at least no C++/Java) develop functional applications in relatively short periods of time.
A great thing about it is that it allows integration with other systems. Although running an Oracle 8 RDBMS might be the answer for some people, you have to remember that there are many companies that have other or mixed RDBMS systems. With Sapphire/Web we have the ability to talk to multiple databases and middleware services. This has allowed us to access some older systems via the web.
NT is seen as easier to manage than Unix, so it's gaining a greater presence in corporate America.
Java's new work with servlets and JSP is going to be very interesting too.
-- Tom Menegatos, September 19, 1999
AppLogics will be replaced by servlets & JSPs in NAS 4.0 (which might be out by now). This servlet implementation will be built on top of the AppLogic layer, so it will probably be even buggier than before. If anyone is using NAS 4.0, please share your experience.

Personally, I like implementing my server logic in servlets. I build my stuff in JBuilder on Win95 (which offers many nice hand-holding features) and upload the resultant class files to my host. I write a BaseServlet that handles most of the DB connection & user validation crap for a project and then subclass that for the other servlets.
Java may require more lines of code but I find it WAY easier to read/maintain than Perl and other similar scripting languages.
David Nicks
-- Ephram Gonzalez, October 22, 1999
People seem to miss the problem Phil has with NT (one that I share). The problem is not that NT is easy to administer; it's actually not easy to administer _well_. It is easy to administer badly or in a mediocre fashion. The problem is then that if you fail to administer it well, it's not suitable for an 'enterprise' application. In other words, you may have to reboot it nightly, or weekly, or more frequently than every 3 months to a year, which can be a realistic figure for a moderately well or really well administered *x system. When you look at VAX, you get into figures like years and possibly a decade between instances where the server's operating system needs rebooting.

I agree that having to reboot the operating system frequently makes it difficult to run an 'enterprise' application with any degree of professionalism, and I urge others to see if they can figure out the author's stance on reliability, uptime and administrative costs before suggesting web servers, development environments and operating systems that are not proven to be adequately stable. It makes them look like they haven't retained any comprehension from their readings.
-- Malcolm Gin, November 7, 1999
I read Phil's essay on application servers and three-level hierarchies while the Sun Ultra 10 on my desk churned through another test series to get some numbers on what kind of performance hit I can expect from ColdFusion versus stock HTML.

Obviously, the more levels one introduces to the hierarchy, and the more dinking around one does with the data that ultimately winds up being shoved out to the end-user, the more time or performance it'll take.
I arrived at my current job a month ago, eager to CFML-ize everything within reach. But now that I'm running tests (using Jef Poskanzer's http_load), I'm starting to realize that ColdFusion isn't particularly efficient at handling high loads. Simply having the CFserver (running as an NSAPI plugin to Netscape Enterprise Server) *look* at data, even if there's nothing there for it to parse or act on, drops pages-per-second performance incredibly.
Of course, we're not using an Ultra 10 for production webservice - we've got multiple Enterprise 450's for that, which don't even break a sweat serving hundreds of thousands of static pages a day as it is. But reading "solution" examples on Allaire's websites, and noting that some of their other customers are running clusters of up to *40* machines... wow.
Yes, we want database integration. Yes, we want personalization for our users. But no, we're not willing to sacrifice performance or load time. And no, we don't really need a "content management system." Content is great, but without performance, it never makes it to the user.
-- Dan Birchall, December 2, 1999
I feel it germane to mention Zope here. (I don't work for Digital Creations, nor do I have a vested interest in Zope, other than the fact that I use it personally; I simply agree with most of what Phil said on this page and feel that Zope is a nice alternative to AOLserver for people who think similarly.) My reasons:
- Zope is Open Source.
- Zope is written in (mostly) Python, with certain sections optimized in C. Python is an interpreted language, offering many of the same benefits as Tcl for this kind of work: no compile step, and therefore faster development cycles.
- Zope has a nice reporting language called DTML that, while having faults of its own, improves on the ASP/JSP/?SP syntax (IMHO.)
- Zope includes an Open Source transactional, logging (and thus versioned) object database called ZODB, in which most of your web objects are stored.
- If ZODB doesn't fit the task at hand (i.e., you are doing more writes than reads, or the data needs to be accessible from another system), you can easily store your data in Oracle, Sybase (Sybase officially supports Zope!), ODBC via OpenLink, and a number of other RDBMSs.
- RDBMS connections are pooled (cached, actually, along with nearly everything else Zope touches.)
- There is one transaction boundary. If you are in the middle of a commit, have already written some data to the object database, and then a SQL query fails, the entire transaction is rolled back. ZODB and the RDBMS stay in sync.
- Zope includes its own (Open Source) protocol server, called ZServer, based on Sam Rushing's asynchronous Medusa, which serves object data to HTTP, FTP, WebDAV, and (soon) SOAP clients. You can even use Emacs/efs to edit methods on Zope objects via FTP.
- If you don't like ZServer, feel free to use Apache, IIS, etc., through proxy-pass, PCGI, or FastCGI.
- URLs your grandmother can read! Yes, you can bookmark them, too.
- An active, growing community.
-- Jeff Hoffman, February 15, 2000
Philip makes a number of interesting points, but neglects to address one great strength of the Java-based application server market, which includes the likes of WebLogic: available programmers.

There are far more people with Java skills than with Tcl skills. A search of resumes on dice.com turned up 3,990 candidates who claim Java experience, while 84 claim Tcl experience. A search for WebLogic turns up 43 resumes, while AOLserver turns up 1.
Forgive a weak analogy, but it's like buying a Jaguar over a Ford. Which is going to be easier to find a replacement mechanic for? The Ford may be the crappier vehicle, but it's far easier to maintain if you're not the primary mechanic on it. So when you need to extend or fix your AOLserver implementation, who do you call? Some brilliant college kid with no accountability, or Philip's company, ArsDigita?
On second thought, perhaps Philip is even smarter than he appears.
-- D.R. Tong, February 17, 2000
Re: D.R. Tong's comment regarding the availability of Tcl programmers.

If your programmers cannot learn Tcl in a couple of days, then you are hiring the wrong kind of programmers.
-- John Smith, February 18, 2000
I just have to write another comment, following up on my positive spin on Dynamo from a while ago.

After much use of Dynamo, I have changed my mind. I strongly recommend against using Dynamo for just about any project.
It adds a great deal of needless complexity and is difficult to administer and maintain. The development process it forces on you is laborious and frustrating. It scales poorly and crashes often.
There is pretty much nothing I like about Dynamo which I can't get from Apache JServ (basic Java Servlets), along with some open source toolkits: BitMechanic's JDBC Connection Pool, the superb Freemarker HTML templating system, Oroinc's Perl-like regex packages, etc.
As someone said above, I find that putting my logic into Java servlets works very well. With Freemarker, there is very little need to wrestle with Java's painful string-handling syntax, and Java allows complicated things (like input validation) to be handled gracefully. And with JServ, classes are automatically reloaded when they change, so the development process is just one step away from save-and-reload.
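To make that division of labor concrete, here is a rough sketch of the kind of servlet this approach leads to. The ConnectionPool and PageTemplate classes are placeholders standing in for whatever pooling and templating libraries you choose; they are not the actual BitMechanic or Freemarker APIs.

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ForumIndexServlet extends HttpServlet {

    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        Connection conn = null;
        try {
            conn = ConnectionPool.get();     // borrow a connection, don't open one
            List topics = new ArrayList();
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery(
                "select topic_id, title from topics order by posted desc");
            while (rs.next()) {
                Map row = new HashMap();
                row.put("id", rs.getString("topic_id"));
                row.put("title", rs.getString("title"));
                topics.add(row);
            }
            rs.close();
            st.close();
            // The servlet only gathers data; the HTML lives in a template file.
            res.setContentType("text/html");
            PageTemplate.render("forum-index.html", topics, res.getWriter());
        } catch (Exception e) {
            throw new ServletException(e);
        } finally {
            ConnectionPool.release(conn);    // return the connection either way
        }
    }
}
```

The point of the template call at the end is that the markup stays in a file a designer can edit, while the servlet itself remains a page of plain Java.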
-- Dan Milstein, March 17, 2000
A few points from experience:
- I have tried, and am now using, XML and XSL (the Microsoft implementation), and I found that it flies and way outperforms ASP. I would argue that there is hardly anything cleaner than that. In addition, I use ADO and MS SQL Server. I don't know about the Oracle/Java/Tcl configuration, but I comfortably support 10 hits/second on my 266 MHz Pentium laptop running NT.
- If you use an application/web server, you can keep some data in memory and cache it, and scale far better than a pure-RDBMS solution. It really depends on the type of application and data (a rough sketch of this kind of caching follows this list).
- Java is compiled (JIT), and every decent servlet environment can detect a change in a .class or .jsp file and reload it on the fly without restarting.
- Look at some PC Week test results and have second thoughts about application servers and NT. There are also articles about MS SQL Server vs. Oracle.
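On the caching point, the idea is roughly the following. This is a hedged sketch, not any particular product's API; the SimpleCache class and its time-to-live policy are made up for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// A tiny time-to-live cache for read-mostly data, so repeat requests skip the database.
public class SimpleCache {

    private static class Entry {
        final Object value;
        final long loadedAt;
        Entry(Object value, long loadedAt) {
            this.value = value;
            this.loadedAt = loadedAt;
        }
    }

    private final Map entries = new HashMap();
    private final long ttlMillis;

    public SimpleCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Returns the cached value, or null if it is missing or stale.
    public synchronized Object get(String key) {
        Entry e = (Entry) entries.get(key);
        if (e == null || System.currentTimeMillis() - e.loadedAt > ttlMillis) {
            entries.remove(key);
            return null;
        }
        return e.value;
    }

    public synchronized void put(String key, Object value) {
        entries.put(key, new Entry(value, System.currentTimeMillis()));
    }
}
```

A servlet or page script would check the cache first and hit the database only on a miss, which is where the scaling win comes from; it only pays off for data that is read far more often than it changes.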
-- Christo Angelov, March 20, 2000
In reference to D.R. Tong's comment above, I totally agree with Gary Burd's rebuke. I am familiar with both Java and Tcl. Anyone who can't learn to program in Tcl in a few days has no business calling themselves a programmer. I can't say the same thing about Java.
-- Vadim Nasardinov, April 25, 2000
I'd like to note that Phil has done a great job with this site; I've found an incredible amount of information. But I'd like to point out that home.netscape.com itself isn't running AOLserver. As you can see, webcenters.netscape.com is running AOLserver; however, it returns a Location header to take you to home.netscape.com, which is running Netscape's server (or appears to be).

Still, keep up the great work you are doing by providing all of this information for free; I understand how much work it is.
-- Joseph Moore, May 6, 2000
webcenters.netscape.com redirects you back to home.netscape.com if you don't present all the cookies that it wants to see (i.e., you have to be a registered user already to see the AOLserver-backed site).
-- Philip Greenspun, May 6, 2000
D.R. Tong raised the issue of available programmers on February 17, 2000, and Gary Burd responded:
"If your programmers cannot learn Tcl in a couple of days, then you are hiring the wrong kind of programmers."

I think D.R. might be referring to a consulting situation. I've run across this to my great frustration in some of my engagements. For some clients, if they want a solution that uses language X so they can maintain the system in that language after you have left, you either provide for that in the proposal or you don't get the contract. No amount of explanation of the benefits will sway them to adopt language Y. I think the real answer here is to enhance AOLserver to support code other than Tcl, ADP, etc., rather than saying "it's my way or the highway." The data model looks clean to me (I've only been peeking at it for a day, and I ain't some whiz, so take this subjective eval of the model with a boulder of salt), and there is nothing in that model that says "thou shalt use AOLserver/Tcl." A superior data model does not circumscribe the manner in which it is accessed. You are free to use anything else against that model (though you wouldn't derive the benefits of the massed efforts of others behind the AOLserver/Tcl approach, a variation of the issue that D.R. posed).
-- Anthony Yen, May 9, 2000
Concerning the virtues of interpreted scripting languages vs. compiled application logic feeding HTML templates, I thought I might share my experiences with rapid prototyping and debugging on two platforms, one compiled, one interpreted.

Firstly, ASP development. I use HomeSite, which gives me syntax highlighting, tag wizards, context-sensitive help, all that good stuff. But to find syntax errors in my form logic, for instance, I still have to reload the page, read the error message, figure out where the error is (sometimes the line numbers given are helpful, sometimes not), and move to that line in HomeSite before I can start debugging. Maybe there are Microsoft tools that make this faster; I don't know.
Secondly, Delphi development. I've done a fair amount of standard database development and even a fair amount of web development using Delphi. 18 months ago I wrote the beginnings of a pretty good webBBS system using Delphi (currently running at insanity.net.nz - I hope to finally do some more work on it sometime in the next month or three). It runs on a web application server called WebHub, which, although good, is sufficiently obscure that I have doubts about its future, particularly now that Delphi 6 Enterprise has its own (very impressive looking) web application development system (which will probably also be in Kylix later on - that's Delphi for Linux).

I'm getting off track here. The point I want to raise is that when C hackers think "time to compile now?" they often have to make a strategic decision about whether now is the best time for a coffee break. Whereas when I think "did I get the syntax right on that last function call?" I hit compile, because it takes about half a second. And it will take my cursor right to the first error, with a clickable list of the current errors, hints, and warnings in the message dialogue box.
So when I see people saying things like "Python is an interpreted language, offering much of the same benefits as Tcl provides for this kind of work; namely no compilation, and therefore faster development cycles" (from above, concerning Zope), I often wonder what they're talking about, until I remember that most people have never used or perhaps even heard of Borland Delphi.
-- Seth Wagoner, June 22, 2001