Friday, August 19, 2005
I'm currently reading First Things First in an attempt to improve my time management skills. As this all seems to be pretty good advice, there's a good chance that at least some of the principles mentioned here will make it into VIC CRM.
I've also run into a nifty website to complement the book. The D*I*Y Planner is a collection of excellent templates to be printed out and used in day planners. The templates are available as PDFs suitable for printing, and in their original OpenOffice.org format.
Finally, I was in need of some decent fonts last night, and stumbled across DaFont.com. Lovely fonts, and mostly free, too!
Sunday, August 14, 2005
I've stopped using the Adobe Reader in favor of Foxit. Why? Because hyperlinks in Adobe Acrobat Reader launch Microsoft Internet Explorer regardless of your default browser settings. That's stupid. If I wanted to use MSIE I wouldn't have changed the settings and I wouldn't have installed Firefox.
Adobe made me angry... angry enough to actually look at what's in the Acrobat Reader directory. It handles FDF forms. Great, I've never actually seen one used. It has a QuickTime plugin. I have never once received or seen a PDF with a movie in it, nor have I ever met anyone who has. It has a WEBBUY API. I don't deal with DRM content, as I have Fair Use rights that are protected under international copyright law. I don't fart around with other peoples' rights, and I insist that they keep their fingers out of mine.
So I went looking for a replacement and found Foxit Reader. Foxit is under 2.5Mb in size and it requires no installation. It fits on my USB data thumb with my other utilities and must-haves. It does what I expect and only what I expect.
I don't do heavy PDF editing. Other than out of curiosity, I've never used any features of Acrobat except the PDF print driver, and that's now handled by CutePDF or direct export from OpenOffice.org. As a result I've never needed to upgrade from version 4.0. So the rest of Acrobat goes in the bit-bucket, too. If it turns out that I develop a need for a PDF editor you can bet that I'll consider the Foxit Editor first.
I didn't do a Linux audit because all of my Linux software is Open Source with one exception: Lotus Domino.
I broke down the software into several categories: FOSS (Free & Open Source Software), Gratis (Freeware), and COTS (Commercial Off-the-Shelf). Shareware is considered COTS, since I pay for that.
Here's a summary of the results:
- FOSS: 20 programs
- Gratis: 32 programs
- COTS: 11 programs
The interesting part is the type of programs that fall into each category. Gratis programs are typically things like file readers, browser plug-ins, and small utilities that are nice to have but not essential. COTS software encompasses those things that I consider to be best in their class (such as FTP Serv-U and Lotus Notes), and also things that I simply must have to do my job (programming languages such as Visual FoxPro).
It was interesting, though, that many of the workhorse programs (OpenOffice.org, Firefox, the GIMP, Open Workbench) are FOSS. Even in the case of the commercial software that I use, Lotus Notes is the only one I've bothered to upgrade recently, and I use that as a platform for writing FOSS. I've had no justification for upgrading anything else. Once you have a text editor, even one as nice as Textpad, do you really need to upgrade it? In my case, no. And though I have Microsoft Office 97, I haven't actually used it in the last several weeks. It's there "just in case."
So I'm finding that my commercial software inventory is losing relevance and simply isn't as competitive as the FOSS alternatives.
If you'd like to see some thoughts about why I use this or that piece of software, you can read the current list. This PDF, incidentally, was generated with OpenOffice.org.
"The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too."
The GNU General Public License is the most common "free" software license in the world today. However, it comes with certain responsibilities that you should understand thoroughly before you make the decision to use it.
Why Do I Need a License?
The user's point of view:
Technically speaking, you don't, so long as you understand that without a license, any software you receive must be treated as if it were a book. This means you can't copy it except as necessary to use it (that is, you can install it on your computer). You can sell it or give it away so long as you don't keep a copy. But assuming you acquired the software legally you don't need any license at all to use it.
Most programs do come with licenses, though, and it's important for you to read them. If the licensing doesn't suit you then you should look at other software.
The programmer's point of view:
You have certain specific copyrights granted to you by Title 17 of the U.S. Code. You don't have to register a program or otherwise do anything to acquire those rights. But if you need to defend them in court you should be able to prove your original authorship. Also, you don't have to provide a license to a user to enable him to use your product. Selling him the product gives him the right to do so. But if you want to grant rights to a user beyond those that he would normally have (i.e. the same as he would have for a book), or if you want to restrict the use of a program, then you need to provide a license.
The specific license we're discussing here (the GPL) isn't a restrictive one; it's designed to grant freedoms to the licensee that he wouldn't normally have, while retaining some very important rights for the author that ensure the continued availability of those freedoms.
Definitions are Important!
One of the first things you notice when reading the GPL is that it defines the meaning of the word "free". Why would they need to do that?
Is free really free?
Merriam-Webster lists 63 entries for the word "free". The primary (adjectival) entry alone lists 15 separate senses of the word, with various nuances of meaning within them. You have to agree that, given a word which can in common usage be interpreted in so many different ways, you're perfectly justified in indicating which of those many definitions you're using. Legally, it may even be your responsibility to do that if you don't want a license to be interpreted according to a meaning chosen by the licensee.
The GPL indicates that it's referring to freedom, not price. But it's not a "free-for-all", either. So the GPL goes on to describe the boundaries of the freedoms extended to you by the license.
What is meant by "distribution"?
Some people suggest that the definition of "distribute" is vague and this somehow makes the GPL "dangerous" to use (or easy to violate). The questions generally follow in this vein:
- Do I have to accept the license in order to copy a GPL'd program from a friend?
- Does the act of giving it to a friend "encumber" me with the responsibility of providing the source code, despite the fact that I may have never even looked at the contents of the CD?
First of all, "distribution" is not vague and it's the same as for software in general. It just means making a copy of the software and providing it to someone else, whether as a gift or for money.
Only a licensee has been granted the right under the GPL to distribute at all. He can do this actively (by copying and giving it away or selling it himself). He can do it passively (by putting it on a website and allowing downloads, or by putting out some free CDs for people to take). He could give you permission to copy it from his machine. In all of the above, he's doing the distributing, and you as the recipient are not yet a licensee.
The act of giving it to a friend means you must give him the source if you've got it, or at least promise in writing to do so. Otherwise you have to tell him where to get it. If you simply pass on the instructions that came with your own copy of the binary then you've satisfactorily discharged this responsibility.
If you just pass on a CD you received, but you haven't accepted the license, then technically you might be infringing a copyright. Big deal, you're not going to jail... when your friend asks for source code (pointing at the license to prove you have to) then you simply refer your friend to whoever it was you got yours from, and Subsection 3c covers you.
What is meant by a "derivative work"?
The GPL refers to "derivative works", but in describing them it covers situations that fall under two separate definitions in Title 17. The first is a "derivative work", which is one created by modifying a copyrighted work. The second is a "collective work", which is one created by including a work with another. This second is the situation that occurs when you link an unmodified GPL'd library to your own work.
People often describe the GPL as "restricting" the distribution of derivative and collective works. This is not literally true. Those are copyrights that the law grants to the author alone; and rather than restricting their use the GPL instead allows their use if certain conditions are met. The GPL itself is not a restrictive document.
It all comes down to price.
The GPL is about freedom, not price. As a matter of fact, the GPL allows you to sell a program for as much as you can get. It also allows you to re-distribute that same program absolutely gratis. The only thing you can't do is sell the executable without including the source code in the sale. This doesn't mean you have to provide the code if someone doesn't want it immediately, but you do have to make it available at no more than the cost of reproduction, and that offer of availability must be guaranteed for at least three years.
Nevertheless, the GPL isn't really free!
The usual argument here is that "the GPL isn't really free because it requires you to 'give away' your own code in return. This is a quid pro quo."
The grant of the right to distribute my code is conditioned on your acceptance of the GPL and its extension to code you have derived from my work and which you distribute with my work.
Why isn't this quid pro quo? Because if I simply license GPL'd code to you I don't require anything from you in return, that's why. You might not be a programmer; you might not have anything to offer me but distribution. Guess what? You get the code anyway. You don't want to distribute the code at all? Guess what? You get the code anyway! Only when you choose to distribute (effectively competing with me, as described above) are the reciprocal terms effective. And that's just fair.
I heard the GPL is a viral license!
Steve Ballmer said so, so it must be true, right? This is the single most common misconception regarding the GPL, and those that encourage it prey upon the limited understanding that people have of copyrights. The first and foremost thing to remember is that in the eyes of the law, a computer program is a literary work. The same rules that apply to books apply to programs, plus a few extra to ensure that your usage rights to the program are the same.
The argument is this: the GPL says that if you create a "derived work" (i.e., modify my program) you have permission to distribute it only if the result is also released under the GPL. So far, so good... it is based on your work, after all. But the GPL also says that if you simply link to my work (even if you don't modify it), then the program that links to mine must also be released under the GPL. How can that be?
Well, the case of a linked program is more akin to a "collective work" than a derived work, even though the GPL doesn't differentiate between them. Now, keep in mind that without permissions granted by the author, you have no rights to reproduce and distribute my work other than fair use. If I as the author do not want my code distributed with software that is not similarly licensed -- even on purely philosophical grounds -- I'm well within my rights to reserve that right by placing a condition in the license. A similar instance in print would be if I conditioned the reproduction of a story "for non-commercial purposes only". I don't even have to give a reason.
Secondly, by linking my library, which has that restriction, you've created a whole which doesn't easily allow the separation of my work from the work you've linked it to. You -- as a result of your own action -- have limited what you can do with your own work. This isn't any different from you attempting to include my story in a bound anthology which you then offer for sale. A judge would issue an injunction to stop the sale of the entire book even though I didn't write anything but a single story having no relation to the other stories in the book. The restriction on the story is not there to infringe the rights of other authors included in the anthology... rather, it's to exercise my right to control how my story is distributed. If you don't like it, leave my story out.
The perception that the GPL has some unique and unusual "viral" nature is therefore completely unsupportable. It's no more than the natural result of the treatment the law has given derivative and collective works since at least 1909. The only thing that makes it "viral" is the technology that has in recent years made publishing so effortless that anyone can do it, thus spreading the license.
As you can see, this isn't a restriction of the GPL, but a right that the author had from the start, and which he reserves unless you meet the conditions required for permission which are stated in the GPL.
But even putting my code on a disk with GPL code obligates me, doesn't it?
This is another common misconception, and the best I can tell you is that those who persist in spreading it have either never read the text for themselves, or they have some interest in deliberately misinforming you. Here's what the GPL says about this:
"In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License." -- The GNU General Public License, Section #2
The GPL does not apply to everything else you write just because it's on the same disk. It does not apply to things that are simply on the same storage medium. It applies only to the GPL'd work itself and to works that you distribute that are derived from it (i.e., which use code from the GPL'd work).
The GPL is designed to steal code!
Quite the opposite: it allows you to retain control over your own work, exactly as allowed by copyright law, and no further. You conceived of your product, you planned it, you spent months creating it and testing it. With a BSD-style license I could pick up the bulk of your work, expend comparatively little effort extending it, and compete with you using your own effort to do so. The GPL allows you to offer that opportunity to me without incurring the liability you'd have to face without such a restriction.
This isn't done to get control of my work... you do it so I can't hijack yours. In making the offer you don't even know whether I'm going to extend the code. This reciprocal agreement is a lot more fair to you than a BSD license is, and in deciding whether to accept I'd have to consider that I'm still receiving the lion's share of the benefit through your initial provision of the base product.
The end user's point of view:
There really aren't any downsides to the GPL for an end-user. If you stick to distributing the software as it was sent to you, then this license pretty much allows you to treat the software as you would if there were no copyrights. Copy it when you want, give it to whom you want, and never feel guilty about it.
The developer's point of view:
There are several points of view for developers when it comes to the GPL. Free Software developers use the GPL so that other people can't appropriate their Free software and make it proprietary. They also use it because it requires people who modify the code to free their modifications as well. As a result Free Software projects are usually organized with the original author acting as a "maintainer", coordinating the inclusion of bug-fixes and modifications that are donated by developers who have used and improved the program.
The second point of view is that of those that sell Free Software. They occupy a niche between those that donate their work gratis and proprietary vendors. Often these developers compete on service, branding, time-to-market, or quality; and release the programs as Free Software so that they can more readily gain market share and benefit from donated patches.
Yet other developers are commercial proprietary developers. These individuals tend to shun GPL'd software entirely because of its "viral" nature. For as long as there have been computers there has been that class of software that is distributed with source code; but usually the source code is limited to the customer who licensed it (often accompanied by a non-disclosure agreement). If you don't want your source code to be distributed to every user, then you should avoid the GPL.
Finally, there are the proprietary developers who do not want to release their code under the GPL, but may wish to benefit from GPL'd libraries. Word to the wise: you can't -- not without releasing your code under the GPL as well. But the Free Software Foundation endorses another license, the "GNU Lesser General Public License", or LGPL, which does allow linking. If you want to use GPL'd software in this fashion, contact the program's author to see if dual licensing is available to you. Section 10 of the GPL actually suggests such negotiations, but most people don't seem to bother to read it. Pity.
The GNU General Public License is an ingenious legal device that uses the fact that current copyright law treats computer software as if it were a literary work to free the source code rather than restrict it. There are really no downsides to using GPL'd programs for end users, and there can be benefits for software developers that don't object to sharing their work freely. But if you're a developer of proprietary programs, you'll want to find another license, both for your own work, and for the libraries that you use in your work.
Recently on Usenet I saw the following question asked about Linux:
At least, Microsoft is in the business of publishing their software. They do it for profit and appear to be profiting handsomely by doing it. How can I be sure that the group of 'volunteer' programmers who wrote this thing won't up and 'get a real job?'
This is certainly a question that vexes some managers who misunderstand the way that Open Source works, and who have a mistaken sense of their business priorities. Faced with this question you need to ask yourself another: "Are you in the habit of making your business decisions based on what is good for Microsoft?" If so, it's a habit you should break. Make decisions based on what is good for your business. Here are some things that I think you'll agree are good for your business:
- Lowering your operating costs. Obviously, Open Source Software (OSS) does this. Even when purchased, you can install the software on multiple machines. And there is no risk at all of a BSA audit, so you can pretty much do away with expensive software license monitoring.
- Spreading your risk. It's common sense that you don't "put all your eggs in one basket." Linux is not a one-vendor OS. If your relationship with an OSS vendor turns sour you can go elsewhere. What happens to your business if your relationship with Microsoft goes sour?
- Surviving the failure of a key vendor. Does it matter if those volunteers get a "real job?" Not really. It's happened before and we hope it keeps happening! With proprietary software, the vendor dies and you are left looking for a replacement which is often not suitable. With OSS the project is simply picked up by others.
- Adopting standards. Linux is built on standards. And what that means for your company is that you're not stuck with proprietary technologies and specialized skillsets. Linux is a UNIX. You can hire anybody conversant with any UNIX and he can sit down and in fairly short order he can be up to speed on your Linux systems. You do not have to go hire some Linux specialist. And the same guy can, with very little training, be as useful on any hardware platform you happen to have. Look at the difference between this and Windows, where Microsoft was ready to de-certify every NT4 administrator on the planet who did not get re-certified on newer versions. How often are you expected to send your people to training? What happens if you don't? Why are the versions THAT different?
Also, keep in mind that volunteerism doesn't mean that the authors of important OSS projects are not compensated. Many OSS projects are subsidized by corporations, organizations, or governments. And some OSS is more professional than you suspect. For example, a key part of IBM's Websphere is Apache. So keep in mind that sometimes when you go out looking for a "professional" solution, the solution handed to you by the professionals either is or is based on OSS. And often that "real job" that the OSS author got is subsidizing work on Linux. Case in point: Linus Torvalds "got a real job" at Transmeta. He still works on the Linux kernel.
Why have companies like IBM and SGI bet their businesses on Open Source Software? Certainly not because they think it's risky and in danger of evaporating.
The Internet was originally a military project, and one of its design considerations was that it be disaster-resistant: with no single point of failure it's very tough to kill. It's designed so that no matter where a bomb drops, the messages are routed around it and they get through. (In reality it can get choked up, but it does recover.) Adding Open Source to your business gives you a similar military-grade competitive advantage. There's no single point of failure for an OSS project. Once it's distributed it's got a life of its own. Which means you have negotiating power. You will never be shaken down for payment in advance for software you may or may not use. You will never be surprised by the terms of a contract you couldn't read until after the sale. You will never be shocked by the terms of a contract that were unilaterally changed by the vendor after the sale. You can easily move to a different vendor if you don't like the one you have. You can even become a vendor if you choose. All of this gives you a level of freedom and control over your business that is simply impossible using commercial software.
It's said that businesspeople are afraid to adopt "free software." Not when they know what "Free" is. Businesspeople do not fear Free Speech, Free Will, or a Free Market Economy. They don't need to fear Free Software, either.
Where was Apache before the dotcom bubble burst? Where is it now? Same place. Where was Linux before? Where is it now? Still there! Where were BIND, and Sendmail, and KDE, and the FSF and project after project after project going back to EMACS and the roots of Free Software? Still there. These projects didn't die for lack of money... because they weren't started for love of money. They were started because the industry had a need, and as long as the need doesn't die neither does Free Software.
Now look at commercial efforts. They whooped it up, had a big ol' IPO... money, money, money just falling from the investors' pockets! How many of them survived? Here's the difference: proprietary efforts die even when there is a need. Even when they're making money they can be bought and "integrated." They can be bought by a competitor simply to be killed so as to remove competition. Or they can simply change their minds and move off in a different direction. And it doesn't matter what you paid for when the support's gone. You're the one left holding the bag with no sourcecode, no leverage, and no recourse. Who wants to bet their business on that?
I developed a rather common-sense methodology which I've been using for a number of years now (it's been documented on this website since October 1999, but it was developed in its original form in 1996). I'm gratified to learn that the approach actually has a name. The December 2001 issue (Vol. 34, No. 12) of Computer magazine (published by the IEEE Computer Society) describes something amazingly similar in a research feature entitled "Function-Class Decomposition: A Hybrid Software Engineering Method." Except for differences in terminology and diagramming (the authors use a new diagramming method, whereas I stick with UML iconography that replaces my earlier use of Data Flow Diagrams and Entity Relationship Diagrams), the technique is the same.
In the simplest possible terms the technique is this: when engineering any system, object-oriented or not, design from the top down; refine and implement from the bottom up.
After the publication of a truncated form of this letter, I was contacted by the authors of the Function-Class Decomposition article asking permission to use it on their website. This led to a gratifying exchange of correspondence which resulted in an invitation for me to join the SABRE Consortium, which I happily accepted.
Subject: Function-Class Decomposition
Date: Fri, 7 Dec 2001 16:11:29 -0500
From: Dave Leigh
To the Editor:
I read with great interest the research feature entitled "Function-Class Decomposition: A Hybrid Software Engineering Method." This is a method that cannot be touted highly enough. For those of us in the field it amounts to prior art, as we've been using it (under a different name) with great success for some years now. As long as I've been designing, whether object-oriented or not, the rule of thumb has been simply, "Design from the top down, refine and implement from the bottom up."
I'd like to point out a couple of areas where we are using techniques I think are useful to FCD, and which enhance the method.
First, I don't believe it's necessary to have an additional type of diagram as described by the authors. For some time now we've been doing top-down design using UML's package diagrams to represent functional modules (FMs). It is useful to define interactions between functional entities (represented by packages), choosing the rough message format and transport mechanism (i.e. "XML via socket") on the way down, and to define the details of the messages (i.e. schema) on the way back up. While it may be true that some UML diagramming tools do not facilitate top-down design in this manner, that is clearly a fault of the tools, not of the UML. It seems to me that while the diagramming method described in the article illustrates the hierarchy of functional modules, it does a poor job of illustrating their interactions. Frankly, in our experience it's at least as important to clearly illustrate the interactions between FMs as it is to illustrate the structural model. In any case, having seen the alternative I prefer the use of UML package diagrams for this purpose. (Of course, I endorse creative use of UML, and often represent UI elements as "classes" during Functional System Design in order to illustrate the operation of a proposed system to the users using UML sequence diagrams.)
Second, I don't think you can sufficiently stress the need to normalize a design after the first pass at it. Redundancies are rarely a problem when a single individual does the design, but on large systems (where FCD shines) it's not uncommon to split the design work up once the high-level FMs are defined. A side effect of this is often redundant classes; so at some stage of your iterative process you need to set aside time to normalize your design, balancing redundancy with efficiency.
Finally, we've found that this approach -- even though it requires additional discipline -- can actually yield significant reduction in the time to design a moderate-to-large sized system over the purely OO iterative approach trumpeted for the past few years. Unfortunately we've been too busy meeting deadlines to objectively quantify the difference; however, here are some anecdotal reasons why you should see a positive benefit.
1. FCD allows you to return to the discipline of coding to a specification, which reduces the uncertainties of "scope creep."
2. Standard OO modeling works well on small projects, and FCD has the effect of breaking a large project into small ones, increasing the efficiency of the standard techniques. Each of these smaller "projects" is easier to estimate than the system-at-large, and easier to assign among developers.
3. In large systems, top-down design reduces the amount of interaction between designers necessary to bring any particular FM to operational status. At any level of iteration the amount of complex interaction should remain the same or less as you move from the bottom back up.
There are other reasons that I won't trouble you with in a letter. The major resulting benefit (in addition to those stated in the article) is that FCD greatly improves the ability of the project manager to estimate development costs of the overall project. Customers love kept promises. They pay for them, and come back to pay for more once you've demonstrated your ability to deliver.
Software Management Consultants, Inc.
I won't reproduce an entire conversation here... for one thing, I have no desire to invade other people's privacy by publishing their words. But here's a response I gave to Jane Huang's observation that my major critique of their method was in regard to notation (she didn't receive the entire text of the above letter until I forwarded it to her).
Quite frankly, I have no problems with your approach at all. The notation isn't central to the method. That's why when [the editors at Computer] cut it down "in the interest of space" I said go ahead and publish, but I asked them to pass the letter on to you. I just assumed that they'd give you the unexpurgated version. Silly me.
In reading your approach, I get a sense of coming home, because this is almost exactly what I do in practice (notation notwithstanding). I found that my last clients, who insisted on rigid conformance with their own methodology, didn't even notice that my development team followed what was basically FCD for the last four years. All they noticed was that we were consistently on or ahead of schedule and our team members were all uncommonly well versed in the overall system.
Of course, this last is simply due to the fact that the team got the overview early, and everything we did was a refinement of a system that we basically "designed" as a group in the first team session! (My P/As joke that I'd jump out of an airplane without a parachute because, "that's just a detail, it's not important yet." ). They're only half right. It probably is my signature quote, but every detail that's postponed is nonessential and is slated for design on the way back up. I like to think of it as "fractal design:" Every revision to the design exposes more detail without ever changing its overall shape.
The important thing to take away here is that FCD gets your team members involved early; gives them a good overview of the system; and maintains the overall structure even as details are added during bottom-up design. Since the bottom-up design has the top-down structure to use as a guide, modules can be designed more quickly as you have a greater understanding of how they will be fit into the final product; and you don't need all the details to get started. In my opinion, adopting FCD is the most effective methodology to couple with extreme programming (XP).
I consult for a living, so I need to keep accurate track of the time I spend on multiple projects. There's a very nifty little KDE utility for exactly this purpose called Personal Time Tracker (filename "karm"). The idea is simple... you list your projects, start the clock, and whenever you switch to a different task you simply click on the task name, and the time accrues. At the end of the day you know exactly how much time you've spent on which project. GNOME has a similar utility (GTimeTracker), and there are others.
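The bookkeeping behind these trackers is simple enough to sketch. Here's a minimal illustration in Python (my own invention for this post; the class and method names are made up, not karm's actual code): clicking a task stops the clock on the old one and starts it on the new one.

```python
import time

class TimeTracker:
    """Minimal task-switching tracker: switching tasks credits the
    elapsed interval to whatever task was previously on the clock."""

    def __init__(self):
        self.totals = {}     # task name -> accumulated seconds
        self.current = None  # task currently on the clock
        self.started = None  # when the current task went on the clock

    def switch_to(self, task, now=None):
        now = time.time() if now is None else now
        if self.current is not None:
            self.totals[self.current] = (
                self.totals.get(self.current, 0) + (now - self.started))
        self.current = task
        self.started = now

    def stop(self, now=None):
        # Accrue the final interval and take everything off the clock.
        self.switch_to(None, now)
        self.current = None
```

At the end of the day, `totals` holds exactly the per-project breakdown these utilities display.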
Very nice, except these run on Linux. I do some work on Linux, but I also work on Windows and a number of other operating systems (including DOS), so I thought I'd take an afternoon and code myself a more portable version of the program for general use. I wanted a few improvements over the Linux programs as well, such as not having to keep the program active in memory all the time, and being able to export to my spreadsheet. Here were my user requirements:
* I wanted it to run on any machine I happened to be around, including DOS clients without recompilation and without multiple versions.
* Since I move around between systems, I wanted to be able to put it on a floppy and carry it with me.
* Since I'd be carrying it on a floppy, it had to "work" even when it wasn't running(!)
* It should automatically save information when shut down.
* I wanted it to be extremely small so it would load fast. My arbitrary goal here was under 25KB.
* It needed to be able to export data to comma separated format so I could import it to my timesheet spreadsheet.
* I'm the only user, so the user interface didn't need to be slick and commercial.
(I might have written it to run under PalmOS, but quite frankly, having used one for a while, I find the synching more cumbersome than the floppy. Shortly after writing mine I actually found a PalmOS equivalent and my prejudice held true.)
Given the requirement to "write once, run anywhere," I immediately thought of Java. However, I ran into a few simple realities. Java Virtual Machines take up a lot of memory. They also take up a lot of disk space, and they take a long time to load. The most surprising reality, though, is that Java isn't available everywhere. For instance, I still support a number of networked MS-DOS and DR-DOS machines with 8MB of RAM, or even 4MB or less. I'll simply point out that there are only two DOS JVMs that I know of, Sun's JavaPC and Transvirtual's Kaffe; both require a huge amount of memory, and neither is free. So Java wasn't a solution for me at all.
Without spending a LOT of time on it, I looked at a number of other alternatives, such as TCL/TK. But when the smoke cleared, I finally chose... DOS.
Why DOS? Basically, because (surprise!) DOS is more portable than Java for my purposes. My 1.0 program, weighing in at under 30K, operates on my DR-DOS powered laptop, on any Windows machine (Win9x or NT) or under any OS that can run a DOS emulator. For instance it runs under Linux's DOSEMU or VMWare, under OS/2, or on a Mac or Amiga with a DOS emulator. Even the most basic DOS emulator seems adequate, and the worst of them takes up less memory and disk space than Java. The choice of compiler was trivial (I used Asic, but there's no lack of efficient compilers for DOS) and I finished the program, from requirements to compilation, in one afternoon.
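In case you're wondering how a tracker can "work" even when it isn't running: the trick is timestamps on disk. On shutdown, write the running task and its start time to a small state file on the floppy; on the next startup, credit the interval that passed while the program was off. Here's a rough sketch of that idea plus the comma-separated export (Python for readability here; the real program is Asic BASIC, and the file names are illustrative):

```python
import csv
import time
from pathlib import Path

STATE = Path("tracker.sta")  # illustrative state file on the floppy

def save_state(task, started):
    # Persist the running task and its start time; elapsed time keeps
    # accruing "on disk" even though the program itself has exited.
    STATE.write_text(f"{task},{started}\n")

def load_state(totals, now=None):
    # On startup, credit the interval that passed while we were away.
    now = time.time() if now is None else now
    if STATE.exists():
        task, started = STATE.read_text().strip().split(",")
        totals[task] = totals.get(task, 0.0) + (now - float(started))
    return totals

def export_csv(totals, path="timesheet.csv"):
    # Comma-separated output, ready to import into a spreadsheet.
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["task", "hours"])
        for task, secs in sorted(totals.items()):
            w.writerow([task, round(secs / 3600, 2)])
```

Nothing about this requires the program to stay resident in memory, which is what lets the whole thing live on a floppy and load in an instant.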
The important part of this exercise has nothing to do with the program I wrote: it has to do with the lessons learned. I began the exercise with the assumption that I'd be coding a Java program. However, the requirements that fueled Java's creation and have been fueling Sun's marketing are the very same requirements that prevented me from using Java. On the other hand, the much maligned and forgotten DOS turned out to be the best choice for this particular job... a simple, surprisingly portable OS for a simple application.
We tend to focus on the cutting edge of technology, with the unspoken assumption that the cutting edge is best. Sometimes that assumption isn't warranted (for instance, I don't foresee the wheel being replaced by magnetic levitation on a large scale in my lifetime, and the basic design of a hammer hasn't changed in 20,000 years). DOS is still alive in embedded systems, where the task to be performed gets better press than the technology behind it, and I'm convinced that its relegation to obsolescence is premature.
Hmm. I wonder what other perfectly good technologies we're casting by the wayside before their time?