IBM's marketing isn't going to do it. While I admit that I enjoy seeing the IBM logo up on the screen these days, that's about the eServer and Linux, not the iSeries. As far as those guys are concerned, the iSeries should be a Linux machine, and the sooner they get rid of that old OS/400 dog, the better.
This is silly, of course; fads come and go. New architectures leap upon the world stage, promising to be the next great universal cure for all of IT's ills, but when the dust settles, IT isn't all that impressed. The new technology is subsumed, the good points added to the great tool chest that is information technology and the bad points shucked off to the circular file.
At least until lately. Lately, I've watched as the entire focus of development has shifted to what I consider "fad programming." Architectures have been given prominence over solutions. We argue about things like J2EE and extreme programming and .NET, instead of real business issues like total cost of ownership (TCO) and flexibility. A standard answer, given with a totally straight face, has been to rewrite all your business logic in Java. Given the kinder, gentler me that has been writing these columns of late, I cannot actually express my true feelings about that statement; suffice it to say I find such recommendations to be less than optimal.
I think the entire focus is wrong. History has shown us that the proper way to move forward in IT is incrementally, and I'd like to address that in this article. I'd also like to take a guess at why IBM and so many other technology providers are missing the boat so completely on this. And finally, I'd like to paint you a realistic picture of how we can move forward.
The Evolution of the Midrange
The IBM midrange has a long and storied history. I got involved in the late '70s with the IBM System/3, a venerable machine that was wonderfully suited for the burgeoning industry of the "service bureau." With a service bureau, you sent in your data to the service provider. The provider keyed that data into the computer, ran batch updates, and then printed reports of various kinds. The reports came off of incredibly noisy 1403 printers that printed at prodigious rates: hundreds of lines a minute! Since this was an impact printer, the machine was actually hitting thousands of little hammers a second. Once printed, these reports were then given to the night shift to be bursted (in the case of multi-copy forms), boxed, and sent to the client.
The next evolutionary step for the IBM midrange was the introduction of the Communications Control Program, or CCP. This was a rather sophisticated (for its time) online transaction processing (OLTP) system. You could in fact create pretty powerful OLTP applications, although the programming was often quite involved. Typically, these systems ran over leased lines from the service provider to the client, thus bypassing the need for sending paper documents to be keyed in at the service bureau. Also, online inquiries allowed faster access to information than the nightly batch reports.
An Evolutionary Step
It became clear that this concept of OLTP was a good one. However, CCP wasn't an easy way to design programs. Imagine writing an order entry application using the RPG cycle, and you'll get an idea of the kind of design that was required.
The next evolutionary step was the entire S/3x line. These machines supported both an online interface and a batch interface as a native part of the operating system. They were designed for online data processing, and the idea was to use them locally rather than connect to a service bureau. This concept proved to be highly successful. These architectures were closed, with dedicated databases and limited communications ability. But through them, the idea of a small, self-contained IT department came into being.
While all this was going on, a few other things were evolving, and IBM was an innovator in many of these areas. Among the areas where IBM blazed the trail were hard disk technology, reduced instruction set computing (RISC), communications, networking, and personal computing.
The industry also continued apace, with the introduction of languages such as C and Pascal and the explosion of personal computing, including a multitude of hardware and software options, from microprocessors to operating systems, not to mention a little concept called the Internet.
Then Came the AS/400
What is the point of all this? Well, the point is that the history of computing keeps telling us the same thing over and over: IT is about evolution and the survival of the fittest. But history also shows that the battle is not won in standards committees or advisory councils or technology review boards. The competition is waged in the marketplace, and the winner is determined by the end users.
The AS/400 is a treasure trove of examples of this philosophy. When introduced, the AS/400 was an amalgam of IBM-specific solutions. From its operating system to its languages to its hardware, everything said "made by IBM." The machine spoke Bisync and used SNA and Token-Ring. It had a proprietary database and a unique operating system. Even the CPU was custom, with its own unique and complex instruction set.
However, over a span of perhaps 10 years, all of that changed. Ethernet and TCP/IP became the primary mode of communication. The database was opened up to standard SQL processing. C and C++ made their way onto the box, and even the CPU changed to the RISC design that was so successful in IBM's powerful workstations. But each of these was an evolutionary change, one that either coexisted with previous technologies or included them.
Even in things as fundamental as the standard communications protocol, the AS/400 supported side-by-side development, allowing users to wean themselves gradually from one technology to another, without having to race headlong into buying new hardware and software every time a new technological fad presented itself. Instead, users were able to pick and choose, and as the marketplace dictated the survivors, the platform followed suit. For example, you don't see a lot of SAA-compliant applications written in C++, do you?
And yet, some technologies remain in place. Subfiles are still here, because they're one of the most powerful data entry techniques ever devised. And the introduction of the IFS was done in such a way that the traditional flat QSYS file system was integrated directly with the hierarchical file systems required to enable other technologies.
It is this integration that makes the IBM midrange platform so unique. It is not the new technologies by themselves that make the platform so powerful; it is the fact that existing users can take their current applications and add new technologies to them, while still running their businesses. The AS/400 was the pinnacle of component-level integration, and this allowed users to move from green-screen applications running on a Token-Ring working on an EBCDIC relational database to Web-enabled TCP/IP-based systems communicating with the outside world using ASCII stream files processed by C programs.
Systems Integration: The Next Step
The iSeries represents the next step in this evolutionary process. With its ability to utilize a plethora of technologies and to run multiple operating systems on a single box, the iSeries brings the idea of integration to a new level. I call this "systems integration," where multiple disparate operating environments work together. Traditionally, this required multiple boxes, and we called it "distributed computing." But with the iSeries, all these environments converge on a single machine.
The first step was probably the integrated Windows Netfinity machines, but there are other examples today: the PASE environment, QShell and its UNIX-like capabilities, the entire Java integration, and the ability to run Linux partitions side by side with OS/400. The iSeries is truly a multi-purpose machine now, with the ability to run multiple environments.
The iSeries managed to do something that no other platform has been able to do: truly integrate all of the required technologies in today's environment. You may argue that the iSeries isn't exactly a big player in the Microsoft arena, but that's a different issue. Microsoft to me is the exact antithesis of this integration step. Rather than work with open standards, Microsoft creates its own version of each new technology and then ensures that its versions work and play only with one another, not the outside world. In fact, as time marches on, Microsoft and IBM seem to be switching roles: IBM is becoming the great bastion of open standards, while Microsoft is entrenching itself as a highly proprietary, insular environment.
In any event, it's clear that the various technologies available today all provide different capabilities. And were we to follow the lessons of history, the next step would be pretty clear: Design applications that take advantage of the new technologies in conjunction with the existing systems.
Application Integration: Where We Need to Go
I call this next step "application integration," where different application components are written in different languages running in different operating environments, yet are perceived by the end user as a seamless work environment. Such an architecture takes advantage of the strengths of existing technologies such as RPG on OS/400 for business rules processing, uses the browser for flexible user interfaces, and integrates with desktop applications. We need to begin that movement now, while the architecture still remains.
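To make the idea concrete, here is a minimal sketch (in Java, since that is the integration language under discussion) of the seam such an architecture needs: the browser-facing layer codes against an interface, while the production implementation delegates to the existing RPG program. Note that every name here (BusinessRules, CheckCredit, ORDLIB, the credit limit) is hypothetical, invented purely for illustration; a stub implementation stands in for the host call so the sketch is self-contained.

```java
import java.math.BigDecimal;

public class IntegrationSketch {

    // The seam between the new UI layer and the legacy business rules.
    // Servlets and portlets call this interface and never care whether
    // the answer came from RPG, Java, or anything else.
    interface BusinessRules {
        boolean creditOk(String customerId, BigDecimal orderTotal);
    }

    // A production implementation would call the unchanged RPG program
    // on OS/400, for example via the IBM Toolbox for Java:
    //
    //   ProgramCall pc = new ProgramCall(as400System);
    //   pc.setProgram("/QSYS.LIB/ORDLIB.LIB/CHKCREDIT.PGM", parms);
    //   pc.run();
    //
    // The same RPG program keeps serving the green-screen users.

    // Stand-in implementation (hypothetical rule: orders up to 5000 pass)
    // so this sketch compiles and runs without a host connection.
    static class StubRules implements BusinessRules {
        public boolean creditOk(String customerId, BigDecimal orderTotal) {
            return orderTotal.compareTo(new BigDecimal("5000")) <= 0;
        }
    }

    public static void main(String[] args) {
        BusinessRules rules = new StubRules();
        System.out.println(rules.creditOk("C1001", new BigDecimal("1200")));
    }
}
```

The design point is the interface: the business rule stays where it already works, and only the thin adapter knows how to reach it.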
And yet, I still hear the constant refrain that we need to rewrite all of our business logic to something, although what exactly that something is tends to change as the next group of CS majors graduates. For a while it was J2EE, using strict UML OO design techniques. Then there was "program by wire," in which all coding was done in a visual development environment, represented by wires connecting widgets. The latest phenomenon is that of extreme programming. Regardless of your opinions about any of these programming techniques, the scary thing to me is that they seem to be either/or decisions. None of the techniques works very well with the others, and the advocates of each are incredibly zealous.
So Why Is IBM Missing the Boat?
It could be that they've been slipping something into the water in Armonk, but I think the truth is actually a little more mundane than that, although you might suspect I've been drinking that same water when you hear my theory.
I think the problem is that IBM no longer maintains MAPICS.
Over the years, it has been my observation that the most innovative development occurs when new technologies are applied to existing applications. I've also seen many cases of pure technology blinding developers, who then created elegant technical solutions to nonexistent problems. In many cases, the new architectural direction generated a lot of work but no tangible benefit.
Take, for example, a company that decides to rewrite its entire application suite to move from native RPG to Java and SQL. While there may be some perceived benefit of platform-independence, you need to balance the real benefits against the real costs. Among the costs: rewriting every program, retesting every program, and fine-tuning performance. This is simply to get the application back to its original state prior to the conversion. Then, to have real platform-independence, you need to port the solution to a new platform and resolve any issues there. Finally, you need to staff the alternate development environment and put controls in place to manage the multiple development and testing environments. Such a move makes sense only if the benefits of the Java version of the product offset all of the indicated costs, including the delay to adding any new features.
For a small shop, the costs may be offset by savings from moving to less expensive hardware, but given the low TCO of the iSeries as opposed to UNIX and Windows servers, moving off the machine is rarely going to save much money. So unless the rewrite is relatively trivial, there's no good business reason.
Another possible situation is a software development shop, in which increased software sales to other platforms cover the cost of the rewrite. And while this is a simple calculation, our industry has a bad habit of overestimating sales. The truth is that enterprise applications often don't run well on commodity hardware, and even when they do, the customers who buy commodity hardware want to pay commodity prices for their software as well.
So Why Do They Keep Pushing These Architectures?
The $64,000 question is, "Why keep pushing architectural decisions that don't make business sense?" To my way of thinking, it's because there's no application team telling the architecture team that the emperor has no clothes. I can think of half a dozen places where decisions have been made that wouldn't have been made if an application like MAPICS had been part of the process. I believe that a team of application designers with real deadlines to meet would have stopped the ivory tower architecture decisions dead in their tracks.
Is this a bad thing? To the new generation of developers, it is. I think that's because these developers have grown up in the world of open source, where APIs change at the whim of the developer and backward compatibility is a foreign concept. As far as I can tell, few of the architects in this generation of developers have any respect for the concept of legacy systems, and in fact, most would rather see everything rewritten from scratch (and rewritten over and over, according to the extreme programming mantra).
Is There an Answer?
There is an answer to this problem, and it's a killer app. Note the term "app," as in application. In order for this to work, it can't just be another technology preview without any real substance. In the best of all worlds, it would be a working application that IBM would use on a daily basis and make available to its customers via the Web.
And the Winning Architecture Is...
I would implement this via portal technology. While I'm not thrilled with the current pricing model for iSeries Portal Express, I remain convinced that portal technology, specifically portlets combined with Web-enabling, is the answer. The killer application would combine simple green-screen development and the related Web-enablement, pure Web-application design, and client/server architecture, and it would show how the three worlds can work together. The first part of the application would be a traditional green-screen application, Web-enabled to run in a portlet. The second part would be a pure J2EE application (JDBC and servlet technology) that interacts with the "legacy" application. The third would be a thick-client piece.
If IBM developers were to do this, they might use the iPTF process as a candidate application. The core application would perform standard green-screen maintenance of and inquiry into the PTF database. The "order entry" portion, which handles the creation of iPTF orders, would run in this green-screen piece. Second would be a high-powered search engine, written from the ground up as a Java servlet using JDBC. This would allow all manner of scanning for various problem reports. The last piece would be a customized thick-client application, running on either a workstation or the iSeries, that would coordinate download and application of the fixes.
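As a sketch of what the search-engine piece might look like, the fragment below builds the parameterized SQL that a servlet would hand to a JDBC PreparedStatement, one LIKE predicate per keyword. The table and column names (PTFMASTER, PTFID, DESCRIPTION) are invented for illustration; the real iPTF database layout is not public. Placeholders (?) rather than concatenated values keep the query safe from SQL injection.

```java
import java.util.Arrays;
import java.util.List;

public class PtfSearch {

    // Build a parameterized SELECT: one "UPPER(DESCRIPTION) LIKE ?"
    // predicate per search keyword, ANDed together.
    public static String buildSearchSql(List<String> keywords) {
        StringBuilder sql = new StringBuilder(
            "SELECT PTFID, DESCRIPTION FROM PTFMASTER");
        for (int i = 0; i < keywords.size(); i++) {
            sql.append(i == 0 ? " WHERE " : " AND ");
            sql.append("UPPER(DESCRIPTION) LIKE ?");
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        String sql = buildSearchSql(Arrays.asList("IPL", "SAVE"));
        System.out.println(sql);
        // In the servlet, the JDBC side would bind each keyword and run:
        //   PreparedStatement ps = conn.prepareStatement(sql);
        //   ps.setString(n, "%" + keyword.toUpperCase() + "%");
        //   ResultSet rs = ps.executeQuery();
    }
}
```

The query-building logic is kept separate from the JDBC plumbing so it can be tested without a live database, which is also roughly how the servlet layer would want to be structured.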
I admit that it's not as robust as MAPICS, but I find it difficult to imagine IBM getting back into the application game anytime soon. This iPTF concept, or an application of similar complexity, might be just enough to push the envelope of the portal development tools, thus making sure they stay on track, while at the same time presenting an interface that outside users can actually touch and feel and showing off the capabilities of the machine and its tools.
What if They Don't Do It?
The problem is that I doubt IBM has the time or desire to do it. And if they don't, we have to do it. Unless there's money to be made, it's going to be hard to get people to spend a lot of time on this project. I'll see what I can do about creating another open-source project, but unfortunately the price of Portal Express is still prohibitive for a non-commercial endeavor. Maybe we have to start with a non-commercial portal software solution.
Please, get into the discussion for this article, and let me know if you think this can be done. Let me know if you think IBM should do this or we should and if you would be willing to participate. I'm concerned that if we don't start now, it will indeed be too late, and soon everything will be running on Windows or Linux, and that great wealth of RPG business logic that currently exists will be lost forever, as will the jobs of those who maintain and enhance those systems.