If you know me, you might know that I spent a critical part of my career at System Software Associates. Yes, at one time, your humble narrator was the Manager of Architecture at the World's Largest AS/400 Software Company (®, ©, ™). It was often said that a year at SSA was like seven years anywhere else, and that had a lot to do with impossible deadlines against hopeless odds. That didn't stop us, though: We created project plans and Gantt charts and timelines and task lists in which things simply got compressed to the point of ridiculousness. On one particularly silly project, we even had a diagram: The left side was our current status, listing tasks completed (mostly "design" tasks) and so on, while the right side was a list of deliverables with dates handed down from management. In the middle was a big box with the caption "A Miracle Occurs!"
The First Black Boxes
That was the black box of project management, and it was one of the first black boxes I had to deal with. But I was OK with that one, because the project usually got delivered on time by lopping off a few features, working a few 120-hour weeks (true, and trust me, you don't want to know the details, especially the hygiene-related ones), and getting rid of certain unnecessary time-consuming stages like, say, testing. Honestly, at times we felt we were only half kidding when we would say, "It compiles. Ship it!"
But before that was my very first black box, the black box of sales. My first IT job was working for a small entrepreneurial company by the name of Sweeney Computing Corporation. There, I was introduced to one of the biggest black boxes in our industry (still to this day, I might add), the sales black box. Mike Sweeney, the president of the company, would go off to a meeting with the unlikeliest of prospects and would come back beaming from ear to ear. Then he'd tell us something like, "I just closed a deal to connect Omron cash registers to microprocessor-based computers (remember, this is pre-PC) and upload the data to the Series/1." Then he'd throw down a spec sheet for the Omron (translated very poorly from Japanese) and say, "How do we do it?" That was the sales black box: Mike had no fear of selling pure and unvarnished vaporware, because he knew we'd figure out how to do it. I look back fondly on that place; it was a fantastic way to begin my career, because I was never told that something couldn't be done. In fact, I was challenged on a regular basis to make the impossible occur.
Not All Black Boxes Are Alike
I generally don't have problems with black boxes in our industry, at least not the big ones that tend to center on sales and marketing and deadlines and deliverables. Those sorts of black boxes are the things that push the envelope to make us work harder and strive for ever-loftier goals as programmers.
However, the little black boxes that are starting to appear in programming itself bother me. Those are the little boxes that say things like, "Don't worry, this new hardware will work just like the old one." Or "Just use this API; it will do exactly what you want." Or "You have to do it this way; it's the standard." Nowhere is this more evident than in the current rush to SQL. It seems that a small but very vocal segment of the programming population wants to use SQL as a replacement for programming. And this really bothers me.
Declarative or Imperative, That Is the Question
Note the specific phrasing I used: "a replacement for programming." This will get some dander up, I'm sure, but I've spent a long time analyzing my position on this, and I believe I'm a lot more accurate than the SQL advocates might want to admit. Here's why: SQL is probably the only declarative language in widespread business use today. The term "declarative" has a specific meaning in the context of computer programming. The opposing term is "imperative," and the difference is easy to define: Declarative programming tells the computer what you want done, while imperative programming instructs the computer how to do it.
It seems like a simple, almost trivial distinction, doesn't it? But at the base of the matter is an issue so fundamental as to be almost intangible. It's sort of a "forest for the trees" situation (or maybe the elephant in the room that nobody wants to talk about). Let me try to explain it simply. In an imperative language—which includes pretty much every programming language you or I have ever programmed in—we write step-by-step instructions that spell out exactly how the computer should do its work. The computer is completely under our control, and in fact we acknowledge that simple statement with the oft-used acronym GIGO, or Garbage In, Garbage Out. Those four words very succinctly explain that the computer is doing exactly what we tell it to do and that when something bad happens, whether due to bad programming or bad data, the computer is not at fault; our own human hands have caused the breach. In a way, it's the ultimate Programmer's Creed of Responsibility.
Declarative languages, on the other hand, are not so designed. Instead, in a declarative language, one simply tells the computer what one wants, and it is up to the computer to figure out how that's done. SQL is the archetype for this concept: You tell the computer you want columns A, B, and C from table 1 and columns D and E from table 2; you tell it how the tables are related; you tell it which records you want and in what order; and then you hit the "Miracle Occurs" button. You don't know what indexes the engine will access, nor what order records will be read, nor which fields will be compared first and which last. You have basically handed off your database access decisions to the SQL engine.
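The contrast can be sketched in a few lines. In this minimal, self-contained example (the table and column names are made up for illustration), the declarative version hands the whole query to SQLite's engine, which decides on indexes and join order for you, while the imperative version spells out the access path by hand:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, cust INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 99.5), (11, 2, 15.0), (12, 1, 42.0);
""")

# Declarative: say WHAT you want; the engine picks the how.
declarative = conn.execute("""
    SELECT c.name, o.amount
    FROM orders o JOIN customers c ON o.cust = c.id
    WHERE o.amount > 20
    ORDER BY o.amount
""").fetchall()

# Imperative: say HOW to get it -- you choose which table to read
# first, how to match the rows, and when to sort.
names = {cid: name for cid, name in conn.execute("SELECT id, name FROM customers")}
imperative = sorted(
    (names[cust], amount)
    for _, cust, amount in conn.execute("SELECT id, cust, amount FROM orders")
    if amount > 20
)

# Both yield the same rows; only the division of labor differs.
assert declarative == imperative
```

The point isn't that one version is shorter; it's that in the first, every decision about access order has been handed off to someone else's code.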
Many fourth-generation languages and code generators operate much the same way. You point and click, and code is magically generated for users to run. No real programming is involved; these tools are really end-user application design tools.
Not That There's Anything Wrong With That...
So why does that bother me? Isn't it a good thing for the computer to do the work for me? At first glance, it seems like a positive, yet I was never comfortable with the concept. I thought about it for a long time until it finally hit me: The computer isn't doing the work for me. The computer can't do anything for me; it's still the same GIGO machine that I started with. I haven't enlisted some all-powerful Commander Data on a Chip to do my programming for me. What has happened is that I've turned my programming over to someone else.
Because at the end of the day, imperative code is still being executed. There are no magic declarative instructions in the POWER5 machine code. The RUM (Read User's Mind) opcode still has not been implemented. So what I've really done by turning to a declarative language is to stop writing code and instead use someone else's programming. I have thrown up the white flag and stated in no uncertain terms that someone else is better qualified to write my code than I am.
And maybe it's just ego talking, but in many cases this simply isn't correct. No offense, but let's take the case of the guy writing the SQL engine code. We'll call him Bill, just to be friendly. Remember that when Bill writes his code, he has to write it for every possible case. While in some situations I'll save time using Bill's code, those are the simpler cases, the ones in which Bill can generalize and use some standard technique to get the data the fastest.
But Bill doesn't know how my database is laid out. Not really. Bill is like a really good contractor that you just hired. He knows the files and formats because he can look at the tables, but he doesn't know the contents of the data. He doesn't, for example, know that there are no records with a status of COMP and an active flag of Y, because that's a system design point that isn't encoded in the database. And while this knowledge may not affect the query he's working on, it's the kind of information that lets a programmer who knows the system write better code than one who doesn't.
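Here's a toy sketch of that argument (the COMP/Y invariant and the record layout are purely illustrative assumptions, not anything from a real engine): a generic engine has to test every row for the combination, because the design point isn't encoded anywhere it can see, while the programmer who knows the system can answer without reading a single record:

```python
# By system design, a status of 'COMP' never occurs with an active flag
# of 'Y' -- an invariant that lives in the design docs, not the database.
records = [
    {"status": "COMP", "active": "N"},
    {"status": "OPEN", "active": "Y"},
    {"status": "OPEN", "active": "N"},
] * 1000

def generic_engine(rows):
    """What Bill's code must do: scan and test every row, because the
    engine can only see the tables, not the system's design points."""
    return [r for r in rows if r["status"] == "COMP" and r["active"] == "Y"]

def knows_the_system(rows):
    """What the application programmer can do: the design guarantees
    COMP implies not-active, so the result is empty without any I/O."""
    return []

# Same answer, wildly different amounts of work.
assert generic_engine(records) == knows_the_system(records) == []
```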
It gets even worse when you talk about code generators. Database access for the most part boils down to mathematics, and as operating systems keep better track of the statistics of your database, Bill will be able to make better decisions. He won't always make the correct decisions, mind you, but he's got a much better chance than Jen. Jen is the person who wrote the code generator you are using. And while Jen is probably a very good programmer, try to remember the last time you sat down and looked at someone else's code and said, "Wow, that's exactly how I would have done it!" Again, it's obvious that I have an ego the size of Montana, but I still think that I'm a better programmer than Jen, especially if she hasn't been writing ERP applications for 25 years.
So, no offense to Bill and Jen. I love them dearly, and I respect their abilities. I'm just not quite ready to hang up the spikes yet. When Cal Ripken, Jr. broke Lou Gehrig's consecutive games streak in 1995, he decided he still had a few years left in him, and frankly so do I.
Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been working in the field since the late 1970s and has made a career of extending the IBM midrange, starting back in the days of the IBM System/3. Joe has used WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. Joe is also the author of E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSc: Step by Step. You can reach him at firstname.lastname@example.org.