
Practical DB2: Database Field-Naming Conventions


The good thing about naming conventions is that there are so many of them, and database field naming is just one of those areas of convention contention.

Put three programmers in a room to define a database and you'll end up with at least four different sets of naming conventions. And it won't be quick, either, no matter how much experience you have. I can't count the number of databases I've helped design over the years, yet I still find myself sitting in a room for hours every time I work on another one. Each database has its own idiosyncrasies, but a few areas are common to nearly all of them. This article will address one of the fundamental issues: field names.

 

Identifying the Business Requirement

Even though it seems like a pretty technical pursuit, you should think like an analyst when creating your programming conventions: Identify the business requirement, establish some success criteria, and then meet that goal. In the case of field-naming conventions, my goal is twofold: Make it easy to get data into my database, and make it easy to get data out. Naming conventions are becoming more important rather than less; increasingly, we're seeing users accessing data through ad hoc query tools, and those conventions make it easier on them as well. Today, I am going to use a very specific example to prove just how important those conventions can be. The example is quite straightforward: I'm going to add an order header record. When creating the record, I'm going to initialize several fields with values from the customer master. And while the example will be very simple, it will show how good naming conventions scale quickly and easily.

 

DDS, DDL, and SQL

Over the years, we've seen more and more integration of SQL and traditional database I/O, but field naming is one area where the two don't always mesh well. IBM has done everything it can to allow the two to live together in harmony, but real consistency requires an attention to detail that our schedules don't always allow. I absolutely believe that the benefits of DDL definition over DDS definition far outweigh any drawbacks, but those benefits aren't completely risk-free. To me, the biggest problem with SQL is that it lends itself to a very ad hoc development environment, and you have to work hard not to get caught up in it. The ability to add a field named EXTRA_INFO_FOR_JANET with a simple SQL statement has some significant ramifications, and I'll address those in a follow-up article. For today, I'm going to focus on DDS environments.
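
To make that concrete, here's the sort of one-liner I mean (the VARCHAR(50) definition is hypothetical; the point is how little ceremony the change requires):

ALTER TABLE ORDHDRPF ADD COLUMN EXTRA_INFO_FOR_JANET VARCHAR(50)

One statement, and the table has a new column that follows no convention at all.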

 

To Refer or Not to Refer

Today's example will use a field reference file, albeit a very abbreviated one. I think field reference files are a critical component of any good database design, and I hope to spend more time on the concept a little later. Here's our reference file, named REFFILPF:

 

R RREFFIL
CUST        6S 0      TEXT('CUSTOMER NUMBER')
NAME       30         TEXT('NAME')
ORNO       10S 0      TEXT('ORDER NUMBER')
ORDTYP      1A        TEXT('ORDER TYPE')
PHONE      15         TEXT('PHONE NUMBER')
ADDR       30         TEXT('ADDRESS')
CITY       25         TEXT('CITY')
STATE       3         TEXT('STATE')
ZIP         9         TEXT('ZIP CODE')
FAX        15         TEXT('FAX NUMBER')
EMAIL      64         TEXT('EMAIL ADDRESS')

 

The field reference file defines your database attributes at a very basic level: customer number, address, phone number. The basic lengths and types are here. Other files then reference those fields in their definitions. Here's our customer master file, CUSMASPF:

 

                           REF(*LIBL/REFFILPF)
R RCUSMAS
CMCUST    R           REFFLD(CUST)
CMNAME    R           REFFLD(NAME)
CMADDR1   R           REFFLD(ADDR)
CMADDR2   R           REFFLD(ADDR)
CMCITY    R           REFFLD(CITY)
CMSTATE   R           REFFLD(STATE)
CMZIP     R           REFFLD(ZIP)
CMPHONE   R           REFFLD(PHONE)
CMFAX     R           REFFLD(FAX)
CMEMAIL   R           REFFLD(EMAIL)

 

You'll probably notice a couple of things right off the bat. First, I use a two-character prefix for every field; the prefix identifies the file. As we'll see later, it's not strictly necessary, and in fact there is a school of thought that eschews prefixes entirely. Personally, I prefer them because they allow old-school programmers to use the files without any chance of field-name collisions. There are nearly 1,000 possible two-character prefixes, so you should have no problem assigning a unique one to each database file. Once you've made that decision, field naming becomes quite simple: the name of the field in the database file is simply the referenced field's name appended to the file's prefix, so CM plus CUST yields CMCUST.
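
As a quick illustration of how the pattern scales, here's a sketch of a hypothetical order detail file (prefix OD); it isn't part of today's example, but it would follow exactly the same rule:

                           REF(*LIBL/REFFILPF)
R RORDDTL
ODORNO    R           REFFLD(ORNO)
ODCUST    R           REFFLD(CUST)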

 

Looking back at the customer master, you'll notice one anomaly: while there is only one address field in the reference file, there are two address fields in the customer master. That's not a problem; we create fields CMADDR1 and CMADDR2, and both refer to the same ADDR field in the field reference file. While we try to keep the field names consistent, exceptions like this are very easy to handle. OK, this example also has an order header, so let's define that next:

 

                           REF(*LIBL/REFFILPF)
R RORDHDR
OHORNO    R           REFFLD(ORNO)
OHORDTYP  R           REFFLD(ORDTYP)
OHCUST    R           REFFLD(CUST)
OHNAME    R           REFFLD(NAME)
OHPHONE   R           REFFLD(PHONE)

 

Look closely and you'll see several fields that refer to the same reference-file fields as their counterparts in the customer master. This is no accident; the design for this particular database calls for the customer name and phone number to be carried in the order header. Perhaps they must be modifiable under certain circumstances, or perhaps they simply need to be there for other processing. Whatever the case, these fields will need to be populated from the corresponding fields in the customer master.

 

And Now for the Programming Magic

Sometimes, the preparation is more dramatic than the payoff, and this is probably one of those cases. The code that I'm going to show you is really very simple.

 

ctl-opt dftactgrp(*no) actgrp(*new);

dcl-f CUSMASPF keyed;
dcl-f ORDHDRPF usage(*output);

dcl-pi *n;
  iCust like(dsORDHDR.CUST);
  iOrno like(dsORDHDR.ORNO);
end-pi;

dcl-ds dsCUSMAS extname('CUSMASPF':*input)
       prefix('':2) qualified;
end-ds;

dcl-ds dsORDHDR extname('ORDHDRPF':*output)
       prefix('':2) qualified inz;
end-ds;

dsORDHDR.ORNO = iOrno;
chain (iCust) CUSMASPF dsCUSMAS;
eval-corr dsORDHDR = dsCUSMAS;
write RORDHDR dsORDHDR;
return;

 

The first line is the control options; nothing special there (although in a production environment, you'd probably use something other than a *NEW activation group). The next two lines define the files: the customer master is input, and the order header is output. The next block defines the parameters: the program receives a customer number and an order number and is supposed to create an order header for that customer. The setup finishes with two data structures suitable for I/O, one used to read data from the customer master and the other to write data to the order header. The trick is the PREFIX('':2) keyword on both data structures, which removes the first two characters of every field name. Now, rather than CMCUST and OHCUST, both data structures simply have the field name CUST. That would normally cause the compiler some grief, so to avoid a collision, I also had to specify QUALIFIED on both. The one difference between the two is the INZ on the dsORDHDR data structure; it sets any numeric fields to zeros and avoids decimal data errors.
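
To visualize what those keywords accomplish, here's roughly the structure the compiler builds for dsCUSMAS. This is a conceptual sketch only; with EXTNAME, the compiler pulls these definitions from the file, so you never code them yourself:

dcl-ds dsCUSMAS qualified;
  CUST  zoned(6:0);   // was CMCUST
  NAME  char(30);     // was CMNAME
  ADDR1 char(30);     // was CMADDR1
  ADDR2 char(30);     // was CMADDR2
  CITY  char(25);     // was CMCITY
  STATE char(3);      // was CMSTATE
  ZIP   char(9);      // was CMZIP
  PHONE char(15);     // was CMPHONE
  FAX   char(15);     // was CMFAX
  EMAIL char(64);     // was CMEMAIL
end-ds;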

 

So now that the setup is all done, the code itself is pretty anticlimactic. I store the order number in the order header and then chain to the customer master. The magic is the EVAL-CORR, which moves every like-named field (here, CUST, NAME, and PHONE) from dsCUSMAS to dsORDHDR. Then I write the record. That's all there is to it.
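
In other words, that single EVAL-CORR is shorthand for assigning every subfield whose name (and a compatible type) appears in both structures, which here works out to exactly three statements:

dsORDHDR.CUST  = dsCUSMAS.CUST;   // customer number
dsORDHDR.NAME  = dsCUSMAS.NAME;   // customer name
dsORDHDR.PHONE = dsCUSMAS.PHONE;  // phone number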

 

Now, you might think that's an awful lot of work to avoid three EVAL statements, and you're right, it is. In this simple situation, the setup probably exceeds the savings. But the payoff comes when I decide I want to add the email address. All I do is add the field OHEMAIL to the ORDHDRPF file and recompile the program. That's it; the move happens automatically. If I need more fields, I just add them and recompile. If I need fields from another file, I just add the file and a corresponding data structure, add the chain, and add another EVAL-CORR. This technique is wonderfully scalable and frankly a lot easier than even SQL.
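
Concretely, that change is a single new line in the order header DDS, following the same pattern as the existing fields:

OHEMAIL   R           REFFLD(EMAIL)

Recompile the program, and EVAL-CORR starts moving EMAIL along with the other fields; no other source changes are needed.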

 

And it all starts from a good, solid set of field-naming conventions! Next, I'll show how to do much of this same work through DDL rather than DDS.

 

Joe Pluta

Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been extending the IBM midrange since the days of the IBM System/3. Joe uses WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. He has written several books, including Developing Web 2.0 Applications with EGL for IBM i, E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSC: Step by Step. Joe performs onsite mentoring and speaks at user groups around the country.


