Use a Logical File Layer to Minimize Recompiles from Field Additions


When you add a field to a file, you do not have to recompile all programs that access the data.

By Sam Lennon

Traditionally, if you add a field to a physical file, you have to recompile all programs that use that physical file definition. Generally, you also need to recompile all programs that use logical files over the physical. But with some planning, you can avoid many of these recompiles, thereby shortening and simplifying the installation. And without level checks.

 

Perhaps you're in the position where you need to add a field to a heavily used file, one that has been around for years. You know many programs reference this file. You could create a new file with the same key to hold just the new data, but this is a high-activity file, and you know that, for performance, the right thing to do is to add the field to the existing file. So you bite the bullet, crank up your favorite cross-reference tool (if you have one), and are horrified to find that on top of the 13 programs you need to modify to use the new field, and the three you need to write, you are also going to have to recompile 317 programs and re-create 64 queries. There may even be some non-iSeries uses that you don't know about. (While these numbers are fictional, I have experienced such a project.)
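If you don't have a cross-reference tool, the system can give you a first cut. This is a sketch only, and the library and outfile names are illustrative:

DSPPGMREF PGM(MYLIB/*ALL) OUTPUT(*OUTFILE) OUTFILE(QTEMP/PGMREFS)

Query the outfile for records that reference your file to build a list of affected programs. This won't catch queries or non-iSeries uses, so some manual research is still needed.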

 

Even if you have a smart change-management system that will automatically recompile the programs for you, the installation will take longer and have a higher level of risk. IT management, correctly, likes to minimize risk.

 

If you are in a situation like this, which may not be of your making, it is because the programs and queries are closely dependent on the physical data layout. To solve the problem, or to avoid it in new applications, the view of the data the program sees needs to be divorced from the physical file layout.

 

Logical files, used correctly, can add a layer of abstraction that helps minimize recompiles. Once in place, a logical file layer means that when you add a new field, the only existing programs or queries that need to be touched are those that will actually use the new field.

Why Are Recompiles Needed?

Everyone knows the answer: to avoid level checks.

 

Each record format in an externally described file has a format level identifier. When an RPG or COBOL program uses a format, the compiler saves the current format level identifier in the program. When the program runs and the file is opened, the system compares the file's current format level identifier with the one saved at compile time; if they differ, it sends a CPF4131 escape message (the dreaded level check) to the program. Similar processing occurs with Query/400 query definitions. You can suppress runtime level checking and thus avoid level checks, but doing so is generally considered a bad practice, so the fix for a level check is to recompile the program or to open and re-save the query definition.
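(For completeness: level checking is suppressed with LVLCHK(*NO), either on the file itself or with an override like the one below. As noted, this is generally a bad idea, because it silences a safety check instead of fixing the mismatch.)

OVRDBF FILE(ORDLIN01) LVLCHK(*NO)  /* suppresses level checking; not recommended */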

The Crux of the Problem

The way we create logical files is the main reason we need to re-create programs and query definitions. When coding the DDS for a logical file, all the fields of the physical file are automatically included if you don't code any field names. This is the quickest and easiest way to define a logical file. So if your physical file is named ORDLINP and is coded like this...

 

A               R ORDLINF
A                 ORDNUM         7P 0
A                 ORDLIN         3P 0
A                 SKU           11P 0
A                 QTYORD         5P 0
A                 QTYRCV         5P 0
A                 LASTRCV         L

 

...then in most cases, the logical file, ORDLIN01, is coded like this:

 

A                                           UNIQUE
A               R ORDLINF                   PFILE(ORDLINP)
A               K ORDNUM
A               K ORDLIN
 

All the fields of ORDLINP are copied into ORDLIN01, and the two files are tightly coupled. If you add a field to ORDLINP, you have to re-create the physical file, which you can do in one of two ways:

  • By deleting ORDLIN01, saving the ORDLINP data, recompiling ORDLINP, copying the data back, and finally re-creating ORDLIN01
  • By using CHGPF and specifying the new DDS, as shown in the example below
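For example, assuming the DDS source for ORDLINP is in member ORDLINP of source file QDDSSRC in MYLIB (illustrative names), the CHGPF route looks like this. CHGPF preserves the data and re-creates dependent logical files for you:

CHGPF FILE(MYLIB/ORDLINP) SRCFILE(MYLIB/QDDSSRC) SRCMBR(ORDLINP)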

 

Either way, ORDLIN01 gets a new format level identifier, so you must re-create all programs or query definitions that reference ORDLIN01 as well as those that reference ORDLINP.

A Better Way

Minimizing recompiles and object re-creation depends on a few rules that require a small amount of up-front work and a significant commitment to follow them. Here they are, with the reasoning behind each.

 

Rule #1: When you code a logical file, always explicitly list the fields. This is the little bit of extra work that is needed up front and is the most tempting rule to ignore. ORDLIN01 would be correctly coded like this:

 

A                                           UNIQUE
A               R ORDLINF                   PFILE(ORDLINP)
A                 ORDNUM
A                 ORDLIN
A                 SKU
A                 QTYORD
A                 QTYRCV
A                 LASTRCV
A               K ORDNUM
A               K ORDLIN

 

 

Now if we add a field to the physical file, we have the option of not adding that field to existing logicals. Unchanged logicals will not get a new format level identifier and thus won't cause level checks.

 

Rule #2: Do not reference the physical file anywhere. Instead, always use a logical, including in applications that write new records to the file. This ensures that no application is dependent on the physical file definition. In fact, you can add a field to the physical without re-creating any objects. (Kent Milligan of IBM notes one exception: "The one case that is not recommended is on SQL statements since that will force the usage of CQE (Classic Query Engine) instead of SQE. In that case, the SQL statement should reference the PF directly or they should consider creating an SQL view.")
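Following that advice, here is a minimal sketch of an SQL view over the physical (the view name ORDLINV and the field list are illustrative):

CREATE VIEW ORDLINV AS
SELECT ORDNUM, ORDLIN, SKU, QTYORD, QTYRCV, LASTRCV
FROM ORDLINP

Like a logical file with an explicit field list, a view's column list is fixed when the view is created, so adding a field to ORDLINP later does not disturb it.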

 

Rule #3: When using embedded SQL, do not use the "SELECT * FROM ..." construct, where "*" means all fields. Instead, specify the fields you need explicitly, even if you are using all the fields. This will make the application not only independent of field additions to the physical file, but also independent of field additions to the logical file (unless, of course, the application needs the new field; in that case, you will have to re-create the application anyway). Selecting only the fields you need is also more efficient.
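For example, rather than coding (the host variable name is illustrative):

SELECT * FROM ORDLINP WHERE ORDNUM = :ORDNO

...list exactly the fields the program uses:

SELECT ORDNUM, ORDLIN, SKU, QTYORD FROM ORDLINP WHERE ORDNUM = :ORDNO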

 

Rule #4: Do not code any keys on the physical file. This encourages enforcement of rule #2, since traditional record-level I/O nearly always uses a key. (Note that SQL will happily accept the physical file, keyed or not. This is not a problem, but it will likely generate "noise" in your research and your cross-reference tool.)

 

Rule #5: When you add a new field to the physical file, consider specifying a default value for new date, time, and timestamp fields. Remember: all existing programs that add records will be doing so through a logical (rule #2) and will not reference your new field, so the default value will be used. Unless coded otherwise in the DDS, these fields default to the current date and time, which is likely to cause confusion. Ideally, you would specify DFT(*NULL), but this might not play well with older code, so something like DFT('0001-01-01') for date fields, DFT('00.00.00') for time fields, or DFT('0001-01-01-00.00.00.000000') for timestamp fields may be more suitable.
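In DDS, those defaults might look like this (DELAYDTE comes from the example below; the time and timestamp fields are invented for illustration; L, T, and Z are the date, time, and timestamp data types):

A                 DELAYDTE        L         DFT('0001-01-01')
A                 DELAYTIM        T         DFT('00.00.00')
A                 DELAYTSP        Z         DFT('0001-01-01-00.00.00.000000')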

Example: Logical File Layer Exists

Let's see how it works. Suppose the ORDLINP file (DDS above) exists and was created about 15 years ago. A user department manager has asked that, when an order is delayed, we keep a status of the reason it was delayed and the date the delay occurred. The delay status is to be a single-character field, where a blank means no delay has occurred and anything else is a delay reason code. The support manager signs off on the change request since it looks easy. (Bear in mind this is a contrived file that would have many more fields in real life.)

 

You are assigned to make the change, and you find there are 330 programs and 64 query definitions that reference the file. Re-creating nearly 400 objects could make for a long and risky install. There are 23 logicals over ORDLINP, but fortunately, the logical file layer is in place and no objects directly reference the physical file.

 

The new ORDLINP will look like this, with fields DELAYSTS and DELAYDTE added at the end:

 

A               R ORDLINF
A                 ORDNUM         7P 0
A                 ORDLIN         3P 0
A                 SKU           11P 0
A                 QTYORD         5P 0
A                 QTYRCV         5P 0
A                 LASTRCV         L
A                 DELAYSTS       1
A                 DELAYDTE        L         DFT('0001-01-01')

 

 

After your analysis, you know you need to write two new programs to maintain the two new delay fields. Both will access the data using SKU and LASTRCV as the key. There are two existing logicals with these fields as the key. You also need to change three existing programs, all of which access the file by ORDNUM and ORDLIN, and there are several logicals with this key.

 

This means that just two logical files will need to contain the new fields. The trick is making the decision: do you want to change an existing logical or add another logical?

 

Consider the two new programs first. Suppose ORDLIN02 has the correct key (SKU and LASTRCV) for the new programs, and 17 programs and 5 queries already use it. ORDLIN05 also has the right key plus a couple of additional non-key fields, and 45 programs and 8 queries use it. You have several choices:

  • Add the new fields to ORDLIN02 and re-create the 17 programs and 5 queries that use it. This seems reasonable if ORDLIN02 already has all the fields the two maintenance programs need explicitly coded in the DDS. You will also need to make sure there are no collisions between the new field names and existing variable names in the 17 programs you will have to recompile.
  • ORDLIN05 may be newer and have more fields explicitly coded, so it might be a better choice to change; the downside is the re-creation of the 45 programs and 8 queries.
  • Create another logical, ORDLIN24, with the needed fields and keyed by SKU and LASTRCV (see the sketch after this list). The downside is that there will be another object on the system, but there should be no space or runtime overhead, because the new logical will share the access path of ORDLIN02 or ORDLIN05. The upside is that you won't have to change an existing logical and re-create its dependent objects.
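A minimal sketch of ORDLIN24 (assuming the new programs need the order identification, receipt, and delay fields; adjust the field list to what they actually use):

A               R ORDLINF                   PFILE(ORDLINP)
A                 ORDNUM
A                 ORDLIN
A                 SKU
A                 QTYRCV
A                 LASTRCV
A                 DELAYSTS
A                 DELAYDTE
A               K SKU
A               K LASTRCV

Because the key matches ORDLIN02 and ORDLIN05, the new logical should share an existing access path rather than build its own.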

 

The third choice is the quickest install with the least risk.

 

You will need to go through a similar exercise for the three existing programs that need to be changed. Find which logicals they use, see how many other programs and queries use those logicals, and weigh the cost, risk, and effort of recompiling those programs against creating another logical.

 

Whichever choice you make, it will be a quicker and safer installation than having to re-create almost 400 objects.

Example: No Logical File Layer Exists

What do you do if no logical file layer exists and you have to make the same change as in the previous example, but you want a shorter, less risky install than one that involves nearly 400 objects? The good news is that you can do it incrementally and have a logical file layer in place when you're finished. I am a fan of incremental installs, where possible.

 

The first step is to get rid of all references to the physical file. Create a logical that explicitly defines all the fields in the physical, with the same keys if the physical is keyed. Then change all the references to the physical to use the new logical. You can do this one object at a time or in groups sized to your comfort level.
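For the ORDLINP example, such a logical (ORDLIN00 is an illustrative name) would simply list every field; since this physical has no keys, there are no K specs:

A               R ORDLINF                   PFILE(ORDLINP)
A                 ORDNUM
A                 ORDLIN
A                 SKU
A                 QTYORD
A                 QTYRCV
A                 LASTRCV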

 

Next, one at a time, change each logical to explicitly define all of its fields. This coding is simple, largely cut and paste. The format level identifier should not change, and you can easily check this using the DSPFD command. Existing programs and queries should not notice that the file has changed, but run a simple regression test if you're nervous.
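A quick before-and-after check (the library name is illustrative):

DSPFD FILE(MYLIB/ORDLIN01) TYPE(*RCDFMT)

The Format Level Identifier value should be identical for the old and new versions of the logical.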

 

Alternatively, you can make the change to each logical even more granular. One logical at a time, create a new logical with explicitly defined fields and the same keys. The new logical should share the access path of the original, so there should be no system overhead. Then, one by one, change dependent programs and queries to use the new logical. After some time in production, the old logical should show no use and can be deleted. Be aware that when you do the delete, an index build will probably be triggered on the new logical, which up to that point had been sharing the access path of the old logical.
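Object usage data gives a reasonable (though not infallible) way to confirm the old logical is dormant before you delete it:

DSPOBJD OBJ(MYLIB/ORDLIN01) OBJTYPE(*FILE) DETAIL(*FULL)

Check the last-used date and days-used count before doing the delete.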

 

Now you have a logical file layer in place and can proceed accordingly.

Conclusion

By making all references to your data through logical files, you can add fields to your physical database without having to re-create programs or queries. This makes for a shorter install that is less risky. Remember, management likes to minimize risk.

Notes

PF-38 or LF-38 Files: Be cautious if you have PF-38 or LF-38 files that were created a long time ago. There was a period when the logic to create the format level identifier on System/38 files was incorrect. Simply re-creating such a file with the same DDS today generates a new, correct format level identifier value. A colleague and I identified this issue with IBM several years ago, but there is no fix, and there probably can't be, since the old IBM code had a bug. We had no choice but to re-create the programs that used the file.

 

If you have any doubts, use DSPFD to check the format level identifier in the old and new versions of the file. You are looking for the Format Level Identifier column, shown here:

 

Record Format List
                       Record  Format Level
 Format       Fields   Length  Identifier
 ORDLINF           6       28  2B1E6E5BB3280

 

Alternatively, you can run DSPFD to an outfile, like this:

 

DSPFD FILE(yourfile) TYPE(*RCDFMT) OUTPUT(*OUTFILE) OUTFILE(QTEMP/somefile)

 

You will find the format level identifier in field RFID.
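You can then list it with SQL, for example:

SELECT RFID FROM QTEMP/somefile

(If the file has multiple formats, also select the format name field; the model file QAFDRFMT in QSYS documents the outfile's layout.)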

 

Shared Access Paths: When talking about an additional logical file sharing an access path with an existing logical, I have used the word "should." There are occasional situations where access path sharing may not take place. Run DSPFD on the new logical; if it is shared, you should see output like this:

 

Implicit access path sharing  . . . . . :            Yes
  Access path journaled . . . . . . . . :            No
Number of unique partial key values . . :
  Key field 1 . . . . . . . . . . . . . :                          2
  Key fields 1 - 2  . . . . . . . . . . :                          7
File owning access path . . . . . . . . :            LENNONS1/ORDLINL

 

If it isn't shared, consult the IBM documentation on "Using existing access paths" in the V5R3 Information Center.

 

 

Sam Lennon

Sam Lennon is an analyst, developer, consultant, and IBM i geek. He started his programming career in 360 assembly language on IBM mainframes but moved to the AS/400 platform in 1991 and has been an AS/400/iSeries/i5/IBM i advocate ever since.
