
Getting Gold out of Journal Records


Journaling is a facility that extends the AS/400's recovery capabilities beyond what can be obtained through backup/restore commands. What most people don't realize is that journaling is also a treasure chest of information about your system, users, files, and applications.

By accessing journal records directly, you'll be able to use them in myriad ways, many more than you could with just the system-supplied Apply Journaled Changes (APYJRNCHG) and Remove Journaled Changes (RMVJRNCHG) commands.

If you're journaling files presently and have never "played" with the journals, use the Display Journal (DSPJRN) command online and see what they look like. If you have a journal named JOURNAL, here's what the DSPJRN command might look like:

DSPJRN JRN(JOURNAL)

Figure 1 contains an example of the screen you may get. Reading from the left, you'll see sequence numbers, journal codes, and then entry types. Figures 2 and 3 contain the valid journal codes and entry types and also indicate which entry types go with which codes.

The heart of accessing journal entries programmatically is the Display Journal (DSPJRN) command. It's
also not a bad command to help you get familiar with journal entries if you are new to the subject.
Prompt DSPJRN from your AS/400 workstation, and page down to the last page of the command. You'll see
the parameter Output with a default value of asterisk (*). You can replace this value with *PRINT or
*OUTFILE to send its output to a printer or database file.

When you are ready to build a file to process with a program, specify *OUTFILE and press Enter. The system will give you another set of parameters in which you name the output file to receive the data and some attributes for it. Figure 4 shows the procedure for specifying an outfile (the changed lines are marked in the figure).

You can request different levels of journal information through the Output format parameter. The default, TYPE1, gives the smallest amount of information. The other values, TYPE2 and TYPE3, each give more information. Personally, everything I need to know is in the TYPE1 format, and that's what I'll focus on in this article. The other formats' fields are listed in Appendix 1.2.3 of the OS/400 Backup and Recovery-Advanced V3R7 manual.

The important field in Figure 4 is Field data format, the first field of the parameter Entry data
length (ENTDTALEN). Notice I put in *CALC.

Now, back up a minute, and I'll give you a couple of rules I always use. One, for consistency, I always send my output from the DSPJRN command to the same-named output file, MDJRNFLE. Two, before running DSPJRN and outputting another bunch of journal entries to it, I delete the file if it is still hanging around the system from the last time I wrote to it. Here is why: Most AS/400 database files are fixed length. Journal records themselves are variable length. The DSPJRN command, using the Outfile parameter, creates a file that is part fixed, part variable.

Let me explain that. The journal entry is a record. It has a fixed set of fields followed by a variable field that can contain a one-character indicator, a system message of any length, or the image of the record it is reporting as changed. The record size within the output file where the journal entries are put is always "fixed" at the longest journal record length. The result is really a fixed-record file. So every time the output file is created, it can be created with a different length.

The journal entry's leading fields (refer to Figure 5) are fixed; the last field (JOESD) is the one that has a variable length.

Getting back to Field data format on the DSPJRN command: Its default is *OUTFILFMT, which gives you a
fixed record of 256 bytes. The database records (the before and after images) kept within the journal
records will be truncated to 138 bytes. If your records are longer than that (most files are), you'll
lose some data. It's better to use the *CALC value for the Field data format parameter; the resulting
file will be large enough to hold the largest records.

If you run the DSPJRN command against a file that already exists, even specifying *CALC, you may end up with a truncation problem anyway. So delete the file you intend to name as the DSPJRN's outfile before issuing the command, and you'll be OK.

At this point, you have a database file containing journal entries that you can read and do something
with. To summarize the process:
o Delete the outfile if it exists. (I reuse the same name. It simplifies DASD maintenance for me.)

o Run the DSPJRN command. Prompt it, and use as many qualifiers as you can on the second screen to
limit the selection. On the last screen, specify the output as *OUTFILE, press Enter, specify the name
of the output file, and specify its length as *CALC.
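
In CL, those two steps come out something like this minimal sketch. MDJRNFLE is my outfile name; MYLIB and JOURNAL stand in for your own library and journal, and JRNCDE((R)) limits the output to record-level entries (add more selection parameters as your volume demands):

DLTF       FILE(MYLIB/MDJRNFLE)           /* Drop last run's outfile */
MONMSG     MSGID(CPF2105)                 /* Ignore "not found" the first time */
DSPJRN     JRN(MYLIB/JOURNAL) JRNCDE((R)) +
             OUTPUT(*OUTFILE) OUTFILFMT(*TYPE1) +
             OUTFILE(MYLIB/MDJRNFLE) ENTDTALEN(*CALC)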

After those two steps, you have a database file of journal entries you can access with a query or a
high-level language (HLL) program.
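
For example, from interactive SQL (STRSQL), a query like this lists the record-level activity in sequence. The field names come from the TYPE1 outfile format, and MYLIB/MDJRNFLE is the outfile built above:

SELECT JOSEQN, JOENTT, JOJOB, JOUSER, JOPGM
  FROM MYLIB/MDJRNFLE
 WHERE JOCODE = 'R'
 ORDER BY JOSEQN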

There is one "gotcha" with having your query or program access the contents of the captured application
database record. It is raw data held in a field called JOESD; the journal doesn't differentiate
database fields. Typically, I'll include a data structure within an HLL program that I can move this
field into to parse out its fields.
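
Sketched in RPG, the idea looks something like this; the subfield layout (CUSNO, CUSNAM, CUSBAL) is a made-up stand-in for whatever record you're parsing:

I* Map the raw journal image onto the application record layout
ICUSREC      DS
I                                        1   6 CUSNO
I                                        7  36 CUSNAM
I                                       37  44 CUSBAL
C* Drop the captured image into the structure to parse its fields
C                     MOVELJOESD     CUSREC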

Why would you want to read the journal entries? I have two reasons: I read them when a file is corrupted beyond the reach of a simple Apply Journaled Changes (APYJRNCHG) or Remove Journaled Changes (RMVJRNCHG) command, and I read them to analyze application functionality.

How can a database file get that corrupted? Lots of ways, but let me give you one from real life. A
company is doing a Year 2000 conversion on a large application. At some point, many file structures are
converted (as the internal date fields grow from 6 to 8 bytes). If the company had an undetected file
problem prior to the conversion and realized it after the conversion, using journal entries in their
traditional sense wouldn't work; the journal entries appropriate to the problem wouldn't match the
file's current record structure. The only way to fix that file is with a custom program that reads and
applies journal entries.

Studying journal entries is also a handy way of learning the AS/400, because journal entries follow the
system (or work management), not the application. You'll notice the journal entries stack up
differently from how you may think your application works. If you ever do read the journal
programmatically and have your own program use its records, you'd better know how the AS/400 works. Let's
use the journal to explore work management on the AS/400.

I've written a small application I'll call "typical." It opens three files. The first and third files
are populated with identical records, and the second file is empty. The program reads a record from the
first file, changes a field, and updates the record. Then, it adds the record to the second file. It
finishes by finding an identical record in the third file and deleting it. True, it's not too
realistic, but it gives us a good example of lots of functions. Journaling is on, capturing before and
after images for all three files.

So we have an RPG cycle that will do the following:
o Read sequentially and update a record from file 1
o Write that record to file 2
o Read by key and delete a record from file 3

There are 100 records in file 1, so the program will go through all three steps 100 times. Note the order in which the functions are performed in the program: update, add, delete.
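
If it helps to picture it, here's a rough RPG sketch of that loop; the file, format, and field names (FILE1, FILE1R, KEYFLD, STSFLD, and so on) are invented for illustration:

C* For each file 1 record: update it, add it to file 2,
C* and delete its twin from file 3
C           *IN90     DOWEQ'0'
C                     READ FILE1                    90
C           *IN90     IFEQ '0'
C                     Z-ADD1         STSFLD
C                     UPDATFILE1R
C                     WRITEFILE2R
C           KEYFLD    CHAINFILE3R               91
C           *IN91     IFEQ '0'
C                     DELETFILE3R
C                     ENDIF
C                     ENDIF
C                     ENDDO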

Figure 6 shows the journal entries (with journal code "R"). Note that file reads do not appear in the journal. Entries 12 and 13 show the first record update. Entry 14 shows the first record delete. But remember the application's construction: it wrote a record between the update and the delete, and that record doesn't appear in the journal.

If you keep reading, however, you'll see an update at entries 111 and 112, followed by a write (actually a PUT, or PT) on entry 113. This is followed by more writes until entry 148, where we see a record deleted again.

This is what I mean about journaling following the system, not the application. The AS/400 automatically blocks records when it can. Record blocking isn't new or unique to the AS/400; what is unique is automatic blocking. This is evident in the journal, because the journal logs transactions only when they occur at the database level; it doesn't care about the buffers or program logic.

Here's how our application works. Updated records are not blocked. File 1's updated records went out to the database, and therefore into the journal, in real time. When records are read by key and deleted (which is kind of an update), blocking is also turned off. The deletes in file 3 fall into this category, so its database functions and journaling are also real time.

Writing new records to a file is an activity that can easily be blocked. The system does this for file
2's WRITEs. When records are blocked, they are stored in a buffer temporarily until they are written to
the database in a bunch. In the example shown, the buffer records were pushed to the database when the
buffer area filled up.

You can even figure out from the journal how big the buffer was. I know from the Display File
Description (DSPFD) command that each record in file 2 is 117 bytes. I know from the journal entries
that 34 records were written as a group. If I multiply 34 by 117, I get 3,978, almost a 4KB buffer.

You can (and should) always block records in files you process, especially those files you process sequentially. Do this with the Override with Database File (OVRDBF) command. Within that command, however, is a parameter, Force write ratio (FRCRATIO). When you are journaling a file, leave this parameter at its default, *NONE.
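
For example, before calling the program (the file name and block count here are illustrative; SEQONLY is what turns blocking on for sequential-only processing):

OVRDBF     FILE(FILE2) SEQONLY(*YES 100) FRCRATIO(*NONE)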

So maybe you can see that journal entries are a little different from your application, depending on
what's going on with the system. Let's get really weird and see what happens when we turn on commitment
control and apply it to the same program.

Then, let's go one step further to really analyze things. I'll modify my program to commit every transaction but one; that one it will roll back.

From the results (Figure 7), we can see that when a transaction is committed, all file updates,
deletes, and adds that belong to a transaction are pushed to the database, even though automatic
blocking is technically still in effect.

Let's look at entries 15 through 19, because they are typical of this application's committed
(successful) transactions. Entries 15 and 16 record the update to a file 1 record. The application next
writes a record to file 2, but that record is put in the buffer, so the written record doesn't show in
the journal. The application then reads a record from file 3 and deletes it. That occurs on entry 17.
Now, the application encounters the COMIT op code. Any database records currently held in buffers are
forced to the database, so we see a Put or Write (PT) for file 2 on entry 18, followed by entry 19,
which shows a COMIT (CM) operation was encountered in the program.

Let's look at what happens when we have an unsuccessful transaction and try to roll it back with the
ROLBK op code (refer to entries 30 through 36).

Entries 30 through 32 are fairly normal. A file 1 record is updated, and a file 3 record is deleted. No
write appears yet because it is in the buffer, waiting for either the buffer to fill or a COMIT verb to
force it out. The program encounters the ROLBK.

ROLBK is more exciting than COMIT. Remember, COMIT only pushed buffered records to the database and
logged a journal entry. ROLBK, however, actively uses the journal entries. It starts reading entries
from the current (number 32) until the previous boundary.

A boundary would be the last time a COMIT, ROLBK, or logical unit of work (LUW) was encountered. In
this case, entry number 26 contained a COMIT, so it becomes the boundary the ROLBK operation will stop
at.

The ROLBK logic reads entry 32, sees it is a record delete, and creates a contra-entry to put the deleted record back. (A contra-entry is a term borrowed from accounting, meaning the appropriate opposite function is invoked. The contra-entry for a delete record is a write record.) The entry type is UR, which is used only by ROLBK; it carries the after image, the record image being written back to the database. That becomes entry 33.

Let's try that another way. ROLBK reads entry 32. Entry 32 is a delete of a record (DL). It contains everything the rollback needs to undelete the record: the file/library name and the record image (contained in field JOESD). The rollback logic takes that information, writes the record back to the file, and then creates another journal entry indicating the action it took. This entry, number 33 in the example, is identical to the entry it reverses, 32, except its entry type is UR.

The rollback continues in this manner with the record pair 31/30, the after and before image for the
original update. Like the delete, each is assigned its own contra-type code (UP-BR, UB-UR), and the
database and journals are updated. After the system does the operation in entry 35, the database has
been restored to the point it was at prior to the start of this transaction.

The ROLBK instruction encountered in the program becomes entry number 36. So the program updated, deleted, undeleted, and unupdated (to coin a term), but what happened to the write? Simple: it never got out of the buffer, so the system just lost it. It never made it to the database in the first place.

That brings up one sad issue with commitment control: too many AS/400 programmers don't know how to use it. The committing points should be on transaction boundaries. A transaction boundary can be two things:
o In interactive programs, it is all the screens needed to enter something. It doesn't matter if the
user is using one screen to enter a new vendor or 10 screens to do order entry. From the time he or she
starts on an initial screen to the time that screen appears again, ready for another vendor or order
entry or whatever, that's one transaction.

o In batch programs, a transaction boundary is everything from the time a record is read from the
primary file (the main one the batch program is processing) through all the secondary file processing
until the next primary file record is read.
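
In RPG terms, a batch program committed on proper transaction boundaries looks roughly like this sketch (PRIMRY and the processing comments are placeholders):

C* One transaction per primary file record
C           *IN90     DOWEQ'0'
C                     READ PRIMRY                   90
C           *IN90     IFEQ '0'
C* ... all secondary file processing for this
C* transaction goes here ...
C                     COMIT
C                     ENDIF
C                     ENDDO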

I see way too many AS/400 programmers who don't recognize proper boundaries. For whatever reason, they commit on a program boundary, not a transaction boundary: when the program ends, it either commits everything or rolls back everything.

I won't get into judgment here, but let's look at what happens when we do that in our program. I'll
move the commit verb to the end of the program, run it once, replace the commit verb with a rollback,
and run it again.

Look at each resulting journal. The commit journal looks like the original journal from before we had commitment control: we update and delete over and over, interrupting every so often to write 34 records out of the buffer. The only difference is the Commit (CM) journal entry at the end. When the program rolls back, however, it's got a small problem. The written records have been forced from the buffer to the database in groups of 34, so the rollback logic can't simply ignore them in the buffer; it must deal with them as database records. What we end up with is a journal that looks something like Figure 8.

Delete Record (DR) is the rollback contra-entry for Put Record or Write (PT). Commitment control based on a program boundary has the effect of allowing blocked records, so the program runs just as fast when COMIT is used. However, a ROLBK encountered on a program boundary causes a real performance problem: every transaction made since the program started must be backed out.

Note one thing about working with journal entries of committed records: the cycle ID. Every COMIT, ROLBK, or LUW defines a cycle. In the correct use of commit/rollback, each transaction created one cycle, while the program-based commit/rollback had only one cycle ID. (LUW stands for Logical Unit of Work. I won't get into details here; I mention it only for the sake of completeness.)

If you ever work with journal entries from a committed program, sort the entry file into cycles and work with each set of transactions as a group. You may want to build tables within your program so you can work out all the codes and contra-codes within a cycle before hitting the database.
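
In SQL, that grouping is just a sort on the commit cycle ID ahead of the sequence number. JOCCID is the cycle ID field in the TYPE1 format, and MYLIB/MDJRNFLE is my outfile again:

SELECT JOCCID, JOSEQN, JOCODE, JOENTT, JOOBJ
  FROM MYLIB/MDJRNFLE
 ORDER BY JOCCID, JOSEQN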

We've used all the journal record entry types except for PX. PX is a write using a direct relative
record number (RRN). Instead of writing a record to the next available slot at the end of the file, PX
writes it using the record's RRN. If you write database record number 5, that record will go into
record number 5.

You can process a database this way in RPG, but it is very rarely used. It is used more commonly by
work management when writing records to a file that has been changed to allow the reuse of deleted
records.

V3R1 gave us the capability of designating files as able to reuse deleted records. Before V3R1, every time you deleted a record, its slot in the file was left vacant. Until the file was reorganized, copied on top of itself, or cleared, those deleted records took up as much physical space as they did before they were deleted.

Since V3R1, the Create Physical File (CRTPF) and Change Physical File (CHGPF) commands allow us to designate a file's deleted record space as available for new records. With journaling, you can get a glimpse of how work management does this. A PX entry type means that the system found a deleted record slot and forced a new record into that slot using an RRN. If the deleted record takes the number 5 slot in the file, the new record gets plugged into that slot with the PX write. The field JOCTRR in the journal entry record will contain the actual RRN the system used.
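
The switch itself is a one-liner; for example (the file name is hypothetical):

CHGPF      FILE(MYLIB/MSTFLE) REUSEDLT(*YES)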

The problem you may have if you're working with journal entries on your own is deciding whether to take PX at its literal value and access the record through its relative position. Frankly, I hate using the RRN, even if the journal used it.

Here's why I feel that way. When I use journal records to fix a file, it's because I've run into a serious problem that has existed for some time. (For fresh, simple database problems, I would just have removed the journal entries with the RMVJRNCHG command.) While a record may have been added to the file at a particular RRN, it may have been deleted later, and an entirely different record could have been written into that same spot. My contra-operation to a PX is a Delete (DL). If I take the journal's RRN literally, without checking first, I'll delete a perfectly good record.

I have my own personal rules about using PX. If the file is keyed uniquely, I'll ignore the RRN and
just use the record's key fields (from the record contents held in field JOESD) and rely on normal
database access methods.

However, if the file is unkeyed or keyed with duplicate keys allowed, I'll use the RRN, but I'll do a direct read and compare some basic field values to make sure I'm at least in the ballpark before doing anything to it.

Here's a quick primer to help you access database records by RRN in RPG. On the file specification, the file must be fully procedural, and the K in column 51 must be taken out. In the calculation specifications, use the CHAIN op code to position the file pointer to the record you want to access. Factor 1 for this operation should contain a numeric literal or a numeric field; either way, this is the RRN value CHAIN will use to point to the record you want. If you want to point to the fifth record in the file, either of these will work:

C           5         CHAINMSTFLE                   01

C                     Z-ADD5         RECNO   60
C           RECNO     CHAINMSTFLE                   01
After the file's pointer is successfully positioned (in the example, indicator 01 is off), you can
UPDAT or DELET the record. You can WRITE to the file anytime.

If you are going to WRITE records to a file using RRN yourself (taking that control from the system), you're a glutton for punishment, but here's what you do:

Set up the file specification statement as file type O (output). You can't put in the F for fully procedural. Leave off the K in column 51. In the continuation (K) area, put in the RECNO keyword and a field name that you'll be using as the record number. Here's an output-only file specification:

FTSTJRN1 O   E                    DISK       KRECNO RECN  A
Here's the code that will WRITE a record (presumably after you've filled its fields) directly to record
number 5:

C                     Z-ADD5         RECNO
C                     WRITETJR1                     02
C           *IN02     IFEQ '1'
C* Record was active, not deleted;
C* handle that here...
C                     ENDIF
Be careful with this. You'd better know that record 5 is a deleted record; otherwise, the WRITE will
result in a "duplicate key" error message. You can get around this by specifying an indicator in
columns 56-57 (02 in the example) and checking its status after the WRITE.

I focused on the journal code "R" in this article because its entries are representative of how you access and use journal entries. If you keep in mind the skills I've presented here and refer to the possible journal entries listed in Figure 3, you'll get a sense of how you can utilize journal entries for yourself.

You can sweep them for security problems (who accessed a particular record in a file and what did he or
she do with it?) or any of those mysteries that sometimes happen in the computer room (who IPLed the
system last night?). I've even used it for improving complex applications (try using it to see how
often files are opened or closed in a typical session; you may be surprised).

We journal everything anyway, so we even use it for simple tasks like determining which users are working on the weekend: who, when, how long, and so on. Their managers love it. In IT, we use the user access data to document computer demand each day of the month. That way, we make knowledgeable decisions about when to take the computer down for service to have the least impact on our users.

While saying that, I realize that I'm working on a machine that carries lots of DASD. I have to make a disclaimer here that journaling can be expensive in terms of DASD. It can also cause your applications to take a real performance hit. Although journaling is nice, be careful to think it through before you start. Try to come up with a journaling strategy that matches your environment and your company's needs.

Mike Dawson is a technical editor for Midrange Computing. He is also the author of The AS/400 Owner's Manual, published by Midrange Computing, and The AS/400 System's Programming Guide, published by McGraw-Hill. He can be reached at 602-788-4105.

OS/400 Backup and Recovery-Advanced V3R7 (SC41-4305-01, CD-ROM QBJALF01)

Figure 1: Sample screen from DSPJRN command

Figure 2: Journal code summary

Figure 3: Journal entry types

Figure 4: Specifying an outfile for the DSPJRN command (changed fields are marked)

Figure 5: Journal record layout

Figure 6: Journal entries with journal code "R"

Figure 7: Journal entries after ROLBK

Figure 8: Journal with COMIT at the end of the program