Remote Journaling and Data Recovery


Data recovery is an important factor in any disaster recovery plan. One of the main challenges in a disaster recovery operation is getting a copy of the latest data on the target system.

Traditionally, this has been the domain of a High Availability (HA) provider, who would use a journal-scraping technique to capture the changes as they happen and then copy those changes to a remote system before applying them in close to real-time.

With the introduction of remote journaling, some vendors built a similar process, but they scraped the Remote Journal object instead, which removed the need for a transport layer to move the data between the systems. Again, near real-time replication is achieved. Some customers, however, cannot afford to implement these solutions, so they have turned to alternatives such as Hot Site or Mobile recovery, which are standard offerings from HA providers. These solutions allow them to recover, but there's an extended time period before they can be up and running.

More and more companies are realizing that they cannot afford to be without their iSeries for more than a few hours without the risk of losing their business altogether. The amount of data that is pumped into databases is growing rapidly, and the time required to rebuild the system following a loss is increasing. Add to this the time and effort required to retrieve lost data and to keep input of new data flowing. Catching up can become almost impossible.

If only you could input and apply the data in real-time! With remote journaling, the data is available, but you can't use the Apply/Remove Journal Change (APY/RMVJRNCHG) commands against a Remote Journal object because the Remote Journal objects reside on the source system, which can't be accessed. So how do you proceed?

First, you need a copy of the objects that the remote journal changes can be applied to. The save from last night--or a set of incremental saves if you're using Save Changed Objects (SAVCHGOBJ)--is a good starting point. Remote journaling needs to have been implemented for the required objects, and the receivers must still be online. A local journal environment that is a mirror of the source system local journal has to exist so that when the restore is carried out, the files will automatically attach themselves to the journal. Then, you must get the data that was deposited in the remote receivers since the last save and apply it to the restored objects.

This is the tricky part! You have to fool the system into thinking that the data in the receivers is relevant to the objects so that it can be applied. Remember, the Remote Journal object doesn't know that those objects exist on this system; it only knows they were attached to a local journal on the source system.

I set out to prove that it's possible to fool the system. The receivers have no affiliation with the objects; they only hold the data that the journal has captured, and the journal has the affiliation with the objects. So all I needed to do was copy the receivers from the Remote Journal object to the Local Journal object. Once the receivers were attached to the local journal, the APY/RMVJRNCHG commands worked perfectly!

Where This Could Be Used

Now, you have all the information you need to develop a plethora of recovery options. Because you know the data can be updated using the RMV/APYJRNCHG commands, you can provide options that were not available previously. There are obviously many more options than those listed here, but here are a few to start with.

Your Own Resources

Suppose you have a system that could be used for recovery, but you don't have the budget for an HA product and its implementation. All you have to do is create the required objects on the target system and maintain those objects using save and restore operations. Remote journaling will keep your data changes since the last save. If you need to recover, all you have to do is replay the changes against the database using the APYJRNCHG command and clean up any object changes. The recovery time will not match that of an HA product, but, depending on the time of the failure, it could be very acceptable. If the failure occurs before any changes have been created, you will be in the same position you would be with an HA product.

http://www.mcpressonline.com/articles/images/2002/Remote%20Journaling%20and%20Data%20Recovery_2%20(V4)00.png

Hot Site Provider

Hot site providers can target new offerings in which remote journaling is used to store real-time updates from source systems. A true copy of your system is maintained using the daily saves, and in the event of a failure, you can replay the information against the database using the APYJRNCHG command and clean up any object changes. LPAR is a major contributor to this solution. Recovery time will be extended only because of the time it takes to get to the hot site and start the process. The amount of data loss is reduced too. Previously, all you had to work with was the last save; you couldn't apply any changes as you didn't have a copy.

http://www.mcpressonline.com/articles/images/2002/Remote%20Journaling%20and%20Data%20Recovery_2%20(V4)01.png

 

Remote Data Vault

A remote data vault is a type of hot site in which only your remote journals are stored. In the event of a system loss, a new system is rebuilt using your saves, and then you replay the information that was captured in the remote journals against the database using the APYJRNCHG command and clean up any object changes. The vendor stores only the Remote Journal objects and data for you. With this solution, management processes must be installed so that when a save operation completes, the receivers are deleted, because the save then holds the same information. Recovery is extended because of the time it takes to create the base system; however, data loss is minimal.

http://www.mcpressonline.com/articles/images/2002/Remote%20Journaling%20and%20Data%20Recovery_2%20(V4)02.png

Application Recovery

Replication using an HA product will result in near real-time replication. Should a failure occur, the data on the target system could be in a position that is too far forward--that is, data has been applied that needs to be removed to allow jobs to be resubmitted. Using the remote journal information and the Remove Journal Change (RMVJRNCHG) command, this data can now be easily removed back to a start point compatible with a job restart. The job information in the journal relates to the job on the source system, so as long as you know which jobs were open at the time of the failure, you can use this information to remove the relevant changes. (Note: The RMVJRNCHG command removes all entries back to the open job's start entry. This means that any entries added against the same objects by other jobs after the job start and up to the point of failure will also be removed.)

http://www.mcpressonline.com/articles/images/2002/Remote%20Journaling%20and%20Data%20Recovery_2%20(V4)03.png
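The scope of that removal can be illustrated with a short sketch. This is plain Python and purely conceptual--journal entries are modeled as simple tuples, not the real IBM i entry format:

```python
# Illustrative model of the note above: RMVJRNCHG rolls entries off
# last-to-first, back to the open job's start entry, so entries that
# OTHER jobs deposited against the same objects after that point are
# removed as well.

def entries_removed(entries, open_job):
    """entries: (sequence, job_name) tuples in deposit order.
    Returns the sequence numbers that rolling `open_job` back
    to its start entry would remove."""
    start_seq = min(seq for seq, job in entries if job == open_job)
    return [seq for seq, job in entries if seq >= start_seq]

log = [(10, "JOBA"), (11, "JOBB"), (12, "JOBA"), (13, "JOBB")]
print(entries_removed(log, "JOBA"))  # [10, 11, 12, 13] -- JOBB's 11 and 13 go too
```

Rolling JOBA back removes everything from sequence 10 onward, including JOBB's entries 11 and 13--which is exactly why you need to know every job that was open at the time of the failure.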

A Picture's Worth a Thousand Words!

Below is a pictorial view of what you set out to achieve. The yellow items relate to what is going on constantly during normal periods. The blue items reflect the fact that the journal existed on the target system, but it was in a static status because no updates were being applied as a result of the updates on the source system. The red items show the process that was followed to update the objects using the remote journal receivers. I could have expanded the picture to show a daily save and restore of the database object, but that's for another day!

http://www.mcpressonline.com/articles/images/2002/Remote%20Journaling%20and%20Data%20Recovery_2%20(V4)04.png
How I Did It

To test the new functionality of remote journaling and identify the process that has to be followed to allow the APY/RMVJRNCHG commands to be run against remote journal receivers, I set up a test environment. The test environment was very simple, as it only had to prove the concept. The next stage will be to test the additional capabilities of remote journaling, such as data area support, data queue support, and IFS object support. As commitment control is generally self-healing, I decided not to test that functionality; forcing an abrupt end of the system while commitment control is active would have been too complex a task, and it might also have caused additional damage to the system. My test was only to confirm the ability to use the information stored in a remote journal to update the target database.

I used two systems: a 170 and a 720, both running V5R1. The systems were connected on a LAN using Ethernet and TCP/IP.

170
DTALIB--Contained all of the files to be replicated
Files--FILEA, FILEB, FILEC, FILED
JRNLIB--Contained the journal environment
Journal--TSTJRN
Receiver--RCV0000001
Message Queue--JRNMSGQ (could have been QSYSOPR)

720
Same objects as on the 170 system plus...
RMTJRN--Contained the remote journal environment
SAVLIB--Contained the save files used for the save and restore process
Save File--TEMP

A few notes: 1) Only the library has to exist before the Add Remote Journal (ADDRMTJRN) command is issued. Then, when ADDRMTJRN is issued, the journal is created in the remote journal library. 2) I chose to have a different library on the target system to allow the recovery method to work. 3) The journal receiver is created when the remote journal is activated. 4) Remember to make sure *BOTH is defined for the images to be captured.

Setting Up the Test Environment

First, create the required libraries on both systems. Before you can set up the environments, you have to create a number of libraries to segregate the relevant objects. I separated the data objects and the journal objects because, when you save a library that has a journal object in it and then you restore that library to a system where it hasn't existed before, the OS will automatically attach a receiver to the journal object when it is restored. When the receiver is restored, it is restored as a partial receiver. This is because the receiver was attached at the time it was saved, but the restore process restores the objects in alphabetical order--JRN comes before JRNRCV--and a journal must have a receiver attached to exist. While this situation doesn't affect the test, I felt it was prudent. The remote journal must exist in a separate library to allow the test to work as described. The SAVLIB library is for convenience more than anything else.

So, run the following commands on the source system:

CRTLIB LIB(DTALIB) TEXT('Remote Journal Test DATA Library')
CRTLIB LIB(JRNLIB) TEXT('Remote Journal Test JRN Library')


Then, run these commands on the target system:

CRTLIB LIB(DTALIB) TEXT('Remote Journal Test DATA Library')
CRTLIB LIB(JRNLIB) TEXT('Remote Journal Test JRN Library')
CRTLIB LIB(RMTJRN) TEXT('Remote Journal Test RMTJRN Library')
CRTLIB LIB(SAVLIB) TEXT('Remote Journal Test SAVE Library')


Next, create the journal environment on both systems. I created only the Local Journal objects, as the Remote Journal objects are created later using the Add Remote Journal (ADDRMTJRN) and Change Remote Journal (CHGRMTJRN) commands. The order in which you create these objects is important. The receiver and message queue have to exist before the journal can be created. The restore of the objects to the target system will create a link between the journal on that system and the objects. I could have saved the journal object and restored it to the target system--which removes the need to do the CHGJRN command before restoring the receivers from the remote journal--but the method I chose seems simpler.

Now, run these commands on the source system and on the target system:

CRTJRNRCV JRNRCV(JRNLIB/RCV0000001) TEXT('Test Journal Receiver')
CRTMSGQ MSGQ(JRNMSGQ) TEXT('Journal Message Queue')
CRTJRN JRN(JRNLIB/TSTJRN) JRNRCV(JRNLIB/RCV0000001)
MSGQ(JRNLIB/JRNMSGQ) TEXT('Remote Journal Test (Local journal)')


Now, create the files required on the source system. I chose to test only a few files. The aim of this test is to show the use of the APY/RMVJRNCHG commands, not to see how many objects can be included in the use of the commands. I created more than one file to show that the commands can handle a variety of requests.

Enter these commands on your source system:

CRTPF FILE(DTALIB/FILEA) RCDLEN(100) TEXT('Test File') 
CRTPF FILE(DTALIB/FILEB) RCDLEN(100) TEXT('Test File') 
CRTPF FILE(DTALIB/FILEC) RCDLEN(100) TEXT('Test File')
CRTPF FILE(DTALIB/FILED) RCDLEN(100) TEXT('Test File')


Now, journal the files on the source system to make sure all updates are captured as soon as the objects exist, thereby ensuring the files are associated with the journal object. When they are restored, the OS will try to attach them to a journal of the same name and library. This is, therefore, an important step; if you save the files before journaling is started and then try to use the APY/RMVJRNCHG commands against the remote journal receivers, the operation will fail, stating that the required objects do not exist on the system. This happens because no link has been created on the source system to the journal object you tried to run the commands against.

Note: You must journal the objects with *BOTH for the journal images. Otherwise, the RMVJRNCHG command could fail. To replace an image in the file, the system has to know what the image was like before a change.
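The reason *BOTH matters can be sketched in a few lines of Python. This is an illustration of the before-/after-image idea only, not of the real journal entry format:

```python
# Each record-level change is journaled with an after-image (the new
# record data) and -- only with IMAGES(*BOTH) -- a before-image as well.
# Applying a change rolls the record forward; removing it rolls the
# record back, which is impossible if no before-image was captured.

def apply_change(entry):
    """APYJRNCHG-style: the result is the after-image."""
    return entry["after"]

def remove_change(entry):
    """RMVJRNCHG-style: the result is the before-image."""
    if entry["before"] is None:  # journaled with *AFTER only
        raise ValueError("no before-image; journal with IMAGES(*BOTH)")
    return entry["before"]

both = {"before": "OLD DATA", "after": "NEW DATA"}
print(apply_change(both))   # NEW DATA
print(remove_change(both))  # OLD DATA
# With {"before": None, ...}, remove_change raises -- the RMVJRNCHG failure case.
```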

Here are the commands for the source system:

STRJRNPF FILE(DTALIB/FILEA DTALIB/FILEB DTALIB/FILEC DTALIB/FILED) JRN(JRNLIB/TSTJRN) IMAGES(*BOTH) OMTJRNE(*OPNCLO)


Save the files from the source system, and restore them to the target system. This will copy the objects--complete with the "Journal ID" set--to the remote system. This step is very important; when the object is journaled, the system will set the Journal ID in the object. This Journal ID is a 10-byte value that uniquely identifies the journal object itself in the system. The journal entries created will contain this information, and when the APY/RMVJRNCHG commands are run, they check the Journal ID to ensure the IDs match in both the object and the journal entry. Failure to carry out this step will cause the test to fail.

Run this on the source system:

SAVOBJ OBJ(*ALL) LIB(DTALIB) DEV(*SAVF) SAVF(QGPL/TEST)


You can use any transfer method you have at your disposal to transfer the save file to the remote system. I used FTP.
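If you wanted to script the transfer instead of typing FTP subcommands by hand, it might look like the sketch below. This is a hypothetical illustration only--the host name, credentials, and paths are placeholders, and it assumes the save-file image is reachable as a stream file; on a real iSeries you can simply FTP the save file directly between systems. The one detail that genuinely matters either way is binary mode: a save file transferred as text is unusable.

```python
from ftplib import FTP

def send_savf(host, user, password, local_path, remote_name):
    """Push a save-file image to the target system over FTP.
    Binary mode is essential -- an ASCII transfer corrupts the save file."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR {remote_name}", f)

# Hypothetical usage (placeholder host and names):
#   send_savf("target.example.com", "chris", "secret", "test.savf", "TEST")
```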

Now, run this on the target system:
RSTOBJ OBJ(*ALL) SAVLIB(DTALIB) DEV(*SAVF) SAVF(QGPL/TEST) MBROPT(*ALL) ALWOBJDIF(*ALL)

This is what the environments look like so far:
http://www.mcpressonline.com/articles/images/2002/Remote%20Journaling%20and%20Data%20Recovery_2%20(V4)05.png


The next step is to create a remote database entry. This is required for the ADDRMTJRN command.

Run this command on the source system:
ADDRDBDIRE RDB(SHIELDSYS2) RMTLOCNAME('192.168.100.7' *IP) TEXT('remote data base on system 2')

Now, add a remote journal to the local journal. This is how the remote journal objects are created. You must have a remote database entry and a valid communications link for this to work. The Remote Journal Library is all that has to exist on the remote system. I separated the Local and Remote Journal Libraries on the target system to ensure the test would work.

Run this command on the source system:

ADDRMTJRN RDB(SHIELDSYS2) SRCJRN(JRNLIB/TSTJRN) 
TGTJRN(RMTJRN/TSTJRN) RMTRCVLIB(RMTJRN) MSGQ(QSYSOPR) TEXT('Remote Journal test (Remote Journal)')


Activate the remote journal. The system will then ensure that the entries placed in the local journal are also transmitted over the communications link to the remote journal. I chose ASYNC as my delivery method because I didn't need the system to confirm that each entry exists on the target before the deposit completes on the source. If you are using an HA product for the transport mechanism, you effectively have only an ASYNC process anyway, because the HA product has to extract the entries and then transport and store them on the remote system separately.

Use this command on the source system:

CHGRMTJRN RDB(SHIELDSYS2) SRCJRN(JRNLIB/TSTJRN) 
TGTJRN(RMTJRN/TSTJRN) JRNSTATE(*ACTIVE)


Now, the links are created and the setup is complete.

http://www.mcpressonline.com/articles/images/2002/Remote%20Journaling%20and%20Data%20Recovery_2%20(V4)06.png
Testing the Theory

You're ready to start the test! Because you are only looking at the ability to use the APY/RMVJRNCHG commands against the remote journal receivers, you can use the Update Data (UPDDTA) command to create the changes to the files. First, you need to add some new records into the files. Then, you will change the data you have added to demonstrate the ability to change records. This is done on the source system only. (I left out updating FILED on purpose.)

UPDDTA FILE(DTALIB/FILEA)
UPDDTA FILE(DTALIB/FILEB)
UPDDTA FILE(DTALIB/FILEC)


You can enter as much or as little data as you wish, but I suggest creating a number of records with multiple updates against those records. Doing this will allow you to add or remove a variable number of changes and then verify that the data is as you expect. Remember, you have the ability to misuse the commands, and any corruption will probably be caused by a misunderstanding of the process used. If you try to use APYJRNCHG on changes that have already been applied, you will cause errors.

Once you have populated the files with some data and carried out a few updates, you're ready to apply those changes to the backup database.

Understand the Issues

First, you have to attach the receivers that have been created and maintained by the Remote Journal function to the local journal on the target system. This is carried out by a few simple actions, as follows.

When you save the remote journal receiver, it will have to be restored into the local journal environment. Unless you take action to resolve it, you will therefore receive an error stating that the object already exists! My test involved using the CHGJRN command against the local journal on the target system. I used *GEN for the receiver parameter, which created a receiver named RCV0000002.

Because I had only one receiver attached, I only had to delete the RCV0000001 object in the JRNLIB to allow the test to continue. If, however, I had saved only the journal object and restored it on the target system, the OS would have automatically attached a receiver that would not conflict. A test I ran showed that a journal saved with RCV0000007 attached and then restored actually ended up with RCV2000007 attached after the restore.

My testing did continue with a CHGJRN on the source system and further tests using multiple receivers. The results were the same, so I won't detail those tests here. For the process to work, just be sure that the start and end sequence numbers and receiver names are correct.

Running the Test and Evaluating the Results

Create the save file in SAVLIB. You need to be able to save the receiver object from the Remote Journal Library, and a save file may be a quicker and simpler method than using tape. It was for me.

Run this command on the target system:

CRTSAVF FILE(SAVLIB/TEMP) TEXT('Remote journal Test save file')


Save the receiver to the save file. I had only one receiver, so I didn't have to determine which ones were required.

Run this on the target system:
SAVOBJ OBJ(*ALL) LIB(RMTJRN) DEV(*SAVF) OBJTYPE(*JRNRCV) SAVF(SAVLIB/TEMP)

Change the local journal on the target system to allow the removal of the attached receiver (RCV0000001). You'll be trying to restore RCV0000001 to the library.

Run this on the target system:

CHGJRN JRN(JRNLIB/TSTJRN) JRNRCV(*GEN)


Delete the old receiver from the local journal on the target system. Now, only RCV0000002 exists in the library. DLTOPT(*IGNINQMSG) simply suppresses the inquiry message that is normally sent if you try to delete a receiver before it has been saved.

Run this on the target system:

DLTJRNRCV JRNRCV(JRNLIB/RCV0000001) DLTOPT(*IGNINQMSG)


Restore the receiver from the remote journal to the local journal. The receiver will be restored and can be used for data replication. Because a receiver is already attached to the journal, and because the saved receiver's status at save time was "attached," the system restores the new receiver as a "partial" receiver--only one receiver can be in attached status. While there are limitations on what you can do with a partial receiver using the APY/RMVJRNCHG commands, they are not important for this test.

Run this on the target system:
RSTOBJ OBJ(*ALL) SAVLIB(RMTJRN) DEV(*SAVF) SAVF(SAVLIB/TEMP) MBROPT(*ALL) ALWOBJDIF(*ALL) RSTLIB(JRNLIB)

 
http://www.mcpressonline.com/articles/images/2002/Remote%20Journaling%20and%20Data%20Recovery_2%20(V4)07.png
Now, identify the sequence numbers required--from the last save entry +1 through the last update entry for the files in the test. The link between the receivers will be broken even though both RCV0000001 and RCV0000002 exist: RCV0000002 carries an entry stating that its previous receiver is RCV0000001, but RCV0000001 knows nothing about RCV0000002, so the OS will complain if you try to apply changes that span this boundary.

Run this on the target system:
DSPJRN JRN(RMTJRN/TSTJRN)

Apply the journal changes to the local files. The first entry after the objects were saved was 30, and the last entry in the journal was 86, so here's the command to run on the target system:

APYJRNCHG JRN(JRNLIB/TSTJRN) FILE((DTALIB/*ALL)) RCVRNG(JRNLIB/RCV0000001 JRNLIB/RCV0000001) FROMENT(30) TOENT(86)
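The selection of FROMENT(30) and TOENT(86) can be sketched as a small helper--again purely illustrative, with entries modeled as (sequence, code) pairs rather than real DSPJRN output:

```python
# Pick the apply range: from the entry AFTER the last object-save
# entry through the final entry in the receiver.

def apply_range(entries):
    """entries: (sequence, code) pairs in deposit order; "SAVE"
    stands in for the journal's object-saved entries."""
    save_seqs = [seq for seq, code in entries if code == "SAVE"]
    from_ent = save_seqs[-1] + 1 if save_seqs else entries[0][0]
    to_ent = entries[-1][0]
    return from_ent, to_ent

log = [(28, "PUT"), (29, "SAVE"), (30, "PUT"), (86, "UPDATE")]
print(apply_range(log))  # (30, 86) -- the values used in this test
```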

This is what you have now:
 http://www.mcpressonline.com/articles/images/2002/Remote%20Journaling%20and%20Data%20Recovery_2%20(V4)08.png
Now, it's time to check the files to ensure the updates worked. The files were empty; they should now be an exact copy of the source files. When you ran the UPDDTA commands, you added information in the record data to show where you had updated records as opposed to creating them. This allowed you to track the data updates as well as the new records. FILED was empty, and no updates were applied, so it should still be empty.

Run this on the source system to check:
DSPPFM FILE(DTALIB/FILEA)
DSPPFM FILE(DTALIB/FILEB)
DSPPFM FILE(DTALIB/FILEC)
DSPPFM FILE(DTALIB/FILED)

And then run the same thing on the target system.

Remove the journal changes. This time, you run from the last entry to the first, because the RMVJRNCHG command has to start with the last change and work backward. So here's what to run on the target system:

RMVJRNCHG JRN(JRNLIB/TSTJRN) FILE((DTALIB/*ALL)) RCVRNG(JRNLIB/RCV0000001 JRNLIB/RCV0000001) FROMENT(86) TOENT(30)

Check the files to ensure the updates worked. You know the files were empty, so a check of the files on the remote system is all that is needed to ensure they have no entries. Run this on the target system:

DSPPFM FILE(DTALIB/FILEA)
DSPPFM FILE(DTALIB/FILEB)
DSPPFM FILE(DTALIB/FILEC)

After RMVJRNCHG, you have this:
http://www.mcpressonline.com/articles/images/2002/Remote%20Journaling%20and%20Data%20Recovery_2%20(V4)09.png
Did It Work?
This concludes the test and confirms that the remote journal can be used to update the remote database without the use of an HA product. Obviously, there are restrictions, but these restrictions are not showstoppers for most users. New features are being added all the time, and as I test those features, I will create documentation to show what can be achieved.

Sidebar: PRPQ 5799 AJC--Another Improvement!

The free PRPQ 5799 AJC for V5R1 allows the replay of additional object-wide changes. Unfortunately, there is no backward-compatible support because the PRPQ uses the features available only in V5R1. However, these features are standard in V5R2. A major benefit of the PRPQ is the ability to replicate more object commands using the Apply Journaled Changes Extended (APYJRNCHGX) command, an extended version of the APYJRNCHG command. While there are limitations, which are listed below, the additional support provided will be welcomed by most. I have been informed by reliable sources that the PRPQ also provides other improvements that offer increased flexibility and speed when running the APY/RMVJRNCHG commands.

APYJRNCHGX provides the capability to replay many object-level OS/400 commands. For example, CREATE FILE and CHANGE FILE have been enhanced for V5R1 such that they now emit new journal entries that APYJRNCHGX can recognize and replay.

This product's key goal is to improve the recoverability of an application running on the iSeries. Prior to V5R1, many object-level operations were not journaled. Now they are journaled, but APYJRNCHG is not capable of applying object-level journal entries (such as an SQL ALTER TABLE statement). The command supplied with this PRPQ, APYJRNCHGX, does apply the object-level operations.

This PRPQ is especially useful in environments where object-level changes occur between database backups. If an application creates or alters tables (or otherwise makes object-level changes) during productive operations, then this PRPQ provides the ability to more fully recover the database in the event of a disaster.

The APYJRNCHGX command applies the changes that have been journaled for a particular journaled object to a saved version of the object to recover it after an operational error or some form of damage. The difference between APYJRNCHGX and APYJRNCHG is that object-level changes are included as part of the APYJRNCHGX apply. Examples of object-level changes include the following SQL statements:

  • CREATE TABLE
  • CREATE INDEX
  • ALTER TABLE
  • DROP INDEX


Many object-level OS/400 commands (for example, CHGPF and DLTF) also deposit journal entries. For a complete list of object-level journal entries, refer to the online help of APYJRNCHGX or the Backup and Recovery book (SC41-5304).

For example, here's the command you'd use to apply changes to an SQL collection:
APYJRNCHGX JRN(MYCOLL/QSQJRN) FILE(MYCOLL/*ALL)

This command causes the system to apply all journaled changes to all files in the MYCOLL collection since the last save. The receiver range is determined by the system. The changes are applied beginning with the first journaled change on the receiver chain after each file was last saved and continue through all applicable journal entries to the point at which the files were last restored.

All object-level entries (for example, CREATE/DROP/ALTER TABLE) for the MYCOLL collection are included. Commitment control boundaries are honored because the default value for the CMTBDY parameter, *YES, is used.

The product does have a few limitations. It is English-only. It does not cover IFS, data queue, or data area changes (you have to use the normal APYJRNCHG command to service these objects). Finally, you may not specify individual file names on which to apply journaled changes (which is allowed with APYJRNCHG); the library must be specified with *ALL (that is, LIBRARY/*ALL) for the file parameter.


Chris Hird first worked with High Availability (HA) at IBM Havant (UK) in 1989. He was responsible for the technical interface with the HA product's developer and for setting up a support structure in the UK to support IBM installations. He spent a good deal of time installing the product at customer sites throughout EMEA before leaving IBM to set up Shield Software Services in 1993. Shield was an IBM Business Partner and became a MiMiX reseller on the purchase of Multiple Systems Software by Lakeview Technology, retaining this status until being sold to another MiMiX reseller. Chris moved to Canada in 1997 and launched Shield Advanced Solutions (Canada) Ltd., which develops and provides tools and utilities aimed mainly at supporting HA environments. Chris still consults on HA implementations, using his broad knowledge of the iSeries to help customers gain the most from their investment in an HA product. He can be contacted by phone at 519-940-1192 or via email.

 

BLOG COMMENTS POWERED BY DISQUS

LATEST COMMENTS

RESOURCE CENTER

  • WHITE PAPERS

  • WEBCAST

  • TRIAL SOFTWARE

  • Mobile Computing and the IBM i

    SB ASNA PPL 5450Mobile computing is rapidly maturing into a solid platform for delivering enterprise applications. Many IBM i shops today are realizing that integrating their IBM i with mobile applications is the fast path to improved business workflows, better customer relations, and more responsive business reporting.

    This ASNA whitepaper takes a look at mobile computing for the IBM i. It discusses the different ways mobile applications may be used within the enterprise and how ASNA products solve the challenges mobile presents. It also presents the case that you already have the mobile programming team your projects need: that team is your existing RPG development team!

    Get your copy today!

  • Automate IBM i Operations using Wireless Devices

    DDL SystemsDownload the technical whitepaper on MANAGING YOUR IBM i WIRELESSLY and (optionally) register to download an absolutely FREE software trail. This whitepaper provides an in-depth review of the native IBM i technology and ACO MONITOR's advanced two-way messaging features to remotely manage your IBM i while in or away from the office. Notify on-duty personnel of system events and remotely respond to complex problems (via your Smartphone) before they become critical-24/7. Problem solved!

    Order your copy here.

  • DR Strategy Guide from Maxava: Brand New Edition - now fully updated to include Cloud!

    SB Maxava PPL 5476PRACTICAL TOOLS TO IMPLEMENT DISASTER RECOVERY IN YOUR IBM i ENVIRONMENT

    CLOUD VS. ON-PREMISE?
    - COMPREHENSIVE CHECKLISTS
    - RISK COST CALCULATIONS
    - BUSINESS CASE FRAMEWORK
    - DR SOLUTIONS OVERVIEW
    - RFP BUILDER
    Download your free copy of DR Strategy Guide for IBM i from Maxava today.

     

  • White Paper: Node.js for Enterprise IBM i Modernization

    SB Profound WP 5539

    If your business is thinking about modernizing your legacy IBM i (also known as AS/400 or iSeries) applications, you will want to read this white paper first!

    Download this paper and learn how Node.js can ensure that you:
    - Modernize on-time and budget - no more lengthy, costly, disruptive app rewrites!
    - Retain your IBM i systems of record
    - Find and hire new development talent
    - Integrate new Node.js applications with your existing RPG, Java, .Net, and PHP apps
    - Extend your IBM i capabilties to include Watson API, Cloud, and Internet of Things


    Read Node.js for Enterprise IBM i Modernization Now!

     

  • 2020 IBM i Marketplace Survey Results

    HelpSystems

    This year marks the sixth edition of the popular IBM i Marketplace Survey Results. Each year, HelpSystems sets out to gather data about how businesses use the IBM i platform and the IT initiatives it supports. Year over year, the survey has begun to reveal long-term trends that give insight into the future of this trusted technology.

    More than 500 IBM i users from around the globe participated in this year’s survey, and we’re so happy to share the results with you. We hope you’ll find the information interesting and useful as you evaluate your own IT projects.

  • AIX Security Basics eCourse

    Core Security

    With so many organizations depending on AIX day to day, ensuring proper security and configuration is critical to ensure the safety of your environment. Don’t let common threats put your critical AIX servers at risk. Avoid simple mistakes and start to build a long-term plan with this AIX Security eCourse. Enroll today to get easy to follow instructions on topics like:

    • Removing extraneous files
    • Patching systems efficiently
    • Setting and validating permissions
    • Managing service considerations
    • Getting overall visibility into your networks

     

  • Developer Kit: Making a Business Case for Modernization and Beyond

    Profound Logic Software, Inc.

    Having trouble getting management approval for modernization projects? The problem may be you're not speaking enough "business" to them.

    This Developer Kit provides you study-backed data and a ready-to-use business case template to help get your very next development project approved!

  • What to Do When Your AS/400 Talent Retires

    HelpSystems

    IT managers hoping to find new IBM i talent are discovering that the pool of experienced RPG programmers, operators, and administrators is small.

    This guide offers strategies and software suggestions to help you plan IT staffing and resources and smooth the transition after your AS/400 talent retires. Read on to learn:

    • Why IBM i skills depletion is a top concern
    • How leading organizations are coping
    • Where automation will make the biggest impact

     

  • IBM i Resources Retiring?

    Let’s face it: IBM i experts and RPG programmers are retiring from the workforce. Are you prepared to handle their departure?
    Our panel of IBM i experts—Chuck Losinski, Robin Tatam, Richard Schoen, and Tom Huntington—will outline strategies that allow your company to cope with IBM i skills depletion by adopting these strategies that allow you to get the job done without deep expertise on the OS:
    - Automate IBM i processes
    - Use managed services to help fill the gaps
    - Secure the system against data loss and viruses
    The strategies you discover in this webinar will help you ensure that your system of record—your IBM i—continues to deliver a powerful business advantage, even as staff retires.

     

  • Backup and Recovery Considerations for Security Data and Encrypted Backups

    Security expert Carol Woodbury is joined by Debbie Saugen. Debbie is an expert on IBM i backup and recovery, disaster recovery, and high availability, helping IBM i shops build and implement effective business continuity plans.
    In today’s business climate, business continuity is more important than ever. But 83 percent of organizations are not totally confident in their backup strategy.
    During this webinar, Carol and Debbie discuss the importance of a good backup plan, how to ensure you’re backing up your security information, and your options for encrypted back-ups.

  • Profound.js: The Agile Approach to Legacy Modernization

    In this presentation, Alex Roytman and Liam Allan will unveil a completely new and unique way to modernize your legacy applications. Learn how Agile Modernization:
    - Uses the power of Node.js in place of costly system re-writes and migrations
    - Enables you to modernize legacy systems in an iterative, low-risk manner
    - Makes it easier to hire developers for your modernization efforts
    - Integrates with Profound UI (GUI modernization) for a seamless, end-to-end legacy modernization solution

     

  • Data Breaches: Is IBM i Really at Risk?

    IBM i is known for its security, but this OS could be more vulnerable than you think.
    Although Power Servers often live inside the safety of the perimeter firewall, the risk of suffering a data leak or data corruption remains high.
    Watch noted IBM i security expert Robin Tatam as he discusses common ways that this supposedly “secure” operating system may actually be vulnerable and who the culprits might be.

    Watch the webinar today!

     

  • Easy Mobile Development

    Watch this on-demand webinar and learn how to rapidly and easily deploy mobile apps to your organization – even when working with legacy RPG code! IBM Champion Scott Klement will demonstrate how to:
    - Develop RPG applications without mobile development experience
    - Deploy secure applications for any mobile device
    - Build one application for all platforms, including Apple and Android
    - Extend the life and reach of your IBM i (aka iSeries, AS/400) platform
    You’ll see examples from customers who have used our products and services to deliver the mobile applications of their dreams, faster and easier than they ever thought possible!

     

  • Profound UI: Unlock True Modernization from your IBM i Enterprise

    Modern, web-based applications can make your Enterprise more efficient, connected and engaged. This session will demonstrate how the Profound UI framework is the best and most native way to convert your existing RPG applications and develop new modern applications for your business. Additionally, you will learn how you can address modernization across your Enterprise, including databases and legacy source code, with Profound Logic.

  • Node Webinar Series Pt. 1: The World of Node.js on IBM i

    Profound Logic Software, Inc.

    Have you been wondering about Node.js? Our free Node.js Webinar Series takes you from total beginner to creating a fully-functional IBM i Node.js business application.

    Part 1 will teach you what Node.js is, why it's a great option for IBM i shops, and how to take advantage of the ecosystem surrounding Node.

    In addition to background information, our Director of Product Development Scott Klement will demonstrate applications that take advantage of the Node Package Manager (npm).

  • 5 New and Unique Ways to Use the IBM i Audit Journal

    You must be asking yourself: am I doing everything I can to protect my organization’s data? Tune in as our panel of IBM i high availability experts discuss:


    - Why companies don’t test role swaps when they know they should
    - Whether high availability in the cloud makes sense for IBM i users
    - Why some organizations don’t have high availability yet
    - How to get high availability up and running at your organization
    - High availability considerations for today’s security concerns

  • Profound.js 2.0: Extend the Power of Node to your IBM i Applications

    In this Webinar, we'll demonstrate how Profound.js 2.0 enables you to easily adopt Node.js in your business, and to take advantage of the many benefits of Node, including access to a much larger pool of developers for IBM i and access to countless reusable open source code packages on npm (Node Package Manager).
    You will see how Profound.js 2.0 allows you to:

    • Provide RPG-like capabilities for server-side JavaScript.
    • Easily create web and mobile application interfaces for Node on IBM i.
    • Let existing RPG programs call Node.js modules directly, and vice versa.
    • Automatically generate code for Node.js.
    • Automatically convert existing RPGLE code into clean, simplified Node.js code.

    Download and watch today!

     

  • Make Modern Apps You'll Love with Profound UI & Profound.js

    Whether you have green screens or a drab GUI, your outdated apps can benefit from modern source code, modern GUIs, and modern tools.
    Profound Logic's Alex Roytman and Liam Allan are here to show you how Free-format RPG and Node.js make it possible to deliver applications your whole business will love:

    • Transform legacy RPG code to modern free-format RPG and Node.js
    • Deliver truly modern application interfaces with Profound UI
    • Extend your RPG applications to include Web Services and NPM packages with Node.js

     

  • Accelerating Programmer Productivity with Sequel


    Most business intelligence tools are just that: tools, a means to an end but not an accelerator. Yours could even be slowing you down. But what if your BI tool didn't just give you a platform for query-writing but also improved programmer productivity?
    Watch the recorded webinar to see how Sequel:

    • Makes creating complex results simple
    • Eliminates barriers to data sources
    • Increases flexibility with data usage and distribution

    Accelerated productivity makes everyone happy, from programmer to business user.

  • Business Intelligence is Changing: Make Your Game Plan

    It’s time to develop a strategy that will help you meet your informational challenges head-on. Watch the webinar to learn how to set your IT department up for business intelligence success. You’ll learn how the right data access tool will help you:

    • Access IBM i data faster
    • Deliver useful information to executives and business users
    • Empower users with secure data access

    Ready to make your game plan and finally keep up with your data access requests?

     

  • Controlling Insider Threats on IBM i

    Let’s face facts: servers don’t hack other servers. Despite the avalanche of regulations, news headlines remain chock full of stories about data breaches, all initiated by insiders or intruders masquerading as insiders.
    User profiles are often duplicated or restored and are rarely reviewed for the appropriateness of their current configuration. This increases the risk of the profile being able to access data without the intended authority or having privileges that should be reserved for administrators.
    Watch security expert Robin Tatam as he discusses a new approach for onboarding new users on IBM i and best-practices techniques for managing and monitoring activities after they sign on.

  • Don't Just Settle for Query/400...

    While introducing Sequel Data Access, we’ll address common frustrations with Query/400, discuss major data access and distribution trends, and look at more advanced query tools. Plus, you’ll learn how a tool like Sequel lightens IT’s load by:

    - Accessing real-time data, so you can make real-time decisions
    - Providing run-time prompts, so users can help themselves
    - Delivering instant results in Microsoft Excel and PDF, without the wait
    - Automating the query process with on-demand data, dashboards, and scheduled jobs

  • How to Manage Documents the Easy Way

    What happens when your company depends on an outdated document management strategy?
    Everything is harder.
    You don’t need to stick with the status quo anymore.
    Watch the webinar to learn how to put effective document management into practice and:

    • Capture documents faster, instead of wasting everyone’s time
    • Manage documents easily, so you can always find them
    • Distribute documents automatically, and move on to the next task

     

  • Lessons Learned from the AS/400 Breach

    Get actionable info to avoid becoming the next cyberattack victim.
    In “Data breach digest—Scenarios from the field,” Verizon documented an AS/400 security breach. Whether you call it AS/400, iSeries, or IBM i, you now have proof that the system has been breached.
    Watch IBM i security expert Robin Tatam give an insightful discussion of the issues surrounding this specific scenario.
    Robin will also draw on his extensive cybersecurity experience to discuss policies, processes, and configuration details that you can implement to help reduce the risk of your system being the next victim of an attack.

  • Overwhelmed by Operating Systems?

    In this 30-minute recorded webinar, our experts demonstrate how you can:

    • Manage multiple platforms from a central location
    • View monitoring results in a single pane of glass on your desktop or mobile device
    • Take advantage of best practice, plug-and-play monitoring templates
    • Create rules to automate daily checks across your entire infrastructure
    • Receive notification if something is wrong or about to go wrong

    This presentation includes a live demo of Network Server Suite.

     

  • Real-Time Disk Monitoring with Robot Monitor

    You need to know when IBM i disk space starts to disappear and where it has gone before system performance and productivity start to suffer. Our experts will show you how Robot Monitor can help you pinpoint exactly when your auxiliary storage starts to disappear and why, so you can start taking a proactive approach to disk monitoring and analysis. You’ll also get insight into:

    • The main sources of disk consumption
    • How to monitor temporary storage and QTEMP objects in real time
    • How to monitor objects and libraries in real time and near-real time
    • How to track long-term disk trends

     

     

  • Stop Re-keying Data Between IBM i and Other Applications

    Many businesses still depend on RPG for their daily business processes and report generation. Wouldn’t it be nice if you could stop re-keying data between IBM i and other applications? Or if you could stop replicating data and start processing orders faster? Or what if you could automatically extract data from existing reports instead of re-keying? It’s all possible. Watch this webinar to learn about:

    • The data dilemma
    • 3 ways to stop re-keying data
    • Data automation in practice

    Plus, see how HelpSystems data automation software will help you stop re-keying data.

     

  • The Top Five RPG Open Access Myths....BUSTED!

    When it comes to IBM Rational Open Access: RPG Edition, there are still many misconceptions - especially where application modernization is concerned!

    In this Webinar, we'll address some of the biggest myths about RPG Open Access, including:

    • Modernizing with RPG OA requires significant changes to the source code
    • The RPG language is outdated and impractical for modernizing applications
    • Modernizing with RPG OA is equivalent to "screen scraping"

     

  • Time to Remove the Paper from Your Desk and Become More Efficient

    Too much paper is wasted. Too much time is spent trying to locate documents in endless filing cabinets. And distributing documents is expensive and takes up far too much time.
    These are just three common reasons why it might be time for your company to implement a paperless document management system.
    Watch the webinar to learn more and discover how easy it can be to:

    • Capture
    • Manage
    • And distribute documents digitally

     

  • IBM i: It’s Not Just AS/400


    IBM’s Steve Will talks AS/400, POWER9, cognitive systems, and everything in between

    Are there still companies that use AS/400? Of course!

    IBM i was built on the same foundation.
    Watch this recorded webinar with IBM i Chief Architect Steve Will and IBM Power Champion Tom Huntington to gain a unique perspective on the direction of this platform, including:

    • IBM i development strategies in progress at IBM
    • Ways that Watson will shake hands with IBM i
    • Key takeaways from the AS/400 days

     

  • Ask the RDi Experts

    Watch this recording where Jim Buck, Susan Gantner, and Charlie Guarino answered your questions, including:

    • What are the “hidden gems” in RDi that can make me more productive?
    • What makes RDi Debug better than the STRDBG green screen debugger?
    • How can RDi help me find out if I’ve tested all lines of a program?
    • What’s the best way to transition from PDM to RDi?
    • How do I convince my long-term developers to use RDi?

    This is a unique, online opportunity to hear how you can get more out of RDi.

     

  • Node.js on IBM i Webinar Series Pt. 2: Setting Up Your Development Tools

    Profound Logic Software, Inc.

    Have you been wondering about Node.js? Our free Node.js Webinar Series takes you from total beginner to creating a fully-functional IBM i Node.js business application. In Part 2, Brian May teaches you the different tooling options available for writing code, debugging, and using Git for version control. Attend this webinar to learn:

    • Different tools to develop Node.js applications on IBM i
    • Debugging Node.js
    • The basics of Git and tools to help those new to it
    • Using NodeRun.com as a pre-built development environment

     

     

  • Inside the Integrated File System (IFS)

    During this webinar, you’ll learn basic tips, helpful tools, and integrated file system commands—including WRKLNK—for managing your IFS directories and Access Client Solutions (ACS). We’ll answer your most pressing IFS questions, including:

    • What is stored inside my IFS directories?
    • How do I monitor the IFS?
    • How do I replicate the IFS or back it up?
    • How do I secure the IFS?

    Understanding what the integrated file system is and how to work with it must be a critical part of your systems management plans for IBM i.

     

  • Expert Tips for IBM i Security: Beyond the Basics

    In this session, IBM i security expert Robin Tatam provides a quick recap of IBM i security basics and guides you through some advanced cybersecurity techniques that can help you take data protection to the next level. Robin will cover:

    • Reducing the risk posed by special authorities
    • Establishing object-level security
    • Overseeing user actions and data access

    Don't miss this chance to take your knowledge of IBM i security beyond the basics.

     

     

  • 5 IBM i Security Quick Wins

    In today’s threat landscape, upper management is laser-focused on cybersecurity. You need to make progress in securing your systems—and make it fast.
    There’s no shortage of actions you could take, but what tactics will actually deliver the results you need? And how can you find a security strategy that fits your budget and time constraints?
    Join top IBM i security expert Robin Tatam as he outlines the five fastest and most impactful changes you can make to strengthen IBM i security this year.
    Your system didn’t become insecure overnight, and you won’t be able to turn it around overnight either. But quick wins are possible with IBM i security, and Robin Tatam will show you how to achieve them.

  • How to Meet the Newest Encryption Requirements on IBM i

    A growing number of compliance mandates require sensitive data to be encrypted. But what kind of encryption solution will satisfy an auditor and how can you implement encryption on IBM i? Watch this on-demand webinar to find out how to meet today’s most common encryption requirements on IBM i. You’ll also learn:

    • Why disk encryption isn’t enough
    • What sets strong encryption apart from other solutions
    • Important considerations before implementing encryption

     

     

  • Security Bulletin: Malware Infection Discovered on IBM i Server!

    Malicious programs can bring entire businesses to their knees—and IBM i shops are not immune. It’s critical to grasp the true impact malware can have on IBM i and the network that connects to it. Attend this webinar to gain a thorough understanding of the relationships between:

    • Viruses, native objects, and the integrated file system (IFS)
    • Power Systems and Windows-based viruses and malware
    • PC-based anti-virus scanning versus native IBM i scanning

    There are a number of ways you can minimize your exposure to viruses. IBM i security expert Sandi Moore explains the facts, including how to ensure you're fully protected and compliant with regulations such as PCI.

     

     

  • Fight Cyber Threats with IBM i Encryption

    Cyber attacks often target mission-critical servers, and those attack strategies are constantly changing. To stay on top of these threats, your cybersecurity strategies must evolve, too. In this session, IBM i security expert Robin Tatam provides a quick recap of IBM i security basics and guides you through some advanced cybersecurity techniques that can help you take data protection to the next level. Robin will cover:

    • Reducing the risk posed by special authorities
    • Establishing object-level security
    • Overseeing user actions and data access

     

     

     

  • 10 Practical IBM i Security Tips for Surviving Covid-19 and Working From Home

    Now that many organizations have moved to a work-from-home model, security concerns have risen.

    During this session Carol Woodbury will discuss the issues that the world is currently seeing such as increased malware attacks and then provide practical actions you can take to both monitor and protect your IBM i during this challenging time.

     

  • How to Transfer IBM i Data to Microsoft Excel

    3 easy ways to get IBM i data into Excel every time

    There’s an easier, more reliable way to import your IBM i data to Excel: it’s called Sequel. During this webinar, our data access experts demonstrate how you can simplify the process of getting data from multiple sources—including Db2 for i—into Excel. Watch to learn how to:

    • Download your IBM i data to Excel in a single step
    • Deliver data to business users in Excel via email or a scheduled job
    • Access IBM i data directly using the Excel add-in in Sequel

    Make 2020 the year you finally see your data clearly, quickly, and securely. Start by giving business users the ability to access crucial business data from IBM i the way they want it—in Microsoft Excel.

     

     

  • HA Alternatives: MIMIX Is Not Your Only Option on IBM i

    In this recorded webinar, our experts introduce you to the new HA transition technology available with our Robot HA software. You’ll learn how to:

    • Transition your rules from MIMIX (if you’re happy with them)
    • Simplify your day-to-day activities around high availability
    • Gain back time in your work week
    • Make your CEO happy about reducing IT costs

    Don’t stick with a legacy high availability solution that makes you uncomfortable when transitioning to something better can be simple, safe, and cost-effective.

     

     

  • Comply in 5! Well, actually UNDER 5 minutes!!


    TRY the one package that solves all your document design and printing challenges on all your platforms.

    Produce bar code labels, electronic forms, ad hoc reports, and RFID tags – without programming! MarkMagic is the only document design and print solution that combines report writing, WYSIWYG label and forms design, and conditional printing in one integrated product.

    Request your trial now!

  • Backup and Recovery on IBM i: Your Strategy for the Unexpected

    Robot automates the routine tasks of iSeries backup and recovery, saving you time and money and making the process safer and more reliable. Automate your backups with the Robot Backup and Recovery Solution. Key features include:
    - Simplified backup procedures
    - Easy data encryption
    - Save media management
    - Guided restoration
    - Seamless product integration
    Make sure your data survives when catastrophe hits. Try the Robot Backup and Recovery Solution FREE for 30 days.

  • Manage IBM i Messages by Exception with Robot

    Managing messages on your IBM i can be more than a full-time job if you have to do it manually. How can you be sure you won’t miss important system events?
    Automate your message center with the Robot Message Management Solution. Key features include:
    - Automated message management
    - Tailored notifications and automatic escalation
    - System-wide control of your IBM i partitions
    - Two-way system notifications from your mobile device
    - Seamless product integration
    Try the Robot Message Management Solution FREE for 30 days.

  • Easiest Way to Save Money? Stop Printing IBM i Reports

    Robot automates report bursting, distribution, bundling, and archiving, and offers secure, selective online report viewing.
    Manage your reports with the Robot Report Management Solution. Key features include:

    - Automated report distribution
    - View online without delay
    - Browser interface to make notes
    - Custom retention capabilities
    - Seamless product integration
    Rerun another report? Never again. Try the Robot Report Management Solution FREE for 30 days.

  • Hassle-Free IBM i Operations around the Clock

    For over 30 years, Robot has been a leader in systems management for IBM i.
    Manage your job schedule with the Robot Job Scheduling Solution. Key features include:
    - Automated batch, interactive, and cross-platform scheduling
    - Event-driven dependency processing
    - Centralized monitoring and reporting
    - Audit log and ready-to-use reports
    - Seamless product integration
    Scale your software, not your staff. Try the Robot Job Scheduling Solution FREE for 30 days.

  • ACO MONITOR Manages your IBM i 24/7 and Notifies You When Your IBM i Needs Assistance!

    More than a paging system, ACO MONITOR is a complete systems management solution for your Power Systems running IBM i. ACO MONITOR manages your Power System 24/7, uses advanced technology (like two-way messaging) to notify on-duty support personnel, and responds to complex problems before they reach critical status.

    ACO MONITOR is proven technology and is capable of processing thousands of mission-critical events daily. The software is pre-configured, easy to install, scalable, and greatly improves data center efficiency.