SQL Triggers and Other Trigger Enhancements in V5

Read Trigger

Read triggers have been implemented in OS/400 to satisfy a U.S. federal statute (HIPAA) requiring that all access to patient records in the medical profession be logged and auditable.

Read triggers are available only as external triggers, implemented via the new read trigger event; there is no support for the read event with SQL triggers. The reason for this restriction is that a read trigger has a very specific function. It is not intended for general use, because indiscriminate use alters the performance characteristics of the System i and iSeries and can have a significant negative impact on system performance and throughput.

Any read-only access to a file with a read trigger defined—whether by a user program or a system function—will cause the trigger to fire. Using the Copy File (CPYF) command, the Display Physical File Member (DSPPFM) command, a query product such as Query/400 or SQL, or a user-written display or report program fires the read trigger every time a record is accessed. If you access one million records, you just fired the read trigger one million times. Get the picture?
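As a sketch, a read trigger is registered like any other external trigger with the ADDPFTRG command, using the new *READ trigger event. The file, library, program, and trigger names below are hypothetical, and this example assumes a trigger time of *AFTER, which is the time a read trigger would logically use since there is nothing to change before a read:

ADDPFTRG   FILE(MEDLIB/PATIENT)   TRGTIME(*AFTER)
           TRGEVENT(*READ)   PGM(MEDLIB/READAUDIT)
           TRG(PATIENT_READ_AUDIT)   TRGLIB(MEDLIB)

Here READAUDIT would be an external trigger program that writes an audit record for each access to the PATIENT file.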

The use of a read trigger disables some system functions that optimize system performance and throughput. For example, a read trigger prevents adaptive blocking and double buffering when processing a file sequentially (arrival sequence), and it prevents asynchronous fetch of the next record when processing a file sequentially by key or index. This means that records are read from the file one at a time instead of in optimized blocks, dramatically increasing file I/O time.

The effect is that batch programs using these processing techniques on a file with a read trigger defined will run orders of magnitude longer. This negative impact on performance can be mitigated to a certain extent by increasing CPU power, adding memory, and adding disk arms.

Multiple Triggers per Database or Trigger Event

OS/400 releases prior to V5R1 limited a file to a maximum of six triggers. Many software providers now include triggers in their application packages, which can conflict with the trigger requirements of System i and iSeries users.

When conflicting trigger requirements occur, it can be very difficult or impossible to combine two or more trigger programs into one. IBM's V5R1 solution was to increase the limit of six triggers to 300 and to allow more than one trigger with the same trigger time and trigger event to be defined for a file.

When multiple triggers are defined for the same database event, the triggers are fired in the order they were created. The first trigger created is the first one fired; the last created is the last one fired.

This raises some interesting questions when a trigger has to be deleted and recreated. Suppose the first trigger on the execution list must be deleted and then recreated. It then moves from first on the execution list to last. The question you must ask is whether the trigger depends on its place in the execution list to function properly. If the answer is "yes, the trigger firing order must be maintained," you will need to create two CL or SQL scripts: one that removes all the triggers for the file in question and a second that recreates them in the required sequence.
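As a sketch, such a pair of SQL scripts might look like the following. The trigger, table, and column names are hypothetical; the point is that the CREATE TRIGGER statements in the second script must appear in the desired firing order:

-- Script 1: remove all triggers defined for the ORDERS file
DROP TRIGGER validate_order;
DROP TRIGGER audit_order;

-- Script 2: recreate them in the required firing sequence
CREATE TRIGGER validate_order
     BEFORE INSERT ON orders
     REFERENCING NEW ROW AS n
     FOR EACH ROW MODE DB2ROW
     WHEN (n.qty < 0)
          SET n.qty = 0;

CREATE TRIGGER audit_order
     AFTER INSERT ON orders
     REFERENCING NEW ROW AS n
     FOR EACH ROW MODE DB2ROW
     INSERT INTO order_audit VALUES(n.ordno, CURRENT TIMESTAMP);

Because validate_order is created first, it remains first on the execution list after the scripts run.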

Named Triggers

The increase to 300 triggers per file means that the combination of trigger time and trigger event can no longer be used to identify a trigger. Starting in V5R1, both external and SQL triggers are given a name to provide unique identification when they are created. The trigger name must be unique within a given library (not per file) and can be a maximum of 128 characters long. If you do not provide a name when the trigger is created, DB2 UDB will create a default name, and I guarantee you will not like it!

To support the naming of a trigger, a trigger name and library parameters have been added to the ADDPFTRG and RMVPFTRG commands and the new CREATE TRIGGER and DROP TRIGGER SQL statements (more on these SQL statements later).

Change Physical File Trigger (CHGPFTRG) Command

The CHGPFTRG command changes the state of one or all triggers defined for a file or table. A trigger can be in one of two states: enabled or disabled. When a trigger is disabled, its trigger program is not invoked when the trigger event is satisfied; when it is enabled, the trigger program is invoked.

This eliminates the problem, in prior releases of OS/400, of having to delete a trigger to disable it and then recreate the trigger to enable it again. Following is an example of the CHGPFTRG command syntax.

CHGPFTRG   FILE(Lib_Name/File_Name)   TRG(Trigger_Name)   
          TRGLIB(Lib_Name)   STATE(*DISABLED)
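To reverse the change, the same command is run with STATE(*ENABLED). Assuming the command accepts *ALL for the trigger name (as it does for other OS/400 commands of this kind), every trigger on the file can be re-enabled at once:

CHGPFTRG   FILE(Lib_Name/File_Name)   TRG(*ALL)
           STATE(*ENABLED)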

System Catalog Trigger Enhancements

As an aid in managing the V5R1 trigger enhancements, IBM has added four new files to the System Catalog to log and store information about triggers:

  • SYSTRIGGERS contains one row for each trigger in a library for both external and SQL triggers.
  • SYSTRIGCOL contains one row for each column or field either implicitly or explicitly referenced in the WHEN clause or the SQL statements for an SQL trigger.
  • SYSTRIGDEP contains one row for each object referenced in the WHEN clause or SQL statements for an SQL trigger.
  • SYSTRIGUPD contains one row for each column identified in the UPDATE column list, if any.

SQL Triggers

SQL triggers are a V5R1 enhancement and use one or more SQL statements (instead of a user-provided or written program) within the trigger body to perform the desired action when the trigger fires. SQL trigger support in V5R1 is a superset of the support found in DB2 UDB Version 7.1 and provides an industry-standard method for defining and managing triggers that has a high degree of portability to other database management systems.

SQL triggers use IBM's SQL procedural language to implement the triggered action, and they provide more granularity and function than external triggers, with column or field-level triggers, row or record-level triggers, and (SQL) statement-level triggers.

An SQL trigger can be added to a file or table with the CREATE TRIGGER statement and can be removed with the DROP TRIGGER statement, or both can be done with the Database function in iSeries Navigator.

SQL Trigger Components

In V5R1, an external trigger has the following five components (remember that the trigger name was just added in V5R1): base file or table, trigger name, trigger event, trigger time, and trigger program.

An SQL trigger has the same first four components; however, the trigger program is replaced by five additional components: trigger granularity, transition variables, transition tables, trigger mode, and triggered action.

The base file or table is the physical file or table to which the trigger is added. The trigger name provides unique trigger identification within a library.

The trigger event is the condition that causes the trigger to fire. It can be the insert of a new record or row, the delete of an existing row, the update of an existing row or column, or, in very limited circumstances, the read of an existing row (see "Read Trigger" earlier in this article).

The trigger time is when the triggered action will be performed, either before or after the trigger event completes.

The trigger granularity, in conjunction with the trigger event, determines what causes the trigger to fire. Granularity can be at the column or field level, row or record level, or (SQL) statement level.

Column-level triggers are an extension of the Update trigger event and are available only with SQL triggers. Only an update of those columns listed as part of the update trigger event will cause the trigger to fire and the triggered action to be performed.

The following SQL syntax shows how the column names are listed:

UPDATE OF   Column_Name1, Column_Name2,...

If no columns are listed in the UPDATE OF clause, then an update to any column defined in the row causes the associated trigger to fire.

With a row-level trigger, the associated trigger is fired and the triggered action is performed each time the trigger event is satisfied. If the trigger condition is never satisfied, the triggered action is never performed. An SQL trigger is defined as a row-level trigger with the FOR EACH ROW clause. An external trigger is implicitly a row-level trigger.

Statement-level triggers are available only with SQL triggers, and the triggered action is performed only once per trigger event, regardless of the number of rows processed. If the trigger event is never satisfied, the triggered action is still performed once at the end of the SQL statement processing. A statement-level trigger can be used only in conjunction with a trigger time of After and a trigger mode of DB2SQL (more on trigger mode later) and is defined with the FOR EACH STATEMENT clause.
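As a sketch, a statement-level trigger that writes one audit row per DELETE statement—no matter how many rows that statement removes—might look like the following. The table names are hypothetical; USER and CURRENT TIMESTAMP are DB2 special registers:

CREATE TRIGGER purge_logger
     AFTER DELETE ON expenses
     FOR EACH STATEMENT MODE DB2SQL
     INSERT INTO purge_log VALUES(USER, CURRENT TIMESTAMP);

Note the required combination: trigger time After, mode DB2SQL, and the FOR EACH STATEMENT clause.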

Transition variables provide the same function as the before and after images in the trigger buffer used with external triggers. They provide qualification of the column names for the image of the single row that caused the trigger to fire, before and/or after the trigger event has completed.

Transition variables are not valid with statement-level triggers. They are defined with the OLD ROW clause for the before image and the NEW ROW clause for the after image.

The following SQL syntax is used to describe and reference transition variables as part of an SQL trigger.

...REFERENCING   OLD ROW   AS   Oldrow   NEW ROW   AS   Newrow
...
...WHERE   Newrow.Salary   >   Oldrow.salary   + 10000...

A transition table provides a function analogous to the before and after images in the trigger buffer used with external triggers: it is a temporary table that contains the image of all rows affected before and/or after the trigger event completes. Since a single SQL statement can process multiple rows in a file, a mechanism is needed to track the activity on all the rows processed. Transition tables provide that capability.

A transition table can be used only in conjunction with a trigger time of After and a trigger mode of DB2SQL. It is defined with the OLD TABLE clause for a before image of all affected rows and the NEW TABLE clause for an after image of all affected rows.

The following SQL syntax is used to describe and reference a transition table as part of an SQL trigger.

...REFERENCING   OLD TABLE   AS   Old_Table_Name...
...
...(SELECT   COUNT(*)   FROM   Old_Table_Name)...
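Putting these pieces together, a hypothetical statement-level trigger that uses a transition table to record how many rows a DELETE statement removed might be sketched as:

CREATE TRIGGER count_purged
     AFTER DELETE ON expenses
     REFERENCING OLD TABLE AS purged
     FOR EACH STATEMENT MODE DB2SQL
     INSERT INTO purge_stats
          VALUES(CURRENT TIMESTAMP,
                 (SELECT COUNT(*) FROM purged));

The transition table purged holds the before image of every deleted row, so the subselect yields the row count for the entire statement.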

There are two trigger modes: DB2ROW and DB2SQL. A mode of DB2ROW causes the trigger to fire after each row operation and is valid only with row-level triggers. It is an exclusive function of DB2 UDB for System i and iSeries and is not available in other DB2 UDB implementations. A mode of DB2SQL causes the trigger to fire after all row operations are complete and is valid only with a trigger time of After. If it is specified for a row-level trigger, the triggered action is executed n times after all row operations, where n equals the number of rows processed. This is not as efficient as DB2ROW, since each row is effectively processed twice.

The triggered action is analogous to the trigger program in external triggers and has three parts: the SET OPTION clause, the WHEN clause, and the SQL trigger body:

The SET OPTION clause specifies the options that will be used to create the trigger.

The WHEN clause specifies the search or selection criteria or the execution criteria for the trigger body. In other words, it specifies when the SQL statements in the trigger body will be executed.

The SQL trigger body contains one or more SQL statements that perform the desired action when the trigger fires. Multiple SQL statements in the trigger body are delineated with the BEGIN and END statements. Each complete SQL statement in the trigger body must be ended with a semicolon (;).

The standard DDL and DML SQL statements—such as SELECT, INSERT, DELETE, and CREATE—can be used in the trigger body along with IBM's SQL procedural language.

An SQL trigger can be added to a file with the CREATE TRIGGER statement and removed with the DROP TRIGGER statement. The SQL syntax for these statements is shown below.

Detailed SQL Syntax for Create Trigger Statement

                                    +-NO CASCADE--+          
>>--CREATE TRIGGER--trigger-name--+-+-------------+-BEFORE-+-->  
                                  +-AFTER------------------+     

>--+--INSERT--------------------------+--ON--table-name------->  
   |--DELETE--------------------------|               
   +--UPDATE--+---------------------+-+                         
              |    +-,-----<-----+  |
              |    |             |  |
              +-OF-+-column-name-+--+

>-+---------------------------------------------------------+->  
  |             +-----------------------<-----------------+ |     
  |             |                                         | |
  |             |       +-ROW-+ +-AS-+                    | |   
  +-REFERENCING-+-+-OLD-+-----+-+----+-correlation-name-+-+-+
                  |                                     |
                  |     +-ROW-+ +-AS-+                  |
                  +-NEW-+-----+-+----+-correlation-name-+
                  |                                     |
                  |           +-AS-+                    |
                  +-OLD TABLE-+----+---table-identifier-+ 
                  |                                     |
                  |           +-AS-+                    |
                  +-NEW TABLE-+----+---table-identifier-+

   +--FOR EACH STATEMENT--+  +--MODE DB2SQL--+         
   |                      |  |               |
>--+----------------------+--+---------------+---------------->
   |                      |  |               |
   +--FOR EACH ROW--------+  +--MODE DB2ROW--+

>--+-----------------------------------+---------------------->
   |                                   |
   +--SET OPTION---option-statement----+  

>--+---------------------------------------------------+------>
   |                                                   |
   +--WHEN--(--trigger-body-execution-criteria------)--+  

>--SQL-trigger-body------------------------------------------><


Detailed SQL Syntax for Drop Trigger Statement

>>--DROP TRIGGER----trigger-name------------------------------><

When SQL triggers are created, they have an implicit attribute of ALWREPCHG(*YES). This attribute must be explicitly specified when using external triggers; otherwise, it defaults to ALWREPCHG(*NO).
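For an external before-trigger that changes data, the attribute must therefore be stated explicitly on the ADDPFTRG command. The file, program, and trigger names below are hypothetical:

ADDPFTRG   FILE(Lib_Name/File_Name)   TRGTIME(*BEFORE)
           TRGEVENT(*UPDATE)   PGM(Lib_Name/Pgm_Name)
           ALWREPCHG(*YES)   TRG(Trigger_Name)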

SQL Trigger Examples


Column-Level Trigger with Simple Trigger Body

CREATE  TRIGGER  empsal
     BEFORE  UPDATE  OF  salary  ON  emp
     REFERENCING  NEW  AS  new  OLD  AS  old
     FOR  EACH  ROW  MODE  DB2ROW
     WHEN  (new.salary  >  1.5  *  old.salary)
          SET  new.salary  =  1.5  *  old.salary;

The SQL trigger created in this example is called empsal and is fired before the update of the salary column in the table or file called emp. Transition variables called new and old have been defined for the new row (the after image) and the old row (the before image) and are used to qualify field names referenced in the trigger body. The trigger is a row-level trigger, and its mode is DB2ROW.

The SQL statement in the trigger body is executed when new.salary is greater than 1.5 times old.salary. When this criterion is satisfied, new.salary is set to 1.5 times old.salary, capping the increase. Note that this is the preferred method for modifying or changing data before it is actually written to a table or file.

Row-Level Trigger with Complex Trigger Body

CREATE  TRIGGER  big_spenders  
     AFTER  INSERT  ON  expenses
     REFERENCING  NEW  ROW  AS  n
     FOR  EACH  ROW
     MODE  DB2ROW
     WHEN  (n.totalamount  >  10000)
BEGIN   
     DECLARE  emplname  CHAR(30);
     SET  emplname  =  
          (SELECT  lname  FROM  employee
               WHERE  empid  =  n.empno);
     INSERT  INTO  travel_audit 
          VALUES(n.empno,  emplname,  n.deptno,  
               n.totalamount,  n.enddate);
END

The SQL trigger created in this example is called big_spenders and is fired after the insert of a row in the table or file called expenses. A transition variable called n has been defined for new row (the after image) and will be used to qualify field names referenced in the trigger body. The trigger is a row-level trigger, and its mode is DB2ROW.

Note that there are multiple SQL statements in the trigger body and that they are delineated with BEGIN and END statements. Also note that each SQL statement is ended with a semicolon. The SQL statements in the trigger body are executed when n.totalamount is greater than 10,000. When this criterion is satisfied, a row containing the values n.empno, emplname, n.deptno, n.totalamount, and n.enddate is inserted into the table called travel_audit.

New Trigger Enhancements

IBM has provided significant trigger enhancements in V5R1. The maximum number of triggers per file has been raised from 6 to 300. To support 300 triggers per file, triggers are now given a name, unique within a library, when they are created. The new CHGPFTRG command allows you to easily disable or enable a single trigger or a group of related triggers, and four new files have been added to the System Catalog to log and store trigger information.

SQL triggers provide a new level of function and use one or more SQL statements (instead of a user-provided/written program) within the trigger body to perform the desired action when the trigger fires. SQL trigger support in V5R1 provides an industry-standard method for defining and managing triggers that has a high degree of portability to other database management systems. SQL triggers also provide more granularity and function than external triggers, with column or field-level triggers, row or record-level triggers, and statement-level (SQL) triggers.

Lastly, there is the new read trigger, which is not intended for general use, so you must approach it with extreme caution. Use of read triggers can have a severe detrimental impact on System i and iSeries performance and throughput.

The V5R1 trigger enhancements offer additional options for fulfilling application requirements. With proper use, these enhancements can be very beneficial to programmers.

Skip Marchesani retired from IBM after 30 years and is now a consultant with Custom Systems Corp. and one of the founding partners of System i Developer. Skip spent much of his IBM career working with the Rochester Development Lab on projects for S/38 and AS/400 and was involved with the development of the AS/400. He was part of the team that taught early AS/400 education to customers and IBM lab sites worldwide.

Skip is recognized as an industry expert on DB2 UDB (aka DB2/400) and author of the book DB2/400: The New AS/400 Database. He specializes in providing customized education for any area of the System i, iSeries, and AS/400; does database design and design reviews; and performs general System i, iSeries, and AS/400 consulting for interested clients. He has been a speaker for user groups, technical conferences, and System i, iSeries, and AS/400 audiences around the world. He is an award-winning COMMON speaker and has received its Distinguished Service Award.
