Getting the Most Performance from ODBC Query and Development Tools

Client/server computing is the hottest trend in the industry today. One of the significant factors fueling this trend is the availability of sophisticated desktop development tools that are easy to use. Many development shops are abandoning traditional third-generation language (3GL) programming (such as C, COBOL, and RPG) and are turning to popular fourth-generation language (4GL) development tools. The list of available tools is quite impressive, and equally impressive is the apparent ease of use that these tools claim. I say "apparent" because, more often than not, applications developed using popular 4GL environments do not live up to the performance expectations of the developers and users. (For a more detailed discussion of performance issues in general, see "Maximizing Performance with Client Access/400 ODBC," MC, March 1996.)

For example, all too often, a solution provider will develop an application using a 4GL tool combined with a desktop database management system (DBMS). When the programmer attempts to run the same application using ODBC and a client/server DBMS, the performance is unacceptable. Of course, it's easy to just blame ODBC, but the real problem lies in the fact that applications designed for client/server must be architected quite differently than traditional applications, or they will perform poorly.

If you are building a client/server solution with one of the popular 4GL tools of the day, or if you are sticking with a traditional 3GL approach such as C, this article is for you. I'm going to assume a fairly high level of familiarity with SQL and ODBC. However, if you want more information about the ODBC functions that I discuss, turn to the Microsoft ODBC 2.0 Programmer's Reference and SDK Guide. Here, we will discuss key client/server performance issues and the implications of using popular 4GL query tools and development environments.

In "Maximizing Performance with Client Access/400 ODBC," I discussed the performance implications of client/server environments in contrast with traditional host-centric environments. I further described how these implications influenced the development of the Client Access/400 ODBC driver. Having an ODBC driver that is optimally tuned for performance is only part of the battle, though. Other things to consider are the tools that are used and whether to simply query the data or build complex programs for decision support and online transaction processing (OLTP). Many of these tools tend to violate the golden rule of client/server performance: Don't go to the server unless you have to, and go there in as few trips as possible when you do.

This rule gets violated for many reasons. Probably the foremost is that many 4GL tools were never designed for client/server environments. Instead, they were architected for standalone database access. When the rush to client/server gained momentum, some of these tools were retargeted for client/server without gaining the necessary architecture changes to ensure optimal client/server performance.

Another cause for golden rule violation and poor client/server performance is education. Here, the industry is clearly at fault. We have convinced you that building mission-critical client/server solutions is simple if you just use our 4GL tools. All you have to do is drag icons and draw lines with the click of a mouse button and voila, you've just replaced your legacy mission-critical application! Well, nothing could be further from the truth, as many of you have lived to tell.

Many 4GL development and query tools are available today. A partial list includes the following products:

o Borland Delphi

o Brio Technology DataPrism for AS/400

o Cognos Impromptu

o Computer Associates Visual Express

o Crystal Services Crystal Reports Professional

o Gupta SQLWindows

o IBM VisualAge

o IBM Visualizer for Windows

o Microsoft Access

o Microsoft Visual Basic

o Powersoft PowerBuilder

o ShowCase Vista

o Trinzic Forest and Trees

This is just a sampling. Many more are available, and every tool in the marketplace has its own strengths, weaknesses, and performance characteristics. But most have one thing in common: support for ODBC database servers. However, since ODBC serves as a common denominator for various DBMSs, and since there are subtle differences from one ODBC driver to the next, many tool providers end up writing to the more common ODBC and SQL interfaces and avoid taking advantage of a particular database server's strengths. While this eases programming efforts for the tool vendor, it often hurts overall performance.

Before we launch into some specific examples, let's take a high-level look at the generic 4GL tool architecture and how it relates to application programming logic and database access.

Figure 1 shows how a typical tool translates programming script and tool objects into more mundane 3GL database access. The first thing to notice in Figure 1 is that many tools come packaged with a local standalone DBMS. Many programmers design and test their applications against local databases and then expect to roll the application out into a client/server environment without changes. Many tool user manuals suggest this as a development approach. It simply doesn't work, however, because of the different performance characteristics of client/server environments.

The next critical piece of the architecture in Figure 1 is what I call the Data Access Abstraction Layer. The reason it's so critical is that most database accesses go through this layer, yet many 4GL programmers don't even know that this layer exists! Worse, the programmer or user is often unable to affect this layer's behavior; hence, the term "black box." This layer is responsible for translating the high-level data access requests of the tool into specific DBMS requests, typically using SQL and ODBC. Your application's success will rely heavily on the quality of this layer's output. For example, some tools have a very good knowledge of the various server DBMSs and generate SQL that is known to perform well with each server. Other tools simply lump all server databases into one category and permit the DBMS to do as little as possible, which results in very poor client/server performance.

Along with the quality of the SQL and ODBC calls generated, the frequency with which the calls are generated is a critical aspect to the performance of the application. There are many different ways to accomplish the same thing when using SQL and ODBC. Some methods generate far more trips to the server than others, which degrades performance.

How can you tell the differences from one tool to the next? You must understand the output of the data access abstraction layer, both when evaluating a particular tool and throughout the application development process. In order to understand what this layer is producing, you must see the calls it is making, which is where the ODBC trace utility comes in. The version 2 ODBC driver manager has a built-in trace facility that can be activated using the ODBC Administrator. Simply run the Administrator and select the Options button on the bottom of the list box. This will bring up a dialog box that allows you to trace ODBC calls and direct them to a file for later viewing. Figure 2 shows a typical trace listing for a popular tool.
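The driver manager's trace can also be switched on from within a program, which is handy when you want to capture only a specific section of an application. Here is a minimal C sketch using the ODBC 2.x connection options; the trace file path is an arbitrary example.

    /* Turn ODBC tracing on programmatically instead of through the
       ODBC Administrator (ODBC 2.x connection options). */
    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>

    void EnableTrace(HDBC hdbc)
    {
        /* Direct the trace output to a file of our choosing... */
        SQLSetConnectOption(hdbc, SQL_OPT_TRACEFILE,
                            (UDWORD)(char *)"C:\\ODBC.LOG");

        /* ...then switch tracing on for this connection. */
        SQLSetConnectOption(hdbc, SQL_OPT_TRACE, SQL_OPT_TRACE_ON);
    }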

It is not my intention to describe everything you might see in an ODBC trace. There is far too much to cover, and most of it is of little interest. What is important is that you are able to identify what SQL requests are being made, when they're being made, and what ODBC APIs are being used to pass the SQL to the server. In Figure 2, one SQL SELECT statement is passed to the SQLExecDirect ODBC API. The result of the query is processed using SQLFetch and SQLGetData APIs. For the most part, this is all you have to be able to identify to diagnose performance characteristics based upon the examples described in the following pages.

The performance problems incurred by generating SQL and ODBC calls that pay no attention to the particular ODBC driver or the server DBMS are best shown with a few examples. We'll start by examining ODBC traces from a few popular tools. As mentioned previously, ODBC trace information can give valuable insight into the quality of the ODBC and SQL requests made. Here are the requests of a few different tools (of course, we've changed the names and faces to protect the innocent).

Tool A

Query tool A makes the following ODBC calls to process SELECT statements:

 SQLExecDirect("SELECT * FROM table_name") WHILE there_are_rows_to_fetch DO SQLFetch() FOR every_column DO SQLGetData( COLn ) END FOR ...process the data END WHILE 

This tool does not make use of ODBC bound columns, which would help performance. A faster way to process this is as follows:

 SQLExecDirect("SELECT * FROM table_name") FOR every_column DO SQLBindColumn( COLn ) END FOR WHILE there_are_rows_to_fetch DO SQLFetch() ...process the data END WHILE 

For a table containing one column, there wouldn't be much difference between the two approaches. For a table with 100 columns, though, the first approach makes roughly 100 times as many ODBC calls for every row fetched. The second scenario is further optimized because bound FETCHes have their target data types defined prior to each FETCH, unlike FETCHes processed with SQLGetData calls.
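In C, the bound-column pattern looks roughly like the following sketch. The table, column names, and buffer sizes are invented for illustration; only the call pattern matters.

    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>
    #include <stdio.h>

    /* Fetch a two-column result set with bound columns, so that each
       row costs exactly one ODBC call (SQLFetch). */
    void FetchWithBoundColumns(HDBC hdbc)
    {
        HSTMT  hstmt;
        char   col1[51], col2[51];
        SDWORD cb1, cb2;

        SQLAllocStmt(hdbc, &hstmt);
        SQLExecDirect(hstmt,
                      (UCHAR *)"SELECT COL1, COL2 FROM MYLIB.MYTABLE",
                      SQL_NTS);

        /* Bind once, before the fetch loop; the driver moves each
           column straight into our buffers on every SQLFetch. */
        SQLBindCol(hstmt, 1, SQL_C_CHAR, col1, sizeof(col1), &cb1);
        SQLBindCol(hstmt, 2, SQL_C_CHAR, col2, sizeof(col2), &cb2);

        while (SQLFetch(hstmt) == SQL_SUCCESS)
            printf("%s %s\n", col1, col2);

        SQLFreeStmt(hstmt, SQL_DROP);
    }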

Tool B

Query tool B allows the user to update a spreadsheet of rows and then send the updates to the database. It makes the following ODBC calls:

    FOR every_row_updated DO
        SQLAllocStmt()
        SQLExecDirect("UPDATE...SET COLn='literal'...WHERE COLn='oldval'...")
        SQLFreeStmt( SQL_DROP )
    END FOR

The first thing to note is that the tool performs a statement allocation and drop for every row. Only one allocate statement is needed here, and the free statement call could be changed to SQLFreeStmt( SQL_CLOSE ) after each SQLExecDirect. This would save the overhead of creating and destroying a statement handle for every operation. A second, more important performance concern is the use of SQL with literals instead of parameter markers. The SQLExecDirect() call causes an SQLPrepare and SQLExecute every time. A faster way to perform this operation would be as follows:

    SQLAllocStmt()
    SQLPrepare("UPDATE...SET COL1=?...WHERE COL1=?...")
    SQLBindParameter( new_column_buffers )
    SQLBindParameter( old_column_buffers )
    FOR every_row_updated DO
        ...move each row's data into the parameter buffers
        SQLExecute()
    END FOR

This set of ODBC calls can outperform the original by a large factor. For example, when using the CA/400 ODBC driver, server CPU utilization can decrease to approximately 5 percent of what it was before! Response times can easily improve, dropping to a third of what they were.
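A C rendering of the faster pattern might look like the sketch below. The table, column names, and row source are hypothetical; the point is that SQLPrepare and the two SQLBindParameter calls happen once, and only SQLExecute sits inside the loop.

    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>
    #include <string.h>

    /* Apply nRows updates through one prepared statement with two
       parameter markers. */
    void UpdateRows(HDBC hdbc, int nRows, char newVal[][11], char oldVal[][11])
    {
        HSTMT  hstmt;
        char   newBuf[11], oldBuf[11];
        SDWORD cbNew = SQL_NTS, cbOld = SQL_NTS;
        int    i;

        SQLAllocStmt(hdbc, &hstmt);
        SQLPrepare(hstmt,
                   (UCHAR *)"UPDATE MYLIB.MYTABLE SET COL1 = ? WHERE COL1 = ?",
                   SQL_NTS);

        /* Bind both markers to fixed buffers; only the buffer contents
           change between executions. */
        SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR,
                         10, 0, newBuf, sizeof(newBuf), &cbNew);
        SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR,
                         10, 0, oldBuf, sizeof(oldBuf), &cbOld);

        for (i = 0; i < nRows; i++) {
            strcpy(newBuf, newVal[i]);   /* move this row's data in... */
            strcpy(oldBuf, oldVal[i]);
            SQLExecute(hstmt);           /* ...and run the prepared UPDATE */
        }

        SQLFreeStmt(hstmt, SQL_DROP);
    }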

Tool C (Your Worst Possible Nightmare)

Query tool C allows complex decision-support queries to be built by defining the query criteria through a point-and-click interface. For a particularly complex query, you might think you are generating the following SQL:

    SELECT A.COL1, B.COL2, C.COL3, etc.
    FROM A, B, C, etc...
    WHERE many complex inner and outer joins are specified

The fact that you didn't have to write this complex query yourself sure is nice, but is this statement actually what the tool is processing? Perhaps yes, perhaps no. For example, one tool might pass this statement directly to the ODBC driver, while another would split the query into many individual queries and process the results at the client, like this:

 SQLExecDirect("SELECT * FROM A") SQLFetch() all rows from A SQLExecDirect("SELECT * FROM B") SQLFetch() all rows from B (Process the first join at the client) SQLExecDirect("SELECT * FROM C") SQLFetch() all rows from C (Process the next join at the client) . . . And so on... 

This approach can lead to tremendous amounts of data being passed to the client, which will kill performance. In one real-world example, a programmer thought that a 10-way join was being passed to ODBC, with four rows being returned. Actually, however, 10 simple SELECT statements and all the FETCHs associated with them were passed. The net result of four rows was achieved only after 81,000 ODBC calls were made by the tool! Of course, the programmer was originally blaming ODBC for the slow performance, but not after the ODBC trace was revealed.

The previous examples show different ways to perform the same operation, but with different performance characteristics. If you are using a simple query tool, you typically do not have control over the SQL generated, and you are at the mercy of the programmers who wrote the tool. If you are using a 4GL development environment to build your own programs, you might have greater control over the types of ODBC and SQL calls generated. Or you might not. Evaluate each tool carefully with performance in mind, knowing that, at some point, you may have to exploit a particular feature of one DBMS to either get response times down or to increase scalability. Some tools will let you, some won't.

Although 4GL environments have great advantages for programmer productivity, they offer less control over the resulting code than traditional 3GL development in languages such as C and C++. Sometimes, that increased control can make all the difference, especially where performance scaling is concerned.

A hybrid approach combines the strengths of both environments by implementing performance-critical application pieces, such as the data access layer, in the 3GL environment and invoking them from the 4GL environment (assuming the 4GL tool allows this), as in the sketch below. This not only gives you the power of a 3GL where you need it but, with proper encapsulation, also the ability to make major changes to accommodate increasing performance requirements at late stages of the game.
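As a minimal sketch of the hybrid idea, assuming a 32-bit 4GL that can declare and call DLL entry points: the entire ODBC conversation stays inside the C function, and the 4GL sees only a scalar result. The function and table names are hypothetical.

    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>

    /* Exported from a DLL so a 4GL can call it; all ODBC details are
       encapsulated behind a one-line interface. */
    __declspec(dllexport) long __stdcall CountRows(HDBC hdbc, char *table)
    {
        HSTMT  hstmt;
        char   sql[256];
        long   count = 0;
        SDWORD cb;

        wsprintf(sql, "SELECT COUNT(*) FROM %s", table);
        SQLAllocStmt(hdbc, &hstmt);
        if (SQLExecDirect(hstmt, (UCHAR *)sql, SQL_NTS) == SQL_SUCCESS &&
            SQLFetch(hstmt) == SQL_SUCCESS)
            SQLGetData(hstmt, 1, SQL_C_LONG, &count, 0, &cb);
        SQLFreeStmt(hstmt, SQL_DROP);
        return count;
    }

Because the interface is a single function, the implementation behind it can later be reworked to meet new performance requirements without touching the 4GL side.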

When you make ODBC calls in a 3GL environment, you have full control over the types of ODBC calls and, more importantly, the quality of the SQL requests. There are typically three types of SQL requests when considered from a performance perspective: bad, good, and best. Some 4GL tools can generate good-performing SQL, while others generate only bad-performing SQL. To get the best-performing SQL, however, you usually have to take advantage of a particular DBMS feature, which many 4GLs do not. For example, there are essentially three ways to do INSERTs with DB2/400:

o INSERT using literals

o INSERT using parameter markers

o Blocked INSERT

Many 4GL tools use the first technique, and some use the second. I don't know of any (yet) that take advantage of blocked INSERT. What are the performance implications? Using parameter markers can be three times as fast as using literals, and blocked INSERT is about 20 times as fast as parameter markers, when all three methods are issued through the Client Access/400 ODBC driver. Although this example applies only to the Client Access/400 ODBC driver and DB2/400, implications like these deserve careful consideration in the early stages of application development.
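One common way to express a blocked INSERT through ODBC 2.x is the SQLParamOptions extension, which declares an array of parameter sets so the driver can send many rows in a single execute. The sketch below assumes a hypothetical two-column table; the array sizes and data are invented.

    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>

    #define NROWS 100

    void BlockedInsert(HDBC hdbc)
    {
        HSTMT  hstmt;
        long   ids[NROWS];
        char   names[NROWS][11];
        SDWORD cbId[NROWS], cbName[NROWS];
        UDWORD currentRow;
        int    i;

        /* Fill the arrays with NROWS rows of data. */
        for (i = 0; i < NROWS; i++) {
            ids[i] = i;
            wsprintf(names[i], "NAME%d", i);
            cbId[i] = 0;
            cbName[i] = SQL_NTS;
        }

        SQLAllocStmt(hdbc, &hstmt);
        SQLPrepare(hstmt,
                   (UCHAR *)"INSERT INTO MYLIB.MYTABLE VALUES (?, ?)",
                   SQL_NTS);

        /* Declare NROWS sets of parameters... */
        SQLParamOptions(hstmt, NROWS, &currentRow);

        /* ...bind each marker to the base of its array... */
        SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER,
                         0, 0, ids, 0, cbId);
        SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR,
                         10, 0, names, sizeof(names[0]), cbName);

        /* ...and send all NROWS rows to the server in one execute. */
        SQLExecute(hstmt);

        SQLFreeStmt(hstmt, SQL_DROP);
    }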

The client/server plunge should not be taken lightly. It is important to get your feet wet with a project of manageable size before jumping to a mission-critical application. Set your sights on the long term, and bear in mind that client/server solutions are not cheaper in dollar terms than traditional solutions. Much of the increased cost is in keeping things running with acceptable performance.

Another thing to be wary of is the popular advice of the day. For example, one current trend is the push for tools that can build client/server applications without any knowledge of the server. While this sounds good on paper (decreased programmer training, for example), how successful this approach will be remains unclear.

Consider also the implications of multitier architectures that utilize middle-tier servers in addition to a single data repository. Although they are significantly more complicated to implement, they offer performance scalability that is unprecedented. While you may have several thousand 5250 emulators attached to a single AS/400, you won't end up with ratios anywhere near this when distributed client/server architectures are involved. So what is considered aggressive in a two-tier client/server model? I tend to consider anything over 100 clients per server as a very aggressive client/server project. Of course, it depends on your application, but I would recommend a small number of clients per server for your first project. After that, you can rely on your own gray hair for advice.

Lance C. Amundsen is a member of the Client Access/400 ODBC development team in Rochester, Minnesota. His primary responsibility is identifying and implementing performance enhancements in the ODBC driver.

Reference

Microsoft ODBC 2.0 Programmer's Reference and SDK Guide (ISBN 1-55615-658-8).

Figure 1: Typical Client/Server Tool Database Access Methods



Figure 2: Typical ODBC Trace Listing

    SQLAllocEnv(henv);
    SQLAllocConnect(henv, hdbc);
    SQLSetConnectOption(hdbc, 103, 00000014);
    SQLDriverConnect(hdbc, hwnd, "", 32, ConnStr, 256, ConnStrOut, 0);
    . . .
    SQLAllocStmt(hdbc, hstmt);
    SQLExecDirect(hstmt, "SELECT NEWS_DOC_SEQNBR,NEWS_KEY FROM OINT771", -3);
    SQLFetch(hstmt);
    SQLGetData(hstmt, 1, 99, rgbValue, 252, pcbValue);
    SQLGetData(hstmt, 2, 99, rgbValue, 244, pcbValue);
    . . .
    SQLFreeStmt(hstmt);
    SQLDisconnect(hdbc48470000);