The Linux Letter: Performance

This month, MC Mag Online focuses on performance, which seems uncannily timely. You can't watch television for more than a few minutes without hearing about some sports figure's use of performance-enhancing drugs (or about supplements alleged to improve a different sort of performance). But don't worry, I won't go there. Instead, I'll focus on the type of performance computer professionals are most concerned with: that of their computer systems.

What, Me Worry?

I've been in this business for more years than I like to admit. I remember writing software to restructure the disks (thus enhancing disk performance) on a PDP-11/34 running RSTS, and bit-twiddling algorithms to wring out the last iota of performance from the CPUs. When my company migrated from the System/34 to the System/38, I wrote some of my code in MI to make my software more responsive. And like many of you, I sat in classes at IBM Technical Conferences where the topic was OS/400 performance tuning. Ahhhh...the good old days.

Over the years, OS/400 has evolved--and with it, the tools to manage the system. The gurus in Rochester have automated most of those tedious tuning procedures to the point that most i5 users will never have to give performance tuning a second thought. Unfortunately, this same automation is not yet in Linux, so from a tuning standpoint, Linux is now at a point similar to that of OS/400 V2R3. This is not to say that Linux is a dog. On the contrary, it's quite snappy. But by keeping some simple things in mind, you can really make even mediocre hardware seem quick.

Less Is More

All current Linux distributions come on multiple CDs containing an amazing amount of software. The list of programs that will be presented to you during the system installation can be quite tantalizing. Unless you are creating a play system (a.k.a. a development box), you really need to resist the temptation to load every available package. Instead, load only those packages that support the function for which the machine is destined. There are some very good reasons to do this.

One is that by minimizing the software manifest, the disk space consumed will itself be minimized. (Now, that's obvious!) Although this reason may not seem particularly compelling (given the three-figure gigabyte disks that are now commonly available), you will find that by trimming the fat, your backup and restore times are shorter. We all know that system availability is always an issue, so anything we can do to shorten our backup window is a plus.

Another reason you will want to minimize the software manifest is that it will save you time during the install. There is no doubt that you will want to update your system immediately after loading it. Since virtually all distributions provide their updates via the Internet, and since most package management tools allow you to load new software as well as updates from those networked repositories, it makes sense to initially load only a base system. That way, you can add any packages that you need from the network and get the most up-to-date versions of those packages at the same time. There is no use loading down-level software just to upgrade it immediately thereafter.

If the box on which you are loading Linux will be facing the Internet, then perhaps the most compelling reason to keep package count low is to minimize the tools available to a cracker, should he somehow get shell access to your machine. Providing a full suite of development tools on an Internet server is like building a bank inside a hardware store: Readily available tools make burglary very convenient. Maintaining a secure system is hard enough without handing the tools to the cracker. On the same note, by loading and running software services that you don't need, you provide additional possibilities for exploitation and more packages for you to have to police. If you don't need it, don't load it!

On my systems, I use Red Hat Enterprise Linux and some of its clones, such as CentOS and White Box Linux. When I do an install, I usually choose the "minimum" install so that I get a base system, and then I add the packages I intend to use with the up2date and yum package management tools. The result is a lean, mean computing machine with just the basics installed. I can add whatever else I need later.
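As a sketch of that workflow on such a system (the package names here are illustrative, and whether you use up2date or yum depends on your distribution):

```shell
# Bring the freshly installed base system up to date first:
yum update

# Then add only the packages this server will actually run,
# pulling current versions straight from the repositories:
yum install httpd            # Apache Web server
yum install mysql-server     # MySQL database server
```

Because the packages come from the networked repository, you get the current versions immediately, with no separate update pass afterward.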

Of course, this advice is applicable to any operating system, not just Linux. I mention it only because so many people seem to revel in bloated software installs nowadays. Perhaps that's because it's so much easier to just install everything than it is to actually do some planning. And the current crop of hardware helps to hide this sin. Just keep in the back of your mind that there is always a performance penalty for every bit of software you have running. In your quest for the speediest system possible, less is more.

Don't Get Gooey!

I can't think of a bigger waste of CPU cycles and memory on a server than running a GUI. Unlike Redmond-esque OSes, where the GUI is integrated with the OS, Linux uses a separate application (X windows) to provide a graphical interface. You aren't obliged to waste the resources necessary to load and run one.

For a true iSeries bigot, a text user interface (TUI) is just fine, but I suppose the comfort level with a command line will diminish as IBM drives everyone to iSeries Navigator. If you are already a GUI junkie, you have a couple of options with regard to Linux server administration that will minimize the performance impact.

First, you can load X windows on your system but use it only when necessary. To accomplish this, configure the server to boot into text mode (typically, run level 3). When you want to administer the system, you can log on as root and then issue the command startx, which will give you the GUI fix that you crave. Once you are done with whatever tasks you need to complete, simply log off from the GUI, and you'll find yourself back at the command line. The resources needed for the GUI will be released for more productive use.
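On a Red Hat-style system of that era, the arrangement is a sketch like this (the inittab line is the standard mechanism; adjust to your distribution):

```shell
# In /etc/inittab, boot to run level 3 (multi-user, text mode)
# rather than 5 (graphical login):
#   id:3:initdefault:

# At the text console, start the GUI only when you need it:
startx

# Log out of the desktop when finished; X shuts down, and its
# memory and CPU cycles are returned to the system.
```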

Another option is to take advantage of the remote graphical capabilities of X windows: Log in on a different graphical workstation, and then use it to administer your servers. The secure shell software included with every Linux distribution makes this an exquisitely easy and attractive solution. As an added benefit, you don't need to waste calories walking into the machine room to get to a console. You can do it without ever letting your office chair get cold.
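A sketch of that remote approach (the hostname and the graphical tool shown are illustrative):

```shell
# From your graphical workstation, open an SSH session with
# X11 forwarding enabled:
ssh -X root@server.example.com

# Graphical tools launched in that session run on the server
# but display on your local workstation:
redhat-config-packages &
```

The server itself never has to run a display manager or desktop; only the individual client programs you launch consume resources there.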

Finally, you can use something like Webmin, a Web-based administration client, to assist you in your administrative duties. The GUI-based administration tools normally provided by Linux distributions require X windows to run. However, Webmin and its ilk take a different tack: they use CGI scripts on the server to effect the desired configuration changes. Same great GUI experience, but less filling! If you will be running a Web server (such as Apache) on the box anyway, then there really isn't any appreciable resource cost to using Webmin. The only downside to using a third-party tool like Webmin is that the distribution's documentation will be less useful, since the tools it describes won't be the same. On the other hand, Webmin works on a large number of Linux distributions as well as many UNIX variants and the BSDs, including Mac OS X Server. So you can learn one interface to administer them all.

Application Tuning

Most of the servers I get involved with use the Apache Web server and one of the open-source database management systems such as PostgreSQL or MySQL. Installing one or more of these packages is extremely simple on the Red Hat-based systems I typically use. What you need to realize, however, is that the installations provided are plain vanilla configurations. They don't take into account the capacity of your machine but instead have a default configuration suitable for a typical "Joe six-pack" machine. The defaults will work fine for the majority of the users who employ them, but for the person interested in obtaining maximum performance, some reading of the documentation is in order. It just so happens that Apache, PostgreSQL, and MySQL are excellent examples of applications just begging to be tuned.

The Apache Web server on my Fedora Core 1 system has 44 modules configured to be loaded on startup. There are modules for IMAP access, modules for spellchecking, modules to enable proxy hosts, modules to allow alternate means of authentication, and a host of other modules. Do I actually need all of these loaded? No. If I were interested in maximizing the performance of Apache on my laptop, I would start by removing the cruft embedded within its configuration file, beginning with any modules that I don't need for my Web sites. I can also improve Apache's responsiveness by tweaking the number of worker threads created and kept spare. There is a lot of room for tuning in that configuration file, and a little experimentation can yield some significant results.
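The sort of edits involved in httpd.conf look like the fragment below; the module names and pool sizes are illustrative, so confirm what your sites actually need before disabling anything:

```
# Comment out modules your sites don't use:
#LoadModule proxy_module modules/mod_proxy.so
#LoadModule speling_module modules/mod_speling.so

# Tune the prefork worker pool to match your memory and traffic:
<IfModule prefork.c>
StartServers       4
MinSpareServers    4
MaxSpareServers   10
MaxClients        50
</IfModule>
```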

What about the databases? In the documentation directory for MySQL, you will find four sample configuration files: my-small.cnf, my-medium.cnf, my-large.cnf, and my-huge.cnf. The comments in my-large.cnf start with these lines:

# Example mysql config file for large systems.
#
# This is for large system with memory = 512M where the system
# runs mainly MySQL.

The comments also contain the recommended settings for this type of machine. Any of these configurations are drop-in replacements for the default MySQL configuration and are already optimized for a particular hardware environment. Just a couple of minutes of your time to make the switch will reward you with improved database performance.
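Making the switch is a sketch like the following; the documentation path varies by distribution and package version:

```shell
# Keep a copy of the stock configuration, drop in the sample
# that matches your machine, and restart the server:
cp /etc/my.cnf /etc/my.cnf.orig
cp /usr/share/doc/mysql-server-*/my-large.cnf /etc/my.cnf
service mysqld restart
```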

Although PostgreSQL doesn't provide drop-in configurations like MySQL's, a quick scan of its configuration file shows a plethora of potential tweaks, all based on the size of your target database and of your machine. So where do you start? I keyed the query "postgresql performance" into Google, just to see what I'd get. The first three results returned were Postgresql Database Performance Tuning, Postgresql Performance Tuning (Linux Journal), and Postgresql Performance Tips. The information is out there, easily located, and well worth your time to study.
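To give a flavor of what that configuration file offers, here is an illustrative fragment of a postgresql.conf from that era; the right values depend entirely on your RAM and workload, so treat these numbers as placeholders, not recommendations:

```
shared_buffers = 8192         # shared memory buffers, in 8KB pages
sort_mem = 8192               # per-sort working memory, in KB
effective_cache_size = 65536  # planner's estimate of the OS cache, in 8KB pages
```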

Results from the queries "mysql performance" and "apache performance" were equally rewarding. The bottom line is that you will definitely want to research the applications that you are using to see what you can do to optimize their performance.

For the Hard-Core Bit-Twiddler

One of the extraordinary things about an open-source operating system is that all of the information you could possibly want about the OS is readily available. If you're hard-core, then you will want to install the kernel source package. Even if you don't plan on compiling the kernel, the documentation found within is worth the disk space consumed by the source tree. Some quality time spent with this can give you some interesting clues on how to change the behavior of the Linux kernel.

How easy is it to make changes to a running Linux system? The Linux file system has an interesting directory called "/proc" which, when displayed using the ls command, appears to contain a large number of other directories and files. In actuality, /proc is a view into the running kernel, not real files and directories. You can use its contents to learn about the hardware that the kernel has identified or the processes that the kernel is currently running. But the most important point for the bit-twiddler is that you can retrieve and modify the settings on which the kernel is basing its behavior. For this purpose, you'll want to investigate the /proc/sys/vm directory.
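For example, the swappiness tunable in /proc/sys/vm controls how aggressively a 2.6 kernel swaps idle pages to disk. A sketch of reading and changing it (as root):

```shell
# Inspect the current value:
cat /proc/sys/vm/swappiness

# Change it for the running kernel; the new value lasts until reboot:
echo 10 > /proc/sys/vm/swappiness

# Equivalently, via sysctl; add "vm.swappiness = 10" to
# /etc/sysctl.conf to make the change permanent:
sysctl -w vm.swappiness=10
```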

If you have a busy server, you can tune it for optimum performance. Running a Linux laptop? Then you can tune your machine for optimum battery life. The options are endless! The whole topic of kernel tuning would constitute a series of articles, so I won't go into more detail in the short space I have here. Remember, Google is your friend, so if this topic interests you, be sure to do a "linux kernel tuning" search, grab a cup of coffee, and settle in for some interesting reading.

Don't Settle for Average

Any stock Linux box, with sufficient hardware resources, will give you satisfactory performance for most tasks. The ability to easily make adjustments gives you the capability to wring out all of the performance that your hardware can muster. If things aren't working to your satisfaction, then by all means, do your homework and take advantage of the openness of open-source software. Until Linux catches up with i5/OS in terms of auto-tuning, you have no other choice.

Barry L. Kline is a consultant and has been developing software on various DEC and IBM midrange platforms for over 21 years. Barry discovered Linux back in the days when it was necessary to download diskette images and source code from the Internet. Since then, he has installed Linux on hundreds of machines, where it functions as servers and workstations in iSeries and Windows networks. He co-authored the book Understanding Web Hosting on Linux with Don Denoncourt.
