Best Practices for System i Optimization
System Administration - Performance Monitoring & Tuning
Written by Chris Thomas
Sunday, 09 November 2008 19:00
Data volumes grow constantly, even when business is slow. Keeping storage and processing optimized today can save your skin when business booms.
Editor's note: This article provides a summary insight into the 12-page, 5000-word whitepaper System i Optimization Best Practices. The full white paper can be accessed for free by clicking here.
One of the most frustrating aspects of managing systems is the natural yet unfair disconnect between business volumes and your workload. When economies drag and sales sag, businesses universally look for places to clip budgets. And if management does not see IT as a core contributor to revenues, but rather as just another overhead cost to be constrained, you end up struggling with data volumes that keep growing--even in slow times. Growing data volumes always end up compounding into bigger system problems: slower application response times, longer batch processing and reporting times, ever-increasing backup windows and, eventually, increasingly desperate measures to clear out some DASD because no new hardware purchases are being approved.
The key to saving your sanity (and maybe even reclaiming part of your weekends) can be found in continual, methodical, and (most importantly) automated system optimization. But even before you can set about establishing an effective optimization plan, you have to be able to see where the problems are and where data growth is not being managed appropriately. There are several prime suspects in just about every System i shop.
Don't Guess. Measure!
All aspects of System i performance and resource usage can and should be measured and monitored. Knowing the facts--facts that continually change as your System i usage evolves--is critical to maintaining System i health.
The difference between subsecond system response and, say, a two- or three-second response does not seem substantial until you multiply it by many thousands of transactions a day. For example, two extra seconds on each of 5,000 daily transactions adds 10,000 seconds--nearly three hours--of accumulated waiting, so any additional response time per transaction can quickly translate into a daily loss of several hours of productivity across the whole company. Worse, slow applications cause users to become frustrated (with you), which is bad enough, but when the users are customers trying to order from a Web-based sales system, frustration can result in significant lost revenue.
Thought we were talking about data volumes? We are. Measuring and understanding when CPU utilization is highest gives you critical answers: which applications are busiest, when they run, and how much of the system they consume.
Once you know which apps are chewing up your CPU, it gets easier to identify where your data volume issues are coming from.
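On the system itself, a few standard CL commands give you this visibility. A minimal sketch (subsystem, job, and pool details will vary by shop):

```
/* Active jobs sequenced by CPU percentage: shows which      */
/* applications are consuming the processor right now        */
WRKACTJOB SEQ(*CPUPCT)

/* Overall system status: CPU utilization, system ASP usage, */
/* and storage pool activity                                 */
WRKSYSSTS

/* Per-unit disk statistics: percent used and I/O activity   */
WRKDSKSTS
```

Run these at your known peak periods (and again off-peak) and record the results; the trend over time matters more than any single snapshot.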
During leaner economic times, the fact that new disk keeps getting cheaper is no comfort. You have to make do with what you have. When you start to "hit the wall" on available disk space, the space you have left becomes precious. Back in the day, maybe you could afford to let DASD get cluttered with 20 or 30 percent logically deleted (but not physically deleted) records. But no longer. Knowing exactly how much space is being wasted in this way is the first step in gaining some breathing room.
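Deleted-record counts are easy to check. As a sketch (MYLIB and MYFILE are placeholder names), DSPFD reports the figure directly, and the outfile form lets you scan a whole library at once:

```
/* Member description for one file; the output includes a    */
/* "Number of deleted records" figure                        */
DSPFD FILE(MYLIB/MYFILE) TYPE(*MBR)

/* The same data for every file in the library, written to   */
/* an outfile you can query; the deleted-record count is in  */
/* field MBNDTR of the model file QAFDMBR                    */
DSPFD FILE(MYLIB/*ALL) TYPE(*MBR) OUTPUT(*OUTFILE) +
      OUTFILE(QTEMP/MBRSTATS)
```

Comparing deleted records to total records per member tells you exactly where the reclaimable space is.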
But don't be fooled into thinking that reorganizing DASD once is going to solve your problem for long. The real answer--for now and especially for when business heats up again--is to reorganize early and often. You will likely see substantial gains--in terms of drops in DASD utilization and in performance improvements of the related applications--immediately following your first reorg run. But there is more goodness to be gained. Ongoing, preferably automated reorganization and optimization of files and objects nearly always shows further gains after each successive sweep, for at least several rounds of optimization. From this perspective, tools that automate the monitoring of CPU and DASD statistics and automate the optimization of files and objects become very cost-efficient, with nice, short ROI times.
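The reorganization itself is a single command, and the basic job scheduler can run it unattended. A minimal sketch, again with placeholder names:

```
/* Remove deleted records and recover the space              */
RGZPFM FILE(MYLIB/MYFILE)

/* Schedule the reorg to run automatically every Sunday at   */
/* 2:00 AM, while the file is quiet                          */
ADDJOBSCDE JOB(RGZORDERS) +
           CMD(RGZPFM FILE(MYLIB/MYFILE)) +
           FRQ(*WEEKLY) SCDDAY(*SUN) SCDTIME(020000)
```

Note that RGZPFM normally needs an exclusive lock on the member, so schedule it for a window when the application is not running.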
Archiving and Purging
Back to your CPU utilization stats. If you can tell which apps are adding the most data to your storage, you can likely tease out the other problem as well: data that is overdue to get copied to offline disk or to tape. Which applications are low-volume or rare users of CPU? What files are getting old? Again, low-access or no-access data may not be apparent until you look carefully.
Depending upon the nature of your business, the length of time you need to retain data fully online and ready for instant recall may vary greatly. For example, electronic medical records require very different handling than, say, three-year-old bill-of-lading records generated by your logistics application. But while keeping in mind your business requirements for data retention, applicable government regulations, and the service-level agreements you may have in place with your customers, finding and archiving "dusty data" can be a big help for many systems.
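Last-used dates are the standard way to find that dusty data. A sketch of one approach (library, file, and device names are placeholders):

```
/* Object descriptions, including last-used date, written to */
/* an outfile; the date is in field ODUDAT of model file     */
/* QADSPOBJ                                                  */
DSPOBJD OBJ(MYLIB/*ALL) OBJTYPE(*FILE) DETAIL(*FULL) +
        OUTPUT(*OUTFILE) OUTFILE(QTEMP/OBJSTATS)

/* Once retention rules say a file can go offline, save it   */
/* to tape, then clear the online copy                       */
SAVOBJ OBJ(ORDHST) LIB(MYLIB) DEV(TAP01)
CLRPFM FILE(MYLIB/ORDHST)
```

Verify the save (and your ability to restore from it) before clearing anything.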
There are other areas of file and system optimization in System i environments that bear study as well. Follow the link at the end of this article to read the complete white paper.
The bottom line is that optimizing your System i server is necessary, at a minimum, to improve system performance and reduce storage utilization. But for optimized operation to become the normal state, rather than just an occasional event, system optimization must be done regularly, consistently, and correctly every time. Done manually, it takes considerable time, research, and knowledge, and even in the most skilled hands it is still subject to human error.
Whoever first said that "a job well done need never be done again" was obviously not an IT professional!
To learn more about optimization techniques and practices for the System i, click here to access the complete white paper, System i Optimization Best Practices.
Last Updated on Wednesday, 12 November 2008 02:20