Linux Performance Administration
By Mark Rais, senior editor for reallylinux.com


There are plenty of general server admin articles available on the internet. Unfortunately, few of them delve into the subject of performance administration, especially from an introductory Linux perspective.

Yet, it is this kind of information that new Linux administrators require to ensure they are effective at their job.

With this latest article, I provide a quick first guide for those getting started with Linux performance administration. I personally use these commands and tools in my own context and continue to find them of value when dealing with enterprise Linux performance.

My hope is that this article includes some beneficial tips for those just getting started with basic performance monitoring. If you need more server commands, my Linux Server Administration article offers further details.

Understanding how system I/O, CPU, and memory work in conjunction with your specific Linux applications is helpful for proper performance management.

As you begin your own Linux server endeavors, you may find benefits from using the following commands and tools to help assess system performance:

free

A nice name for a quick command to check system memory. It is useful when trying to determine the status of your SWAP file during certain load instances. From the command line type:

free -l

I tend to run this command with the -l option, since it helps determine where my low memory and high memory stats stand during peak loads or testing. Regardless of how you use it, you'll find this a handy command for basic memory statistics and performance analysis.
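To keep historical memory statistics for comparison across peak loads, a short snippet like the following appends a timestamped free snapshot to a log file. The log path here is only an example; pick a location that suits your own server:

```shell
# Append a timestamped memory snapshot to a history log.
# /tmp/free_history.log is an example path; adjust for your server.
LOG=/tmp/free_history.log
echo "=== $(date '+%d_%b_%Y %H:%M') ===" >> "$LOG"
free -l >> "$LOG"
```

Run from cron at regular intervals, the resulting log makes it easy to compare low and high memory figures before and after load tests.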


ps

Although some administrators avoid using the ps command to review running processes, it can be useful when establishing your server’s baseline and monitoring errant processes. Type from the command line:

ps aux
Or perhaps a more useful variant is:
ps -ef | grep term

In the above example replace the word term with a specific string or name you want to review from the output.

Using the options above ensures you can view key system processes and search for any specific context. As the server changes with enhancements and updates, keeping a record of ps output serves as a baseline. You can get more details regarding the use of ps on my Commands For Guru Wanna-bees page.
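When establishing that baseline, it can also help to sort the ps output by resource usage. Here is a minimal sketch using GNU ps's -eo and --sort options:

```shell
# Show the five processes currently using the most CPU.
# --sort=-%cpu sorts descending by CPU; use --sort=-%mem to sort by memory instead.
ps -eo pid,ppid,%cpu,%mem,comm --sort=-%cpu | head -6
```

Saving a run of this alongside your full ps output gives a quick at-a-glance view of which processes dominate the machine.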

Also remember that to determine whether core processes and daemons are active you still need to use the following command:

/sbin/chkconfig --list


top

You may already be familiar with this command. It is especially helpful since it displays system statistics and details regarding active processes. When trying to identify I/O bound or CPU bound processes that seem to be dragging your performance down, give this command a try:

top 

Using this basic syntax gives you a summary view of the system with a listing of users, memory usage, CPU usage, and associated processes.

It is also beneficial to implement a simple bash script that saves historical benchmark output, and this is especially useful with the top command.

Below is a simple bash script I created for saving historical log files that may help:

#!/bin/bash
INTERVAL=$1
DIR_DATE=`date '+%d_%b_%Y'`
TOP_LOG_DIR=/opt/performance/top/${DIR_DATE}
JOB_NAME=`basename $0`
TIME=`date '+%H%M'`

TOP_LOG_FILE=${TOP_LOG_DIR}/topoutput-${TIME}.out

if [ "$1" = "" ]
then
    echo "Please include an interval after ${JOB_NAME}"
else
    if [ ! -d ${TOP_LOG_DIR} ]
    then
        mkdir -p ${TOP_LOG_DIR}
    fi

    top -b -d ${INTERVAL} -n 10 > ${TOP_LOG_FILE}
fi

If you use the script above, ensure you make the script file executable using chmod, such as:

chmod +x scriptname
Or, you can simply run the script via bash, being sure to include the interval frequency such as:
bash scriptname 5
This runs top in batch mode with a 5-second delay between snapshots, for ten snapshots in total.

You will also need to change the TOP_LOG_DIR= parameter to match your own directory path, and you can adjust the number of top iterations captured in the output file (currently set as -n 10, recording ten snapshots).

The benefit is that this script, when run from cron, can generate useful historical top stats and show where bottlenecks may lie.
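For example, assuming the script above is saved at /opt/performance/bin/top_snapshot.sh (a hypothetical path; substitute your own), a crontab entry like this would capture a batch of top snapshots at the start of every hour:

```shell
# Crontab entry (edit with: crontab -e): hourly top snapshots, 5-second interval.
0 * * * * /opt/performance/bin/top_snapshot.sh 5
```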


uptime

This rudimentary command does exactly what the name states: it reports how long your server has been running.

uptime

What makes this command beneficial for performance is that it displays your average system load over the past 1, 5, and 15 minutes.

Therefore, when trying to troubleshoot anomalies such as latency or bottlenecks, you can use uptime to check existing performance. By keeping historical data from uptime, it is easier to discern whether the problem stems from underpowered CPUs or memory management. It serves as a good quick performance check.
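To build that historical record, a one-line sketch like this stamps each uptime reading into a log (the path is just an example):

```shell
# Append the current time and uptime reading to a history log.
LOG=/tmp/uptime_history.log
echo "$(date '+%d_%b_%Y %H:%M') $(uptime)" >> "$LOG"
# Later, review just the load averages from the log:
grep -o 'load average.*' "$LOG"
```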


vmstat

I find that the vmstat command serves as a useful performance metric, especially when running in a virtualized environment.

vmstat

This tool provides useful information related to your virtual memory, which is especially valuable when tracking down I/O bound application issues or diagnosing memory overcommitment in a virtualized environment. It is always a good idea to generate out files from this command and run it regularly via cron (see my cron article for details); this builds a very helpful historical summary.
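As a sketch, the same historical approach used for top works for vmstat; here a short run of samples goes into a dated out file (the directory is an example only):

```shell
# Capture a short run of vmstat samples into a dated output file.
LOG_DIR=/tmp/performance/vmstat      # example path; adjust for your server
mkdir -p "$LOG_DIR"
OUT="$LOG_DIR/vmstat-$(date '+%d_%b_%Y-%H%M').out"
vmstat 2 3 > "$OUT"                  # 3 samples, 2 seconds apart; increase for cron use
```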


Sysstat Toolset

If you are familiar with UNIX servers, or simply wish to enhance your performance monitoring capabilities, the Sysstat toolset is remarkably useful. It contains a number of beneficial commands such as iostat, pidstat, and the ever useful sar. Most enterprise server admins will find benefit by installing this toolset, originally implemented by Sebastien Godard.

It is easily downloaded and installed from the website either as a tar file (sysstat-X.X.X.tar.gz) or as an RPM. Please note that RHEL and CentOS users should be sure to download the sysstat-X.X.X-1.i586.rpm file.

Below are a few useful commands that come with the Sysstat tools:

iostat

This command allows you to instantly identify I/O performance on all mounted devices.

 iostat 2 10

Similar in spirit to the sar command, the invocation above updates the I/O reads/writes/tps for mounted drives, refreshing the stats every 2 seconds, up to 10 times.

If you are using iostat in conjunction with sar output to historical log files, then I recommend you instead use the iostat -d option, which displays only the I/O stats without CPU utilization. I strongly recommend you add this to your cron entries and run it routinely with output to a historical log file. If you need guidance with cron, then try the tips in my article on the basics of using cron.
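As an example of that cron approach (the paths are illustrative only), an hourly entry appending device-only stats to a dated log might look like this:

```shell
# Crontab entry: each hour, append ten device-only iostat samples, 2 seconds apart.
# Note that % must be escaped as \% inside crontab entries.
0 * * * * /usr/bin/iostat -d 2 10 >> /opt/performance/iostat/iostat-$(date +\%d_\%b_\%Y).log 2>&1
```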

sar

This is a helpful administrator command to establish historical CPU states and daily performance activity.

 sar -u 2 10

The above options provide CPU utilization stats for every 2 seconds, up to 10 increments.

You can use many other options, such as the -o option, which saves sar's output to a file in its binary format; you can read such a file back later with the -f option. You may also redirect (>) the output from sar to a text file or use cron to run it on a schedule.

It is important to routinely run the sar command to develop a historical sense of server performance and CPU utilization. Most system administrators include the sar command with others in their crontab entries. If you need help with more introductory Linux server commands, then read my latest reallylinux.com article for Beginning Server Administrators.

You may also wish to integrate the sar command with an existing bash maintenance script on your Linux server. In any case, you'll find it very useful, especially when you keep historical records and then encounter an anomaly.
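For instance, a crontab entry like the following (paths illustrative only) records CPU samples in sar's binary format via -o, which you can replay later with sar -f:

```shell
# Crontab entry: hourly, save ten 2-second CPU samples in sar's binary format.
# Review the data later with: sar -f /opt/performance/sar/cpu.dat
0 * * * * /usr/bin/sar -u 2 10 -o /opt/performance/sar/cpu.dat > /dev/null 2>&1
```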


Bonnie++ Tool

The Bonnie++ performance tools are very useful for managing hard disk and file system performance benchmarks. Bonnie++ was created by Russell Coker in Australia, based on the original Bonnie by Tim Bray. The Bonnie++ suite of tools makes benchmarking on Linux servers quite effective and easy.

It is readily available for download as a tar file (tgz), and it is also hosted on SourceForge.

What makes this suite of tools so useful is that benchmarking file system writes/reads on massive as well as minuscule files is incredibly easy and quick. It gives an administrator one more option for observing and managing performance on a Linux server.


Hopefully, by using the commands and tools described above, you will be far more effective at identifying potential throughput anomalies in any type of Linux environment.


Mark Rais is currently the senior editor for reallylinux.com and previously served as a senior manager at Netscape and AOL.

NOTICE this is only ONE of several Linux command lists.

Also read our Beginner Linux commands and commands for files and directories:
Commands for Linux administrators
Linux network administration commands
Beginning Linux Commands
Files and Permissions
Directory navigation
Commands for Guru-Wannabees