
Saturday 30 July 2011

Unix Made Easy: DomainKeys, DKIM and SPF with Postfix


DomainKeys, DKIM and SPF with Postfix


SPAM and Phishing have been a growing problem for a long time, and more recently the battle to stamp them out has been getting more aggressive, resulting in a lot of legitimate mail being discarded as SPAM/Phishing.
For less technical users and all but the best system administrators, it is often near impossible to jump through all the hurdles to ensure all mail always gets where it is intended. In many cases misconfigurations, or at least sub-optimal configurations (eg. reverse DNS mismatches), play a part. Adding to the problem is that many anti-SPAM mechanisms discard (or quarantine and then expire) mail after it has been accepted by the server, thus defeating the design of SMTP that no mail should be lost (it should either be delivered or bounced back to the sender). In many cases the mail losses go unnoticed, or are just accepted as normal and people have to re-send mail that does not get through.
Almost all SPAM is forged, often using legitimate addresses or domains as the fake source addresses. Domainkeys (originally proposed by Yahoo!) provides a means of verifying the mail has in fact come from where it claims which all sounds good. If widely implemented this could largely stamp out many Phishing mails and much more.
Additionally, SPF (Sender Policy Framework) can be added to verify the source of the email is legitimate.
These all add credibility to your mail and reduce the risks of having your domain blacklisted or your mail silently discarded by other systems.
There is however plenty to consider....
DomainKeys, warts and all
The first snag we hit is that there are in fact two standards which are confusingly named: DomainKeys (DK) and DomainKeys Identified Mail (DKIM). Although plain DK is historic and should not be used for verification (a point missed by many), there are still systems (including reports of some major web mail providers) out there which run verification against the legacy standard. That wouldn't be so bad if it wasn't that the same DNS records are used by both standards, so if you don't sign outgoing mail with the legacy standard, systems that verify against the legacy standard will see your mail as faked. This means that (for now anyway) we have to support both standards on sending to ensure that mail gets through.
Secondly, almost all the information I have found on setting up DK/DKIM seems to say how to configure things in testing mode (where mail gets treated the same as unsigned mail) and then stops there. Even many of the world's leading tech companies are running their DK/DKIM in testing mode, and I'm sure the ones reading this are thinking that it can't be them! That's fine if they are testing, but few seem to be brave enough to bite the bullet and switch off test mode. This effectively means that although they have DK/DKIM, they are requesting that everyone ignores the DK/DKIM signatures and treats their mail as unsigned.
And finally, it's not without its vulnerabilities. The public key (for verification) is distributed by DNS in a TXT record. This is back to the classic crypto key exchange problem. If your DNS can be compromised or just faked to the receiving server or upstream DNS caches, then anyone can pretend to be sending mail from you, or even DoS your mail (eg. corrupt your public key so verification always fails and your mail gets discarded). This DoS could in fact be used even if the sender doesn't support DK/DKIM, as fake DNS records would tell everyone that they do. Nothing to be alarmed by - it is still possible to disrupt mail flow without DK/DKIM if DNS gets compromised.
Not completely bullet proof then, but it provides more integrity than no verification at all. Personally I think the problem is that SMTP was designed at a time when there was no reason to worry about what would come through email - times have changed and the underlying SMTP protocol isn't hardened against the abuse that happens now. All the add-ons will only have impact if they are widely deployed, but how many admins out there even have a clue they exist? So long as basic SMTP is alive and kicking the problems will continue. The only certain way to stop it is to replace SMTP with a modern protocol where verification is mandatory at every stage, but that's not going to happen any time soon.
DK/DKIM only validates part of the mail, and to get the full benefit of authenticating mail all the way it really needs to be combined with other technologies like SPF and ADSP (both covered later).
Should you use it?
There are a number of aspects to DK/DKIM and SPF that undermine their value:
  • Almost everyone operates their DKIM in test mode, effectively requesting peers to treat mail as unsigned
  • There is no formal requirement for setting a domain policy so mail can easily be forged
  • DKIM ADSP (previously known as ASP) provides the beginnings of a policy mechanism for DKIM, but at this time is not formalised, and recommendations include running it in a mode where it is acceptable not to sign messages. This again defeats the effectiveness of the system.
  • DKIM doesn't authenticate the envelope but rather selected aspects of the mail. This means that if these aspects are replicated exactly, other aspects of the mail (including the envelope and hence the recipient) may be changed.
  • Neither DK/DKIM nor SPF is widely used beyond a few major mail providers. SPF does seem to be more widely deployed in organisations that are heavily phished. The many small providers, corporates etc. aren't paying any attention, and I doubt even know anything about it.
  • None of them protect against account break-ins (eg. via a trojaned machine) and other mail that would appear to authenticate properly.
  • If misconfigured (either on the sending or receiving side), it could make a real disaster of your mail
The big thing in the favour of DKIM and SPF is that they add credibility to mail from your domain. If the mail checks out then odds are it's legit, and in making it more difficult for spammers and fraudsters to use your domain, you reduce the chance of it being blacklisted.
If you are running a highly phished domain then it can be useful to discourage abuse of your domain. That said, of a few high street banks I checked (all of which I see an enormous amount of phishing of), only one had SPF configured - that's all! It's such an easy way to protect their customers from phishing, yet few can be bothered.
How DK/DKIM works
What DK/DKIM does is relatively simple, though not without its warts. The concept is that key parts of the mail get cryptographically signed with a private key on the sending server, and then verified on the receiving server.
Each server doing signing can have a unique "selector" with a matching key, making it easier to have multiple independent machines without having to keep keys in sync across them all. It also provides a degree of isolation if a key or server gets compromised.
It's unclear just how effective it currently is with almost everyone running in test mode, or if some systems are even ignoring that and using DK/DKIM for spam filtering anyway.
A note on chroot in Postfix
Postfix is often run with at least some services chrooted (the default in Debian Lenny), but some older installs do not do this. There are security benefits to chroot, though it does make setup a bit more tricky as sockets for the milters have to be placed within the chroot rather than in /var/run where they would normally be.
Typically Postfix will chroot to its spool directory, /var/spool/postfix.
There are three approaches to dealing with chroot - either create a directory in the chroot area and configure the milters to put their sockets there, create directories and bind mount the relevant directories from /var/run into the chroot, or run the milters networked so that there are no sockets at all.
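For reference, the bind mount approach would look roughly like this for one of the milters (the paths here assume the Debian defaults and are purely illustrative, and the mount would also need adding to /etc/fstab to survive a reboot):
# mkdir -p /var/spool/postfix/var/run/dkim-filter
# mount --bind /var/run/dkim-filter /var/spool/postfix/var/run/dkim-filter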
Personally I favour the first option and create the directory /var/spool/postfix/milter where I configure all the milter sockets to be. This means that the Postfix config will see all the sockets under /milter, while the milter configs will have them under /var/spool/postfix/milter.
# mkdir /var/spool/postfix/milter
# mkdir /var/spool/postfix/milter/dk-filter
# chown dk-filter.dk-filter /var/spool/postfix/milter/dk-filter
# chmod 2755 /var/spool/postfix/milter/dk-filter
# mkdir /var/spool/postfix/milter/dkim-filter
# chown dkim-filter.dkim-filter /var/spool/postfix/milter/dkim-filter
# chmod 0755 /var/spool/postfix/milter/dkim-filter
If you are concerned about the possible risks of having a world writable directory then you could just make subdirectories with appropriate permissions for each milter.
The advantage of sticking to Unix sockets is that the permissions can be controlled making them more secure.
Keep this in mind with the config that follows as you may need to adapt it to the location and configuration of your Postfix.
DKIM preparation
Start off by installing dkim-filter. This is a milter which can be used in Postfix to do signing and verification.
Next, we need to generate keys. I am going to base this on handling multiple domains (eg. virtual hosted) on the same box so we are going to create a key per domain on each server.
# mkdir -p /etc/mail/dkim/keys/domain1
# cd /etc/mail/dkim/keys/domain1
# dkim-genkey -r -d domain1
At this point we have our key pair for domain1. The selector (the identifier that says what key we are using) will be the filename that dkim-filter pulls the key from. We can either rename the key, or I prefer to just symlink it. So for example, if we are on a server mail2.domain1, we probably just want to call the selector mail2 to keep things simple:
# ln -s default.private mail2
Likewise, you can do the same for domain2, domain3, and so on for all the domains that your server handles.
Next, we are going to tell dkim-filter what key to use for what mail. Create a file /etc/dkim-keys.conf and put the following in it:
*@domain1:domain1:/etc/mail/dkim/keys/domain1/mail2
*@domain2:domain2:/etc/mail/dkim/keys/domain2/mail2
*@domain3:domain3:/etc/mail/dkim/keys/domain3/mail2
Now, you need to take some time to look at how your network is configured. In many cases machines may be allowed to use the server as an outbound relay. Any machines that do this need to be explicitly defined else dkim-filter will not sign mail from them. If you do need to tell dkim-filter about these then create a file /etc/dkim-internalhosts.conf and put the machines that can use this server as a relay in:
127.0.0.1
::1
localhost
server2.domain1
server1.domain2
All that remains is the final config in /etc/dkim-filter.conf. Starting from the default, the only thing you will probably need to do is uncomment the line:
KeyList        /etc/dkim-keys.conf
And, add the InternalHosts (if needed):
InternalHosts    /etc/dkim-internalhosts.conf
You may want to take more control over how dkim-filter behaves under different circumstances. See the man page and look at On-* options which may also be added to the config telling it how to handle mail.
If you are running Postfix chroot (see above) then add/change the line in /etc/default/dkim-filter to put the socket within the chroot:
SOCKET="local:/var/spool/postfix/milter/dkim-filter/dkim-filter.sock"
Now restart dkim-filter and hopefully everything will work as expected:
# /etc/init.d/dkim-filter restart
Restarting DKIM Filter: dkim-filter.
The only other thing you need to do is add postfix into the dkim-filter group, or else it will not be able to connect to the socket to talk to dkim-filter.
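On Debian-based systems something like this should do it (assuming the group created by the package is called dkim-filter):
# adduser postfix dkim-filter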
We will handle the Postfix side of things later once all the parts are in place.
DK preparation
This is the historic method and is much less tidy than dkim-filter. Install dk-filter.
We will be using the same keys, but DK normally puts them in a different location. For convenience I just symlink them:
# mkdir /etc/mail/domainkeys
# cd /etc/mail/domainkeys
# ln -s /etc/mail/dkim/keys
The only thing to watch is permissions, as dk-filter tries to read the keys as its own user rather than root. To solve this I suggest changing permissions on the keys:
# cd keys/domain1
# chgrp dk-filter *
# chmod g+r *
And repeat this for all the other domains.
Next we need to create some lists of domains, keys etc. for dk-filter. These are similar to what we did for dkim-filter, but beware, are not all the same.
Easy one first - internal hosts is the same so I just symlink it:
# cd /etc/mail/domainkeys
# ln -s /etc/dkim-internalhosts.conf internalhosts
We also need a list of domains that we should sign. Create /etc/mail/domainkeys/domains containing:
domain1
domain2
domain3
... as needed.
The list of keys to use is also a different format. Create /etc/mail/domainkeys/keylist containing:
*@domain1:/etc/mail/domainkeys/keys/domain1/mail2
*@domain2:/etc/mail/domainkeys/keys/domain2/mail2
*@domain3:/etc/mail/domainkeys/keys/domain3/mail2
... and more as needed.
The config for dk-filter is all done with command line arguments. Typically these would be added in to /etc/default/dk-filter. I have added the following to the bottom of the file:
DAEMON_OPTS="$DAEMON_OPTS -i /etc/mail/domainkeys/internalhosts"
DAEMON_OPTS="$DAEMON_OPTS -d /etc/mail/domainkeys/domains"
DAEMON_OPTS="$DAEMON_OPTS -k -s /etc/mail/domainkeys/keylist"
DAEMON_OPTS="$DAEMON_OPTS -b s"
The last line is important because it causes dk-filter to sign only. If we do verification on DK then we just become part of the problem of legacy systems still running.
If you are running Postfix chroot (see above) then also add/change the line in /etc/default/dk-filter to put the socket within the chroot:
SOCKET="/var/spool/postfix/milter/dk-filter/dk-filter.sock"
Now restart dk-filter and hopefully everything will work as expected:
# /etc/init.d/dk-filter restart
Restarting DomainKeys Filter: dk-filter.
The only other thing you need to do is add postfix into the dk-filter group, or else it will not be able to connect to the socket to talk to dk-filter.
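Again, on Debian-based systems something like this should work:
# adduser postfix dk-filter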
Now that we have the filters working we can get Postfix hooked up.
DK/DKIM Postfix configuration
Edit your /etc/postfix/main.cf file and add the lines (or add to them if you already have milters configured):
smtpd_milters =
    unix:/var/run/dkim-filter/dkim-filter.sock
    unix:/var/run/dk-filter/dk-filter.sock
non_smtpd_milters =
    unix:/var/run/dkim-filter/dkim-filter.sock
    unix:/var/run/dk-filter/dk-filter.sock
milter_default_action = accept
Or for a chroot configuration of Postfix (see above):
smtpd_milters =
    unix:/milter/dkim-filter/dkim-filter.sock
    unix:/milter/dk-filter/dk-filter.sock
non_smtpd_milters =
    unix:/milter/dkim-filter/dkim-filter.sock
    unix:/milter/dk-filter/dk-filter.sock
milter_default_action = accept
These tell Postfix where to find the sockets for talking to the filters. Restart Postfix and hopefully now mail will be getting signed:
# /etc/init.d/postfix restart
Now we need to publish the public keys to make sure that people can verify the mail.
DK/DKIM DNS configuration
One catch here is that you will need to be able to add TXT records whose names contain underscores, and some DNS providers have problems with this.
In the directories where you created the keys for each domain there will be a default.txt file which contains the DNS record that has to be added to that domain. For now I also suggest you add a t=y flag to it to indicate that it should be in test mode (don't treat mail any differently to unsigned mail even if it fails verification):
default._domainkey IN TXT "v=DKIM1; g=*; k=rsa; t=y; p=MIGf........."
In your DNS record change default to whatever the selector is for this server (ie. mail2 in our example):
mail2._domainkey IN TXT "v=DKIM1; g=*; k=rsa; t=y; p=MIGf........."
This is what goes in your DNS zone. Beware that with many web interfaces you will have to put in mail2._domainkey as the name and the part in quotes (not including the quotes) as the value, ensuring that you are creating a TXT record.
For DK, a default policy for a domain is probably worth setting to discourage mail from being rejected by legacy systems verifying DK:
_domainkey IN TXT "t=y;o=~"
This says that it is in testing mode (ie. treat it the same as unsigned mail), and that not all mail will be signed. This should give DK verifiers no reason to reject any mail any more than an unsigned mail.
You can test your DNS config with the policy tester at http://domainkeys.sourceforge.net/policycheck.html and the selector tester at http://domainkeys.sourceforge.net/selectorcheck.html
Testing
It can take some time for DNS to propagate, so ensure that the DNS records you added have become available before trying to test.
It is worth having access to some mail accounts with major mail providers who use DK/DKIM so that you can test. Yahoo! and Google are good places to start, but other providers are also worth testing.
Send mails to your test accounts from all your domains, and from all your test accounts to all your domains, and examine the message headers at the other side.
You should see that DKIM-Signature and DomainKey-Signature headers are being added to the outbound mail.
You should also see a line (typically an Authentication-Results header) indicating that verification has succeeded, eg. dkim=pass.
If you run into trouble then check that the correct fields are making it into the headers.
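As a rough illustration only (the header names are standard but the values here are elided placeholders, the receiving host name is made up, and the exact Authentication-Results format varies between verifiers), the headers look something like this:
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=domain1; s=mail2; h=from:to:subject:date; bh=...; b=...
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=mail2; d=domain1; h=from:to:subject:date; b=...
Authentication-Results: mx.example.net; dkim=pass header.d=domain1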
Also check the DNS:
$ host -t txt _domainkey.domain1
$ host -t txt mail2._domainkey.domain1
It's worth checking against other DNS servers. Google's public DNS is useful for this:
$ host -t txt _domainkey.domain1 8.8.8.8
$ host -t txt mail2._domainkey.domain1 8.8.8.8
$ host -t txt _domainkey.domain1 8.8.4.4
$ host -t txt mail2._domainkey.domain1 8.8.4.4
There are also test reflectors listed at: http://testing.dkim.org/reflector.html
Going Live
Once you have tested sufficiently and run the system in test mode for long enough to be confident that everything is working then you may like to switch off test mode.
To turn off test mode remove the t=y fields from the selector DNS records. At this point other systems should start rejecting mail from your domain that does not verify.
With test mode off, spammers / fraudsters attempting to fake mail from your domain will hit problems, and ultimately people using DKIM should have less reason to block mail from your domain due to it being used as a fake source of SPAM / Phishing.
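For our example selector the published record would then simply be the same as before, just without the test flag:
mail2._domainkey IN TXT "v=DKIM1; g=*; k=rsa; p=MIGf........."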
Keep in mind however that if something goes wrong (eg. someone mangles the DNS records, messes up the dkim-filter configuration or something) then this could also end up disrupting your mail.
I would recommend having some form of monitoring in place to ensure that everything is working as designed and to be able to detect breakages quickly.
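As a very rough sketch of the sort of check that could be run from cron (the selector, domain and alert address below are placeholders for your own values):
#!/bin/sh
# Hypothetical example: warn the postmaster if the DKIM selector record stops resolving.
SELECTOR=mail2
DOMAIN=domain1
if ! host -t txt ${SELECTOR}._domainkey.${DOMAIN} > /dev/null 2>&1; then
    echo "DKIM record ${SELECTOR}._domainkey.${DOMAIN} is not resolving" \
        | mail -s "DKIM DNS check failed" postmaster@${DOMAIN}
fi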
ADSP (was ASP)
DKIM ADSP is at this time not a formal standard, but nonetheless it takes care of the policy for your domain, and hence you may like to put some thought into using it at this stage. It is once again a TXT DNS record and for now a good place to start is:
_adsp._domainkey IN TXT "dkim=unknown"
This simply states that not all mail from the domain will be signed, hence servers should still accept unsigned mail. This is the recommended state, but if you want to start enforcing it (eg. your domain is being faked by phishers) then you can tell the world that all mail should be signed - it's worth verifying that all your mail is actually being signed first:
_adsp._domainkey IN TXT "dkim=all"
You can go a step further and explicitly tell the world to discard mail that is unsigned:
_adsp._domainkey IN TXT "dkim=discardable"
There is still much debate about this, and if discarding mail (rather than rejecting it on the edge servers) is actually a good idea at all. My opinion is that so far as possible, mail should never be discarded as if there is a fault upstream the sender doesn't know about it and can't rectify the problem. If mail is rejected then an increase in failures will be noticed on well run systems which are being monitored and the admins can investigate and correct the problems.
The other problem with discarding mail is that it appears to spammers that they are being successful. Really, I would like to demonstrate as clearly as possible to spammers that they are failing and discourage them by rejecting the mail. If it is clearly a waste of time spamming then less people will try it.
There are arguments about the backscatter / blowback problem with rejecting mail, but again, if systems reject mail then it's a problem for those running systems that relay mail and they should harden their systems. If they are creating backscatter then they deserve to have their servers blacklisted.
Adding SPF
You will often see SPF (Sender Policy Framework) related lines in the headers of mail verified with DKIM, and they work nicely together. SPF is simply a way of publishing policies about what sources of mail should be trusted - kind of like an MX record for sending servers for a domain.
With Postfix I use postfix-policyd-spf-perl to validate SPF. The man page gives you most of what you need to know.
The first thing you need to be aware of is that if you have any secondary servers that forward mail to this one, SPF checks will have to be skipped for them as they will be seen as a source of forged mail. To fix this you will need to edit the actual Perl and add the source addresses of these relays - line 86 in the version in Debian/Lenny:
use constant relay_addresses => map(
    NetAddr::IP->new($_),
    qw( 1.2.3.4 5.6.7.8 )
); # add addresses to qw( ) above separated by spaces, using CIDR notation.
Be aware that if the package is upgraded then these will be overwritten.
Add postfix-policyd-spf-perl to the /etc/postfix/master.cf so that it is started when needed:
spfcheck  unix  -       n       n       -       0       spawn
    user=policyd-spf argv=/usr/sbin/postfix-policyd-spf-perl
Put in your smtpd_recipient_restrictions the policy check:
smtpd_recipient_restrictions =
    reject_invalid_hostname,
    reject_non_fqdn_sender,
    reject_non_fqdn_recipient,
    reject_unknown_sender_domain,
    reject_unknown_recipient_domain,
    permit_mynetworks,
    reject_non_fqdn_hostname,
    permit_sasl_authenticated,
    reject_unauth_destination,
    check_recipient_access pcre:/etc/postfix/toaccess_pcre,
    check_recipient_access hash:/etc/postfix/toaccess,
    check_policy_service unix:private/spfcheck,
    check_policy_service inet:127.0.0.1:60000,
    reject_rbl_client bl.spamcop.net,
    reject_rbl_client dnsbl.sorbs.net,
    reject_rbl_client zen.spamhaus.org,
    permit
Make sure that the line is added after reject_unauth_destination, or else you could end up approving mail to any destination (open relay). At that point you should be ready to go - restart Postfix and see what happens.
All going to plan you should see things like this logged occasionally:
postfix/policy-spf[*****]: : SPF Pass (Mechanism 'ip4:***.***.***.***/**' matched): Envelope-from: someone@somedomain
postfix/policy-spf[*****]: handler sender_policy_framework: is decisive.
Lastly, you need to setup your DNS so that others can verify your mail sources. Although there is a specific SPF record in DNS, for now you will almost certainly have to use a TXT record:
@ IN TXT "v=spf1 mx a:mail.domain include:senders.domain ~all"
@ means default for the domain (ie. when you lookup the base domain), but you can as easily specify the record for subdomains.
v=spf1 identifies it as an SPF record and gives the version.
mx says that mail could come from a machine matching the MX records for your domain. For smaller domains this is often all that is needed.
a specifies an A or AAAA record where mail may come from. This may be an outbound-only mail relay, a security applicance, a webserver that mails customers directly or perhaps a marketing company's systems who sends out mail blasts on your behalf.
include specifies another TXT record to include, which is useful if you run a large outfit and need to break up your records into manageable chunks.
There are various other mechanisms (eg. ip4 and ip6, which specify address ranges) that can be added, but most will probably only be of use to people with large amounts of mail infrastructure to worry about and they can easily be looked up.
Lastly, ~all says that all other sources should soft fail (treated with suspicion rather than rejected outright - useful for testing). This can also be -all meaning to fail (reject/bounce) other sources, ?all meaning to ignore the policy (again useful for testing), and +all meaning to accept all others, which is probably not a good idea. With the a, mx, etc. mechanisms the + is implied - ie. saying mx really means +mx.
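As an illustration, a hypothetical domain that sends all of its mail from its MX hosts plus one known address range (the range below is just a documentation example, not a real one) might publish:
@ IN TXT "v=spf1 ip4:192.0.2.0/24 mx -all"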
You can find much more on this syntax at: http://www.openspf.org/SPF_Record_Syntax
Like with DKIM, this needs testing, and accounts at major web mail providers will often have a verification header in them. Test thoroughly before setting -all, after which other mail sources will not be able to send mail as your domain. If you have forgotten to include one of your legitimate outbound mail sources then it too will be blocked from sending mail.
Record keeping
When deploying technologies like this it is very easy to lose track of all the places where configuration is hiding that needs to be changed if, for example, you add another server or just change the address of an existing one.
With small setups it's generally all left in the head of whoever set it up. As they are not administering it on a continuous basis, they often forget and then mistakes happen. Likewise if they leave, their replacement will have no familiarity with what configuration is where.
In larger organisations there is far more infrastructure and it can be hard work keeping track of it all. Administration is done by many people and unless they communicate effectively it is a recipe for disaster.
In any size organisation, keeping good records of your configuration, work notes of who did what configuration, and checklists/work instructions (eg. for deploying new servers) is vital to ensuring that everything remains under control.
Cacti
I am updating my Postfix templates for Cacti for monitoring DKIM and SPF and these will be available shortly.
Monitoring is vital to smooth running of mail as well as long term planning so get yours configured.


Unix Made Easy: HACMP


HACMP

Cluster planning:

The area of cluster planning is a large one. Not only does it include planning for the types of hardware (CPUs, networks, and disks) to be used in the cluster, but it also includes other aspects. These include resource planning, that is, planning the desired behavior of the cluster in failure situations. Resource planning must take application loads and characteristics into account, as well as priorities.

High availability:

A high availability solution will ensure that any failure of any component of the solution, be it hardware, software, or system management, will not cause the application and its data to be inaccessible to the user community.

High availability is:

The masking or elimination of both planned and unplanned downtime.
The elimination of single points of failure (SPOFs)
Fault resilience, but not fault tolerance

High availability systems are an excellent solution for applications that can withstand a short interruption should a failure occur, but which must be restored quickly.

The difference between fault tolerance and high availability is:
A fault tolerant environment has no service interruption, while a highly available environment has a minimal service interruption.

System downtime is either planned or unplanned, with planned downtime accounting for the vast majority of the total.

HACMP allows you to minimize or eliminate this planned downtime from your operation, by allowing you to maintain a service to your customers while performing hardware upgrades, software upgrades, or other maintenance activity at the same time.

Services may be moved from one cluster node to another at will, allowing the original node to undergo maintenance without affecting the availability of the service. When the maintenance activity is completed, the service may be moved back to the node which was originally running it.

Unplanned downtime has one of two causes: hardware failures and software failures.

Hardware has been getting more and more reliable over time and will continue to do so, but hardware remains a cause of failures. Together with facilities provided by AIX, HACMP can protect your operation from a hardware failure, by automatically moving the services provided by the failing node to other nodes within the cluster.

Cluster nodes

One of HACMP's key design strengths is its ability to provide support across the entire range of RISC System/6000 products. Because of this built-in flexibility and the facility to mix and match RISC System/6000 products, the effort required to design a highly available cluster is significantly reduced.

The designated hardware should only be used on an appropriate IBM eServer pSeries, RS/6000 Platform, or 9076 Scalable POWERParallel Platform (SP).

The following sections will deal with the various options available to you when you are planning your HACMP cluster:

  • Operating system levels
  • CPU options
  • Disk storage for CRM
  • Cluster node considerations


Operating system level

Before installation of HACMP, make sure to have the proper version of the operating system. Here is a list of required operating system levels for HACMP versions (see Table 2-1) and Parallel System Support Program versions (see Table 2-2).

Table 2-1 Required OS level for HACMP

AIX OS level    HACMP 4.3.1    HACMP 4.4.0    HACMP 4.4.1
4.3.2           yes            no             no
4.3.3           yes            yes            yes
5.1             no             yes*           yes

* Note:

The following restrictions apply to HACMP for AIX Version 4.4.0 support of IBM AIX 5L for Power Version 5.1. IBM intends to remove these restrictions through further APARs on HACMP.


Enhanced Journaled File Systems are not supported on shared volume groups.


Fencing is not supported for concurrent mode volume groups created on 9333 disks.


HACMP can only run on 32-bit AIX kernels. Even if the hardware is capable of supporting 64-bit kernels, it must be booted using the bosboot command with a 32-bit kernel.


The VSM-based xhacmpm configuration utility is not supported.

Table 2-2 PSSP versions for SP installation

HACMP version                  Prerequisite PSSP version
HACMP Version 4.3.1 for AIX    PSSP Version 3.1
HACMP Version 4.4.0 for AIX    PSSP Version 3.2
HACMP Version 4.4.1 for AIX    PSSP Version 3.2




CPU options

HACMP is designed to execute with RS/6000 uniprocessors, SMP servers, and SP systems in a no single point of failure server configuration. HACMP supports the IBM eServer pSeries and the RS/6000 models that are designed for server application and meet the minimum requirements for internal memory, internal disk, and I/O slots.

Cluster node considerations:

Your major goal throughout the planning process is to eliminate single points of failure. A single point of failure exists when a critical cluster function is provided by a single component. If that component fails, the cluster has no other way of providing that function, and the service depending on that component becomes unavailable.

HACMP for AIX is designed to recover from a single hardware or software failure. It may not be able to handle multiple failures, depending on the sequence of failures. For example, the default event scripts cannot do an adapter swap after an IP address takeover (IPAT) has occurred if only one standby adapter exists for that network.

How to eliminate the single point of failure

Table 2-6 summarizes potential single points of failure within an HACMP cluster and describes how to eliminate them.



Cluster networks

HACMP differentiates between two major types of networks: TCP/IP networks and non-TCP/IP networks. HACMP utilizes both of them for exchanging heartbeats. HACMP uses these heartbeats to diagnose failures in the cluster. Non-TCP/IP networks are used to distinguish an actual hardware failure from the failure of the TCP/IP software. If there were only TCP/IP networks being used, and the TCP/IP software failed, causing heartbeats to stop, HACMP could falsely diagnose a node failure when the node was really still functioning. Since a non-TCP/IP network would continue working in this event, the correct diagnosis could be made by HACMP. In general, all networks are also used for verification, synchronization, communication, and triggering events between nodes. Of course, TCP/IP networks are used for communication with client machines as well.

TCP/IP networks:

The following sections describe supported TCP/IP network types and network considerations.

Supported TCP/IP network types

Basically every adapter that is capable of running the TCP/IP protocol is a supported HACMP network type. There are some special considerations for certain types of adapters, however. The following gives a brief overview on the supported adapters and their special considerations.

Below is a list of TCP/IP network types as you will find them at the configuration time of an adapter for HACMP.


  • Generic IP
  • ATM
  • Ethernet
  • FCS
  • FDDI
  • SP Switch
  • SLIP
  • SOCC
  • Token-Ring




Heartbeating in HACMP

The primary task of HACMP™ is to recognize and respond to failures. HACMP uses heartbeating to monitor the activity of its network interfaces, devices and IP labels.

Heartbeating connections between cluster nodes are necessary because they enable HACMP to recognize the difference between a network failure and a node failure. For instance, if connectivity on the HACMP network (this network's IP labels are used in a resource group) is lost, and you have another TCP/IP based network and a non-IP network configured between the nodes, HACMP recognizes the failure of its cluster network and takes recovery actions that prevent the cluster from becoming partitioned.

To avoid cluster partitioning, we highly recommend configuring redundant networks in the HACMP cluster and using both IP and non-IP networks. Out of these networks, some networks will be used for heartbeating purposes.

In general, heartbeats in HACMP can be sent over:


  • TCP/IP networks
  • Serial (non-IP) networks (RS232, TMSCSI, TMSSA and disk heartbeating)

The Topology Services component of RSCT carries out the heartbeating function in HACMP.

Topology services and heartbeat rings

HACMP uses the Topology Services component of RSCT for monitoring networks and network interfaces. Topology Services organizes all the interfaces in the topology into different heartbeat rings. The current version of RSCT Topology services has a limit of 48 heartbeat rings, which is usually sufficient to monitor networks and network interfaces.

Heartbeat rings are dynamically created and used internally by RSCT. They do not have a direct, one-to-one correlation to HACMP networks or number of network interfaces. The algorithm for allocating interfaces and networks to heartbeat rings is complex, but generally follows these rules:


  • In an HACMP network, there is one heartbeat ring to monitor the service interfaces, and one for each set of non-service interfaces that are on the same subnet. The number of non-service heartbeat rings is determined by the number of non-service interfaces in the node with the largest number of interfaces.
  • The number of heartbeat rings is approximately equal to the largest number of interfaces found on any one node in the cluster.

Note that during cluster verification, HACMP calls the RSCT verification API. This API performs a series of verifications, including a check for the heartbeat ring calculations, and issues an error if the limit is exceeded.

Heartbeating over IP aliases

This section contains information about heartbeating over IP aliases.


Overview

In general, HACMP™ subnetting requirements can be complicated to understand and may require that you reconfigure networks in AIX® to avoid features such as multiple subnet routes, which can lead to a single point of failure for network traffic.

When planning your cluster networks, you may need to:


  • Reconfigure IP addresses of HACMP interfaces that will be used at boot time, or
  • Update /etc/hosts with the new boot time IP addresses.

Heartbeating over IP Aliases is useful because it:


  • Uses automatically generated IP aliases for heartbeating.

Heartbeating over IP Aliasing provides an option where the addresses used for heartbeating can be automatically configured by HACMP in a subnet range that is outside of the range used for the base NIC or any service addresses.

Although Heartbeating over IP Aliasing automatically configures proper aliases for heartbeating, you must still be aware of the implications of subnet routing for all boot and service IP addresses. That is, failure to plan subnets properly can lead to application failures that are not detectable by HACMP. Reliable HACMP cluster communication still requires that the interfaces on a single network can communicate with the other nodes on that network.

  • Enables you to avoid reconfiguration of boot time addresses and /etc/hosts.

RSCT sets up the heartbeat rings to go over a separate range of IP aliases. This lets you use a specified subnet in a non-routable range for a heartbeat ring, preserving your other subnets for routable traffic. This also allows you to avoid reconfiguring boot time addresses and entries in /etc/hosts.

  • Makes HACMP topology configuration easier to understand.

  • Does not require that you obtain additional routable subnets from the network administrator.

For instance, you can use heartbeating over aliases in HACMP, if due to the network system administration restrictions, the IP addresses that your system can use at boot time must reside on the same subnet. (In general, if there are no system administration restrictions, the IP addresses that your system can use at boot time can reside on either the same or different subnets).





Heartbeating over disk

You can configure a non-IP point-to-point heartbeating network, called a disk heartbeating network, over any shared disk in an enhanced concurrent mode volume group. Heartbeating over disk provides another type of non-IP point-to-point network for failure detection.

Disk heartbeating networks provide an alternative to other point-to-point networks such as RS232 that have cable length restrictions, or TMSSA which require special disk adapter hardware and cabling. Heartbeating over disk does not require additional or specialized hardware, cabling or microcode; it can use any disk that is also used for data and for which volume groups and file systems are included in an HACMP™ resource group.

In a disk heartbeating network, two nodes connected to the disk periodically write heartbeat messages and read heartbeat messages (written by the other node) on a small, non-data portion of the disk. A disk heartbeating network, like the other non-IP heartbeating networks, connects only two nodes. In clusters with more than two nodes, use multiple disks for heartbeating. Each node should have a non-IP heartbeat path to at least one other node. If the disk heartbeating path is severed, at least one node cannot access the shared disk.

You have two different ways for configuring a disk heartbeating network in a cluster:


  • You can create an enhanced concurrent volume group shared by multiple nodes in your cluster. Then you use the HACMP Extended Configuration SMIT path to configure a point-to-point pair of discovered communication devices, or
  • You can start by creating a cluster disk heartbeating network, and then add devices to it using the Add Pre-Defined Communication Interfaces and Devices panel in SMIT.

The HACMP cluster verification utility verifies that the disk heartbeating networks are properly configured.

Heartbeating over disk and fast method for node failure detection

With HACMP, you can reduce the time it takes for node failure to be realized throughout the cluster. If you have a disk heartbeating network configured, and specify a parameter for a disk heartbeating NIM, then when a node fails, HACMP uses a disk heartbeating network to place a departing message on the shared disk so neighboring nodes are aware of the node failure within one heartbeat period.


Friday 29 July 2011

Common Linux log files name and usage


Common Linux log files name and usage

  • /var/log/message: General message and system related stuff
  • /var/log/auth.log: Authentication logs
  • /var/log/kern.log: Kernel logs
  • /var/log/cron.log: Crond logs (cron job)
  • /var/log/maillog: Mail server logs
  • /var/log/qmail/ : Qmail log directory (more files inside this directory)
  • /var/log/httpd/: Apache access and error logs directory
  • /var/log/lighttpd: Lighttpd access and error logs directory
  • /var/log/boot.log : System boot log
  • /var/log/mysqld.log: MySQL database server log file
  • /var/log/secure: Authentication log
  • /var/log/utmp or /var/log/wtmp : Login records file
  • /var/log/yum.log: Yum log files

In short, /var/log is the location where you should find all Linux log files. However, some applications such as httpd have a directory within /var/log/ for their own log files. You can rotate log files using the logrotate software and monitor log files using the logwatch software.
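As a rough illustration (the path and settings below are purely an example, not a recommendation), a logrotate stanza for an application writing its own log file might look like:

/var/log/myapp/myapp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}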

Tuesday 26 July 2011

Red Hat Top Tools (priority order?)


CPU Tools:
 1. top
 2. vmstat
 3. mpstat -P all
 4. ps -ef
 5. sar -u
 6. procinfo
 7. iostat
 8. gnome-system-monitor
 9. KDE-monitor
 10. oprofile

Memory Tools:
 1. top
 2. vmstat -s
 3. ipcs
 4. ps -o vss,rss
 5. sar -r -B -W
 6. meminfo
 7. free
 8. gnome-system-monitor
 9. KDE-monitor
 10. oprofile

Process Tools:
 1. top
 2. ps -o pmem
 3. gprof
 4. strace, ltrace
 5. sar

Disk Tools:
 1. iostat -x
 2. vmstat -D
 3. sar -DEV #
 4. nfsstat
 5. NEED MORE!





Friday 22 July 2011

Linux Server Hardening


chmod 711 /
chmod 711 /home
chmod 711 /etc
chmod 711 /var
chmod 711 /usr/etc
chmod 711 /usr/local/etc
chmod 711 /var/log
chmod 711 /sbin
chmod 711 /usr/sbin
chmod 711 /usr/local/sbin

chmod 644 /etc/motd

groupadd deva
chmod 750 /usr/bin/wget
chown root:deva /usr/bin/wget
chmod 750 /usr/bin/perlcc
chown root:deva /usr/bin/perlcc
chmod 750 /usr/bin/byacc
chown root:deva /usr/bin/byacc
chmod 750 /usr/bin/yacc
chown root:deva /usr/bin/yacc
chmod 750 /usr/bin/cc
chown root:deva /usr/bin/cc
chmod 750 /usr/bin/gcc
chown root:deva /usr/bin/gcc

chmod 700 /bin/dmesg
chmod 700 /bin/mount
chmod 700 /bin/rpm
chmod 700 /usr/bin/write
chmod 700 /usr/bin/talk
chmod 700 /usr/bin/ipcrm
chmod 700 /usr/bin/ipcs
chmod 700 /usr/bin/free
chmod 700 /usr/bin/locate
chmod 700 /usr/bin/wall
chmod 700 /usr/bin/finger
chmod 700 /sbin/arp
chmod 700 /sbin/ifconfig
chmod 700 /usr/sbin/repquota
chmod 700 /usr/sbin/tcpdump
chmod 700 /usr/bin/nmap
chmod 700 /usr/bin/wget
chmod 700 /usr/bin/perlcc
chmod 700 /usr/bin/byacc
chmod 700 /usr/bin/yacc
chmod 700 /usr/bin/cc
chmod 700 /usr/bin/gcc
chmod 700 /usr/bin/who
chmod 700 /usr/bin/w
chmod 700 /usr/bin/nc

chmod 1733 /tmp/.ICE-unix
chmod 1733 /tmp/.X11-unix
chmod 660 /var/run/utmp

chmod 000 /usr/bin/rcp
chmod 000 /usr/bin/links
chmod 000 /usr/bin/scp
chmod 000 /usr/bin/elinks
chmod 700 /usr/bin/lwp-*
chmod 000 /usr/bin/GET
chmod 700 /usr/bin/curl
chmod 700 /usr/bin/*++*
chmod 700 /usr/bin/*cc*
chmod 700 /usr/bin/yum
chmod 700 /usr/bin/up2date
chmod 700 /usr/sbin/up2date


chmod u-s /usr/bin/at
chmod u-s /usr/bin/chage
chmod u-s /usr/bin/chfn
chmod u-s /usr/bin/chsh
chmod u-s /usr/bin/crontab
chmod u-s /usr/bin/expiry
chmod u-s /usr/bin/gpasswd
chmod u-s /usr/bin/lppasswd
chmod u-s /usr/bin/newgrp
chmod u-s /usr/bin/rcp
chmod u-s /usr/bin/rlogin
chmod u-s /usr/bin/rsh
chmod u-s /usr/libexec/ssh-keysign

Thursday 21 July 2011

Linux System Administration



 What is the RPM tool?
 How to verify Red Hat Linux packages?
 How to verify UnitedLinux packages?

 How to install packages?
 How to upgrade packages?
 How to remove packages?


What is the RPM tool?
The Red Hat Package Manager (RPM) utility has become a Linux product standard. It is a robust tool for packaging, installing, upgrading and removing software on Linux.
The <package> and <package_dependency> are composite names in the examples. A Linux RPM package is made up of three components. They are a name, version number and build number. They are referred to as <package> or <package_dependency> in the examples below.

Name - Version - Build
pdksh - 5.2.14 - 13
While there are numerous RPM features, the key tasks are installing, upgrading and removing packages. User & group administration is covered in a later section.


How to verify Red Hat Linux packages?
Red Hat provides a webpage to lookup patches. As a rule, only download patches from the Red Hat site.
Once a patch is downloaded, the rpm utility can be used to examine any conflicts. Conflicts typically occur when a utility replaces a common dependency, like a file. A common file conflict error is linked to documentation files in the man pages.
The rpm command has the capability to view many aspects of packages and configuration, documentation and library files. Below is a summary of query capabilities of the rpm utility.

Options Utility provided
 -qa Lists all installed packages. Generally, the results are piped into a grep for a partial string related to a package.
 -qf file Lists the package that owns a file. It is required to provide the fully qualified path and file name.
 -q package Reports whether a package is installed and, if so, its name, version and build.
 -qi package Lists detailed information about a package.
 -qR package Lists libraries and commands that a package depends on.
 -ql package Lists files in a package.
 -qd package Lists documentation files in a package.
 -qc package Lists configuration files in a package.
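For example, to check whether the pdksh package mentioned earlier is installed, find out which package owns a file, and view a package's details (the output will vary from system to system):
 # rpm -qa | grep pdksh
 # rpm -qf /etc/redhat-release
 # rpm -qi pdksh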



How to verify UnitedLinux packages?
UnitedLinux provides a webpage to lookup patches. As a rule, only download patches from the UnitedLinux site.
Once a patch is downloaded, the rpm utility can be used to examine any conflicts. Conflicts typically occur when a utility replaces a common dependency, like a file. A common file conflict error is linked to documentation files in the man pages.
The rpm command has the capability to view many aspects of packages and configuration, documentation and library files. Below is a summary of query capabilities of the rpm utility.

Options Utility provided
 -qa Lists all installed packages. Generally, the results are piped into a grep for a partial string related to a package.
 -qf file Lists the package that owns a file. It is required to provide the fully qualified path and file name.
 -qi package Lists information about a package.
 -qR package Lists libraries and commands that a package depends on.
 -ql package Lists files in a package.
 -qd package Lists documentation files in a package.
 -qc package Lists configuration files in a package.



How to install packages?
The RPM utility uses the -i argument for installation. Installation fails when there is a missing dependency. It is common to use the -ivh arguments, which add a verbose response and progress hash marks so you can see what is happening.
When a package has a dependency on another package, there are two options. One is to install the dependent package first. The other is to install a package with any dependent packages at the same time.
It is possible that a package may be older than the release date of an operating system, like Red Hat Advanced Server (AS) 2.1. The release problem between vendors is very complex in the Linux market.
An example of the complexity can be illustrated by the standard release of Perl 5.6.1 on Red Hat Linux AS 2.1. While it was not the current version of Perl at time of release, it was the current release of the consumer version of Red Hat Linux 7.2. Since Red Hat Linux AS 2.1 shipped a scalable and enhanced version of Red Hat Linux 7.2, patches to that release were held to a minimum. Perl 5.8 depends on XML utilities, which depend on Berkeley Software Distribution (BSD) database library for C that shipped on the consumer Red Hat Linux 7.3 media.
When installing packages, the package architecture must be less than or equal to that of the physical machine. The machine architecture is found by using the uname -m command.
The <package> and <package_dependency> are composite names in the examples. A Linux RPM package is made up of three components: a name, a version number and a build number. For convenience, they are referred to as <package> or <package_dependency> in the syntax examples noted below, and a concrete example follows the syntax.

  • Installing packages.
 # rpm [-i install] [-v verbose] [-h hash_marks] \
 > [package]
 # rpm -ivh <package>.`uname -m`.rpm
   - OR -
 # rpm -ivh <package>.`uname -m`.rpm \
 > <package_dependency>.`uname -m`.rpm
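To make the syntax concrete, installing the pdksh package used as an example earlier on an i386 machine would look something like this (the exact file name depends on the build and architecture you downloaded):
 # rpm -ivh pdksh-5.2.14-13.i386.rpm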



How to upgrade packages?
The RPM utility upgrade uses a -u or -U argument. The -U is the preferred argument since it automatically installs the package if it is not already installed. Using the -h argument, enables progress hash mark display.
When a package has a dependency on another package, there are two options. One is to install or upgrade the dependency package first. The other is to upgrade a package with any dependency packages at the same time, using the -U argument that installs previously uninstalled packages.
As a rule any package may be equal to or older than the machine architecture. The machine architecture is found by using the uname -m command. Syntax examples are noted below.

  • Upgrade existing packages.
 # rpm [-u upgrade] [-v verbose] [-h hash_marks] \
 > [package]
 # rpm -uvh <package>.`uname -m`.rpm
   - OR -
 # rpm -uvh <package>.`uname -m`.rpm \
 > <package_dependency>.`uname -m`.rpm
  • Upgrade or Install packages.
 # rpm [-U upgrade] [-v verbose] [-h hash_marks] \
 > [package]
 # rpm -Uvh <package>.`uname -m`.rpm
   - OR -
 # rpm -Uvh <package>.`uname -m`.rpm \
 > <package_dependency>.`uname -m`.rpm



How to remove packages?
The RPM utility removal uses a -e argument. Using the -v argument is optional but recommended when removing packages.
Before removing packages, it is critical to test because it is possible to remove a critical package and crash the operating system. If a package is a dependency to another package, it cannot be removed first. There are two options when a dependency is encountered. One is to remove the package dependency first. The other removes packages and dependencies at the same time. Syntax examples are noted below.

  • Removing packages.
 # rpm [-e erase] [-v verbose] [package]  # rpm -ev <package>
   - OR -
 # rpm -ev <package> \
 > <package_dependency>
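Before actually removing anything, rpm's --test option can be used to rehearse the removal without changing the system, for example:
 # rpm -ev --test <package>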



  • User & Group Administration

 Why is user & group administration important?
 What are the available shell environments?
 Command-line syntax.

   Add a user.
   Modify a user.
   Delete a user.

   Add a group.
   Modify a group.
   Delete a group.
 Red Hat GUI.

   Add a user.
   Modify a user.
   Delete a user.

   Add a group.
   Modify a group.
   Delete a group.
 UnitedLinux GUI.

   Add a user.
   Modify a user.
   Delete a user.

   Add a group.
   Modify a group.
   Delete a group.


Why is user & group administration important?
If user and group accounts are not set up properly, it may be difficult for users to work effectively. Correct configuration can save hours of troubleshooting.
A list of problems introduced by poor system administration is noted below.

  • Login shell configuration may be incomplete.
  • Login shell assignment may be incorrect.
  • Account expiration and policy may be inconsistent.
  • Group assignments may be incorrect.
  • Passwords may be set incorrectly.



Command syntax to find available shells.
The chsh utility enables a user to change shell environments. Using the -l option enables a user to determine the available shells. An example of the syntax requirements is noted below.

  • Finding available shell environments.
 # chsh [-s shell] [-l] [username]  # chsh -l



Command-line syntax.
The following sections cover the command-line syntax to add, modify and delete users and groups.



Command-line: Add a user.
The useradd utility is the only way to successfully add a user with a custom default group.
Creating a user with a custom default group is done by passing the -n and the -g options. The -g option requires a valid group name as an argument. If a user is created with the -n option and without the -g option, the user will be placed in the default users group, which has a Group ID of 100. The group should exist before attempting to add a user to the group.
An example of the syntax requirements and recommended steps for adding users are noted below. For reference, the useradd utility only allows encrypted passwords and using a non-encrypted password will make the account inaccessible. Therefore, users are typically added by scripts that create the user and then assign an initial password; a small sketch of this approach appears after the examples below.

  • Adding a user.
 # useradd [-u uid] [-g initial_group] [-G group[,...]] \
 > [-d home_directory] [-s shell] [-c comment] \
 > [-m [-k skeleton_directory]] [-f inactive_time] \
 > [-e expire_date] -n username
 # useradd -u 502 -g dba -G users,root \
 > -d /u02/oracle -s /bin/tcsh -c "Oracle Account" \
 > -f 7 -e 12/31/03 -n jdoe
  • Enabling a password as the root user.
 # passwd username
 Changing password for user <username>
 New password:
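Putting the two steps together, a minimal provisioning sketch might look like this (the account details are illustrative; chpasswd reads username:password pairs on standard input and is an alternative to an interactive passwd run):
 # useradd -g dba -d /u02/oracle -s /bin/bash -c "Oracle Account" -n jdoe
 # echo 'jdoe:ChangeMe123' | chpasswd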



Command-line: Modify a user.
An example of the syntax requirements and recommended steps for modifying users are noted below. For reference, the usermod utility only allows encrypted passwords and using a non-encrypted password will make the account inaccessible. Therefore, users are modified at the command-line by scripts that use the usermod and passwd commands. Both commands are shown below.

  • Modifying a user.
 # usermod [-u uid] [-g initial_group] [-G group[,...]] \
 > [-d home_directory] [-s shell] [-c comment] \
 > [-l new_username] [-f inactive_time] [-e expire_date] \
 > username
 # usermod -u 502 -g dba -G users,root \
 > -d /u02/oracle -s /bin/bash -c "Senior DBA" \
 > -l sdba -f 7 -e 12/31/03 jdoe
  • Changing a password as the root user.
 # passwd username
 Changing password for user <username>
 New password:



Command-line: Delete a user.
An example of the syntax requirements is noted below. While the -r option should be used, there are exceptions. In some cases not using it allows an audit of user accounts in the old home directory.
After reviewing the old home directory and removing or preserving the contents, the /var/spool/mail/username file should be removed.

  • Deleting a user.
 # userdel [-r] username  # userdel -r sdba



Command-line: Add a group.
An example of the syntax requirements is noted below. The -r option is used to create a system group account with a GID below 100. If a group already exists an error is raised unless the -f option is used to suppress it.

  • Creating a group.
 # groupadd [-g gid] [-rf] groupname  # groupadd -g 500 dba



Command-line: Modify a group.
An example of the syntax requirements is noted below.

  • Modifying a group.
 # groupmod [-g gid] [-n new_group_name] groupname  # groupmod -g 500 -n dba oinstall



Command-line: Delete a group.
An example of the syntax requirements is noted below.

  • Deleting a group.
 # groupdel groupname  # groupdel dba



Red Hat GUI.
The following sections cover the GUI navigation steps to add, modify and delete users and groups.


   Add a user.
   Modify a user.
   Delete a user.

   Add a group.
   Modify a group.
   Delete a group.
Red Hat Linux manages GUI access to user and group accounts with the Red Hat User Manager utility. It can be accessed by setting up the X-windows display and typing redhat-config-users at the command-line prompt. If started by other than the root user, the following input dialog box will prompt for the root password.

If started by the root user or when a root password is provided, the utility will display in an X-window. The utility will be displayed as shown below.




GUI: Add a user.
Red Hat Linux manages GUI access to user and group accounts with the Red Hat User Manager utility. Adding a user starts by clicking the New User button from the Red Hat Linux User Manager form. The Create New User screen is shown below.

  Steps to Enter a User.
  • Enter a user name without any whitespace.
  • Enter a full or account name for the user, which may contain whitespaces.
  • Enter a case sensitive password twice.
  • Select a login shell for the user.
  • While the default is to create a /home/username directory, a different directory may be entered.
  • The "Create new group for this user" checkbox should be UNCHECKED. If the checkbox is left checked, the user's default group will be a new group with the same name as the user account. If the checkbox is unchecked, all users are assigned the default group, which has a Group ID of 101.
  • If csh or tcsh are selected as the login shell, the system administrator will need to manually create .login and .cshrc files.




GUI: Modify a user.
Red Hat Linux manages GUI access to user and group accounts with the Red Hat User Manager utility. Modify a user by clicking the Users tab and clicking the Properties button from the Red Hat Linux User Manager form.
User data, account information, password information and group assignments can be modified using the User Properties form. The default group assignment for a user cannot be changed with the User Properties form; the command-line usermod tool must be used to change the default group, as sketched below.
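
As a brief command-line sketch of changing the default group, using the dba group and jdoe user from the earlier examples:

 # usermod -g dba jdoe
 # id jdoe                        (confirm the new default group)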
Four panels of the User Properties screen are shown below, starting with the default User Data panel. Each contains a brief synopsis of functionality.

  1. User Properties: User Data.
  Modifying user data.
  • User name may be changed.
  • Full or account name for the user may be changed.
  • Passwords may be changed.
  • Home directory may be changed.
  • Login shell may be changed.
  • If csh or tcsh are selected as the login shell, the system administrator will need to manually create .login and .cshrc files.
  2. User Properties: Account Info.
  Modifying user account information.
  • Account expiration dates may be enabled and set.
  • User accounts may be locked.
  3. User Properties: Password Info.
  Modifying user password information.
  • Enable or disable password expiration. Password expiration is enabled by default and is disabled by setting the number of days before allowing, forcing or warning of a change to zero.
  • Set number of days before allowing a change in password.
  • Set number of days before forcing a change in password.
  • Set number of days before warning of a required change.
  • Set number of days before unaccessed account becomes inactive.
  4. User Properties: Groups.
  Modifying user groups.
  • The default user group cannot be changed with the form panel.
  • Checking a box enables a group for the user.
  • Unchecking a box disables a group for the user, unless it is the default user group for that user.



GUI: Delete a user.
Deleting a user is always a task that should be done carefully, and the Red Hat User Manager makes casual use risky. If the intended user is not selected before clicking the Delete button, it is possible that the adm user may be deleted instead. If the adm user is deleted and the deletion goes unnoticed, rebooting the system will be problematic. Therefore, follow these rules.

  1. Select the user that should be deleted.
  2. Validate the correct user is active in the display.
  3. Click the Delete button.

NOTE:
If a mistake is made with a system account, DO NOT make any further changes in the Red Hat User Manager utility! Connect to the system as the root user and copy the deleted row from the /etc/passwd.OLD file back into the /etc/passwd file. It is unlikely at this point that the account has also been removed from the shadow password file, but check the /etc/passwd- backup as well; if the row is missing there, recover it from the passwd.OLD file.
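
A quick way to confirm that a system account such as adm survived, or that a recovered row is back in place, is to check the password files directly:

 # grep '^adm:' /etc/passwd /etc/passwd-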




GUI: Add a group.
Red Hat Linux manages GUI access to user and group accounts with the Red Hat User Manager utility. Adding a group starts by clicking the New Group button from the Red Hat Linux User Manager form. The Create New Group screen is shown below.

  Add a group.
  • Enter the new group name.



GUI: Modify a group.
Red Hat Linux manages GUI access to user and group accounts with the Red Hat User Manager utility. Modify a group by clicking the Groups tab and then clicking the Properties button on the Red Hat Linux User Manager form.
Group name and group member assignments can be modified using the Group properties form. Two panels of the Group Properties screen are shown below, starting with the default Group Data panel. Each contains a brief synopsis of functionality.

  1. Group Properties: Modify Group Name.
  Modifying a group name.
  • Enter the new group name.
  2. Group Properties: Modify Group Users.
  Modifying group members.
  • Add a user to a group by checking the box for that user.
  • Remove a user from a group by unchecking the box for that user.



GUI: Delete a group.
Deleting a group is always a task that should be done carefully, and the Red Hat User Manager makes casual use risky. If the intended group is not selected before clicking the Delete button, it is possible that the first group displayed in the form may be deleted instead. Therefore, follow these rules.

  1. Select the group that should be deleted.
  2. Validate the correct group is active in the display.
  3. Click the Delete button.
If a mistake is made with a system account, DO NOT make any further changes in the Red Hat User Manager utility! Connect to the system as the root user and copy the deleted row from the /etc/group.OLD file back into the /etc/group file. It is unlikely at this point that the group has also been removed from the group shadow file, but check the /etc/group- backup as well; if the row is missing there, recover it from the group.OLD file.




UnitedLinux GUI.
The following sections cover the GUI navigation steps to add, modify and delete users and groups.


   Add a user.
   Modify a user.
   Delete a user.

   Add a group.
   Modify a group.
   Delete a group.
UnitedLinux manages GUI access to user and group accounts with the YaST utility. It can be accessed by setting up the X-windows display and typing yast2 at the command-line prompt (a launch sketch is shown below). If started by a user other than root, the following input dialog box will appear to advise the user that they lack the necessary rights and permissions.

If started by the root user, the utility will display in an X-window. The utility will be displayed as shown below.
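
As a sketch of launching YaST user administration from a remote session (the workstation host name is a hypothetical example, and the users module argument is assumed to be available in this YaST release):

 # export DISPLAY=workstation1:0.0
 # yast2 users &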




GUI: Add a user.
UnitedLinux manages GUI access to user and group accounts with the YaST utility. Adding a user starts by clicking the Security and User menu selection on the left menu panel.
There are six steps to create a new user. They are shown below.

  1. Edit & create users: User Add.
  Adding a user.
  • Click the add button to start the process.
  • Alternatively, click abort button to lose changes.
  • Alternatively, click back button to return to previous user and group administration screen. All changes will be lost.
  2. Edit & create users: User Entry.
  Entering a user.
  • Enter a first name.
  • Enter a last name.
  • Enter a user name without any whitespace. Click on the suggestion button for system generated user name.
  • Enter a case sensitive password twice.
  • Click the details button to modify standard assigned UID, home directory, login shell, default group or additional group membership.
  • Click the Password setting button to change default password rules for a user.
  3. Edit & create users: User Details.
  Changing user defaults.
  • Change the UID if desired.
  • Change the home directory if desired.
  • Select login shell for the user.
  • Select default group for the user.
  • Click checkboxes to add or remove additional group memberships. If creating an Oracle or Application Manager user, it is CRITICAL to leave the user as a member of the video group to run X windows.
  4. Edit & create users: User Creation.
  Creating a user.
  • Click create button to build user account.
  • Alternatively, click abort button to lose changes.
  • Alternatively, click back button to return to previous user and group administration screen. All changes will be lost.
  5. Edit & create users: User Completion.
  Completing new user setup.
  • Click the finish button to add the new user.
  • Alternatively, click abort button to lose changes.
  • Alternatively, click back button to return to previous user and group administration screen. All changes will be lost.
  6. Edit & create users: User Confirmation.
  Accepting new account creation.
  • Click the OK button to complete the process.



GUI: Modify a user.
UnitedLinux manages GUI access to user and group accounts with the YaST utility. Modifying a user starts by clicking the Security and User menu selection on the left menu panel.
There are several possible edits available, as shown below.

  1. Edit & create users: User Selection.
  Selecting a user account to modify.
  • Select a user by clicking the user's name.
  • Edit a user by clicking the edit button.
  • Alternatively, click abort button to lose changes.
  • Alternatively, click back button to return to previous user and group administration screen. All changes will be lost.
  2. Edit & create users: User Edit.
  Modifying user data.
  • User first name may be changed.
  • User last name may be changed.
  • User login name may be changed.
  • Passwords may be changed.
  • Click the next button to effect changes.
  3. Edit & create users: User Details.
  Modifying user account information.
  • Change the home directory if desired.
  • Change login shell for the user.
  • Change default group for the user.
  • Click checkboxes to add or remove additional group memberships. If creating an Oracle or Application Manager user, it is CRITICAL to leave the user as a member of the video group to run X windows.
  • Click the next button to effect changes.
  4. Edit & create users: Password Info.
  Modifying user password information.
  • Set number of days before warning of a password expiration.
  • Set number of days after expiration that a password will work.
  • Set maximum number of days for a password.
  • Set minimum number of days for a password.
  • Set expiration date. The default is January 1, 1970. When the date precedes the current working date, the system counts days until it reaches the maximum number of days for a password.
  • Click the next button to effect changes.
  5. Edit & create users: User Completion.
  Completing user modification.
  • Click the finish button to modify user preferences.
  • Alternatively, click abort button to lose changes.
  • Alternatively, click back button to return to previous user and group administration screen. All changes will be lost.
  6. Edit & create users: User Confirmation.
  Accepting the account modification.
  • Click the OK button to complete the process.



GUI: Delete a user.
UnitedLinux manages GUI access to user and group accounts with the YaST utility. Deleting a user starts by clicking the Security and User menu selection on the left menu panel. The four steps to delete a user are shown below.

  1. Edit & create users: User Selection.
  Selecting a user account.
  • Select a user by clicking the user's name.
  • Delete a user by clicking the delete button.
  2. Edit & create users: User Deletion.
  Deleting user account directory.
  • Check the delete home directory checkbox to remove the user's home directory and files.
  • Click the OK button to effect changes.
  3. Edit & create users: User Completion.
  Completing the user deletion.
  • Click the finish button to delete user.
  • Alternatively, click abort button to lose changes.
  • Alternatively, click back button to return to previous user and group administration screen. All changes will be lost.
  4. Edit & create users: User Confirmation.
  Accepting the account deletion.
  • Click the OK button to complete the process.



GUI: Add a group.
UnitedLinux manages GUI access to user and group accounts with the YaST utility. Adding a group starts by clicking the Security and User menu selection on the left menu panel.

  1. Edit & create groups: Group Add.
  Adding a group.
  • Click the group administration radio button to change from the default user administration to group administration.
  • Click the add button to start the process.
  • Alternatively, click abort button to lose changes.
  • Alternatively, click back button to return to previous user and group administration screen. All changes will be lost.
  2. Edit & create groups: Group Entry.
  Entering a group.
  • Enter a group name. The group name must be between five and eight characters in length.
  • Accept the default GID or override the value.
  • Enter a case sensitive password twice.
  • Click the user checkboxes that should be added to the new group.
  3. Edit & create groups: Group Creation.
  Completing new group setup.
  • Click finish button to build group account.
  • Alternatively, click abort button to lose changes.
  • Alternatively, click back button to return to previous user and group administration screen. All changes will be lost.
  4. Edit & create groups: Group Confirmation.
  Accepting new group creation.
  • Click the OK button to complete the process.



GUI: Modify a group.
UnitedLinux manages GUI access to user and group accounts with the YaST utility. Editing a group starts by clicking the Security and User menu selection on the left menu panel.

  1. Edit & create groups: Group Selection.
  Selecting a group.
  • Click the group administration radio button to change from the default user administration to group administration.
  • Select the group name.
  • Click the edit button to start the process.
  2. Edit & create groups: Group Edit.
  Modifying a group.
  • Change the group name if desired.
  • Change GID if desired.
  • Enter a case sensitive password twice.
  • Click the user checkboxes to add or remove users from the group.
  • Click the next button to move forward.
  3. Edit & create groups: Group Changes.
  Completing group modification(s).
  • Click finish button to edit group account.
  • Alternatively, click abort button to lose changes.
  • Alternatively, click back button to return to previous user and group administration screen. All changes will be lost.
  4. Edit & create groups: Group Confirmation.
  Accepting group modification(s).
  • Click the OK button to complete the process.



GUI: Delete a group.
UnitedLinux manages GUI access to user and group accounts with the YaST utility. Deleting a group starts by clicking the Security and User menu selection on the left menu panel.
Before attempting to delete a group, YaST requires all users be removed from the group. If a group has users when the delete button is selected, it will raise the following dialog message.


  1. Edit & create groups: Group Delete.
  Selecting a group.
  • Click the group administration radio button to change from the default user administration to group administration.
  • Select the group name.
  • Click the delete button to start the process.
  2. Edit & create groups: Group Delete Confirmation.
  Confirming a group delete.
  • Click the Yes button to delete the group.
  • Click the No button to not delete the group.
  3. Edit & create groups: Group Deletion.
  Completing a group deletion.
  • Click finish button to delete group account.
  • Alternatively, click abort button to lose changes.
  • Alternatively, click back button to return to previous user and group administration screen. All changes will be lost.
  4. Edit & create groups: Group Confirmation.
  Accepting a group deletion.
  • Click the OK button to complete the process.

  • Java Administration
Java administration in Red Hat Linux is covered in the following three sections.
 Why is Java administration important?
A synopsis of issues requiring action.
 How do you verify Java installation?
A description of how to verify Java packages, versions and default/user access setups.
 How do you configure, replace or upgrade Java?
A step-by-step approach to replacement, upgrade and configuration of Java.

  • File System Management

 How to verify file systems.

   Display a disk.
   Display a partition size.
 How to modify a file system.

Existing file systems can be verified with the fdisk utility in two ways: by disk device and by partition. The syntax for each is noted below.

# fdisk -lu <device_name>
# fdisk -lu /dev/hda
Disk /dev/hda: 255 heads, 63 sectors, 14946 cylinders
Units = sectors of 1 * 512 bytes

Device Boot Start End Blocks Id System
/dev/hda1 * 63 208844 104391 83 Linux
/dev/hda2 208845 4289354 2040255 82 Linux swap
/dev/hda3 4289355 24772229 10241437+ 83 Linux
/dev/hda4 24772230 240107489 107667630 f Win95 Ext'd (LBA)
/dev/hda5 24772293 35005634 5116671 83 Linux
/dev/hda6 35005698 240107489 102550896 83 Linux

# fdisk -s <partition_name>
# fdisk -s /dev/hda6
102550896
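
Because fdisk -s reports the partition size in 1 KB blocks, the value can be converted with simple shell arithmetic, for example:

# echo $((102550896 / 1024)) MB
100147 MB
# echo $((102550896 / 1024 / 1024)) GB
97 GB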

There are two utilities that can be used to modify Linux file systems. They are:
  • fdisk
  • e2fsck

The fdisk utility can be used to repair or modify a disk's partition table. Identify the disk device that has a problem and then run fdisk in interactive mode using the syntax below.
# fdisk -u <device>
# fdisk -u /dev/hda
Below is the menu that will be presented by the fdisk utility.
Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

Command (m for help):
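
No changes are written to disk until the w command is issued, so a cautious first session can simply inspect the table. A sketch of such a read-only session (annotations in parentheses):

# fdisk -u /dev/hda
Command (m for help): p          (print the current partition table)
...partition table is displayed...
Command (m for help): q          (quit without saving changes)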



The e2fsck utility can automatically repair most ext2 file system problems; it also handles the journalled ext3 file system, which is ext2 with a journal added. The file system should be unmounted before it is checked. The syntax below enables e2fsck to repair a file system automatically.
# e2fsck -p <partition_device>
# e2fsck -p /dev/hda5
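
A minimal sketch of a full check cycle, using one of the Linux partitions from the fdisk listing above:

# umount /dev/hda5               (unmount the file system before checking it)
# e2fsck -p /dev/hda5
# mount /dev/hda5                (remount it using the entry in /etc/fstab)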


  • Performance Monitoring
Performance monitoring enables effective machine management and provides the data needed to tune performance.


 Monitoring system status.
How to collect point-in-time statistics.
 Monitoring running processes.
How to verify & monitor running processes.

 Monitoring memory utilization.

How to monitor the memory subsystem.
 
 Monitoring CPU usage.
How to monitor CPU usage.

 Monitoring disk usage & performance.

How to monitor disk usage.

 Monitoring network traffic.

How to audit and monitor network traffic.




Monitoring system status.
Analyzing system performance should begin at the highest level and then drill down into the detail. There are four system-level tools that enable quick inspection of system performance.

 Uptime utility.
 Graphical xload utility.
 System Activity Reporter (sar) utility.
 Per-processor statistics (mpstat) utility.

The highest level view is the uptime utility, which reports how long the machine has been running, the number of logged-in users, and the load averages over the last 1, 5 and 15 minutes. Because the averages only cover the last fifteen minutes, a single sample may miss the problem entirely; run uptime while the load being investigated is actually occurring, or sample it repeatedly.
 # uptime
   1:58pm up 15 days, 4:03, 6 users, load average: 0.40, 0.52, 0.39


Visual inspection of current load activity during peak demand may help identify problems. This can be done with the X-Windows xload utility. The syntax below renders the load graph with at least ten horizontal divisions, each division representing a load average of one, refreshed every second.
 # xload -scale 10 -update 1 -fg darkblue -hl tan


Stepping down into system level performance can be done with the System Activity Reporter (sar) utility. sar offers many views; a high-level CPU utilization report is produced with the -u argument. The sar -u output provides four columns: the percentage of user-level execution (%user), the percentage of user-level execution at an adjusted nice priority (%nice), the percentage of system-level execution (%system), and the idle time for the machine (%idle).

# sar -u 5 5
01:56:12 PM       CPU     %user     %nice   %system     %idle
01:56:17 PM all 0.20 0.00 2.00 97.80
01:56:22 PM all 0.00 0.00 2.60 97.40
01:56:27 PM all 40.40 0.00 17.60 42.00
01:56:38 PM all 14.82 0.00 85.18 0.00
01:56:43 PM all 40.24 0.00 6.18 53.59
Average: all 18.29 0.00 35.11 46.60
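
Where the sysstat data collector is running, sar can also report on earlier activity from its daily data files. The file location and day number below are assumptions based on the common Red Hat layout:

# sar -u -f /var/log/sa/sa21     (CPU utilization recorded on the 21st of the month)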


Stepping down into per-processor performance can be done with the mpstat utility, which reports per-processor statistics. mpstat provides both an individual CPU view and an aggregate view of all CPUs. The -P argument selects a single processor on a multiprocessor machine; running mpstat without it also works on a single processor machine. Syntax for both is noted below. The mpstat output provides columns similar to the sar command: the percentage of user, nice and system level execution and the idle time for the machine. An additional column, intr/s, reports the number of interrupts per second received by the CPU or set of CPUs.

  • Single processor syntax.
# mpstat 5 5
[root@mmclaugh-linux /]# mpstat 5 5
Linux 2.4.9-e.16 (mmclaugh-linux) 08/21/2003

01:55:58 PM CPU %user %nice %system %idle intr/s
01:56:03 PM all 0.00 0.00 0.00 100.00 114.00
01:56:08 PM all 0.00 0.00 0.00 100.00 114.20
01:56:13 PM all 0.00 0.00 0.20 99.80 119.80
01:56:18 PM all 0.00 0.00 0.00 100.00 121.20
01:56:23 PM all 0.00 0.00 0.00 100.00 158.80
Average: all 0.00 0.00 0.04 99.96 125.60
  • Multiple processor syntax for a single CPU.
The -P argument identifies the CPU target for analysis. CPUs are numbered from 0 to the number of CPUs minus one.
# mpstat -P 0 5 5
[root@ap611ses /]# mpstat -P 0 5 5
Linux 2.4.9-e.12.2enterprise (ap611ses) 08/21/2003

01:27:43 PM CPU %user %nice %system %idle intr/s
01:27:48 PM 0 0.00 0.00 0.00 50.00 209.60
01:27:53 PM 0 0.00 0.00 0.00 50.00 206.20
01:27:58 PM 0 0.00 0.00 0.00 50.00 212.90
01:28:03 PM 0 0.80 0.00 0.70 48.50 404.80
01:28:08 PM 0 0.10 0.00 1.10 48.80 273.10
Average: 0 0.18 0.00 0.36 49.46 261.32



Monitoring running processes.
The ps utility is the comprehensive command-line tool for examining running processes, while the GUI gtop utility provides similar views of running processes.


 Command-line approach.
 GUI Interface approach.


Command-line investigation of running processes is ultimately where most zombie and long-running processes will be identified and resolved (a short zombie-hunting sketch follows the example output below). The ps utility is very powerful, but its arguments are complex. For example, a view of the top ten processes by cumulative time is available with the following syntax.
# ps -el O-k | head -11
F S   UID   PID  PPID  C PRI  NI ADDR    SZ WCHAN  TTY        TIME CMD
100 S 0 1145 1144 1 75 0 - 21585 schedu ? 443:35 /etc/X11/X
000 S 500 11130 11081 2 75 0 - 1921 schedu pts/3 22:49 gtop
000 S 501 8867 1 0 75 0 - 83345 semop ? 1:11 ora_qmn0_dr
000 R 501 11446 31086 13 85 10 - 2674 - ? 0:10 ripples -ro
000 S 501 8949 8946 0 76 0 - 82908 schedu ? 0:04 /u02/oracle
000 S 501 31084 1 0 75 0 - 4849 schedu ? 0:13 nautilus st
040 S 501 8946 8943 0 75 0 - 4772 schedu ? 0:07 /u02/oracle
040 S 501 8931 8916 0 80 5 - 15618 schedu pts/5 0:02 [dbsnmp]
000 S 501 8950 8946 0 76 0 - 45710 schedu ? 0:02 /u02/oracle
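
Zombie processes carry a state of Z in the S column of the ps -el output, so a short filter lists them directly; a sketch using awk:

# ps -el | awk '$2 == "Z"'       (print only the rows whose state column is Z)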

The GUI version of the Linux top utility is gtop. It can be invoked from the console station or from the X-Windows command-line. On the GNOME desktop, start with the Main Menu Button and navigate to Programs, System and System Monitor. Starting it on the X-Windows command-line is done by using the gtop command. Processes are displayed in the default Processes panel. There are two views and three filters that may be applied in gtop. All filters are disabled by default.

  • Processes panel filters.
View only TTY, hide or view idle and/or system processes.
  • Processes panel view.
View all or user processes.

The default view of processes is shown below.




Monitoring memory utilization.
Memory monitoring has many nuances and Linux provides command-line and GUI monitoring tools. The vmstat, free and swapon utilities are command-line only tools. However, the GUI monitoring tool does provide quick insights into how memory is used.


 Command-line approach.

   The vmstat utility.
   The free utility.
   The swapon utility.
 GUI Interface approach.


The vmstat, free and swapon utilities are the comprehensive tools to examine memory.

The vmstat utility examines virtual memory management and helps to isolate problems. The vmstat command returns some key values to help identify load problems. A short list is provided below.
  1. If the swap value (column swpd) is not zero, the system has had to push pages out to swap; a zero value in the column means the system is not using swap at all. If a system is swapping heavily, use the ps command to identify whether a specific process is performing poorly.
  2. If the process (procs) column w is not zero and the swap columns si and so are positive in a sample set, the system is continuously swapping. This indicates that the load is too heavy for the memory, which can be validated by the free command.
  3. If the process columns r and b are high, it indicates that one or more jobs are moving slowly through the scheduling queue.
# vmstat 5 5
procs                      memory    swap          io     system         cpu
r b w swpd free buff cache si so bi bo in cs us sy id
1 0 0 0 16732 256412 923808 0 0 2 8 22 28 2 1 3
4 0 0 0 16732 256416 923808 0 0 0 27 130 798 0 4 96
1 0 0 0 18060 256420 923808 0 0 0 11 134 995 0 6 94
1 0 0 0 18048 256420 923808 0 0 0 26 128 774 1 4 95
4 0 0 0 18048 256420 923808 0 0 0 16 134 1087 1 5 94

The free utility provides a snapshot of system memory. The first line shows physical memory. The second line shows memory adjusted for buffers and cache. The last line shows swap total, used and free.
# free
total       used       free     shared    buffers     cached
Mem: 1543844 1532740 11104 129692 257892 919884
-/+ buffers/cache: 354964 1188880
Swap: 2040244 0 2040244


The swapon utility provides a view of the device, type, size, usage and priority of each swap area.
# swapon -s
Filename                        Type            Size    Used    Priority
/dev/hda2 partition 2040244 0 -1


The GUI version of the Linux top utility is gtop. It can be invoked from the console station or from the X-Windows command-line. On the GNOME desktop, start with the Main Menu Button and navigate to Programs, System and System Monitor. Starting it on the X-Windows command-line is done by using the gtop command. The default panel for the System Monitor is Processes. Memory can be accessed by clicking on the Memory Usage tab. The View menu option enables the user to toggle between resident, shared, total, virtual and swap views of memory.




Monitoring CPU usage.
The top or gtop utilities provide most information necessary to manage the load impact of running processes.


 Command-line approach.
 GUI Interface approach.


Command-line investigation of running processes is ultimately where most zombie and long-running processes will be identified and resolved. The command-line top utility returns output like the following; it is refreshed every five seconds to show the processes placing the highest demand on system resources. Below is an example of the top utility output.


The GUI version of the Linux top utility is gtop. It can be invoked from the console station or from the X-Windows command-line. On the GNOME desktop, start with the Main Menu Button and navigate to Programs, System and System Monitor. Starting it on the X-Windows command-line is done by using the gtop command.
  • The gtop tool monitors CPU in a graphic format. CPU activity is depicted in the following color scheme.
 Color        Activity
 Yellow       User processes, user requests for resources.
 Light Gray   Nice processes, performing well within their assigned priorities.
 Dark Gray    System processes, requested by the system or by user processes.
 Black        Idle CPU.
  • The default gtop panel processes displays the CPU utilization graph.



Monitoring disk usage & performance.


 Command-line approach.

   The df utility.
   The du utility.
   The iostat utility.
 GUI Interface approach.


Command-line tools are effective for identifying and probing disk use and access. Beyond basic use, they require detailed knowledge of the machine architecture and operating system.

The df (disk free) utility reports the amount of available disk space. The -k argument reports sizes in kilobytes, while the -h argument prints a more readable, human-friendly report.
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda5 4.8G 174M 4.3G 4% /
/dev/hda1 99M 13M 80M 14% /boot
/dev/hdb1 5.8G 33M 5.4G 1% /home
none 1006M 0 1005M 0% /dev/shm
/dev/hda3 9.6G 631M 8.5G 7% /tmp
/dev/hdb5 97G 33M 91G 1% /u01
/dev/hda6 96G 19G 73G 20% /u02
/dev/hdb3 3.8G 1.8G 1.9G 48% /usr
/dev/hdb2 3.8G 75M 3.5G 3% /var


The du (disk usage) utility reports the amount of used disk space. Per-file detail is available by directory but is often too verbose to be useful; good summary information for a set of directories is possible. For example, the following syntax provides a snapshot of space used from the root directory (a sorted variant follows the output).
#  du -mh --max-depth=1
16k     ./lost+found
8.7M ./boot
352k ./dev
368k ./home
898M ./proc
599M ./tmp
24k ./u01
19G ./u02
1.8G ./usr
43M ./var
8.5M ./etc
5.9M ./bin
4.0k ./initrd
75M ./lib
8.0k ./mnt
4.0k ./opt
41M ./root
11M ./sbin
4.0k ./misc
22G .
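
The summary becomes even more useful when sorted. For example, the following sketch lists the five largest top-level directories in kilobytes, discarding error messages from directories that cannot be read:

# du -sk /* 2>/dev/null | sort -n | tail -5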


The iostat (Input/Output statistics) utility enables drilling down into I/O performance. Without arguments, iostat provides a summary view of CPU activity and device throughput. The -x argument shows extended statistics for the devices and the partitions within them (an example follows the summary output below).
# iostat
Linux 2.4.9-e.16 (mmclaugh-linux)       08/15/2003

avg-cpu: %user %nice %sys %idle
0.31 0.00 1.49 98.21

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
dev3-0 5.82 38.43 47.59 136848 169460
dev3-1 4.04 34.12 25.47 121500 90688
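
For the extended view mentioned above, iostat can also sample repeatedly; the first report covers activity since boot and later reports cover each interval:

# iostat -x 5 3                  (extended statistics, three reports at five-second intervals)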



The Red Hat Linux gtop tool is rather limited for analyzing disk performance. High-level views comparable to the df and du output are available, as shown below. There is no equivalent graphical tool to represent the iostat output.



Monitoring network traffic.


 The ifconfig utility.
 The netstat utility.


The ifconfig utility is the best place to start analyzing network performance. RX and TX packets may be quickly examined for errors, drops and overruns. If the numbers are high, use the netstat utility as the next analysis tool.
# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:04:75:C1:1A:46
inet addr:138.1.145.183 Bcast:138.1.147.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2953365 errors:0 dropped:0 overruns:7 frame:0
TX packets:493705 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:866676842 (826.5 Mb) TX bytes:401224184 (382.6 Mb)
Interrupt:11 Base address:0xa000

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:98 errors:0 dropped:0 overruns:0 frame:0
TX packets:98 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6330 (6.1 Kb) TX bytes:6330 (6.1 Kb)


  • Start at the raw packet traffic level when analyzing network traffic. This can be done with the netstat utility and the -i argument as shown below.
# netstat -i
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 4121254 0 0 1 279521 0 0 0 BMRU
lo 16436 0 54 0 0 0 54 0 0 0 LRU

  • Using netstat without any arguments provides a listing of all active Internet connections (TCP and UDP) and UNIX domain sockets. If there are non-zero values in the Send-Q column and repeated sampling indicates the value is increasing, then the network may be saturated (a repeated-sampling sketch follows the listing below).
# netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 2 mmclaugh-linux.u:telnet dhcp-cosprings1-ge:4351 ESTABLISHED
tcp 0 0 mmclaugh-linux.us:32772 ap113tta.us.oracle:6232 ESTABLISHED
tcp 0 0 mmclaugh-linux.u:telnet ap103ses.us.oracl:37342 ESTABLISHED
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
unix 13 [ ] DGRAM 1002 /dev/log
unix 2 [ ] DGRAM 25992193
unix 3 [ ] STREAM CONNECTED 59797 /tmp/orbit-root/orb-3492
unix 3 [ ] STREAM CONNECTED 59785
unix 3 [ ] STREAM CONNECTED 59796 /tmp/orbit-root/orb-1323
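
Because a single netstat snapshot only hints at saturation, repeated sampling helps. A sketch using the watch utility to refresh the interface counters every five seconds:

# watch -n 5 'netstat -i'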