Wednesday, October 26, 2011

Tuning Up Squid

Specifying RAM Size 
To improve performance, squid can store objects directly in main memory (RAM) and serve them to clients from there. Even though the space available is far smaller than on secondary storage devices (hard disks), the fetch time is significantly lower. As a result, a significant performance gain can be seen.

Step 1: Finding out the RAM usage

root@firefly:~# free -m


root@firefly:~# top

Step 2: Allocating RAM
Always make sure to back up the configuration file before editing.

root@firefly:~# vim /etc/squid/squid.conf
    cache_mem 128 MB
    maximum_object_size_in_memory 1 MB

The first option specifies that squid will use 128 MB of RAM for storing 'hot' objects. The second option states that the maximum size of a stored 'hot' object is 1 MB, implying that at most 128 such objects may be held in RAM.

NOTE: Please make sure that the allocated memory size is less than the specified disk cache size discussed in the next section.

Specifying the Disk Cache Size:

As we all know, squid places frequently accessed web elements on the hard disk. Although hard disks are much slower than RAM, they can provide significantly more space. The disk cache settings can be adjusted as shown below-

root@firefly:~# vim /etc/squid/squid.conf

    # cache_dir ufs /var/spool/squid 100 16 256
    ### default settings.
    ### cache_dir filesystem location size_in_MB L1 L2
    ### L1: Number of parent directories
    ### L2: Number of sub-directories

    cache_dir ufs /var/spool/squid 256 16 256
    minimum_object_size 0 KB
    ### stores anything greater than 0 KB
    maximum_object_size 10 MB
    ### Maximum size of a stored object is 10MB

     cache_swap_low 90
     cache_swap_high 95

## squid will try to maintain a cache size of 90% of the
## allocated disk space. Whenever the size exceeds 90%,
## some elements may be deleted. When the cache size
## exceeds 95%, deletion becomes much more 'aggressive'
## until the lower limit (90%) is reached.
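As a quick sanity check, the figures implied by the settings above can be computed directly. This is only a rough sketch; squid's real accounting adds per-object overhead, so treat these as estimates:

```python
# Rough arithmetic for the squid.conf values used above (estimates only;
# squid's internal accounting adds per-object overhead).

cache_mem_mb = 128            # cache_mem
max_mem_object_mb = 1         # maximum_object_size_in_memory
cache_dir_mb = 256            # cache_dir size
swap_low, swap_high = 90, 95  # cache_swap_low / cache_swap_high (percent)

# Upper bound on the number of 1 MB 'hot' objects held in RAM:
max_hot_objects = cache_mem_mb // max_mem_object_mb

# Disk watermarks in MB: replacement starts above the low mark and
# becomes aggressive above the high mark.
low_mark_mb = cache_dir_mb * swap_low / 100
high_mark_mb = cache_dir_mb * swap_high / 100

print(max_hot_objects, low_mark_mb, high_mark_mb)  # 128 230.4 243.2
```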

Monday, October 17, 2011

Cacti Ping Latency Graph Problem SOLVED

I was having difficulty generating ping latency graphs using Cacti on Debian 6. Being a novice with Cacti, I was not sure what was causing the "-nan" output in the ping latency graphs. It may be mentioned that I was getting the bandwidth usage graphs without any problem. Well, the solution is here.

Before we start with the explanation, let's check something interesting.

Debian 6

root@firefly:~# ping
64 bytes from ( icmp_req=1 ttl=53 time=303 ms


sarmed@sarmed-ubuntu:~$ ping
64 bytes from ( icmp_seq=1 ttl=47 time=302 ms

Notice the difference in the output? Interesting, eh?

Now, to the root cause of the problem.

Cacti uses a Perl script "" for pinging a host. The graph is generated from the output of the script.

root@firefly:~# cat /usr/share/cacti/site/scripts/
# take care for tcp:hostname or TCP:ip@
$host = $ARGV[0];
$host =~ s/tcp:/$1/gis;
open(PROCESS, "ping -c 1 $host | grep icmp_seq | grep time |");
$ping = <PROCESS>;
$ping =~ m/(.*time=)(.*) (ms|usec)/;
if ($2 == "") {
print "U"; # avoid cacti errors, but do not fake rrdtool stats
}elsif ($3 eq "usec") {
print $2/1000; # re-calculate in units of "ms"
}else{
print $2;
}
The output of the script was "U", an empty reading. As the ping statistics above show, Debian 6's ping prints "icmp_req" rather than "icmp_seq", so the grep filter matches nothing. Modifying the script accordingly solves the problem.

root@firefly:~# cat /usr/share/cacti/site/scripts/
# take care for tcp:hostname or TCP:ip@
$host = $ARGV[0];
$host =~ s/tcp:/$1/gis;
open(PROCESS, "ping -c 1 $host | grep icmp_req | grep time |");
$ping = <PROCESS>;
$ping =~ m/(.*time=)(.*) (ms|usec)/;
if ($2 == "") {
print "U"; # avoid cacti errors, but do not fake rrdtool stats
}elsif ($3 eq "usec") {
print $2/1000; # re-calculate in units of "ms"
}else{
print $2;
}

Sample Output:

root@firefly:~# perl  /usr/share/cacti/site/scripts/

There goes the output. The poller will now plot the graph at 286 milliseconds.

Problem solved.
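The root cause can also be reproduced outside Cacti. Below is a small Python illustration (not the Cacti script itself; the sample output lines are hypothetical, since the real hostnames are not shown above) of why a filter looking for icmp_seq comes up empty on Debian 6:

```python
import re

# Debian 6's ping prints "icmp_req" where Ubuntu's prints "icmp_seq",
# so 'grep icmp_seq' matches nothing on Debian 6 and the script emits "U".
debian_line = "64 bytes from icmp_req=1 ttl=53 time=303 ms"  # hypothetical
ubuntu_line = "64 bytes from icmp_seq=1 ttl=47 time=302 ms"  # hypothetical

print("icmp_seq" in debian_line)  # False: the original grep finds no line
print("icmp_seq" in ubuntu_line)  # True

# Once the right line gets through, the latency parses as expected:
m = re.search(r"time=(\S+) (ms|usec)", debian_line)
print(m.group(1))  # 303
```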



Saturday, October 8, 2011

RAID Parity (From Wikipedia)

Many RAID levels employ an error protection scheme called "parity". Most use the simple XOR parity described in this section, but RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois Field[7] or Reed-Solomon error correction. XOR parity calculation is a widely used method in information technology to provide fault tolerance in a given set of data.
In Boolean logic, there is an operation called exclusive or (XOR), meaning "one or the other, but not both." For example:
0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0
The XOR operator is central to how parity data is created and used within an array. It is used both for the protection of data, as well as for the recovery of missing data.
As an example, consider a simple RAID made up of 6 drives (4 for data, 1 for parity, and 1 for use as a hot spare), where each drive has only a single byte worth of storage (a '-' represents a bit, the value of which doesn't matter at this point in the discussion):
Drive #1: -------- (Data)
Drive #2: -------- (Data)
Drive #3: -------- (Data)
Drive #4: -------- (Data)
Drive #5: -------- (Hot Spare)
Drive #6: -------- (Parity)
Let the following data be written to the data drives:
Drive #1: 00101010 (Data)
Drive #2: 10001110 (Data)
Drive #3: 11110111 (Data)
Drive #4: 10110101 (Data)
Drive #5: -------- (Hot Spare)
Drive #6: -------- (Parity)
Every time data is written to the data drives, a parity value is calculated in order to be able to recover from a data drive failure. To calculate the parity for this RAID, a bitwise XOR of each drive's data is calculated as follows, the result of which is the parity data:
00101010 XOR 10001110 XOR 11110111 XOR 10110101 = 11100110
The parity data 11100110 is then written to the dedicated parity drive:
Drive #1: 00101010 (Data)
Drive #2: 10001110 (Data)
Drive #3: 11110111 (Data)
Drive #4: 10110101 (Data)
Drive #5: -------- (Hot Spare)
Drive #6: 11100110 (Parity)
Suppose Drive #3 fails. In order to restore the contents of Drive #3, the same XOR calculation is performed using the data of all the remaining data drives and (as a substitute for Drive #3) the parity value (11100110) stored in Drive #6:
00101010 XOR 10001110 XOR 11100110 XOR 10110101 = 11110111
With the complete contents of Drive #3 recovered, the data is written to the hot spare, and the RAID can continue operating.
Drive #1: 00101010 (Data)
Drive #2: 10001110 (Data)
Drive #3: --Dead-- (Data)
Drive #4: 10110101 (Data)
Drive #5: 11110111 (Hot Spare)
Drive #6: 11100110 (Parity)
At this point the dead drive has to be replaced with a working one of the same size. Depending on the implementation, the new drive either takes over as a new hot spare drive and the old hot spare drive continues to act as a data drive of the array, or (as illustrated below) the original hot spare's contents are automatically copied to the new drive by the array controller, allowing the original hot spare to return to its original purpose as an emergency standby drive. The resulting array is identical to its pre-failure state:
Drive #1: 00101010 (Data)
Drive #2: 10001110 (Data)
Drive #3: 11110111 (Data)
Drive #4: 10110101 (Data)
Drive #5: -------- (Hot Spare)
Drive #6: 11100110 (Parity)
This same basic XOR principle applies to parity within RAID groups regardless of capacity or number of drives. As long as there are enough drives present to allow for an XOR calculation to take place, parity can be used to recover data from any single drive failure. (A minimum of three drives must be present in order for parity to be used for fault tolerance, because the XOR operator requires two operands, and a place to store the result).
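The walkthrough above can be condensed into a few lines of code. This is just a sketch of the single-byte example, using Python's bitwise XOR operator:

```python
# The four data bytes from the example above, one per drive.
d1, d2, d3, d4 = 0b00101010, 0b10001110, 0b11110111, 0b10110101

# Parity is the bitwise XOR of all data drives.
parity = d1 ^ d2 ^ d3 ^ d4
print(format(parity, "08b"))  # 11100110

# If drive #3 dies, XOR the survivors with the parity to rebuild it.
recovered = d1 ^ d2 ^ d4 ^ parity
print(format(recovered, "08b"))  # 11110111
assert recovered == d3
```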


Wednesday, October 5, 2011

The DHCP Server

The Dynamic Host Configuration Protocol (DHCP) provides a method for hosts on a network to request, and be granted, configuration information including the addresses of routers and name servers. Usually, there is a single DHCP server per network segment, but in some cases there may be more than one. IP addresses are assigned by DHCP from a range of addresses, i.e. a pool. The assignments are made for a configurable amount of time, i.e. the lease period, and may be renewed by the client after the lease expires. If desired, the server can be configured to accept requests only from a specific set of MAC addresses.
Typically, the server supplies information about the network's subnet address and netmask, its default gateway, domain name and DNS server, time servers, and the location of kickstart configuration files as required.
In Red Hat Enterprise Linux, the DHCP service is performed by the dhcpd daemon.

Service Profile: DHCP

·         Type: SystemV-managed service
·         Package: dhcp
·         Daemon: /usr/sbin/dhcpd
·         Script: /etc/init.d/dhcpd
·         Ports: 67, 68
·         Configuration:
o   /etc/dhcpd.conf
o   /var/lib/dhcpd/dhcpd.leases

DHCP Server Configuration

Installing RPMs

First, the required RPM dhcp* has to be installed. As the YUM server is already set up, this can be done by running the command-
[root@prime ~]#yum install dhcp

Preparing the Configuration Files

1.      The configuration file /etc/dhcpd.conf is a blank file after installing the RPM. We copy a sample file from /usr/share/doc/dhcp-*/dhcpd.conf.sample into the /etc directory.
#cp /usr/share/doc/dhcp-*/dhcpd.conf.sample /etc/dhcpd.conf

2.      The configuration file must be modified as per requirement. A sample file can be seen below. The text in italics has been added/modified by the user-

ddns-update-style interim;
ignore client-updates;

subnet netmask {

# --- default gateway
  option routers;
  option subnet-mask;

  option domain-name      "";
  option domain-name-servers;

  option time-offset -18000;   # Eastern Standard Time

#the range can be set as per requirements
  range dynamic-bootp;
  default-lease-time 21600;
  #max-lease-time 43200;
}

Binding IP addresses into MAC addresses

The configuration file can be modified to bind specific IP addresses to specific MAC addresses. This causes those MAC addresses to obtain fixed IPs every time the client queries the DHCP server. This can be done with the following procedure-
1.      If an IP address can be pinged successfully, it is possible to obtain the MAC address of that host with the arp command.

2.   The following lines should be added within the subnet section of a network definition in /etc/dhcpd.conf-

#the following section binds IP addresses to specific
#MAC addresses
#the name of the host section is user defined and does
#not affect configuration
host {

                hardware ethernet 08:00:27:0C:2A:14;
}
host {

                hardware ethernet 00:10:33:0E:C6:1B;
}
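For reference, a complete binding entry pairs the MAC address with a fixed-address directive. The host name and IP address below are placeholders for illustration, not values from this setup:

```
host station1 {
                hardware ethernet 08:00:27:0C:2A:14;
                fixed-address;    # placeholder IP within the subnet's range
}
```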

Initiating the DHCP Service

1.      The syntax of the configuration file can be checked using the command-
[root@prime ~]# service dhcpd configtest

2.      Now that the configuration file is ready, the dhcpd service can be started and set to launch at boot. This can be done by-
[root@prime ~]#service dhcpd restart
[root@prime ~]#chkconfig dhcpd on

3.      All IP lease information can be found in the /var/lib/dhcpd/dhcpd.leases file.

RPM Forge for CentOS

I'm using CentOS 5.5. I'll skip the intro and go straight to the commands-

# rpm -ivh rpmforge-release-0.5.2-2.el5.rf.*.rpm

And you're done!!! :D
Try yum install package-name. Should work.

OPENWEBMAIL Repository for Red Hat

# lftpget

# rpm -ivh openwebmail.repo

# yum install openwebmail

And that's it!!! :D

RHEL vs. CentOS

When it comes to small business to enterprise level servers, a lot of professionals choose Red Hat Enterprise Linux (RHEL). However, I prefer to deploy Community Enterprise Operating System (CentOS). Here are my arguments-

  • When it comes to OS architecture, both RHEL and CentOS are identical.
  • Both RHEL and CentOS are capable of taking really heavy loads while providing smooth, stable service.
  • CentOS is actually a clone of RHEL. All the functions and features that RHEL offers, CentOS offers as well.
But the MAJOR difference is that RHEL is a COMMERCIAL product. Yes, surely you can download a copy of RHEL and nobody will ever sue you for that (as far as I know). But a free copy of RHEL has some major limitations.
  • Finding and synchronizing with an online software repository for RHEL is pretty tough.
  • RHEL maintains its own software and security repository, but it comes at a price.
  • CentOS also maintains its own software repository, but it doesn't cost a thing.
  • CentOS can be easily synchronized with prominent online repositories such as RPM Forge and
  • CentOS ISO can be easily downloaded via FTP/HTTP/Torrent.
The only problem with CentOS is that, since it doesn't cost anything, it is upgraded less rapidly than RHEL. Personally, I'm not in such a hurry to upgrade my server OS. I'd rather be patient.

Can't wait for CentOS 6. 

CentOS rocks!!! 

\m/ ^_^ \m/

The YUM Server

The RPM Package Manager (RPM) is used for distribution, installation, upgrading, and removal of software on Red Hat systems. Originally designed for Red Hat Linux, RPM is now used by many GNU/Linux distributions. [1] The RPM system consists of a local database, the rpm executable, and rpm package files.
The local RPM database is maintained in /var/lib/rpm. The database stores information about installed packages, such as file attributes and package prerequisites. Software installed using rpm is distributed through rpm package files, which are essentially compressed archives of files and associated dependency information.

Dependency Problems

When installing software via rpm, one of the problems that users face is dependency errors. The primary drawback of RPM is that it is not able to resolve dependencies, i.e. additional RPMs that have to be preinstalled before a certain RPM can be installed. In the worst case, the prerequisite rpm itself requires another rpm to be preinstalled, and it is up to the user to locate and install each of them.

Solving Dependency Problems with YUM

To solve the problems of dependency resolution and package location, volunteer programmers at Duke University developed the Yellowdog Updater, Modified, or YUM for short. The system is based on repositories that hold RPMs and a repodata file list. The yum application can call upon several repositories for dependency resolution, fetch the RPMs, and install the needed packages. The following example illustrates the YUM installation procedure-
[root@prime ~]# yum install zsh
Dependencies Resolved

 Package      Arch       Version          Repository        Size
 zsh          i386       4.2.6-1          rhel-debuginfo    1.7 M

Transaction Summary
Install      1 Package(s)        
Update       0 Package(s)        
Remove       0 Package(s)        

Total download size: 1.7 M
Downloading Packages:
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing: zsh                     ######################### [1/1]

Installed: zsh.i386 0:4.2.6-1

Configuring YUM

Installing RPMs

  1. createrepo (to create the repository)
  2. vsftpd (FTP would be used to transfer the necessary files from the Server to the client)

The IP address of my Server is with hostname The first thing that YUM requires is to prepare a repository from which it can access all the RPMs and fetch any of them as necessary. To do this, the only required RPM is createrepo. The RPM can be found in the Server directory of the RHEL installation DVD.
We would configure the YUM server in such a manner that any Red Hat client machine can use the repository of to install RPMs. To do this, the client machine would use the FTP service, so the ftp service must be installed and running if it is not already. These RPMs can easily be installed with the commands-
[root@prime ~]# rpm -ivh createrepo-*.rpm
[root@prime ~]# rpm -ivh vsftpd-*.rpm
[root@prime ~]# service vsftpd restart
[root@prime ~]# chkconfig vsftpd on

Creating the Repository

Creating a repository can be done in the following procedure-
1.      Copying the entire Server directory to the hard drive. In this case, we copy the directory to FTP home directory /var/ftp/pub/Server

cp -r /mnt/Server /var/ftp/pub/Server

2.      Creating the repodata directory that contains information about all the RPMs stored in the directory.
 createrepo -v /var/ftp/pub/Server

Preparing the Configuration File

The YUM configuration files are located in the /etc/yum.repos.d/ directory. The file can have any name, but it must end with the .repo extension. We use the default /etc/yum.repos.d/rhel-debuginfo.repo as a reference. The lines in italics are added/modified by the user.

#cp /etc/yum.repos.d/rhel-debuginfo.repo /etc/yum.repos.d/MyYumServer.repo

#repository name

#the name can be any name
name=prime YUM server

#the location of the repository
#access protocol may be ftp://, http:// or file://

#enabling or disabling the repository

#enabling or disabling gpg checking for digital signature

#gpg key database
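Putting the comments above together, a complete server-side .repo file might look like the following. The repository id and baseurl here are placeholders for illustration, not the values from this setup:

```
[myyumserver]
#the name can be any name
name=prime YUM server
#the location of the repository
#access protocol may be ftp://, http:// or file://
baseurl=ftp://yum.example.local/pub/Server
#enabling or disabling the repository
enabled=1
#enabling or disabling gpg checking for digital signature
gpgcheck=0
```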

YUM Commands

The following list contains frequently used commands associated with YUM.
1.      yum list - If configured correctly, this command returns the list of all available RPMs in the repository.

2.      yum install rpm-name - If the RPM exists in the repository, this command installs it and automatically resolves any dependencies, provided the dependency RPMs also exist in the repository.

3.      yum clean all - This command clears the YUM cache. It is particularly useful if the installation of an RPM was cancelled or aborted abnormally.

4.      yum remove rpm-name - This command will remove the package but will keep any configuration file. Usually the configuration file is renamed filename.rpmsave

5.      yum erase rpm-name - This command will remove the package including any configuration files.

Client End Configuration

To use the YUM server set up in, any Red Hat client on the network needs to modify the /etc/yum.repos.d/rhel-debuginfo.repo configuration file as below. The lines in italics are added/modified by the user.

#cp /etc/yum.repos.d/rhel-debuginfo.repo /etc/yum.repos.d/myyum.repo

#repository name

#the name can be any name
name=prime YUM repository

#the location of the repository
#access protocol may be ftp://, http:// or file://

#enabling or disabling the repository

#enabling or disabling gpg checking for digital signature

#gpg key database


Sunday, October 2, 2011

To Do List in Ubuntu

I have recently started using Ubuntu 10 on my office PC. So far, I have been enjoying the experience. It's fast, stable, and gave me a break from the Windows interface after 10 years. But like most people, I have had a bit of trouble with necessary software. I have managed to find alternative software, and Wine handles Windows-based software, but there are still a couple of things that I could not manage. For example, I prefer the 'to do list' of google-gadgets when I work in Windows, but the Linux version of google-gadgets has no 'to do list' yet. After trying a couple of methods, Google finally pointed me to a solution.

I have already installed 'tasque' with a simple apt-get command and I love this lightweight organizer software :D.

Thank you google and thank you publisher.

Linux Rocks!!!