Create a Clean RHEL/CentOS 6 Template for VMware

Here is how I create templates for VMware.

1.) Update the OS and install VMware Tools

# yum update
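
For the tools, one simple route, assuming the open-vm-tools package is available in one of your configured repositories, is:

# yum install open-vm-tools

Otherwise, use the classic method: choose "Install VMware Tools" in vSphere, mount the ISO, extract the tarball and run the bundled vmware-install.pl installer.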

2.) Clean the yum cache

# yum clean all

3.) Remove SSH host keys

# rm -f /etc/ssh/ssh_host_*

4.) Remove MACs and UUIDs from the network configuration files.

# sed -i -r '/^(HWADDR|UUID)=/d' /etc/sysconfig/network-scripts/ifcfg-eth*

5.) Remove persistent device rules

# rm -f /etc/udev/rules.d/70-persistent-*

6.) Force a log rotation and clean the log files

# logrotate -f /etc/logrotate.conf
# rm -f /var/log/*-???????? /var/log/*.gz
# cat /dev/null > /var/log/audit/audit.log
# cat /dev/null > /var/log/wtmp
# cat /dev/null > /var/log/messages

7.) Clean /tmp and /var/tmp

# rm -rf /tmp/*
# rm -rf /var/tmp/*

8.) Un-configure the system if you're not using a customization specification

# touch /.unconfigured

9.) Remove the shell history

# rm -f ~/.bash_history
# unset HISTFILE

10.) Finally, power off the system.

# poweroff
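
For convenience, here is the whole cleanup as a single script; a minimal sketch of the steps above (the script name is just my suggestion), to be run as root right before converting the VM to a template:

#!/bin/bash
# seal-template.sh - clean a RHEL/CentOS 6 VM before converting it to a template
yum clean all
rm -f /etc/ssh/ssh_host_*
sed -i -r '/^(HWADDR|UUID)=/d' /etc/sysconfig/network-scripts/ifcfg-eth*
rm -f /etc/udev/rules.d/70-persistent-*
logrotate -f /etc/logrotate.conf
rm -f /var/log/*-???????? /var/log/*.gz
cat /dev/null > /var/log/audit/audit.log
cat /dev/null > /var/log/wtmp
cat /dev/null > /var/log/messages
rm -rf /tmp/* /var/tmp/*
touch /.unconfigured              # skip if you use a customization specification
rm -f /root/.bash_history         # root's history; repeat for other users if needed
poweroff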

NIC Channel Bonding in Linux!

As you know, network channel bonding groups multiple physical network interfaces into one virtual interface to provide redundancy and increased throughput. In Linux we have seven (7) bonding modes (mode 0 through mode 6): balance-rr (0), active-backup (1), balance-xor (2), broadcast (3), 802.3ad (4), balance-tlb (5) and balance-alb (6). You can check the details of each mode in the kernel's bonding documentation and select the best one for your environment, but from my experience, most of the time you can go with mode 1, mode 4 or mode 6.

If you only need fault tolerance, you can use mode 1 (active-backup) as your bonding mode. If you need load balancing plus fault tolerance, go with mode 4 or mode 6; the choice depends on your underlying physical network switch. If the switch supports and is configured for IEEE 802.3ad dynamic link aggregation, you can use mode 4; if not, simply go with mode 6. Also, most of our physical boxes have more than one network interface, and I'm pretty sure most of us use only one of them to connect to the network. That is a single point of failure, so if you are doing any production deployment, make sure to avoid single points of failure as much as possible. NIC channel bonding protects our Linux servers against network port failures, network cable failures and even NIC failures (if you have two physical network cards).

Okay, cool! How do we configure that?

If you are using Fedora/RHEL or a derived distribution like CentOS or Oracle Linux, the NIC channel bonding process is quite simple, but you have to edit a few files. First, to enable the bonding kernel module for your virtual network interface bond0, create a new file called "bonding.conf" in the "/etc/modprobe.d/" directory with the contents shown below. If you need to create more than one bonding interface, add a separate alias for each of them as "alias bondX bonding".

# cat /etc/modprobe.d/bonding.conf

alias bond0 bonding
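
If you later need a second bonding interface, the same file simply carries one alias per interface, for example:

alias bond0 bonding
alias bond1 bonding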

Then you have to create a new network interface configuration file for the "bond0" virtual interface, named "ifcfg-bond0", in the "/etc/sysconfig/network-scripts" directory; this is where all the network-related settings are defined.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
NM_CONTROLLED=no
USERCTL=no
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.11
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.2
DNS2=192.168.0.3
DOMAIN=hasitha.org
BONDING_OPTS="mode=1 miimon=100"
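
As a side note, if your switch supports 802.3ad and you go with mode 4 instead, only this last line changes; a typical variant (lacp_rate=1 requests fast LACP and is optional, my assumption of a common setup):

BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"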

Okay! Now we have created the bond0 interface, and we need to configure the eth0 and eth1 network interfaces as slave interfaces for the bond0 virtual interface. For that we need to edit the "ifcfg-eth0" and "ifcfg-eth1" files located in the same directory, "/etc/sysconfig/network-scripts".

# cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
HWADDR=XX:XX:XX:XX:XX:XX
NM_CONTROLLED=no
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
SLAVE=yes
MASTER=bond0

# cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
HWADDR=XX:XX:XX:XX:XX:XX
NM_CONTROLLED=no
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
SLAVE=yes
MASTER=bond0

Haha! Now you are almost done! Time to restart the network service or, if possible, reboot the server. After that, you can see the newly configured bond0 virtual interface up and running.
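
On RHEL/CentOS 6 that typically means:

# service network restart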

# ifconfig

bond0 Link encap:Ethernet HWaddr 00:10:E0:22:50:70
inet addr:192.168.0.11 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::210:e0ff:fe22:5070/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:6001623 errors:0 dropped:928245 overruns:0 frame:0
TX packets:2547959 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:853632915 (814.0 MiB) TX bytes:551819829 (526.2 MiB)

eth0 Link encap:Ethernet HWaddr 00:10:E0:22:50:70
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:5073378 errors:0 dropped:0 overruns:0 frame:0
TX packets:2547964 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:768161611 (732.5 MiB) TX bytes:551820999 (526.2 MiB)

eth1 Link encap:Ethernet HWaddr 00:10:E0:22:50:70
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:928245 errors:0 dropped:928245 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:85471304 (81.5 MiB) TX bytes:0 (0.0 b)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:24102239 errors:0 dropped:0 overruns:0 frame:0
TX packets:24102239 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5913772444 (5.5 GiB) TX bytes:5913772444 (5.5 GiB)
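
Besides ifconfig, you can verify the bond through the kernel's proc interface; for mode 1 it shows the bonding mode (active-backup), the MII status and which slave is currently carrying the traffic:

# cat /proc/net/bonding/bond0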

If you're using Debian/Ubuntu or a derived distribution, the process is also simple but quite different from the Fedora/RHEL-based distributions. Here you need to install an additional package called "ifenslave" ("attach and detach slave network devices to a bonding device") to support network bonding.

# apt-get install ifenslave

Now we have to enable the bonding kernel module on the Debian/Ubuntu-based system; for that we append the "bonding" keyword to the "/etc/modules" file.

# echo "bonding" >> /etc/modules
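
That takes effect at the next boot; to load the module immediately without rebooting, you can run:

# modprobe bonding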

Now you can edit the network interface configuration file to configure the virtual bonding interface (bond0) and the slave "eth0" and "eth1" interfaces. Please note that on Fedora/RHEL-based systems each network interface has its own configuration file, while on Debian/Ubuntu-based systems there is only the "/etc/network/interfaces" file, which holds the configuration for all network interfaces.

# cat /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
bond-master bond0

auto eth1
iface eth1 inet manual
bond-master bond0

auto bond0
iface bond0 inet static
address 192.168.0.12
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 192.168.0.2 192.168.0.3
dns-search hasitha.org
bond-mode 1
bond-miimon 100

Finally, restart the network and you're almost done. You can simply unplug a cable or "ifdown" one network interface and check your network connectivity. Feel free to comment if you have anything to clarify!
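
On Debian/Ubuntu the equivalent restart is typically "service networking restart". As a quick failover test (a sketch using the RHEL-side addressing from this post), take one slave down and confirm the bond stays reachable:

# ifdown eth0
# ping -c 3 192.168.0.1
# grep "Currently Active Slave" /proc/net/bonding/bond0
# ifup eth0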

What's up 2013? Remarkable Year for Me

Hey there! I'm back on blogging again, and it's the end of the year. Yeah, this is one of my best years; I accomplished a lot this year, getting closer to my dream. I hope you guys had the same, and this post is all about my life through this year. Before starting the year 2013 in review, I just want to tell you that most of it goes back to 2012. In August last year I did my RHCE exam, and at that time I was the youngest Red Hat Certified Engineer in Sri Lanka; on September 24th I joined E-W Information Systems Ltd. My first project was the Ceylon Electricity Board Disaster Recovery Site Project, and the year 2013 for me started with that. In March we finished the project, and afterwards I was a core member in most of the enterprise-level projects. It was good exposure for me, and it led to my becoming the Infrastructure Implementation team head for the CHOGM 2013 Event Management System Project and the Airport and Aviation Sri Lanka ERP Project.

On September 30th I went to Malaysia for the "Dell Blade Master Technical Training". It was my very first overseas training session, and it was awesome. We saw Dell's massive data centers and learned all about Dell blade servers. It was a 2-day session, and we were back in Sri Lanka on the morning of October 3rd; from the 4th onward the CHOGM EMS implementation started, and this period was the hardest of my whole year. Anyway, I met new friends, worked with them on the project and learned many new things. Yeah, it was kind of fun, and finally we got the RFID gates up before the CHOGM Opening Ceremony started.

I was busy with my work, and I missed so many FOSS community events throughout the whole year. By the way, my only event contribution was the Fedora 18 Technical Seminar held at the University of Rajarata, where I did two presentations, one on Fedora 18 features and one on virtualization. You can find the Digit Magazine article about it here. In 2014 I surely wanna contribute more to the FOSS community.

In other news, I bought a Nikon D7000 with two lenses, an 18-55mm and a 55-200mm. I also sold my old Galaxy Ace and bought an all-new Galaxy S Duos, but it is too damn slow :( I also got a feeling for the new iPhone because of its sexy iOS 7 update, but as you know, I'm an Android guy, always! :P And I have a love for the Google Nexus series, though it is a dream as of yet; maybe next year I'll be able to make that dream come true and buy a Nexus 5.

Also, the movie "Butterfly Symphony" really struck me; it is the most heart-touching movie I've watched this year. And I have lots of new music tracks, and I love most of them, so I don't know which is my favorite this year. Anyhow, this is a bit about how the year 2013 affected me. I don't remember everything that went down this year, but I accomplished a lot. So this is the end of this post, my friends, and I wish all of you a Merry Christmas and a Happy New Year. Until I meet you guys again, Good Bye!

RHCS – Cluster from Scratch | Part 02

Okay, after a long time I'm back; anyway, let's get started with clustering :D We are going to implement a two (2) node active/passive cluster to provide a continuous web service to end users. In my scenario I'm using two virtual servers as the cluster nodes and network-attached storage (NAS) as shared storage for both servers. There are also three (3) virtual networks: a public network, a private network (the cluster heartbeat network) and a storage network. All of these virtual servers, networks and storage are deployed on a CentOS 6 environment using a KVM-based hypervisor; the virtual resources behave just like actual physical resources.

[Figure: initial architecture of the two-node active/passive web cluster]

The figure above shows the initial architecture of our high availability web service deployment. Each virtual server has three (3) network interface cards (NICs) to connect to the public, private and storage networks. In addition to the servers, the network-attached storage has two (2) IP addresses; we use both of them to configure multipathing, which provides an efficient and reliable storage infrastructure for our cluster deployment. Here are the configuration details of the servers and storage.

Server 01
2.6 GHz, 2 vCPUs with 1 GB RAM
Hostname : web1.hasitha.org
NIC 01 : 192.168.0.11 (Public Network)
NIC 02 : 192.168.1.11 (Private Network)
NIC 03 : 192.168.2.11 (Storage Network)

Server 02
2.6 GHz, 2 vCPUs with 1 GB RAM
Hostname : web2.hasitha.org
NIC 01 : 192.168.0.12 (Public Network)
NIC 02 : 192.168.1.12 (Private Network)
NIC 03 : 192.168.2.12 (Storage Network)

Network Attached Storage (NAS)
NIC 01 : 192.168.2.1 (Storage Network)
NIC 02 : 192.168.2.2 (Storage Network)
LUN 01 : 10GB (iqn.2013-08-10.storage.hasitha.org:web)
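
Before moving on, it helps to have consistent name resolution on both nodes; here is a minimal /etc/hosts sketch based on the addresses above (the -hb suffix is just my hypothetical naming for the heartbeat interfaces):

192.168.0.11   web1.hasitha.org   web1
192.168.0.12   web2.hasitha.org   web2
192.168.1.11   web1-hb
192.168.1.12   web2-hb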

Now we are ready to go. The next part is all about configuring the servers, so wait till then; see you soon.

RHCS – Cluster from scratch | Part 01

In my previous post, I briefly touched on the Red Hat High Availability Add-On for Red Hat Enterprise Linux and how it eliminates single points of failure: if the active cluster member on which a high availability service group is running becomes inoperative, the high availability service starts up again (fails over) on another cluster node without interruption.

Okay, let's get started with high availability clustering! But first of all, let's understand some basic concepts. If you want a clear and full understanding of all of these things, I highly recommend reading the Red Hat Enterprise Linux 6 Cluster Administration Guide; it is the best resource for RHEL6 HA clustering. You can also use CentOS or Oracle Linux as alternatives to follow this article series without using Red Hat Enterprise Linux.

Cluster Node
A cluster node is a server that is configured to be a cluster member. Normally, shared storage (SAN, NAS) is available to all cluster members.

Cluster Resources
Cluster resources are the things you are going to keep highly available, and all of them need to be available to every cluster node. All or some of these resources can be associated with an application you plan to keep highly available.

Cluster Service Group
A collection of related cluster resources that defines the actions to be taken during a failover operation on the access point of the resilient resources. These resilient resources include applications, data, and devices.

Fencing
Fencing is the method that cuts off a node's access to cluster resources (shared storage, etc.) if that node loses contact with the rest of the nodes in the cluster.

There are some more things related to clustering beyond these basic components, and we will learn most of them as we deploy our high availability web service. So wait till the next post ;)

RHCS – Cluster from scratch

According to Red Hat, the Red Hat Cluster Suite (RHCS) High Availability Add-On provides on-demand failover to make applications highly available. It delivers continuous availability of services by eliminating single points of failure. Clustering is a group of computers (called nodes or members) working together as a team to provide continued service when system components fail.

Assume we are running a critical database service on a standalone server. If a software or hardware component fails on that server, administrative intervention is required, and the database service will be unavailable until the crashed server is fixed. With clustering, that database service is automatically restarted on another available node in the cluster without administrator intervention, and the service remains continuously available to the end users. A cluster can be deployed as active/passive (one active node and one standby node) or active/active (both nodes active) to suit our clustering needs.

In this series of "RHCS – Cluster from scratch" articles, I'm planning to explain in depth how to deploy a high availability web service as an active/passive cluster using the Red Hat High Availability Add-On on Red Hat Enterprise Linux 6.

Storage LUN Online Re-Scan on RHEL6

Storage maintenance and downtime always seem to come together in production environments, but it doesn't have to be that way, because re-scanning and re-sizing (expanding) iSCSI (NAS) and FC (SAN) LUNs is quite easy under Linux (not only Red Hat Enterprise Linux 6). Okay then, assume we have a 10GB LUN on the iSCSI (NAS) storage, with four paths to the server, and those four paths are multipathed. Now we want to re-size this 10GB LUN to 20GB on the storage and perform an online re-scan on the RHEL6 server to add the extra 10GB of blocks to the existing 10GB multipathed volume.

[root@server ~]# multipath -ll
mpatha (1IET 00010001) dm-4 IET,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:1 sda 8:0 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 5:0:0:1 sdb 8:16 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:0:1 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 2:0:0:1 sdd 8:48 active ready running

You can see there are four paths to the "mpatha" multipathed volume, with the sda, sdb, sdc and sdd block devices underlying it. Now we can expand the LUN on the iSCSI (NAS) storage and re-scan the block devices to add the extra expanded blocks to the "mpatha" volume.

[root@server ~]# echo 1 > /sys/block/sda/device/rescan
[root@server ~]# echo 1 > /sys/block/sdb/device/rescan
[root@server ~]# echo 1 > /sys/block/sdc/device/rescan
[root@server ~]# echo 1 > /sys/block/sdd/device/rescan

With the above commands we re-scanned the "sda", "sdb", "sdc" and "sdd" devices, which means we re-sized all four paths of the "mpatha" volume. But device-mapper-multipath is still using the old device sizes, so we have to tell multipathd that the devices underlying "mpatha" have been re-sized, using:

[root@server ~]# multipathd -k"resize multipath mpatha"
ok

or by simply reloading multipathd:

[root@server ~]# service multipathd reload
Reloading multipathd: [ OK ]

Now we have re-scanned the block devices underlying the "mpatha" volume and let device-mapper-multipath know that "mpatha" was re-sized. That means we are almost done with our iSCSI (NAS) storage re-scanning. Let's check it out.

[root@server ~]# multipath -ll
mpatha (1IET 00010001) dm-4 IET,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:1 sda 8:0 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 5:0:0:1 sdb 8:16 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:0:1 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 3:0:0:1 sdc 8:32 active ready running

Oh yeah, baby, it's working now. :P See the "multipath -ll" output: the "mpatha" volume is now 20GB.

In addition to iSCSI (NAS), if you are in an FC (SAN) environment the online re-scan is slightly different, but just as easy as what we did for iSCSI. We can perform an FC (SAN) LUN online re-scan using:

echo "- - -" > /sys/class/scsi_host/hostX/scan

where "hostX" is your HBA (Host Bus Adapter). That's it.
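
If the server has several HBAs, a small loop saves typing; a minimal sketch, assuming every SCSI host should be scanned:

# re-scan all channels, targets and LUNs on every SCSI host
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done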