Showing posts from 2010

Automated EMC Clariion snapshots

We have a peculiar situation where running multiple snapview clone syncs results in I/O wait on the production application. The business requirement is to take a backup of the production copy before running the EOD (end of day) process and another copy immediately after the EOD. The EOD run takes anywhere between 45 minutes and 1 hour, and this is when I/O is at its maximum. The EOD window is not long enough to schedule the post-EOD snapview sync after the pre-EOD clone has been split; there is always an overlap, and that is the period when users face extreme slowness. We could stripe the production luns over multiple RAID groups, but a space constraint prevents us from doing this. The solution we found was to use a snapshot for the post-EOD copy and keep the normal clone sync for the pre-EOD copy.

1. We add the luns to the reserved clone group. The sizing depends on the size of the snapshot and the percentage of change.
2. We create a snapshot on the source luns and add it to the destin...
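For what it is worth, here is a minimal sketch of how the post-EOD snapshot step could be scripted with naviseccli. The SP address, lun number, session and snapshot names below are hypothetical placeholders, and the exact snapview flags should be verified against the Navisphere CLI guide for your FLARE release; treat this as an assumption, not a tested recipe.

#!/bin/sh
# Hypothetical sketch: take the post-EOD snapshot with naviseccli.
# SP address, lun number, session and snapshot names are placeholders,
# and the reserved luns are assumed to be configured as described above.
SP=spa_prod              # storage processor to send commands to (assumption)
SRC_LUN=25               # source lun number (assumption)
SNAP=post_eod_snap
SESSION=post_eod_session

# Create a snapshot object on the source lun
naviseccli -h $SP snapview -createsnapshot $SRC_LUN -snapshotname $SNAP

# Start the snapview session that tracks the copy-on-write data
naviseccli -h $SP snapview -startsession $SESSION -lun $SRC_LUN

# Activate the snapshot so the backup host can mount and back it up
naviseccli -h $SP snapview -activatesnapshot $SESSION -snapshotname $SNAP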

Linux LVM with EMC powerpath

How do you plan to use Linux LVM with multipathing software like EMC powerpath? The trick is to mask the non-emcpower devices out of LVM's view. This ensures that any path failure is handled by the multipathing software rather than by LVM; the underlying path-management complexity is offloaded to the multipathing software. Once this is done, issuing pvscan shows only the emcpower devices. Enable a filter in /etc/lvm/lvm.conf that accepts the emcpower devices and rejects all the non-emcpower devices:

filter = [ "a|/dev/emcpower.*|", "r|/dev/sd.*|", "r|/dev/cdrom|" ]

Ensure only one filter line is enabled.
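To verify the filter actually took effect, the following standard LVM commands are enough; nothing here is powerpath specific:

# After editing /etc/lvm/lvm.conf, check what LVM can now see
lvmdiskscan              # every block device LVM will consider
pvscan                   # physical volumes should all be /dev/emcpower*
pvs -o pv_name,vg_name   # no /dev/sd* path should appear in the output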

Ping, traceroute does not work but nslookup works

Have you ever come across a situation where nslookup resolves a name but ping or traceroute does not? nslookup queries the DNS server directly, while traditional Unix commands go through the system resolver, which does not rely on DNS unless specifically instructed via the /etc/nsswitch.conf file. Make sure the hosts line includes dns:

hosts: files dns
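A quick way to tell the two resolution paths apart: getent walks the same nsswitch.conf path that ping and traceroute use, while nslookup talks to the DNS server directly. The hostname below is just a placeholder:

# Resolves through /etc/nsswitch.conf, exactly like ping/traceroute
getent hosts www.example.com

# Queries the DNS server directly, bypassing nsswitch.conf
nslookup www.example.com

If getent fails while nslookup succeeds, the hosts line in /etc/nsswitch.conf is the culprit.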

Create a Veritas file system on Solaris

We have an impressive footprint of Sun servers, mainly M8000, M5000 and E2900, where we frequently have to create Veritas file systems on storage provisioned from Clariions. Since we handle both the Unix and the storage team, it becomes rather easy to fulfill business requirements and avoid possible delays. Once the storage is provisioned we start like this (the commands are also collected into a sketch after the list):

1. As Leadville drivers are in place on the Solaris 10 servers, use cfgadm -al to determine the attachment points for the external FC.
2. Use cfgadm -c configure, repeated for every Ap_Id, to change the state of the attachment point so the occupant hardware resource becomes usable by the OS.
3. devfsadm -C asks the drivers to probe for the new devices and build the device files; it also creates links such as /dev/rdsk/c1t0d0s0.
4. powercf -q creates the emcpower pseudo devices for the newly detected luns.
5. powermt config creates the powerpath configuration.
6. powermt save commits the ch...
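Here are the same steps as one shell sequence, for convenience. The attachment point c2 is a hypothetical example; take the real Ap_Ids from the cfgadm -al output. The truncated Veritas steps are not reproduced here.

#!/bin/sh
# Sketch of the lun-detection steps above on Solaris 10 with powerpath.
# c2 is a hypothetical FC attachment point; repeat the configure step
# for every Ap_Id reported by 'cfgadm -al'.
cfgadm -al                # list attachment points, note the FC Ap_Ids
cfgadm -c configure c2    # make the occupant hardware usable by the OS
devfsadm -C               # probe for new devices and rebuild /dev links
powercf -q                # create emcpower pseudo devices for the new luns
powermt config            # build the powerpath configuration
powermt save              # commit the powerpath configuration to disk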

Solaris 10 patching best practices

We have a mammoth task every quarter to patch all our Unix servers, and every month if the servers are in the DMZ. Over time we have formulated a few steps that make life easier when patching the Sun servers.

1. We apply the same patch cluster to every Sun server for a particular quarter, no matter when during that quarter the server gets patched.
2. Once the patch cluster is downloaded to the centralized location, it is distributed to every server under the / directory. This may sound a little off track, as the patch cluster consumes a good 1 GB of space and the patching process might fail if we do not have sufficient disk space on the / and /var file systems, but we had faced NFS issues before, where we had a lot of trouble bringing a system back up after the NFS server crashed in the middle of the patching process. (A small pre-flight space check is sketched after this list.)
3. We inform the database team beforehand (1 week) about the patch cluster release and also share the release notes with them, for them to identify any issues with the database a...
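Given the space concern in step 2, here is a minimal pre-flight sketch that refuses to proceed unless / and /var have headroom. The 2 GB threshold is an arbitrary assumption on top of the roughly 1 GB cluster size, not a Sun recommendation:

#!/bin/sh
# Hypothetical pre-patch check: abort unless / and /var have free space.
# The threshold is an assumed safety margin; adjust to your environment.
NEEDED_KB=2097152   # 2 GB expressed in KB

for fs in / /var; do
    # df -k prints: Filesystem kbytes used avail capacity Mounted on
    avail=`df -k $fs | tail -1 | awk '{print $4}'`
    if [ "$avail" -lt "$NEEDED_KB" ]; then
        echo "Not enough free space on $fs (${avail} KB available), aborting."
        exit 1
    fi
done
echo "Space check passed, OK to stage the patch cluster."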

Online LUN detection in RHEL 5

Things have changed much after migrating from RHEL 4 to RHEL 5. On RHEL 4 we could easily detect new luns by sending an echo command to rescan the HBA adapter driver; on RHEL 5 this does not work, or shall I say it is not available. Still, I feel there should be some mechanism to query either the HBA driver or the SCSI mid layer. I have tried the following on RHEL 5 to scan for the new luns and found it to work, but do not try it on production.

# for name in `ls /sys/class/scsi_device`; do echo 1 > /sys/class/scsi_device/$name/device/rescan; done

Or use blockdev per device:

# for name in `ls /dev`; do blockdev --rereadpt /dev/$name; done

Got a script to do the same:

####### Script starts ########
#!/bin/bash
FDBf=/tmp/fdbf
FDAf=/tmp/fdaf
PBf=/tmp/pbf
PAf=/tmp/paf
# before: fdisk output
fdisk -l | egrep Disk | egrep -v emcpower >> $FDBf
# before: powerpath pseudo devices
powermt display dev=all | egrep Pseudo >> $PBf
Lun=$1
for name in `ls /sys/class/fc_transport/ | awk -Ft '{print $3...
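For completeness, RHEL 5 also accepts a wildcard rescan through the SCSI host objects in sysfs, which covers every channel, target and lun on an HBA; same caveat as above, try it on a test box first:

#!/bin/bash
# Ask every SCSI host (HBA) to rescan all channels, targets and luns.
# The "- - -" wildcard stands for: all channels, all targets, all luns.
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > $host/scan
done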