Posts

Showing posts from April, 2012

Rsync for bulk transfer

Suppose you have to transfer a huge number of files, say 50000, over the WAN as a one-time data copy, each file 3-4 MB in size. There are a couple of options: tar (needs extra space, unless you pipe it through ssh), scp (sequential), or rsync. What I have done here is use rsync's --files-from option, which specifies exactly which files to transfer. On the source, create the file list (all 50000 lines) with ls > /tmp/a, then split the list by line count. The line count is the deciding factor: if you have a DS3 or faster link, make it smaller. A smaller line count means more chunks, and the number of chunks determines the concurrency, i.e. the number of simultaneous sessions you can start. split -l 1000 /tmp/a will create 50000/1000 files starting at xaa (you can choose the prefix; this is the default) in the present working directory, assumed here to be /tmp. Once done, use the below either as single commands or put them in a script and start your job. nohup /root/rsync/bin/rsync -avz -...
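The split-and-parallel-rsync flow above can be sketched as a script. The source directory, destination host, and chunk size below are hypothetical placeholders, not the exact ones from the post; DRY_RUN=1 (the default) only prints the rsync commands instead of launching them.

```shell
#!/bin/sh
# Sketch of the --files-from + split approach (hypothetical paths/hosts).
SRC_DIR=${SRC_DIR:-/tmp/rsync-demo-src}   # directory holding the files (assumed)
DEST=${DEST:-user@remotehost:/data/}      # hypothetical destination
WORK=/tmp/rsync-lists
DRY_RUN=${DRY_RUN:-1}                     # 1 = only print the commands

mkdir -p "$SRC_DIR" "$WORK"
# Seed a few dummy files so the sketch runs end to end; in the real case
# SRC_DIR already contains the ~50000 files.
for i in 1 2 3; do : > "$SRC_DIR/file$i"; done

# 1. Build the full file list, one name per line.
( cd "$SRC_DIR" && ls ) > "$WORK/filelist"

# 2. Split it into chunks (1000 lines each in the post) -> xaa, xab, ...
( cd "$WORK" && split -l 1000 filelist )

# 3. Start one background rsync per chunk; --files-from limits each
#    session to its own slice of the list, which gives the concurrency.
for list in "$WORK"/x??; do
    if [ "$DRY_RUN" = 1 ]; then
        echo "nohup rsync -avz --files-from=$list $SRC_DIR/ $DEST &"
    else
        nohup rsync -avz --files-from="$list" "$SRC_DIR/" "$DEST" > "$list.log" 2>&1 &
    fi
done
```

Each chunk gets its own log file, so a failed session can be rerun on its own list without touching the others.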

NFS common problems

Have you ever come across a silly problem while mounting an NFS share? It comes out of nowhere and refuses to go away. Hours of effort are wasted before we find the exact issue, and it feels like banging our heads against the door. I have summarized some of them here; I am sure you have your own set of goodies as well. If you are getting a permission-denied error on the client (90% of issues fall here), then check in sequence what has to be done: 1. If the server has multiple IP addresses, expect surprises. Of course, first check whether the client pings at all; then do a traceroute from the server to the client and see which interface is being used for the connection. Once you have the interface detail, that IP is the one the clients should use to map the network share. 2. See if you can resolve the client's IP address using DNS; if you can, then use the DNS name in the share's access list on the NFS server. 3. If the client has multiple interfaces, then do a traceroute to the IP found in 1...
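For step 2 above, a minimal /etc/exports sketch on the NFS server using a DNS name in the access list; the share path, client name, and subnet here are hypothetical:

```
# /etc/exports on the NFS server -- hypothetical paths and hosts
/export/share   client1.example.com(rw,sync)
/export/share   10.10.20.0/24(ro,sync)
```

After editing, exportfs -ra re-reads the file, and showmount -e <server> from the client shows what is actually being exported, which is a quick way to confirm the access list took effect.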

Friday night, almost there

It started on Friday evening. Yes, Friday off for everyone, but not for me: waiting for the EMC engineer to do a WebEx and configure our RecoverPoint appliance. Normally, if you have been planning a DR site (a big project indeed) for the last 6 months, how much time would you give for the lifeline to work, i.e. data replication? One month would be a decent guess; we had to do it in 2 days. Don't laugh, I am very serious. We had done the rack-and-stack and all the cabling beforehand. The session started around 9 PM and went on till 1 AM. All set for testing, we cross-checked whether every IP was reachable. Voila, the WAN IP was not pinging. It was on a different VLAN and was not in the firewall rules. Called up a firewall engineer, who didn't pick up the call; tried another, same result. Escalated to their manager, his phone was busy. Called up the project manager, and he was furious (he had to be). Escalated to the Director of Information Security, who confirmed he would do something. A never-ending wait started; finally, around 3...

Move an LVM from multiple local and iSCSI disks to SAN

We are setting up our DR site, using an EMC RecoverPoint appliance for replication. The license we have works only for SAN volumes on EMC arrays, namely the VNX in our case. The databases and file shares that are already on SAN pose no problem; however, the critical file shares on local storage or Dell iSCSI storage do. It sounds easy to move them to SAN and start replication, but it is not. Consider the following: 1. The LVM volume group housing the file shares on a Dell server contains a PV on a local disk partition (yes, not a complete disk, but /dev/sda10 on an extended partition) and two other full disks from iSCSI storage, /dev/sdq and /dev/sdi. 2. The total size is ~400 GB, with 100 GB from the local disk partition (the total local disk size is 300 GB) and 300 GB (200 + 100) from iSCSI. I know it is wrong, but it was done a long time ago when we were running out of space. 3. The Dell server does not have an FC connection. 4. The file system contains a 32000-deep nested directory structure with approx 1.2 billion fi...
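The usual LVM way out of a layout like the one above is to add the new SAN LUN to the volume group and drain the old PVs online with pvmove. A sketch follows; the VG name and the SAN device path are hypothetical (the old PV names are the ones from the post), and DRY_RUN=1 (the default) only records the planned commands instead of running them.

```shell
#!/bin/sh
# Sketch of draining the old PVs onto one SAN LUN with pvmove.
VG=${VG:-sharevg}                       # hypothetical volume group name
SAN_PV=${SAN_PV:-/dev/mapper/mpatha}    # hypothetical 400+ GB SAN LUN
OLD_PVS="/dev/sda10 /dev/sdq /dev/sdi"  # the PVs described in the post

PLAN=/tmp/lvm-migrate-plan.txt          # dry-run log of the planned commands
: > "$PLAN"
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*" >> "$PLAN"; else "$@"; fi
}

run pvcreate "$SAN_PV"         # label the new LUN as an LVM physical volume
run vgextend "$VG" "$SAN_PV"   # add it to the existing volume group
for pv in $OLD_PVS; do
    run pvmove "$pv" "$SAN_PV" # migrate extents off the old PV, online
    run vgreduce "$VG" "$pv"   # then drop the emptied PV from the VG
done
```

pvmove works while the file system stays mounted, which matters with this many files; the separate problem of getting the Dell server to see the SAN LUN at all (point 3, no FC) still has to be solved first, e.g. via iSCSI to the VNX.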