
zpool and PowerPath partial compatibility

Is PowerPath fully supported with zpools? No, the support is only partial. You can assign the emcpower pseudo devices to a pool, but internally ZFS still binds to the native c#t#d# disks. Recently I lost two paths from a VNX array to an M9000 server running Solaris 11.1. When I ran zpool status -x, some of the pools were online while others were unavailable. PowerPath was reporting only two paths, but those paths were active, so ideally the zpools should have stayed online. They did not. Once I re-enabled the two lost paths, everything came back online. Internally, the affected pools were pointing at a c#t#d# device that sat behind a dead path, so they would not come online on their own. A few of the emcpower devices had also been renamed, so I had to run zpool import -d against the alternate device location. It seems PowerPath is not completely supported with ZFS today, though that may change in the future.
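As a rough sketch of the recovery steps (the pool name mypool and the device directory are placeholders for whatever your environment uses):

    # Show only the pools that have errors or are unavailable
    zpool status -x

    # Confirm what PowerPath thinks about the paths
    powermt display dev=all

    # If a pool is stuck on a native c#t#d# device behind a dead path,
    # export it and re-import it, pointing ZFS at the directory that
    # holds the (possibly renamed) emcpower devices
    zpool export mypool
    zpool import -d /dev/dsk mypool

The key part is the -d flag on zpool import, which tells ZFS which directory to search for devices instead of relying on the device paths it has cached.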

RecoverPoint replication reporting with the VMAX splitter

Guys, have you ever thought of automating the RecoverPoint replication status and sending it via email, or posting it to a reporting platform? Since the splitter impacts the VMAX FAs and adds around 13% overhead to their utilization, it is a good idea to check whether replication is what is driving the VMAX FA utilization. There are ways of pulling IOPS and read/write numbers using symstat, but it is the utilization (director CPU utilization and so on) that plays the significant role: once it crosses 80% we start seeing an increase in device response time. However, to measure FA utilization we have to go through the SMAS UI and pull out the reports, which is a manual effort. There is a way to automate it through the REST API; you can see what types of objects are supported by pointing to https://<smc server>:8443/univmax/restapi. Coming to the point, getting the RecoverPoint consistency group status fully automated doesn't seem to be tough. Let's go through the steps.

1. Hash your SSH keys of the unix host to ...
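As a quick sketch of that first exploratory call (the hostname smc-server and the credentials are placeholders, and -k is only there because SMC is often running with a self-signed certificate):

    # List the object types exposed by the Unisphere/SMC REST API
    curl -k -u smc_user:smc_password \
         -H "Accept: application/json" \
         "https://smc-server:8443/univmax/restapi"

From the resource listing that comes back you can then drill into the objects you need for the FA utilization and replication report.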