After a slightly strange power outage in the server room at work - the UPS stayed up, but everything else in the server room went down! - I found that an ESX server had lost the primary connection to its SAN through the multipath fibre channel switch fabric.
Cue: extreme nervousness. To put it mildly.
There were a number of messages on the ESX Server console of the form:
cpu2:1034)LVM: ProbeDeviceInt:4903: vmhba1:1:0:1 may be snapshot: disabling access. See resignaturing section in SAN config guide
Actually, the last part of the error message is very good advice. A good read of the SAN configuration guide is well worth the time and effort.
Somewhere along the line the ESX host lost this VMFS3 volume and, when it came back up, picked it up on a different path, vmhba1:1:0:1. Crucially, though, the host kept information about the partition at its previous path. That is why it decided it was looking at a snapshot and disabled access.
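For reference, the two advanced options that control this behaviour can also be read from the service console. This is just a quick sketch, assuming an ESX 3.x host where the options live under /LVM:

    # How the VMkernel treats a VMFS volume it believes is a snapshot
    # (ESX 3.x defaults: EnableResignature = 0, DisallowSnapshotLUN = 1)
    esxcfg-advcfg -g /LVM/EnableResignature
    esxcfg-advcfg -g /LVM/DisallowSnapshotLUN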
So go into the console, click on the Configuration tab and select "Advanced Settings".
Expand the LVM section and set LVM.EnableResignature to 1.
Then click OK to apply settings.
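If you would rather not click through the VI Client, the same option can be set from the service console - again assuming ESX 3.x:

    # Enable resignaturing so the "snapshot" volume is given a new UUID
    esxcfg-advcfg -s 1 /LVM/EnableResignature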
Select the "storage adapters" link under the configuration tab and click the "rescan" button (upper right).
Right click the vmhba (under the controller adapter for your machine) and click "rescan".
Then go to the Summary tab, right-click and select "refresh", and you should see your storage volume.
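The rescan and VMFS refresh can also be done from the service console. A sketch, assuming the adapter named in the log message above (substitute your own vmhba):

    # Rescan the HBA the volume now appears on, then refresh the VMFS volume list
    esxcfg-rescan vmhba1
    vmkfstools -V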
At least, that is what the manual would have you believe. My experience was rather different.
ESX was perfectly happy to see the volume on its new path as a new volume. Consequently, I had to remove all my inaccessible VMs and re-register them from the "new" volume. I may have had other options, but this seemed the quickest at the time.
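For the record, the re-registration can be scripted from the service console with vmware-cmd. A rough sketch - the datastore and VM names here are placeholders, not the ones from my setup:

    # Drop the stale registration, then register the .vmx from the "new" volume
    vmware-cmd -s unregister /vmfs/volumes/old_datastore/myvm/myvm.vmx
    vmware-cmd -s register /vmfs/volumes/new_datastore/myvm/myvm.vmx
    # Confirm what is now registered on the host
    vmware-cmd -l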
After all that, the VMs started up without error, and other than the delay in restarting them, the users were unaware of the problem.