Well... in this case, it was a matter of taking corporate responsibility. The excerpts below are from an article on Slashdot.org, "How CoreSite Survived Sandy."
When Hurricane Sandy hit the East Coast, the combination of high winds, rain, and storm surges wreaked havoc on homes and businesses alike. SlashDataCenter paid close attention to New York City, where floods swept the southern tip of Manhattan. With a data center on the Avenue of the Americas, CoreSite Realty escaped the worst the storm had to offer. But was it coincidence or careful planning?
....
First off, can you explain your [Billie Haggard's - CC] role within CoreSite?
We build, maintain and operate all of our data centers, from the construction to the facilities, engineering, and security of all the data centers. So they’re the ones in the field making all the magic happen.
....
What sort of contingency planning do you always have in place? And what sort of specialized plans did you have in place for Sandy, when it came on your radar?
For any incident, we have a business continuity plan as well as a disaster recovery plan. And they’re driven by certain timing—disasters are. For Sandy, five days out, what it triggers is—we hold a meeting, and we go down through our checklist. Do we have this? Do we have this? Do we need to bring in more people? Do we need to reserve hotel rooms? We go through and we check communications, the staffing of our personnel, looking at our individuals, and determining whether they have special needs. We coordinate with the people we depend upon, which are our vendors and contractors. Things like making sure we have an electrician available, so that if we have an extended outage we can run an alternate power source.
Something that gets missed, when we have extended outages like this, is feeding people. So part of the preparation is going out to shop and having three days’ worth of food at the site. Spare parts, running through all of our equipment checks, making sure all of our operations are ready to handle an outage.
So specifically for Sandy, five days out, we went through the contingency plans, worked through the checklist, and made sure that everyone knew what they were doing and that all the preparations were made.
On the evening of the 29th, when the hurricane made landfall, what happened?
We had three people on site, one facilities person and one operations person. [Uh...I don't know who the third person was! CC] We didn’t know what the impact was going to be. And, initially, we started talking about power, and the lights started flickering. Our facilities personnel made the decision that utility power was not stable, so we proactively transferred our site to generator [power]. Because you don’t have to have an outage to create problems in the data center. Those momentary flickers take hits on the batteries, they can cause spikes on the electrical grid, grounding problems and things like that. So we proactively went to generators, prior to Sandy hitting. So we were all stable.
....
So at what time did you start worrying that [3.5-day supply of fuel for the generator - CC] wasn’t going to be enough? Or did you?
Well, I’ll tell you, the best investment I ever made… was that we paid $9,000 at the beginning of the year to have a guaranteed fuel delivery within eight hours. So when everyone started scrambling, trying to find fuel, ours was already paid for. We were at the top of the list.
So eight hours in, we already had fuel trucks running. And every 24 hours, we had fuel delivered, even though we didn’t need it.
....
You said you had three days’ worth of food on-site, and three employees there. So they had cots, and slept on-site?
Yes. We reserved hotel rooms, but talk about lessons learned: what we found is that even though we had personnel in hotels, the hotels lost power and water. So it was actually more advantageous for the guys to be sleeping at the sites, where they had water, shower capabilities, and food.
We also found that our customers hadn’t planned on being unable to find places to eat, and we were actually feeding our customers.
So how much food did you actually have on hand?
We had more than three days, and we actually had food delivered from uptown.
....
As the hurricane moved through, and past, it sounds like CoreSite’s experience wasn’t that bad. What lessons did you learn from all of this?
One is to ensure that our documentation and our checklist go beyond 24 hours. I think that most of—if you look at the tier ratings, how a data center is classified, Tier 1, 2, 3, or 4, with 4 being the most reliable—all the requirements are based on 12 hours. Twelve hours of fuel, twelve hours of water for your cooling systems, and we’ve always looked at 24 hours. In preparation for disasters such as Sandy, in the future, we’re going to expand that to three days.
And the other thing is, we had to depend a lot on outside organizations. As I said, the data center becomes an island. And within the building at 32 Ave. of the Americas, the biggest problem was with network people. Even though the data center stayed up, we actually had customers of the building that had lost power and had lost connectivity and so we were sending electricians to reroute power and connectivity on our site so they [customers] could use our power and our connectivity to power their facilities.
The other thing is making sure our customers understand that temporary systems are not good in situations like this. One of our major carriers, their backup system was to bring up a roll-up generator. And from what I understand, they paid to have this generator there within four hours, and when they had this generator up, the police confiscated it for emergency use. So their backup generator wasn’t there anymore.