WolfCombatPet wrote: »For anyone wondering:
"At this time, we are estimating it could be at least 12 hours until we're up and running" That was posted at 9:45 AM Pacific.
Check back at 9:45 PM Pacific.
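If anyone wants to double-check that, it's just the post time plus the estimate; here's a throwaway Python one-liner (the date is a placeholder, only the clock math matters):

from datetime import datetime, timedelta

posted = datetime(2025, 1, 1, 9, 45)                         # 9:45 AM Pacific; date is a placeholder
print((posted + timedelta(hours=12)).strftime("%I:%M %p"))   # 09:45 PM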
I wonder what REALLY happened. Not really sold on the lost power thing. Big operations are supposed to have fault tolerances built in. I bet someone pulled the wrong lever. I bet it was red.
There are DCs (datacenters) everywhere in the US.
My bet is a power upgrade or other electrical work went sideways at that location.
The company I work for had this happen once. It took out an entire leg of the DC: the DC staff had cross-run a bunch of the power runs to the racks, so gear that looked like it was correctly split between circuits at the PDUs was actually all on the one circuit that got knocked out.
Was almost as fun as the CrowdStrike outage.
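Not saying that's what happened here, but here's a rough Python sketch of the kind of audit that catches the mislabelled-PDU situation: trace each rack's A/B feeds back to their upstream circuit and flag anything that isn't really split. The rack inventory and circuit names are made up for illustration.

# Made-up inventory: which upstream circuit each rack's A and B feeds actually land on.
rack_feeds = {
    "rack-01": {"A": "circuit-1", "B": "circuit-2"},
    "rack-02": {"A": "circuit-1", "B": "circuit-1"},  # looks redundant on the label, isn't
    "rack-03": {"A": "circuit-2", "B": "circuit-1"},
}

def single_circuit_racks(feeds):
    """Return racks whose A/B feeds all trace back to one upstream circuit."""
    return [rack for rack, sides in feeds.items() if len(set(sides.values())) < 2]

for rack in single_circuit_racks(rack_feeds):
    print(rack, "- both feeds land on the same circuit; one breaker takes it out")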
o_Primate_o wrote: »I wonder what REALLY happened. Not really sold on the lost power thing. Big operations are supposed to have fault tolerances built in. I bet someone pulled the wrong lever. I bet it was red.
They locked down the crown store some time prior to the failure, from what I can determine.
Sorry, but I don't buy it, guys. Every enterprise-grade DC I've worked in has, from a power delivery perspective:
- at least two redundant inbound mains connections on separate grid circuits
- an enterprise-class UPS (such as an APC Schneider Galaxy series)
- a diesel emergency genset for failover
Come on, what's the real story?
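For what it's worth, here's a toy Python model of that power chain, just to show why a single failure shouldn't be enough on its own. The component names and the pass/fail logic are simplified assumptions, not anyone's actual setup.

# Illustrative only: dual mains feeds, UPS to bridge the gap, genset for sustained outage.
POWER_SOURCES = {
    "grid_feed_a": True,    # True = available
    "grid_feed_b": True,
    "ups_battery": True,    # carries the load until the genset spins up
    "diesel_genset": True,
}

def load_stays_up(sources):
    """Load survives if either mains feed is live, or the UPS can bridge to a working genset."""
    mains = sources["grid_feed_a"] or sources["grid_feed_b"]
    backup = sources["ups_battery"] and sources["diesel_genset"]
    return mains or backup

print(load_stays_up(POWER_SOURCES))                                   # True
print(load_stays_up({**POWER_SOURCES, "grid_feed_a": False}))         # True: single failure is fine
print(load_stays_up({**POWER_SOURCES, "grid_feed_a": False,
                     "grid_feed_b": False, "diesel_genset": False}))  # False: takes a pile-up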
HatchetHaro wrote: »Ah the good old days.
snip rad video
I've worked in a business where the failover failed. While not a live-service gaming company, it is one with a significant global presence. For two full weeks we ran an entire national division's infrastructure on a ten-year-old backup-to-the-backup server and delivered most of our client deliverables via hand-typed Excel spreadsheets, because all that one server could manage was keeping existing data internally accessible: nothing new could be written to it, and it couldn't export to or interface with our online client service applications.
We now have much more robust systems, but until that catastrophic failure happened, we thought our previous configuration was fine.
DreadKnight wrote: »2 billion USD in profit and there are no backup servers? I'm not bashing, but aren't there contingencies in place for such a world-ending event?
MasterSpatula wrote: »So I hear there's this stuff called "grass." I'm headed out to investigate right now.
LatentBuzzard wrote: »We now have much more robust systems, but until that catastrophic failure happened, we thought our previous configuration was fine.
That's why responsible companies run regular BCP (business continuity plan) tests, so they don't have to wait for a catastrophic failure to find out that they can't recover.
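A minimal sketch of what "regular" can look like in practice, assuming a made-up 90-day drill interval and invented system names: track when each system last passed a failover drill and flag anything overdue, instead of finding out during a real outage.

from datetime import date, timedelta

MAX_DRILL_AGE = timedelta(days=90)       # assumed interval, pick whatever your BCP says

last_passed_drill = {                    # invented system names and dates
    "login-service": date(2025, 5, 2),
    "game-database": date(2024, 11, 19),
    "crown-store": date(2025, 6, 30),
}

def overdue(drills, today):
    """Systems whose last passed failover drill is older than MAX_DRILL_AGE."""
    return [name for name, when in drills.items() if today - when > MAX_DRILL_AGE]

for name in overdue(last_passed_drill, date.today()):
    print(name, "- failover untested for more than", MAX_DRILL_AGE.days, "days")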