Pyr0xyrecuprotite wrote: »Having worked in server support/maintenance before, the thing about maintenance windows is that all sorts of activities have to be shoehorned into your limited outage window. Yes, there is a server outage for a patch, which likely includes a server-side application code update - and testing to verify that it was correctly applied. Then, their client code distribution system has to be updated with the client updates. In addition, the server application code update might require an update or patch on some underlying tool or middleware system, plus the server OS often requires security patches, and occasionally there may be a hardware replacement. I can guarantee that they also have a complex network setup, and believe it or not, routers and switches also sometimes need maintenance work or replacement with new devices - all of which has to fit into the same timeframe. Anything that goes wrong (anywhere along the line) can have knock-on effects, disrupting other activities and causing further problems, especially in a really complex system setup like this.
No, ZoS is not going to be specific about which activities they include in maintenance each week, nor provide details of their server platform design, etc. Nor does a communications person like GinaBruno necessarily even know all of the details - she most likely only gets the high-level view, i.e., how long the techs/Ops expect the outage window to last.
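The packing problem that post describes can be sketched in a few lines: a fixed outage budget, a queue of activities, and the knock-on effect that any overrun pushes everything behind it out of the window. This is purely an illustrative model; the task names and durations are made up, not ZOS's actual process.

```python
# Hypothetical sketch of how a fixed maintenance window gets packed:
# every task must fit, and any overrun eats the budget of the tasks after it.

WINDOW_MINUTES = 480  # an 8-hour outage window

# (task, estimated minutes) - the kinds of activities the post lists
tasks = [
    ("server-side application patch", 90),
    ("verify patch applied correctly", 30),
    ("push client update to distribution system", 45),
    ("middleware/tool updates", 60),
    ("OS security patches + reboots", 90),
    ("network device maintenance", 60),
]

def schedule(tasks, budget):
    """Return (completed, remaining_budget); stop when the window is spent."""
    done = []
    for name, minutes in tasks:
        if minutes > budget:
            break  # window exhausted: everything after this slips
        budget -= minutes
        done.append(name)
    return done, budget

done, left = schedule(tasks, WINDOW_MINUTES)
print(f"completed {len(done)}/{len(tasks)} tasks, {left} min spare")
```

Shrink the budget (say, because the application patch had to be rolled back and reapplied) and tasks start falling off the end, which is exactly the "knock-on effects" the post is talking about.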
autumnsongbird wrote: »So I'm realizing... for NA players, Mondays are pretty much a no-go for ESO unless you play in the evening? Yes?
Pyr0xyrecuprotite wrote: »Having worked in server support/maintenance before, the thing about maintenance windows is that all sorts of activities have to be shoehorned into your limited outage window. [snip]
This guy gets it.
I work in IT as well.
For the random player it's just much easier to say, "ZoS is incompetent and doesn't care about us," than to even consider that something like this is quite complex.
No, i am serious. I really would like to know.
archangel_7 wrote: »Some smart guy deployed a 3750 switch onto the production network without adjusting the configuration revision number, and VTP advertisements deleted most of the operational VLANs in the SNA architecture.
Sometimes fixing things means fixing the things you broke while fixing them, too.
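For anyone curious about that war story: the usual guard against it is to keep a new switch out of the VTP server role before it ever touches production, so its (possibly higher) configuration revision number can't overwrite the live VLAN database. A minimal Cisco IOS sketch, with the domain name being a placeholder:

```
! On the new 3750, BEFORE connecting it to the production network:
! transparent mode forwards VTP advertisements but never applies them
! and never originates its own configuration revision.
Switch(config)# vtp mode transparent
Switch(config)# vtp domain PROD-DOMAIN
```

Changing the VTP domain name (or moving through transparent mode) also resets the revision number to 0, which is the other common way to defuse a freshly unboxed or repurposed switch.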

For all the people claiming to "work in IT" and trying to justifiably explain this maintenance dilemma, quit fanboi lying to the rest of the forum's population.
The correct implementation of a client side/server side patch, whether it be an MMO or an auto parts ordering web-based application, is the same. It's done this way to prevent exactly this kind of wanton fire drill come patch time: if the patch is on Monday, a small segment of the development team (preferably including the developer who wrote the majority of the patch) attempts implementation on a staging client alongside a staging server... on WEDNESDAY. This will reveal the vast majority of problems missed during the QA CYCLE, which also had to take place for said patch. Things will inevitably be missed, but they will be few, manifest immediately, and be rectified long before PATCH DAY.
Now the reasons for the above caps: This is not patch day. No software firm in their right mind would "patch" weekly. It's server maintenance day. The only excuse for extended maintenance on a working infrastructure is hardware failure resulting from... you guessed it... the maintenance. The chances that this happens every time for the past six? Steve Harvey says "zeee-ro".
ZOS finds itself the victim of being an online gaming company with absolutely no (collaborative) prior maintenance, patching, benchmarking, or load balancing experience. Oh, I'm sure there is some among their employees, but either nobody is speaking up, or nobody is listening.
The whole paradigm needs to change.
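The gate that post is arguing for (stage midweek, ship Monday only if staging held up) can be sketched as a single predicate. All names, dates, and the five-day soak period are illustrative, not anyone's real release process.

```python
# Hedged sketch: a patch reaches production only if it passed staging
# with enough soak time before patch day.
from datetime import date, timedelta
from typing import Optional

def can_ship(patch_day: date, staged_on: Optional[date],
             staging_passed: bool, min_soak_days: int = 5) -> bool:
    """Ship only if staging passed early enough before patch day."""
    if staged_on is None or not staging_passed:
        return False
    return (patch_day - staged_on).days >= min_soak_days

monday = date(2016, 8, 29)
print(can_ship(monday, monday - timedelta(days=5), True))  # staged the prior Wednesday
print(can_ship(monday, monday - timedelta(days=1), True))  # staged too late
```

The point of expressing it this way is that "did we stage it, did it pass, and was there time to react" becomes a yes/no check the release process can enforce, rather than a judgment call made during the outage window.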
» For all the people claiming to "work in IT" and trying to justifiably explain this maintenance dilemma, quit fanboi lying to the rest of the forum's population.
That's not what anyone is doing.

» The correct implementation of a client side/server side patch, whether it be an MMO or an auto parts ordering web-based application, is the same.
No, it's really not.

» It's done this way to prevent exactly this kind of wanton fire drill come patch time: if the patch is on Monday, a small segment of the development team (preferably including the developer who wrote the majority of the patch) attempts implementation on a staging client alongside a staging server... on WEDNESDAY. This will reveal the vast majority of problems missed during the QA CYCLE, which also had to take place for said patch. Things will inevitably be missed, but they will be few, manifest immediately, and be rectified long before PATCH DAY.
You really don't think they have an internal staging server?

» Now the reasons for the above caps: This is not patch day.
Uh, yes it is. See the patch notes forum.

» No software firm in their right mind would "patch" weekly.
WAT? Have you ever heard of continuous delivery? I have had projects that were redeployed daily.

» It's server maintenance day.
No, it's patch day, I promise: v2.5.8.

» The only excuse for extended maintenance on a working infrastructure is hardware failure resulting from... you guessed it... the maintenance. The chances that this happens every time for the past six? Steve Harvey says "zeee-ro".
That is in no way the only excuse, not at all.

» ZOS finds itself the victim of being an online gaming company with absolutely no (collaborative) prior maintenance, patching, benchmarking, or load balancing experience. Oh, I'm sure there is some among their employees, but either nobody is speaking up, or nobody is listening.
The technical skill of ZOS employees notwithstanding, very little if any of your post is accurate.

» The whole paradigm needs to change.
There are MMOs in great numbers out there. Some are even more complicated, with much heavier graphics, etc.
None of them (2.5 years after release) does 8h+ of weekly maintenance, and NONE of them has such great LAG issues!
So what's the excuse here?
Pyr0xyrecuprotite wrote: »Having worked in server support/maintenance before, the thing about maintenance windows is that all sorts of activities have to be shoehorned into your limited outage window. [snip]
Some of my above post is inaccurate because of brevity. I'm over 40 years old; I'm not going to sit around the house mashing F5... so I'm posting from my phone at the lake. That said, your response has caused me to realize that I did indeed post a few inaccuracies and needed to edit:
1. The lying is most likely unintentional. Positive people like to see positives... and continually justify negatives until a breaking point. That breaking point hasn't been reached for them yet.
2. The process of patch implementation varies from application to application, obviously, but the necessary steps prior to rollout are still checked: QA the code. Stage the code internally. Fix the code. Stage the client/server communication semi-internally. Fix the code. Load balance and throttle for unforeseen strain on the new patch. Fix the code. Roll out the patch.
3. No. I believe they either don't have a correct internal staging setup for patches, aren't using it, or are using it incorrectly (or do you believe they roll out these patches knowing full well that they broke in staging?).
4. I was incorrect. It's patch day. Tomorrow is maintenance day! Does this also set off red flags to your IT sensibilities? It should, if your work in IT includes more than saying "The lights on this Cisco are blinking" over a walkie-talkie to a Geek Squad agent.
5. Continuous Delivery is a system not usually implemented on applications with large user bases, customer-centric applications, or cross-platform applications, all of which ESO is. But yes, you are again correct... it can and does work, and I stand chastised.
6. Okay... hardware failure caused by any number of reasons. Or upgrades. It doesn't matter anyway, since you pointed out this is not server maintenance day.
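The checklist in point 2 can be sketched as an ordered pipeline where a failing stage triggers a "fix the code" retry before anything later runs. Stage names and the retry limit are illustrative only, not anyone's documented process.

```python
# Toy model of the stage/fix/re-stage loop: stages run in order, and a
# failing stage is "fixed" and retried before the pipeline moves on.

PIPELINE = [
    "QA the code",
    "stage internally",
    "stage client/server semi-internally",
    "load-test for unforeseen strain",
    "roll out the patch",
]

def run(pipeline, fails_at=None, max_fixes=2):
    """Walk the stages in order; a failing stage is 'fixed' and retried."""
    log = []
    for stage in pipeline:
        fixes = 0
        while stage == fails_at and fixes < max_fixes:
            log.append(f"{stage}: failed -> fix the code")
            fixes += 1  # in this toy model, the fix succeeds after max_fixes tries
        log.append(f"{stage}: ok")
    return log

print(len(run(PIPELINE)))                          # clean run: 5 log lines
print(len(run(PIPELINE, fails_at="QA the code")))  # 2 fixes + 5 oks = 7 lines
```

The detail that matters for the argument: failures injected early ("QA the code") add log lines before rollout, whereas in the scenario the thread is complaining about, the failures surface during the production window itself.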
Zenimax has (at least we all hope they still have) an asset in Bethesda as a mother company. They should tap this asset when they find themselves up to their necks in an issue, and as is evident from the month of August, I'd say this at least deserves an email.
Giles.floydub17_ESO wrote: »There are MMOs in great numbers out there. [snip] So what's the excuse here?
Honestly, maybe they don't care as much, or maybe those are games you might want to consider.
However, there is a reason you're playing ESO, and regardless of how long maintenance takes, it's a small inconvenience.
For all the people claiming to "work in IT" and trying to justifiably explain this maintenance dilemma, quit fanboi lying to the rest of the forum's population. [snip]
@Takamoro I don't recognize your name, so you might not remember, but I remember that a couple of years ago someone accidentally brought down the EU server when they were supposed to take down the NA server (or vice versa), and it has happened multiple times in the past.
@NewBlacksmurf for some reason I imagine your (wise, and much needed) community communications updates being something like: "Stan messed up again and accidentally locked himself in the T1 room cuz he finds the warmth extra snuggly. Ok, so we're extending the maintenance window 3 more hours, because it turns out Stan lost his pants somewhere back behind the servers when he 'felt something run up his leg' and threw his pants across the room in terror, and now they've blocked off the servers' cooling ports and melted them into goo, and he was too embarrassed to tell us."
I hate you, Stan.
daswahnsinn wrote: »TL;DR past the first sentence. I'm in the IT field and currently work in networking, however, if you have never worked in the field, or have never had something not go as planned, then I can understand why you'd say this. I have worked plenty of maintenances that had MOPs with backout procedures, and *** still hits the fan due to unforeseeable issues or events. So unless you are perfect or can see the future, your point is null and void here on this matter.
And because you chose to comment without reading the post, I'm going to reply. **** does indeed hit the fan, as stated later down the thread, but it does not happen continuously or without professionalism at least attempting to keep it from happening again on the next patch day (let alone the next six). Your "however" in sentence two is also misplaced. I simply don't feel the need to scream IT accolades from the mountaintop in order to feel more important on an MMO forum like some of these other posters.
Again, the point wasn't to say "Things shouldn't have gone wrong!" The point was to say "When things go wrong a half dozen times in a row it's time to look at why!"
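That closing point reduces to a tiny rule: one overrun is noise, a streak of them is a signal to investigate. The threshold and sample history below are made-up examples, not real outage data.

```python
# Sketch: flag a maintenance process for review once the last N windows
# in a row have all overrun their scheduled time.

def needs_review(outcomes, streak=3):
    """True if the last `streak` windows all overran."""
    return len(outcomes) >= streak and all(o == "overran" for o in outcomes[-streak:])

history = ["ok"] + ["overran"] * 6   # six extended maintenances in a row
print(needs_review(history))                   # True: time to look at why
print(needs_review(["ok", "overran", "ok"]))   # False: isolated incident
```

Real incident-review practice would look at causes, not just counts, but the cutoff makes the thread's distinction concrete: nobody is saying things should never go wrong, only that a repeated pattern deserves a root-cause look.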