Maintenance for the week of March 27:
• [COMPLETE] PC/Mac: NA and EU megaservers for patch maintenance – March 27, 4:00AM EDT (8:00 UTC) - 9:00AM EDT (13:00 UTC)
• Xbox: NA and EU megaservers for patch maintenance – March 28, 6:00AM EDT (10:00 UTC) - 12:00PM EDT (16:00 UTC)
• PlayStation®: NA and EU megaservers for patch maintenance – March 28, 6:00AM EDT (10:00 UTC) - 12:00PM EDT (16:00 UTC)

Update 33 PC Launch Postmortem

  • LalMirchi
    LalMirchi
    ✭✭✭✭✭
    A question:

    They should perhaps nuke their own data-centers and employ stable services, perhaps Azure or AWS?

    Perhaps focusing on the game instead of the backend would help?
  • TechMaybeHic
    TechMaybeHic
    ✭✭✭✭✭
    I know there was a port issue, but... seeing as PC NA is still in rough shape, you can't be serious. Really a bad idea for an April Fools' joke, @ZOS_GinaBruno:
    We’re excited to let everyone know we’re approaching the final stage of completing database sharding and are planning to shard the PC EU megaserver next Tuesday, April 5 so you can reap the performance benefits of sharding as soon as possible.

    Let's be honest, the group finder didn't break because of a port issue.

    Also, what happened this time? A week later and the fixes break everything again?

    Yeah, no judgement here one way or the other. It's just that, so far, this database thing hasn't really shown much improvement on PC. Maybe it's because PC has been around longer than console. Maybe it's the sheer volume of players. (If so, good luck PC EU, which might have the most.) I don't know any more than that; whatever it was, it still ain't right.
  • SeaUnicorn
    SeaUnicorn
    ✭✭✭✭✭
    smacx250 wrote: »
    Given how things are tonight, it seems like that failed hardware was another red herring. Keep looking...

    Or it was not the only piece of hardware that is failing.
  • Onomog
    Onomog
    ✭✭✭
    SeaUnicorn wrote: »
    smacx250 wrote: »
    Given how things are tonight, it seems like that failed hardware was another red herring. Keep looking...

    Or it was not the only piece of hardware that is failing.

    I'm beginning to think that we've been playing on a house of cards...
  • Dietche
    Dietche
    ✭✭✭
    Servers are back to the exact same state they were in when the network card was supposedly bad.
    >>Cannot group people.
    >>Cannot leave group.
    >>Cannot disband group.
    >>Finder doesn't work unless it's a 4 man premade.
    >>Once the finder pops, only one person actually zones in, and everyone else has to port in manually.
    >>Logging in, or out, at the password screen or the character screen, is all a complete joke.
    >>Zoning *anywhere* for any reason is laughable. And for a game that relies SO heavily on zoning, what with having 10 doors and ladders for even the simplest of quests, the inability to zone--basically at all--makes it nearly impossible to do "other" things besides dungeons to pass the time.
    >>Node picking (really our only option left?) is now crazy slow

    All these things are the exact same issues we had when the network card "supposedly died". All of this was happening during an event, just like this time. In the last two years, we have had horrible server performance *every time* an event came around. So forgive me if I just don't believe it's a matter of a "simple LAN card issue" anymore.

    The coincidences just keep piling up. Who wants to bet that if they stopped the Jester's Event right now, all the login queues, the broken finder and grouping issues, and the poor zoning performance would suddenly vanish? What? No bet? Yeahhhh....
    Guild Leader: Sardonically Synthesized
  • TheAlphaRaider
    TheAlphaRaider
    ✭✭✭
    Hey @ZOS_MattFiror, we are still dealing with bugs from the patch. See the bug reports about queues in dungeons; it occurs mostly during prime time.
  • TheAlphaRaider
    TheAlphaRaider
    ✭✭✭
    I think the postmortem is still going on.
  • FeedbackOnly
    FeedbackOnly
    ✭✭✭✭✭
    ✭✭
    I did say after the fix that something was still wrong. Latency was still slightly higher than average.

  • LalMirchi
    LalMirchi
    ✭✭✭✭✭
    coletas wrote: »
    LalMirchi wrote: »
    A question:

    They should perhaps nuke their own data-centers and employ stable services, perhaps Azure or AWS?

    Perhaps focusing on the game instead of the backend would help?

    No datacenter can give any big improvement if the software architecture is terrible. The problem is not the server; it's a leadership problem. If you have the money to buy a big boat and you buy it, but you don't know how to manage it or hire the sailors properly, you will keep the boat afloat yet have terrible problems whenever a non-easy task has to be performed, like in a storm. When nothing is planned carefully, you face problems the hard way, when most of them would have been easy to fix and, most importantly, easy to avoid. Passengers are leaving the ship while the captain is only capable of bringing aboard more passengers, but that will come to an end, as people with some experience know well.

    In short: no, no datacenter is going to save the game.

    I do think that gutting the vanity project (the in-house servers, ZOS's very own datacenter) would free up in-house developer resources for relevant in-game work; that could be beneficial.
  • coletas
    coletas
    ✭✭✭✭
    I would never use those "resources" for anything relevant. They hit the same rock over and over. When something doesn't work, it has to be replaced by something that works, or outsourced, and I doubt anyone wants to take that job in the current state.
  • SilverWrought
    SilverWrought
    Soul Shriven
    I hope it gets fixed soon. I had planned to do some exploring with a couple young friends and show them ESO. But, well... this isn't the best look when I'm trying to convince them to try the game out...

    Mebbe get a wheelbarrow full of those Microsoft dollars and build some hefty backend? PLZ?
  • zharkovian
    zharkovian
    ✭✭✭
    One thing that I cannot understand, having worked on large database systems: whenever we wanted to process the database, back it up, divide it, or reorganize the primary, we would take it offline. The database was backed up of course, which happened all the time while online, but when we wanted to process things, the primary was not "live". In retrospect, I suppose ZOS should have chosen the quiet times to do a sharding "maintenance" and shut us all out of the process. However, that's my opinion, and when it comes to database management I know just enough to be dangerous.
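    A purely illustrative sketch of what such an offline pass can look like, in Python with SQLite standing in for whatever ZOS actually runs; the table and column names (accounts, account_id) and the shard count are invented for the example. The only point is the order of operations: close the doors, copy the data out by shard key, verify, then reopen.

        import sqlite3

        NUM_SHARDS = 4

        def shard_for(account_id: int) -> int:
            # Deterministically map an account to a shard.
            return account_id % NUM_SHARDS

        def reshard_offline(source_db: str, shard_dbs: list) -> None:
            # 1. Maintenance window: the primary is already closed to players,
            #    so we can read a consistent snapshot without fighting live locks.
            src = sqlite3.connect(source_db)
            shards = [sqlite3.connect(path) for path in shard_dbs]
            for s in shards:
                s.execute("CREATE TABLE IF NOT EXISTS accounts "
                          "(account_id INTEGER PRIMARY KEY, data TEXT)")

            # 2. Copy every row to the shard its key maps to.
            copied = 0
            for account_id, data in src.execute("SELECT account_id, data FROM accounts"):
                shards[shard_for(account_id)].execute(
                    "INSERT INTO accounts VALUES (?, ?)", (account_id, data))
                copied += 1

            # 3. Verify before anyone is let back in: row counts must match.
            total = sum(s.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
                        for s in shards)
            assert total == copied, "shard copy lost rows -- do not reopen"

            for s in shards:
                s.commit()
                s.close()
            src.close()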
  • coletas
    coletas
    ✭✭✭✭
    zharkovian wrote: »
    One thing that I cannot understand, having worked on large database systems: whenever we wanted to process the database, back it up, divide it, or reorganize the primary, we would take it offline. The database was backed up of course, which happened all the time while online, but when we wanted to process things, the primary was not "live". In retrospect, I suppose ZOS should have chosen the quiet times to do a sharding "maintenance" and shut us all out of the process. However, that's my opinion, and when it comes to database management I know just enough to be dangerous.

    Setting aside the sharding key, and just looking at the big locks without knowing anything else... I would bet some gold they are using a badly chosen clustered index for that sharding. Backups? They never do any unit tests with real data; they ask customers to play to gather statistical data instead of simulating it... I have never seen a rollback here, even in worse scenarios... I would bet the backup is a RAID 1 and a weekend copy... with luck.
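    To show what a hot shard key looks like in practice, here is a toy comparison; the keys and shard counts are hypothetical, not anything from ESO's actual schema. A key that only grows (new account IDs, timestamps) dumps all of tonight's traffic on one shard, while a hashed key spreads it out:

        import hashlib
        from collections import Counter

        NUM_SHARDS = 4

        def range_shard(key: int) -> int:
            # Range/sequential sharding: consecutive keys land on the same shard.
            return min(key // 25_000, NUM_SHARDS - 1)

        def hash_shard(key: int) -> int:
            # Hash sharding: a stable hash of the key, modulo the shard count.
            digest = hashlib.sha1(str(key).encode()).hexdigest()
            return int(digest, 16) % NUM_SHARDS

        # Simulate an evening where most active accounts are recent (high IDs).
        recent_players = range(90_000, 100_000)

        print(Counter(range_shard(k) for k in recent_players))  # one shard takes nearly everything
        print(Counter(hash_shard(k) for k in recent_players))   # roughly even split

    The same kind of imbalance shows up inside a single database when the clustered index follows insert order: every new row competes for the same pages, which is exactly where big locks tend to come from.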
  • Sylvermynx
    Sylvermynx
    ✭✭✭✭✭
    ✭✭✭✭✭
    coletas wrote: »
    zharkovian wrote: »
    One thing that I cannot understand, having worked on large database systems: whenever we wanted to process the database, back it up, divide it, or reorganize the primary, we would take it offline. The database was backed up of course, which happened all the time while online, but when we wanted to process things, the primary was not "live". In retrospect, I suppose ZOS should have chosen the quiet times to do a sharding "maintenance" and shut us all out of the process. However, that's my opinion, and when it comes to database management I know just enough to be dangerous.

    Setting aside the sharding key, and just looking at the big locks without knowing anything else... I would bet some gold they are using a badly chosen clustered index for that sharding. Backups? They never do any unit tests with real data; they ask customers to play to gather statistical data instead of simulating it... I have never seen a rollback here, even in worse scenarios... I would bet the backup is a RAID 1 and a weekend copy... with luck.

    Goddesses, I hope you're wrong. My little forum and blog databases back up every night.... Yeah, I've actually never needed a nightly (since 2000 when I started website management) but hey, I still have EVERY one of them....

    Well, except for the former client who moved to the UK, where her new provider had a major fire, and couldn't recover her site - but I still had a copy from before she moved.....
    Edited by Sylvermynx on April 3, 2022 12:09AM
  • coletas
    coletas
    ✭✭✭✭
    Sylvermynx wrote: »
    coletas wrote: »
    zharkovian wrote: »
    One thing that I cannot understand, having worked on large database systems: whenever we wanted to process the database, back it up, divide it, or reorganize the primary, we would take it offline. The database was backed up of course, which happened all the time while online, but when we wanted to process things, the primary was not "live". In retrospect, I suppose ZOS should have chosen the quiet times to do a sharding "maintenance" and shut us all out of the process. However, that's my opinion, and when it comes to database management I know just enough to be dangerous.

    Setting aside the sharding key, and just looking at the big locks without knowing anything else... I would bet some gold they are using a badly chosen clustered index for that sharding. Backups? They never do any unit tests with real data; they ask customers to play to gather statistical data instead of simulating it... I have never seen a rollback here, even in worse scenarios... I would bet the backup is a RAID 1 and a weekend copy... with luck.

    Goddesses, I hope you're wrong. My little forum and blog databases back up every night.... Yeah, I've actually never needed a nightly (since 2000 when I started website management) but hey, I still have EVERY one of them....

    Well, except for the former client who moved to the UK, where her new provider had a major fire, and couldn't recover her site - but I still had a copy from before she moved.....

    With a good design, you never need nightlies. If you have to do nightlies and most updates are hard, it's better to take a serious look at the architecture you designed.

    About backups... yeah, apart from RAID and externals, which are a must, do all the kinds of backups you can: logs for live rollbacks, complete remote backups, and even image backups. A customer whose product is down for minutes gets angry, down for an hour gets extremely angry, and down for a few more hours is looking for an alternative and will keep looking forever.

    Here they give you exp scrolls...
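    For the sake of illustration, the layering above boils down to something like this; the schedule and timestamps are invented, but the restore logic is the whole point: take the newest full backup from before the failure, then replay the logs up to the moment you want back.

        from datetime import datetime, timedelta

        def plan_restore(fulls, logs, target):
            # Pick the newest full backup taken at or before `target`, then every
            # log segment between that full and the target, in order.
            base = max((f for f in fulls if f <= target), default=None)
            if base is None:
                raise RuntimeError("no full backup old enough -- nothing to restore from")
            replay = sorted(seg for seg in logs if base < seg <= target)
            return base, replay

        # Example: a weekly full on Sunday night plus a log segment every 15 minutes.
        sunday = datetime(2022, 3, 27, 3, 0)
        fulls = [sunday - timedelta(days=7), sunday]
        logs = [sunday + timedelta(minutes=15 * i) for i in range(1, 200)]
        failure = sunday + timedelta(days=2, hours=5)

        base, replay = plan_restore(fulls, logs, failure)
        print(f"restore the full from {base}, then replay {len(replay)} log segments")

    With only a weekend copy and no logs, everything after that last full backup is simply gone.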
  • LalMirchi
    LalMirchi
    ✭✭✭✭✭
    IMHO the weak link is the weakest hardware, that is to say the very inadequate PlayStations & Xboxes.

    It would be hard but rather beneficial to be rid of these appliances, or to reduce their influence in the current build.

    "Will no one rid me of this turbulent priest?"
    Edited by LalMirchi on April 3, 2022 11:22AM
  • Aardappelboom
    Aardappelboom
    ✭✭✭✭
    LalMirchi wrote: »
    IMHO the weak link is the weakest hardware, that is to say the very inadequate PlayStations & Xboxes.

    It would be hard but rather beneficial to be rid of these appliances, or to reduce their influence in the current build.

    "Will no one rid me of this turbulent priest?"

    Why would this have anything to do with it? Most problems are clearly server side from what I can tell. The client side has diverged further between consoles and PC ever since the enhanced version came out, and they've even started differentiating graphical settings (which are client-side), and that part actually works great.

    Except for maybe some extra overhead to cater to all these devices, there's nothing holding ESO back; the problem is just that the server can't keep up. The fact that this is (mostly) only happening on PC NA also points to server-side problems.
  • sarahthes
    sarahthes
    ✭✭✭✭✭
    I do not think most of the issues have anything to do with software, database design, or even database sharding - because PC NA is the 5th server to undergo sharding and the first to have issues.

    This all reeks of hardware infrastructure problems.
  • SerafinaWaterstar
    SerafinaWaterstar
    ✭✭✭✭✭
    LalMirchi wrote: »
    IMHO the weak link is the weakest hardware, that is to say the very inadequate PlayStations & Xboxes.

    It would be hard but rather beneficial to be rid of these appliances, or to reduce their influence in the current build.

    "Will no one rid me of this turbulent priest?"

    Why blame consoles when we’re not having the issues PC is having? Some people have rather rubbish PCs too, you know.

    And yes, getting ‘rid’ of potentially two-thirds of your player base is *such* a good idea for long-term sustainability. /s
    Edited by SerafinaWaterstar on April 3, 2022 4:19PM