Ragged_Claw wrote: »When ZoS go to take my ESO+ money:
'I am currently investigating issues some companies are having regarding taking subscription fees from my bank account. I will update as new information becomes available.'
You should make a separate thread with that name; it would be awesome to see it closed by the mods.
Grimreaper2000 wrote: »Unbelievable and unacceptable! Server issues are always possible, but not 6 or more times in a few days.
Most people pay for it, are subscribed ... and all we get is "we are investigating, thx for your patience".
Never heard of compensation for paying members?!
I pay good money to play this game, and I can only play so many hours a day. I'm in rehab, so I have quite an extensive schedule while I'm in here; playing this for a couple of hours in the evening is a relief from the stress. This *** isn't f*cking helping. At least compensate us with something..
Trikie_Dik wrote: »A typical SDLC will involve replicating the issue in a dev/sandbox environment, packaging up a deployment script that contains the proposed fix AFTER creating the necessary backup files in case of a rollback, then testing that fix in the dev/sandbox instance. Once that's proven to be a fix, you need to test the rollback procedure in case it won't perform the same in production, and then finally get all the approvals, sign-offs, user verification testing, etc. before conducting a production rollout.
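To make that quoted process concrete, here is a minimal sketch of the backup-then-deploy-then-rollback step in Python. The paths, the fix bundle, and the smoke-test command are hypothetical stand-ins, not anything ZOS actually runs.

```python
#!/usr/bin/env python3
"""Sketch of a deploy step that snapshots the install before applying a fix
and rolls back if a smoke test fails. All paths/commands are hypothetical."""
import shutil
import subprocess
import sys
from pathlib import Path

APP_DIR = Path("/opt/login-service")            # hypothetical install dir
BACKUP_DIR = Path("/opt/backups/login-service")  # hypothetical backup location
FIX_BUNDLE = Path("/tmp/proposed-fix")           # fix already tested in sandbox

def backup() -> Path:
    """Snapshot the current deployment BEFORE touching anything."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    snapshot = BACKUP_DIR / "pre-fix"
    shutil.copytree(APP_DIR, snapshot, dirs_exist_ok=True)
    return snapshot

def deploy(bundle: Path) -> None:
    """Copy the proposed fix over the current install."""
    shutil.copytree(bundle, APP_DIR, dirs_exist_ok=True)

def healthy() -> bool:
    """Run the service's smoke test (hypothetical command)."""
    return subprocess.run([str(APP_DIR / "bin" / "smoke-test")]).returncode == 0

def rollback(snapshot: Path) -> None:
    """Restore the pre-fix snapshot if the fix misbehaves."""
    shutil.rmtree(APP_DIR)
    shutil.copytree(snapshot, APP_DIR)

if __name__ == "__main__":
    snapshot = backup()
    deploy(FIX_BUNDLE)
    if not healthy():
        rollback(snapshot)
        sys.exit("fix rolled back: smoke test failed")
    print("fix deployed and verified")
```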
Trikie_Dik wrote: »As for the comment about BCP vs. DR....
You do NOT roll to your Disaster Recovery instance for a tiny blip like login issues - many times you would experience the same problem there if there are deeper-seated integration issues. Not only do you need to ensure the DR env is set up with the latest copy of production, but you also need a plan to re-sync with production once the DR env is no longer needed.
In my line of work, which is the ATM and banking support industry, you will only roll to DR in the event of a total disaster, where the current env will require days of work and won't be usable for quite some time. It must warrant not only the time to cut over, but also the time required to resync and cut back over to production once that is ready.
Long story short - we are wayyyy before a 'roll to DR' plan at this point
Even for a BCP plan... if you're having integration issues between the app and the credential store, swapping over to a backup DB or app server will likely have the same issue.
Keep in mind, all of the above is from my exposure to the companies I have worked for and the general industry standard there - however, that's NOT a game whose primary intent is to provide entertainment. No one will die if they stay down for 2 hours, no one is locked in a vault or running out of air, etc..... give them some time and let's hope they get a speedy recovery!
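To illustrate the BCP point in that post: a standby node that passes a plain liveness check can still fail on the credential-store integration, so swapping nodes changes nothing. A minimal sketch, assuming hypothetical /health and /health/auth endpoints and made-up hostnames:

```python
"""Sketch: distinguish 'server is up' from 'auth integration works'.
If the credential-store integration is broken, every node reports
up=True, auth_ok=False, so failing over reproduces the same login errors."""
import urllib.error
import urllib.request

def liveness(base_url: str) -> bool:
    """Is the app server itself up? (what a basic failover check looks at)"""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def auth_integration(base_url: str) -> bool:
    """Can the app actually reach and use the credential store?"""
    try:
        with urllib.request.urlopen(f"{base_url}/health/auth", timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

for node in ("https://primary.example", "https://dr-standby.example"):
    # Both checks per node: a healthy-but-broken-auth pattern on every node
    # means the fault is in the integration, not the hardware.
    print(f"{node}: up={liveness(node)}, auth_ok={auth_integration(node)}")
```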
RodneyRegis wrote: »Hey guys - no need to keep telling us you have the same problem as everybody else has been reporting for 12 pages...
I just started playing a few days ago on the EU server after a long break, and since I started there have been constant issues. So far I've experienced super long loading screens after which I was usually returned to the starting point, a quest that bugged out (the enchanted boat in Stonefalls), super crazy lag, and now I can't log in to the game or my account; I can't even open a support ticket.. I must have chosen the worst time to rejoin, I suppose.
The graphics and soundtrack are great, and it is fun to play, but the technical issues are really spoiling it right now. I have only a couple of hours a day at most after a full day of work, and when I finally can sit down and play, I can't log in.
Trikie_Dik wrote: »(..)
At the same time, you have to look at it from the dev and ops point of view, and know they did not just create this issue for our inconvenience. Speaking from personal experience, when a large enterprise issue pops up it's not always easy to fully diagnose and get down to the root cause. Even further, once you find said issue, the systems are so complex you can't just go throwing untested code fixes into production.
A typical SDLC will involve replicating the issue in a dev/sandbox environment, packaging up a deployment script that contains the proposed fix AFTER creating the necessary backup files in case of a rollback, then testing that fix in the dev/sandbox instance. Once that's proven to be a fix, you need to test the rollback procedure in case it won't perform the same in production, and then finally get all the approvals, sign-offs, user verification testing, etc. before conducting a production rollout.
(..)
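The "test the rollback too" step from that quote can be sketched as a sandbox rehearsal: snapshot the pre-fix state, apply the fix, restore the snapshot, and confirm the tree really matches what was there before. Everything below (paths, helper names) is hypothetical.

```python
"""Sketch of a rollback rehearsal in a sandbox: prove that the restore
procedure reproduces the exact pre-fix state before trusting it in prod."""
import hashlib
from pathlib import Path

def tree_digest(root: Path) -> str:
    """Hash every file (name + contents) under root so two trees compare exactly."""
    h = hashlib.sha256()
    for path in sorted(root.rglob("*")):
        if path.is_file():
            h.update(str(path.relative_to(root)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()

def rollback_rehearsal(sandbox: Path, apply_fix, restore_backup) -> bool:
    """True only if restore_backup() really brings back the pre-fix state."""
    before = tree_digest(sandbox)
    apply_fix(sandbox)        # stand-in for the real deployment script
    restore_backup(sandbox)   # stand-in for the real rollback procedure
    return tree_digest(sandbox) == before
```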
RefLiberty wrote: »DrOuttaSight wrote: »PC EU Server
Press Bold to fix faster