Luke_Flamesword wrote: »On test servers you can only have far fewer people. If you think we have no useful data now, then on the test servers it can only be worse, so what's your point?
pauld1_ESO wrote: »This is why things like this belong on a TEST SERVER. People who want to test will test; people who do not will not.
Damnationie wrote: »Can we just put this "we have to do this on live" nonsense to bed.
Outside the gaming industry, in the wider IT world, live is the last place you go to do performance testing when you are investigating performance. I have over two decades in programming, a lot of it spent troubleshooting misbehaving systems, and the claim that you can't find performance problems on a test system is a warning sign I've come across a number of times. What it normally translates to is:
- We don't want to spend money on a proper test system that is a properly scaled-down replica of our live system.
- We don't know enough about how the system works to set up a proper test system.
- Proper test automation and load-testing tools have not been purchased for the developers to use.
- Trying to get people without experience or expertise in troubleshooting to investigate complex issues, because they won't pay for specialist help.
None of what ZOS is doing makes logical sense if they actually want to solve their issues. For a performance test to be any good, you need control over the inputs and the ability to repeat the exact same activity every time you adjust the code.
If the conditions are not identical, then you cannot properly gauge the impact of a change. For example, there is currently a global cooldown (GCD) on abilities in Cyrodiil. As a result, a lot of people seem to be simply removing those skills from their skill bars, maybe leaving just one. So the test is not actually measuring the impact of the change on performance, but rather the impact of people using different skills altogether. As ZOS runs the different scenarios they have announced, people will again alter their characters to suit, so no two tests will be comparable. Any conclusions they draw will be flawed, and no, you can't compensate for that. I've been there and watched a lot of people make bad decisions with confidence, resulting in a major disaster.
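To make the "control over the inputs" point concrete, here is a minimal sketch of a repeatable test scenario. All names and numbers are invented for illustration; the only point is that a fixed seed makes every run replay the exact same action sequence, which is the precondition for attributing a performance difference to a code change rather than to players changing their behavior.

```python
import random

# Hypothetical skill list -- purely illustrative, not ESO's actual data.
SKILLS = ["aoe_blast", "single_target", "heal", "buff"]

def build_scenario(num_players, actions_per_player, seed=42):
    """Produce a deterministic action script: same seed -> same script."""
    rng = random.Random(seed)
    script = []
    for player in range(num_players):
        for tick in range(actions_per_player):
            script.append((tick, player, rng.choice(SKILLS)))
    return script

# Two runs with the same seed produce identical input, so any difference
# in measured performance between code versions is down to the code.
run_a = build_scenario(50, 100)
run_b = build_scenario(50, 100)
assert run_a == run_b
```

A live server gives you none of this: the "script" is whatever thousands of players happen to do that evening.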
What they should be doing is getting a load of bot scripts (if they don't have test automation tooling, bots would work just as well for this) and running them on their internal test system to replicate standard player activity. With monitoring you can see the impact each bot has and the impact of each skill. You can enable debug-level tracing in the code and properly determine where the bottlenecks are. Debug-level logging would crash most production applications; that's why you don't do this type of testing on live.
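A sketch of the per-skill monitoring idea, under stated assumptions: `simulate_skill_cost` is a stand-in for the real server-side work of resolving a cast (in a real harness you would instrument the server's own handlers), and all skill names and costs are invented. The point is that aggregating cost by skill immediately shows which ability type dominates the load.

```python
import random
from collections import defaultdict

def simulate_skill_cost(skill, nearby_players):
    # Assumed cost model: an AoE cast must check every nearby target,
    # while a single-target cast does roughly constant work.
    if skill == "aoe_blast":
        return 0.00001 * nearby_players
    return 0.00001

def run_bots(num_bots, casts_per_bot, seed=0):
    """Drive scripted bots and bucket the simulated server cost by skill."""
    rng = random.Random(seed)
    totals = defaultdict(float)
    for _ in range(num_bots * casts_per_bot):
        skill = rng.choice(["aoe_blast", "single_target"])
        totals[skill] += simulate_skill_cost(skill, nearby_players=num_bots)
    return dict(totals)

totals = run_bots(num_bots=200, casts_per_bot=50)
# With 200 bots in range, each AoE cast costs ~200x a single-target cast,
# so the AoE bucket dominates the total even with equal cast counts.
assert totals["aoe_blast"] > totals["single_target"]
```

On a test system you get this breakdown directly from instrumentation; on live you can only infer it from symptoms.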
You can start with one simulated player and ramp up the numbers each run while watching performance. As you add simulated players, you'll start to see the pain points and bottlenecks in the code. They may not be as dramatic as what is happening on live, but they should be detectable. From that you can figure out where to look, and once you make changes you get a reliable read on whether they had any impact. What they are currently doing is hoping they stumble on a solution.
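The ramp-up loop itself is trivial to sketch. The cost model below is entirely made up (a linear per-player term plus a quadratic interaction term, e.g. AoE pairwise checks); in practice the numbers come from instrumenting the test server. What matters is the shape of the harness: increase the player count each run and record where tick time blows the frame budget.

```python
def simulated_tick_ms(num_players):
    # Assumed stand-in cost model: linear per-player work plus a
    # quadratic interaction term. Real numbers would come from the server.
    return 0.05 * num_players + 0.0005 * num_players ** 2

def ramp_until_over_budget(budget_ms=33.0, step=25, limit=2000):
    """Increase the simulated player count each run; report the first
    count whose tick time exceeds the budget, plus the full curve."""
    results = []
    for n in range(step, limit + 1, step):
        tick = simulated_tick_ms(n)
        results.append((n, tick))
        if tick > budget_ms:
            return n, results
    return None, results

knee, results = ramp_until_over_budget()
```

The recorded curve is the deliverable: rerun it after each code change and compare where the knee moves.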
What they are currently doing falls into the less than professional end of the IT world.
redlink1979 wrote: »Tests like these must be done on the live server, during periods of high population such as events.
Bots that ZOS can create on internal/test servers can't replicate all the human behaviors, gear, and skill combos players use while playing.
Damnationie wrote: »Can we just put this "we have to do this on live" nonsense to bed.
I don't understand it either, and made a similar post a while back.
Like you say, all that would be required to test this AoE theory of theirs would be a simple simulation of that number of players spamming AoEs. I also know they already know this (any programmer who has ever worked with game engines does). So I've concluded there must be another reason for this test besides the one described.
I suspect the combat team is considering transitioning to a cooldown-based system (instead of the current APM system) in order to narrow the skill gap they want to fix. So I think that's the real reason behind this test: they want to judge how their player base reacts to longer cooldowns on their abilities, and have cleverly disguised the ploy as an experiment to reduce lag.
TineaCruris wrote: »Damnationie wrote: »Can we just put this "we have to do this on live" nonsense to bed.
This IS NOT their motivation.
TineaCruris wrote: »....and those that are in Cyrodiil have all changed their builds and bars so as not to use any AoEs.
Damnationie wrote: »redlink1979 wrote: »Tests like these must be done on the live server, during periods of high population such as events.
Bots that ZOS can create on internal/test servers can't replicate all the human behaviors, gear, and skill combos players use while playing.
If they can't replicate them, then humans can't do them in the first place.
techyeshic wrote: »TineaCruris wrote: »....and those that are in Cyrodiil have all changed their builds and bars so as not to use any AoEs.
I'm pretty sure that's the idea behind the tests. They are saying they think it is the AoEs doing calculations to determine who was hit in an area and by how much, so they want to see whether performance gets better when fewer AoEs are used. I don't think they care much about how well abilities play, as they have said they would have to review abilities once they conclude which tests work at reducing the lag.
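The "AoEs doing calculations" theory boils down to scaling: if each AoE cast must test every player in the area for a hit, and the number of casters also grows with the crowd, the total hit-test work grows quadratically. A rough cost model (not ZOS's actual code; all numbers are illustrative):

```python
def aoe_checks_per_tick(players_in_area, fraction_casting_aoe):
    """Rough cost model: every AoE cast must test each player in the
    area, so total checks scale as (n * f) casters times n targets."""
    casters = int(players_in_area * fraction_casting_aoe)
    return casters * players_in_area

# Doubling the crowd roughly quadruples the hit-test work:
small = aoe_checks_per_tick(50, 0.5)    # 25 casters * 50 targets = 1250
large = aoe_checks_per_tick(100, 0.5)   # 50 casters * 100 targets = 5000
assert large == 4 * small
```

That quadratic shape is exactly why the problem shows up in large Cyrodiil fights and nowhere else, and why it would also reproduce on a test server with enough simulated players.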
The unfortunate thing is that this really came to a head at Update 25, with performance degrading in the lead-up to it, and for whatever reason they can't seem to go back. That was also the patch where they removed a huge chunk of the client file size.