FrancisCrawford wrote: »As I said in reply to @ZOS_RichLambert, there's no reason this shouldn't all scale well, at least within the confines of a single server. For example:
- List everybody who's anywhere near an actual or potential fight. That list changes slowly.
- Calculate their exact positions every 100 milliseconds or so.
- Tabulate those positions redundantly, with a few different sorts. This part could safely be done every 900 milliseconds.
- Give each player their own little bit of a GPU.
- In the player-specific part of the GPU, on each skill cast or proc, pull a list of possible targets from the most appropriate sort. Update it with precise positions from the 100 millisecond table. Proceed from there.
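Purely for illustration, here is a minimal Python sketch of what that tiered scheme could look like; the class, the intervals, and the data shapes are my own assumptions, not anything ZOS has described:
```python
import math

# Illustrative sketch only: a coarse candidate list that changes slowly,
# fine positions refreshed every ~100 ms, redundant sorted tables rebuilt
# every ~900 ms, and a per-cast target query on top of them.
class CombatTracker:
    def __init__(self, players):
        self.players = players        # {player_id: (x, y)}, updated elsewhere
        self.near_combat = set()      # slow-changing candidate list
        self.fine_positions = {}      # refreshed on the ~100 ms pass
        self.sorted_tables = {}       # rebuilt on the ~900 ms pass

    def refresh_candidates(self, combat_zones, radius=150.0):
        # Slow pass: list everybody anywhere near an actual or potential fight.
        self.near_combat = {
            pid for pid, pos in self.players.items()
            if any(math.dist(pos, zone) <= radius for zone in combat_zones)
        }

    def refresh_fine_positions(self):
        # ~100 ms pass: exact positions, but only for the candidate list.
        self.fine_positions = {pid: self.players[pid] for pid in self.near_combat}

    def rebuild_sorted_tables(self):
        # ~900 ms pass: tabulate the positions redundantly with a few sorts.
        self.sorted_tables = {
            "by_x": sorted(self.near_combat, key=lambda p: self.fine_positions[p][0]),
            "by_y": sorted(self.near_combat, key=lambda p: self.fine_positions[p][1]),
        }

    def targets_for_cast(self, caster_pos, radius):
        # Per skill cast or proc: pull candidates from a sorted table, then
        # confirm against the fresher 100 ms positions before proceeding.
        candidates = self.sorted_tables.get("by_x", [])
        return [
            pid for pid in candidates
            if math.dist(self.fine_positions[pid], caster_pos) <= radius
        ]
```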
I just wish the EU server had its capacity increased. It may be naive of me, but the NA server is less populated and has fewer issues (everything else being equal?), so rather than wish my fellow Europeans would go and find another game, I would rather have a server with enough space for all of us.
Now I am going to sit here for about 15 mins thinking of a good analogy.
Ok, how about this one: I went to renew my license at the DMV after about 12 years.
Back in the day, they would
1. give us a number and have us sit,
2. call us up one by one no matter what the reason for the visit was,
3. and we would go to whichever counter was available to take us.
This obviously led to unpredictable and mostly long wait times at the DMV.
Fast forward to 2020.
Before we walk in, we need to
1. register at a computer that is not controlled by a human;
2. it asks us the nature of the visit;
3. it assigns us to sit in a certain part of the building, because they are splitting us up into groups of people who need similar things done;
4. instead of calling us up one by one, they call up portions of each group, 20 at a time, to go to the counters;
5. then they split the 20 across the 5 previously empty counters and have each smaller group stand at one of them. These counters specialize in what our group needs to have done.
So once we are standing there, it's about 5 minutes per person. Obviously this data was gathered over the years and used to design this system to make the DMV more efficient. No matter how many people flood in, the wait time should stay about the same for each specific group, which lets people plan their time when they go to the DMV.
Note that this was the same DMV across those 12 years: the same number of counters, the same number of people at those counters, similar hardware, and a building of the same size. The only thing that changed to make the DMV more efficient was the logic behind how to take in and process the people.
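If you prefer the same point in code form, here is a toy Python version of the two intake policies; the service times and counter counts are made up, it just shows why routing by visit type keeps the quick cases predictable:
```python
import heapq

# Toy model of the DMV analogy: same two counters in both layouts, only the
# intake logic differs. Service times (in minutes) are made-up numbers.
flood = [("new_license", 20)] * 100 + [("renewal", 5)]

def renewal_wait_shared_fifo(visits, counters=2):
    # Old DMV: one first-come-first-served line, any counter takes anyone.
    free_at = [0] * counters
    for reason, minutes in visits:
        start = heapq.heappop(free_at)          # earliest-free counter
        heapq.heappush(free_at, start + minutes)
        if reason == "renewal":
            return start                        # how long the renewal waited

def renewal_wait_routed(visits):
    # 2020 DMV: the kiosk routes each visit type to its own specialized
    # counter, so a quick renewal never queues behind slow new licenses.
    free_at = {}
    for reason, minutes in visits:
        start = free_at.get(reason, 0)
        free_at[reason] = start + minutes
        if reason == "renewal":
            return start

print(renewal_wait_shared_fifo(flood))   # 1000 minutes stuck behind the flood
print(renewal_wait_routed(flood))        # 0 minutes, no matter how big the flood is
```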
So what does this mean? Buying better servers won't solve the overall problem.
Yes, the DMV of 12 years ago could have invested in a bigger building, but then they would have had to take up more space, hire more people to fill the new counters, buy more hardware to take pictures and signatures, and, oh yeah, rebuild the whole DMV and close it down for a while, buy more seats, more restrooms, etc. And realize that the problem still wouldn't be fixed: the times would still be wildly different. When more people start coming in, it still gets jammed even though they expanded the size. You would still have people who could have been processed quickly and been out the door stuck behind people who need time to be processed, just because they kept the first-come-first-served approach for every process instead of splitting the processes into different types like the 2020 version.
Remember, people who went to the DMV of 12 years ago would just look at the lines outside and drive away. Since this newer DMV would be larger, they couldn't see that perpetual line, so people would flood it even more, and the same problem would occur just on a larger scale. So we would have wasted resources just to make the problem bigger and more unwieldy.
The 2020 logical fix does not have that problem, because they know the times will stay roughly constant as they process the people. Their logic allows them to monitor the number of people, predict the wait times, predict whether they will close on time, and predict a whole lot more. If people flood the 2020 DMV, the wait time should stay the same. Even if it doesn't, the time will go up but not too far above the average (there is always a chance that many people have very unique needs that take a good amount of time beyond the average), so it will still be predictable.
So with that said, the TEST that ZOS is doing is to get that data. If it turns out to be what Lambert says, they will have to overhaul their system. All sets, AOEs, etc. will have to change.
That being said, the currently proposed fix to the problem is the lazy way of solving it. That is why my hope is that they are only doing this testing as a process of elimination.
I am expecting to see the lag go down. That should give them the OK to overhaul the whole system, NOT WITH COOLDOWNS OR COST INCREASES. Maybe keep those as a temporary fix until they finish the full overhaul, but NOT as the final rendition.
First of all, I agree that beneficial AOEs should only affect people in one's own group, and group sizes should come down to 12 (as Fengrush may have said).
Secondly, if a caster is by themselves, the heal should only heal them.
Synergies should still be usable by anyone who can activate them (maybe limit that to groups only as well, lol).
Monster helms and sets should also conform to this... ONLY HEALING THOSE IN THE GROUP!!!!
Ritual of Retribution should only heal those in the group... not whoever steps into it. You know what, actually?
How does the AOE actually work? A push/pop queue?
Push whoever walks in, pop whoever walks out?
Move whoever in the AOE is at low health to the bottom of the queue every time?
I don't know how ZOS does it, but if it works like this now, a queue would have to be created every time I drop an AOE monster helm, an AOE set, an AOE DoT, or an AOE HoT.
If a whole group of people does this at the same time, with multiple people walking in and out and queues being created and destroyed all over, it might well get bogged down.
So that's why limiting the queue to the group you are in SHOULD INCREASE THE SPEED!!!!
As Fengrush also said (and I knew as well), there's no need to worry about joe loser dying to an NB walking on the side of the keep while in range of my group heals...
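To make the guess concrete, here's a rough Python sketch of that push/pop idea with the group restriction applied up front; the class and the numbers are mine, pure speculation, not how ZOS actually does it:
```python
# Speculative sketch: each ground AOE tracks who is standing in it, and the
# proposed group restriction filters non-group players out at the door so
# they never add work to the per-tick loop.
class GroundAoe:
    def __init__(self, owner_group_ids, group_only=True):
        self.group = set(owner_group_ids)
        self.group_only = group_only
        self.inside = set()               # who is currently standing in it

    def on_enter(self, player_id):
        # "Push whoever walks in" -- but skip non-group members immediately.
        if not self.group_only or player_id in self.group:
            self.inside.add(player_id)

    def on_leave(self, player_id):
        # "Pop whoever walks out."
        self.inside.discard(player_id)

    def tick(self, heal_amount, apply_heal):
        # Each heal tick only touches the (small) set of tracked players.
        for player_id in self.inside:
            apply_heal(player_id, heal_amount)

# A 12-player group drops the effect; a random passer-by walking through it
# never enters the set, so the tick loop stays small.
aoe = GroundAoe(owner_group_ids=range(12))
aoe.on_enter(3)       # group member: tracked
aoe.on_enter(9001)    # "joe loser" outside the group: ignored
aoe.tick(500, lambda pid, amount: print(f"heal {pid} for {amount}"))
```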
Ima edit this later maybe...
FrancisCrawford wrote: »As I said in reply to @ZOS_RichLambert, there's no reason this shouldn't all scale well, at least within the confines of a single server. For example:
- List everybody who's anywhere near an actual or potential fight. That list changes slowly.
- Calculate their exact positions every 100 milliseconds or so.
- Tabulate those positions redundantly, with a few different sorts. This part could safely be done every 900 milliseconds.
- Give each player their own little bit of a GPU.
- In the player-specific part of the GPU, on each skill cast or proc, pull a list of possible targets from the most appropriate sort. Update it with precise positions from the 100 millisecond table. Proceed from there.
Ectheliontnacil wrote: »Why is performance in pvp greatly improved during midyear mayhem then?
So by your logic, ZOS slapped in a few extra RAM sticks and hard drives during MYM and is now selling them on eBay?
FrancisCrawford wrote: »As I said in reply to @ZOS_RichLambert, there's no reason this shouldn't all scale well, at least within the confines of a single server. For example:
- List everybody who's anywhere near an actual or potential fight. That list changes slowly.
- Calculate their exact positions every 100 milliseconds or so.
- Tabulate those positions redundantly, with a few different sorts. This part could safely be done every 900 milliseconds.
- Give each player their own little bit of a GPU.
- In the player-specific part of the GPU, on each skill cast or proc, pull a list of possible targets from the most appropriate sort. Update it with precise positions from the 100 millisecond table. Proceed from there.
Do you have any experience with coding?
I'll freely admit I don't have much experience, but what you're describing sounds exactly like the sort of shoddy code and poor planning that created the performance issues in ESO to begin with.
I am sick and tired of people saying "buy better servers"
Ectheliontnacil wrote: »Why is performance in pvp greatly improved during midyear mayhem then?
During Midyear Mayhem the issues could have been temporarily eased because players were spread out over more PvP instances/shards, combined with those shards all containing many PvE players as well: PvE players who aren't maxed out and aren't trying to push every ounce of power out of their character (less light-attack weaving, fewer skills used, less bar swapping, fewer procs, doing PvE objectives, etc.). That means fewer calculations for the servers, simply because of who was playing.
In short: the volume and type of combat might play a major part in the PvP issues.
Let's do some experimental calculations:
Let's say the Cyrodiil player limit is 500 players maximum, and let's say a PvP player accounts for 100 server calculations per second (hitting, getting hit, buffs, debuffs, movement, etc.). When the server is filled with just PvP players, that comes down to 500 times 100 server calculations per second, so a total of 50,000 calculations per second. Now let's say 50% of the players during the event were PvE players, each making 50 server calculations per second. That means 250 players making 100 calculations per second plus 250 players making 50 calculations per second, which comes down to 25,000 plus 12,500, i.e. 37,500 server calculations per second. That's a 25% reduction in strain on the servers, every second, which might have been just enough to keep them from overstraining. Maybe even 10% of the players being PvE players would already make a noticeable difference, and if the server allows more players into Cyrodiil, the effect gets bigger still. Of course this is all theoretical, as only ZOS knows the real numbers.
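The same back-of-the-envelope numbers in code form, with every figure an assumption as said above:
```python
# All numbers are assumptions from the paragraph above; only ZOS knows the real ones.
CAP = 500        # assumed Cyrodiil player cap
PVP_RATE = 100   # assumed server calculations per second per PvP player
PVE_RATE = 50    # assumed server calculations per second per PvE player

def load(pve_share):
    pve_players = int(CAP * pve_share)
    pvp_players = CAP - pve_players
    return pvp_players * PVP_RATE + pve_players * PVE_RATE

full_pvp = load(0.0)   # 50,000 calculations per second
mixed = load(0.5)      # 37,500 calculations per second
print(full_pvp, mixed, f"{1 - mixed / full_pvp:.0%} less strain")   # 25% less
print(load(0.1), f"{1 - load(0.1) / full_pvp:.0%} less strain")     # even 10% PvE trims 5%
```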
Again can't speak to ZOS but this ability to scale up and down when required literally saves us a few million per year in costs with the cloud company. Still much cheaper though than owning your own hardware and running/staffing your own data center.
anitajoneb17_ESO wrote: »Again can't speak to ZOS but this ability to scale up and down when required literally saves us a few million per year in costs with the cloud company. Still much cheaper though than owning your own hardware and running/staffing your own data center.
Asking out of curiosity, since you seem to know a bit about the topic: what's so expensive in a data center, and what makes running your own more expensive than renting one?
Off the top of my head, I can think of hardware, electricity and cooling costs, which I believe would be roughly equivalent in both cases, and staff (which can be pooled, but isn't the main cost factor here). Just asking.
The space, the personnel, and the infrastructure. You need a constantly cooled environment that the servers fit into; you can't just throw a server into any room, or it will overheat quite quickly. You also need someone to maintain the servers. If you outsource maintenance, you hand that responsibility to a company that specializes in it. If ZOS ran their own servers, they would have to get specialized space, get the infrastructure to and from the servers running (expensive cables and connections), and maintain the servers themselves (specialized personnel). Not to mention that if something breaks, ZOS gets the bill instead of the company they outsourced it to, since any smart hosting company builds hardware-defect costs into its rental price, which, spread out over multiple clients, is cheaper.