We could discuss this in actual IT terms... and the bottlenecks therein.
ALL IT SYSTEMS OF EVERY TYPE HAVE ONE NARRATIVE: DELIVERING "THE RIGHT DATA TO THE RIGHT PLACE AT THE RIGHT TIME." I coined that phrase in the draft of the IT Service Delivery Management book I'm writing. If you see lag, that delivery is obviously not occurring. If any IT system has ANY problem, something is getting in the way of this maxim. Yes, I'm lecturing from the podium. J/K, just spouting some of the useless knowledge I have stored up in mah brain from 20+ years in billion-dollar enterprise companies.
We would have to acknowledge the actual profiled bandwidth and latency of:
- Network routers and how data types are marked (QoS) for level of service
- Transit time through external and internal DMZs with enterprise firewalls like Check Point
- Server OS gigabit connections and bandwidth
- Physical I/O bus sizes on servers
- Broker/CICS regions that encapsulate player sessions on the zone partitions
- Database transactions over fiber
- SSL appliance bandwidth and timeouts
- EMC-like shared storage: data on SSD vs. spinning hard drives, and separate services using the same spindle causing slowdowns
- Storage Fibre Channel card bottlenecks
- Application-level transactional profiling of player functions (fired-off skills) that get shared to "other" players, and the FIFO data queues behind them (see the sketch after this list)
- Available physical/virtual memory and the dreaded memory swapping
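Since those FIFO data queues are where player-visible lag shows up most directly, here's a minimal sketch of the idea. This is not ZOS's actual code; the SkillEventQueue name, the event shape, and the depth limit are all hypothetical, just illustrating how queue depth and wait time become numbers you can chart:

```python
# Hypothetical sketch (not ZOS's code): a bounded FIFO queue for
# skill events fanned out to other players. Queue depth and time
# spent waiting in the queue are the metrics you would trend.
import collections
import time

class SkillEventQueue:
    """FIFO whose depth is a leading indicator of lag: if producers
    (players firing skills) outpace the consumer (fan-out to other
    clients), depth grows and latency follows."""

    def __init__(self, max_depth=10_000):
        self.queue = collections.deque()
        self.max_depth = max_depth
        self.dropped = 0

    def enqueue(self, event):
        if len(self.queue) >= self.max_depth:
            self.dropped += 1  # backpressure: shed load rather than stall
            return False
        self.queue.append((time.monotonic(), event))
        return True

    def dequeue(self):
        ts, event = self.queue.popleft()
        queue_latency = time.monotonic() - ts  # time the event sat in the FIFO
        return event, queue_latency

# Profiling hook: chart depth, drops, and queue latency every patch cycle.
q = SkillEventQueue()
q.enqueue({"player": 42, "skill": "crystal_frag"})
event, waited = q.dequeue()
print(f"waited {waited * 1000:.3f} ms in queue; depth now {len(q.queue)}")
```

The choice to shed load instead of blocking is one of those low-level design decisions players experience as "my skill didn't fire" versus "everything froze."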
All of the items listed above, along with many more I did not mention, would need to be profiled and charted for trends after every major patch cycle, and periodically besides, to see the actual transactional timings over any day/week cycle. Most importantly, wherever there is consistent growth on any of those charts, any IT manager could see where we are running towards a cliff, so the ship can be righted BEFORE going over the edge. Many times you have weeks of notice on something growing unchecked. Example: hard drive space on a set partition; you fill it up and the application falls over. Billion-dollar companies ignore this daily. Don't get me started on NAT tables, app garbage collection, and some idiot running a full-gigabit-speed backup on a production server during peak... /cry
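To make the "weeks of notice" point concrete, here's a minimal sketch of that cliff detection, assuming you already collect one disk-usage sample per day. The days_until_full helper and every number in it are made up for illustration:

```python
# Hypothetical sketch of "running towards a cliff" detection: fit a
# least-squares trend line to daily disk-usage samples and project
# when the partition fills. All numbers are invented.
def days_until_full(samples_gb, capacity_gb):
    """samples_gb: one usage reading per day, oldest first."""
    n = len(samples_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_gb) / n
    # Ordinary least-squares slope = GB of growth per day.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_gb))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # flat or shrinking: no cliff ahead
    return (capacity_gb - samples_gb[-1]) / slope

# A partition growing ~2 GB/day with 60 GB of headroom left:
usage = [400, 402, 405, 406, 409, 411, 413]
print(f"~{days_until_full(usage, 473):.0f} days before the app falls over")
```

Run that against a week of samples and the ~2 GB/day growth gives roughly a month of warning, which is exactly the window an IT manager needs to right the ship.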
But since this is not a low-level IT discussion and sits more at the 10,000-foot view, I would say any assertion that does not talk turkey about the lower-level actuals may not be relevant to what is actually causing the lag we see.
ZOS should know EXACTLY what I'm talking about and how to remove the lag we see. We could have daily missions to the moon, but there is a cost involved and personnel must be assigned to manage it. Obviously some party has made a conscious decision to leave things as is and/or to manage to a "good enough" level. Not slamming it, just stating that if they wanted to fix "it" and pay for "it", it likely would have occurred already, imo. Extreme daily cases of 100 vs. 100 in PvP may not warrant the infrastructure cost the company is willing to pay to remove that perceived lag.
The application and architecture we call ESO is a system made up of far more than just servers.
That said, ZOS may not have the skill set to manage this application to the level you'd actually like, or may not be given the capital to do so under a budget tied to their quarterly profit margin. No amount of screaming on the forums in CAPS will change this.
Most enterprise-level companies think IT is a commodity that anyone can do, to the point of engaging many, many vendors (outsourcing). This, imo, is the number-one reason US companies have degraded product quality. The vendors do not care; they only care about the transaction and getting paid for it. Fixing small "incidents" is fine with them, but preventing long-standing underlying "problems" is not. There is a difference that most don't recognize.
The real question is how much of the ZOS IT infrastructure is hosted elsewhere (SHARED) and touched/designed by non-ZOS employees. I am not privy to this, and they are not my client; that said, I spoke to some of this in an earlier thread. Some of you seem to know little more than what you learned on Facebook... :'( Entitled brats! JUST KIDDING LOL. /poke hehe.