Not sure if my reply will have anything to do with your post, but I thought of this when I read it.
Somewhere in the in-game video settings there's an option that, when turned on, keeps my FPS from ever passing 60 -- so, like you, I am frame-locked. With it turned off I have no problem getting 70+ FPS. Something to do with your monitor's refresh rate / screen tearing, maybe?
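That setting is most likely a frame-rate cap. The basic mechanism is simple: after rendering each frame, the client sleeps away whatever is left of the frame's time budget. A minimal sketch in Python (the function name and parameters are illustrative, not ESO's actual implementation):

```python
import time

def run_capped_loop(render_frame, fps_cap=60, duration=0.5):
    """Run a render loop capped at fps_cap frames per second.

    render_frame: a callable standing in for the game's per-frame work.
    Returns the number of frames rendered within `duration` seconds.
    """
    frame_budget = 1.0 / fps_cap          # e.g. ~16.7 ms at 60 FPS
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        frame_start = time.perf_counter()
        render_frame()
        frames += 1
        # Sleep away whatever remains of this frame's time budget,
        # so the loop never exceeds the cap even if rendering was fast.
        elapsed = time.perf_counter() - frame_start
        if elapsed < frame_budget:
            time.sleep(frame_budget - elapsed)
    return frames

# With a trivially cheap "render", the cap (not the hardware) limits FPS.
frames = run_capped_loop(lambda: None, fps_cap=60, duration=0.5)
```

With the cap off, the loop just renders as fast as the hardware allows, which is why you see 70+ FPS; the cap exists to avoid tearing when the frame rate exceeds the monitor's refresh rate.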
KhajitFurTrader wrote: »ESO is an MMO, so there's a client and a server part to it, and it's essential that both stay in sync all of the time. Simply put, the rendering engine of any MMO client has to wait on the server to tell it what dynamic objects it has to draw. When compared with memory or SSD access times, a network is slower by a factor of 1k or more, so the client has to do all kinds of trickery to pretend there's fluid movement of all visible mobile objects.
So, while staying at the character selection screen, there is no need to sync the client to the persistent online world, resulting in a high frame rate. While logged in with a character, syncing is in place, and how high the frequency of syncing events can be depends strongly on the server load of the particular area (zone and/or cell) the character is in -- which in turn depends on the number of concurrent players present there. The lower that frequency (i.e. the more time between syncing events), the lower the FPS.
Think of it this way: within your client, you're only seeing the locally rendered, graphical representation of a world that exists (i.e. is computed) elsewhere. To create the illusion of a consistent world, shared by everyone who connects to it, there needs to be synchronization. This cannot be done in real time, and each connection has its own unique latency, so there needs to be a lot of leeway. The very nature of the client-server architecture of MMOs inherently prevents them, in some respects, from being the hardware hogs that single-player games are.
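The "trickery" mentioned above is commonly entity interpolation: the client renders slightly in the past and blends between the last two server snapshots it received, so movement looks smooth even though updates arrive far less often than frames are drawn. A minimal sketch in Python (the snapshot timestamps and positions are made-up illustrative values):

```python
def interpolate_position(snapshots, render_time):
    """Linearly interpolate an entity's position between two server snapshots.

    snapshots: list of (timestamp, position) pairs, oldest first.
    render_time: the (slightly delayed) time the client wants to draw at.
    """
    # Find the two snapshots bracketing render_time and blend between them.
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return p0 + alpha * (p1 - p0)
    # Outside the buffered window: clamp to the nearest snapshot.
    return snapshots[0][1] if render_time < snapshots[0][0] else snapshots[-1][1]

# Server sends positions at 10 Hz; the client renders many frames in between.
snaps = [(0.0, 0.0), (0.1, 5.0), (0.2, 5.0)]
print(interpolate_position(snaps, 0.05))  # -> 2.5, halfway between updates
```

Rendering "in the past" like this is exactly the leeway mentioned above: the client deliberately lags behind the server by a snapshot or two so it always has two known states to blend between.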
KhajitFurTrader wrote: »Ok, let's assume that in both situations the frequency of server updates remains constant, i.e. there is a fixed time interval at which the local network thread can synchronize with the client's main thread (of course, in real life there isn't). The main thread, or main loop, is the rendering engine: with every cycle, input/output (including network messaging) gets processed in subthreads, then everything gets synced, and after this exactly one frame is computed and passed on to the rendering queue of the GPU driver's API for processing. Rinse and repeat.
Thank you for the in-depth reply, and I think this makes a lot of sense; however, my issue cannot be entirely from server update speed.
The first image below is on low settings getting 60 FPS, and the second is just a minute later in the same spot on max settings getting 32 FPS. In both scenarios my computer is not being maxed out in any way. If I were hitting some sort of server bottleneck, the reduction in quality settings should have no effect.
And this was just standing still in Grahtwood....
FPS spikes up and down every 1-2 seconds until the game eventually crashes (it can go on for a long time before crashing, though). As you can see, the spikes in FPS line up with the spikes and dips in GPU load, power usage, and memory controller load.
In the case of low quality settings and 60 FPS, one cycle (including everything, e.g. driver overhead and rendering time) lasts 1/60 of a second, or approximately 16.6 ms. Likewise, high quality settings (which require at least four times the amount of graphical data to be processed, plus a lot more shaders and post-processing) yield 30 FPS, so one cycle lasts 1/30 of a second, or approx. 33.3 ms. This would indicate that in both cases the lower limit (floor) of cycle time is set by client-server network synchronization (a fraction of 16.6 ms), and thus by network thread/main thread synchronization. If sync time increases, so does cycle time, and thus the rate of frames computed per second decreases -- and vice versa, down to the minimum amount of time needed for network messaging (which might be way higher than simple ICMP Echo_Request round-trip times, a.k.a. "ping" latency).
As I said, the client-server architecture of MMOs with its inherent need for synchronization on at least two different levels is a limiting factor, which is absent in single-player games.
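The frame-budget arithmetic above can be checked in a few lines. A sketch in Python; the 10 ms sync floor is an assumed figure for illustration, not a measured value:

```python
def cycle_time_ms(fps):
    """Duration of one full cycle (sync + simulate + render) at a given frame rate."""
    return 1000.0 / fps

low = cycle_time_ms(60)    # ~16.7 ms per cycle on low settings
high = cycle_time_ms(30)   # ~33.3 ms per cycle on high settings

# Assume (illustratively) a 10 ms network/main-thread sync floor: it fits
# inside both budgets, and the extra ~16.7 ms on high settings is purely
# additional rendering and post-processing work.
sync_floor_ms = 10.0
render_budget_low = low - sync_floor_ms    # ~6.7 ms left for rendering
render_budget_high = high - sync_floor_ms  # ~23.3 ms left for rendering
```

The key point is that the sync floor is a fixed cost paid every cycle regardless of quality settings, which is why it caps the frame rate without ever maxing out the CPU or GPU.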
More cores do not necessarily give you more performance. It depends on the proportion of work that can only be done sequentially relative to what can be done in parallel. The resulting performance limit is given by Amdahl's law, which assumes the parallel tasks take no time at all -- so it is an upper bound on performance; all real-world software performs worse than this, because the parallel tasks take time as well.
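Amdahl's law can be made concrete with a few lines of Python (the 30% serial fraction is an illustrative figure, not a measurement of ESO's engine):

```python
def amdahl_speedup(serial_fraction, cores):
    """Amdahl's law: maximum speedup on `cores` cores when `serial_fraction`
    of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even with a modest 30% serial portion, extra cores quickly stop helping:
for n in (2, 4, 8, 1_000_000):
    print(n, round(amdahl_speedup(0.3, n), 2))  # ~1.54, ~2.11, ~2.58, ~3.33

# As cores -> infinity the speedup is capped at 1 / serial_fraction
# (here ~3.33x), and real software falls short of even that.
```

This is why adding cores shows diminishing returns: the serial fraction dominates long before the core count does.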
When going from low to high settings, another quality post-processing step is added, which happens after the initial rendering task. As a rule of thumb this requires about the same amount of time, and that is what you see: the frame rate has roughly halved.
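As a quick sanity check on that rule of thumb (illustrative numbers, not measured ones):

```python
base_render_ms = 16.7             # one frame on low settings, ~60 FPS
post_ms = base_render_ms          # post-processing assumed to cost about the same

fps_low = 1000.0 / base_render_ms              # ~60 FPS
fps_high = 1000.0 / (base_render_ms + post_ms) # ~30 FPS: doubling frame time halves FPS
```

Frame rate is the reciprocal of frame time, so adding a post-processing pass that costs as much as the base render exactly halves the FPS.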
I never claimed that it should be fully utilizing all my cores; however, it should be fully utilizing one core for everything running serially, and it is not even achieving that.
That is not really how it works - if a CPU is running permanently at max load, then it is overburdened. To use an analogy with a sports car: you can drive it at top speed in 6th gear, but if you do that for long, you overstress it. Normal cruising speed is much lower - around 150-180 mph - and in a way it is the same with CPU load: around 40-70% is a healthy load.