PC Game performance

Shyfty
✭✭✭
I noticed something strange today while trying to figure out where my performance bottleneck is.

#1 is my GPU usage on the character selection screen; it stays at a consistent 99%.
#2 is my GPU usage in game; it hovers between 50% and 70%.
#3 is my FPS on the character selection screen; it stayed around 75-85 FPS.
#4 is my FPS in game; here it stays around 60 FPS.

[screenshot SSZMK9I.jpg: GPU usage, CPU usage and FPS graphs]

So my first thought was that the game is frame locked at 60 FPS in game, but the problem is that even when my frame rate dips into the 20s, GPU usage never creeps past 70%. Notice also that in game my CPU usage (at the bottom of the graph) hovers between 30% and 60%.

I'm just curious why my frame rate can get so low while neither my CPU nor my GPU is fully utilized.

For any other system considerations: I have 16 GB of RAM, of which less than half is used. The game is on an SSD, and my internet speed is 200 Mbps down and 20 Mbps up.
  • coolmodi
    ✭✭✭
    You have a quad core? Core 1 at 100%, cores 2-4 at 20% = 40% overall usage.
    And with more cores it will be even lower!

    This game doesn't really use more than one core. As soon as the one core it's using hits 100%, it becomes the limiting factor.

    In my case it looks like this (i5 @ 4.6 GHz, in an open area!):
    [screenshot DTg2i3K.png: per-core CPU usage]
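
    As a rough illustration of why an averaged CPU percentage can hide a single maxed-out core, here is a small Python sketch (using psutil; the numbers in the comments are made up, nothing ESO-specific):

    import psutil

    # Sample per-core CPU usage over one second, then average it.
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    overall = sum(per_core) / len(per_core)

    print("per core:", per_core)        # e.g. [100.0, 22.0, 18.0, 20.0] on a quad core
    print("overall: %.0f%%" % overall)  # ~40% even though one core is pegged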



    Edited by coolmodi on February 13, 2016 10:58PM
  • IamNoobee
    ✭✭
    Not sure if my reply has anything to do with your post, but I thought of this when I read it.

    Somewhere in the in-game video settings there's an option that, when turned on, means my FPS will never go past 60, so basically, like you, I am frame locked. When it's turned off I have no problem getting 70+ FPS. Something to do with the refresh rate of your monitor / screen tearing?
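
    For reference, a frame cap tied to the monitor's refresh rate (V-Sync style) behaves roughly like this toy Python sketch, with made-up timings, not how the actual renderer works:

    import time

    REFRESH_HZ = 60        # assumed monitor refresh rate
    VSYNC = True           # flip to False to compare

    def render_frame():
        time.sleep(0.010)  # pretend one frame takes 10 ms to render

    frames = 0
    start = time.time()
    next_vblank = start
    while time.time() - start < 2.0:
        render_frame()
        if VSYNC:
            # Wait for the next simulated vertical blank before "presenting".
            next_vblank += 1.0 / REFRESH_HZ
            delay = next_vblank - time.time()
            if delay > 0:
                time.sleep(delay)
            else:
                next_vblank = time.time()   # missed a vblank; resync
        frames += 1

    print("average FPS: %.1f" % (frames / (time.time() - start)))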
    PC NA ~STD and Wet Noodles~
    ~AD Main Alts - Zerog/Pyle - Magicka NB , Noobee - Stamina DK

    ~DC Alts - Not So Bright - Stamina Sorcerer

    ~EP Alts - Noobee Jr - Magicka NB

    The first reset of VMA-PCNA #6 Nightblade Zerog
    In need of a PVP Guild
  • Shyfty
    ✭✭✭
    Regarding the concern about using only one core: as you can see, the game isn't fully utilizing any core. This is while running at 24 FPS.
    [screenshot sXCivCP.jpg: per-core CPU usage at 24 FPS]
  • KhajitFurTrader
    ✭✭✭✭✭
    ✭✭
    ESO is an MMO, so there's a client and a server part to it, and it's essential that both stay in sync all of the time. Simply put, the rendering engine of any MMO client has to wait on the server to tell it what dynamic objects it has to draw. When compared with memory or SSD access times, a network is slower by a factor of 1k or more, so the client has to do all kinds of trickery to pretend there's fluid movement of all visible mobile objects.

    So, while you stay at the character selection screen, there is no need to sync the client to the persistent online world, resulting in a high frame rate. While you are logged in with a character, syncing is in place, and how high the frequency of syncing events can be depends strongly on the server load of the particular area (zone and/or cell) the character is in -- which in turn depends on the number of concurrent players present there. The lower that frequency (i.e. the more time between syncing events), the lower the FPS.

    Think of it this way: within your client, you're only seeing the locally rendered, graphical representation of a world that exists (i.e. is computed) elsewhere. To create the illusion of a consistent world, shared by all that connect to it, there needs to be synchronization. This cannot be done in realtime, and each connection has its own, unique latency, so there needs to be a lot of leeway. The very nature of the client-server architecture of MMOs inherently prevents them from being hardware hogs like single player games in some aspects.
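
    The "trickery" mentioned above is usually client-side interpolation between server snapshots. A minimal Python sketch of the general idea (made-up numbers and rates; this is not ESO's actual netcode):

    SERVER_TICK = 0.1      # assume one authoritative server snapshot every 100 ms
    CLIENT_FPS = 60        # assume the client renders 60 frames per second

    def interpolate(prev_pos, next_pos, t):
        """Blend two server snapshots; t runs from 0.0 to 1.0 between updates."""
        return tuple(a + (b - a) * t for a, b in zip(prev_pos, next_pos))

    prev_snapshot = (0.0, 0.0)   # entity position at the older server update
    next_snapshot = (1.0, 2.0)   # entity position at the newer server update

    # Number of client frames rendered between two server snapshots (6 here).
    frames_between = int(SERVER_TICK * CLIENT_FPS)
    for frame in range(frames_between):
        t = frame / float(frames_between)
        print("frame %d -> %s" % (frame, interpolate(prev_snapshot, next_snapshot, t)))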
  • Shyfty
    ✭✭✭
    IamNoobee wrote: »
    Not sure if my reply has anything to do with your post, but I thought of this when I read it.

    Somewhere in the in-game video settings there's an option that, when turned on, means my FPS will never go past 60, so basically, like you, I am frame locked. When it's turned off I have no problem getting 70+ FPS. Something to do with the refresh rate of your monitor / screen tearing?

    I believe you are talking about V-Sync, which syncs frame updates with your monitor's refresh rate. I have this turned off.
  • LegacyDM
    ✭✭✭✭✭
    This game is coded very strangely. I have a delidded i7-3770K overclocked to 4.7 GHz, 16 GB of RAM, and AMD 7950s in TriFire. Yet I could be in Rawl'kha at 3am on a Monday morning when few people are on, and my FPS never goes above 40. Something about towns drops FPS for no rhyme or reason, yet in Cyrodiil home bases I get 80-100 FPS. I hope the 64-bit client brings an improvement in performance.
    Legacy of Kain
    Vicious Carnage
    ¥ampire Lord of the South
  • Shyfty
    ✭✭✭
    KhajitFurTrader wrote: »
    ESO is an MMO, so there's a client and a server part to it, and it's essential that both stay in sync all of the time. Simply put, the rendering engine of any MMO client has to wait on the server to tell it what dynamic objects it has to draw. When compared with memory or SSD access times, a network is slower by a factor of 1k or more, so the client has to do all kinds of trickery to pretend there's fluid movement of all visible mobile objects.

    So, while you stay at the character selection screen, there is no need to sync the client to the persistent online world, resulting in a high frame rate. While you are logged in with a character, syncing is in place, and how high the frequency of syncing events can be depends strongly on the server load of the particular area (zone and/or cell) the character is in -- which in turn depends on the number of concurrent players present there. The lower that frequency (i.e. the more time between syncing events), the lower the FPS.

    Think of it this way: within your client, you're only seeing the locally rendered, graphical representation of a world that exists (i.e. is computed) elsewhere. To create the illusion of a consistent world, shared by all that connect to it, there needs to be synchronization. This cannot be done in realtime, and each connection has its own, unique latency, so there needs to be a lot of leeway. The very nature of the client-server architecture of MMOs inherently prevents them from being hardware hogs like single player games in some aspects.

    Thank you for the in-depth reply, and I think this makes a lot of sense. However, my issue cannot be entirely due to server update speed.

    The first image below is on low settings getting 60 FPS, and the second is just a minute later in the same spot on max settings getting 32 FPS. In both scenarios my computer is not being maxed out in any way. If I were hitting some sort of server bottleneck, the reduction in quality settings should have no effect.

    [screenshot 7ZE7VhW.jpg: low settings, 60 FPS]
    [screenshot sSnrbNM.jpg: max settings, 32 FPS]

  • KhajitFurTrader
    ✭✭✭✭✭
    ✭✭
    Shyfty wrote: »
    Thank you for the in-depth reply, and I think this makes a lot of sense. However, my issue cannot be entirely due to server update speed.

    The first image below is on low settings getting 60 FPS, and the second is just a minute later in the same spot on max settings getting 32 FPS. In both scenarios my computer is not being maxed out in any way. If I were hitting some sort of server bottleneck, the reduction in quality settings should have no effect.
    Ok, let's assume that in both situations the frequency of server updates remains constant, i.e. there is a fixed time interval at which the local network thread can synchronize with the client's main thread (of course, in reality there isn't). The main thread, or main loop, is the rendering engine: with every cycle, input/output (including network messaging) gets processed in subthreads, then everything gets synced, and after that exactly one frame is computed and passed on to the rendering queue of the GPU driver's API for processing. Rinse and repeat.

    In the case of low quality settings and 60 FPS, one cycle (including everything, e.g. driver overhead and rendering time) lasts 1/60 of a second, or approximately 16.7 ms. Likewise, high quality settings (which require at least four times the amount of graphical data to be processed, plus a lot more shaders and post-processing) yield 30 FPS, so one cycle lasts 1/30 of a second, or approx. 33.3 ms. This would indicate that in both cases the lower limit (floor) of the cycle time is set by client-server network synchronization (a fraction of 16.7 ms), and thus by network thread/main thread synchronization. If sync time increases, so does cycle time, and thus the rate of frames computed per second decreases -- and vice versa, down to the minimum amount of time needed for network messaging (which might be far higher than simple ICMP Echo Request round-trip times, a.k.a. "ping" latency).

    As I said, the client-server architecture of MMOs with its inherent need for synchronization on at least two different levels is a limiting factor, which is absent in single-player games.
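
    As a quick sanity check on those numbers, in Python (plain arithmetic, nothing ESO-specific; the 10 ms stall below is just an arbitrary example):

    # Frame budget in milliseconds for a given frame rate.
    def frame_time_ms(fps):
        return 1000.0 / fps

    print("%.1f ms per frame at 60 FPS" % frame_time_ms(60))   # ~16.7 ms
    print("%.1f ms per frame at 30 FPS" % frame_time_ms(30))   # ~33.3 ms

    # If a fixed 10 ms per-frame cost (e.g. waiting on a sync) were added on top
    # of a 16.7 ms frame, the frame rate would drop to roughly:
    print("%.0f FPS" % (1000.0 / (16.7 + 10.0)))               # ~37 FPS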
  • Lysette
    ✭✭✭✭✭
    ✭✭✭✭✭
    More cores do not necessarily give you more performance. It depends on how much of the work can only be done sequentially relative to how much can be done in parallel. How this affects the performance limit is given by Amdahl's law, which in the limiting case assumes that the parallel tasks take no time at all - so it is an upper bound on performance, and all real-world software performs worse than this, because the parallel tasks do take time as well.

    When going from low settings to high settings, another quality/post-processing step is added, which happens after the initial rendering pass. As a rule of thumb this takes about the same amount of time again, and that is what you see: the frame rate has roughly halved.
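
    For reference, Amdahl's law can be written as a one-liner, where p is the fraction of the work that can run in parallel and n is the number of cores (the 20% figure below is only an illustrative assumption, not a measured value for ESO):

    def amdahl_speedup(p, n):
        """Upper bound on speedup with n cores when a fraction p of the work is parallel."""
        return 1.0 / ((1.0 - p) + p / n)

    # Example: if only 20% of the per-frame work could run in parallel,
    # even unlimited cores would give at most 1 / 0.8 = 1.25x speedup.
    print("%.2fx with 4 cores" % amdahl_speedup(0.2, 4))       # ~1.18x
    print("%.2fx with 1e9 cores" % amdahl_speedup(0.2, 1e9))   # ~1.25x, the limit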
  • Troneon
    ✭✭✭✭✭
    ✭✭✭
    ZOS changed something and they still have not replied to my topic about this....

    It's affecting GPU performance in ESO, and they need to fess up and acknowledge that they are working on it.

    http://forums.elderscrollsonline.com/en/discussion/246428/fps-drops-screen-stuttering-since-last-night/p1


    Anyone else getting this since last night? I am doing nothing but running forward in an empty zone with no one around, and I'm getting FPS drops and stuttering for no reason. I've checked everything I can think of, and I'm pretty sure it's an issue since the last mini patch...

    It happens every few seconds, and only when playing ESO, since last night. You can see the FPS dips / screen freezes happening at the same time as the dips in GPU load, power usage and memory controller load.

    [screenshot 1oYoR9t.jpg: FPS, GPU load, power and memory controller load graphs]
    Did you change something, ZOS?
    Troneon wrote: »
    And this was just standing still in Grahtwood...

    FPS spiking up and down, every 1-2 seconds, until eventually the game just crashes. This can go on for a long time before crashing, though. As you can see, the spikes in FPS happen at the same time as the spikes and dips in GPU load, power usage and memory controller load.

    [screenshot EhjGlNO.jpg: FPS and GPU load graphs while standing still in Grahtwood]

    EDIT:
    Troneon wrote: »
    Sorien wrote: »
    Yesterday they changed the way the game exe is packed/protected, so that may have something to do with it.

    Yep what Sorien said ^^

    They did change the way the game is packed… and if they use Codereplace on the wrong parts of code, it could definitely cause microstutters/lag.






    Edited by Troneon on February 14, 2016 10:40AM
    PC EU AD
    Master Crafter - Anything you need!!
    High Elf Magicka Templar Healer/DPS/Tank
    Trials / Dungeons / PVP / Everything
  • Shyfty
    ✭✭✭
    Shyfty wrote: »
    Thank you for the in-depth reply, and I think this makes a lot of sense. However, my issue cannot be entirely due to server update speed.

    The first image below is on low settings getting 60 FPS, and the second is just a minute later in the same spot on max settings getting 32 FPS. In both scenarios my computer is not being maxed out in any way. If I were hitting some sort of server bottleneck, the reduction in quality settings should have no effect.
    KhajitFurTrader wrote: »
    Ok, let's assume that in both situations the frequency of server updates remains constant, i.e. there is a fixed time interval at which the local network thread can synchronize with the client's main thread (of course, in reality there isn't). The main thread, or main loop, is the rendering engine: with every cycle, input/output (including network messaging) gets processed in subthreads, then everything gets synced, and after that exactly one frame is computed and passed on to the rendering queue of the GPU driver's API for processing. Rinse and repeat.

    In the case of low quality settings and 60 FPS, one cycle (including everything, e.g. driver overhead and rendering time) lasts 1/60 of a second, or approximately 16.7 ms. Likewise, high quality settings (which require at least four times the amount of graphical data to be processed, plus a lot more shaders and post-processing) yield 30 FPS, so one cycle lasts 1/30 of a second, or approx. 33.3 ms. This would indicate that in both cases the lower limit (floor) of the cycle time is set by client-server network synchronization (a fraction of 16.7 ms), and thus by network thread/main thread synchronization. If sync time increases, so does cycle time, and thus the rate of frames computed per second decreases -- and vice versa, down to the minimum amount of time needed for network messaging (which might be far higher than simple ICMP Echo Request round-trip times, a.k.a. "ping" latency).

    As I said, the client-server architecture of MMOs with its inherent need for synchronization on at least two different levels is a limiting factor, which is absent in single-player games.

    I feel like your response went more in depth on why the lower limit is client-server synchronization. I do get that, but I clearly demonstrated that even at max settings my resources are under-utilized and I'm not near that supposed sync limit. This is the part I do not understand.
  • Shyfty
    ✭✭✭
    Lysette wrote: »
    More cores do not necessarily give you more performance. It depends on how much of the work can only be done sequentially relative to how much can be done in parallel. How this affects the performance limit is given by Amdahl's law, which in the limiting case assumes that the parallel tasks take no time at all - so it is an upper bound on performance, and all real-world software performs worse than this, because the parallel tasks do take time as well.

    When going from low settings to high settings, another quality/post-processing step is added, which happens after the initial rendering pass. As a rule of thumb this takes about the same amount of time again, and that is what you see: the frame rate has roughly halved.

    I never claimed that it should be fully utilizing all my cores; however, it should be fully utilizing at least one core for everything that runs serially, and it is not even achieving that.
  • Lysette
    ✭✭✭✭✭
    ✭✭✭✭✭
    Shyfty wrote: »
    Lysette wrote: »
    More cores do not necessarily give you more performance. It depends on how much of the work can only be done sequentially relative to how much can be done in parallel. How this affects the performance limit is given by Amdahl's law, which in the limiting case assumes that the parallel tasks take no time at all - so it is an upper bound on performance, and all real-world software performs worse than this, because the parallel tasks do take time as well.

    When going from low settings to high settings, another quality/post-processing step is added, which happens after the initial rendering pass. As a rule of thumb this takes about the same amount of time again, and that is what you see: the frame rate has roughly halved.

    I never claimed that it should be fully utilizing all my cores; however, it should be fully utilizing at least one core for everything that runs serially, and it is not even achieving that.

    That is not really how it works - if a CPU is permanently running at max load, then it is overburdened. I will use a sports car analogy: you can drive it at top speed in 6th gear, but if you do that for long, it is overburdened. Normal cruising speed is much lower - around 150-180 mph - and it is similar with CPU load: around 40-70% is a good load.
  • Shyfty
    ✭✭✭
    Lysette wrote: »
    Shyfty wrote: »
    Lysette wrote: »
    More cores do not necessarily give you more performance. It depends on how much of the work can only be done sequentially relative to how much can be done in parallel. How this affects the performance limit is given by Amdahl's law, which in the limiting case assumes that the parallel tasks take no time at all - so it is an upper bound on performance, and all real-world software performs worse than this, because the parallel tasks do take time as well.

    When going from low settings to high settings, another quality/post-processing step is added, which happens after the initial rendering pass. As a rule of thumb this takes about the same amount of time again, and that is what you see: the frame rate has roughly halved.

    I never claimed that it should be fully utilizing all my cores; however, it should be fully utilizing at least one core for everything that runs serially, and it is not even achieving that.

    That is not really how it works - if a CPU is permanently running at max load, then it is overburdened. I will use a sports car analogy: you can drive it at top speed in 6th gear, but if you do that for long, it is overburdened. Normal cruising speed is much lower - around 150-180 mph - and it is similar with CPU load: around 40-70% is a good load.

    OK, so are you claiming that my CPU is not the bottleneck here? Because if so, we agree. My CPU and GPU are not the bottleneck, which is why I made this thread. However, if you're claiming that a CPU running its cores in the 40-70% range is fully utilized, I have to respectfully disagree. I have played tons of games and run CPU benchmarks for hours that max out the CPU.