Is it an FPS issue or a latency issue? You don't seem to indicate which. Your system is more than capable of handling settings MUCH higher than what you've got it set at.
Also, are you running any addons? And I've never heard of or even seen a 100 Mbit symmetrical connection (same upload speed as download) - symmetrical connections are pretty much unheard of in the consumer world. Are you on some sort of business-class connection?
dwaightb16_ESO wrote: »That is to be expected in a zerg. Lag or a lower framerate - a better connection won't help much against it, and even good GPUs or CPUs don't really matter. I guess your framerate will always drop when you fight ZvZ.
Yep, but people aren't getting 60+ FPS in big zergs.
There are people who don't have huge FPS drops in big zergs... I don't see why we should just accept this. This game does not seem at all optimized for large-scale PvP.
GossiTheDog wrote: »It's the game engine. The ZeniMax engine spawns 32 threads across your CPU cores. Unfortunately thread 0, which is needed to render graphics on screen, always runs on one core and does 95% of the work. As a result, when you're in PvP near hundreds of players, the game slows to a crawl.
You can test this yourself: open Task Manager, go to CPU, set it to one graph per CPU (right-click), then set "Always on top" from the menu and place it on your screen. Go to PvP and you will see almost all the usage on one core. Even in PvP near a big battle, if you look up at the sky (so no graphics are rendering), you will get 20 FPS.
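If you'd rather script the same check than stare at Task Manager, here's a minimal sketch (assuming Python 3 with the psutil package installed - not anything official or part of the game) that prints per-core usage once a second. Run it during a big fight and you should see exactly what's described above: one core pegged, the rest mostly idle.

```python
# Minimal per-core CPU monitor. Assumes Python 3 + the psutil package.
# Run it alongside the game: in a big PvP fight you should see one core
# pinned near 100% while the others stay comparatively idle.
import psutil

while True:
    # percpu=True returns one utilisation figure per logical core,
    # measured over the given interval (1 second here).
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print(" | ".join(f"core {i}: {pct:5.1f}%" for i, pct in enumerate(per_core)))
```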
I had a good look at the engine and basically it dates back 6 years and was designed for dual-core CPUs. People don't really use dual-core CPUs now, and the problem is it scales terribly. If you look at something like Battlefield 3 or 4, you will see that during those big battles all the CPU cores are used to distribute the load.
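To illustrate what "distributing load" buys you - a toy sketch in Python, not how ESO or Frostbite actually work - here is the same batch of per-player work done on one core versus spread across all cores with a process pool. On a quad-core or better machine, the second run finishes several times faster, which is the whole point of a multi-threaded engine.

```python
# Toy illustration of single-core vs multi-core load distribution.
# Not real engine code; it just shows why spreading per-entity work
# across cores scales so much better than piling it onto one thread.
import time
from multiprocessing import Pool, cpu_count

def simulate_player(seed: int) -> int:
    """Stand-in for per-player work (pathing, ability ticks, etc.)."""
    total = 0
    for i in range(200_000):
        total = (total + seed * i) % 1_000_003
    return total

if __name__ == "__main__":
    players = list(range(200))  # pretend there are 200 players in the fight

    start = time.perf_counter()
    serial = [simulate_player(p) for p in players]   # everything on one core
    print(f"1 core:  {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with Pool(cpu_count()) as pool:                   # work split across all cores
        parallel = pool.map(simulate_player, players)
    print(f"{cpu_count()} cores: {time.perf_counter() - start:.2f}s")

    assert serial == parallel  # same results, just computed in parallel
```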
There's some good news, although it's a long way off. The Xbox One and PS4 have AMD Jaguar processors, clocked at 1.75 GHz (Xbox One) and 1.6 GHz (PS4), with 8 cores (6 usable by games). That is obviously TERRIBLE for the current Elder Scrolls Online engine - for PvP it would be unworkable.
GossiTheDog wrote: »Audigy, Xbox One programmer here. When you do 3D graphics on XB1, you call DirectX 11 APIs, exactly the same as on a Windows PC - it's the same functions, and the code runs the same on PC and Xbox. I don't know where you got the idea that the Xbox has some kind of super CPU solution - it doesn't. The PS4 uses OpenGL-style API calls. The Xbox One and PS4 both have the same AMD Jaguar processors, which are low-power CPUs. Compared to Intel gaming CPUs they're absolutely terrible, hence why you see next-gen titles with shiny graphics but dumb-as-bricks AI - everything is being shifted to the GPU.
Microsoft have been painfully aware of the limitations of DirectX for some time and have taken steps to reduce the problems. Some time ago, Microsoft introduced DX10 in an effort to smooth over the performance pitfalls of its high CPU overhead. DX10 and DX11 both made efforts to reduce that overhead, but it just hasn't been enough. Right now, a high-end GPU can process more draw calls than the CPU can issue; in other words, the CPU is holding the GPU back.
To clarify, a Draw Call is one of the most important aspects of creating a 3D scene: the CPU issues a command telling the GPU what to draw, and must do so for each unique item on screen. Think of it this way: in the current generation of Xbox 360 and PlayStation 3 titles, there can be between 10,000 and 20,000 pieces of geometry in a single frame of animation. Consider that most games target between 30 and 60 FPS, and you can see how the numbers quickly start to add up.
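Rough back-of-the-envelope math with the figures quoted above (nothing here is measured from ESO or any real title):

```python
# Back-of-the-envelope draw call budget using the figures quoted above.
draw_calls_per_frame = 15_000   # midpoint of the 10,000-20,000 range
target_fps = 60

draw_calls_per_second = draw_calls_per_frame * target_fps
print(f"{draw_calls_per_second:,} draw calls issued by the CPU every second")
# -> 900,000 calls per second, each carrying API/driver overhead on the CPU,
#    which is why the CPU ends up holding the GPU back.
```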
One solution is to reduce the number of Draw Calls issued by the CPU, but this can lead to other issues, such as less complex scenery or, alternatively, leaving GPU performance on the table. There are legions of threads, blog posts and articles out there from games developers looking for ways to reduce the number of Draw Calls or optimize the way they are handled.
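For anyone wondering what "reducing the number of Draw Calls" looks like in practice, the usual trick is batching/instancing: everything that shares a mesh and material gets drawn in a single call. A toy sketch of the counting, with completely made-up scene contents:

```python
# Toy example of how batching collapses draw calls.
# The scene below is invented purely for illustration.
scene = (
    [("rock", "stone")] * 4_000 +
    [("tree", "bark")] * 3_000 +
    [("soldier", "armor")] * 200 +
    [("keep_wall", "stone")] * 50
)

naive_calls = len(scene)         # one draw call per object
batched_calls = len(set(scene))  # one instanced call per unique (mesh, material) pair

print(f"naive:   {naive_calls:,} draw calls per frame")
print(f"batched: {batched_calls} draw calls per frame")
```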
If you're a gamer, you'll likely be familiar with the fact that consoles do more with their hardware than PCs of a similar spec do. To put it another way, a PlayStation 4 with its 1.84 TFLOPS of power will produce better graphics than a PC with a similar spec. There are a variety of reasons for this, and one of them is the architecture of consoles. The PlayStation 4 was designed from the ground up to play games, but there is more to it than just that: it is a fixed platform, and a PC has far more processes running in the background due to its much larger operating system (OS). But the major reason is that a console allows developers to delve much deeper into the hardware and get far more performance out of it than they could if they were only running a high-level API.
High- and low-level APIs aren't anything new on consoles, and both have equally important roles in games development for these systems. A console will have a shelf life of between 5 and 10 years, and during that time we've all noticed that the first generation of games on a console looks far "worse" than the last generation. Take titles on the PlayStation 3, for example, a notoriously hard console to develop and program for. Due to the nature of the PlayStation 3's CPU (the Cell processor), developers needed to learn how to code for the SPEs (Synergistic Processor Elements), which you can think of as vector helper processors capable of SIMD (Single Instruction, Multiple Data). Developers aren't born with this knowledge, and so will often start out by creating titles with higher-level APIs.
For some developers who don't need the full performance of the machine (for example, indie developers with far simpler graphics and game engines), higher-level APIs will be more than enough. But if you want to get every last bit of detail into a frame, if you want the best possible texture quality, model quality and the like, then the only option is to go with a low-level API. This reduces CPU and memory overhead, gives the GPU better throughput, and lets you get the most out of the machine's unique feature set.
GossiTheDog wrote: »The bits you highlighted in bold are all about GPU performance. Playstation 4 does not use Mantle, and Xbox One does not use DirectX 12.
GossiTheDog wrote: »@Audigy, okay - most of what you just wrote is accurate. The problem, as you rightly say, is multicore usage. Anybody can recreate this: type /fps in ESO, go to an empty area, and look up at the sky so GPU usage is low. 100 FPS. Go to a city with lots of players, or to PvP with a big fort battle, and look at the sky. 20-30 FPS. GPU usage will be very low during those moments; one core will be at 100% while the rest sit at low usage.
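If you want numbers rather than eyeballing graphs, a small script like this (again assuming Python 3 + psutil; the 90%/40% thresholds are arbitrary) will flag the pattern described above - one core pinned while the average of the others stays low:

```python
# Flags the "one core pegged, the rest idle" pattern described above.
# Assumes Python 3 with the psutil package; the thresholds are arbitrary.
import psutil

for _ in range(10):  # take ten 1-second samples
    per_core = sorted(psutil.cpu_percent(interval=1.0, percpu=True), reverse=True)
    hottest, rest = per_core[0], per_core[1:]
    avg_rest = sum(rest) / len(rest) if rest else 0.0
    flag = "  <- single-core bottleneck" if hottest > 90 and avg_rest < 40 else ""
    print(f"hottest core: {hottest:5.1f}%  average of the others: {avg_rest:5.1f}%{flag}")
```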
So you know, AMD Jaguar performance in the Xbox One is terrible - about a third to a quarter of a high-end Intel i5 CPU. But you have 6 available cores, and as we've discussed, ESO.exe only uses two (and most of the work is limited to one core, e.g. graphics rendering). The good news for PC gamers is that you won't need to keep upgrading your processor to keep up with console ports. What will happen is that console ports will force PC developers to go fully multi-core.
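You can also look at this from the process side. The sketch below (assuming psutil again, and that the game process on your machine really is named eso.exe - adjust if not) lists how the process's accumulated CPU time is split across its threads; on an engine that funnels most of the work through one thread, a single entry will dwarf all the others.

```python
# Shows how unevenly a process's CPU time is spread across its threads.
# Assumes Python 3 + psutil, and that the game process is named "eso.exe"
# (change TARGET if your install differs). May need admin rights on Windows.
import psutil

TARGET = "eso.exe"

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == TARGET:
        threads = sorted(proc.threads(),
                         key=lambda t: t.user_time + t.system_time,
                         reverse=True)
        total = sum(t.user_time + t.system_time for t in threads) or 1.0
        for t in threads[:5]:  # the five busiest threads
            share = 100 * (t.user_time + t.system_time) / total
            print(f"thread {t.id}: {share:5.1f}% of the process's CPU time")
        break
else:
    print(f"{TARGET} is not running")
```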
PS: GPU in XB1 (and PS4) is pretty good in PC gaming terms right now.
I have no idea why they don't support multicore rendering in this game - what were they thinking?