
Suggestion: On Downloads, Crashes and Bugs

Ulfson ✭✭✭
I suggest that the reason for all the failed downloads, game crashes, and many bugs may be that the download mechanism does a poor job of detecting corrupted data transfers.

The transfer of information is never perfectly accurate, even in digital systems, because noise gets into the process; there will be errors. Since we first started to transfer data electronically, we have used error detection and correction to achieve exact transfers. This always involves detecting a bad block and retransmitting it, along with some other mathematical processes.
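
For illustration, here is a minimal sketch of that detect-and-retransmit loop in Python, with a hypothetical transmit() callable standing in for the actual network layer (nothing here comes from ZOS's code):

    import zlib

    def receive_block(transmit, max_attempts=5):
        """Receive one block via the hypothetical transmit() callable, which
        returns (payload, crc_sent_with_it); recompute the CRC32 locally and
        ask for the block again whenever the two values disagree."""
        for _ in range(max_attempts):
            payload, sent_crc = transmit()
            if zlib.crc32(payload) == sent_crc:      # error detection passed
                return payload
            # mismatch detected: loop again, i.e. request a retransmission
        raise IOError("block still corrupt after %d attempts" % max_attempts)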

As files grow in size, they become more likely to have errors slip through undetected, no matter what detection method is used. To increase detection probability, we have increased the complexity of detection from simple checksums to advanced digest calculations like MD5 and beyond. In the days when games were small, a few hundred megabytes, it was common to use MD5 signatures to verify files; files failed verification often and had to be downloaded again. With total game sizes over 30GB, the likelihood of a bad transfer is high, even if the data is sent as many smaller files. There is no single answer for the best error detection, because it also carries practical concerns about calculation time: advanced digest algorithms can take a very long time to complete, even on the most powerful desktop with top-of-the-line, GPU-augmented computation. Random transfer failures can have effects ranging from nothing noticeable to crashes and anything in between.
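
As a rough illustration of the kind of digest verification described above (this is not ZOS's patcher, just a generic Python sketch), a large file can be hashed in fixed-size chunks so that even multi-gigabyte patch files are checked without loading them into memory:

    import hashlib

    def file_digest(path, algorithm="sha256", chunk_size=1 << 20):
        """Hash a file in 1 MiB chunks and return the hexadecimal digest."""
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Example use: compare a downloaded file against a published digest.
    # The file name and digest value below are made up for illustration.
    # if file_digest("data0000.dat") != "3a7bd3e2360a3d80...":
    #     print("digest mismatch - re-download this file")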

ZOS uses TCP/IP and UDP to transfer data; that is evident from the ports they require to be forwarded in network settings and routers. TCP/IP does not perform error checking on the data, and UDP only uses checksums. A protocol, FTP, was developed specifically for file transfers, and it does a great job. Checksums were sufficient when small blocks of data were being sent in the 1970s and early 1980s; today they are trivial tests and should not be trusted. If ZOS were using even a simple MD5, we should not have such download problems. The ever-present download errors indicate ZOS does not, and that leads to the strong possibility that download errors are responsible for poor game performance of all kinds. For files that total over 40GB, at least a 256-bit digest such as SHA-256 should be used to verify file transfers.
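
To make the suggestion concrete, a purely hypothetical manifest check could map each depot file to an expected SHA-256 value and flag mismatches for re-download, reusing the file_digest helper sketched above. The manifest format, paths, and digest values here are invented; this is not how the ZOS launcher actually works:

    import os

    # Hypothetical manifest: relative path -> expected SHA-256 hex digest.
    # Both the paths and the digest values are placeholders.
    MANIFEST = {
        "depot/data0000.dat": "d2c1f0a9...",
        "depot/data0001.dat": "8f14e45f...",
    }

    def files_to_redownload(install_dir):
        """Return the manifest entries whose on-disk digest is missing or wrong."""
        bad = []
        for rel_path, expected in MANIFEST.items():
            full_path = os.path.join(install_dir, rel_path)
            if not os.path.isfile(full_path) or file_digest(full_path) != expected:
                bad.append(rel_path)
        return bad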

For the third time this year, one of my computers' game installations was destroyed by a patch that processed and then resulted in a manifest error. The only resolution I have found is to uninstall the game, delete the leftover Zenimax Online folder in Program Files (x86), which even after the uninstall is still sitting at over 40GB, and then reinstall the game as if it were brand new. This can still take many tries until I get lucky; I have tried 4 times in the past few days and guess I have downloaded the game 20 times this last year. From my perspective, the download process is a hateful slot machine with no payout. See you in game, if I can ever get a runnable download.

aka, Alvald
  • amaniacub17_ESO
    Ulfson, I have to disagree here. The standard TCP/IP protocol DOES have built-in checksums. The checksums are computed per packet, so transfer size doesn't matter.

    See here for details: http://www.tcpipguide.com/free/t_TCPChecksumCalculationandtheTCPPseudoHeader.htm

    It's more likely that your files are being corrupted by a problem with the ZOS webserver, by a bad disk or bad memory in your machine, or possibly by a failure in the ZOS patch download creation process.

    FTP doesn't use any special checksum process by itself, although there are versions out there that can checksum and compare the file against the one on the server.
    See here for FTP details: https://security.stackexchange.com/questions/80694/does-ftp-provide-any-type-of-integrity
  • Ulfson ✭✭✭
    No, you are mistaken. The only check is on the address. While Stack Exchange is a good source of information, it is NOT a valid reference; please go read the TCP specification. There is NO verification of the data payload in TCP. TCP/IP was created primarily as a message delivery protocol: it was made to be sure that Washington received any message of a nuclear attack from any military base, even if a large number of relay sites were dead. The notion was that any message, even a garbled one, would be better than nothing and would likely be discernible.
  • amaniacub17_ESO
    From the IETF TCP specification, RFC 793 (1981):

    " The TCP must recover from data that is damaged, lost, duplicated, or
    delivered out of order by the internet communication system. This
    is achieved by assigning a sequence number to each octet
    transmitted, and requiring a positive acknowledgment (ACK) from the
    receiving TCP. If the ACK is not received within a timeout
    interval, the data is retransmitted. At the receiver, the sequence
    numbers are used to correctly order segments that may be received
    out of order and to eliminate duplicates. Damage is handled by
    adding a checksum to each segment transmitted, checking it at the
    receiver, and discarding damaged segments.

    As long as the TCPs continue to function properly and the internet
    system does not become completely partitioned, no transmission
    errors will affect the correct delivery of data. TCP recovers from
    internet communication system errors."

    I take that to mean checksums ARE used. That's why checksum routines are implemented either on the card/chip or in software.
  • Ulfson ✭✭✭
    Please, amaniacub17_ESO, you misunderstand the TCP/IP protocol. The only checksum present is in the message header, and it covers the header only; it does not include the data payload.

    Read further down in the spec, examine the message header format, and ask the question: how do I calculate the checksum that goes into the header of my TCP/IP message?

    Beyond all that, my point is that if you want to send data with any reliability, you need to use modern digest formats, not antiquated checksums; checksums will fail often for files in the MiB range. Also, by 1981 the simple checksum had already been replaced by the CRC, which was a little better but still horrible for MiB-size files.
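
For reference, the checksum that RFC 793 specifies is the 16-bit ones' complement sum of RFC 1071, computed over a pseudo-header, the TCP header, and the data, with the checksum field zeroed during the calculation. A simplified, illustrative Python sketch of that calculation (not a full TCP implementation):

    import struct

    def internet_checksum(data):
        """RFC 1071 style 16-bit ones' complement sum of 16-bit words."""
        if len(data) % 2:
            data += b"\x00"                              # pad to an even length
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total > 0xFFFF:
            total = (total & 0xFFFF) + (total >> 16)     # fold carries back in
        return ~total & 0xFFFF

    def tcp_checksum(src_ip, dst_ip, tcp_header, payload):
        """Checksum over pseudo-header + TCP header + payload.
        src_ip and dst_ip are 4-byte addresses; tcp_header must already have
        its checksum field set to zero."""
        tcp_length = len(tcp_header) + len(payload)
        pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, tcp_length)  # 6 = TCP
        return internet_checksum(pseudo + tcp_header + payload)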