Proposal: Client Software Checks

Pinja
This thread is mainly meant to develop and explain a concept for client software checks, which would normally clash with add-ons and mods but, with the solutions I propose below, should work alongside them.

What are client software checks?
They are stages in the launch and load cycle of the game that check the integrity of the game's software.

Why do we need client software checks?
To prevent hackers from altering the game to gain an advantage that would violate the ToS.
To reduce server-side checks, improving performance.

What are some good examples of client software checks?
Xbox One and PS4, from what I understand, both have rigorous software check stages that prevent unauthorized alterations to the software. Even game developers need to pass through a software approval process before they can change their own game. After the Xbox 360 generation, the Xbox One eliminated all unauthorized mods from Xbox.

How would I implement client software checks in ESO? This is where things get dicey, but a community of modders can surely help.
  • First off, the game needs to recognize when it's been modded. To do this I'd have the game launcher do a self-verification and a verification of the game before launch. What the verification does is check all the file names and sizes of the game in its respective folder. If one of the names or file sizes doesn't match, that means it's been edited. This check process should only take a few extra seconds at start. The launcher verifies the current start information with a server, which doesn't necessarily need to be the megaserver. (A rough sketch of this check follows after this list.)
  • Second off, you need a monitored runtime environment to detect or prevent changes while the game is running or being verified, so that files are not swapped after the verification step. The launcher can also serve as the monitor of the game files once it runs the verification. If any unexpected changes are made during the game's execution, it closes out. Typically Windows wouldn't allow you to alter an executing file, and a game would crash if you changed a loaded resource. But the game is big enough that you'd have unused resource files vulnerable to changes, so step two may be necessary. (See problems...)
  • Third, you'd need exceptions for valid add-ons and mods. To do this, have mod authors register their published mods and add-ons; have the add-on folder be checked item by item and itself be exempt from the initial size check, due to multiple mods or the lack of mods; and account for older published variations of add-ons. (See problems...) After verifying the contents of the folder, calculate the total verified size of the game's files for the monitor to watch, accounting for and changing the size range for the pesky fluctuating player data. Or make an exception for that folder and monitor the other folders individually (this may take more processing).
  • Finally, when it's all said and done, you have a game that can't be hacked. Maybe...
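To make the first step concrete, here's a rough sketch in Python of what such a launcher-side name-and-size check could look like. This is only my illustration of the concept; the paths, manifest format, and function names are hypothetical, not anything ZOS actually ships.

```python
import json
import os

def build_manifest(game_dir):
    """Walk the game folder and record the relative name and size of every file."""
    manifest = {}
    for root, _, files in os.walk(game_dir):
        for name in files:
            path = os.path.join(root, name)
            manifest[os.path.relpath(path, game_dir)] = os.path.getsize(path)
    return manifest

def verify_manifest(game_dir, trusted_manifest):
    """Compare the current folder contents against a trusted manifest and
    return a list of files whose name or size doesn't match."""
    current = build_manifest(game_dir)
    problems = []
    for rel, size in trusted_manifest.items():
        if rel not in current:
            problems.append(f"missing: {rel}")
        elif current[rel] != size:
            problems.append(f"size mismatch: {rel}")
    for rel in current:
        if rel not in trusted_manifest:
            problems.append(f"unexpected file: {rel}")
    return problems

if __name__ == "__main__":
    # Hypothetical paths; a real launcher would fetch the trusted manifest
    # from a verification server rather than a local file.
    with open("trusted_manifest.json") as f:
        trusted = json.load(f)
    issues = verify_manifest(r"C:\Games\ESO", trusted)
    print("OK" if not issues else "\n".join(issues))
```

Note that a size-only check is cheap but can be fooled by an edit that keeps the file the same size, which is where the hashing discussed later in the thread would come in.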

Potential Problems:
  1. There is, for whatever reason, a fluctuating folder in the game that carries character data. This folder would have to be an exception to size checks, but I think the server has to validate the information in that folder anyway. On console this "save data" can be deleted without consequence and is replaced the next time you start the game. I'm not familiar with its use.
  2. Due to the forgotten, poorly documented nature of some community modding, it may be necessary for players to update all their add-ons to the registered published version when the patch introducing client software checks is released.

Anyone who's familiar with modding and computers, feel free to contribute and look for loopholes in the solution.


  • Smaxx
    Anti-cheat measures, preventing hacks, etc. is something you can never perfect. It's always a race between developers (of the game and the tools/hacks).

    Offloading calculations and other things to the client to reduce server load is something some MMOs actually do (e.g. Black Desert Online). But doing so actually increases the potential attack surface for hacks and also requires more strict and powerful anti-cheat tools (e.g. kernel level drivers; hello Valorant controversy!).

    A very great core concept I read 15 or 20 years ago, when first dabbling in network/multiplayer game development: "The client is in the hands of the enemy." That's still true. Don't trust anything the client ever tells you. Don't tell the client anything they don't have to know.

    If you do this in a strict way, people can modify and hack the client all they want. They won't be able to do things like modifying damage or health numbers (people who stumbled over hackers in the Dark Souls series will know what I mean). Also they won't be able to magically gain information that's not transmitted, e.g. seeing cloaked/invisible characters.

    Besides all this, the launcher already verifies the game client is up to date and complete, but it doesn't do that every time you launch the game or while playing. The game is more than 60 GB in size, which takes some time to verify and monitor, even on modern SSDs.

    Also, you don't really need certificates or anything like that for addons. The game's UI code, based on Lua, already knows two different layers of code: trusted and untrusted. By default, most code runs in the untrusted mode, including large parts of the game's own original UI code. Untrusted code won't be able to do certain things, such as sending chat messages, triggering character actions, etc., to prevent people from writing bots. It's also not possible for untrusted code to access anything in the store. This way it's impossible to write an addon buying gifts from other players or similar things.
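    (To illustrate the general idea of a trusted/untrusted split, here is a toy sketch in Python. It is not how ESO's Lua sandbox is actually implemented, and Python's exec is not a real security boundary; it only shows the whitelist concept: untrusted code simply never gets handed the dangerous functions.)

    ```python
    def send_chat_message(text):              # a "protected" action
        print(f"[chat] {text}")

    def get_unit_health(unit):                # a harmless read-only query
        return 100

    UNTRUSTED_API = {"get_unit_health": get_unit_health}   # no chat access here

    def run_untrusted(source):
        # The untrusted script only sees the whitelisted names; builtins are
        # stripped so it cannot import modules or reach anything else.
        env = {"__builtins__": {}}
        env.update(UNTRUSTED_API)
        exec(source, env)

    run_untrusted("hp = get_unit_health('player')")   # allowed
    try:
        run_untrusted("send_chat_message('spam')")    # blocked: name not exposed
    except NameError as err:
        print("blocked:", err)
    ```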

    TL;DR: Waste of time, because there'll always be some way or another to modify/change a game, because even anti-anti-cheat tools can run at elevated levels and manipulate the code meant to protect the game, especially if there are bugs somewhere.
  • Pinja
    Smaxx wrote: »
    Besides all this, the launcher already verifies the game client is up to date and complete, but it doesn't do that every time you launch the game or while playing. The game is more than 60 GB in size, which takes some time to verify and monitor, even on modern SSDs.

    First off thank you for contributing information.
    In regards to the above, I'd like to highlight that the concept is to track core files' sizes, not rescan and verify every inch of a file as if looking for a corrupted piece of code. This takes significantly less processing and can be done with fewer resources than you right-clicking Properties and seeing the file size yourself. (The sketch below contrasts the two.)
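    (A minimal sketch of the difference, in Python; the function names are just placeholders. The size check asks the filesystem for metadata and reads nothing, while the deep check has to read every byte, which is where the 60 GB cost comes from.)

    ```python
    import hashlib
    import os

    def quick_size_check(path, expected_size):
        """Cheap: ask the filesystem for the size; no file contents are read."""
        return os.path.getsize(path) == expected_size

    def deep_hash_check(path, expected_sha256):
        """Expensive: read the whole file and hash its contents."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected_sha256
    ```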

    This did, however, make me theorize a loophole, where you could write a driver-like file, link it to modded resources, have it falsely proclaim an erroneous file size, and put it in place of a core file.
    However, I do believe there is a Windows process that lets you track where a program is loading its resources from, and checking this list would fix this theoretical loophole, which may not even exist because of the way file types are used in calling files.
    Smaxx wrote: »
    Also, you don't really need certificates or anything like that for addons. The game's UI code, based on Lua, already knows two different layers of code: trusted and untrusted. By default, most code runs in the untrusted mode, including large parts of the game's own original UI code. Untrusted code won't be able to do certain things, such as sending chat messages, triggering character actions, etc., to prevent people from writing bots.

    This goes to show you the challenge at hand, as bots still exist in the game, probably off modded "keyboard" drivers running the controls. Now, I'm not saying it should go to the scope of monitoring a whole computer system like it's proctored gameplay, but rising to the challenge with solutions is possible.
    Smaxx wrote: »
    Anti-cheat measures, preventing hacks, etc. is something you can never perfect. It's always a race between developers (of the game and the tools/hacks).

    TL;DR: Waste of time, because there'll always be some way or another to modify/change a game, because even anti-anti-cheat tools can run at elevated levels and manipulate the code meant to protect the game, especially if there are bugs somewhere.

    It's a very similar circumstance in cybersecurity, though people rise to the challenge out of demand and necessity. At one point after Snowden they were saying nothing's secure as long as it's connected to the internet. This may or may not be true. What matters is that you narrow hackers down to the rarity of getting hit by lightning; then it's very easy to moderate them.
  • idk
    Smaxx wrote: »
    Anti-cheat measures, preventing hacks, etc. is something you can never perfect. It's always a race between developers (of the game and the tools/hacks).

    This is true. And a wise developer would never discuss changes they make to prevent hacks and cheats. Further, checking the integrity of the game at launch does not detect third party software that alters game performance after launch. One of the best-known cheating scandals in the game involved third party software that overrode the client after everything had launched.

    Edit: that is also why we have server-side checks. The game launched with a client that was trusted to check important information, and that approach is more open to exploits, so OP's suggestion would likely have the reverse effect without any real benefit.
    Edited by idk on June 29, 2020 2:12AM
  • nemvar
    And a wise developer would never discuss changes they make to prevent hacks and cheats.

    A wise developer does not hide his problems; that is what a lazy person does.

    Security by obscurity is one of the worst things you can do. A truly good solution is provably safe and anything else is but a hack around the issue.
    Due to the forgotten, poorly documented nature of some community modding, it may be necessary for players to update all their add-ons to the registered published version when the patch introducing client software checks is released.

    What are you talking about? Registered published versions of add-ons? How exactly do you expect that to work? How are people supposed to test add-ons before they publish them? And a publishing process would be a giant headache for the publisher, in this case ZOS. Doing so not only requires quite a bit of work but also puts responsibility for safety on them.

    This won't happen and also WILL never happen.

    And it's not even like add-ons, in their current base form, can be used to hack anyway. They can only interact with the game through a rather limited API. You suggest a draconian scrutiny test in an area that most likely won't even get used by malicious actors.
    Edited by nemvar on June 29, 2020 4:46PM
  • Pinja
    idk wrote: »
    Smaxx wrote: »
    Anti-cheat measures, preventing hacks, etc. is something you can never perfect. It's always a race between developers (of the game and the tools/hacks).

    This is true. And a wise developer would never discuss changes they make to prevent hacks and cheats. Further, checking the integrity of the game at launch does not detect third party software that alters game performance after launch. One of the best-known cheating scandals in the game involved third party software that overrode the client after everything had launched.

    Edit: that is also why we have server-side checks. The game launched with a client that was trusted to check important information, and that approach is more open to exploits, so OP's suggestion would likely have the reverse effect without any real benefit.

    Yes, I thought about developing solutions in a closed environment, but you can't beat foolproof, which is the aim. Even online voting systems thought to be secure were proven to have holes in them.

    The second part of this developing proposal covers your second concern. However, your third concern points out a potentially fatal bypass, which I thank you for sharing.

    What if a program hijacks server keys and replaces the client?

    Good question, but there is a solution. If the hijacking program firewalls the original client, have the client and launcher close out and have the launcher attempt to send a signal. Then have the server do periodic checks looking for the launcher; if it can't find it, it invalidates the server key and disconnects.

    What prevents a program from impersonating the launcher is that, after verification, an encryption key is sent that tells the launcher how to encode the verification data.

    Now how do you prevent a cloned launcher from taking the verification information and sending false reports?
    Well, I do have to say this is getting complicated... But you'd need the client to download and execute a small file and send back a unique code every time you start the launcher. This file is the verifier for the launcher: it takes the name, type, and composition of every file in the launcher, applies an internal encryption, and sends back a unique positive or negative. (A rough sketch of this challenge-response idea follows below.)
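    (Here is a rough sketch of that challenge-response idea as I picture it, in Python; the names and paths are made up. The server sends a fresh nonce each launch, the downloaded verifier hashes the launcher's files together with that nonce, and the server computes the same value from its trusted copy and compares, so an old answer can't just be replayed.)

    ```python
    import hashlib
    import hmac
    import os
    import secrets

    def fingerprint_launcher(launcher_dir, nonce):
        """Hash every launcher file's name and contents together with a one-time nonce."""
        digest = hashlib.sha256(nonce)
        for root, _, files in sorted(os.walk(launcher_dir)):
            for name in sorted(files):
                digest.update(name.encode())
                with open(os.path.join(root, name), "rb") as f:
                    digest.update(f.read())
        return digest.hexdigest()

    # --- server side (simplified): issue a fresh challenge and compute the answer
    nonce = secrets.token_bytes(16)
    expected = fingerprint_launcher("trusted_launcher_copy", nonce)

    # --- client side: the downloaded verifier answers the challenge
    response = fingerprint_launcher(r"C:\Games\ESO\Launcher", nonce)

    # --- server side: constant-time comparison before issuing a session key
    if hmac.compare_digest(response, expected):
        print("launcher verified, issue server key")
    else:
        print("verification failed, refuse login")
    ```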

    If the false client doesn't firewall the original, then you'd have two clients running with the same server key and can easily invalidate the session.

    To close: while one aspect of the proposal is integrity and security, it's really about improving performance.
    Edited by Pinja on June 29, 2020 6:29PM
  • Pinja
    nemvar wrote: »
    And a wise developer would never discuss changes they make to prevent hacks and cheats.

    A wise developer does not hide his problems; that is what a lazy person does.

    Security by obscurity is one of the worst things you can do. A truly good solution is provably safe and anything else is but a hack around the issue.
    Due to the forgotten, poorly documented nature of some community modding, it may be necessary for players to update all their add-ons to the registered published version when the patch introducing client software checks is released.

    What are you talking about? Registered published versions of add-ons? How exactly do you expect that to work? How are people supposed to test add-ons before they publish them? And a publishing process would be a giant headache for the publisher, in this case ZOS. Doing so not only requires quite a bit of work but also puts responsibility for safety on them.

    This won't happen and also WILL never happen.

    And it's not even like add-ons, in their current base form, can be used to hack anyway. They can only interact with the game through a rather limited API. You suggest a draconian scrutiny test in an area that most likely won't even get used by malicious actors.

    This actually brings up a part of the proposal that should have been in the original draft but that I forgot to mention: unique individual bypass development codes, valid for one user and one device. As easily as you sign up on the forums, you could sign up for a code. These users could be easily spotted if they engaged in malicious modding, as it would be a very finite community. Just as forum mods do a good job cleaning the forums, so could a GM on a support ticket. Take away the permissions, take away the bad actors, or so this system intends.

    From what I gather, the API is just a filter for a special set of developer-defined functions. Say someone were to make and modify their own API; they would have full capability over the functionality of the client.

    Going back to your question on publishing add-ons, it could be an automated process to make sure files conform to the standard of the original API.

    I mean, client-side integrity is a fruitful challenge that couldn't be as difficult as the space race. It could greatly improve performance and open up new options in gaming.
    Edited by Pinja on June 29, 2020 6:27PM
  • nemvar
    You can't change the functionality of the API without changing the client, so if you were to do that, it could be detected through means other than certification of add-ons. The API is less a filter than an intersection between the client and add-ons.
    By default, only things ZOS explicitly allows can be interacted with.

    A proper implementation of certification would boil down to one of the following (a rough sketch of the first option follows after this list):
    -Server generates two RSA keys.
    -Server sends one to the client
    -Client hashes its add-ons and encrypts each hash with the received key.
    -Client sends a dictionary with the ID of each add-on and the corresponding encrypted hash value.
    -Server decrypts the received hashes and compares them with the saved hashes in the database (obtained through verification).
    -Server allows/denies access depending on success.
    -Bamn, 100% safe system unless someone invents quantum computers just so he can cheat in ESO.
    OR
    -Allow client access to the database of certified add-ons.
    -Do the first option and pray the client actually bothers with it. Reminder, the client can never be trusted to perform anything.
    -Bamn, you now have a system that only relies on the determination of the attacker.
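    (For concreteness, a rough sketch of the first option in Python, using the third-party cryptography package; the add-on layout and names are made up, and, as said, it still depends on the client actually running this code honestly.)

    ```python
    import hashlib
    from pathlib import Path
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # --- server: generate the key pair and keep the private half ---
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()      # this half goes to the client

    # --- client: hash each add-on folder and encrypt the hashes ---
    def hash_addon(addon_dir):
        digest = hashlib.sha256()
        for path in sorted(Path(addon_dir).rglob("*")):
            if path.is_file():
                digest.update(path.name.encode())
                digest.update(path.read_bytes())
        return digest.digest()

    def build_report(addon_dirs):
        return {Path(d).name: public_key.encrypt(hash_addon(d), OAEP)
                for d in addon_dirs}

    # --- server: decrypt and compare against the certified-hash database ---
    def check_report(report, certified_hashes):
        for addon_id, blob in report.items():
            received = private_key.decrypt(blob, OAEP)
            if certified_hashes.get(addon_id) != received:
                return False   # unknown or modified add-on -> deny access
        return True
    ```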

    And this system I described is extremely inflexible. Since it forces the add-on creator to certify every! single! release! Something that might sound ok, until you consider that a lot of these things are hobby projects, open source and most importantly *free fricking software*.

    If creators were allowed to change their creations without the need to re-certify, then there would be no way to prevent certificates from being misused as there is no way to uniquely identify the code anymore. Nothing but obscurity would stop people from just forging certificates if that were allowed.

    I'm also not really sure about the "improve performance" part, as any sliver of processing power used for security is one that can't be used for, well, actual game processing. That's not to say that the effort is in vain, but it is always something that must be kept in the back of one's head.

    And lastly:

    Any add-on with the capability to be distributed can only use the standard API, and as such, any add-on that only relies on the API is secure through transitivity. Well, assuming the API is secure.
    As such, any centralized automated procedure can only tell you one thing: "It can be interpreted without error." You could just place this check in the client itself, as it doesn't help anyone but the end user.

    So basically: If you can ensure the integrity of the client, there is no need to ensure integrity of add-ons.
    Edited by nemvar on June 29, 2020 8:14PM
  • Pinja
    nemvar wrote: »
    You can't change the functionality of the API without changing the client, so if you were to do that, it could be detected through means other than certification of add-ons. The API is less a filter than an intersection between the client and add-ons.
    By default, only things ZOS explicitly allows can be interacted with.

    A proper implementation of certification would boil down to one of the following:
    -Server generates two RSA keys.
    -Server sends one to the client
    -Client hashes its add-ons and encrypts each hash with the received key.
    -Client sends a dictionary with the ID of each add-on and the corresponding encrypted hash value.
    -Server decrypts the received hashes and compares them with the saved hashes in the database (obtained through verification).
    -Server allows/denies access depending on success.
    -Bamn, 100% safe system unless someone invents quantum computers just so he can cheat in ESO.
    OR
    -Allow client access to the database of certified add-ons.
    -Do the first option and pray the client actually bothers with it. Reminder, the client can never be trusted to perform anything.
    -Bamn, you now have a system that only relies on the determination of the attacker.

    And this system I described is extremely inflexible. Since it forces the add-on creator to certify every! single! release! Something that might sound ok, until you consider that a lot of these things are hobby projects, open source and most importantly *free fricking software*.

    If creators were allowed to change their creations without the need to re-certify, then there would be no way to prevent certificates from being misused as there is no way to uniquely identify the code anymore. Nothing but obscurity would stop people from just forging certificates if that were allowed.

    I'm also not really sure about the "improve performance" part, as any sliver of processing power used for security is one that can't be used for, well, actual game processing. That's not to say that the effort is in vain, but it is always something that must be kept in the back of one's head.

    And lastly:

    Any add-on with the capability to be distributed can only use the standard API, and as such, any add-on that only relies on the API is secure through transitivity. Well, assuming the API is secure.
    As such, any centralized automated procedure can only tell you one thing: "It can be interpreted without error." You could just place this check in the client itself, as it doesn't help anyone but the end user.

    So basically: If you can ensure the integrity of the client, there is no need to ensure integrity of add-ons.

    All very well laid out points. You're probably right: with a secure API, checking the add-ons is a redundant endeavor, to an extent. If, however, certain server processes are brought client-side, the 'goto' and similar call functions in the API would either have to be duplicated, redesigned, and prohibited from calling certain game resources, or go through a process that checks what they call ('goto' is not the name of a literal function in the API, but it's the concept).
    The first option is more convenient for all parties, which would eliminate, as you say, the whole need to register.

    Now we just need to secure the client. Because, as meteors raining down on your head and Apple's original iPhone can teach you, jailbroken is broken, and it can happen as easily as downloading a third-party add-on.

    The whole processing power thing gets complicated. But I assure you the processes the server goes through in managing the data of several hundred players, their actions, and their interactions are probably worse. Either way you have the processing strain somewhere. The security server, as cybersecurity firms and web certificates show, doesn't have to be the same server as the game.
  • Dormiglione
    One of the loopholes is that I can create a hard or soft link to any kind of content in place of the real app or modification, and then replace it as I wish after it has passed the tests.

    Another loophole: I can create a modification which passes the tests, but then does completely different things ... much like the VW Diesel Scandal :)

    Yet another one: I can have a legit modification which accesses remote scripts by any means the modification provides. That was recently prohibited by Apple and Google Play Apps.

    And last but not least, the verification process is very messy and lengthy. The only true alternative is to provide a reduced command set for, say, Lua, which at most can annoy the player.
  • Pinja
    One of the loopholes is that I can create a hard or soft link to any kind of content in place of the real app or modification, and then replace it as I wish after it has passed the tests.

    Another loophole: I can create a modification which passes the tests, but then does completely different things ... much like the VW Diesel Scandal :)

    Yet another one: I can have a legit modification which accesses remote scripts by any means the modification provides. That was recently prohibited by Apple and Google Play Apps.

    And last but not least, the verification process is very messy and lengthy. The only true alternative is to provide a reduced command set for, say, Lua, which at most can annoy the player.

    Thanks for giving me more stuff to evaluate.

    I think we covered the hard and soft link with the encrypted server key. In order to soft-link you'd need to run two clients, and you can't be logged in twice. That raises the question: can you log in on a different account on a fake client with the same server key? I'm going to say no, because they should be unique keys, but it is something to program for.

    How do you fake a test result? I think we covered that as well in comment #6.
    Pretty much, a unique, expiring, self-executing file downloads itself, checks the launcher, and sends back a unique instanced key.

    We also covered the whole modification theory in some detail, in terms of what the API does and doesn't allow. For example, a mod that's a platform for mods may not exist without special permissions.

    Your last suggestion sounds like the API, but as many have protested, the original registry design is messy.
  • Pinja
    Alright, so there was another thread on the forums that brought up a point about ways to manipulate memory via a process called code injection.
    While the rest of this thread so far has been about securing the client directory and stored files, this topic of changing a running script's values from outside the program must be addressed.

    I already addressed it in a comment on that thread:
    Pinja wrote: »
    Elsonso wrote: »
    daemonios wrote: »
    Elsonso wrote: »
    Pinja wrote: »
    Really, OP is just asking if there's any way to make a trusted client so that the game can start running smoothly, and there totally is. People scream impossible here and there, but there's a way to secure the client on standard PCs and Macs. Pretty much you'd need to buy or make a jailbroken computer in order to overcome my proposals. And I'll find a way to patch that too.

    Nothing on the client can be trusted absolutely. This is especially true of PC clients, since the bar is much lower. Only a remote server can tell if a client has been compromised.

    I think there are ways to achieve this. Windows marketplace programs are sandboxed and run as protected processes. I think one of the perks of this way of doing things is that it prevents other programs from messing with the game's process and memory, thus preventing cheat engine-like manipulation of game values.

    I really don't want to get into details, so this is the last of it from me. ZOS cannot secure ESO against a determined local user. They have unlimited access to the hardware and software. All ZOS can do is raise the bar to the point that it is very hard to bypass the anti-cheat security, then hope that the people who do break it don't make it trivial to bypass by some process that anyone can follow.

    Aw man, you're leaving when things are getting good. Look at all the possibilities!
    I'm looking at this Windows protected process; it looks very appealing as grounds for a new concept that would ask Apple and Microsoft to create a new shell for gaming. They're constantly looking for ways to make their systems better. With the rise of virtual gaming come contributions and challenges for all in the field. Just as UPnP was developed, so can this.
    If some big game developer were to reach out and make the suggestion, I'm sure they'd listen.

    For now, though, let's look at what we can do as a third party:
    Pevey wrote: »
    Pinja wrote: »
    Really, OP is just asking if there's any way to make a trusted client so that the game can start running smoothly, and there totally is. People scream impossible here and there, but there's a way to secure the client on standard PCs and Macs. Pretty much you'd need to buy or make a jailbroken computer in order to overcome my proposals. And I'll find a way to patch that too.

    Exactly what I am proposing, thank you for trying to understand.

    Every computer is or should be a jailbroken computer. If you are proposing something else, you are proposing malware. I’m sure ZOS has looked at this and decided the number of players (like me) who refuse to install a rootkit on their PC on philosophical grounds for any reason, including their favorite game, is greater than the number of people turned off by the cheating. Most players don’t even PVP anyway.

    I'm not looking to develop a rootkit, if you check out my thread (though that may be a good idea to help guard memory-stored variables); in fact my proposal would work more like antivirus or other anti-cheats and monitor the system, not hide from it. This monitoring should be mainly a local process, with little data sent back to the server. I may have to develop the process a bit more to be effective and respect privacy, but who isn't surrendering that to free services nowadays anyway... I think the only time it should send data is when it finds tampering or a clashing program.
    kringled_1 wrote: »
    Pinja wrote: »
    Really, OP is just asking if there's any way to make a trusted client so that the game can start running smoothly, and there totally is. People scream impossible here and there, but there's a way to secure the client on standard PCs and Macs. Pretty much you'd need to buy or make a jailbroken computer in order to overcome my proposals. And I'll find a way to patch that too.

    As I have stated several times over the course of this thread, I believe cheating would be easier to detect in the absence of all files not sourced from the developer.

    I see the position you are taking, but I don't believe it is based on an accurate assumption.
    I don't think actual add-ons trigger any confusion or problems with detection of cheat software, except in forum discussions like this one.
    Pinja's point is related, but I think also misses key issues.
    I don't believe that cheats have to be installed in the game directory, and as long as users have the ability to install and run other software with administrator privileges, you cannot guarantee that the client is completely trusted. If you want console-level security, you need console-level control - i.e. users cannot install untrusted software (anywhere on the machine), users cannot run software with administrator privileges outside of well-defined system utilities, etc. This is not achievable within the PC gaming environment. Pinja's discussion of jailbreaking points to a misunderstanding; PCs and Macs (as opposed to iOS devices) are in essence jailbroken by default - users can install and execute code from sources of their own choosing.

    Very good discussion. You bring up a great point about code injection that I'll have to address in my own thread. While you don't really want to control what is installed on a system, programs can control what is run on a system. Would this mean you'd need to know the name of a hacker's program? Not necessarily. Anti-cheats already work with a blacklist, but those are easy to work around. What you'd need to do is make a program aware of when its data is being tampered with. Totally possible, but I'll do more research before I get back to you.

    Does this mean I come without an immediate solution? No, not at all.
    There's a program called Respondus LockDown Browser that I use for school. It's anti-cheat software that monitors and stops applications trying to run through the network drive, and it prevents users from using other applications. It even warned me about a Windows update, saying it could interrupt me while testing. I don't believe it goes as far as stopping all background tasks. For my solution, though, that is a development option worth testing, followed by the development of a large whitelist. (A rough process-monitoring sketch follows below.)
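    (A rough sketch of that kind of process monitor, in Python with the psutil package; the whitelist here is made up and far too short, and a real anti-cheat would do much more than compare process names.)

    ```python
    import time
    import psutil

    # Hypothetical whitelist of processes tolerated while the game runs.
    WHITELIST = {"eso64.exe", "launcher.exe", "explorer.exe", "svchost.exe"}

    def scan_once():
        """Return names of running processes that aren't on the whitelist."""
        flagged = set()
        for proc in psutil.process_iter(["name"]):
            name = (proc.info["name"] or "").lower()
            if name and name not in WHITELIST:
                flagged.add(name)
        return flagged

    while True:
        suspects = scan_once()
        if suspects:
            # A real client would only report to the server when something
            # actually tampers with or clashes with the game.
            print("non-whitelisted processes:", sorted(suspects))
        time.sleep(30)
    ```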

    Another solution is to make the game's stored data variables unique to each game license and client (and/or use that rootkit idea to conceal them <_< ). Pretty much, you'd encode the variables for the game, then use permutations and combinations to assign them to clients and licenses. On top of that, use a custom compiler that encrypts them so that the code can't be as easily decompiled using standard decompilers. What this does is make it harder for hackers and programs to find a variable like current stamina, because they won't know what to search the memory for. You can change the codes and encryption every update, like a really secure password, so that if they do find it, it'll expire. Say they do find it: the code and value identifier will be unique, so they can't redistribute the hack, and you can narrow down who's hacking depending on which variable codes are being sent back. The servers, of course, would have to take data and decrypt it license by license, but they'd perform little in terms of calculations. For debugging purposes, have an option for support to assign a new license encryption. (A toy illustration of this encoding idea follows below.)
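    (A toy illustration of the encoding idea in Python; a real client would do this in native code, and a simple XOR mask like this is only a speed bump against memory scanners, not real protection. The point is just that the plain value never sits in memory where a search for "15000" would find it.)

    ```python
    import secrets

    class ObfuscatedInt:
        """Store an integer XOR-masked with a per-license secret so the plain
        value (e.g. current stamina) is never what a memory scanner sees."""

        def __init__(self, license_key, value=0):
            self._mask = int.from_bytes(license_key, "big")
            self._stored = value ^ self._mask

        def get(self):
            return self._stored ^ self._mask

        def set(self, value):
            self._stored = value ^ self._mask

    # Each game license gets its own random mask, rotated every update.
    license_key = secrets.token_bytes(8)
    stamina = ObfuscatedInt(license_key, 15000)
    stamina.set(stamina.get() - 500)    # spend 500 stamina
    print(stamina.get())                # 14500, but memory holds the masked value
    ```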

    From one perspective, it'd be easier to plead with Microsoft and Apple and have them do the work.

    If anyone wants to help continue or challenge the development of client software checks, take a look at the thread. Otherwise I've got to update back and forth.
    But there is still more to be discussed here.

    For example, with a secure directory you can scan for and stop the injection of modified functions. The launcher already does so much in this discussion, but it will have to do more to secure the client. The small file mentioned in comment #6, which I will call Packet 1, will also have to check the functions of the launcher. Once they're running, the dominoes start falling. An additional file would have to be made in the directory of the launcher that has the blueprint for every system function called. While the game is running, it periodically checks the called functions against this directory, storing the information in encoded variables and reloading from the directory each time, as that should be secure. (A rough sketch of this periodic function check follows below.) What prevents the launcher from being code-injected is the initial test from Packet 1, the unflinching directory, and the encoded variables. Some may wonder what I mean by encoded variables and custom compilers. By those I mean code that deviates away from ANSI and only works with itself and the server - not the entire program, as you still have to work with system resources, but the parts that you need to keep secure, such as the validation steps. The validation process shouldn't call outside functions; it should merely be true/false statements. If exterior functions are swapped, it'll catch it.
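    (A rough Python stand-in for the blueprint idea; a native client would compare function pointers or module hashes instead, and the names here are hypothetical. At startup the watchdog records a fingerprint of each critical function, then periodically re-checks that none of them has been swapped or patched.)

    ```python
    import hashlib
    import threading

    def fingerprint(func):
        """Fingerprint a function by hashing its compiled bytecode."""
        return hashlib.sha256(func.__code__.co_code).hexdigest()

    class FunctionWatchdog:
        """Record a blueprint of critical functions and periodically re-check them."""

        def __init__(self, namespace, names, interval=10.0):
            self.namespace = namespace            # where the functions live
            self.names = names
            self.interval = interval
            self.blueprint = {n: fingerprint(namespace[n]) for n in names}

        def check(self):
            for name in self.names:
                if fingerprint(self.namespace[name]) != self.blueprint[name]:
                    print(f"function '{name}' was modified; closing the game")
                    return                        # a real client would shut down here
            timer = threading.Timer(self.interval, self.check)
            timer.daemon = True
            timer.start()

    def apply_damage(target, amount):             # stand-in for a protected routine
        target["health"] -= amount

    watchdog = FunctionWatchdog(globals(), ["apply_damage"])
    watchdog.check()                              # re-schedules itself every 10 seconds
    ```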

    Another bypass to all the checks was pointed out by @InvitationNotFound: that you could alter the sent traffic, I guess via some sort of proxy. The easy solution to that is to encrypt all the traffic, like an Opera VPN. As was stated previously, you can't have a fake client running alongside the launcher. You'd be getting two client signals from the same server key, or you'd be missing the signal from the validated client, which would invalidate the server key.
    This makes me think more about the whole server key idea, which needs a way to check for repeat invalidations so you don't have the same hacker getting logged in and out every 10 minutes, coupled with an extended monitored runtime environment that checks to see what program is blocking it, to issue automatic bans. (A minimal repeat-invalidation tracker is sketched below.) This system would require touchier development.
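    (A minimal sketch of the repeat-invalidation idea in Python; the thresholds are made up. Three invalidated server keys inside an hour escalates from a simple disconnect to flagging the account.)

    ```python
    import time
    from collections import defaultdict

    MAX_INVALIDATIONS = 3          # hypothetical threshold
    WINDOW_SECONDS = 3600          # rolling one-hour window

    invalidations = defaultdict(list)   # account id -> timestamps of invalidations

    def record_invalidation(account_id):
        now = time.time()
        recent = [t for t in invalidations[account_id] if now - t < WINDOW_SECONDS]
        recent.append(now)
        invalidations[account_id] = recent
        return "flag_for_ban" if len(recent) >= MAX_INVALIDATIONS else "disconnect"
    ```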
    Edited by Pinja on August 9, 2020 7:36PM