Maintenance for the week of November 25:
• [COMPLETE] Xbox: NA and EU megaservers for maintenance – November 27, 6:00AM EST (11:00 UTC) - 9:00AM EST (14:00 UTC)
• [COMPLETE] PlayStation®: NA and EU megaservers for maintenance – November 27, 6:00AM EST (11:00 UTC) - 9:00AM EST (14:00 UTC)

EULA update and losing players

  • PrincessOfThieves
    Gabriel_H wrote: »
    You are misrepresenting these laws. Nowhere does it say that everything must be E-rated in case a child gets access to it.
    I live in the EU, and adult chats and websites are very much legal here even though there is of course a chance that a kid clicks on that "I am 18" button. There is a difference between trying to groom a child online or exposing them to adult content and, well, adult content and conversations existing.
    You also conveniently ignored the second part of my message.

    I'm misrepresenting nothing. I'm running out of ways to say the same thing in a way you understand.

    Correct, the law doesn't require everything to be E-rated, BUT it requires that ZOS do everything they can to monitor their chat system, and to take action where there is a concern about the nature of the content being exchanged; they are required to assess whether the fantasy violence being portrayed can lead to real-world harm, and to risk-assess what impact all this may have on children playing the game - something that ZOS do not control, and something that bad parenting is not an excuse for ZOS to use. THEN ZOS has to tell its players how it will achieve that by updating their EULA - which is what this entire discussion is about.



    The problem is, according to the information we have, they are doing much more than just the stuff they are legally obliged to do.
    This is not a game marketed to children and not a game where you can reasonably assume that there's a lot of kids around you at all times. And yet, people have been banned for things like "Ah yes, the (body fluid) Orbs.". Is it a bad joke? Maybe. But this has nothing to do with saving kids from predators or whatever. And like I said, unless they've changed their policy, it is possible that people who have been wrongfully banned might still get a "black mark" on their account. I would really like some clarification on that honestly, as someone who never uses slurs, rarely swears and is disgusted by child predators, I am avoiding using the chat at all these days just in case. English is not my first language and I often use other languages, what if I accidentally trigger the AI?
    And I can understand you just fine, we are just talking about different things. I am not arguing against keeping an eye on obviously heinous and illegal stuff and reporting it accordingly. I am however against overzealous AI spying and banning people for harmless things or consensual rp. Especially now, when this game is bleeding players.
    Edited by PrincessOfThieves on 13 November 2024 18:47
  • Pelanora
    Gabriel_H wrote: »
    LPapirius wrote: »
    ZOS is in our house just as much as we are in theirs, to use your analogy.

    What you say in your house is not regulated by law.
    What you say in ZOS' house is regulated by law.

    Assault and battery, abuse, fraud, terrorism, and other acts regulated by law, done in your house, are regulated by law.
    Edited by Pelanora on 13 November 2024 18:30
  • Alinhbo_Tyaka
    Syldras wrote: »
    Gabriel_H wrote: »
    No, it wouldn't. It would require them to monitor all messages - which their updated EULA says they are - and take action where necessary, while erring on the side of caution - which they appear to be doing.

    Okay, I should have been more precise: It would be the end of every "not child-friendly communication" through the internet, because it could never be 100% ensured that there is no child behind the screen, pretending to be an adult, who could see it.
    Gabriel_H wrote: »
    I don't know, what's the law requiring phone companies to monitor phone calls?

    In my country, it's completely forbidden, unless a court has decided for an exception (for one individual person, if there are already hints that this person might use the phone to communicate with accomplices to plan a severe crime) - then the police and/or secret service might start surveillance. This is the official stance, at least.
    If you are talking sexual consent, that varies by state but ranges anywhere from 16 to 18 years old. For other things it is 18 or 21 years old, with 18 being the age at which one is considered old enough to vote or serve in the military without their parents' consent.

    Thank you. I've been wondering because in my country, the thing that matters when it comes to the topic of "lewd chat talk/roleplay" isn't the age of becoming an adult, but the age of sexual consent (which is a few years lower). Which means once you've reached this age of consent, you're free to engage in as many embarrassing roleplay scenarios as you wish, you're thought of as being able to give consent to that, and no one, not your parents and not the government either, can protect/forbid (depending on perspective) you from doing this. Doesn't matter what I think about it (in case anyone cares: I generally don't think it's a good idea to leave written evidence of things that might be embarrassing later - especially not in an age where breakups are very common, so the evidence could easily fall into wrong hands), but that's how the laws are here. Anti-child-grooming laws end with the minor reaching the age of consent.

    Of course, non-consensual acts (like bothering people with lewd texts or pictures they didn't agree to) is forbidden, regardless of age.

    Again: Just stating how the laws are here in my country.

    The laws in most states are similar to what you describe.

    It is illegal to monitor US citizen phone calls without a search warrant or equivalent authorization from a judge. Phone company records can be subpoenaed as part of a court case; these do not usually involve a judge unless the recipient challenges the subpoena. Police can also obtain records without a warrant via a subpoena, but these are supposed to go through a judge. I'm not an attorney, but to the best of my knowledge, except for some exception buried in a national security law, a judge needs to approve any type of personal monitoring of citizens. Even in the national security realm there are special courts for overseeing the processing of warrants and subpoenas.
  • Bammlschwamml
    I would like to quote the last two sentences of the original English version of the code of conduct, but I can't access it anywhere. It always directs me to the German version for some reason. I can change the language for the terms of service on the Zenimax website (not the Elder Scrolls website) but not the code of conduct, so I will just let Google Translate do the job:

    "Note: Discussion of any moderation activities is not permitted in our community forums. Questions regarding account or moderation actions may only be asked via the complaint function."

    Does this mean that this whole discussion will be closed and every single one of us will get banned, or at least receive a warning and a permanent mark on their account?

    And when the AI takes over the "complaint function" too, what's going to happen then?

    Does this sound right to you?
  • Syldras
    I would like to quote the last two sentences of the original English version of the code of conduct, but I can't access it anywhere. It always directs me to the German version for some reason. I can change the language for the terms of service on the Zenimax website (not the Elder Scrolls website) but not the code of conduct, so I will just let Google Translate do the job.

    I just activated a VPN to have a look without getting auto-redirected to the translation (can recommend; it also helps with media that is blocked in your country for licensing reasons - which is why I originally got it); in case anyone cares, this is the exact English wording:

    [Screenshot: the exact English wording of the Code of Conduct]


    Edited by Syldras on 13 November 2024 19:05
    @Syldras | PC | EU
    The forceful expression of will gives true honor to the Ancestors.
    Sarayn Andrethi, Telvanni mage (Main)
    Darvasa Andrethi, his "I'm NOT a Necromancer!" sister
    Malacar Sunavarlas, Altmer Ayleid vampire
  • RandomKodiak
    This whole thread is unfortunately pointless. As one person already said: either you agree and play, or you don't agree and find another game. This part of the EULA covers everything, end of discussion :(

    "You agree that any and all Game related character data is stored and is resident on ZeniMax computers and servers, and any and all communications that you make within the Game (including, but not limited to, messages solely directed at another player or group of players) traverse through ZeniMax computers and servers, may or may not be monitored by us or our agents, you have no expectation of privacy in any such communications and expressly consent to such monitoring of communications you send and receive."

    I don't rp, and I never use racial slurs etc., so I am not worried, but there it is, direct from the EULA.
  • Syldras
    I never use racial slurs etc., so I am not worried

    Wait until you tell a friend in chat that you've visited a German restaurant and ate Nuremberg sausages with

    [Screenshot: the German name of the side dish]

    and the last word gets you banned. It already gets blurred out by the profanity filter, btw :D Although I've never met any German who even felt insulted by this word (nowadays it's seen as colloquial or as teasing banter; no one really finds it offensive), and I was born in Germany and have lived here for almost 40 years now.
  • SilverBride
    I don't care if the EULA says they can use AI to listen to every single thing we say in every single circumstance, and read all our mail, and take punitive action based on certain criteria.

    And I don't care if they have a legal right to do so.

    That does NOT make it good customer service or the right thing to do.

    Oh, and losing players will happen, but not always by the player's choice. When one typo can be enough to earn a permanent ban then yes, they will lose players.
    Edited by SilverBride on 13 November 2024 20:09
    PCNA
  • spartaxoxo
    pklemming wrote: »
    This is a parental responsibility. Kids should be monitored so they do NOT play games that are not suitable for their age group. It should not come down to a company to introduce nanny policies based on bad parenting.

    It is on both and should be on both. Companies have the tools to monitor this stuff and should report any child abuse they see. Parents cannot monitor their children 24/7. They have to work. They have to sleep.

    I don't have a problem with AI flagging potential child abuse. I do have a problem with ZOS using this to monitor private chat for things that aren't criminal and are between consenting adults. The use of it should be strictly about things that may need to be turned over to law enforcement and nothing else imo.
    Edited by spartaxoxo on 13 November 2024 20:15
  • dk_dunkirk
    Alinhbo_Tyaka wrote: »
    It is illegal to monitor US citizen phone calls without a search warrant or equivalent authorization from a judge. [...]

    As I said before, they do whatever they want anyway. Snowden proved it to the world. This has changed nothing in our government, and he continues to live outside of US extradition for documenting that the government operates illegally.

    Also, it has been proven many times over that the FISA courts are 1) an embarrassment to our 6th Amendment right to face our accusers and 2) nothing but a rubber stamp anyway.

    So trying to compare the ethics of this situation to what the US government does is lunacy. They do not follow their own rules. The only question before us here is what ZOS is REQUIRED to do and HOW they go about fulfilling it. I've read most of the comments here, and I haven't seen anyone point out a section of the law that says ZOS must monitor private chats and action people for the things they deem a violation of their TOS, nor have I seen anyone point out where this needs to be done through a particular method, like an AI bot. I see one person defending the company and this behavior, and claiming it's all required under the law, but I haven't seen an actual reference that would prove that. Have I just missed it?
  • CrazyKitty
    Syldras wrote: »
    Elsonso wrote: »
    No, consent does not make everything ok. I am not going to get into specifics, but just think about what that means and how it can be abused.

    I'm not sure if we're talking about the same thing, but my premise was the random "not suitable for minors" roleplay chat that some adult players may have in this game, no matter if it's the romantic pursuits of Gronk the orc warrior with Bjørn the bulky Nord smith, or some "We're evil vampires attacking travellers and throwing them into our dungeon" make-believe. If adults consensually decide to chat-roleplay this with each other, I indeed see no harm done, whether I personally have an interest in it or not. My likes or dislikes and my personal sense of taste play no role in other people's conversations.

    Of course ZOS can still forbid it, it's their game and their server, after all.

    The problem is the lack of consistency. What is fine one day often is not the next day. What is deemed fine for one person to do is often enforced as unacceptable when a different person (usually a person who's posted negative feedback) has action taken against them for the exact same behavior.
  • Gabriel_H
    Gabriel_H wrote: »
    Syldras wrote: »
    virtus753 wrote: »
    An M rating in the US means 17+ - and that's a suggestion, not a requirement. 17-year-olds are legally minors.

    What's the age of consent in the US?

    Depends on the State. But again, an age rating is a legal requirement for purchase, not play.

    There are no legal requirements to prevent a child from purchasing a game with an inappropriate ESRB rating. ESRB compliance is strictly voluntary on the part of the game company and retailers. Some retailers will check ages, but for the most part the expectation is that the parent will control what the child purchases based on the rating system.

    I appreciate that, I was simply using lazy shorthand as different countries have different rating systems and legal requirements.

    Fun fact: Selling an 18-rated game to a 17-year-old in the UK levies a personal fine against the salesperson and possible prison time, but selling alcohol to a 17-year-old levies only a fine against the company - and it is smaller than the fine for selling the game. Some context on the absurd stance the UK takes on "harmful" video games.
  • Gabriel_H
    This is not a game marketed to children and not a game where you can reasonably assume that there's a lot of kids around you at all times.

    It has a 14+ rating in South America and a 16+ rating in Germany. There are kids around; that's all the law cares about. As I said, ZOS are relying heavily on AI to moderate and monitor. There will be teething problems, but the fact that there are not hundreds of people here complaining about it suggests the problem is not widespread.

    ZOS need to continue to improve their moderation, ideally with human sign-off rather than auto-bans, as well as ensuring they have a robust appeals process. At the end of the day, though, this is not ZOS' doing; they are simply protecting themselves against a set of laws that have some ambiguity baked in, which could cost them nothing or tens of millions depending on which judge they have to stand in front of - and that means less money for developing games.
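    A flag-then-review flow of the kind described here - automation that only flags, with a human making the final call - could be sketched as follows. This is purely illustrative: the class, names, and decision rule are invented for the example and are not ZOS's actual system.

    ```python
    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Flag:
        player: str
        message: str
        reason: str

    class ModerationQueue:
        """Hypothetical flag-and-review pipeline: the automated step only
        enqueues flags; a human reviewer decides on any action."""

        def __init__(self):
            self.pending = deque()   # flags awaiting human review
            self.actions = []        # (player, verdict) pairs a human approved

        def auto_flag(self, player: str, message: str, reason: str) -> None:
            # Automated step: flag the message, never ban outright.
            self.pending.append(Flag(player, message, reason))

        def human_review(self, decide) -> None:
            # decide(flag) returns "dismiss", "warn", or "ban".
            while self.pending:
                flag = self.pending.popleft()
                verdict = decide(flag)
                if verdict != "dismiss":
                    self.actions.append((flag.player, verdict))

    # A reviewer dismisses a harmless filter hit but warns for real abuse.
    queue = ModerationQueue()
    queue.auto_flag("PlayerA", "a bad joke", "profanity filter hit")
    queue.auto_flag("PlayerB", "targeted harassment", "abuse")
    queue.human_review(lambda f: "dismiss" if f.reason == "profanity filter hit" else "warn")
    print(queue.actions)  # [('PlayerB', 'warn')]
    ```

    The point of the design is that the automated layer never issues a verdict on its own, which is exactly the "human sign-off rather than auto-bans" being asked for.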

  • Gabriel_H
    dk_dunkirk wrote: »
    I see one person defending the company and this behavior, and claiming it's all required under the law, but I haven't seen an actual reference that would prove that. Have I just missed it?

    Really? My first post - highlighted for your convenience:

    https://lewissilkin.com/en/insights/digital-commerce-creative-101-online-safety-act-navigation-for-the-video-games-industry

    The OSA imposes a number of compliance requirements which will be regulated, investigated and enforced by Ofcom, the regulator for online safety.
    • Risk assessments: In-scope game studios will need to carry out regular OSA risk assessments to identify the illegal harms which may arise in the context of their games – for example, whether chat rooms could support dissemination of hateful content, or if any illegal in-game behaviour might encourage real-world illegal harms.
    • Child risk assessments: Additional – and more rigorous – risk assessments will be required wherever children are a likely game audience. Specific guidance from Ofcom on best practice is expected this Spring.
    • Content moderation: All in-scope platforms must actively monitor and remove illegal content, which may include human, manual or automated content moderation tools.
    • In-game reporting systems & complaints procedures: Players must be able to easily report illegal and harmful content within games, with straightforward complaint procedures.
    • Updated Terms of Service: games’ EULAs and Terms of Service must specify how users are protected from illegal content, explain relevant complaint handling policies and new breach of contract rights for users. Where relevant, Terms of Service will also need to clarify how children are prevented from coming across illegal or harmful content, including in respect of any proactive technology used to do so.
    • Reporting and record-keeping: operators of online games must keep records of risk assessments and outcomes and report findings to Ofcom.

    Additional reading: https://taylorwessing.com/en/interface/2024/the-video-game-industry-in-2024/online-safety-rules-and-their-impact-on-games-businesses
    Edited by Gabriel_H on 14 November 2024 00:00
  • endorphinsplox
    It will never stop surprising me, I think, that there are still those who would agree with, and comply with, automated content moderation for a paid service when the automation is fundamentally incapable of effectively distinguishing between actually harmful content and discourse that is consented to by the participating parties.

    TOS or not, I think any reasonable person would expect that DMs can indeed be stored and reviewed should there be reported misconduct, but to have an automated system flag anything that might be offensive for it to potentially be reviewed by a real person is just not something I think can offer more benefits than drawbacks. I do indeed consider this to be an invasion of privacy, and while I support the notion of trying to combat potentially illegal interactions in their game, this will not do that. All it will do is push people to another platform that ZOS can't monitor, and if the in-game chat dies, the game would suffer immensely as a result.

    I think it's entirely fair to want DMs between friends or private group chats to remain exactly that: private. I don't care who hosts the service; unless there is valid reason to believe wrongdoing is occurring, it's none of ZOS's business what I discuss and with whom.

    It's also insane to me that people would genuinely say that clicking the Agree button on the TOS at login means you actually jive with all the contents therein. We all know we're agreeing to be restricted and not actually act the way we normally would, so that we don't get robbed of however much money each of us has spent on this game. In fact, the arbitration clause is probably the best example of that. You are basically waiving your right to sue ZOS for wrongdoing for any reason. Kinda reminiscent of the case where a person went to a Disney theme park, was served food they had notified the staff they were allergic to, and subsequently passed away. Despite this, Disney argued that the woman's husband could not pursue legal action because he had signed an agreement, when he signed up for a Disney+ trial, that waived his right to take legal action against Disney for any reason.

    For those who don't know, arbitration clauses are generally considered anti-consumer and are unenforceable in a lot of places in the world, such as the UK, due to consumer protection laws. At the end of the day, all of the people who support these big companies in their efforts to surveil and moderate everything we do are doomed to extinction. It's usually performative, in that they want to "protect the innocent" or "stop crime", or they want all language they disagree with to be censored, but it's a power too great that will ultimately be used against them - and when that happens, they suddenly no longer support the idea.

    And I'm sorry, but agreeing to a TOS that strips me of my rights and makes me act only in ways the company personally endorses, just so I can play the game I paid money for, is really stupid - but that's our reality. They've continued to worsen the product and refuse to communicate with us effectively, so they can take their 2 billion and eat it for all I care.
    Edited by endorphinsplox on 14 November 2024 00:51
  • Syldras
    [Quoted post has been removed.]

    I just pointed out that ESO being 16+ in Germany is a very bad example for "there are children playing the game, ZOS has to protect them". The game is 16+ because here, people of that age are considered mature enough to deal with the game's content - without any need for extra protective measures, as the ratings already take into account aspects like multiplayer modes and online communication with strangers. Without those, ESO might even be 12+ - like TES Oblivion is in Germany.
    [Quoted post has been removed.]

    Actually, I just browsed the DSA documents (it's all online), and nowhere does it say "anyone under 18 is a child". They mention "children and young people", but clearly point out that different measures are suitable for different age groups, depending on each age group's level of mental maturity. And most of the text is focussed on data protection, anti-discrimination and anti-media-addiction measures anyway. You know what's funny? How often they point out the importance of giving users, including minors, flagging/reporting tools to report unwanted behaviour such as harassment. I emphasize: giving them tools to report unwanted behaviour themselves.
    Edited by ZOS_GregoryV on 14 November 2024 19:07
  • dragonlord500
    Hmm, I clearly saw this hitting the forums.

    I honestly believe some policing of the chats, and zone chat above all, should happen. PM chat, though... I think they should only take action if the player reports the offender, so having AI monitoring it I find is a really bad idea, because truly innocent players might be banned from it.

    All in all, this whole topic will really not go anywhere, because ZOS will never reverse their choice in changing the EULA and TOS. I clearly see this is all Microsoft's doing, as they really want to push their AI monitoring program into everything they own.
    Edited by dragonlord500 on 14 November 2024 01:22
    Guild master of Darkness of Sanguinaris. Birthday is December 4th.
  • Amottica
    Syldras wrote: »
    Amottica wrote: »
    I doubt many of us are concerned as we do not type things into chat that would be an issue.

    I don't use slurs and usually don't swear either, so normally I would not be bothered at all. But AI moderation has the risk of false positives, which does make me a little concerned. What if it flags a harmless word I use in a conversation in another language because in English it looks like a slur? A human moderator would see it's not the English-language slur; can the AI (or rather a simple word filter, from what I've seen so far) recognize this? I doubt it.

    And I think it's fine to address this and voice concerns, even if we have accepted the TOS (accepting an agreement also doesn't mean one cannot discuss the rules and suggest improvements).

    That's a great hypothetical that may never occur, and Zenimax's Kevin clearly stated that they learn and make adjustments from issues that come up. It is also a major assumption that humans would not make the same mistake.

    And the hypothetical has no bearing on my quoted comment.



    Edited by Amottica on 14 November 2024 03:23
  • Syldras
    Amottica wrote: »
    It is also a major assumption that humans would not make the same mistake.

    At least a human ZOS employee could look at the context of a message while a simple word filter can not.
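    The context gap described here is the classic "Scunthorpe problem": a context-free filter cannot tell a blocked term from an innocent word that merely contains or resembles it. A minimal sketch of the difference, with a made-up blocklist and messages purely for illustration (this is not ZOS's actual filter):

    ```python
    import re

    # Illustrative blocklist only - not any real moderation list.
    BLOCKLIST = {"ass", "hell"}

    def substring_filter(message: str) -> bool:
        """Naive approach: flag if a blocked term appears anywhere at all."""
        text = message.lower()
        return any(term in text for term in BLOCKLIST)

    def whole_word_filter(message: str) -> bool:
        """Better, but still context-blind: flag only whole-word matches."""
        words = re.findall(r"[\w']+", message.lower())
        return any(word in BLOCKLIST for word in words)

    # Substring matching flags harmless words ("Scunthorpe problem"):
    print(substring_filter("My assassin build needs work"))   # True - "ass" inside "assassin"
    print(whole_word_filter("My assassin build needs work"))  # False
    ```

    Even the whole-word variant knows nothing about context or language: a foreign word that merely looks like an English slur is still a hit, and only a human reviewer (or genuine language understanding) can close that gap.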
  • wolfie1.0.
    TOS or not, I think any reasonable person would expect that DMs can indeed be stored and reviewed should there be reported misconduct [...]

    Almost every financial institution monitors every single financial transaction you make. Every card company does the same. They know who you're paying and when. They are required by law and by their financial obligations to do so, both to protect your money and to comply with regulations. Much of this work is done by algorithms and AI, and most FIs let it roll along much the same as ZOS will.

    I point this out because many millions of people in the US and across the world have signed onto this system.
  • Amottica
    ✭✭✭✭✭
    ✭✭✭✭✭
    Syldras wrote: »
    Amottica wrote: »
    It is also a major assumption that humans would not make the same mistake.

    At least a human ZOS employee could look at the context of a message, while a simple word filter cannot.

    My full comment, from which the one sentence was plucked, noted that Zenimax is adjusting as it learns from the process—something it also has to do with human intervention. Not sure why that was left out since it is germane to the reply.

    The benefits of using technology like this in a private forum (the game) outweigh the drawbacks since it can quickly flag and handle abusive players.

    Granted, that is my opinion, and I understand some have differing views, which is ok.

  • Amottica
    ✭✭✭✭✭
    ✭✭✭✭✭
    Hmm, I clearly saw this hitting the forums.

    I honestly believe some policing of the chats, and of zone chat above all, should happen. PM chat, though... I think they should only take action if a player reports the offender, so having AI monitor it is a really bad idea, because truly innocent players might be banned from it.

    All in all, this whole topic will really not go anywhere, because ZOS will never reverse their choice in changing the EULA and TOS. I clearly see this as all Microsoft's doing, as they really want to push their AI monitoring program into everything they own.

    Zenimax has said there is no auto-ban on private chat, though it is flagged. Considering private chat is not always welcome chat, that makes sense.

  • pklemming
    ✭✭✭✭✭
    Amottica wrote: »
    Hmm, I clearly saw this hitting the forums.

    I honestly believe some policing of the chats, and of zone chat above all, should happen. PM chat, though... I think they should only take action if a player reports the offender, so having AI monitor it is a really bad idea, because truly innocent players might be banned from it.

    All in all, this whole topic will really not go anywhere, because ZOS will never reverse their choice in changing the EULA and TOS. I clearly see this as all Microsoft's doing, as they really want to push their AI monitoring program into everything they own.

    Zenimax has said there is no auto-ban on private chat, though it is flagged. Considering private chat is not always welcome chat, that makes sense.

    I am sure the two people, possibly having an intimate chat, would be overjoyed that a person gets to look at their conversation after it is 'flagged'.

    It just seems extremely creepy.

    If we take the likes of EverQuest, so many marriages came about from people finding each other in that game, marriages that have lasted to this day. I am sure there were a fair few flaggable conversations back then.

    Advances that are not consensual can be reported and blocked; bringing AI and auto-flagging into the equation is a massive invasion of privacy. I do not care if they "own" the game. As I said previously, there is an expectation of privacy in a private conversation.

    Of course, now there is not, which is a very, very bad precedent to set.
    Edited by pklemming on 14 November 2024 04:35
  • Gabriel_H
    ✭✭✭✭
    Syldras wrote: »
    Actually I just browsed the DSA documents (it's all online) and nowhere is "anyone under 18 a child". They mention "children and young people"

    *sigh* The wording is "children and young people" because in some EU countries being 16, for example, does not make you a child. However, the DSA specifically places anyone under 18 in the same class as children. Hence my saying that it "in effect" regards anyone under 18 as a child for the purposes of online safety.

    Second paragraph: https://digital-strategy.ec.europa.eu/en/library/digital-services-act-dsa-explained-measures-protect-children-and-young-people-online

    Second page of the EU's own booklet on the DSA: pc879siftqra.png
  • Gabriel_H
    ✭✭✭✭
    All in all, this whole topic will really not go anywhere, because ZOS will never reverse their choice in changing the EULA and TOS. I clearly see this as all Microsoft's doing, as they really want to push their AI monitoring program into everything they own.

    It is not Microsoft's doing. It is the result of UK and EU legislation. ZOS are required by law to monitor the chat system.

  • Gabriel_H
    ✭✭✭✭
    pklemming wrote: »
    I am sure the two people, possibly having an intimate chat, would be overjoyed that a person gets to look at their conversation after it is 'flagged'.

    It just seems extremely creepy.

    If we take the likes of EverQuest, so many marriages came about from people finding each other in that game, marriages that have lasted to this day. I am sure there were a fair few flaggable conversations back then.

    Advances that are not consensual can be reported and blocked; bringing AI and auto-flagging into the equation is a massive invasion of privacy. I do not care if they "own" the game. As I said previously, there is an expectation of privacy in a private conversation.

    Of course, now there is not, which is a very, very bad precedent to set.

    Is it creepy? Yes.
    Is it an invasion of privacy? Yes.
    Is it a legal requirement imposed on ZOS? Yes!

    I very much doubt ZOS are all that happy about spending additional time, money, and resources on this, but they do not have a choice. I'm not here to defend ZOS; I'm simply pointing out that they have to deal with the laws imposed on them.

  • SeaGtGruff
    ✭✭✭✭✭
    ✭✭✭✭✭
    Syldras wrote: »
    So it may be legally required. I'm wondering now how exactly this should work in a fantasy game set in a game world where violence is common and also a normal game objective ("Kill 15 innocent people with the Blade of Woe.")? How are we supposed to talk to other players about game content with these rules?

    If this is being done to remain in compliance with legal requirements, your question should really be addressed to the people (politicians and lawmakers) who came up with the legal requirements in the first place. And to be honest, I don't think a lot of those people care about your question. In fact, they probably would prefer that fantasy games set in a world where violence is common and often a normal game objective were not even being created, sold, bought, and played in the first place. :(
    I've fought mudcrabs more fearsome than me!
  • Gabriel_H
    ✭✭✭✭
    SeaGtGruff wrote: »
    Syldras wrote: »
    So it may be legally required. I'm wondering now how exactly this should work in a fantasy game set in a game world where violence is common and also a normal game objective ("Kill 15 innocent people with the Blade of Woe.")? How are we supposed to talk to other players about game content with these rules?

    If this is being done to remain in compliance with legal requirements, your question should really be addressed to the people (politicians and lawmakers) who came up with the legal requirements in the first place. And to be honest, I don't think a lot of those people care about your question. In fact, they probably would prefer that fantasy games set in a world where violence is common and often a normal game objective were not even being created, sold, bought, and played in the first place. :(

    That ^^

    As I said in an earlier post, taking the UK (whose OSA is causing these issues) as an example, the fervor around "video game violence" is next-level absurd.

    If you work in Tesco and sell alcohol to a 17-year-old, then Tesco gets fined £1,000.
    If you work in Tesco and sell an 18-rated game to a 17-year-old, then Tesco gets fined £5,000, YOU get fined £5,000, and YOU can go to prison for 6 months.
  • MJallday
    ✭✭✭✭✭
    Gabriel_H wrote: »
    All and all this whole topic will really not go anywhere because Zos will never reverse their choice in changing the EULA and TOS. I clearly see this is all Microsoft doing as they really want to push their AI monitoring program into everything they own.

    It is not Microsoft's doing. It is the result of UK and EU legislation. ZOS are required by law to monitor the chat system.

    There is no such "law" to monitor chat systems in a game, either in the UK or the EU. Please stop making things up.

    What companies are obliged to do is keep "appropriate" records/logs of activity, which can be requested by law enforcement. This doesn't just include chat, but also in-game activities. These records must be kept in line with the GDPR (amongst other data protection laws).

    They (ZOS) can use appropriate means, should they choose to do so, to monitor things. This includes AI, but is not limited to that method. They are also morally obligated to act upon reports and escalate where appropriate. That's what the EULA and TOS are about.

    Companies can deem whatever records are appropriate, including chat logs. E.g., it would look pretty daft if they didn't keep chat logs and the police were to ask for them, for example over real-life threats.

    So are they required by law to monitor chat? No.
    Do they choose to? Yes, using appropriate means.

    I'm not sure this entire thread is going anywhere, tbh.
  • Gabriel_H
    ✭✭✭✭
    MJallday wrote: »
    There is no such "law" to monitor chat systems in a game, either in the UK or the EU. Please stop making things up.

    What companies are obliged to do is keep "appropriate" records/logs of activity, which can be requested by law enforcement. This doesn't just include chat, but also in-game activities. These records must be kept in line with the GDPR (amongst other data protection laws).

    They (ZOS) can use appropriate means, should they choose to do so, to monitor things. This includes AI, but is not limited to that method. They are also morally obligated to act upon reports and escalate where appropriate. That's what the EULA and TOS are about.

    Companies can deem whatever records are appropriate, including chat logs. E.g., it would look pretty daft if they didn't keep chat logs and the police were to ask for them, for example over real-life threats.

    So are they required by law to monitor chat? No.
    Do they choose to? Yes, using appropriate means.

    I'm not sure this entire thread is going anywhere, tbh.

    I have already posted multiple links regarding the OSA:

    Content moderation: All in-scope platforms must actively monitor and remove illegal content, which may include human, manual or automated content moderation tools.
    Text or voice chat functionality (team chat in team-based games, or chat in large open servers which bring player avatars together). There is an exemption for services that only enable user-generated content in the form of SMS, email or MMS messages and/or one-to-one live aural communications.
    All in-scope services will need to comply with a range of obligations, including risk assessment and mitigation, protecting users from illegal content and child users from certain harmful content, operating complaints processes, reporting and record-keeping. Most businesses will also need to make changes to their terms of service (or EULAs). Larger services that pose risks of several different types of harm are subject to the most onerous obligations.
    Edited by Gabriel_H on 14 November 2024 09:51
This discussion has been closed.