I do not know whether the chat functions of an MMO qualify for such legal protection.
However, it did spark a question in my mind. You (playing on the EU server, and possibly also an EU citizen) mentioned earlier in this thread being surprised not to have been sanctioned.
While I haven't looked into where other players who report being sanctioned are located, it seems relevant to mention EU-specific legal frameworks such as the EU AI Act and the GDPR as possible explanations, since several major AI services do not operate in the EU market (or at least postpone deployment there) specifically because of these regulations.
Hi all, just wanted to chime in here. We’re looking into some of the questions in the thread and checking in with the team for feedback. Since it’s pretty late in the day on a Friday, we probably won’t have any feedback until early next week. But wanted to acknowledge that we’ve seen this and are investigating.
For now, anyone with ban issues, please make sure to put in an appeal and share your ticket number. Happy to pass those along.
Skarphedinn wrote: »[snip]
SilverBride wrote: »Being told that this will be looked into and investigated rather than "No we aren't invading players privacy" speaks volumes.
Wow. A bot should not be monitoring and reporting on private conversations. Unless a person reports it, nothing should be done. What's next? Monitoring Discord?
It's hilarious that in a game that has us running around killing people, and that includes the thieves guild, the dark brotherhood, slavery, and necromancy, they are concerned about what's being said in private chat between consenting adults.
spartaxoxo wrote: »SilverBride wrote: »Being told that this will be looked into and investigated rather than "No we aren't invading players privacy" speaks volumes.
You can say that again
CoolBlast3 wrote: »If this is intentional and/or isn't rolled back entirely, I'm kinda done. I like ESO as a game, but the vast majority of my 10,000 hours of playtime are RP, and my purchasing power goes directly to stuff I can use in RP. With this, I can no longer RP without fear of being banned for calling a fellow RPer's character stupid in roleplay. So I'll no longer spend money on the game. Simple as.
Wow. A bot should not be monitoring and reporting on private conversations. Unless a person reports it, nothing should be done. What's next? Monitoring Discord?
It's hilarious that in a game that has us running around killing people, and that includes the thieves guild, the dark brotherhood, slavery, and necromancy, they are concerned about what's being said in private chat between consenting adults.
Do you really think Discord is not being monitored?
If this is a new form of monitoring, then it would explain the recent lag spikes.
ZOS, if your game can't be expanded in, say, housing or PvP because of "technical limitations", it's a bit disingenuous to be using a bot that monitors private conversations.
SilverBride wrote: »
I am guessing that they are using pattern matching, which is probably what they have always been doing.
I could understand pattern matching that auto-forwards the relevant part of the dialogue to a real person for a decision, when it hits keywords that might indicate an actual threat or the planning of severe crimes. An obvious example: the word "bomb", or the names of real terrorist organizations (although it's questionable whether people planning such crimes would openly write about them in plain words in a game chat, but that's a different topic).
But bans because of absolutely harmless things like mild swearwords? Or sometimes complete "nonsense" or out-of-context jokes? Not only is it clear that no real person is involved in such decisions; it also seems out of scope to even scan text automatically for something as trivial as a few stupid cusswords.
That would be the "what they have always been doing" part of my comment. If you look back, you see complaints about them banning or suspending people for all sorts of things that seem automated. Automated does not mean AI. As I said, AI might be an improvement, but it will not come for free.
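To make that distinction concrete, here is a minimal sketch of what a flag-and-escalate first pass could look like, in Python. Everything in it is a hypothetical illustration rather than anything ZOS has confirmed: the word lists, the ChatMessage fields, and the triage outcomes are all assumptions.

```python
import re
from dataclasses import dataclass

# Hypothetical word lists -- purely illustrative, not ZOS's actual ones.
SEVERE_PATTERNS = [r"\bbomb\b"]    # names of real organizations would also go here
MILD_PROFANITY = {"damn", "crap"}  # the "stupid cusswords" case from the post above

@dataclass
class ChatMessage:
    sender: str
    channel: str  # e.g. "whisper", "group", "zone"
    text: str

def triage(msg: ChatMessage) -> str:
    """First-pass routing: escalate severe hits to a human, never auto-ban."""
    lowered = msg.text.lower()
    # Severe keywords: forward the dialogue snippet to a real person for a decision.
    if any(re.search(pattern, lowered) for pattern in SEVERE_PATTERNS):
        return "escalate_to_human_review"
    # Mild profanity: at most log or filter; auto-sanctioning here is the complaint above.
    if MILD_PROFANITY & set(lowered.split()):
        return "log_only"
    return "ignore"
```

The point is the routing, not the lists: a severe keyword hit creates a review task for a person instead of triggering an automatic sanction.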
Wow. A bot should not be monitoring and reporting on private conversations. Unless a person reports it, nothing should be done. What's next? Monitoring Discord?
It's hilarious that in a game that has us running around killing people, and that includes the thieves guild, the dark brotherhood, slavery, and necromancy, they are concerned about what's being said in private chat between consenting adults.
Do you really think Discord is not being monitored?
I meant by ZOS.
Kashya_Vulano wrote: »Monitoring keywords does not equal AI or bots... Setting up a monitor on all in-game comms to flag specific words used in chat literally requires zero AI. It's no different than a basic word censor on forums or in character creation. A simple query against a database of flagged words that returns a simple result.
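For what it's worth, the "simple query against a database of flagged words" described above really is trivial to build with zero AI. Here is a minimal sketch in Python using SQLite; the table name, schema, and example entries are assumptions made for illustration, not ZOS's actual setup.

```python
import sqlite3

# A zero-AI word filter: a plain lookup against a table of flagged words.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flagged_words (word TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO flagged_words VALUES (?)",
                 [("exploit",), ("scam",)])  # placeholder entries

def contains_flagged_word(message: str) -> bool:
    """True if any word in the message matches the flagged-word table."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return False
    placeholders = ",".join("?" for _ in words)
    row = conn.execute(
        f"SELECT 1 FROM flagged_words WHERE word IN ({placeholders}) LIMIT 1",
        words,
    ).fetchone()
    return row is not None

print(contains_flagged_word("that scam nearly got me"))  # True
```

There is no machine learning anywhere in that; it is the same kind of filter forums and character-name checks have used for decades.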
Here's a snippet from the TOS. It doesn't have to mean AI, you're right, but this is also in the terms of service.
Inari Telvanni wrote: »TOS says,
" Content Moderation
To the extent that ZeniMax performs any content moderation of UGC to ensure its compatibility with these Terms of Service (including the Code of Conduct or any relevant EULA), such content moderation may be carried out via human review as well as through the use of AI-powered proactive and reactive moderation methods including without limitation, software that uses algorithmic decision making.
ZeniMax's proactive content moderation includes without limitation using tools to block and filter UGC that is illegal and/or incompatible with these Terms of Service.
Reactive content moderation methods include without limitation user reporting features which allow You to inform ZeniMax of the behavior or Content of other users that You have encountered which you believe is illegal and/or incompatible with these Terms of Service (including the Code of Conduct) and any such behaviour or Content can be reported to ZeniMax by contacting ZeniMax Customer Support at help.bethesda.net or help.elderscrollsonline.com. If You are in-Game, You can report an issue using an in-Game help feature where applicable. Where ZeniMax is required to do so by virtue of the Statutory Obligations (as defined in Section 1), ZeniMax shall advise You of remedial steps and action taken against another user as a result of Your report including details of what steps ZeniMax has taken to investigate your report, if ZeniMax has removed Content that You have reported or if any other restrictions have been applied to the Content or the other user. "
This is an enormous red flag.
No, it is not.
Zos, as a company that can be held liable for conversations that might happen through the use of their platform, have a responsibility to make sure that they can monitor illegal things being done in their game. Even if those things are done in private chats. If you are using their tools to do illegal things, they are legally put in a spot to intervene or face severe legal repercussions if they do not.
'Illegal things.'
The whole game is about doing things that are illegal IRL. Talking about the game would, in that case, be a bannable offence. They will need to be more sophisticated than keywords.
I mean, yes. Just like any chat or social media service is potentially liable for the planning or conducting of actual illegal activity through their service, Zos is also beholden to that. If a group of people use whispers or group chat to plan a real crime, and Zos did not do everything within their power to prevent their service from being used in that manner, then they could be held liable for allowing that conduct. An entire platform is currently under fire for exactly this in Europe.
And people use chat to plan killing a boss, or killing other players in PvP, and software can distinguish that from IRL chat how?
And if someone is buying a game licence and logging in here to plan IRL stuff, rather than using an encrypted tool accessible on a phone... I mean, really, you think that's what's happening? Because that's just silly.
Skarphedinn wrote: »Good to see the mods are taking this seriously and snipping posts saying to report it to the GDPR.
Inari Telvanni wrote: »[snip]
It's actually not, which is sad. Remember that it wasn't that long ago that a dispute in a video game led to a swatting incident that resulted in a death. Grooming, exploitation, the release of classified documents... among other things, have all happened either in live-service games or in ancillary services related to them.
wolfie1.0. wrote: »It's actually not, which is sad. Remember that it wasn't that long ago that a dispute in a video game led to a swatting incident that resulted in a death. Grooming, exploitation, the release of classified documents... among other things, have all happened either in live-service games or in ancillary services related to them.
As sad as it is: this is humans. Just the medium they use for these things has changed over the centuries.
Before somebody complains: No, not all humans. Not even the majority. But it is the human behind the screen that causes this, not the tech.
AI will never be able to distinguish RP from any of that.
wolfie1.0. wrote: »Yep, and as the mediums change, so do the steps to try to protect against it.
I could understand pattern matching that auto-forwards the relevant part of the dialogue to a real person for a decision, when it hits keywords that might indicate an actual threat or the planning of severe crimes.