I do not know whether the chat functions of an MMO qualify for such legal protection.
However, it did spark a question in my mind. You (playing on the EU server, and possibly an EU citizen as well) mentioned earlier in this thread being surprised that you had not been sanctioned.
While I haven't looked into where other players who report being sanctioned are located, it seems relevant to mention EU-specific legal frameworks such as the EU AI Act and the GDPR as possible explanations, since several major AI services do not operate in the EU market (or at least postpone deployment there) specifically because of them.
Hi all, just wanted to chime in here. We're looking into some of the questions in the thread and checking in with the team for feedback. Since it's pretty late in the day on a Friday, we probably won't have any feedback until early next week. But wanted to acknowledge that we've seen this and are investigating.
For now, anyone with ban issues, please make sure to put in an appeal and share your ticket number. Happy to pass those along.
Skarphedinn wrote: »[snip]
SilverBride wrote: »Being told that this will be looked into and investigated, rather than "No, we aren't invading players' privacy", speaks volumes.
Wow. A bot should not be monitoring and reporting on private conversations. Unless a person reports it, nothing should be done. What's next? Monitoring Discord?
It's hilarious that in a game that has us running around killing people, and that includes the Thieves Guild, the Dark Brotherhood, slavery, and necromancy, they are concerned about what's being said in private chat between consenting adults.
spartaxoxo wrote: »
You can say that again
CoolBlast3 wrote: »If this is intentional and/or isn't rolled back entirely, I'm kinda done. I like ESO as a game, but the vast majority of my 10,000 hours of playtime are RP, and my purchasing power goes directly to stuff I can use in RP. With this, I can no longer RP without fear of being banned for calling a fellow RPer's character stupid in roleplay. So I'll no longer spend money on the game. Simple as.
If this is a new form of monitoring, then it would explain the recent lag spikes.
ZOS, if your game can't be expanded in, say, housing or PvP because of "technical limitations", it's a bit disingenuous to be using a bot that monitors private conversations.
SilverBride wrote: »
I am guessing that they are using pattern matching, which is probably what they have always been doing.
I could understand pattern matching that auto-forwards the relevant part of the dialogue to a real person for a decision, when it hits keywords that might indicate an actual threat or the planning of severe crimes. Obvious examples: the word "bomb", or the names of real terrorist organizations (although it's questionable whether people planning such crimes would openly write about them in plain words in a game chat, but that's a different topic).
But bans because of absolutely harmless things like mild swear words? Or sometimes complete "nonsense" or out-of-context jokes? Not only is it clear that no real person is involved in such decisions; it generally seems out of scope to even scan text automatically for something as trivial as a few stupid cusswords.
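To illustrate what I mean, here is a minimal sketch of such a flag-and-escalate pipeline (the keyword list, the names, and the review queue are all hypothetical; I have no idea what ZOS actually runs):

```python
# Hypothetical sketch only: a scanner that flags, but never sanctions.
# Keywords, names, and the queue are made up for illustration.
import re
from dataclasses import dataclass, field

ESCALATION_KEYWORDS = {"bomb", "attack plan"}  # illustrative, not real

@dataclass
class HumanReviewQueue:
    items: list = field(default_factory=list)

    def submit(self, player: str, message: str, keyword: str) -> None:
        # A human moderator makes the call; the bot only forwards.
        self.items.append({"player": player, "message": message, "matched": keyword})

def scan_message(player: str, message: str, queue: HumanReviewQueue) -> None:
    lowered = message.lower()
    for keyword in ESCALATION_KEYWORDS:
        # Word-boundary match so "bombastic" does not trigger a flag.
        if re.search(rf"\b{re.escape(keyword)}\b", lowered):
            queue.submit(player, message, keyword)
            return  # forward once for human review; no automated ban

queue = HumanReviewQueue()
scan_message("PlayerA", "bring a bomb build for the dolmen run", queue)
print(queue.items)  # a person would now review this hit in context
```

Even this toy version shows the problem others raised: "bomb" in a dolmen run is game talk, which is exactly why a human would have to see the context before anything punitive happens.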
That would be the "what they have always been doing" part of my comment. If you look back, you see complaints about them banning or suspending people for all sorts of things that seem automated. Automated does not mean AI. As I said, AI might be an improvement, but it will not come for free.
I meant by ZOS.
Kashya_Vulano wrote: »
Here's a snippet from the TOS. It doesn't have to mean AI, you're right, but this is also in the terms of service.
And people use chat to plan killing a boss, or other players in PvP; how is software supposed to distinguish that from IRL chat?
And if someone is planning IRL stuff, do you really think they're buying a game licence and logging in here rather than using an encrypted tool on their phone? I mean, really, that's just silly.
Skarphedinn wrote: »Good to see the mods are taking this seriously and snipping posts saying to report it to the GDPR.
wolfie1.0. wrote: »
It's actually not, which is sad. Remember that it wasn't that long ago that a dispute in a video game resulted in a Swatting incident resulting in death. Grooming, exploitation, release of classified documents... among other things has all happened either in live service games or ancillary services related to them.
As sad as it is: this is humans. Only the medium they use for these things has changed over the centuries.
Before somebody complains: No, not all humans. Not even the majority. But it is the human behind the screen that causes this, not the tech.
AI will never be able to distinguish RP from any of that.
wolfie1.0. wrote: »Yep, and as the mediums change so do the steps to try to protect against it.