I'm talking about in-game messages to friends. Multiple friends of mine, myself included, have had their accounts suspended for interacting with each other. Simply calling someone "regarded" or saying "come" (both written in various forms) should not be grounds for account suspension.
https://forums.elderscrollsonline.com/en/discussion/comment/8190298/#Comment_8190298
ZOS_Kevin wrote: We want to follow up on this thread regarding moderation tools and how this intersects with the role-play community. First, thank you for your feedback and raising your concerns about some recent actions we took due to identified chat-based Terms of Service violations. Since you all raised these concerns, we wanted to provide a bit more insight and context to the tools and process.
As with any online game, our goal is to make sure you all can have fun while making sure bad actors do not have the ability to cause harm. To achieve this, our customer service team uses tools to check for potentially harmful terms and phrases. No action is taken at that point. A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring. A human is always in control of the final call of an action and not an AI system.
That being said, we have been iterating on some processes recently and are still learning and training on the best way to use these tools, so there will be some occasional hiccups. But we want to stress a few core points.
We are by no means trying to disrupt or limit your role-play experiences or general discourse with friends and guildmates. You should have confidence that your private role-play experiences and conversations are yours and we are not looking to action anyone engaging in consensual conversations with fellow players.
The tools used are intended to be preventative, and alert us to serious crimes, hate speech, and extreme cases of harm.
To reiterate, no system is auto-banning players. If an action does occur, it’s because one of our CS agents identified something concerning enough to action on. That can always be appealed through our support ticketing system. And in an instance where you challenge the appeal process, please feel free to flag here on the forum and we can work with you to get to the bottom of the situation.
As a company we also abide by the Digital Service Act law and all similar laws.
To wrap this up, for those who were actioned, we have reversed most of the small number of temporary suspensions and bans. If you believe you were impacted and the action was not reversed, please issue an appeal and share your ticket number. We will pass it along to our customer service to investigate.
We hope this helps to alleviate any concern around our in-game chat moderation and your role-play experiences. We understand the importance of having safe spaces for a variety of role-play communities and want to continue to foster that in ESO.
karthrag_inak wrote: »Their system. Their responsibility. Their privilege.
Yep. Every aspect of our characters and game is owned by Zenimax, not us. They also have the right to investigate inappropriate behavior and actions.
Let's be real: they do not do such things randomly, as that wastes time. If they are checking a player's account, then a red flag or a report has occurred.
Edit: According to Kevin's comments that were quoted here, such a system would raise a red flag.
spartaxoxo wrote: »Wow, the amount of yes votes. Really surprised by that. Sad to see freedom given away so easily 😭
spartaxoxo wrote: »I'm not going to vote because the poll is biased.
My answer would be "sometimes" or "other," if those options were included.
I understand scanning for certain criminal things to protect themselves from liability and also because it could save lives. I don't have a problem with that.
But, I don't think AI should be looking for things like rude language. If someone else reports you, that's a different story.
JemadarofCaerSalis wrote: »karthrag_inak wrote: »Their system. Their responsibility. Their privilege.
Yep. Every aspect of our characters and game is owned by Zenimax, not us. They also have the right to investigate inappropriate behavior and actions.
Let's be real; they do not do such things randomly, as that wastes time. If they are checking a player's account, then a red flag or a report has occurred.
Edit: According to Kevin's comments that were quoted here, such a system would raise a red flag.
If they have AI that takes action, without any human seeing the messages, then I agree that it is too far.
However, if it isn't AI and rather the system just flags the messages and a human looks at it, then I have no issue with the messages being checked.
As said, it is ZOS's server, their game, and their TOS that they can change and enforce as they wish.
spartaxoxo wrote: »Wow, the amount of yes votes. Really surprised by that. Sad to see freedom given away so easily 😭
Again, the problem with the poll is that it's biased. I don't mean biased in a point-of-view sense; I mean that the set of responses offered forces people to select answers that may not reflect their true opinion. This is why even questions that seem like a simple yes/no usually include an "IDK/other" option.
In this case, many users may not agree with your example ban or banning for things like jokes. But, they'd agree with narrowly defined monitoring for criminal activity. That's a pretty common response.
So you're seeing comments like "No, unless it's criminal" or "Yes, but only for criminal stuff."
If you had included a survey option that captured this common response type, either by correctly predicting it or by including an "other" option, you might have received fewer "yes" answers.
The_Meathead wrote: »spartaxoxo wrote: »I'm not going to vote because the poll is biased.
My answer would be "sometimes" or "other," if those options were included.
I understand scanning for certain criminal things to protect themselves from liability and also because it could save lives. I don't have a problem with that.
But, I don't think AI should be looking for things like rude language. If someone else reports you, that's a different story.
And honestly, this^
Someone messaging their pal with "Hey F-word, what's hanging?" is obviously a ridiculous thing to act on, but there are probably legitimate outliers.
Humans should be making the final decisions, and those humans should take context, familiarity, slang, and humor into account.
spartaxoxo wrote: »Wow, the amount of yes votes. Really surprised by that. Sad to see freedom given away so easily 😭
Again, the problem with the poll is that it's biased. I don't mean biased in a point-of-view sense; I mean that the set of responses offered forces people to select answers that may not reflect their true opinion. This is why even questions that seem like a simple yes/no usually include an "IDK/other" option.
In this case, many users may not agree with your example ban or banning for things like jokes. But, they'd agree with narrowly defined monitoring for criminal activity. That's a pretty common response.
So you're seeing comments like "No, unless it's criminal" or "Yes, but only for criminal stuff."
If you had included a survey option that captured this common response type, either by correctly predicting it or by including an "other" option, you might have received fewer "yes" answers.
We can choose yes or no; the question is a yes or no question with nothing to push a player toward either choice. That is unbiased by definition.
Microsoft Copilot wrote: Forced-choice bias. When respondents are only given a limited set of options (like "yes" or "no") without the opportunity to express other opinions or nuances, it can skew the results and not accurately reflect the respondents' true feelings or thoughts.