SuspensionDispersingAutomaton wrote: »The humans hired were no more useful than automated bots: they banned me for consensual chat among close friends and sent a generic, copy-pasted reply that I saw they sent to others affected, with absolutely no consideration for my explanation and reasoning that it was all consensual.
Hi All,
We want to follow up on this thread regarding moderation tools and how they intersect with the role-play community. First, thank you for your feedback and for raising your concerns about some recent actions we took due to identified chat-based Terms of Service violations. Since you raised these concerns, we wanted to provide a bit more insight and context into the tools and process.
As with any online game, our goal is to make sure you can all have fun while ensuring bad actors do not have the ability to cause harm. To achieve this, our customer service team uses tools to check for potentially harmful terms and phrases. No action is taken at that point. A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring. A human, not an AI system, is always in control of the final call on an action.
That being said, we have been iterating on some processes recently and are still learning and training on the best way to use these tools, so there will be some occasional hiccups. But we want to stress a few core points.
- We are by no means trying to disrupt or limit your role-play experiences or general discourse with friends and guildmates. You should have confidence that your private role-play experiences and conversations are yours and we are not looking to action anyone engaging in consensual conversations with fellow players.
- The tools used are intended to be preventative and to alert us to serious crimes, hate speech, and extreme cases of harm.
- To reiterate, no system is auto-banning players. If an action does occur, it’s because one of our CS agents identified something concerning enough to action on. That can always be appealed through our support ticketing system. And in an instance where you want to challenge the outcome of the appeal process, please feel free to flag it here on the forum and we can work with you to get to the bottom of the situation.
- As a company, we also abide by the Digital Services Act and all similar laws.
To wrap this up, for those who were actioned, we have reversed most of the small number of temporary suspensions and bans. If you believe you were impacted and the action was not reversed, please issue an appeal and share your ticket number. We will pass it along to our customer service to investigate.
We hope this helps to alleviate any concern around our in-game chat moderation and your role-play experiences. We understand the importance of having safe spaces for a variety of role-play communities and want to continue to foster that in ESO.
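For anyone wondering what a "flag first, a human decides" pipeline like the one described above might look like, here is a minimal sketch. The watchlist, function names, and outcomes are illustrative assumptions, not ZOS's actual system.

```python
# Minimal sketch of a "flag, then human review" moderation pipeline.
# Illustrative assumption only, NOT ZOS's actual system: the term list,
# names, and outcomes are made up for the example.

FLAGGED_TERMS = {"example_threat", "example_slur"}  # hypothetical watchlist

def scan_message(message: str) -> bool:
    """Automated pass: only decides whether a human should look. No action."""
    text = message.lower()
    return any(term in text for term in FLAGGED_TERMS)

def human_review(message: str, context: list[str]) -> str:
    """Stand-in for the human step; in the process described above, only a
    person makes the final call ('no_action', 'warn', 'suspend', 'ban')."""
    raise NotImplementedError("A person, not code, decides this step.")

def moderate(message: str, context: list[str]) -> str:
    if not scan_message(message):
        return "no_action"  # never even reaches a reviewer
    # Flagged messages are queued for review; nothing here auto-bans.
    return human_review(message, context)
```

The point of the split is that the automated pass can only route messages to a reviewer; nothing in the scan path can action an account.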
Dragonnord wrote: »From ZOS_Kevin:
"No action is taken at that point. A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring. A human is always in control of the final call of an action and not an AI system.
To reiterate, no system is auto-banning players. If an action does occur, it’s because one of our CS agents identified something concerning enough to action on."
Thank you, Kevin, because several people were blaming ZOS and AI for automatic banning.
As I said, people are becoming paranoid about AI.
I hope @StaticWave and @Heren are relieved now.
He literally confirmed their system had "hiccups" and that actions had to be undone. Whether these humans decided things based on snippets with broken context or something else led to this, the result was still that people were penalized without any wronged party. This is literally what people were bemoaning, whether an AI, a bot, or flawed human action is behind it.
The pipeline should be:
offence > report > action
and not:
consensual interaction > action > appeal > 24-96 h customer service processing time > work through bot response 1-4 > pray you get to play again
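To make the contrast concrete, here is a rough sketch of the two flows with made-up step names; it models the complaint above, not any real system.

```python
# Rough sketch of the two moderation flows above; step names are made up.

# Report-driven: a harmed party reports before anything happens.
report_driven = ["offence", "report", "human_review", "action"]

# Scan-driven: the action lands first and the burden shifts to the player.
scan_driven = [
    "consensual_interaction",
    "automated_flag",
    "action",
    "appeal",
    "24-96h_customer_service",
    "maybe_reinstated",
]

# In the first flow a false positive costs a reviewer a few minutes;
# in the second it costs the player access to their account.
print("action happens with no report at all:", "report" not in scan_driven)
```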
Are you deliberately trying to paint this in a good light because you picked your opinion before being fully aware of the context in the other thread?
Players being confronted with losing years' worth of commitment and money due to a malfunctioning system shouldn't be met with "seems it wasn't AI, so no harm done".
HatchetHaro wrote: »If I might make a suggestion: since the automated checks will always flag these "potentially harmful words or phrases", your customer support representatives will have to check through a large number of messages, which increases the likelihood of misinterpreting chat messages and sending out undeserved warnings, suspensions, or bans.
Instead of having automated checks flood the system with false positives, perhaps it'd be better to have the players themselves determine whether a message was harmful. For example, if a player receives a potentially harmful message, they can decide for themselves whether or not it was harmful to them, and then have the ability to flag those phrases for customer support to review and take action on. I'm thinking they could right-click the player's name in chat for that option. We can call it the "Report Player" button!
spartaxoxo wrote: »If the humans are going to automatically side with the AI, instead of being trained that it's often wrong, then it's not any better than AI doing the moderating.
I hope that this serves as a lesson to the relevant training teams.
Mod teams handle a large volume of upsetting speech; it is inevitable that they will make mistakes.
Dragonnord wrote: »spartaxoxo wrote: »If the humans are going to automatically side with the AI, instead of being trained that it's often wrong, then it's not any better than AI doing the moderating.
I hope that this serves as a lesson to the relevant training teams.
Mod teams handle a large volume of upsetting speech; it is inevitable that they will make mistakes.
And how do you (we) know a CS agent made a mistake when actioning a case?
Just because a banned player claims innocence, is that enough to say the CS agent made a mistake?
Also, we don't need, and shouldn't have to endure, another AI chat-bot monitoring thread, not least because we haven't yet seen any direct evidence of anything beyond the automated swear-word recognition we've always known about.
If there's anything new to add to the main thread on this, then fine, let's focus the discussion there; otherwise there'll be no follow-up from ZOS except the locking of the myriad other secondary threads on the subject.
Dragonnord wrote: »spartaxoxo wrote: »If the humans are going to automatically side with the AI, instead of being trained that it's often wrong, then it's not any better than AI doing the moderating.
I hope that this serves as a lesson to the relevant training teams.
Mod teams handle a large volume of upsetting speech; it is inevitable that they will make mistakes.
And how do you (we) know a CS agent made a mistake when actioning a case?
Just because a banned player claims innocence, is that enough to say the CS agent made a mistake?
Maybe because Kevin himself outright said they've reversed multiple actions made against accounts? Why would they have done that if they weren't making mistakes, or were just outright actioning accounts for things they didn't like that weren't actually offending the other involved party(ies)?
A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring.
"There are several posts now discussing this issue."
And who told you ZOS is doing anything?
"I can ask ZOS for that proof. How else will the question be answered?"
So I come to the forums saying ZOS is spying on us with a satellite, and you demand an explanation just because I, a random person, said that without providing any proof?
Also, it seems you didn't read the part above where I say: "You don't need AI to monitor that. It's been like that with ZOS for years. There are certain words that are flagged and can trigger an alert on ZOS' side."
"That doesn't make it acceptable. Or legal."
Every MMO has had that since forever.
"Transparency is the issue. Paying customers have the right to be able to make informed choices."
AI has nothing to do with it.
This also doesn't answer the question of what kind of program is scrubbing chat to begin with, or how it's affecting latency in the game. Please give a clearer answer about WHAT is sending "concerning" chats to support, and how resource-intensive the system being used is.
Since they have been logging chats for the last 10 years, and the program reviewing the chat logs doesn't even have to be running on the game servers, it is not unreasonable to guess that nothing has changed.
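For what it's worth, the latency question above assumes the scan runs in the live chat path. A scan over archived logs can run entirely off the game servers, along these lines; the directory name and term list are hypothetical.

```python
# Hypothetical sketch of an offline chat-log scan. It runs on a separate
# machine against archived logs, so it adds no latency to game servers.
# The "chat_logs" directory and the term list are made up for the example.
from pathlib import Path

FLAGGED_TERMS = {"example_term"}

def scan_log(path: Path) -> list[str]:
    """Return the archived chat lines containing any flagged term."""
    hits = []
    for line in path.read_text(encoding="utf-8", errors="replace").splitlines():
        if any(term in line.lower() for term in FLAGGED_TERMS):
            hits.append(line)
    return hits

if __name__ == "__main__":
    for log_file in sorted(Path("chat_logs").glob("*.log")):
        for hit in scan_log(log_file):
            print(f"{log_file.name}: {hit}")  # queued for human review
```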
"offshore resources reading chats"?
Since they have been logging chats for the last 10 years, and the program reviewing the chat logs doesn't even have to be running on the game servers, it is not unreasonable to guess that nothing has changed.
I'd find it interesting to know whether something changed or not. Are the many complaints right now just a coincidence? Or was the filter updated and the rules made stricter? An update is possible, after all. Then again, I'm not sure whether they'll tell us.
Everyone is "offshore" to someone.
I am not sure where this "offshore" thing is coming from, except from the player above. Does anyone know what they are talking about?