Not really. In HR and staffing, offshore specifically means countries like India and the Philippines.
I manage offshore resources quite frequently; leveraging them is a common practice in global IT because of the lower costs. It's standard outsourcing.
My firm staffs content moderation and the resources largely come from offshore, but there are a few local call centers that contract out work.
Hi All,
We want to follow up on this thread regarding moderation tools and how they intersect with the role-play community. First, thank you for your feedback and for raising your concerns about some recent actions we took due to identified chat-based Terms of Service violations. Since you all raised these concerns, we wanted to provide a bit more insight and context into the tools and the process.
As with any online game, our goal is to make sure you all can have fun while making sure bad actors do not have the ability to cause harm. To achieve this, our customer service team uses tools to check for potentially harmful terms and phrases. No action is taken at that point. A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring. A human, not an AI system, is always in control of the final call on any action.
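(For illustration only, and not ZOS's actual tooling: a minimal sketch of what a "flag for human review, never auto-action" pipeline like the one described above could look like. The watchlist, function names, and review queue here are invented for the example.)

```python
# Illustrative sketch only: an automated pass flags messages for review,
# and every decision is left to a human. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import List

# Hypothetical watchlist of terms that trigger a review, never an automatic action.
WATCHLIST = {"example_harmful_term"}

@dataclass
class FlaggedMessage:
    sender: str
    text: str
    surrounding_context: List[str] = field(default_factory=list)

# Flagged messages wait here until a customer service agent reviews them.
review_queue: List[FlaggedMessage] = []

def scan_message(sender: str, text: str, context: List[str]) -> None:
    """Automated step: flag only. No suspension or ban is issued here."""
    if any(term in text.lower() for term in WATCHLIST):
        review_queue.append(FlaggedMessage(sender, text, context))

def human_review(msg: FlaggedMessage, agent_decision: str = "no_action") -> str:
    """Manual step: a CS agent reads the full context and makes the final call.

    The decision (e.g. "no_action", "warn", "suspend") always comes from the
    human agent; this function just records it, and any resulting action can
    still be appealed through the ticketing system.
    """
    return agent_decision
```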
That being said, we have been iterating on some processes recently and are still learning and training on the best way to use these tools, so there will be some occasional hiccups. But we want to stress a few core points.
- We are by no means trying to disrupt or limit your role-play experiences or general discourse with friends and guildmates. You should have confidence that your private role-play experiences and conversations are yours and we are not looking to action anyone engaging in consensual conversations with fellow players.
- The tools used are intended to be preventative and to alert us to serious crimes, hate speech, and extreme cases of harm.
- To reiterate, no system is auto-banning players. If an action does occur, it’s because one of our CS agents identified something concerning enough to action on. That can always be appealed through our support ticketing system. And if you want to challenge the outcome of the appeal process, please feel free to flag it here on the forum and we can work with you to get to the bottom of the situation.
- As a company, we also abide by the Digital Services Act and all similar laws.
To wrap this up: for those who were actioned, we have reversed most of the small number of temporary suspensions and bans. If you believe you were impacted and the action was not reversed, please submit an appeal and share your ticket number. We will pass it along to our customer service team to investigate.
We hope this helps to alleviate any concern around our in-game chat moderation and your role-play experiences. We understand the importance of having safe spaces for a variety of role-play communities and want to continue to foster that in ESO.
katanagirl1 wrote: »So the profanity filter and the reporting system aren’t enough I guess. Sounds like a zero tolerance policy that even if everyone is okay with those words you still can’t say them.
Congratulations ZOS, you have effectively killed zone chat for everything other than gold sellers, guild recruiters, and people selling in zone chat. Seriously, no one is saying anything anymore. Those are the things everyone really hates, too. Zone chat could be entertaining and it could sometimes be horrible. I can’t really defend it other than that it made the game feel more alive. Now it’s like there is no one else on.
It’s not just zone chat that is the problem now, from what I have read here. Even private chat can get you a ban.
Lately there have been a lot of decisions made that have upset the playerbase. I hope this is not the final nail in the coffin. I personally don’t have to worry about typing anything offensive myself but I’m an outlier. This is a multiplayer game and I do have to rely on others to get some things done. If everyone leaves that will be problematic.
DenverRalphy wrote: »katanagirl1 wrote: »So the profanity filter and the reporting system aren’t enough I guess. Sounds like a zero tolerance policy that even if everyone is okay with those words you still can’t say them.
ZoS hasn't changed anything. It's the same system that's been in place since year one.
HatchetHaro wrote: »As with any online game, our goal is to make sure you all can have fun while making sure bad actors do not have the ability to cause harm. To achieve this, our customer service team uses tools to check for potentially harmful terms and phrases. No action is taken at that point. A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring. A human, not an AI system, is always in control of the final call on any action.
If I might make a suggestion: since the automated checks will always flag these "potentially harmful words or phrases", your customer support representatives will have to check through a large number of messages, which increases the likelihood of misinterpreting chat messages and therefore of sending out undeserved warnings, suspensions, or bans.
Instead of having automated checks flood the system with false positives, perhaps it'd be better to have the players themselves determine whether a message was harmful. For example, if a player receives a potentially harmful message, they can then decide for themselves whether or not it was harmful to them, and flag those phrases themselves for customer support to review and take action on. I'm thinking they can right-click the player's name in the chat for that option. We can call it the "Report Player" button!
AngryPenguin wrote: »We already have a profanity filter and a report player option. We certainly didn't need the addition of a hyperactive and highly inaccurate AI system getting involved. Who ever heard of a robot getting offended anyway? Why would a robot care if someone insulted it? Would calling a robot a useless tin can make it cry?
Considering it has no human sensitivities, a bot might actually even be better than a human reviewer, who could be influenced by personal opinions, upbringing, culture, and other such factors. Wouldn't the problem rather be that it also doesn't understand context, fiction vs. reality, jokes, etc.?
Anyway, ZOS says it's only auto-flagging, but then the cases get sent to a human reviewer. The question is why that process leads to so many wrong bans. If it's outsourced: Are the people making these decisions reliable? Do they understand the language well, including idioms and colloquialisms? Are they provided enough of the chat to be able to see context (one line only won't help)? Are they flooded with cases so much that they don't have enough time to review them properly?
How about we not call for closing a thread where discussions are still being had and questions asked? You have no way of knowing whether we're going to get more answers or not. So please don't try to shut down the conversation of a very serious matter.
I think the issue is done.
We asked the question. We got our official answer.
Nothing will be done because ZOS is denying the accusations and has said it will not change anything.
Since this thread will do nothing more to enact change, @ZOS_Kevin please close the thread.
All of this. We need answers. What we got was a vague admission of guilt and "assured" that actual real people made these choices. How are we supposed to trust that they won't keep making bad choices or letting personal opinion on what they're reading make choices?
endorphinsplox wrote: »
Hi All, [ZOS_Kevin's full statement, quoted above; snipped for brevity]
So basically, ZOS is calling this a "hiccup", refusing to explain why they implemented this system with no warning, claiming the disciplinary actions are performed by a real person after manually reviewing flagged content, and that a real, living, breathing human being, employed by a major developer, genuinely believed that someone jokingly referring to a furnishing as looking like a certain bodily fluid was reasonable to categorize as a "serious crime, hate speech, or extreme case of harm"?
Not only that, but you aren't planning on stopping the NSA-style monitoring of what we once believed were private chats, won't give us a list of words we cannot say, and won't acknowledge that the casualties of this extreme error in judgement you call a "hiccup" outweigh any potential benefit it could have had, given that many cheaters, bullies, and scammers are still present throughout the game, completely unaffected by this change?
Yeah, I waited for a response, and this confirms that ZOS is just going down a road I cannot follow.
Dragonnord wrote: »From ZOS_Kevin:
"No action is taken at that point. A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring. A human is always in control of the final call of an action and not an AI system.
To reiterate, no system is auto-banning players. If an action does occur, it’s because one of our CS agents identified something concerning enough to action on."
Thank you Kevin. Because several people were blaming ZOS and AI for automatically banning players.
As I said, people are becoming paranoid about AI.
I hope @StaticWave and @Heren are relieved now.
He literally confirmed their system had "hiccups" and actions had to be undone. Whether these humans decided things based on snippets with broken context or something else led to this, the result was still that people were penalized without there being a wronged party. This is literally what people were bemoaning, whether it's AI, a bot, or flawed human action behind it.
The pipeline should be:
offence > report > action
and not:
consensual interaction > action > appeal > 24-96 h customer service processing time > work through bot response 1-4 > pray you get to play again
Are you deliberately trying to paint this in a good light because you picked your opinion before being fully aware of the context in the other thread?
Players confronted with losing years' worth of commitment and money due to a malfunctioning system shouldn't be met with "seems it wasn't AI, so no harm done".
Human errors are everywhere in this life. The thing is, AI does not decide anything here; it just takes info and provides it to humans. So, again, people can stop being paranoid about AI.
No this is completely unacceptable. You did this without notifying your players and yet it has failed so badly we still noticed.
Hi All, [ZOS_Kevin's full statement, quoted above; snipped for brevity]
- We are by no means trying to disrupt or limit your role-play experiences or general discourse with friends and guildmates. You should have confidence that your private role-play experiences and conversations are yours and we are not looking to action anyone engaging in consensual conversations with fellow players.
I have a character I roleplay as a Hagraven; being a Hagraven, she enjoys plucking the eyes out of anything she’s killed to turn into eyeball stew or cake. I use emote chat with my guild when I do this to NPCs, and my main sometimes runs guild events where we kill slaver NPCs to release slaves. Would both of these examples still be deemed okay? It’s hard to know what we can do anymore.
Stafford197 wrote: »No this is completely unacceptable. You did this without notifying your players and yet it has failed so badly we still noticed.
The Elder Scrolls Online is rated M and already has the following precautions in place:
• A system to automatically censor explicit words or phrases. This can be toggled on/off in our Options.
• A system to report other players for their behavior if we feel bothered by them.
Your new system is an automatic Report tool which reads text from both public and private chats. The reported content is then acted on by overworked human CSRs who have incomplete context and a tight schedule. It is no surprise that tons of innocent players have received wrongful account bans over this.
Furthermore, attempting to appeal an account ban is a challenge in itself. The process often involves weeks of frustrating discourse with automated messages as ZOS conducts investigations. Even if everything works out, you’ll still miss out on limited time events and even paid ESO+ subscription time. ZOS provides zero compensation for wrongful bans. (the transmute event PTS/Live situation was a unique exception)
Instead of moderating zone chats to auto-flag gold sellers and bots, you are using this system to police private chats between consenting adults. Thanks for clarifying how this works Kevin, but it certainly does not instill any confidence.
spartaxoxo wrote: »DenverRalphy wrote: »katanagirl1 wrote: »So the profanity filter and the reporting system aren’t enough I guess. Sounds like a zero tolerance policy that even if everyone is okay with those words you still can’t say them.
ZoS hasn't changed anything. It's the same system that's been in place since year one.
Kevin stated they are working with new tools that they are still training on. Here's that part of the quote, snipped for brevity: "That being said, we have been iterating on some processes recently and are still learning and training on the best way to use these tools, so there will be some occasional hiccups."
DenverRalphy wrote: »spartaxoxo wrote: »Kevin stated they are working with new tools that they are still training on. Here's that part of the quote, snipped for brevity: "That being said, we have been iterating on some processes recently and are still learning and training on the best way to use these tools, so there will be some occasional hiccups."
That's just saying that they're consistently reviewing their practices of how to review and recognize human behavioural intention and interactions. Not that they're training on a new system. There's no mention of "new tools" in that statement.
The same way any corporate HR department is constantly reviewing their practices (and thus constantly re-training their employees every year or two with mandatory attendance classes).
spartaxoxo wrote: »DenverRalphy wrote: »That's just saying that they're consistently reviewing their practices of how to review and recognize human behavioural intention and interactions. Not that they're training on a new system. There's no mention of "new tools" in that statement. The same way any corporate HR department is constantly reviewing their practices (and thus constantly re-training their employees every year or two with mandatory attendance classes).
"Iterating on some processes recently" reads to me as if they've come up with new tools using their old software. Like an update that came by refining the process and tools. That they are still learning, training, and expect hiccups as they learn and train, reinforces that.
That statement doesn't sound like routine training they've been doing all along.
The Massively article also noted that the TOS was updated in February.
https://massivelyop.com/2024/09/16/roleplaying-elder-scrolls-online-players-raise-red-flags-over-automated-chat-moderation/
DenverRalphy wrote: »spartaxoxo wrote: »"Iterating on some processes recently" reads to me as if they've come up with new tools using their old software. Like an update that came by refining the process and tools. That they are still learning, training, and expect hiccups as they learn and train, reinforces that. That statement doesn't sound like routine training they've been doing all along.
Iterating can also simply mean that applied practice has changed and it needs to be told/taught to the masses.