Not really. In HR and staffing, "offshore" specifically means countries like India and the Philippines.
I manage offshore resources quite frequently; in global IT it's common to leverage offshore resources because of the lower costs. It's a standard outsourcing practice.
My firm staffs content moderation and the resources largely come from offshore, but there are a few local call centers that contract out work.
Hi All,
We want to follow up on this thread regarding moderation tools and how this intersects with the role-play community. First, thank you for your feedback and raising your concerns about some recent actions we took due to identified chat-based Terms of Service violations. Since you all raised these concerns, we wanted to provide a bit more insight and context to the tools and process.
As with any online game, our goal is to make sure you all can have fun while making sure bad actors do not have the ability to cause harm. To achieve this, our customer service team uses tools to check for potentially harmful terms and phrases. No action is taken at that point. A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring. A human is always in control of the final call of an action and not an AI system.
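The workflow described here (automated scan flags terms, a human reviews full context, a human makes every final call) can be sketched roughly as follows. This is a minimal illustration of a "flag, then human review" pipeline under my own assumptions; the term list, class names, and queue structure are all hypothetical and not ZOS's actual implementation:

```python
# Hypothetical sketch: the automated pass only queues matches for a
# human; it never takes action on its own.
from dataclasses import dataclass, field

FLAGGED_TERMS = {"scam", "threat"}  # hypothetical watchlist


@dataclass
class ReviewItem:
    message: str
    matched_terms: set


@dataclass
class ModerationQueue:
    pending: list = field(default_factory=list)

    def scan(self, message: str) -> None:
        """Automated pass: flag only, take no action."""
        matches = {t for t in FLAGGED_TERMS if t in message.lower()}
        if matches:
            self.pending.append(ReviewItem(message, matches))

    def human_review(self, decide) -> list:
        """A human evaluates full context and makes every final call."""
        actions = [decide(item) for item in self.pending]
        self.pending.clear()
        return actions


queue = ModerationQueue()
queue.scan("this looks like a scam")
queue.scan("nice weather in Skyrim")
# Only the first message is queued; nothing is actioned automatically.
actions = queue.human_review(lambda item: "no action")
```

The key design point the post is making is that `scan` has no side effect beyond queueing: any warning, suspension, or ban would only come out of `human_review`.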
That being said, we have been iterating on some processes recently and are still learning and training on the best way to use these tools, so there will be some occasional hiccups. But we want to stress a few core points.
- We are by no means trying to disrupt or limit your role-play experiences or general discourse with friends and guildmates. You should have confidence that your private role-play experiences and conversations are yours and we are not looking to action anyone engaging in consensual conversations with fellow players.
- The tools used are intended to be preventative, and alert us to serious crimes, hate speech, and extreme cases of harm.
- To reiterate, no system is auto-banning players. If an action does occur, it’s because one of our CS agents identified something concerning enough to action on. That can always be appealed through our support ticketing system. And in an instance where you challenge the appeal process, please feel free to flag here on the forum and we can work with you to get to the bottom of the situation.
- As a company we also abide by the Digital Services Act (DSA) and all similar laws.
To wrap this up, for those who were actioned, we have reversed most of the small number of temporary suspensions and bans. If you believe you were impacted and the action was not reversed, please issue an appeal and share your ticket number. We will pass it along to our customer service to investigate.
We hope this helps to alleviate any concern around our in-game chat moderation and your role-play experiences. We understand the importance of having safe spaces for a variety of role-play communities and want to continue to foster that in ESO.
katanagirl1 wrote: »So the profanity filter and the reporting system aren’t enough I guess. Sounds like a zero tolerance policy that even if everyone is okay with those words you still can’t say them.
Congratulations ZOS, you have already effectively killed zone chat for all reasons other than gold sellers, guild recruiters, and selling in zone chat. Seriously, no one is saying anything anymore. Those are the things everyone really hates, too. Zone chat could be entertaining and it could sometimes be horrible. I can’t really defend it other than that it made the game feel more alive. Now it’s like there is no one else on.
It’s not just zone chat that is the problem now, from what I have read here. Even private chat can get you a ban.
Lately there have been a lot of decisions made that have upset the playerbase. I hope this is not the final nail in the coffin. I personally don’t have to worry about typing anything offensive myself but I’m an outlier. This is a multiplayer game and I do have to rely on others to get some things done. If everyone leaves that will be problematic.
DenverRalphy wrote: »katanagirl1 wrote: »So the profanity filter and the reporting system aren’t enough I guess. Sounds like a zero tolerance policy that even if everyone is okay with those words you still can’t say them.
Congratulations ZOS, you have already effectively killed zone chat for all reasons other than gold sellers, guild recruiters, and selling in zone chat. Seriously, no one is saying anything anymore. Those are the things everyone really hates, too. Zone chat could be entertaining and it could sometimes be horrible. I can’t really defend it other than that it made the game feel more alive. Now it’s like there is no one else on.
It’s not just zone chat that is the problem now, from what I have read here. Even private chat can get you a ban.
Lately there have been a lot of decisions made that have upset the playerbase. I hope this is not the final nail in the coffin. I personally don’t have to worry about typing anything offensive myself but I’m an outlier. This is a multiplayer game and I do have to rely on others to get some things done. If everyone leaves that will be problematic.
ZoS hasn't changed anything. It's the same system that's been in place since year one.
That being said, we have been iterating on some processes recently and are still learning and training on the best way to use these tools, so there will be some occasional hiccups.
HatchetHaro wrote: »As with any online game, our goal is to make sure you all can have fun while making sure bad actors do not have the ability to cause harm. To achieve this, our customer service team uses tools to check for potentially harmful terms and phrases. No action is taken at that point. A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring. A human is always in control of the final call of an action and not an AI system.
If I might make a suggestion: since the automated checks will always flag these "potentially harmful words or phrases," your customer support representatives will have to check through a large number of messages, which in turn increases the likelihood of misinterpreting chat messages and sending out undeserved warnings, suspensions, or bans.
Instead of having automated checks flood the system with false positives, perhaps it'd be better to have the players themselves determine whether a message was harmful. For example, if a player receives a potentially harmful message, they then can determine for themselves whether or not it was harmful to them, and then have the ability to flag those phrases themselves for customer support to review and take action on. I'm thinking they can right click the player's name in the chat for that option. We can call it the "Report Player" button!
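The player-driven flow proposed above could look roughly like this. This is a hypothetical sketch of my own, not an actual ESO system or API; the class and method names are invented for illustration:

```python
# Hypothetical sketch of player-initiated reporting: nothing reaches
# customer support unless the recipient of a message flags it, so
# reviewers only see messages a real player considered harmful.
from collections import deque


class ReportInbox:
    """Queue of player-submitted reports awaiting CS review."""

    def __init__(self):
        self.reports = deque()

    def report_player(self, reporter: str, offender: str, message: str):
        # Right click -> "Report Player": the recipient, not a bot,
        # decides the message was harmful to them.
        self.reports.append({"reporter": reporter,
                             "offender": offender,
                             "message": message})

    def next_for_review(self):
        # CS reviews only what players actually flagged, oldest first.
        return self.reports.popleft() if self.reports else None


inbox = ReportInbox()
inbox.report_player("katanagirl1", "GoldSeller42", "wts gold, whisper me")
case = inbox.next_for_review()
```

Compared with automated keyword scanning, this design trades earlier detection for a far lower false-positive rate, since every queued case starts from a human complaint.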
AngryPenguin wrote: »We already have a profanity filter and a report player option. We certainly didn't need the addition of a hyperactive and highly inaccurate AI system getting involved. Who ever heard of a robot getting offended anyway? Why would a robot care if someone insulted it? Would calling a robot a useless tin can make it cry?
AngryPenguin wrote: »We already have a profanity filter and a report player option. We certainly didn't need the addition of a hyperactive and highly inaccurate AI system getting involved. Who ever heard of a robot getting offended anyway? Why would a robot care if someone insulted it? Would calling a robot a useless tin can make it cry?
Considering it has no human sensitivities, a bot might actually be better than a human reviewer, who can be influenced by personal opinions, upbringing, culture, and other such factors. Wouldn't the real problem be that a bot also doesn't understand context, fiction vs. reality, jokes, etc.?
Anyway, ZOS says it's only auto-flagging, but then the cases get sent to a human reviewer. The question is why that process leads to so many wrong bans. If it's outsourced: Are the people making these decisions reliable? Do they understand the language well, including idioms and colloquialisms? Are they provided enough of the chat to be able to see context (one line only won't help)? Are they flooded with cases so much that they don't have enough time to review them properly?
How about we not call for closing a thread where discussions are still being had and questions asked? You have no way of knowing whether we're going to get more answers or not. So please don't try to shut down the conversation about a very serious matter.
I think the issue is done. We asked the question. We got our official answer.
Nothing will be done because ZOS is denying the accusations and has said it will not change anything.
Since this thread will do nothing more to enact change @ZOS_Kevin please close the thread.
All of this. We need answers. What we got was a vague admission of guilt and "assurance" that actual real people made these choices. How are we supposed to trust that they won't keep making bad choices or letting personal opinion on what they're reading sway those choices?
endorphinsplox wrote: »Hi All,
We want to follow up on this thread regarding moderation tools and how this intersects with the role-play community. First, thank you for your feedback and raising your concerns about some recent actions we took due to identified chat-based Terms of Service violations. Since you all raised these concerns, we wanted to provide a bit more insight and context to the tools and process.
As with any online game, our goal is to make sure you all can have fun while making sure bad actors do not have the ability to cause harm. To achieve this, our customer service team uses tools to check for potentially harmful terms and phrases. No action is taken at that point. A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring. A human is always in control of the final call of an action and not an AI system.
That being said, we have been iterating on some processes recently and are still learning and training on the best way to use these tools, so there will be some occasional hiccups. But we want to stress a few core points.
- We are by no means trying to disrupt or limit your role-play experiences or general discourse with friends and guildmates. You should have confidence that your private role-play experiences and conversations are yours and we are not looking to action anyone engaging in consensual conversations with fellow players.
- The tools used are intended to be preventative, and alert us to serious crimes, hate speech, and extreme cases of harm.
- To reiterate, no system is auto-banning players. If an action does occur, it’s because one of our CS agents identified something concerning enough to action on. That can always be appealed through our support ticketing system. And in an instance where you challenge the appeal process, please feel free to flag here on the forum and we can work with you to get to the bottom of the situation.
- As a company we also abide by the Digital Services Act (DSA) and all similar laws.
To wrap this up, for those who were actioned, we have reversed most of the small number of temporary suspensions and bans. If you believe you were impacted and the action was not reversed, please issue an appeal and share your ticket number. We will pass it along to our customer service to investigate.
We hope this helps to alleviate any concern around our in-game chat moderation and your role-play experiences. We understand the importance of having safe spaces for a variety of role-play communities and want to continue to foster that in ESO.
So basically, ZOS is calling this a "hiccup", refusing to explain why they implemented this system with no warning, claiming the disciplinary actions are performed by a real person after manually reviewing flagged content, and that a real, living, breathing human being, employed by a major developer, genuinely believed that someone jokingly referring to a furnishing as looking like a certain bodily fluid was reasonable to categorize as a "serious crime, hate speech, or extreme case of harm"?
Not only that, but you aren't planning on stopping the NSA-style monitoring of what we once believed were private chats, won't give us a list of words we cannot say, and won't acknowledge that the casualties of this extreme error in judgement you call a "hiccup" outweigh any potential benefit it could have had, given that many cheaters, bullies, and scammers are still present throughout the game, completely unaffected by this change?
Yeah, I waited for a response, and this confirms that ZOS is just going down a road I cannot follow.