Idea for a solution:
Moving to speaking brainrot language fluently. Confuse the bots.
https://anythingtranslate.com/translators/brainrot-translator/
A lot of people don't maintain a puritan demeanor at all times.
Synapsis123 wrote: »Someone make an addon that changes what you type into chat into gibberish and only people who have the addon can see what you actually said. That way we can get around the chat bot blocking normal activity.
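The addon idea above boils down to a reversible text transform: scramble on send, unscramble on receive, so only people running the addon see the real message. A minimal Python sketch of the concept (hypothetical; ROT13 stands in for a real cipher, and a real ESO addon would be written against the game's Lua API, not Python):

```python
import codecs

def encode(msg: str) -> str:
    # Scramble outgoing chat so it reads as gibberish to anyone
    # without the addon. ROT13 is just a placeholder transform.
    return codecs.encode(msg, "rot13")

def decode(msg: str) -> str:
    # ROT13 is its own inverse, so decoding reuses the same transform.
    return codecs.encode(msg, "rot13")

original = "Meet at the usual spot tonight"
scrambled = encode(original)   # gibberish to everyone else
restored = decode(scrambled)   # readable again for addon users
assert restored == original
```

Of course, as noted below, chat full of apparent gibberish would likely just trip a spam filter instead.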
I_killed_Vivec wrote: »The geese will be flying east tonight
Are you a secret service agent passing on a coded message to your handler?
Synapsis123 wrote: »Someone make an addon that changes what you type into chat into gibberish and only people who have the addon can see what you actually said. That way we can get around the chat bot blocking normal activity.
Bet you get banned for that. Spamming, probably.
I_killed_Vivec wrote: »Indeed. It was one of the conspiracy theories, that private chat would be used for nefarious communications... and AI could detect this and rat us out to the feds
I think since it's a more complicated topic, they might have to confer with their experts (legal, programmers, ... everyone who is somehow involved) to make a correct statement. That can take some time.
By now, I'd expect them to tell us that there's now an automoderator (I'd suspect a keyword list rather than a true AI), and that since we agreed to the TOS, it's correct for them to enforce it. And that in case of faulty decisions, we can message them to review the case.
Beyond that, I'd really like to see them acknowledge the problems we have named in this thread, e.g. a bot not recognizing context or cultural and language differences.
Hi All,
We want to follow up on this thread regarding moderation tools and how this intersects with the role-play community. First, thank you for your feedback and raising your concerns about some recent actions we took due to identified chat-based Terms of Service violations. Since you all raised these concerns, we wanted to provide a bit more insight and context to the tools and process.
As with any online game, our goal is to make sure you all can have fun while making sure bad actors do not have the ability to cause harm. To achieve this, our customer service team uses tools to check for potentially harmful terms and phrases. No action is taken at that point. A human then evaluates the full context of the terms or phrases to ensure nothing harmful or illegal is occurring. A human is always in control of the final call of an action and not an AI system.
That being said, we have been iterating on some processes recently and are still learning and training on the best way to use these tools, so there will be some occasional hiccups. But we want to stress a few core points.
- We are by no means trying to disrupt or limit your role-play experiences or general discourse with friends and guildmates. You should have confidence that your private role-play experiences and conversations are yours and we are not looking to action anyone engaging in consensual conversations with fellow players.
- The tools used are intended to be preventative, and alert us to serious crimes, hate speech, and extreme cases of harm.
- To reiterate, no system is auto-banning players. If an action does occur, it’s because one of our CS agents identified something concerning enough to action on. That can always be appealed through our support ticketing system. And in an instance where you challenge the appeal process, please feel free to flag here on the forum and we can work with you to get to the bottom of the situation.
- As a company we also abide by the Digital Services Act and all similar laws.
To wrap this up, for those who were actioned, we have reversed most of the small number of temporary suspensions and bans. If you believe you were impacted and the action was not reversed, please issue an appeal and share your ticket number. We will pass it along to our customer service to investigate.
We hope this helps to alleviate any concern around our in-game chat moderation and your role-play experiences. We understand the importance of having safe spaces for a variety of role-play communities and want to continue to foster that in ESO.
The humans hired were no more useful than automatic bots: they banned me for consensual chat among close friends and sent a generic copy-pasted reply, the same one I saw sent to others affected, with absolutely no consideration for my explanation and reasoning that it was all consensual.
It's obviously important to protect customers from harassment and other forms of abuse, but words shared between consenting adults should be off-limits for both ZOS staff and automated processes. Proactive enforcement has no place in group and private chats.
People who question whether ESO is financially successful should take this policy as proof that it is, because a struggling business would never consider alienating paying customers in this way.
Or, instead of eavesdropping and punishing after the fact, the automod could filter all communication in real time, simply rejecting anything that triggers it with a warning.
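A pre-send filter like that is conceptually simple: check the outgoing message against the flagged-term list and refuse delivery instead of logging and punishing later. A minimal Python sketch (the term list, function name, and warning text are all made up for illustration; a real system would need far more than substring matching):

```python
# Placeholder keyword list; a real moderation list would be maintained by staff.
FLAGGED_TERMS = {"example-slur", "example-threat"}

def try_send(message: str) -> str:
    # Reject the message before it ever reaches other players,
    # rather than recording it and sanctioning the sender afterwards.
    lowered = message.lower()
    for term in FLAGGED_TERMS:
        if term in lowered:
            return f"Message blocked: contains flagged term '{term}'."
    return "Message sent."

print(try_send("hello there"))           # prints "Message sent."
print(try_send("an example-slur here"))  # prints the blocked warning
```

The trade-off is the same one raised earlier in the thread: a keyword check with no sense of context would block harmless role-play lines just as readily as genuine abuse.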
"offshore resources reading chats"?