
How will the AI that monitors our in-game chats affect RP?

  • I_killed_Vivec
    I_killed_Vivec
    ✭✭✭✭✭
    ✭✭
    The geese will be flying east tonight ;)
  • manukartofanu
    manukartofanu
    ✭✭✭✭
    If what people here understand as AI is actually an LLM, then we should be glad that it's being fine-tuned. Those who deserve to be banned will be banned, and those who know how to communicate properly, even in roleplay, have nothing to worry about once the model is tuned correctly. The issue of context isn’t even relevant anymore. LLMs are now capable of recognizing any context and explaining to you in plain human language what was said and how. What is appropriate or inappropriate is a matter of how the model is set up, not about context.

    If I may ask one question regarding this statement:

    If we assume that there is an LLM being trained to handle automodding of chats, at what point is it trained enough and appropriate to roll it out? What ratio of false positives to true positives should we aim for?

    You’re saying that eventually it will be able to understand what constitutes roleplay and what constitutes IRL hate speech. But if it’s catching a bunch of false positives while it’s learning, is it actually acceptable for the game to ban those people unilaterally and then spend the resources to have support manually unban them (and deal with a set of disgruntled players)? Or would it be better to lessen the punishment while the model is still learning, so as not to inconvenience those players (and keep them interacting with the game)?

    For example, I know that in the US we will usually have a period of a few weeks or months when a law is newly enforced (e.g. if a permanent red light camera is added to a road) during which people are given warnings before fines are applied. Similarly, it should not be the norm to suddenly enforce the CoC and ban people automatically, without warning, on the verdict of as-yet-untrained software.

    I saw your previous text about the necessity of different punishments for different cases, and I share your point of view. I believe this is already in effect in the game. At least, that’s what I’ve heard. I’ve also heard that some people can get what is called a “social ban.” This is when you can play, but you can’t chat. I think this is a perfectly appropriate punishment for those who can’t communicate properly, and I don’t believe there’s any need to ban an entire account for something that’s simply written.

    So, answering the question about when an LLM is trained well enough to be released. I believe it’s ready when its maximum punishment is applied correctly 100% of the time, and the smaller errors occur with lighter punishments. I proceed from the assumption that no moderation system can ever punish people for 100% of their violations. Therefore, there’s no point in striving for complete alignment between violations and punishments. Accordingly, it’s always better to miss a violation than to punish someone unfairly. The environment around the LLM should be set up in such a way that there are no unnecessary bans or overly harsh punishments.

    As for training, the LLM shouldn’t be trained on a live server. It should be trained on training data, preferably on game logs from the past 10 years, and the correctness of its training should be checked on test data, where the outcome is already known. Only after all of that can it be tested on a live server, and even then, a person should be placed in charge with a convenient dashboard to monitor how and why the LLM is punishing players. You still can’t remove all the humans, but you can significantly reduce the moderation team.
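    The train-then-test workflow and the "no unfair maximum punishments" release criterion described above can be sketched in a few lines. Everything here is illustrative: `classify` is a hypothetical stand-in for the tuned model, and `test_logs` is made-up labeled data standing in for human-reviewed game logs with known outcomes.

```python
# Sketch: evaluating a chat-moderation model on held-out, human-labeled logs
# before any live rollout. `classify` is a stand-in for the real model.
from collections import Counter

# Held-out test messages with known human verdicts ("ok", "warn", "ban").
test_logs = [
    {"text": "I shall feast on your bones, mortal!", "human_label": "ok"},  # roleplay
    {"text": "gg everyone", "human_label": "ok"},
    {"text": "<actual slur targeting a player>", "human_label": "ban"},
    {"text": "mild insult after a duel", "human_label": "warn"},
]

def classify(text):
    """Hypothetical stand-in for the tuned model: returns 'ok', 'warn', or 'ban'."""
    if "slur" in text:
        return "ban"
    if "insult" in text:
        return "warn"
    return "ok"

def evaluate(logs):
    """Tally outcomes; the release criterion is zero false 'ban' verdicts."""
    tally = Counter()
    for entry in logs:
        predicted = classify(entry["text"])
        if predicted == "ban" and entry["human_label"] != "ban":
            tally["false_ban"] += 1      # unfair maximum punishment
        elif predicted == entry["human_label"]:
            tally["correct"] += 1
        else:
            tally["minor_error"] += 1    # lighter mismatches are tolerable
    return tally

results = evaluate(test_logs)
print(results)  # per the post: release only if results["false_ban"] == 0
```

    The design choice mirrors the argument in the post: missed violations and lighter-punishment errors are tolerated, but any false positive at the ban threshold blocks rollout.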
  • Tandor
    Tandor
    ✭✭✭✭✭
    ✭✭✭✭✭
    Syldras wrote: »
    You usually get a message that your report was received, but they won't tell you about the outcome. At least that's the way it had been a few years ago (I don't report often, only if people throw around racist, homophobic or other slurs in general chat - because I indeed don't want this toxicity in this game).
    I see. In other projects, if a report leads to a ban, there is an automatic notification. I think that’s the right approach, as it lets you know whether it’s even worth bothering with reporting and whether such tickets are actually being reviewed.

    These days I think the data sharing and privacy protections are such that I quite understand the inability of a company to tell you the actual outcome of an individual report, and having played MMOs for 25 years or so I don't recall a single developer who has done that. The most that has generally happened has been when a few developers have released occasional statistics along the lines of "in the last month we have perma-banned 1000 accounts for botting". The only exception I can recall is when a particular player - and sadly I don't recall the game - was so adamant on the forum of his complete innocence that the developer went to the extreme length on the forum of quoting his entire account history culminating in full details of his final offences - all the expressed sympathy for the player vanished instantly!

    What I do think should be done, however, is that in addition to acknowledging receipt of the report, developers - in this case ZOS - should send a subsequent notification that they have now fully investigated the incident reported and have dealt with it appropriately. You don't need to know the outcome, just that the report has been dealt with. There's no reason why that information can't be passed on.
  • Tandor
    Tandor
    ✭✭✭✭✭
    ✭✭✭✭✭
    The geese will be flying east tonight ;)

    Are you a secret service agent passing on a coded message to your handler?
  • manukartofanu
    manukartofanu
    ✭✭✭✭
    LLMs aren’t smart enough to decide when to ban paying customers, and probably won’t be any time soon. Especially when those paying customers spread the word to other paying customers

    There’s still no word back from the ZOS team since Friday?

    If this is something as simple as XYZ player was once reported by an actual human being, then placed in a sub-category of naughty list where the AI scrutinizes everything by that player, it might bother me less. I’ve seen evidence of that in the past, and if that’s the case here they could at least come out and say something to that effect.

    I also feel that this is more realistic than what we’re discussing here. Someone wrote something really bad to the wrong person and got reported. Since they weren’t banned immediately, they thought everything was fine, that it was allowed. Maybe they even wrote something really bad to a few more people while time passed. Then, when a live moderator, not an AI, got to the report a few weeks later and saw what this person had said and how many reports they’d received, they decided to ban them. And now the person is confused, trying to remember what they did wrong yesterday, because they only insulted a friend during RP, and they’re sure the friend didn’t report them. But what happened a week ago, or two, or three, they don’t even think about. What do you think, is this possible?
  • WynnGwynn
    WynnGwynn
    ✭✭✭


    "In that case I'd like to bring up something someone else did above me. Why wait going on 11 years to start handing out actions against accounts then? I will admit I've had my share of spicy RP. I've had my share of gruesome RP. No characters I've ever played use explicit language, or if they do, a lot of the time it's Old English like arse (which is used in-game) or something like "sod" in place of the f-word."


    The F word was actually in fairly regular use in the 1400's lol.
  • manukartofanu
    manukartofanu
    ✭✭✭✭
    Tandor wrote: »
    What I do think should be done, however, is that in addition to acknowledging receipt of the report, developers - in this case ZOS - should send a subsequent notification that they have now fully investigated the incident reported and have dealt with it appropriately. You don't need to know the outcome, just that the report has been dealt with. There's no reason why that information can't be passed on.

    This notification I'm talking about cannot violate any rules. It's just a notification that one of your reports was helpful. Without any details about who, for what, or what kind of ban was issued.
  • tomofhyrule
    tomofhyrule
    ✭✭✭✭✭
    ✭✭✭✭
    I don’t think the actual event, whether it happened or not, is the main issue anymore.

    People are now in fear that they could be handed a ban for an unreported conversation with consenting individuals because it triggers some software.

    Is this healthy for the game for players to have that fear? If not, then no amount of “I haven’t seen evidence yet so obviously it’s not a thing” or “who cares if innocent people get banned accidentally as long as the actual toxic people do as well” is going to help here.
  • Syldras
    Syldras
    ✭✭✭✭✭
    ✭✭✭✭✭
    I've been chatting as usual the past few days and nothing happened, btw.
    If people are really getting banned because of the auto-filter, then that’s a problem, but it doesn’t seem to be the case here, since the complaints are about AI, and the auto-filter isn’t AI.

    We don't know. People who aren't further involved with the topic call everything "AI" these days. Maybe it's indeed just the good old autofilter, just that for some offenses bans are now handed out automatically. I have seen no clear example yet.
    @Syldras | PC | EU
    The forceful expression of will gives true honor to the Ancestors.
    Sarayn Andrethi, Telvanni mage (Main)
    Darvasa Andrethi, his "I'm NOT a Necromancer!" sister
    Malacar Sunavarlas, Altmer Ayleid vampire
  • Northwold
    Northwold
    ✭✭✭✭✭
    What people are describing doesn't sound like "AI" as such, just sets of key words that are flagged automatically.

    Bit of an aside, but there was an interesting issue when a British newspaper bought in an American comment system that seemed to have been set up by time-travelling puritans from the 18th century. One comment out of every two was being automatically binned at one point because the system got so scandalised. People resorted to saying "gosh" a lot because perfectly normal words in British English were suddenly brands of sin.
    Edited by Northwold on September 18, 2024 10:48PM
  • Ingel_Riday
    Ingel_Riday
    ✭✭✭✭✭
    Northwold wrote: »
    What people are describing doesn't sound like "AI" as such, just sets of key words that are flagged automatically.

    Bit of an aside, but there was an interesting issue when a British newspaper bought in an American comment system that seemed to have been set up by time-travelling puritans from the 18th century. One comment out of every two was being automatically binned at one point because the system got so scandalised. People resorted to saying "gosh" a lot because perfectly normal words in British English were suddenly brands of sin.

    This doesn't surprise me in the least. I love my country, but the United States is rooted in Puritanism. It flares up in waves, such as in the current time period.

    This is an M-rated game that encourages socialization. Having a goose-stomping gestapo AI-bot / key-word flagging program monitoring every single message for wrong-think and throwing out suspensions and warnings for the slightest bit of perceived naughtiness is NOT going to promote a social space. It's going to muzzle most of the ever-dwindling community.

    "Oh, but we're doing it for the right reasons! We're protecting the marginalized and promoting a safe, diverse, inclusive, equity space for everyone!"

    No, you're not. You're promoting a culture of fear, where people are afraid to speak to each other openly lest they be punished. Where the most you're going to say to each other is "gg" and "tyfg." I mute my microphone on my PlayStation 5 at all times, because they're trigger happy about the slightest slip-up. I have no mic on my Xbox Series X because they scan your audio and punish you for the slightest slip-up. Heaven forbid I die and say F&@& by accident, or forget myself and tell a risque joke to my friend of 20 years during a co-op session.

    But hey, for the best, right? I wouldn't want my private conversation with my best friend to make some random stranger who can't even hear it feel excluded via magic, or think that I don't value whatever special cause he or she is identifying with this month. You know, because I giggled about it during a private conversation between me and my friend that said individual HAD NO ACCESS TO.

    Never you mind that this particular game is also full of murder, torture, and has a literal god of *** as the big bad for the base game. Even trying to be clean in chat, how can I talk about the actual content in the game itself without getting in trouble for objectionable content?

    *sigh* I'm glad the tech sector is hurting right now. Maybe enough financial pain will snap these people out of their trance for a spell. My ancestors were actual Puritans, for real. It's a bad thing to be. Trust me, you're not winding up on the right side of history going down this path. There's a reason my ancestors are literally made fun of in every junior high American history class. :-/
    Edited by Ingel_Riday on September 19, 2024 1:47AM
  • TaSheen
    TaSheen
    ✭✭✭✭✭
    ✭✭✭✭✭
    Northwold wrote: »
    What people are describing doesn't sound like "AI" as such, just sets of key words that are flagged automatically.

    Bit of an aside, but there was an interesting issue when a British newspaper bought in an American comment system that seemed to have been set up by time-travelling puritans from the 18th century. One comment out of every two was being automatically binned at one point because the system got so scandalised. People resorted to saying "gosh" a lot because perfectly normal words in British English were suddenly brands of sin.

    This doesn't surprise me in the least. I love my country, but the United States is rooted in Puritanism. It flares up in waves, such as in the current time period.

    This is an M-rated game that encourages socialization. Having a goose-stomping gestapo AI-bot / key-word flagging program monitoring every single message for wrong-think and throwing out suspensions and warnings for the slightest bit of perceived naughtiness is NOT going to promote a social space. It's going to muzzle most of the ever-dwindling community.

    "Oh, but we're doing it for the right reasons! We're protecting the marginalized and promoting a safe, diverse, inclusive, equity space for everyone!"

    No, you're not. You're promoting a culture of fear, where people are afraid to speak to each other openly lest they be punished. Where the most you're going to say to each other is "gg" and "tyfg." I mute my microphone on my PlayStation 5 at all times, because they're trigger happy about the slightest slip-up. I have no mic on my Xbox Series X because they scan your audio and punish you for the slightest slip-up. Heaven forbid I die and say F&@& by accident, or forget myself and tell a risque joke to my friend of 20 years during a co-op session.

    But hey, for the best, right? I wouldn't want my private conversation with my best friend to make some random stranger who can't even hear it feel excluded via magic, or think that I don't value whatever special cause he or she is identifying with this month. You know, because I giggled about it during a private conversation between me and my friend that said individual HAD NO ACCESS TO.

    Never you mind that this particular game is also full of murder, torture, and has a literal god of *** as the big bad for the base game. Even trying to be clean in chat, how can I talk about the actual content in the game itself without getting in trouble for objectionable content?

    *sigh* I'm glad the tech sector is hurting right now. Maybe enough financial pain will snap these people out of their trance for a spell. My ancestors were actual Puritans, for real. It's a bad thing to be. Trust me, you're not winding up on the right side of history going down this path. There's a reason my ancestors are literally made fun of in every junior high American history class. :-/

    Oh, I get you. My ancestors arrived on this continent with the Mass Bay Colony. This country has been dealing not only with the "puritan ethic" but also the "mid victorian ethic" ever since. Seriously? 400 years, and we STILL can't just move into the real world not to mention the 21st century?

    *sigh*
    ______________________________________________________

    "But even in books, the heroes make mistakes, and there isn't always a happy ending." Mercedes Lackey, Into the West

    PC NA, PC EU (non steam)- four accounts, many alts....
  • yourhpgod
    yourhpgod
    ✭✭✭
    I'm going to keep an eye on this one as well. AI monitoring chat? What is that? That's a thing now?
    https://tiktok.com/@yourhpgod/video/7412553639924944159?is_from_webapp=1&sender_device=pc&web_id=7405052762109806122

    "Health tanking in Cyrodiil isn’t about glory—it’s about stepping up when no one else will. Someone has to stand their ground, and if it's going to be anyone, it might as well be me."
  • Ingel_Riday
    Ingel_Riday
    ✭✭✭✭✭
    yourhpgod wrote: »
    I'm going to keep an eye on this one as well. AI monitoring chat? What is that? That's a thing now?

    Been a Microsoft thing for a while, at least. They monitor anything you say or type on the Xbox Series X and, if you get flagged, you're in for a very bad time.

    Best bet is to just never use any in-game voice chat or text function whenever possible. You still have free speech on WhatsApp and/or Discord, so you can chat with your buddies via those (for now). Assuming that Microsoft is enforcing the same standard on ZOS and ESO, probably safe to assume that you should treat social interactions in this game like LinkedIn going forward.

    Expect this all to get worse before it gets better, too. For a while, Microsoft was floating around the idea of monitoring what you type in Microsoft Word for "hate speech and misinformation." You know, to strip you of your Office license if you violated their TOS by typing something they deemed wrongthink. It went over exceedingly poorly, and now Microsoft brags about your privacy and security while using Office instead. Turns out that people don't like Orwellian techno-dystopia stuff when it directly affects them. Who would have thunk it?
  • DreamyLu
    DreamyLu
    ✭✭✭✭✭
    I understand all the counters highlighted in the previous pages/posts, and I agree with most, but I would like to bring up a different aspect that shouldn't be forgotten: an AI check would have the - enormous - advantage that, in the case of an unacceptable chat, the counter-reaction would be immediate.

    Regularly - sadly: really regularly - I witness zone chats that are racist and/or discriminating and/or humiliating, up to the absolutely unacceptable (some days ago, jokes about kids). In such cases, no context can be a justification: the chat should be stopped straight away.
    However, as it is now, no matter what we report, nothing happens. The chat just keeps going. There seems to be no emergency response system, and I really wish there were one. For me, it's a priority.

    Because of this, I believe that an AI check could be the right tool. Of course, more than probably, there will be a phase of adaptation, where the system will need to be tuned properly based on errors and experience. It cannot work perfectly on the first try. But for me, the main aspect is the possibility of stopping unacceptable chats immediately; that should be a priority, and I see it therefore as a supportive tool.
    Edited by DreamyLu on September 19, 2024 4:08AM
    I'm out of my mind, feel free to leave a message... PC/NA
  • ArchangelIsraphel
    ArchangelIsraphel
    ✭✭✭✭✭
    ✭✭✭✭
    TaSheen wrote: »
    Northwold wrote: »
    What people are describing doesn't sound like "AI" as such, just sets of key words that are flagged automatically.

    Bit of an aside, but there was an interesting issue when a British newspaper bought in an American comment system that seemed to have been set up by time-travelling puritans from the 18th century. One comment out of every two was being automatically binned at one point because the system got so scandalised. People resorted to saying "gosh" a lot because perfectly normal words in British English were suddenly brands of sin.

    This doesn't surprise me in the least. I love my country, but the United States is rooted in Puritanism. It flares up in waves, such as in the current time period.

    This is an M-rated game that encourages socialization. Having a goose-stomping gestapo AI-bot / key-word flagging program monitoring every single message for wrong-think and throwing out suspensions and warnings for the slightest bit of perceived naughtiness is NOT going to promote a social space. It's going to muzzle most of the ever-dwindling community.

    "Oh, but we're doing it for the right reasons! We're protecting the marginalized and promoting a safe, diverse, inclusive, equity space for everyone!"

    No, you're not. You're promoting a culture of fear, where people are afraid to speak to each other openly lest they be punished. Where the most you're going to say to each other is "gg" and "tyfg." I mute my microphone on my PlayStation 5 at all times, because they're trigger happy about the slightest slip-up. I have no mic on my Xbox Series X because they scan your audio and punish you for the slightest slip-up. Heaven forbid I die and say F&@& by accident, or forget myself and tell a risque joke to my friend of 20 years during a co-op session.

    But hey, for the best, right? I wouldn't want my private conversation with my best friend to make some random stranger who can't even hear it feel excluded via magic, or think that I don't value whatever special cause he or she is identifying with this month. You know, because I giggled about it during a private conversation between me and my friend that said individual HAD NO ACCESS TO.

    Never you mind that this particular game is also full of murder, torture, and has a literal god of *** as the big bad for the base game. Even trying to be clean in chat, how can I talk about the actual content in the game itself without getting in trouble for objectionable content?

    *sigh* I'm glad the tech sector is hurting right now. Maybe enough financial pain will snap these people out of their trance for a spell. My ancestors were actual Puritans, for real. It's a bad thing to be. Trust me, you're not winding up on the right side of history going down this path. There's a reason my ancestors are literally made fun of in every junior high American history class. :-/

    Oh, I get you. My ancestors arrived on this continent with the Mass Bay Colony. This country has been dealing not only with the "puritan ethic" but also the "mid victorian ethic" ever since. Seriously? 400 years, and we STILL can't just move into the real world not to mention the 21st century?

    *sigh*

    As a New Englander, the accuracy of this is painful.

    But when you think about it, 400 years isn't a very long time. The influences of the Victorian era, and those who were directly affected by it, would have still been around in the 1970s-1980s and even later; there are adults today who had grandparents and great-grandparents who lived through it. The Victorian era is still rather "young" in terms of world history, and in some ways still alive through children who have passed on certain ideas.

    Another problem, I think, is that the Victorian era and its ideals are frequently romanticized, and so its concepts of etiquette tend to resurface over and over again.
    Legends never die
    They're written down in eternity
    But you'll never see the price it costs
    The scars collected all their lives
    When everything's lost, they pick up their hearts and avenge defeat
    Before it all starts, they suffer through harm just to touch a dream
    Oh, pick yourself up, 'cause
    Legends never die
  • IncultaWolf
    IncultaWolf
    ✭✭✭✭✭
    [screenshot attachment: yu4zjnq857yg.png]

    This is real and is actually happening, I know people who were banned for making mature jokes to their friends in private whispers. Very concerning...
  • Ratzkifal
    Ratzkifal
    ✭✭✭✭✭
    ✭✭✭✭✭
    DreamyLu wrote: »
    I understand all the counters highlighted in the previous pages/posts, and I agree with most, but I would like to bring up a different aspect that shouldn't be forgotten: an AI check would have the - enormous - advantage that, in the case of an unacceptable chat, the counter-reaction would be immediate.

    Regularly - sadly: really regularly - I witness zone chats that are racist and/or discriminating and/or humiliating, up to the absolutely unacceptable (some days ago, jokes about kids). In such cases, no context can be a justification: the chat should be stopped straight away.
    However, as it is now, no matter what we report, nothing happens. The chat just keeps going. There seems to be no emergency response system, and I really wish there were one. For me, it's a priority.

    Because of this, I believe that an AI check could be the right tool. Of course, more than probably, there will be a phase of adaptation, where the system will need to be tuned properly based on errors and experience. It cannot work perfectly on the first try. But for me, the main aspect is the possibility of stopping unacceptable chats immediately; that should be a priority, and I see it therefore as a supportive tool.

    Not sure if you are playing on console, but on PC reporting a player will block them and a blocked player's chat won't appear for you. This is not an argument at all, unless you somehow managed to fill your blocked players list with 100 people, in which case the thing you should be arguing for is an increase in the amount of players you can block. Arguing for this type of automated surveillance state that's prying into private conversations just because it can take immediate action in some other case is crazy. The immediate action you seek is the block feature.
    This Bosmer was tortured to death. There is nothing left to be done.
  • DreamyLu
    DreamyLu
    ✭✭✭✭✭
    Ratzkifal wrote: »
    DreamyLu wrote: »
    I understand all the counters highlighted in the previous pages/posts, and I agree with most, but I would like to bring up a different aspect that shouldn't be forgotten: an AI check would have the - enormous - advantage that, in the case of an unacceptable chat, the counter-reaction would be immediate.

    Regularly - sadly: really regularly - I witness zone chats that are racist and/or discriminating and/or humiliating, up to the absolutely unacceptable (some days ago, jokes about kids). In such cases, no context can be a justification: the chat should be stopped straight away.
    However, as it is now, no matter what we report, nothing happens. The chat just keeps going. There seems to be no emergency response system, and I really wish there were one. For me, it's a priority.

    Because of this, I believe that an AI check could be the right tool. Of course, more than probably, there will be a phase of adaptation, where the system will need to be tuned properly based on errors and experience. It cannot work perfectly on the first try. But for me, the main aspect is the possibility of stopping unacceptable chats immediately; that should be a priority, and I see it therefore as a supportive tool.

    Not sure if you are playing on console, but on PC reporting a player will block them and a blocked player's chat won't appear for you. This is not an argument at all, unless you somehow managed to fill your blocked players list with 100 people, in which case the thing you should be arguing for is an increase in the amount of players you can block. Arguing for this type of automated surveillance state that's prying into private conversations just because it can take immediate action in some other case is crazy. The immediate action you seek is the block feature.

    I see your point but don't agree with you. It's not a matter of hiding it for self-comfort. It's a matter of stopping it when it derails beyond legal limits and goes against in-game policy as well. Blocking someone does nothing. That's passive.
    I'm out of my mind, feel free to leave a message... PC/NA
  • Dayhjawk
    Dayhjawk
    ✭✭✭
    Here's my two cents.

    If the AI is being used to "monitor chat" - global, private, guild, and otherwise - and is then issuing bans based on what it "finds", that's bad and I'm done.

    If it's being used as described above, but instead of issuing a ban it creates a report - covering the minutes before and after, whether private, guild, global, etc. - that a human then looks at before issuing a ban based on context, I am OK with it, kinda.

    Personally, the idea that AI is monitoring private (between guildmates, friends list) and guild chat feels to me like it's violating my rights and invading my personal space. If it's monitoring private chat during and moments after PvP, then I'm OK with it, considering how many death threats and toxic DMs come out of battlegrounds and PvP in general.

    It all comes down to how it's being used, and where the human element is. This was pushed out and, I feel, never disclosed to the public, so I feel like this is just an easy answer with no human element going to be involved. I am taking a long, hard look at whether I want to stay and continue playing. I already don't want to give any more money to this company, and with this on top making me feel like I'm being violated, I really do not want to be here. There are guild members who feel the same way, or who just don't like the idea that this was pushed out and never talked about.
  • Unfadingsilence
    Unfadingsilence
    ✭✭✭✭✭
    DigiAngel wrote: »
    Please... ZOS can't even fix the servers... and you're asking about tacking on AI? Currently there's no evidence AI is even present in this game. If it actually WAS added, I'm sure it would be terrible, as most AI is.

    Someone live on Twitch right now is claiming that AI banned him in ESO over a private message to a group member, and that the email response said AI picked up on the message.
  • LikiLoki
    LikiLoki
    ✭✭✭✭
    Apparently, another escalation of censorship is taking place now. This already happened a couple of years ago; it seemed the grip had loosened, but now we are seeing a deterioration in players' mood in many areas. I also received a warning for an innocent phrase the other day.
  • Idelise
    Idelise
    ✭✭✭✭
    LikiLoki wrote: »
    Apparently, another escalation of censorship is taking place now. This already happened a couple of years ago; it seemed the grip had loosened, but now we are seeing a deterioration in players' mood in many areas. I also received a warning for an innocent phrase the other day.

    Do you mind sharing the phrase? I am kinda curious what naughty words people apparently get warned over.
  • Hvíthákarl
    Hvíthákarl
    ✭✭✭
    Between the Leona fiasco, the companion paywall, this, and many other bad choices lately... I'm unsubbing and considering even taking a break from the game. This is a horrible choice that has been made several times in the past, and each time, whatever game decided to enforce AI monitoring ended up losing a noticeable chunk of its playerbase because of its draconian nature. Not to mention that it may eventually devolve into censoring things that aren't even NSFW (what if I decide to RP a drag queen and the filter decides drag is inherently sexual in nature, and thus bans me?)

    No thanks. I've got little spare time and don't want to invest much of it in a game that enables such malpractices.
  • royalwench
    royalwench
    ✭✭✭
    The fact that there’s been no official response to this speaks volumes and suggests it’s true. It’s made everyone in my roleplaying guild nervous about saying anything; if they keep this, it will kill off the roleplaying community.
  • Idelise
    Idelise
    ✭✭✭✭
    Can someone also tell me which aforementioned Twitch streamer got banned for AI mod (and over what)?
  • Grec1a
    Grec1a
    ✭✭✭
    Idelise wrote: »
    Can someone also tell me which aforementioned Twitch streamer got banned for AI mod (and over what)?

    No one is going to tell you, and risk getting banned themselves, when there is a screenshot of the stream about half a dozen posts above yours and you can go and check it out for yourself :p
    It's a tradition, or an old charter, or something...
  • Idelise
    Idelise
    ✭✭✭✭
    Grec1a wrote: »
    Idelise wrote: »
    Can someone also tell me which aforementioned Twitch streamer got banned for AI mod (and over what)?

    No one is going to tell you, and risk getting banned themselves, when there is a screenshot of the stream about half a dozen posts above yours and you can go and check it out for yourself :p

    I mean the screen did not say the word but I DID check the stream and found out (the Streamer did mention what he said in comments)
  • Arrodisia
    Arrodisia
    ✭✭✭✭✭
    Idelise wrote: »
    Grec1a wrote: »
    Idelise wrote: »
    Can someone also tell me which aforementioned Twitch streamer got banned for AI mod (and over what)?

    No one is going to tell you, and risk getting banned themselves, when there is a screenshot of the stream about half a dozen posts above yours and you can go and check it out for yourself :p

    I mean the screen did not say the word but I DID check the stream and found out (the Streamer did mention what he said in comments)

    Which stream is this? Link it, please. I read the other thread and there was no proof that this was the reason they were suspended, only a self-written text saying they were banned for this reason. That isn't proof.

    Only an email sent directly from the devs to the player can confirm this.
    Edited by Arrodisia on September 19, 2024 12:41PM
  • xylena_lazarow
    xylena_lazarow
    ✭✭✭✭✭
    ✭✭✭✭
    Meanwhile, malicious trolls who are clever enough to avoid specific language are still allowed to spam all the abusive offline hate tells they want, reporting these players over and over has done nothing.
    PC/NA || CP/Cyro || RIP soft caps
This discussion has been closed.