tomofhyrule wrote: »
manukartofanu wrote: »If what people here understand as AI is actually an LLM, then we should be glad that it's being fine-tuned. Those who deserve to be banned will be banned, and those who know how to communicate properly, even in roleplay, have nothing to worry about once the model is tuned correctly. The issue of context isn't even relevant anymore. LLMs are now capable of recognizing any context and explaining to you in plain human language what was said and how. What is appropriate or inappropriate is a matter of how the model is set up, not about context.
If I may ask one question regarding this statement:
If we assume that there is an LLM being trained to handle automodding of chats, at what point is it trained enough and appropriate to roll out? What ratio of false positives to true positives should we aim for?
You’re saying that eventually it will be able to understand what constitutes roleplay and what constitutes irl hate speech. But if it’s catching a bunch of false positives while it’s learning, is it actually worth it for the game to ban these people unilaterally and then spend the resources to have support manually unban them (and deal with a set of disgruntled players)? Or would it be better to lessen the punishment while the model is still learning, so as not to inconvenience those players (and keep them interacting with the game)?
For example, I know in the US, we will usually have a period of a few weeks or months if a law is newly enforced (e.g. if a permanent red light camera is added to a road) where people are given warnings before the fines are applied. Similarly, it should not be the norm to suddenly enforce the CoC and ban people automatically without warning on the actions of an as-yet-untrained software.
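The false-positive arithmetic behind this question can be made concrete with the base-rate effect: when genuine violations are rare, even a fairly accurate classifier produces mostly wrongful flags. A minimal sketch, where every number is assumed purely for illustration (none of these figures come from ESO):

```python
# Illustrative only: all figures below are assumptions, not real data.
daily_messages = 1_000_000       # assumed chat volume per day
violation_rate = 0.001           # assumed: 0.1% of messages truly violate the CoC
true_positive_rate = 0.95        # assumed: the model catches 95% of real violations
false_positive_rate = 0.01       # assumed: it wrongly flags 1% of innocent messages

violations = daily_messages * violation_rate        # 1,000 real violations
innocent = daily_messages - violations              # 999,000 innocent messages

true_flags = violations * true_positive_rate        # 950 correct flags
false_flags = innocent * false_positive_rate        # 9,990 wrongful flags

# Precision: of everything flagged, how much was actually a violation?
precision = true_flags / (true_flags + false_flags)
print(f"wrongful flags per day: {false_flags:.0f}")
print(f"precision: {precision:.1%}")
```

Under these made-up numbers, roughly nine out of ten flags are wrong, which is exactly why the warning-first rollout suggested above matters: each of those flags is either a support ticket or a wrongly punished player.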
manukartofanu wrote: »I see. In other projects, if a report leads to a ban, there is an automatic notification. I think that’s the right approach, as it lets you know whether it’s even worth bothering with reporting and whether such tickets are actually being reviewed.
You usually get a message that your report was received, but they won't tell you about the outcome. At least that's how it was a few years ago (I don't report often, only when people throw around racist, homophobic, or other slurs in general chat, because I really don't want that toxicity in this game).
I_killed_Vivec wrote: »The geese will be flying east tonight
MudcrabAttack wrote: »LLMs aren’t smart enough to decide when to ban paying customers, and probably won’t be any time soon. Especially when those paying customers spread the word to other paying customers
There’s still no word back from the ZOS team since Friday?
If this is something as simple as XYZ player was once reported by an actual human being, then placed in a sub-category of naughty list where the AI scrutinizes everything by that player, it might bother me less. I’ve seen evidence of that in the past, and if that’s the case here they could at least come out and say something to that effect.
manukartofanu wrote: »What I do think should be done, however, is that in addition to acknowledging receipt of the report, developers, in this case ZOS, should send a subsequent notification that they have now fully investigated the reported incident and dealt with it appropriately. You don't need to know the outcome, just that the report has been dealt with. There's no reason why that information can't be passed on.
This notification I'm talking about cannot violate any rules. It's just a notification that one of your reports was helpful. Without any details about who, for what, or what kind of ban was issued.
manukartofanu wrote: »If people are really getting banned because of the auto-filter, then that’s a problem, but it doesn’t seem to be the case here, since the complaints are about AI, and the auto-filter isn’t AI.
What people are describing doesn't sound like "AI" as such, just sets of key words that are flagged automatically.
Bit of an aside, but there was an interesting issue when a British newspaper brought in an American comment system that seemed to have been set up by time-travelling puritans from the 18th century. One comment out of every two was being automatically binned at one point because it got so scandalised. People resorted to saying "gosh" a lot because perfectly normal words in British English were suddenly brands of sin.
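The failure mode described above is the classic "Scunthorpe problem": a naive substring blocklist flags innocent words that merely happen to contain a banned string. A minimal sketch (the blocklist entries are invented for illustration, not taken from any real product), plus the usual whole-word first fix, which still ignores context entirely:

```python
import re

# Invented blocklist for illustration only.
BLOCKLIST = ["ass", "hell", "cock"]

def is_flagged(message: str) -> bool:
    """Naive filter: flag if any blocklisted string appears anywhere,
    even buried inside a longer, perfectly innocent word."""
    text = message.lower()
    return any(bad in text for bad in BLOCKLIST)

def is_flagged_wordwise(message: str) -> bool:
    """Whole-word matching: the usual first fix for the Scunthorpe problem."""
    text = message.lower()
    return any(re.search(rf"\b{re.escape(bad)}\b", text) for bad in BLOCKLIST)

print(is_flagged("What a classic, gosh!"))           # True  -- "classic" contains "ass"
print(is_flagged_wordwise("What a classic, gosh!"))  # False -- no whole-word hit
```

Even the whole-word version only fixes accidental substring hits; it still cannot tell a quoted slur from a directed one, which is the gap the LLM-moderation debate in this thread is about.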
Ingel_Riday wrote: »What people are describing doesn't sound like "AI" as such, just sets of key words that are flagged automatically.
Bit of an aside, but there was an interesting issue when a British newspaper brought in an American comment system that seemed to have been set up by time-travelling puritans from the 18th century. One comment out of every two was being automatically binned at one point because it got so scandalised. People resorted to saying "gosh" a lot because perfectly normal words in British English were suddenly brands of sin.
This doesn't surprise me in the least. I love my country, but the United States is rooted in Puritanism. It flares up in waves, such as in the current time period.
This is an M-rated game that encourages socialization. Having a goose-stomping gestapo AI-bot / keyword-flagging program monitoring every single message for wrongthink and throwing out suspensions and warnings for the slightest bit of perceived naughtiness is NOT going to promote a social space. It's going to muzzle most of the ever-dwindling community.
"Oh, but we're doing it for the right reasons! We're protecting the marginalized and promoting a safe, diverse, inclusive, equity space for everyone!"
No, you're not. You're promoting a culture of fear, where people are afraid to speak to each other openly lest they be punished. Where the most you're going to say to each other is "gg" and "tyfg." I mute my microphone on my PlayStation 5 at all times, because they're trigger happy about the slightest slip-up. I have no mic on my Xbox Series X because they scan your audio and punish you for the slightest slip-up. Heaven forbid I die and say F&@& by accident, or forget myself and tell a risque joke to my friend of 20 years during a co-op session.
But hey, for the best, right? I wouldn't want my private conversation with my best friend to make some random stranger who can't even hear it feel excluded via magic, or think that I don't value whatever special cause he or she is identifying with this month. You know, because I giggled about it during a private conversation between me and my friend that said individual HAD NO ACCESS TO.
Never you mind that this particular game is also full of murder, torture, and has a literal god of *** as the big bad for the base game. Even trying to be clean in chat, how can I talk about the actual content in the game itself without getting in trouble for objectionable content?
*sigh* I'm glad the tech sector is hurting right now. Maybe enough financial pain will snap these people out of their trance for a spell. My ancestors were actual Puritans, for real. It's a bad thing to be. Trust me, you're not winding up on the right side of history going down this path. There's a reason my ancestors are literally made fun of in every junior high American history class. :-/
I'm going to keep an eye on this one as well. AI monitoring chat? What is that? That's a thing now?
Ingel_Riday wrote: »*sigh* I'm glad the tech sector is hurting right now. Maybe enough financial pain will snap these people out of their trance for a spell. My ancestors were actual Puritans, for real. It's a bad thing to be. Trust me, you're not winding up on the right side of history going down this path. There's a reason my ancestors are literally made fun of in every junior high American history class. :-/
Oh, I get you. My ancestors arrived on this continent with the Mass Bay Colony. This country has been dealing not only with the "puritan ethic" but also the "mid-Victorian ethic" ever since. Seriously? 400 years, and we STILL can't just move into the real world, not to mention the 21st century?
*sigh*
I understand all the counterarguments highlighted in the previous pages/posts, and I agree with most of them, but I would like to bring up a different aspect that shouldn't be forgotten: an AI check would have the enormous advantage that, in the case of unacceptable chat, the counter-reaction would be immediate.
Regularly (sadly, really regularly) I witness zone chats that are racist and/or discriminating and/or humiliating, up to the absolutely unacceptable (some days ago, jokes about kids). In such cases, no context can be a justification: the chat should be stopped at once.
However, as it is now, no matter what we report, nothing happens. The chat just keeps going. There seems to be no emergency response system, and I really wish there were one. For me, it's a priority.
Because of this, I believe an AI check could be the right tool. Of course, there will quite probably be a phase of adaptation, where the system will need to be tuned properly based on the errors made and on experience. It cannot work perfectly on the first try. But for me, the main point is the possibility of stopping unacceptable chats immediately; that should be a priority, and I therefore see it as a supportive tool.
Not sure if you are playing on console, but on PC reporting a player will block them and a blocked player's chat won't appear for you. This is not an argument at all, unless you somehow managed to fill your blocked players list with 100 people, in which case the thing you should be arguing for is an increase in the amount of players you can block. Arguing for this type of automated surveillance state that's prying into private conversations just because it can take immediate action in some other case is crazy. The immediate action you seek is the block feature.
Please... ZOS can't even fix the servers... and you're asking about tacking on AI? Currently there's no evidence AI is even present in this game. If it actually WAS added, I'm sure it would be terrible, as most AI is.
Apparently, another escalation of censorship is taking place now. This already happened a couple of years ago, and it seemed the grip had loosened, but now we are seeing a deterioration in players' mood in many areas. I also received a warning for an innocent phrase the other day.
Can someone also tell me which aforementioned Twitch streamer got banned for AI mod (and over what)?
No one is going to tell you and risk getting banned themselves. There is a screenshot of the stream about half a dozen posts above yours, and you can go and check it out for yourself.
I mean, the screenshot did not show the word, but I DID check the stream and found out (the streamer did mention what he said in the comments).