PrincessOfThieves wrote: »You are misrepresenting these laws. Nowhere does it say that everything must be E-rated in case a child gets access to it.
I live in the EU, and adult chats and websites are very much legal here even though there is of course a chance that a kid clicks on that "I am 18" button. There is a difference between trying to groom a child online or exposing them to adult content and, well, adult content and conversations existing.
You also conveniently ignored the second part of my message.
I'm misrepresenting nothing. I'm running out of ways to say the same thing in a way you understand.
Correct, the law doesn't require everything to be E-rated, BUT it requires ZOS to do everything they can to monitor their chat system and to take action where there is a concern about the nature of the content being exchanged; they are required to assess whether the fantasy violence being portrayed can lead to real-world harm, and to risk-assess what impact all this may have on children playing the game - something that ZOS do not control, and something that bad parenting is not an excuse for ZOS to use. THEN ZOS has to tell its players how it will achieve that by updating their EULA - which is what this entire discussion is about.
No, it wouldn't. It would require them to monitor all messages - which their updated EULA says they are - and take action where necessary, while erring on the side of caution - which they appear to be doing.
Okay, I should have been more precise: it would be the end of every "not child-friendly" communication on the internet, because it could never be 100% ensured that there is no child behind the screen, pretending to be an adult, who could see it. I don't know - what law requires phone companies to monitor phone calls?
In my country, it's completely forbidden unless a court has decided on an exception (for one individual person, if there are already hints that this person might use the phone to communicate with accomplices to plan a severe crime) - then the police and/or secret service might start surveillance. This is the official stance, at least.
Alinhbo_Tyaka wrote: »If you are talking sexual consent, that varies by state but ranges anywhere from 16 to 18 years old. For other things it is 18 or 21 years old, with 18 being the age at which one is considered old enough to vote or serve in the military without their parents' consent.
Thank you. I've been wondering, because in my country the thing that matters when it comes to "lewd chat talk/roleplay" isn't the age of becoming an adult but the age of sexual consent (which is a few years lower). Once you've reached that age, you're free to engage in as many embarrassing roleplay scenarios as you wish; you're considered able to give consent to them, and no one - not your parents and not the government either - can protect you from (or forbid you, depending on perspective) doing this. It doesn't matter what I think about it (in case anyone cares: I generally don't think it's a good idea to leave written evidence of things that might be embarrassing later - especially not in an age where breakups are very common, so the evidence could easily fall into the wrong hands), but that's how the laws are here. Anti-child-grooming laws end when the minor reaches the age of consent.
Of course, non-consensual acts (like bothering people with lewd texts or pictures they didn't agree to) are forbidden, regardless of age.
Again: Just stating how the laws are here in my country.
Bammlschwamml wrote: »I would like to quote the last two sentences of the original English version of the code of conduct, but I can't access it anywhere. It always directs me to the German version for some reason. I can change the language for the terms of service on the ZeniMax website (not the Elder Scrolls website) but not for the code of conduct, so I will just let Google Translate do the job.
RandomKodiak wrote: »I never use racial slurs etc., so I am not worried.
This is a parental responsibility. Kids should be monitored so they do NOT play games that are not suitable for their age group. It should not come down to a company to introduce nanny policies based on bad parenting.
Alinhbo_Tyaka wrote: »No, it wouldn't. It would require them to monitor all messages - which their updated EULA says they are - and take action where necessary, while erring on the side of caution - which they appear to be doing.
The laws in most states are similar to what you describe.
It is illegal to monitor US citizens' phone calls without a search warrant or equivalent authorization from a judge. Phone company records can be subpoenaed as part of a court case; these do not usually involve a judge unless the recipient challenges the subpoena. Police can also obtain records without a warrant via a subpoena, but these are supposed to go through a judge. I'm not an attorney, but to the best of my knowledge, except for some exception buried in a national security law, a judge needs to approve any type of personal monitoring of citizens. Even in the national security realm there are special courts for overseeing the processing of warrants and subpoenas.
No, consent does not make everything ok. I am not going to get into specifics, but just think about what that means and how it can be abused.
I'm not sure if we're talking about the same thing, but my premise was the random "not suitable for minors" roleplay chat that some adult players may have in this game, no matter if it's the romantic pursuits of Gronk the orc warrior with Bjørn the bulky Nord smith, or some "We're evil vampires attacking travellers and throwing them into our dungeon" make-believe. If adults consensually decide to chat-roleplay this with each other, I indeed see no harm done, whether I personally have an interest in it or not. My likes or dislikes and my personal sense of taste play no role in other people's conversations.
Of course ZOS can still forbid it, it's their game and their server, after all.
Alinhbo_Tyaka wrote: »
There are no legal requirements to prevent a child from purchasing a game with an inappropriate ESRB rating. ESRB compliance is strictly on a voluntary basis of the game company and retailers. Some retailers will check ages but for the most part the expectation is the parent will control what the child purchases based on the rating system.
PrincessOfThieves wrote: »This is not a game marketed to children and not a game where you can reasonably assume that there's a lot of kids around you at all times.
dk_dunkirk wrote: »I see one person defending the company and this behavior, and claiming it's all required under the law, but I haven't seen an actual reference that would prove that. Have I just missed it?
[Quoted post has been removed.]
[Quoted post has been removed.]
I doubt many of us are concerned as we do not type things into chat that would be an issue.
I don't use slurs and usually don't swear either, so normally I would not be bothered at all. But AI moderation carries the risk of false positives, which does make me a little concerned. What if it flags a harmless word I use in a conversation in another language because in English it looks like a slur? A human moderator would see it's not the English-language slur; can the AI (or rather a simple word filter, from what I've seen so far) recognize this? I doubt it.
And I think it's fine to address this and voice concerns, even if we have accepted the TOS (accepting an agreement also doesn't mean one cannot discuss the rules and suggest improvements).
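The false-positive worry can be illustrated with a toy filter. Everything here is made up for the sake of the example - the blocklist, the messages, and the matching strategy; nobody outside ZOS knows what their actual system does:

```python
import re

# Hypothetical blocklist - ZOS's real filter rules are not public.
BLOCKLIST = {"ass", "hell"}

def naive_flag(message: str) -> bool:
    """Flag a message if any blocked string appears anywhere inside it."""
    lower = message.lower()
    return any(bad in lower for bad in BLOCKLIST)

def word_flag(message: str) -> bool:
    """Match whole words only; fewer (but still not zero) false positives."""
    words = re.findall(r"[a-z]+", message.lower())
    return any(word in BLOCKLIST for word in words)

# "Klasse!" is German for "Great!", but it contains "ass" as a substring:
print(naive_flag("Klasse! Gut gemacht."))  # True  - a false positive
print(word_flag("Klasse! Gut gemacht."))   # False - whole-word matching avoids it
```

Even the whole-word version would still misfire on words that are innocent in one language and blocked in another, which is exactly the cross-language problem described above.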
It is also a major assumption that humans would not make the same mistake.
endorphinsplox wrote: »The fact that there are still those who would agree with, and comply with, automated content moderation for a paid service - where the automation is fundamentally incapable of distinguishing between actually harmful content and discourse that all participating parties have consented to - will never stop surprising me, I think.
TOS or not, I think any reasonable person would expect that DMs can indeed be stored and reviewed should there be reported misconduct, but to have an automated system flag anything that might be offensive for it to potentially be reviewed by a real person is just not something I think can offer more benefits than drawbacks. I do indeed consider this to be an invasion of privacy, and while I support the notion of trying to combat potentially illegal interactions in their game, this will not do that. All it will do is push people to another platform that ZOS can't monitor, and if the in-game chat dies, the game would suffer immensely as a result.
I think it's entirely fair to want DMs between friends or private group chats to remain exactly that: private. I don't care who hosts the service; unless there is a valid reason to believe wrongdoing is occurring, it's none of ZOS's business what I discuss and with whom.
It's also insane to me that people would genuinely say that clicking the Agree button on the TOS at login means you actually agree with all the contents therein. We all know we're agreeing to be restricted, and to not act the way we normally would, so that we don't get robbed of however much money each of us has spent on this game. In fact, the arbitration clause is probably the best example of that: you are basically waiving your right to sue ZOS for wrongdoing for any reason. It's reminiscent of the case where a person went to a Disney theme park, was served food the staff had been notified she was allergic to, and subsequently passed away. Despite this, Disney argued that the woman's husband could not pursue legal action, because he had signed an agreement - when he signed up for a Disney+ trial - that waived his right to take legal action against Disney for any reason.
For those that don't know, arbitration clauses are generally considered anti-consumer and are unenforceable in many parts of the world, such as the UK, due to consumer protection laws. At the end of the day, all of the people who support these big companies in their efforts to surveil and moderate everything we do are doomed to extinction. It's usually performative - they want to "protect the innocent" or "stop crime", or they want all language they disagree with to be censored - but it's a power too great that will ultimately be used against them, and when that happens, they suddenly no longer support the idea.
And I'm sorry, but agreeing to a TOS just so I can play the game I paid money for - one that strips me of my rights and makes me act only in ways the company personally endorses - is really stupid, but that's our reality. They've continued to worsen the product and refuse to communicate with us effectively, so they can take their 2 billion and eat it for all I care.
dragonlord500 wrote: »hmm I clearly saw this hitting the forums.
I honestly believe some policing of the chats, and zone chat above all, should happen. PM chat, though... I think they should only take action if a player reports the offender, so I find having AI monitor it a really bad idea, because truly innocent players might be banned over it.
All in all, this whole topic will really not go anywhere, because ZOS will never reverse their choice to change the EULA and TOS. I clearly see this is all Microsoft's doing, as they really want to push their AI monitoring program into everything they own.
Zenimax has said there is no auto-ban on private chat, though it is flagged. Considering private chat is not always welcome chat, that makes sense.
Actually, I just browsed the DSA documents (it's all online), and nowhere is anyone under 18 defined as a child. They mention "children and young people".
dragonlord500 wrote: »All and all this whole topic will really not go anywhere because Zos will never reverse their choice in changing the EULA and TOS. I clearly see this is all Microsoft doing as they really want to push their AI monitoring program into everything they own.
I am sure the two people, possibly having an intimate chat, would be overjoyed that a person gets to look at their conversation after it is 'flagged'.
It just seems extremely creepy.
If we take the likes of Everquest, so many marriages came about from people finding each other in that game. Marriages that have lasted to this day. I am sure there were a fair few flaggable conversations back then.
Advances that are not consensual can be reported and blocked; bringing AI and auto-flagging into the equation is a massive invasion of privacy. I do not care if they "own" the game. As I said previously, there is an expectation of privacy in a private conversation.
OFC, now there is not, which is a very, very bad precedent to set.
So it may be legally required. I'm wondering now how exactly this is supposed to work in a fantasy game set in a world where violence is common and also a normal game objective ("Kill 15 innocent people with the Blade of Woe.")? How are we supposed to talk to other players about game content under these rules?
SeaGtGruff wrote: »So it may be legally required. I'm wondering now how exactly this is supposed to work in a fantasy game set in a world where violence is common and also a normal game objective ("Kill 15 innocent people with the Blade of Woe.")? How are we supposed to talk to other players about game content under these rules?
If this is being done to remain in compliance with legal requirements, your question should really be addressed to the people (politicians and lawmakers) who came up with the legal requirements in the first place. And to be honest, I don't think a lot of those people care about your question. In fact, they probably would prefer that fantasy games set in a world where violence is common and often a normal game objective were not even being created, sold, bought, and played in the first place.
dragonlord500 wrote: »All and all this whole topic will really not go anywhere because Zos will never reverse their choice in changing the EULA and TOS. I clearly see this is all Microsoft doing as they really want to push their AI monitoring program into everything they own.
It is not Microsoft's doing. It is the result of UK and EU legislation. ZOS are required by law to monitor the chat system.
There is no such "law" requiring chat systems in a game to be monitored - either in the UK or the EU. Please stop making things up.
What companies are obliged to do is keep "appropriate" records/logs of activity, which can be requested by law enforcement. This doesn't just include chat, but also in-game activities, and these records must be kept in line with the GDPR (amongst other data protection laws).
ZOS can use appropriate means (should they choose to do so) to monitor things - this includes AI, but is not limited to that method - and they are also morally obligated to act upon reports and escalate where appropriate. That's what the EULA and TOS are about.
Companies can deem whatever records are appropriate, including chat logs. For example, it would look pretty daft if they didn't keep chat logs and the police were to ask for them - say, over real-life threats.
So are they required by law to monitor chat? No.
Do they choose to? Yes, using appropriate means.
I'm not sure this entire thread is going anywhere, tbh.
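A minimal sketch of the record-keeping approach described above - keep timestamped logs, produce them on a lawful request, and prune them after a retention period in line with the GDPR's storage-limitation principle. The schema and the 90-day window are made-up illustrations, not anything ZOS has published:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # hypothetical retention window

@dataclass
class ChatRecord:
    sender: str
    recipient: str
    sent_at: datetime
    text: str

def prune(records: list[ChatRecord], now: datetime) -> list[ChatRecord]:
    """Drop records older than the retention window (storage limitation)."""
    cutoff = now - RETENTION
    return [r for r in records if r.sent_at >= cutoff]

def export_for_request(records: list[ChatRecord], account: str) -> list[ChatRecord]:
    """Collect one account's messages for a lawful disclosure request."""
    return [r for r in records if account in (r.sender, r.recipient)]
```

The point of the sketch is that "keeping appropriate records" and "actively monitoring content" are separate mechanisms: the first is passive storage plus retrieval, while monitoring would be an additional layer on top.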