• 59 Posts
  • 830 Comments
Joined 3 years ago
Cake day: July 5th, 2023

  • Yes, but only insofar as laws that protect minors impose additional constraints on those who have “actual knowledge” that a user is actually a child.

    So, if I understand right, basically they assume it’s correct unless given significant evidence otherwise? So like, if this flag is enabled and I visit a website and don’t directly provide personal information, then they have to assume I am a child under the CCPA and thus can’t share my data. Right?

    State law can expand upon federal law but not contradict it. And it smells like AB1043 is more “add a more explicit signal of user age” than anything affecting data retention relating to children.

    What part do you think is contradictory?

    I was wondering more if they could just argue that it isn’t a reliable metric and thus was ignored for COPPA if it ever came up in federal court - especially if adults end up using the flag for CCPA or Civil Code protections. As opposed to in California law, where it is assumed to be true unless shown otherwise.


  • My interpretation was that slippery slope was more about the event in question (AB1043) being predicted to directly lead to escalation (AI/ID verification). As from your Wikipedia quote, “to result in the claimed effects”. I don’t see any reason to predict that this law will directly influence their decision to escalate or not. That said, perhaps it’s a disagreement on how much cultural influence a law like this would have, and how separate a parent/user-managed system of age verification is from a government-managed one technically.

    I would be interested to hear your argument for technical implementation, however.


  • I’m trying to give you the benefit of the doubt, but at this point you seem to be increasingly resorting to insults, and arguing against strawmen, to the point where I’m having trouble even understanding what you’re saying. I’m doing my best to remain respectful and civil, but you aren’t returning the favour. That said, I am trying to give you a chance, and want to be open to being convinced. So…

    If I understand what you’re trying to say, you think there should never be any prompt, warning, or other safety measure on any content? Not gore videos, not dating sites, not shock sites? Am I understanding you correctly? And if not, can you please restate your argument more clearly?


  • PlzGivHugs@sh.itjust.works to Technology@lemmy.zip · System76 on Age Verification Laws

    The fallacy isn’t assuming that it will happen. Clearly, there is a significant push towards it, and it’s something we need to be fighting against. The reason it’s a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, in spite of the fact that AB1043 implements a system that would be redundant with AI or ID based methods, technically doesn’t offer any good way to transition into an AI or ID based system (since it all has to be done locally), and legally, imposes additional data protection laws that are likely to interfere with AI-based age verification.

    The problem with AI and ID age verification isn’t the age verification. It’s the data collection, limits on personal freedom, and to some, the inconvenience. So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but also adds stricter regulation on data collection when this flag is used) or personal freedoms - especially not when compared to what is already the existing standard of asking the user for their age and/or if they’re over 18.


  • Well, from a privacy/freedom standpoint, how is this different from a website requiring you to enter your age and/or asking you to confirm that you’re 18? They record your age, store it with your data, then let you continue (or don’t). What baffles me is that the former is widely accepted as standard practice, and not a significant privacy concern, while an account-level flag that does the exact same thing somehow is. Like, is it because it’s managed by the browser/OS/app store? In that case, why isn’t there the same backlash against the existence of things like system theme flags, user agents, and even usernames?
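The comparison above can be sketched in code. This is purely illustrative: the header name `Sec-Age-Bracket`, the bracket labels, and the "treat as adult" default are all invented here for the sake of the sketch; AB1043 does not define a specific header or API.

```python
# Hypothetical sketch: a click-through age gate and a device-level flag
# hand the site the same single data point. All names here are invented;
# the law defines no concrete header or API.

AGE_BRACKETS = {"under13", "13-15", "16-17", "18+"}

def age_from_clickthrough(form_value: str) -> str:
    """Status quo: the site asks, and the user clicks or types an answer."""
    return form_value if form_value in AGE_BRACKETS else "18+"

def age_from_device_signal(headers: dict) -> str:
    """Flag-based: the OS/browser forwards a user-set bracket instead.
    Left at its default, the site learns nothing it didn't already assume."""
    return headers.get("Sec-Age-Bracket", "18+")

# Either path yields the same information for the site to act on.
assert age_from_clickthrough("16-17") == age_from_device_signal(
    {"Sec-Age-Bracket": "16-17"}
)
```

In both versions the site ends up with one self-reported bracket and nothing more, which is the point of the comparison.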


  • ‘This law is fine because it won’t affect child predators’ is a brave argument.

    This obviously isn’t the argument I’m making. This law isn’t meant to stop predators; it’s meant to provide a parental control option for parents to limit their own children’s access to potentially harmful or mature materials.

    Critics seem to agree, it’s a foot in the door for all of the other privacy-defeating efforts going on, now running in protection ring zero. What does this nonsense do, besides set off those red flags?

    This huge uproar is the point of my confusion. You and others in the field seem certain that this is a direct first step towards ID and AI data collection. Meanwhile, before this, I actually saw this occasionally proposed as a good option in privacy-related blogs/communities specifically because it was optional and entirely handled by the users.

    What impact do you honestly expect, versus telling websites to have an ‘18+ only’ click-through?

    More convenience for adults (not having to click “yes” every time), and having a more effective way of slowing down children accessing content that might be dangerous. For example, if I was a parent who had access to this, I’d likely set up two accounts for my kids: one set to 18+ for when I’m directly supervising them, and one set to under 18 for when I’m supervising them less thoroughly.
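The two-account setup described above can be sketched as follows. Everything here is invented for illustration (the `Account` type, the field names, the bracket labels); the law prescribes no concrete API, only that such a signal exists and is set locally.

```python
# Hypothetical sketch of a parent-managed, per-account age bracket: the OS
# stores the flag, apps can read it but not change it. All names invented.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: apps only read the flag; a parent resets it
class Account:
    name: str
    age_bracket: str  # e.g. "under18" or "18+", set during account setup

supervised = Account("kid-supervised", "18+")  # used with a parent present
unsupervised = Account("kid-solo", "under18")  # day-to-day account

def may_show_mature_content(account: Account) -> bool:
    """A site or app gates content on the flag alone: no ID, no upload."""
    return account.age_bracket == "18+"

assert may_show_mature_content(supervised)
assert not may_show_mature_content(unsupervised)
```

The design choice being argued for is that the decision lives entirely on the device, in a value the user (or parent) controls.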


  • If I had to take a photo of my genitals to sign into my own computer, promises against storage or sharing are not addressing my complaints about privacy. Asking my age is a lot less personal - but it’s still information about me, which this object does not need.

    If you’re that concerned, leave the field at its default value, or (since it’s your PC, and there will absolutely be a way to) set it to a null value. Or set it based on the amount of legal protection you want on your data, because that also appears to work.

    ‘I’m only okay with this idea because I know it won’t work’ is, just, why are we even talking? What is the function of an argument when you’re not listening to yourself?

    Saying it can be bypassed doesn’t mean it doesn’t work. Like most safety and security measures, the point is to disincentivise and prevent errors of convenience - especially since children particularly lack impulse control. In the same way, having a railing or fence on a cliff won’t prevent people from passing, but will make them think twice. It doesn’t mean having that railing/fence is pointless.


  • Okay, but should we not oppose laws about data collection and facial recognition in that case, rather than a law that implements an entirely separate, optional, user-driven approach? Saying this is bad because those are bad is not an argument, any more than saying the CCPA and GDPR are bad because the government wants to collect data. Your argument isn’t against this law, or even the concept of having age verification in general; it’s against government overreach as a broad concept. You’re again relying on the slippery slope fallacy to say that because I’m okay with this one specific form of age gating, I’m okay with every other one, which I have repeatedly made clear is not true.


  • Or that anyone working for or with kid-filled sites of any size could make it incidentally about preying on said kids. Apparently people manage when they’re just anonymous users.

    But like, that’s exactly my point. It’s platforms like Roblox that predators seek out to prey on children; they don’t create their own. An age verification law will have no effect on that, and a hidden backend value that’s illegal to share doesn’t make it significantly easier for predators. Even if they did have unrestricted access to user data, wouldn’t a hundred other variables better identify vulnerable users, like use of voice chat and past text messages? Hell, I would expect children with the age flag left at a default value to be more vulnerable, given that it would likely mean the parent is less likely to be tech-savvy and/or less likely to be paying attention to their child, but again, it’s ambiguous.



  • This is a compelling argument, but do you think it’s really a significant attack vector? It’s already illegal to share or leak (even unintentionally) this data, and from my understanding, if you choose to set your age to a lower bracket via this process, companies sharing (also collecting? I’m currently unclear on this) this data would also break the CCPA and possibly COPPA, and the companies are required to provide additional data privacy measures under the California Civil Code.

    Yes, these laws will be broken, but will it be on a significant enough scale, and with reliable enough information, to be worthwhile? Like, since this bans the use of data from those who set their age low, wouldn’t it likely reduce the data collection pool overall, not to mention incentivize adults to poison this data? For those who do illegally collect this data anyway, is it that much of an advantage compared to just asking the user’s age upon reaching the site, as most sites currently do? Beyond that, when these sites operating illegally do leak their data, will that data be a realistic attack vector? Like I said to another commenter, collating data in this way seems extremely impractical and unreliable for predators. Wouldn’t those who want to seek out children just go to existing spaces where they can connect directly, like Roblox or Discord? Like, don’t get me wrong, I don’t like data collection, but compared to everything else, this seems like a relatively unreliable and unhelpful data point, especially given all the legal restrictions.

    Edit: Also, I’d be interested to hear whether your opinion changes if even storing this value is illegal, if unnecessary data collection as a whole is banned, and/or if this value has a legally defined default of the 18+ value and doesn’t have to be made obvious in account setup.

    Edit 2: Also, I wanted to say thanks for responding genuinely and with a well-articulated argument. I know the Fediverse tends to be very… unfriendly… towards anything that may impact privacy, and towards government regulation in general, so your civility is really appreciated.


  • You’re completely ignoring my argument. How many of these websites where children gather and self-identify are created and maintained by paedophiles specifically to prey on children? So far as I know, there has never been a site like this on the modern internet, let alone one that remains up and has been running for an extended period. I don’t see any reason to expect this to change.


  • Companies shouldn’t even be allowed to demand more than a username and password, on any machine I could pick up and throw. Making anything beyond that a legal requirement is intolerable, in itself. My age is not this object’s business. It sure isn’t this website’s business.

    Edit, because I forgot this part: I agree with this, but unfortunately it isn’t realistic. That said, even with this law, you can still make unnecessary storage or sharing of user data illegal.

    Stop excusing these intrusions against adult life, for the sake of children who will bypass them anyway. You know they will. You use the flimsiness of this alleged protection as an excuse for enabling it. There is literally no benefit if it doesn’t fucking work. Even pretending the immediate goal is something you should want - this won’t do that.

    I do know they will. The whole reason I’m even okay with this idea is because it is completely optional for the user. I don’t see how it’ll impact adult life; that is why I’m so confused at the backlash. It’s asking for an option to increase user control and user choice over their experience. Hell, from my understanding, this would provide a means for users to make it actually illegal to collect any user data, but I need to re-read the CCPA to confirm this. It seems that the benefits of user choice provided by this option far outweigh the loss of having one more fingerprinting metric - let alone one that is illegal to share.


  • This is a slippery slope fallacy. Just because the option is provided to self-identify age doesn’t mean that it will be replaced with more complex and direct data collection (which I am against, if it wasn’t clear) later - especially considering that if it’s based on this law, that would be literally impossible. Section 4a bans the collection of data from your system besides age, and the fact that it is all handled locally and sharing it is prohibited means that it would be impractical to implement anything fancier than a text box to collect data. If anything, this looks like a way to be seen “doing something” without having to change anything for most users. Hell, if California wanted to implement a law for data collection, why would they have implemented the CCPA, why would they have written this law to ban the sharing of data, and why wouldn’t they just write the data collection law instead, given (as you said) there is already significant backing for the idea?


  • illegal

    Yes, in that they can be stopped if noticed. Police are incompetent, but if something is that bad and draws enough attention, the person will generally be arrested.

    extremely impractical

    Yes, all the time. That’s why safes, passwords, and similar measures exist. Or, more relevant in this case, the adage that the best way to avoid a break-in is to be a less appealing target than your neighbors. Roblox, Minecraft, Discord, and other platforms where kids gather and regularly self-identify are still going to exist, and they are far safer and far more appealing for targeted abuse of children. On the other hand, setting up a public website/app and trying to lure children to it is expensive, risky, and unlikely to succeed on the modern internet.


  • There is no benefit.

    This is obvious hyperbole and you know it. Kids are stupid and vulnerable, and measures to protect them aren’t useless. That said, I am open to the idea that this law isn’t worth the cost. Basically every other age verification law (especially those based on the use of ID or AI) is very clearly not. I just haven’t seen a compelling argument as to why this one isn’t.

    You can’t glibly assert that people can just lie, so it’s not a big deal - and then pretend it’ll do the thing it’s for. Which again, is a bad idea anyway, which this approach would not achieve, if it even worked. It’s fractally stupid. It is dangerous bullshit, at every scale.

    Okay, but why? You keep repeating that it’s dangerous, limits freedoms, and causes privacy issues, but so far, the only argument I’ve seen is that it can help kids identify themselves, and given that it’s handled locally and is unreliable, I don’t see this being usable on any meaningful scale. Setting up a “free candy” website or app is going to be way less effective and way more dangerous than just creating a Roblox account. Is there something I’m missing?


  • I mean, from my understanding, this would be both hyper-illegal and extremely impractical. You’d need a large enough site to lure users in, and to collect identifying information and republish it, but you couldn’t draw enough attention to become a target for data poisoning (given that this flag is freely set by the user) or for law enforcement. It seems like this would be unlikely enough that the benefit gained from having this flag would far outweigh the risks, especially on the modern, hyper-corporate internet.


  • Because I was a stupid kid and didn’t realize that watching combat footage might be a bad idea. I thought I was just learning about military history. Same way kids don’t realize they’re being groomed, or don’t realize that watching graphic horror movies might be a bad idea. Kids are dumb - and to be clear, I know you can’t shield them from everything, and parents are still the primary solution. Still, a local flag for age range seems like exactly the sort of tool that would help a parent moderate access without limiting privacy or freedom.

    Edit: Also, this argument obviously isn’t what you intended to make, but implying that kids are themselves at fault for going to dangerous websites looks really bad when replying to a comment partly about child predators. You may want to add a clarification, or reword your comment.


  • The California law is a local flag for age range. It’s not a law that requires ID, or tracking, or anything else like that. Given that it’s set optionally by the user and, from my understanding, illegal to use for anything but age verification, I don’t understand how this is that negative for privacy or freedom.

    Edit: Also, setting the age accurately is entirely optional. I don’t see how this impacts freedom either.


  • I and many others I know who grew up with unrestricted internet access (before and after the corporatization of the internet) were exposed to terrible shit. Like, I grew up with unusually tech savvy parents who were able to protect me from the worst of it, but even I have been somewhat traumatized by accessing graphic content I shouldn’t have. I personally know people who grew up with worse parents who grew up browsing shock/gore websites and who were repeatedly groomed and abused by pedophiles.

    Honestly, I don’t really get the backlash to this legislation, beyond that it’s perhaps being applied to devices it shouldn’t be. It’s a local, safe option for reducing children’s access to things they shouldn’t see. While yes, freedom is important, we’re talking about providing the option to limit access to mature content, not preventing them from downloading Python or using the internet. There is a justified reason for wanting this, and this seems like the ideal way to do it.

    Edit: I’m genuinely confused as to why people are against this. All the arguments sound like they think this is another variant of ID collection or AI tracking. From my understanding, this is an optional flag, set locally by the user - about as decentralized and pro-user-choice as it gets. I’m going to reread the law to make sure I’m not missing something.

    Edit 2: Reading the law, section 4a seems impractically vague, but in favour of blocking data collection. From my understanding, that would ban the use of things like user agents and theme settings in browsers. Notably, the law also specifies fines for both accidental and intentional data sharing. This seems like about as good an option as you can get for protecting children, while keeping it user-choice driven, decentralized, and anonymous.

    Edit 3: Actually, in combination with the CCPA, possibly COPPA, and California Civil Code, wouldn’t this effectively work as a “tracking me is now illegal” switch?

    Edit 4: My interpretation of 4a was incorrect; it would not block access to other system-level flags. It would simply block requesting further personal data from the OS’s developer.