I code and do art things. Check https://cloudy.horse64.org/ for the person behind this content. For my projects, https://codeberg.org/ell1e has many of them.

  • 4 Posts
  • 56 Comments
Joined 7 months ago
Cake day: July 16th, 2025

  • The EU has apparently decided that most public platforms must do this by July 2026, so Discord may not have much of a choice, and other platforms will likely follow. (Edit: I forgot that the EU's strict age verification requirements seem limited to the DSA's definition of "platforms", so as a text messenger, I'm not sure Discord is covered. But this will still likely be coming to more services near you, and perhaps Discord is just voluntarily joining the chaos…)

    I could be wrong, and I'm not a lawyer, so assume everything I write from here on may be mistaken, but see here:

    https://www.mlex.com/mlex/articles/2368265/online-services-get-up-to-12-months-to-apply-age-verification-eu-guidelines-say “Online services get up to 12 months to apply age verification, EU guidelines say” This was in July 2025.

    EU guidelines in question seem to be: https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-protection-minors + https://ec.europa.eu/newsroom/dae/redirection/document/118226 Quotes:

    “[…] the Union legislature enacted Article 28 of Regulation (EU) 2022/2065 of the European Parliament and the Council (6). Paragraph 1 of this provision obliges providers of online platforms […] to ensure a high level of privacy, safety, and security of minors, […]”

    “Self-declaration is not considered to be an appropriate age-assurance measure as further explained below.”

    “In the following circumstances, […] the Commission considers the use of access restrictions supported by age verification methods an appropriate and proportionate measure to ensure a high level of privacy, safety, and security of minors: […] an online platform accessible to minors has identified risks to minors’ privacy, safety, or security, including content, conduct and consumer risks as well as contact risks (e.g., arising from features such as live chat, image/video sharing, anonymous messaging)”

    “Age estimation methods can complement age verification technologies and can be used in addition to the former,” (In other words, the alternative to a literal government ID check seems to be big-data AI sucking up all user data to estimate user age.)

    The, in my opinion, horrible solution the EU seems to have found to avoid sharing a physical ID with services that don't want to request one is apparently this app: https://github.com/eu-digital-identity-wallet/av-app-android-wallet-ui From what I can tell: 1. it requires Google device attestation, so all custom ROMs are out, and to be a citizen you apparently can no longer own your device; 2. unless you use iOS or Android, you're apparently not a citizen; 3. once everyone is used to some citizen app like that, I feel like a fascist government could easily tie it to a social score or other authoritarian measures beyond age verification; 4. there is a privacy-friendly alternative approach anyway, which most governments seem to be conveniently ignoring: https://www.politico.com/news/2025/10/13/california-law-online-age-checks-00606115

    Anyway, I’m not a lawyer and this isn’t legal advice. But spread the word, because somehow the press seems to be ignoring this.



  • I posted this before, but it doesn’t even seem to be voluntary at all, from what I can tell from the draft:

    “Upon that notification, the provider shall, in cooperation with the EU Centre pursuant to Article 50(1a), take the necessary measures to effectively contribute to the development of the relevant technologies to mitigate the risk of child sexual abuse identified on their services. […]”

    “In order to prevent and combat online child sexual abuse effectively, providers of hosting services and providers of publicly available interpersonal communications services should take all reasonable measures to mitigate the risk of their services being misused for such abuse […]”

    These quotes sound mandatory, not voluntary. And let’s look at what these referenced technologies are:

    “In order to facilitate the providers’ ~~voluntary activities under Regulation (EU) 2021/1232~~ compliance with the detection obligations, the EU Centre should make available to providers detection technologies […]” (The struck-through text is deleted in the compromise draft and replaced by the text that follows it.)

    “The EU Centre should provide reliable information on which activities can reasonably be considered to constitute online child sexual abuse, so as to enable the detection […] Therefore, the EU Centre should generate accurate and reliable indicators,[…] These indicators should allow technologies to detect the dissemination of either the same material (known material) or of different new child sexual abuse material (new material), […]”

    Oops, once again this sounds like mandatory scanning.

    Source: https://cdn.netzpolitik.org/wp-upload/2025/11/2025-11-06_Council_Presidency_LEWP_CSA-R_Presidency-compromise-texts_14092.pdf

    The new draft does a better job of appearing less mandatory, but it still reads as mandatory to me. Feel free to correct me if you can figure out that I’m wrong.


