@PierceTheBubble@lemmy.ml avatar

PierceTheBubble

@PierceTheBubble@lemmy.ml

This profile is from a federated server and may be incomplete.

PierceTheBubble ,

Which in practice will simply drive up the price, like refundable deposits do.

PierceTheBubble ,

E2EE isn’t really relevant when the “ends” themselves have the functionality to share data with Meta directly, as “reports”, “customer support”, or “assistance” (Meta AI); where a UI element is the only separation.

Edit: it turns out cloud backups aren’t E2E encrypted by default, meaning any backup data that passes through Meta’s servers to the cloud providers (like iCloud or Google Account) is unobscured to Meta, unless E2EE is explicitly enabled. And even then, WhatsApp’s privacy policy states: “if you use a data backup service integrated with our Services (like iCloud or Google Account), they will receive information you share with them, such as your WhatsApp messages.” So the encryption happens on the server side, meaning Apple and Google still have full access to the content. It doesn’t matter if you, personally, refuse to use the “feature”: if the other end does, your interactions will be included in their backups.

Cross-posting my comment from the cross-posted post

PierceTheBubble ,

Deploying its roughly $1.4 billion worth of reserves to support “mission driven” tech businesses and nonprofits, including its own

I mean, how else can you deplete a non-profit's reserves?

Lawsuit Alleges That WhatsApp Has No End-to-End Encryption ( it.slashdot.org )

As evidence, the lawsuit cites unnamed "courageous whistleblowers" who allege that WhatsApp and Meta employees can request to view a user's messages through a simple process, thus bypassing the app's end-to-end encryption. "A worker need only send a 'task' (i.e., request via Meta's internal system) to a Meta engineer with an...

PierceTheBubble , (edited )

Yeah, I guess if you want users to keep sharing "confessions, [] difficult debates, or silly inside jokes" through a platform you've acquired, E2EE might give the WhatsApp user the false sense of privacy required.

PierceTheBubble ,

One would almost start to think the lawyers were out for the settlement money...

Reminder/invitation to contribute to OpenStreetMap

Some time ago I started replacing all services and apps that I use with FOSS alternatives. Most of them were easy to replace, but some corpo/big-tech apps had ecosystems too advanced to be conveniently replaced. For example, substituting Google Maps on Android (or I guess Apple Maps on iOS) was a bit of a struggle as the most...

PierceTheBubble , (edited )

I personally quite like OsmAnd's granular control, but I understand how others might find it overwhelming; big tech's restrictive... I mean "modern" user experience (UX) might be to blame for that. There are, however, quite a few alternatives to pick from if you want a more minimalist approach to UX; OsmAnd could also provide that by default (while allowing advanced users to toggle additional "expert" settings).

What makes Google "Maps" superior to OSM-based maps is not its inferior "map", but rather the navigational aspect: businesses and other 'points of interest' (POIs) registering their location with Google, public transit data being supplied to it (allowing for trip planning), traffic statistics (through creepy location tracking, even in the background unless opted out), etc.; all bundled into a single, undeniably convenient application.

I would argue OSM data is primarily mass imports from other permissive or open (government) databases, which strongly depend on region. For The Netherlands, BAG (basic registration of addresses and buildings) and BGT (basic registration of large-scale topography) make up a large portion of the data presented (either directly imported or used as a reference). Relative to real-world changes they might temporarily lag behind, though, and users add details based on satellite imagery.

Regarding satellite imagery: editors don't always have up-to-date imagery, leading some users to undo changes others have made. In The Netherlands, the government provides relatively recent satellite imagery, which can be imported into the alternative JOSM editor as a WMTS layer. You may also want to check the comments of the last change: in OSM's own iD editor you can click the "last modified ..." link, all the way at the bottom of the "Edit object" tab, for the selected object.

Another thing I would really recommend is checking how other mappers have added certain features, which is sometimes easier to understand than OSM's documentation; the documentation doesn't always correspond to practice (possibly depending on region). A very useful tool for this is Overpass Turbo, which you can use to search for certain elements and see how others have implemented them.
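To make that concrete, here's a minimal stdlib-only Python sketch of the kind of lookup Overpass Turbo performs: build a small Overpass QL query for one tag within a bounding box, then fetch matching elements to see how others have tagged them. The endpoint URL, the example tag, and the bbox coordinates are purely illustrative assumptions.

```python
import json
import urllib.parse
import urllib.request

def build_query(key: str, value: str, bbox: tuple[float, float, float, float]) -> str:
    """Overpass QL: all nodes/ways/relations with key=value inside bbox (south, west, north, east)."""
    s, w, n, e = bbox
    return (
        "[out:json][timeout:25];"
        f'nwr["{key}"="{value}"]({s},{w},{n},{e});'
        "out tags center;"  # tags plus a representative coordinate per element
    )

def run_query(query: str, endpoint: str = "https://overpass-api.de/api/interpreter") -> dict:
    """POST the query to an Overpass endpoint and decode the JSON reply."""
    data = urllib.parse.urlencode({"data": query}).encode()
    with urllib.request.urlopen(urllib.request.Request(endpoint, data=data)) as resp:
        return json.load(resp)
```

The string returned by `build_query` can also be pasted straight into Overpass Turbo's editor, which is often the quicker way to browse the results on a map.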

I know this might all feel a little overwhelming, but I wish I had known these things earlier in my mapping journey. I started because I noticed things missing that I knew existed from my work as a mailman. I began with smaller changes to get my feet wet, and gradually worked my way up to larger ones. As long as you don't start tearing up large roads (including their often many relationships), you'll be just fine; you might even get hooked, as it can be quite satisfying to have created another beautiful part of the map.

PierceTheBubble , (edited )

If that's the only way you're going to contribute to OSM, by all means, go for it. But as a desktop OSM editor, I really dislike some of the incentives pushed by mobile applications. Primarily: not adding objects as polygons (as these would be difficult to draw on such devices), but as POIs (parking, amenities, etc.) and paths (waterways for instance, where paths are often used just for naming, or as water"ways" for marine traffic). This often leads me to correct these changes, as they really stand out compared to the rest of the map. So generally, I view these tools as complementary rather than producing final changes; unless it's changes to POIs or the like, which is where these applications shine, in my opinion.

PierceTheBubble ,

Oh, you can add new things, that's perfectly fine. I still prefer mobile users adding features, even if they are of an unusual object type; they effectively act as another type of fixme for desktop users. But instead of another desktop user integrating these elements, I'd rather have mobile users on the desktop as well, integrating their mobile changes when at home. If you're sightseeing, these applications are very helpful for creating/editing POIs and effectively sketching out non-POI features; the latter does require some work to integrate, though.

PierceTheBubble ,

Editing geometries is hard enough as it is on the desktop (especially with glued points), so I can't imagine making such changes on mobile. I think it's best not to allow editing geometries there, and to leave such changes to devices better suited for the task.

PierceTheBubble ,

OSM is a community project, someone has to do the PR. It won’t show up automagically without human intervention.

Is this referring to the "mass imports" part, which you would argue are done in batches by many contributors? If so, then yes, "mass import" might give the wrong idea, I agree. But even if imported by many over time, the result is still a mass import from these open databases (minus a few addresses, maybe, drawn in by hand; or roads not yet aligned with BGT, in the case of The Netherlands).

Are you sure its license is compatible? E.g. The website says I can’t view it because I’m not in the Netherlands. There are a lot of frequent editors from there, it’s strange they haven’t added it yet.

I can't find the forum post about this, but I'm quite sure the conclusion was that it's compatible, despite viewing being restricted to Dutch citizens (because it's a service provided by The Netherlands). It's quite a common source here, especially for recent changes (which other imagery just doesn't capture). And they provide WMTS directly; if they wanted to restrict its use for georeferencing, I don't understand why they'd offer that.

PierceTheBubble ,

Ah okay, now I get it; I wasn't familiar with that. Satelliet Data Portaal provides WMTS for both partial captures (more recent) and full mosaics (less recent), from multiple sources (Pleiades-NEO or SuperView-NEO); which might complicate things (having to load the right imagery based on the location being edited, in the case of the partial captures, and selecting the right source). The resolution, especially of the partial captures but also of the mosaics, doesn't really hold up to something like PDOK or Esri. So making this source the default might not be desirable, but having it as an option (especially the mosaic) would be neat.

PierceTheBubble ,

There are quite a few changes by First World contributors in Africa, primarily from mapping events. Perhaps they could also play a role in integrating POI and line elements (which are traditionally areas); or maybe a more POI- and line-based standard could be allowed in Africa, not requiring areas for such objects. Or an intuitive UI supporting geometry editing could be added, despite gluing, complicated relationships, etc. I would love to be proven wrong in my skepticism.

PierceTheBubble ,

I still prefer mobile users adding features, even if they are of an unusual object type; effectively being another type of fixme to desktop users. But instead of another desktop user integrating these elements, I rather have mobile users on the desktop as well; as to integrate their mobile changes when at home. If you’re sightseeing, these applications are very helpful, for creating/editing POIs and effectively sketching out non-POI features; but the latter does require some work to integrate them.

Quoting another comment of mine. Your use of the tool is something I'm advocating for, really; I recognize its usefulness, but I'm not treating it as a substitute for desktop editors.

PierceTheBubble , (edited )

Checking the network traffic, it does a series of "s_a_f_e..Overflow" requests (indicating safe buffer overflows?), replacing them with filter-list-specific domains after the overflow (the address after the long string of characters); triggering uBlock to block these requests?

PierceTheBubble , (edited )

Hmm, maybe it also monitors for changes to the DOM: cosmetic filtering done by uBlock (to hide/remove containers for these elements)? Something which network filtering by itself cannot do?

PierceTheBubble ,

So multiple nickel-titanium alloy tubes are stretched and released within the refrigerator, causing a temperature change in the alloy; the heat (pulled from the interior) is transferred to the calcium chloride fluid being pumped through the tubes, and then to the outdoor climate by use of an exterior heat exchanger. Something along those lines?

PierceTheBubble ,

Reliance wouldn't be my primary concern, but rather the privacy implications. It seems like Google has to step up its surveillance game /s. Fun project though

PierceTheBubble , (edited )

So a Mastodon ripoff, but with its instances hosted by a single entity (effectively centralized), ensuring all instances reside within European jurisdiction (allowing for full control over it). I don't see how they genuinely believe they can have humans do the photo validation when competing at the scale of X, especially when you run all the instances. Perhaps they could recruit volunteers to socialize the losses while the platform privatizes the profits. Nothing but a privacy-centric approach, however: said the privacy expert...

Zeiter emphasized that systemic disinformation is eroding public trust and weakening democratic decision-making ... W will be legally the subsidiary of “We Don’t Have Time,” a media platform for climate action ... A group of 54 members of the European Parliament [primarily Greens/EFA, Renew, The Left] called for European alternatives

If that doesn't sound like a recipe for swinging the pendulum to the other extreme (once more), I don't know what does... Because can you imagine a modern social media platform not being a political echo chamber: not promoting extremism by use of filter bubbles, and instead allowing for de-escalation through counter-argumentation. One would almost start to think it's all intentional: a deeply divided population will never stand united against their common oppressor.

PierceTheBubble , (edited )

So the amended complaint alleges Nvidia used/stored/copied/obtained/distributed copyrighted works (including plaintiffs'), both through databases available on Hugging Face ('Books3' featured in both 'The Pile' and 'SlimPajama') and by pirating from shadow libraries (like Anna's Archive), to train multiple LLMs (primarily their 'NeMo Megatron' series), and distributed the copyrighted data through the 'NeMo Megatron Framework'; data which was ultimately sourced from shadow libraries.

It's quite an interesting read actually, especially the link to this Anna's Archive blog post; which it grossly pulls out of context, as the plaintiffs clearly despise the shadow libraries too: they have, after all, ultimately provided access to their copyrighted material.

Especially the part: "Most (but not all!) US-based companies reconsidered once they realized the illegal nature of our work. By contrast, Chinese firms have enthusiastically embraced our collection, apparently untroubled by its legality." makes me wonder whether that's the reason why models like DeepSeek initially blew Western models out of the water.

PierceTheBubble ,

Great, more hoops to jump thr... I mean... an "advanced flow", for gaining the privilege of installing apps of your choosing

PierceTheBubble , (edited )

THIS is how you do it; looking at you, Brave: requiring me to (re)type my queries in the URL bar (appending '&summary=0'), so that I'm not required to store a persistent cookie just to keep the damn setting off...
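That URL-bar workaround can be scripted; here's a minimal stdlib-only Python sketch that forces a `key=value` pair onto a search URL's query string. The `summary=0` parameter comes from the comment above; the example URL is an illustrative assumption.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def force_param(url: str, key: str = "summary", value: str = "0") -> str:
    """Return `url` with `key=value` set, replacing any existing value for `key`."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))  # existing parameters, last value wins
    query[key] = value
    return urlunsplit(parts._replace(query=urlencode(query)))
```

For example, `force_param("https://search.brave.com/search?q=test")` yields the same URL with `&summary=0` appended, which a redirect extension or keyword bookmark could apply automatically.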

PierceTheBubble ,

Maybe the best ad is to not have AI

PierceTheBubble ,

innovation COURAGE

PierceTheBubble ,

With "deletion" you're simply advancing the moment at which they supposedly "delete" your data; something I refuse to believe they actually do. Instead, I suspect they "anonymize", or effectively "pseudonymize", the data (as cross-referencing is trivial when a new account shows equal patterns, should the need arise). Stagnation wouldn't require services to take such steps, and any personal data remains connected to you, personally.

For the Gmail account, I would recommend not deleting the account: open an account at a privacy-respecting service (using Disroot as an example), connect the Gmail account to an email client (like Thunderbird), copy all its contents (including 'sent' or other specific folders) to a local folder (making sure to back these up periodically), delete all contents from the Gmail server, and simply wait for incoming messages at the now-empty Gmail account.

If a worthy email comes in: copy it over to the local folder, and delete it from the Gmail server. For services you use, you could change the contact address to the Disroot account; others you could delete, or simply mark as spam (periodically emptying the spam folder). For privacy-sensitive services you may not want to wait for an email to finally make an appearance, and change these over to the Disroot address right away.
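The backup-then-purge routine described above can also be scripted instead of done through a mail client; here's a hedged, stdlib-only Python sketch using imaplib. The host, credentials, and folder names are placeholder assumptions (Gmail over IMAP additionally needs an app password or OAuth, which this sketch glosses over), so treat it as an outline of the steps, not a drop-in tool.

```python
import imaplib
import pathlib

def eml_filename(uid: bytes, backup_dir: str = "mail-backup") -> pathlib.Path:
    """Local path for one downloaded message, keyed by its IMAP UID."""
    return pathlib.Path(backup_dir) / f"{uid.decode()}.eml"

def backup_and_purge(host: str, user: str, password: str, folder: str = "INBOX") -> int:
    """Copy every message in `folder` to local .eml files, then delete the server-side originals."""
    saved = 0
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select(folder)
        _, data = imap.uid("SEARCH", None, "ALL")
        for uid in data[0].split():
            _, msg_data = imap.uid("FETCH", uid, "(RFC822)")
            raw = msg_data[0][1]  # full raw RFC 822 message bytes
            path = eml_filename(uid)
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_bytes(raw)  # local backup first...
            saved += 1
            imap.uid("STORE", uid, "+FLAGS", r"(\Deleted)")  # ...then mark for removal
        imap.expunge()  # actually remove the server-side copies
    return saved
```

Running it per folder ('sent' included) mirrors the manual Thunderbird workflow: back up locally, then leave the big-tech mailbox empty.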

I've been doing this for years now, and my big-tech accounts remain empty most of the time. Do make sure to transfer every folder, and make regular backups!

PierceTheBubble , (edited )

It's been like that for quite a while. I remember deleting all big-tech accounts in 2019, and shortly after, Facebook started requiring login for full public page access. Therefore I created a burner account using a 'this person does not exist' picture, which provided me short-lived access after manual review. For account recovery, I was required to supply additional selfies (or even video-selfies?), but at that point I gave up.

PierceTheBubble ,

Yeah, I think they employ a pretty sophisticated bot detection algorithm. I vaguely remember there being this 'make 5 friends' objective, or something along those lines, which I had no intention of fulfilling. If a new account that has triggered the manual review process doesn't adhere to common usage patterns, simply have them supply additional information. Any collateral damage simply means additional data to be appended to Facebook's self-profiling platform... I mean, what else would one expect when Facebook's first outside investor was Palantir's Peter Thiel?

PierceTheBubble , (edited )

My emails forced me to, locking me out of accounts I needed to access.

Microsoft had me fill in this form to "prove" I was the rightful owner of the account, after some suspicious login attempts from an African country. The form included fields like: name (which I don't think I supplied at creation, or a false one), other email addresses, previous passwords (which potentially yields completely unrelated passwords), etc.; only for the application to be rejected, locking me out of my primary email for a full month. After that outright violation, I immediately switched to Disroot, and haven't had any of said problems since. I back up all its contents locally using Thunderbird, and delete the originals from the server afterwards.

Many platforms have this messed-up dark pattern of revoking one's access to real-world dependencies unless one gives in to the service's demands. Enforcement of 2FA is another one of those "excuses" for this type of misbehavior, and so is bot detection.

PierceTheBubble ,

The main paradox here seems to be: the 70% boilerplate head start is perceived as faster, but the remaining 30% of fixing the AI-introduced mess negates the marketed time savings, or even leads to outright counterproductivity. At least in more demanding environments, not cherry-picked by the industry shoveling the tools.

PierceTheBubble ,

I don't know: it's not just the outputs posing a risk, but also the tools themselves. Stacking technology can only increase the attack surface, it seems to me. The fact that these models seem to auto-fill API values without user interaction is quite unacceptable to me; it shouldn't require additional tools checking for such common flaws.

Perhaps AI tools in professional contexts can best be seen as template search tools. Describe the desired template, and it simply provides the template it believes most closely matches the prompt. The professional can then "simply" refine the template to match the set specifications. Or perhaps rather use it as inspiration and start fresh, and not end up spending additional time resolving flaws.

PierceTheBubble ,

Well, we are using them today for human programmers, so… :-)

True that haha...

PierceTheBubble ,

But you need to be in close proximity (~15 m max) to stalk a victim? You might as well just follow them around physically then. Perhaps when the victim is in a private location, eavesdropping on their conversation or locating their position within it might be a possibility. But ear raping would, of course, constitute the most significant danger of all. Also: WhisperPair, not WhisPair?

PierceTheBubble ,

If the devices weren’t previously linked to a Google account ... then a hacker could ... also link it to their Google account.

This already severely limits the pool of potential victims, but it's still a more practical exploit indeed. It's almost as if this BLE tracking is a feature rather than an exploit. And if you want to be notified of a device following you around, you have to perpetually enable BLE on your smartphone. But of course, headphone jacks are a thing of the past, and wireless is clearly the future. :)

PierceTheBubble , (edited )

I understand you've read the comment as a single thing, mainly because it is one. However, the BLE part is an additional piece of critique, not directly related to this specific exploit; neither is the tangent on the headphone jack "substitution". It is indeed this fast pairing feature that is the subject of the discussed exploit; so you understood that correctly (or I misunderstood it too...).

I am, however, of the opinion that BLE is a major attack vector by design. These are IoT devices that, especially when "find my device" is enabled (which in many cases isn't even optional: "turned off" iPhones, for example), announce themselves periodically to the surrounding mesh, allowing for the precise location of these devices, and therefore also of the persons carrying them. If bad actors gained access to, for example, Google's Sensorvault (legally, in the case of state actors), or found ways of building such databases themselves, then I'd argue you're in serious waters. Is it a convenient feature, helping one relocate lost devices? Yes. But this nice-to-have also comes with this serious downside, which I believe doesn't come near to justifying the means. Rob Braxman has a decent video about the subject if you're interested.

It's not even a case of kids not wanting to switch; most devices don't even come with 3.5mm jack connectors anymore...

PierceTheBubble ,

No worries! :)

PierceTheBubble , (edited )

AI reviews don’t replace maintainer code review, nor do they relieve maintainers from their due diligence.

I can't help but always be a bit skeptical when reading something like this. To me it's akin to having to do calculations manually while there's a calculator right beside you. For now, the technology might not yet be considered sufficiently trustworthy, but what if the clanker starts spitting out conclusions which equal a maintainer's, like, 99% of the time? Wouldn't (partial) automation of the process become extremely tempting, especially when the stack of pull requests starts piling up (because of vibecoding)?

Such a policy would be near-impossible to enforce anyway. In fact, we’d rather have them transparently disclose the use of AI than hide it and submit the code against our terms. According to our policy, any significant use of AI in a pull request must be disclosed and labelled.

And how exactly do you enforce that? It seems like you're just shifting the problem.

Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.

I mean, there are hallucination concerns, and there are licensing conflicts. Sure, people can also copy code from other projects with incompatible licenses, but someone without programming experience is less likely to do so than when vibecoding with a tool directly trained on such material.

Malicious and deceptive LLMs are absolutely conceivable, but that would bring us back to the saboteur.

If Microsoft itself were the saboteur, you'd be fucked. They know the maintainers, because GitHub is Microsoft property, and so is the proprietary AI model directly implemented in the toolchain. A malicious version of Copilot could, hypothetically, be supplied to maintainers, specifically targeting this exploit. Microsoft is NOT your friend; it closely works together with government organizations, which are increasingly interested in compromising consumer privacy.

For now, I do believe this to be a sane approach to AI usage, and believe developers should have the freedom to choose their preferred environment. But the active usage of such tools does warrant a (healthy) dose of critique, especially with regards to privacy-oriented pieces of software; a field where AI has generally been rather invasive.

PierceTheBubble ,

AI risk management, what could possibly go wrong?

PierceTheBubble ,

proprietary encryption algorithms verified by thought-leading cybersecurity experts and communities worldwide

Trust the experts bro

PierceTheBubble ,

Still, if the service is supposed to be security and privacy-oriented, how about you make the source-code available, so users can verify this for themselves?

PierceTheBubble ,

If it were open source, a lot more eyes could be on it, and therefore vulnerabilities intentionally implemented by Threema itself would have a higher chance of being noticed before they could be exploited, by both hackers and Threema (partners).

PierceTheBubble ,

Ah sorry, it seems I read over that part. Unless programmers have the exceptional skills and time required to effectively reverse engineer these complex algorithms, nobody will bother to do so; especially when it's required after each update. On the contrary, if source code were available, the bar of entry would be significantly lower and require far less specialized skill. So safe to say, most programmers won't even bother inspecting a binary unless there's absolutely no way around it, or they have time to burn. Whereas, if you opened up the source, there would be a lot more, let's say, C programmers able to inspect the algorithm. Really, have a look at what it takes to write binary code, let alone reverse engineer complicated code that somebody else wrote.

I agree with Linus' statement though: I rarely inspect source code myself, but I find it comforting to know that package maintainers, for instance, could theoretically check the source before distribution. I stand by my opinion that it's a bad look for a privacy- and security-oriented piece of software to restrict non-"experts" from inspecting that which should ensure those properties.

PierceTheBubble ,

Yes, because they constitute a significant portion of the eyes traditionally involved in verifying software. You can allow a potentially cherry-picked group of researchers to do the verification on behalf of the user base, but that hinges on a "trust me bro" basis. I appreciate that you've looked into the process in practice, but please understand that these pieces of software are anything but simple. Also, if a state actor were to deliberately implement an exploit, it wouldn't necessarily be obvious at all, even if source code were available; they're state-backed security researchers at the top of their game themselves. Even higher-tier consumer-grade computer viruses won't execute in a virtualized environment, precisely to avoid being detected. They won't compromise when unnecessary, and might only exploit when absolutely required; again, to avoid suspicion.

I fully agree with the last paragraph though, and believe there is an overreliance on digital systems overall. In terms of FOSS software, you have to rely on many, many different contributors to facilitate maintenance, packaging, and distribution in good faith; and sometimes all it takes is just one package for the whole system to become compromised. But even so, I'm more comfortable knowing the majority of software I'm running on my machines is open source than relying on a single entity, like Microsoft, with an abysmal track record in respect of privacy, while doing so in the dark. Of course you could restrict access to Microsoft servers using network filtering, but it's not just that aspect; it's also not having to deal with Microsoft's increasingly restricted experience, primarily serving their perverse dark patterns. I do believe people should handle sensitive files with care, for instance: put Tails on a live USB, leave it off the internet, put the files on an encrypted drive, dismount the drives physically, and store them somewhere safe.

PierceTheBubble ,

Of course, make an anti-feature part of an integral component, which coincidentally also happens to handle personal files...

PierceTheBubble , (edited )

It's almost as if they're seeking to replace these with technology. They've purposefully neglected social services and will continue to do so, to lower the bar for AI and grant themselves an excuse for the poor "substitute". And this isn't at all restricted to the UK; in The Netherlands we're in the midst of it too: the exact same playbook. Modern surveillance cameras (like Axis', for example) have NPUs built in, or camera footage (even from legacy analog cameras, by use of encoders) is linked to an onsite server, a cloud service, or a combination of the two, facilitating the functionality. I hardly believe AI to be the limiting factor here; storage of footage is another story, however. But I think they instead strategically place facial-recognition cameras, while the other cameras simply store abstractions of the footage. Of course, if one of those cameras senses an event it recognizes as possibly of elevated relevance, it might store the raw footage. An example being: railways doing face scanning for "depression detection", instead of implementing 'platform screen doors', of course...

PierceTheBubble ,

Great article! "education" ... "risk assessments" ... "early intervention" ... Got to break their spirit while they're young

PierceTheBubble ,

McCormick, who Zuck noted would be focusing much of her energy on "partnering with governments and sovereigns to build, deploy, invest in, and finance Meta's infrastructure."

Meta last week signed three new long-term contracts with TerraPower, Oklo, and Vistra for nuclear energy. Combined with the company's existing commitments with Constellation Energy, the Social Network has now contracted for roughly 6.6 gigawatts of atomic power

It's all just so soul crushingly in the open and shameless

PierceTheBubble ,

More eyes in the sky. It seems like even pigeons aren't safe from being replaced by technology...

PierceTheBubble ,

Ah, the good ol' revolving door politics