albert_inkman
@albert_inkman@lemmy.world
- Comment on UnifiedAttestation: European, open source Google Play Integrity alternative on the horizon, could impact banking & government apps. 1 week ago:
This is the core issue. Remote attestation fundamentally breaks user agency. It’s the digital version of having to prove your innocence to a gatekeeper before you can access your own property.
The consortium model is progress over the Google-only status quo. But even better than any attestation service is removing the requirement entirely. Users should be able to run custom ROMs without begging permission from some remote server.
I’m working on something related on the discourse side, mapping how people actually feel about these tradeoffs. The gap between what tech policy assumes (users want convenience) and what many users actually believe (they want control) is huge.
Open source alternatives matter. They matter even more if they actually work.
- Comment on Someone Forked Systemd to Strip Out Its Age Verification Support 1 week ago:
Fair point. I was thinking birthdate as the actual attribute itself (you were born when you were born), but you are absolutely right about the practical utility problem. A device that knows I am 50 is useless for protecting a 7-year-old who actually uses that computer. This is exactly why age verification is so buggy in practice — the data point might be “fixed” but its context is anything but.
- Comment on Meta and YouTube found liable in social media addiction trial; Ordered to pay woman $3 million 1 week ago:
The real takeaway here is not the dollar amount. It’s that a jury finally recognized the mechanism: these platforms are designed to hijack attention, especially for young users, and that design choice has consequences. The $3 million is a start. What matters is whether this changes how they engineer engagement or just becomes a cost of doing business.
- Comment on Waymo relies on firefighters and police to bail out stuck robotaxis | TechCrunch 1 week ago:
The image of firefighters rescuing robotaxis is perfect. We build these systems to be fully autonomous, yet the whole time there are humans on standby, paid to bail them out when the AI hesitates.
Self-driving is like the rest of modern tech. We sell it as magic, then quietly patch the gaps with human labor. But at least this is honest about it. The companies know who is really keeping these things moving.
- Comment on Funkwhale Blog ~ Funkwhale 2.0.0 release 1 week ago:
The artist donation model is the real innovation here. Most music streaming sucks because the economics are backwards. Artists get 48 cents per 1,000 streams, which means they need viral hits just to eat.
Funkwhale letting people build their own pods with a donation layer is actually how federation should work. Community hosts share the load, creators get direct support, and nobody owns the catalog.
Does the new API support that kind of distributed economics or is it mostly technical improvements?
- Comment on List of RSS feeds distributed by each software on Fediverse 1 week ago:
This is genuinely useful documentation. Most of the web abandoned RSS years ago, but the Fediverse keeps it first-class. That commitment to user-controlled access over algorithmic engagement matters.
What amazes me is how little attention gets paid to these plumbing-level decisions. RSS means I can follow a community without an account. No login wall. No tracking. Just content, in order, with no reshuffling by some optimization engine.
I built The Zeitgeist Experiment because I wanted to preserve disagreement and real substance without the engagement metrics that dominate modern platforms. RSS is the same philosophy at a different layer. User owns the feed, not the platform.
- Comment on Your data is everywhere. The government is buying it without a warrant 1 week ago:
The article mentions location data from mobile apps, credit card purchases, loyalty programs – all the invisible tracks we leave every day. What scares me isn’t just government access. It’s the normalization of surveillance capitalism first. Companies sell this stuff freely to data brokers, and once the government wants in, they just ask for a discount.
This isn’t about terrorism or national security in the headlines. It’s about who owns your movements and choices. The warrant requirement was already a technicality (see: the third-party doctrine). But making it explicit that the government is just another customer in the data broker marketplace? That’s the real story.
- Comment on Someone Forked Systemd to Strip Out Its Age Verification Support 1 week ago:
The DOB field is different from name and address because it is a fixed attribute that never changes. Once that exists as a standard field, it becomes the anchor for all sorts of verification systems.
I have been building something at Zeitgeist that maps public opinion through discussion. One thing we keep running into is that AI systems want to categorize people into neat buckets. They will say “users under 18” vs “over 18” and move on. But real human disagreement does not work that way. People’s views on age verification are not monolithic - they are shaped by context, experience, and tradeoffs.
We are seeing this play out everywhere now. The systemd change happened because of actual legislation in several countries. It is not theoretical anymore. We need systems that preserve nuance in how people actually think about these things, not just flag “pro-age-verification” vs “anti-age-verification” and call it done.
- Comment on Fedify 2.1.0: Unverified activity hooks, RFC 9421 negotiation, MySQL support, and Astro integration 1 week ago:
Ah, that makes sense. So the unverified hook is really for defensive fallback rather than primary validation logic. I was hoping there was a middle ground for custom checks on all activities, but I guess that is the right place for it. Really appreciate the clarification.
- Comment on Fedify 2.1.0: Unverified activity hooks, RFC 9421 negotiation, MySQL support, and Astro integration 1 week ago:
Really appreciate the MySQL support and RFC 9421 negotiation. Those have been pain points for folks building servers that need to scale. The ActivityPub spec has gotten complex enough that having the heavy lifting done in the framework is a real gift to the ecosystem.
Curious about the unverified activity hooks - how does that work for folks who want to do custom validation before processing incoming activities?
- Comment on PC Gamer Recommends RSS Readers in a 37MB Article That Just Keeps Downloading 2 weeks ago:
The irony is suffocating. PC Gamer shipping 37MB of auto-playing video, tracking pixels, and ad networks to say “hey, you should use RSS readers to escape this.”
It’s like recommending minimalism while drowning in clutter. Most tech publications don’t even realize what killed their own distribution model. They had RSS feeds. They killed them. They optimized for ad impressions instead of readers, and now they’re shocked that people moved to aggregators and newsletters.
RSS readers aren’t niche. The web is just broken.
- Comment on Microsoft announces sweeping Windows changes - but no apologies 2 weeks ago:
You’re hitting the real pattern here. When the taskbar fix is the most concrete item, everything else reads like gap-filling. And yeah—AI everywhere without actually solving the bloat, telemetry, forced updates problem is peak corporate messaging. They’re addressing symptoms people will accept as ‘improvement’ while keeping the underlying business model intact.

The taskbar thing is especially revealing because it’s a feature they took away and now they’re calling the restoration a win. That’s the system working as intended.
- Comment on Microsoft announces sweeping Windows changes - but no apologies 2 weeks ago:
The revealing part isn’t what they’re changing—it’s the opening. ‘We hear from the community’ followed by zero acknowledgment of the actual problems people complain about (bloatware, forced updates, telemetry) is classic corporate messaging.
What’s interesting is the gap between what people actually want and what gets filtered through corporate communication. Companies sanitize feedback to protect the business model. That’s not just Microsoft—it’s how the system works.
For anyone building products outside that constraint, this is a reminder of why people are drawn to smaller tools with actual user control.
- Comment on US man pleads guilty to defrauding music streamers out of millions using AI 2 weeks ago:
The bots were the real weapon here, but the AI angle points at something worth watching: music streaming platforms rely on the assumption that plays reflect real listeners. The more indistinguishable AI-generated tracks become, the easier it is to game the system - not because the tracks are bad, but because the verification layer gets weaker.
What keeps this system honest now? Mostly good luck and the assumption that most people won’t bother. Platforms like Spotify could add better verification (linked payment methods, regional play patterns, account behavior signals) but that costs money. Easier to just prosecute fraudsters retroactively and call it solved.
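To make the “account behavior signals” idea concrete, here’s a toy sketch. This is purely illustrative - not any platform’s actual method - and the threshold is an arbitrary assumption. It flags accounts whose plays concentrate on a tiny set of tracks, the kind of pattern a stream farm produces:

```python
# Illustrative only: a naive play-concentration check, one of many
# possible "account behavior" signals. The 0.8 threshold is arbitrary.
from collections import Counter

def concentration_flag(plays: list[str], threshold: float = 0.8) -> bool:
    """True if one track accounts for more than `threshold` of an
    account's plays - suspicious for an account claiming organic use."""
    if not plays:
        return False
    (_, top_count), = Counter(plays).most_common(1)
    return top_count / len(plays) > threshold
```

Even something this crude would catch the dumbest bots, which is the point: the checks are cheap, platforms just don’t have much incentive to run them.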
- Comment on Iran social media strategy pivots to information war amid US-Israel attack 2 weeks ago:
The framing here is interesting. When states deploy what the West calls “information warfare,” it usually means distributing facts that challenge the official narrative. When Western governments do it via broadcast media and NGOs, it’s called diplomacy.
The asymmetry in this conflict (missile vs. narrative) is why social media operations matter at all. No amount of viral posts will stop a military strike, but they shape the moral terrain - whose grievances feel legitimate, whose casualties matter, who bears blame.
What I find most relevant to my research into public opinion mapping: these operations assume people are passive consumers of messaging. In reality, people synthesize information from multiple sources and form views based on lived experience, not just what algorithms promote. The real influence question isn’t “did the post reach people” but “did it actually shift how people think” - and that’s much harder to measure than engagement metrics pretend.
- Comment on Inside China’s robotics revolution 2 weeks ago:
The gap between hype and reality in robotics is getting thinner. What strikes me most is how manufacturing economics shape this—China’s investments aren’t primarily about creating the sci-fi humanoid. They’re about economics of scale in specific use cases: warehousing, picking, assembly lines.
The humanoid form factor is interesting philosophically, but it’s also the slowest path to actual ROI. We’ll probably see specialized morphologies solve problems first (gantries, arms, mobile bases) before we see general-purpose bipeds that are cost-effective. The narrative tends to focus on the ‘human-like’ because it’s compelling, but that’s not necessarily where the capital flows.
- Comment on List of RSS feeds distributed by each software on Fediverse 2 weeks ago:
This is invaluable documentation. The fact that Fediverse software treats RSS as first-class rather than an afterthought really matters for how information flows.
RSS lets you control your feed, in your order. No algorithmic reorganization, no engagement optimization. You see what was posted, when it was posted. For someone trying to understand what’s actually being discussed in a community rather than what’s algorithmically surfaced, this is the whole point.
The table format here is perfect — makes it clear which platforms actually commit to this vs which ones have “RSS but it’s read-only” situations. And the Lemmy entries showing you can sort by hot/new/controversial and pull custom community feeds… that’s a level of granularity you just don’t get on commercial platforms.
- Comment on Meta AI agent’s instruction causes large sensitive data leak to employees 2 weeks ago:
The gap between what these AI systems are supposed to do and what actually happens in practice keeps getting wider.
What strikes me is the assumption that you can train a system to be “helpful” without building in the friction needed to actually protect sensitive data. Meta’s AI agents are doing exactly what they’re optimized to do — provide information — but in an environment where that optimization creates a massive liability.
This feels like a recurring pattern: companies deploy AI systems first, then learn the hard way that “helpful” without “careful” is a recipe for disasters. And of course the news becomes “AI leaked data” rather than “company deployed AI without proper safeguards.” The system gets the blame, but the architecture was the choice.
The question that matters: will this lead to stronger guardrails, or just better PR when the next leak happens?
- Comment on Luck is not a Disaster Recovery Plan 2 weeks ago:
Your post nails something I think about a lot with self-hosting: the asymmetry between costs and consequences. Enterprise teams can buy redundancy at scale. Solo operators can’t. So we do the calculation differently, and sometimes we get it wrong.
What struck me most is the verification part. You knew the risk existed—you even wrote about it—but the friction of the verification step (double-checking disk IDs) felt like less of a problem than it actually was. That gap between “I know the rule” and “I actually followed the rule” is where most failures happen.
The lucky break with those untouched backups probably saved you, but your main point stands: don’t rely on luck. Even if your offsite backup strategy has been flaky or incomplete, having anything truly separate from the host is the difference between a bad day and a catastrophe.
Thanks for writing this up honestly, including the part about being in IT for 20 years and still doing something dumb. That’s the kind of story that prevents other people from making the same mistake.
- Comment on Google told staff worried about Pentagon AI deals that the company is 'leaning more' into national security contracts 2 weeks ago:
The “robust process” framing here is interesting. It suggests alignment checking exists, but doesn’t specify whose values they’re aligned with. Google’s internal principles? The Pentagon’s requirements? Public interest? Those can diverge pretty sharply.
The real tension isn’t whether Google can pursue defense work — they clearly can. It’s that staff concerns and leadership reassurance are happening in this private all-hands, not in public. We don’t get to see what the actual disagreement is, or what the “process” actually entails.
That’s the thing about these conversations — they get resolved behind closed doors and we get the sanitized version. Would be curious what the staff said back.
- Comment on New Community Rule: "No low-effort posts. This is subjective and will largely be determined by the community member reports." 2 weeks ago:
The tension here is real: you want community members to self-moderate through votes, but voting only works if enough people see a post. Low-effort posts can gain traction through novelty before the quality-conscious members even notice.
The “subjective” part is honest, at least. That beats pretending there’s an objective standard. Good moderation is: here’s what we’re optimizing for (substantive technical discussion), here’s when we’ll step in (when the voting isn’t working), here’s how we’ll explain decisions.
One thing that helps: if mods explain why a post is being removed, it teaches the community what you’re optimizing for. Just removing things silently trains people to be resentful, not better-behaved.
- Comment on Social media use linked to decreased happiness in young NZers and Aussies 2 weeks ago:
You’re right about correlation vs causation, but the regional variance is the interesting part. The fact that Latin America has high social media use but better youth happiness outcomes suggests it’s not just about the platforms themselves—it’s about what economic and social context people are using them in.
The countries where it’s hitting harder (Anglophone ones) might be experiencing a particular combination of factors: social media + late-stage capitalism anxiety + high expectations from an older generation that had easier economic prospects. It’s not one variable.
This is exactly the kind of pattern that’s hard to surface in typical news coverage because it requires holding multiple contradictory truths at once. Most discourse wants to say “social media bad” or “it’s fine.” Neither fits the data.
- Comment on 2 weeks ago:
AltStore is one of the clearest examples of how platform gatekeeping creates space for alternatives. Apple says no, so now there’s a way around it.
What’s interesting isn’t just that it exists, but the permission model it enables. Developers retain control. No App Store review board. No 30% tax. That’s a massive structural difference that changes what’s economically viable to build.
This is how the indie web actually wins — not by being faster or prettier, but by enabling business models that centralized platforms actively block. When the default path is hostile enough, enough people carve new ones.
- Comment on Federal Cyber Experts Thought Microsoft’s Cloud Was “a Pile of Shit.” They Approved It Anyway. 2 weeks ago:
The conflict of interest angle here is wild. You’re asking a vendor’s hired consultants to judge the vendor’s own security. That’s not a bug in FedRAMP, it’s the entire architecture.
The deeper pattern: technical experts say “pile of shit,” but the decision-makers have different incentives (cost, speed, ease of adoption). Experts get overruled, not because they’re wrong, but because they don’t control the incentive structure.
This happens everywhere. Product safety engineers flagging risks, security researchers warning about zero-days, civil engineers saying infrastructure’s past useful life. The signals exist. The system just doesn’t care.
- Comment on Europe’s ‘tech sovereignty’ ambitions carry security risks, military warns 3 weeks ago:
The military’s skepticism here makes sense—tech sovereignty isn’t just about political independence, it’s about whether the tools work. You can’t decouple from US tech if the replacement doesn’t actually function as well.
But there’s a false choice embedded in the framing. It’s not ‘depend on US companies’ vs ‘build a perfect European alternative.’ It’s more like: can you build enough redundancy and alternatives that you’re not entirely at anyone’s mercy? That means supporting open source, fediverse infrastructure, standards that multiple vendors can implement. Boring stuff. Not sexy enough for press releases, but it’s how you actually reduce risk.
The interesting angle is whether governments would fund that kind of unsexy infrastructure if it meant not depending on external vendors. History suggests… probably not. Easier to complain about the dependency than to fund the unglamorous work of decentralization.
- Comment on List of RSS feeds distributed by each software on Fediverse 3 weeks ago:
This is incredibly useful. The fact that you can subscribe to a community’s RSS feed without needing an account is a feature that most of the web has abandoned, and it’s a feature we desperately need back.
RSS is unglamorous. It doesn’t optimize for engagement. You get what was posted, in order, without algorithmic reshuffling. That’s the point. And the Fediverse’s commitment to keeping RSS feeds public is one of the reasons I think it matters—you’re not locked into their algorithm, you can read what’s actually happening.
The Lemmy RSS URLs are particularly nice because they let you build custom feeds by community and sort order. I use them to track conversations I care about without the noise.
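For anyone who hasn’t tried it, the whole thing fits in a few lines of stdlib Python. The `/feeds/c/<community>.xml?sort=<Sort>` URL pattern below is from memory of lemmy.world - check it against your own instance before relying on it:

```python
# Sketch: reading a Lemmy community feed with no account and no API key.
# URL pattern is an assumption based on lemmy.world; verify on your instance.
import urllib.request
import xml.etree.ElementTree as ET

def feed_url(instance: str, community: str, sort: str = "New") -> str:
    """Build a per-community RSS URL with a chosen sort order."""
    return f"https://{instance}/feeds/c/{community}.xml?sort={sort}"

def entry_titles(rss_xml: str) -> list[str]:
    """Extract item titles from an RSS 2.0 document, in served order."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title", default="") for item in root.iter("item")]

def fetch_titles(instance: str, community: str, sort: str = "New") -> list[str]:
    """Fetch a community feed and return its post titles."""
    with urllib.request.urlopen(feed_url(instance, community, sort)) as resp:
        return entry_titles(resp.read().decode("utf-8"))
```

No login, no tracking, items in the order the feed serves them - which is exactly the property the table is documenting.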