Senal
@Senal@programming.dev
- Comment on Someone Forked Systemd to Strip Out Its Age Verification Support 5 hours ago:
Why do people so often invert the burden of proof?
I know, right?
If someone says “Picking your nose will cause brain cancer in 40 years”, then they have the burden to prove that. Nobody has the burden to disprove it.
Absolutely, and if you’d asked for proof of their accusation you’d be correct in this instance.
They made the accusation that this is a step towards making these age fields mandatory, and controlled by third-party age verification services, so they have the burden to prove that there is a way to do that.
They did and you could ask them to make a case for that, you didn’t.
You provided your own accusation:
You do know that this is a slippery slope argument, right?
And proceeded to tell them that they are required to provide proof to dispute your new accusation.
You would have to demonstrate that there is an intention there to require third party services to validate the age of users using Linux… Or that there is an intention to do so by systemd and the broader open source developers.
Which is what i was addressing specifically when i said:
You, as the party making the accusation of fallacy, would be required to prove that the expectation of escalation is unreasonable or that the intention was not there.
I find it highly unlikely, because most people using Linux systems at home have admin privileges. Which makes this whole point moot, since they can fake whatever they like to the software running on top.
It makes the field itself mostly a non-issue in the single isolated context of “does this field, on its own, constitute age verification”.
The point most people are trying to make is that it’s a part of a larger context.
- Comment on Someone Forked Systemd to Strip Out Its Age Verification Support 7 hours ago:
Actually no, they don’t.
You, as the party making the accusation of fallacy, would be required to prove that the expectation of escalation is unreasonable or that the intention was not there.
- Comment on Google tipped off authorities to illicit images in Canadian doctor's account, search warrants say 1 day ago:
Ah, this is probably my fault.
I’m not the person you were replying to so i wasn’t really arguing any of these points, i just saw the request and knew of an example, so i provided it.
Just in case this was for me specifically I’ll answer:
Yea I have zero issue with the fact that accounts with pictures of children’s genitals on them should be referred to the authorities.
Pictures of children’s genitals aren’t inherently CSAM, there are plenty of parents and family members with entirely innocent pictures of their kids on their phones.
There are examples of this in the reported cases of false positives leading to bad outcomes, this is easily searchable.
I’m not saying to not do anything, I’m saying blanket reporting is an ineffective brute-force approach.
If people want privacy, host the pictures locally.
In theory yes, in practice, not so much.
On-device scanning exists and is in use/has been in use on phones; examples of this are also easily searchable.
When you’re storing images with a cloud provider, they become responsible for the images that they store. If it’s a photo of a child’s genitals, it’s illegal for them to have those images on their servers and they need to protect themselves.
The need for legal protection is valid; scanning cloud-uploaded photos is a user privacy nightmare, but expected.
End-to-end encryption (where only the user’s device can decrypt and see the photo) would probably stand up legally, but then they wouldn’t be able to use the cloud photos to make money.
The problem comes with the recognition of illegal and the way it’s handled.
- Comment on Google tipped off authorities to illicit images in Canadian doctor's account, search warrants say 1 day ago:
Yes, but also it’s the 4th result on that page
- Comment on Google tipped off authorities to illicit images in Canadian doctor's account, search warrants say 1 day ago:
- Comment on 5 days ago:
I agree it’s a bit stark but it does ease up once you get used to the hunting and gathering mechanics, not by much though.
I think the in-game reasoning is that the cold you’re experiencing is already coldest-Canada cold, but with an element of extra ice-age cold on top.
Coldness increases calorie consumption due to the heating requirements, i think, but i can’t say I’ve been anywhere cold enough to judge if the game is accurate or not.
- Comment on 5 days ago:
Honestly it’s still an excellent game, I’m just salty about the nearly 8 year delay.
As it so happens i checked on it and the release date for the last part is apparently the end of march 2026.
I’m not holding my breath, but if it comes out on time, that’ll be a nice bonus.
- Comment on 5 days ago:
Hmm, i slept on that one because i mistook it for Keep Talking And Nobody Explodes, which is multiplayer and therefore not my usual jam.
I’ll give it a look.
- Comment on 5 days ago:
I really enjoyed the first two chapters, it’s up in my top 50.
To be honest I’ve only made it to the beginning of the third chapter but that’s mainly because i didn’t want to get further into something that wasn’t complete yet.
Which was more prescient than i imagined because it was 6 years ago.
- Comment on 5 days ago:
Gone home was great, another good one was Everybody’s Gone to the Rapture.
- Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash 1 week ago:
Greater compared to human code? Not sure about that, but I’m not disagreeing either. Greater compared to verifiably able programmers, sure, but in general?..
Both.
The reasons are quite hard to describe, which is why it’s such a trap, but if you spend some time reviewing LLM code you’ll see what I mean.
One reason is that it isn’t coding for logical correctness, it’s coding for linguistic passability.
Internally there are mechanisms for mitigating this somewhat, but it’s not an actual fix so problems slip through.
I don’t think I’m getting your point here. Do you mean by that, the code basically lacks focus on an end goal? Or are you talking about the fuzzyness and randomization of the output?
The latter, if you give it the exact same input in the exact same conditions, it’s not guaranteed to give you the same output.
The fact that it’s sometimes close to the same actually makes it worse, because then you can’t tell at a glance what has changed.
It also isn’t as simple as using a diff tool, at least for anything non-trivial, because its variations can be in logical progression as well as language. Meaning you need to track these differences across the whole contextual area.
As I said, there are mitigations, but they aren’t fixes.
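That non-determinism can be sketched in a few lines. This is a toy illustration only, not a real model: the vocabulary and probabilities are made up, but the mechanism (sampling each token from a probability distribution, as temperature-based decoding does) is the relevant part.

```python
import random

# Toy stand-in for sampling-based decoding: a fixed next-token
# distribution that the "model" draws from. Same input, same weights --
# the completion still varies run to run because of the sampling step.
TOKENS = ["foo", "bar", "baz"]   # hypothetical vocabulary
WEIGHTS = [0.5, 0.3, 0.2]        # hypothetical model probabilities

def generate(length=6):
    """Draw a 'completion' token by token from the same distribution."""
    return " ".join(random.choices(TOKENS, weights=WEIGHTS, k=length))

# Call the exact same function many times and collect distinct outputs.
outputs = {generate() for _ in range(200)}
print(len(outputs) > 1)  # identical input, multiple different outputs
```

Pinning a seed or using greedy (temperature-zero) decoding reduces this, which is the sort of mitigation rather than fix i mentioned; in real deployments, batching and floating-point quirks can still leak variation through.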
- Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash 1 week ago:
Let’s assume we’re skipping the ethical and moral concerns about LLM usage and just discuss the technical.
it makes an impression on me as if human code would be free of such errors
Nobody who knows anything about coding is claiming human code is error free, that’s why code reviews, testing and all the other aspects of the software development lifecycle exist.
To me it sounds like nobody should ever trust AI code
Nobody should trust any code unless it can be verified that it does what is required consistently and predictably.
because there can or will be mistakes you can’t see, which is reasonably careful at best and paranoid at worst
This is a known thing, paranoia doesn’t really apply here, only subjectively appropriate levels of caution.
Also it’s not that they can’t be seen, it’s just that the effort required to spot them is greater and the likelihood of missing something is higher.
Whether or not these problems can be overcome (or mitigated) remains to be seen, but at the moment it still requires additional effort around the LLM parts, which is why hiding them is counterproductive.
At some point there is no difference anymore between “it looks fine” and “it is fine”.
This is important because it’s true, but it’s only true if you can verify it.
This whole issue should theoretically be negated by comprehensive acceptance criteria and testing, but if that were the case we’d never have any bugs in human code either.
Personally i think the “uncanny valley code” issue is an inherent part of the way LLMs work and there is no “solution” to it; the only option is to mitigate as best we can.
I also really really dislike the non-declarative nature of generated code, which fundamentally rules it out as a reliable end-to-end system tool unless we can get those fully comprehensive tests up to scratch, for me at least.
- Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash 1 week ago:
Think of it like a jeweller suddenly announcing they were going to start mixing in blood diamonds with their usual diamonds “good luck finding them”.
Functionally, blood diamonds aren’t different.
Leaving aside that you might not want blood diamonds, are you really going to trust someone who essentially says “Fuck you, i’m going to hide them because you’re complaining”?
If you don’t know what blood diamonds are, it’s easily searchable.
I’ll go on record as saying the aesthetic diamond industry is inflationist monopolist bullshit, but that doesn’t alter the analogy.
Secondly, it seems you don’t really understand why LLM generated code can be problematic, i’m not going to go in to it fully here but here’s a relevant outline.
LLM generated code can (and usually does) look fine, but still not do what it’s supposed to do.
This becomes more of an issue the larger the codebase.
The amount of effort needed to find this reasonable looking, but flawed, code is significantly higher than just reading a new dev’s version.
Hiding where this code is makes it **even harder** to find.
Hiding the parts where you really should want additional scrutiny is stupid and self-defeating.
- Comment on Dragon Quest creator Yuji Horii says English translations inevitably strip away a lot of a game's "flavor" 2 weeks ago:
Iirc the situation is similar in the UK, for hunting and “pest control”
- Comment on Dragon Quest creator Yuji Horii says English translations inevitably strip away a lot of a game's "flavor" 2 weeks ago:
Technically there should be some legal recourse, perhaps jail; whether or not that comes to pass is subject to the same shenanigans law enforcement usually comes with.
But that isn’t what they were saying; they were saying that in Japan almost no-one is allowed guns, so the likelihood that a person was defending their house with a legal gun is very low.
I agree it wasn’t totally clear.
- Comment on System76 on Age Verification Laws 2 weeks ago:
By the sound of it, the disagreement is mostly in how direct an impact AB1043 will have on government plans for data collection and authoritarianism.
That’s not really the original disagreement i was referencing, nor is it a position i’ve taken, we agree that the local only bill isn’t the big bad.
You twice referenced the slippery slope fallacy when replying to comments clearly describing future actions; i was pointing out that it doesn’t meet that criterion because there is a reasonable assumption that the described escalation will occur.
Your original responses to which i was referring:
This is a slippery slope fallacy. Just because the option is provided to self-identify age, doesn’t mean that it will be replaced with more complex and direct data collection (which I am against, if it wasn’t clear) later
You’re again relying on slippery slope fallacy to say that because I’m okay with this one specific form of age gating, I’m okay with every other one, which I have repeatedly made clear is not true.
The first one is the main issue i was pointing out, the second one isn’t how the fallacy is applied at all.
As no one is taking the position that AB1043 is the actual danger most of what you are arguing doesn’t really apply.
Similarly with the Overton window, where it has been standard practice for over a decade to have a “are you at least 18?” popup, and for every single service to ask you your age, if not more. We absolutely need more data protections for systems such as this (ideally an outright ban on saving this information) but this doesn’t seem to make it worse.
Emphasis mine.
Hard disagree, moving the responsibility of this from individual websites to the OS is a big jump in scope.
The same kind of jump as making it the ISP’s responsibility if they serve illegal content from individual websites (as has been suggested).
Aside from that it centralises the surface area for future changes and enforcement.
Basically, from my understanding, this isn’t a step towards data collection or authoritarianism, and provides no significant benefit to either of those causes - it’s effectively a technical standard.
This is the disagreement: i (and obviously many others) are pointing at the long and comprehensive list of similar initiatives, both recent and historic, that were stepping stones to further encroachment and saying “oh look, another small step in the continued and provable encroachment upon privacy”, while you seem to be advocating for the benefit of the doubt.
Like, if this age-verification flag was proposed by the Linux Foundation, and agreed to by others, would the backlash be this big?
If the Linux Foundation had the same history of shenanigans, then yes.
Similarly, I don’t see any contradiction between wanting a ban on storage/sharing of user data, and the implementation of a flag like this - even if we are able to ban all storage of user data, this law would be unaffected. That’s what I’m trying to figure out - how do people think that this leads towards those end goals? How would blocking it improve anything?
Ignore the technical implementation of this one step, nobody is saying this is the endgame big bad.
Think of it as a prevention measure, a single ant in the kitchen isn’t a problem in and of itself, but it’s almost certainly an indication of a larger potential future problem.
You are arguing it’s not a problem because the ant only has 5 legs; everyone else is saying the leg count doesn’t matter, it’s still an ant.
Is it just a difference in opinion about the significance of the Overton window?
See above
Is there a technical aspect I’m missing?
Not necessarily, it’s just that you are arguing a single technical issue in a conversation about perceived intentionality.
Is there some legal advantage this provides to surveillance that I’ve missed?
See above
Right now, it seems like everyone is arguing against a strawman, implying that I support the idea of government/corporate surveillance and censorship, that I don’t expect that they’ll continue to be evil, or they’re simply saying its bad because its cosmetically similar to laws that do impede on freedoms. Given how unanimous the backlash is, I must be missing something?
That you are using a point nobody disagrees with to imply correctness in a context where said point doesn’t really apply makes it seem like you are coming at this in bad faith.
When bad faith is assumed, people look for underlying reasons.
- Comment on Avocado. Is it really so untasty or I am doing something wrong? 2 weeks ago:
Or the avocado is bland? Not all avocados are created equal.
I would hedge that the penis consists of more than just regular skin; there is a fair amount of erectile tissue in there as well, though i can’t vouch for a scientific difference in the taste experience.
- Comment on Bird Law 2 weeks ago:
In the case of ducks, that’s quack on quack crime
- Comment on Avocado. Is it really so untasty or I am doing something wrong? 2 weeks ago:
- Comment on System76 on Age Verification Laws 2 weeks ago:
Ah, i think i see where the difference in opinion is; claiming this event leads directly to (as in, the very next step is) AI/ID verification could be considered an unreasonable jump, i suppose.
In my case i was interpreting the argument as this event will almost certainly lead to further encroachment events into privacy, one of which would probably be the AI/ID verification.
To me this is a reasonable assumption because it’s what has happened in pretty much all of the recent instances of similar events occurring, and therefore not a slippery slope fallacy.
TL;DR
On further examination, the technical things you mention seem to be correct if you assume that this bill alone is the vector for privacy encroachment, but they don’t pan out at all if it is assumed that other steps will follow; which, given precedent, is highly likely to happen.
On the technical implementation:
The reason its a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, in spite of the fact that AB1043 implements a system that would be redundant with AI or ID based methods,
As an aside, i’m not sure anyone is claiming that this bill is a direct attempt at a hard AI/ID verification system; rather they are claiming that this is another step in a series of encroachments that will lead to escalating requirements and enforcement, AI/ID verification being an obvious step in that series.
From a technical standpoint you are correct, it outright states that photo ID upload isn’t required, yet.
Opinion: A cynic might see this as an indication that the politicians understand that political and public appetite for full photo ID requirements is less than optimal, so this is just a small step in shifting the Overton window on this subject.
technically doesn’t offer any good way to transition into an AI or ID based system (since it all has to be done locally),
That is only correct in a very narrow set of circumstances; that local requirement isn’t set in stone at all.
All that needs to happen to go from this to full ID checks is to mandate they use a “trusted” service for verification. It wouldn’t need to be an always-online thing either; think of how the bullshit online verification systems that already exist work, i.e. you need to go online every x days or your system/service/app will stop working.
Opinion: I fully expect any “trusted” service they designate to be something that serves the governmental and corporate desire for as much data as they can get away with. This isn’t even a stretch; just look at the service Discord was trying to implement, the one with deep ties to Palantir.
and legally, imposes additional data protection laws that are likely to interfere with AI-based age verification.
This isn’t wrong so much as it seems naive; we are talking about bills that change laws, and any law introduced can be revoked, superseded or have “exceptions” carved out, such as the current favourite “think of the children” thin veneer they are using.
It wouldn’t take much to move from “all data is protected” to “all data is protected, unless we need it to protect the children”
That’s not even taking into account that the laws are only as good as the system upholding them; the current US system is sketchy AF, and other countries have similar issues with uneven application of laws.
Not to say we should throw our hands up, say “what’s the point?” and just do nothing, but pretending that these laws aren’t susceptible to the same issues affecting everything else doesn’t help anyone either.
The problem with AI and ID age verification isn’t the age verification. Its the data collection, limits on personal freedom, and to some, the inconvenience.
Agreed.
So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but also adds stricter regulation on data collection when this flag is used,) or personal freedoms - especially not when compared to what is already the existing standard of asking the user for their age and/or if they’re over 18.
Mostly agreed.
The points i’d raise are that the whole idea of age verification is an encroachment upon personal freedoms for some, so there’s an aspect of subjectivity to that.
In addition, relying on data collection regulations at this point is almost dangerously naive; corporations and governments alike have shown that they will basically ignore them outright or make up some exception. This isn’t conjecture, this is easily searchable: think Flock, Ring cameras, Stingray, PRISM, anything Palantir is involved in, Cambridge Analytica, broad warrantless data requests etc.
There is absolutely no reason to give the benefit of the doubt to parties that have repeatedly proven to be doing sketchy shit.
- Comment on System76 on Age Verification Laws 2 weeks ago:
The alleged fallacy is the expectation that escalating follow-on events would arise from the event in question.
It’s only a fallacy if it’s unreasonable to expect the subsequent steps to occur or in this case, be attempted.
Does that mean it’s a guarantee, of course not, just that the fallacy doesn’t apply.
The intention or plan for escalating steps doesn’t have to be laid out perfectly to draw the parallels between this and previous similar events that were then subsequently used as foundations for greater reach.
Your reasoning around the technical implementation of such escalation isn’t applicable here (in the conversation about whether or not the fallacy applies)
If you want to argue that they won’t escalate, or that it’s not possible, go right ahead; but raising a fallacy argument when it doesn’t apply isn’t a good start.
If you want, i can address your arguments around implementation directly, as a separate conversation?
- Comment on System76 on Age Verification Laws 2 weeks ago:
If you’re going to reference the slippery slope fallacy so much, you should probably read where and when it actually applies.
From the wikipedia entry:
When the initial step is not demonstrably likely to result in the claimed effects, this is called the slippery slope fallacy.
You yourself just acknowledged that the worst-case is already happening, so the assumption that the worst case will continue to happen is reasonable.
Unless you wish to argue that :
The worst-case scenario is already happening
followed by you saying
Okay, but
isn’t an acknowledgement ?
- Comment on Apple Accidentally Leaks 'MacBook Neo' 3 weeks ago:
Not who replied to you originally but,
You aren’t wrong (you even stated that more is probably better), just not necessarily presenting the whole picture.
RAM compression isn’t a benefit-only scenario; there is a cost in processing power to make that happen.
So it’s a trade off of memory utilisation vs processing requirements.
Whether or not it’s worth it is down to circumstance, though generally i think it’s worth the tradeoff.
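To make the trade-off concrete, here’s a rough sketch using zlib as a stand-in for whatever codec the OS actually uses (real implementations favour much faster algorithms like LZ4; the buffer contents and size here are made up):

```python
import time
import zlib

# Hypothetical memory page contents: repetitive data, which is the
# favourable case for RAM compression.
page = b"mostly repetitive page contents " * 4096  # ~128 KiB

start = time.perf_counter()
compressed = zlib.compress(page)      # memory saved...
restored = zlib.decompress(compressed)
elapsed = time.perf_counter() - start  # ...paid for in CPU time

assert restored == page               # the round trip is lossless
saved = len(page) - len(compressed)   # bytes of RAM reclaimed
print(saved > 0, elapsed > 0)
```

Whether the CPU spent is worth the RAM reclaimed depends on the workload and how compressible the data is, which is the circumstance-dependence i mean.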
Unified memory is useful in specific circumstances, most notably LLM/ML scenarios where high vram utilisation is part of the process.
It’s not an apples to apples comparison by any means.
- Comment on The same people who rage against authority love moderating communities where their ideology is the only one allowed 3 weeks ago:
I appreciate it.
Yeah, I’m on the default but i’ll explore the other ones now, see if there is anything i prefer.
- Comment on The same people who rage against authority love moderating communities where their ideology is the only one allowed 3 weeks ago:
You’re making quite a lot of frankly weird assumptions.
I’ve clearly stated what i’m referring to and how i got there, if you think there is an unsupported statement then reference it directly and i will respond.
That being said, fuck, i think i’ve seen two posts next to each other and missed where it changed from them to you.
That’s entirely my bad and i apologise, my response was supposed to be for the other person.
- Comment on The same people who rage against authority love moderating communities where their ideology is the only one allowed 3 weeks ago:
“You can’t reason someone out of a position they didn’t reason themselves in to.”
Though it is occasionally possible to point out how their arguments don’t stand up to scrutiny and get them to engage on it.
Only works with the ones not doing it on purpose, however.
- Comment on The same people who rage against authority love moderating communities where their ideology is the only one allowed 3 weeks ago:
key words there are discourse and discussion.
As is explained in a few responses to your paradox of tolerance reply (that you seem to have conveniently not replied to so far), the kind of discussion or conversation they are referencing requires both parties to be working in good faith.
from your own reference
as long as we can counter them by rational argument
If one party can’t or won’t provide logic or reasoning for their side of an exchange, that’s not a discussion, because there is nothing to discuss with someone not willing to engage in good faith.
There are absolutely places that are ideological echo chambers, despite claiming otherwise, but banning someone for the inability (or unwillingness) to engage in good faith isn’t a removal based on ideology, it’s a removal based on not adhering to the basic tenets of how discussions are supposed to work.
If it just so happens that most of that kind of banning happens to people with ideologies you subscribe to, perhaps it’s worth considering how you can help these people understand how to have an actual conversation.
That all being said, from what i’ve seen here I’d guess you’re on the purposeful bad faith side of things so I’m not expecting any reasonable consideration, but feel free to surprise me (or block me, i suppose).
- Comment on The script is mysterious and important. 3 weeks ago:
They even had an ending in the movie that was closer to the original, but they cut/changed it because it didn’t test well.
I’d guess that was because it was an ending that followed the original storyline and didn’t make sense without the rest of the movie also following the original storyline.
spoiler
It turns out (or is apparent in general) that the “zombies” are sentient/sapient and to them he’s the monster in the dark(or daylight as is the case here), from their point of view he’s basically been abducting people for experimentation and killing anyone who comes looking for their abducted family. The zombie/vampire guy at the end is just looking for his partner to rescue her, once he has her, he leaves. www.youtube.com/watch?v=TKwZOa6CL6U
- Comment on "Being vegan is unnatural" 4 weeks ago:
I’m not sure a strictly maths-based ethics is the way to go; that’s where you get into sociopathic greater-good style considerations like “If i take out the managing team of <Big Meat Corp>, eventually they’ll recover but i’ll have saved approximately X animals in the meantime”
Don’t get me wrong, i’m not against that kind of thinking, i’m just not sure it’s a viable long-term lifestyle.
In order to produce 1 steak, a cow has to die.
In order to produce n steaks 1 cow has to die.
Arguably it’s probably slightly more than 1, given the mortality rate of cows before they reach the “production” stage.
In order to produce 1 phone, many different people have to work to produce it, enslaved or not.
In order to produce 1 phone a non-zero number of people will (likely) be maimed/outright killed while working under slave labour conditions.
If you include the more realistic cost/benefits i suggested above does that change the calculations involved for you ?
The following is an aside to the main conversation:
It has been pointed out that some electronics are as good as necessities for most people; while i think there’s a subjective aspect to “necessity”, I’ll concede that some electronics use isn’t the same as meat consumption. Though i would further argue that under today’s food production and distribution systems, meat consumption could be argued to be a necessity in some situations.
But that’s almost certainly an entirely different conversation.
- Comment on "Being vegan is unnatural" 4 weeks ago:
In reference to my other conversation regarding the comparison of products that use electronics vs meat consumption, I would ask if “convenience” was a valid justification.
Given the horrors of the electronics supply chain (slavery, horrific working conditions, cartels etc), i’m not sure why the use of convenience electronics (phones, laptops, PCs) would be OK, but meat consumption would not.
I’m not saying the horrors are equivalent, and it’s not a dig at you; I’m genuinely trying to figure out why one kind of horror is OK but another is not, and how people make those calls.