PlzGivHugs
@PlzGivHugs@sh.itjust.works
- Comment on Developers Were Left in the Dark About DLSS 5 4 days ago:
Watched through the video, and you’re right. Based on the video, Nvidia’s original statements were, for all intents and purposes, lies. Ironically, since it does seem to be built on the existing DLSS stack (from what I’ve seen), it does have access to things like the depth map; it just doesn’t use them. I’ll edit my prior comments.
That said, as I originally said, none of this matters anyway. This technology doesn’t run on desktop hardware. They announced """gaming""" software that can’t run on gaming hardware. It doesn’t matter what it looks like, because you can’t play games with it. Frankly, at this point, I doubt they’ll even release it.
- Comment on Developers Were Left in the Dark About DLSS 5 5 days ago:
Yes, but what the tech costs to implement has a huge impact on what it is, and how (or if) it’s ever implemented. So far as I can tell from my own research, the original commenter was lying, which makes sense. If it actually increased dev time that much, even Nvidia wouldn’t be stupid enough to try to sell it. “AI graphics costs $10 million to implement and has negligible impact on sales” would not look good for their bubble.
- Comment on Developers Were Left in the Dark About DLSS 5 5 days ago:
The inputs, from everything Nvidia has said, are simply the final pixel colour values and motion vector information.
If it is the same as DLSS 4 Super Resolution, it seems to use motion vectors, colour buffers, depth buffers, and camera information like exposure. That said, this might change, as, like I said, they’re showing off something they haven’t even got running on the target hardware. It’s clearly not even close to being a finished product.
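For illustration, here’s a rough sketch of the kind of per-frame inputs I mean. The field names and types are mine (the real DLSS SDK is a C++ API, not this), so treat it as a loose picture rather than the actual interface:

```typescript
// Loose sketch of the per-frame data DLSS Super Resolution is described as
// consuming. Field names are illustrative, not the real SDK's.
interface SuperResolutionInputs {
  colorBuffer: Float32Array;      // final lit pixel colours at render resolution
  depthBuffer: Float32Array;      // per-pixel depth, helps resolve occlusion
  motionVectors: Float32Array;    // per-pixel screen-space motion since last frame
  exposure: number;               // camera exposure, so frame history stays comparable
  jitterOffset: [number, number]; // sub-pixel camera jitter applied this frame
}
```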
- Comment on Developers Were Left in the Dark About DLSS 5 6 days ago:
Yes, depending on implementation details. I mean, it’s never going to be completely consistent, but I don’t expect these companies to mind a little brand damage if they get a short-term boost in investment.
I’m more thinking that as it stands, the hardware requirements make it DOA for users. They’re saying they’ll improve it, although I have my doubts. That said, even if no one can run it, it may be popular among publishers for screenshots and marketing. On the other hand, if it does actually double dev costs, then it’ll be DOA even for corporate use.
- Comment on Developers Were Left in the Dark About DLSS 5 6 days ago:
It’s more an argument against the “artist’s intent” and “disrupting gameplay” points. As I said, the feature is dumb, but not because it “looks like AI”.
Yes, let’s double (or more) the workload of artists and programmers
Do you have any evidence for this? Given what’s been shown, this seems relatively easy to implement on the game-dev side.
- Comment on Developers Were Left in the Dark About DLSS 5 6 days ago:
From my understanding, it may be possible to work around some of this, since the program is meant to hook into the game in a number of different ways. It’s very possible that an “importance” mask could be added as an input, for example; something like the sketch below. This wouldn’t fix everything, but it would still give a way to separate game elements from environmental details.
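To make that concrete, a purely hypothetical sketch (nothing like this has been announced; the mask and its semantics are my invention):

```typescript
// Hypothetical: an optional "importance" mask as one more per-frame input,
// marking which pixels are gameplay-critical (UI, enemies, pickups) versus
// environmental detail the model is freer to repaint.
interface GenerativeFrameInputs {
  colorBuffer: Float32Array;   // final lit pixel colours
  motionVectors: Float32Array; // per-pixel screen-space motion
  importanceMask?: Uint8Array; // 0 = free to stylize, 255 = preserve faithfully
}
```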
That said, there’s been so much focus on how it looks. IMO, it’s completely overblown, especially when all of this needs to be manually configured on a game-by-game basis. Devs can tweak the settings to their own preferences and make things more or less extreme.
The part that’s much more worthy of mockery is the fact that they’re demoing a consumer product on professional-grade hardware, during a hardware shortage. They couldn’t even get the demo working on a high-end gaming PC, and they think this tech is worth advertising? That is the funny part of all this.
- Comment on Steam :: About the New York Attorney General lawsuit against Valve 1 week ago:
Yes, there’s a huge difference between selling something with transparent pricing versus offering it as a gambling prize.
The issue is not the price, it’s the addictive gambling mechanic. It’s not about making sure steam doesn’t rip people off, it’s about making sure steam doesn’t get kids addicted to gambling.
Yes, exactly my point. Whether you paid previously, and whether it’s available without gambling, has no impact on whether it meets the definition of gambling or whether it’s harmful.
- Comment on Steam :: About the New York Attorney General lawsuit against Valve 1 week ago:
I mean, currently Counter-Strike already has (had?) an ESRB M rating, as did TF2. Dota isn’t rated, but would clearly also be M, given abilities like Rupture. Do you think we just need to reduce the normalization of it?
- Comment on Steam :: About the New York Attorney General lawsuit against Valve 1 week ago:
Bought from valve directly? Because I don’t think saying you can buy the skin from the Steam marketplace for $1,000 is the slam dunk argument you think it is.
Technically, yes, bought from them directly, but I’m not sure how that distinction matters one way or another.
Either way, you either spend about $1,000 on lootboxes, gambling to get it, or you buy it from another player for about that much. Given that the value is player-set, the price will be in the same ballpark either way. You can argue that the price is absurd and abusive, but that’s an argument against high prices on worthless digital items, not one against lootboxes.
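Rough math on why the two routes converge (all numbers are invented for illustration; real drop rates aren’t public):

```typescript
// Back-of-the-envelope: expected lootbox spend vs. marketplace price.
// Both numbers below are hypothetical, purely to show the relationship.
const costPerAttempt = 2.5;      // case + key, in dollars (assumed)
const dropProbability = 1 / 400; // chance a box yields the wanted skin (assumed)

// Expected spend to pull the skin yourself via lootboxes:
const expectedGamblingCost = costPerAttempt / dropProbability;
console.log(expectedGamblingCost); // 1000

// A seller who gambled for the skin prices it near their expected cost, so the
// player-set marketplace price lands in the same ballpark as gambling for it.
```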
- Comment on Steam :: About the New York Attorney General lawsuit against Valve 1 week ago:
Honest question I’m curious to hear people’s opinions on: Gambling is obviously dangerous, and I think we can all agree that exposing kids to it so easily is bad. At the same time, for any form of virtual gambling, how do you ensure that kids can’t access it without putting a significant limit on adults’ freedoms? Like, Lemmy is very pro-privacy, but would this be a case where the (few) merits of ID-based verification would be justified, or should we just be banning all gambling outside of designated casinos, or…
- Comment on Steam :: About the New York Attorney General lawsuit against Valve 1 week ago:
Secondary argument: if you buy a game, you shouldn’t have to gamble to get the game’s content.
This one doesn’t apply to Valve’s games, both because the base games are free and because the items can be bought directly. Other games though…
- Comment on System76 on Age Verification Laws 2 weeks ago:
Honestly, I re-read the legislation, and while I’m still not convinced something like this is a bad idea, all the specifics are.
Like, ultimately, it’s a user-set flag, stored locally, and it would provide users more choice in content filtering.
Most people are going to provide accurate data so the amount of people trying to poison is low enough that the brokers still get good data along with new data showing who wants to poison broker data.
You’re right, and the design of this law basically ensures that. I was thinking of it being implemented (at least in user-friendly UI) as a dropdown showing the four provided age brackets. Instead, it is required to be a numeric or date-of-birth input, seemingly without allowing a default value, which means users are more likely to enter accurate data. Similarly, stored age information isn’t required to use the brackets provided. This means that a lazy or immoral developer will store the exact age, rather than abstracting it as the law suggests. I had misinterpreted 1798.500(b) and thought that abstracting the age data as suggested was required.
If something like this is to be implemented, it needs to use a more abstracted format (ideally with a default value), and if it’s going to be written into law, it should be a better system of content filtering than simply an age-based metric.
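Something like this is what I have in mind - a sketch only; the brackets mirror the four the law already defines for transmission, while the “unspecified” default is my addition:

```typescript
// Sketch of a more abstracted format: store the bracket itself, never an exact
// age or birthdate, so a lazy developer can't retain more than intended.
// "unspecified" as a default value is my addition, not in the bill.
type AgeBracket = "unspecified" | "under13" | "13to15" | "16to17" | "18plus";

const DEFAULT_BRACKET: AgeBracket = "unspecified";

let storedBracket: AgeBracket = DEFAULT_BRACKET; // user/parent may change it later
```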
- Comment on System76 on Age Verification Laws 2 weeks ago:
even a simple requirement to accurately report browser version would be quietly horrifying
Maybe this is where the confusion comes from. The reason I think this is an acceptable idea is specifically because there is no requirement for it to be accurate, and technically, it doesn’t seem possible to tack on a more intrusive system after the fact (owing to the fact that everything is stored locally). In effect, it seems to just be a “filtering level” flag: something a user can choose to use (or not) to filter different types of content. This seems like it’s happening in parallel with government/corporate surveillance, rather than in service to it.
Robbing software developers of the ability to say ‘that was a bad security decision, let’s just not do it,’ is intrinsically fucked.
Actually, this is the part I have the biggest issue with, especially because I don’t agree with some of the implementation details, like the requirement that the original input be a numerical/date input field labeled as age, rather than a bracket selection or something else clearer and more granular. At the same time, I think there is something to be said for government intervention in areas where private companies have failed to innovate/standardize, USB-C being the prime example.
That said, honestly, thinking about how suboptimal this is, even as a content filtering system… I think you’re right that this is the wrong approach. Something like flags for “hide sexual content”, “hide gore”, and “hide potentially disturbing content” would make far more sense than a set of unified age brackets. So, at least as a technical standard, consider me convinced that it shouldn’t be implemented.
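As a sketch of what I mean (the flag names are just my examples):

```typescript
// Independent, user/parent-set content filters instead of one unified age
// bracket. Each flag gates one category, so they can be mixed freely.
interface ContentFilterFlags {
  hideSexualContent: boolean;
  hideGore: boolean;
  hideDisturbingContent: boolean; // e.g. shock content, graphic footage
}

// A parent tightens only what matters to them, rather than one age bracket
// deciding all three at once:
const filters: ContentFilterFlags = {
  hideSexualContent: true,
  hideGore: true,
  hideDisturbingContent: false,
};
```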
- Comment on Three questions about California AB1043 C. 675 2 weeks ago:
Thank you for the help understanding this.
- Comment on System76 on Age Verification Laws 2 weeks ago:
By the sound of it, the disagreement is mostly in how direct an impact AB1043 will have on government plans for data collection and authoritarianism.
Like, as you said, laws can be changed or removed, but the fact that it would be necessary to do so to implement AI/ID verification suggests to me that this isn’t that, and is instead a disconnected route. On a legal level, having this does nothing but add a speedbump to future authoritarianism - one they are likely to cross, but it doesn’t advance their goals, legally.
Technically, I have no doubt that the government will continue to push for more data collection and more control, but it seems that a local value that the user can access/edit (even if they were to use an online-verification system that issues tokens) isn’t going to be secure or enforceable enough to achieve their goals. Anyone can copy, modify, share, reverse-engineer, etc.
Similarly with the Overton window: it has been standard practice for over a decade to have an “are you at least 18?” popup, and for every single service to ask your age, if not more. We absolutely need more data protections for systems such as this (ideally an outright ban on saving this information), but this doesn’t seem to make it worse.
Basically, from my understanding, this isn’t a step towards data collection or authoritarianism, and it provides no significant benefit to either of those causes - it’s effectively a technical standard. Like, if this age-verification flag was proposed by the Linux Foundation and agreed to by others, would the backlash be this big? Similarly, I don’t see any contradiction between wanting a ban on storage/sharing of user data and the implementation of a flag like this - even if we were able to ban all storage of user data, this law would be unaffected. That’s what I’m trying to figure out - how do people think that this leads towards those end goals? How would blocking it improve anything?
Is it just a difference in opinion about the significance of the Overton window?
Is there a technical aspect I’m missing?
Is there some legal advantage this provides to surveillance that I’ve missed?
Right now, it seems like everyone is arguing against a strawman, implying that I support the idea of government/corporate surveillance and censorship, but given how unanimous it is, I’m guessing I’m missing something?
- Comment on Three questions about California AB1043 C. 675 2 weeks ago:
Yes, but only insomuch as laws that protect minors impose additional constraints on those who have “actual knowledge” that a user is actually a child.
So, if I understand right, they basically assume it’s correct unless given significant evidence otherwise? So like, if this flag is enabled and I visit a website and don’t directly provide personal information, then they have to assume I am a child under CCPA and thus can’t share my data. Right?
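Put as pseudo-logic (my reading only, not legal advice; the function and its inputs are invented for illustration):

```typescript
// My reading of the "actual knowledge" standard, sketched as a decision.
// Invented for illustration; not an actual legal test.
function mayShareUserData(flagSaysMinor: boolean, contraryEvidence: boolean): boolean {
  // If the signal says "minor" and the site has no significant evidence
  // otherwise, it's treated as knowing the user is a child, so sharing is out.
  return !(flagSaysMinor && !contraryEvidence);
}
```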
State law can expand upon federal law but not contradict it. And it smells like AB1043 is more “add a more explicit signal of user age” than anything affecting data retention relating to children.
What part do you think is contradictory?
I was wondering more if they could just argue that it isn’t a reliable metric and thus could be ignored for COPPA if it ever came up in federal court - especially if adults end up using the flag for CCPA or Civil Code protections.
- Comment on System76 on Age Verification Laws 2 weeks ago:
My interpretation was that the slippery slope was more about the event in question (AB1043) being predicted to directly lead to escalation (AI/ID verification). As per your Wikipedia quote, “to result in the claimed effects”. I don’t see any reason to predict that this law will directly influence their decision to escalate or not. That said, perhaps it’s a disagreement on how much cultural influence a law like this would have, and how technically separate a parent/user-managed system of age verification is from a government-managed one.
I would be interested to hear your argument for technical implementation, however.
- Comment on System76 on Age Verification Laws 2 weeks ago:
I’m trying to give you the benefit of the doubt, but at this point you seem to be increasingly resorting to insults, and arguing against strawmen, to the point where I’m having trouble even understanding what you’re saying. I’m doing my best to remain respectful and civil, but you aren’t returning the favour. That said, I am trying to give you a chance, and I want to be open to being convinced. So…
If I understand what you’re trying to say, you think there should never be any prompt, warning, or other safety measure on any content? Not gore videos, not dating sites, not shock sites? Am I understanding you correctly, and if not, can you please restate your argument more clearly?
- Comment on System76 on Age Verification Laws 2 weeks ago:
The fallacy isn’t assuming that it will happen. Clearly, there is a significant push towards it, and it’s something we need to be fighting against. The reason it’s a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, in spite of the fact that AB1043 implements a system that would be redundant with AI- or ID-based methods, technically doesn’t offer any good way to transition into an AI- or ID-based system (since it all has to be done locally), and legally imposes additional data protection rules that are likely to interfere with AI-based age verification.
The problem with AI and ID age verification isn’t the age verification. It’s the data collection, the limits on personal freedom, and, to some, the inconvenience. So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but it also adds stricter regulation on data collection when this flag is used) or personal freedoms - especially not when compared to what is already the existing standard of asking the user for their age and/or whether they’re over 18.
- Comment on System76 on Age Verification Laws 2 weeks ago:
Well, from a privacy/freedom standpoint, how is this different from a website requiring you to enter your age and/or asking you to confirm that you’re 18? They record your age, store it with your data, then let you continue. What baffles me is that this is widely accepted as standard practice, and not a significant privacy concern, while having an account-level flag that does the exact same thing isn’t. Like, is it because it’s managed by the browser/OS/app store? In that case, why isn’t there the same backlash against the existence of things like system theme flags, user agents, and even usernames?
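To make the comparison concrete, here’s roughly how I picture it: the flag riding alongside metadata browsers already send on every request. The age header name is invented, not a real spec:

```typescript
// Hypothetical illustration: an age-bracket signal next to data browsers
// already transmit routinely. "X-Age-Bracket" is an invented name, not a spec.
const requestHeaders: Record<string, string> = {
  "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/133.0", // long-standing standard
  "Sec-CH-Prefers-Color-Scheme": "dark", // real client hint: the system theme flag
  "X-Age-Bracket": "18plus",             // invented: the flag in question
};
```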
- Comment on System76 on Age Verification Laws 2 weeks ago:
‘This law is fine because it won’t affect child predators’ is a brave argument.
This obviously isn’t the argument I’m making. This law obviously isn’t meant to stop predators. It’s meant to provide a parental-control option for parents to limit their own children’s access to potentially harmful or mature material.
Critics seem to agree, it’s a foot in the door for all of the other privacy-defeating efforts going on, now running in protection ring zero. What does this nonsense do, besides set off those red flags?
This huge uproar is the point of my confusion. You and others in the field seem certain that this is a direct first step towards ID- and AI-based data collection. Meanwhile, before this, I occasionally saw this approach proposed in privacy-related blogs/communities as a good option, specifically because it was optional and entirely handled by the user.
What impact do you honestly expect, versus telling websites to have an ‘18+ only’ click-through?
More convenience for adults (not having to click “yes” every time), and a more effective way of slowing down children accessing content that might be dangerous. For example, if I were a parent who had access to this, I’d likely set up two accounts for my kids: one set to 18+ for when I’m directly supervising them, and one set to under 18 for when I’m supervising them less thoroughly.
- Comment on System76 on Age Verification Laws 2 weeks ago:
If I had to take a photo of my genitals to sign into my own computer, promises against storage or sharing are not addressing my complaints about privacy. Asking my age is a lot less personal - but it’s still information about me, which this object does not need.
If you’re that concerned, leave the field at its default value, or (since it’s your PC and there will absolutely be a way to) set it to a null value. Or set it based on the amount of legal protection you want on your data, because that also appears to work.
‘I’m only okay with this idea because I know it won’t work’ is, just, why are we even talking? What is the function of an argument when you’re not listening to yourself?
Saying it can be bypassed doesn’t mean it doesn’t work. Like most safety and security measures, the point is to disincentivise and prevent errors of convenience - especially since children particularly lack impulse control. In the same way, a railing or fence on a cliff won’t prevent people from passing, but it will make them think twice. That doesn’t mean having the railing/fence is pointless.
- Comment on System76 on Age Verification Laws 2 weeks ago:
Okay, but should we not oppose laws about data collection and facial recognition in that case, rather than a law that implements an entirely separate, optional, user-driven approach? Saying this is bad because those are bad is not an argument, any more than saying the CCPA and GDPR are bad because the government wants to collect data. Your argument isn’t against this law, or even the concept of having age verification in general. It’s against government overreach as a broad concept. You’re again relying on the slippery slope fallacy to say that because I’m okay with this one specific form of age gating, I’m okay with every other one, which I have repeatedly made clear is not true.
- Comment on System76 on Age Verification Laws 2 weeks ago:
Or that anyone working for or with kid-filled sites of any size could make it incidentally about preying on said kids. Apparently people manage when they’re just anonymous users.
But like, that’s exactly my point. It’s platforms like Roblox that predators seek out to prey on children. They don’t create their own. An age-verification law will have no effect on that. A hidden backend value that’s illegal to share doesn’t make it significantly easier for predators. Even if they did have unrestricted access to user data, wouldn’t a hundred other variables better identify vulnerable users, like use of voice chat and past text messages? Hell, I would expect children with the age flag not set to be more vulnerable, given that it would likely mean the parent is less tech-savvy and/or less likely to be paying attention to their child.
- Submitted “Three questions about California AB1043 C. 675” 2 weeks ago to nostupidquestions@lemmy.world | 7 comments
- Comment on System76 on Age Verification Laws 2 weeks ago:
This is a compelling argument, but do you think it’s really a significant attack vector? It’s already illegal to share or leak this data, even unintentionally, and from my understanding, if you choose to set your age to a lower bracket via this process, companies sharing (also collecting? Currently unclear on this.) this data would also break the CCPA and possibly COPPA, and companies are required to provide additional data privacy measures under the California Civil Code.
Yes, these laws will be broken, but will it be on a significant enough scale, and with reliable enough information, to be worthwhile? Like, since this bans the use of data from those who set their age low, wouldn’t this likely reduce the data collection pool overall, not to mention incentivize adults to poison this data? For those who do illegally collect this data anyway, is it that much of an advantage compared to just asking the user’s age upon reaching the site, as most sites currently do? Beyond that, when these sites operating illegally do leak their data, will that data be a realistic attack vector? Like I said to another commenter, collating data in this way seems extremely impractical and unreliable for predators. Wouldn’t those who want to seek out children just go to existing spaces where they can connect directly, like Roblox or Discord?
- Comment on System76 on Age Verification Laws 2 weeks ago:
You’re completely ignoring my argument. How many of these websites where children gather and self-identify are created and maintained by paedophiles specifically to prey on children? So far as I know, there has never been a site like this on the modern internet, let alone one that remains up and has been running for an extended period. I don’t see any reason to expect this to change.
- Comment on System76 on Age Verification Laws 2 weeks ago:
Companies shouldn’t even be allowed to demand more than a username and password, on any machine I could pick up and throw. Making anything beyond that a legal requirement is intolerable, in itself. My age is not this object’s business. It sure isn’t this website’s business.
Stop excusing these intrusions against adult life, for the sake of children who will bypass them anyway. You know they will. You use the flimsiness of this alleged protection as an excuse for enabling it. There is literally no benefit if it doesn’t fucking work. Even pretending the immediate goal is something you should want - this won’t do that.
I do know they will. The whole reason I’m even okay with the idea is because it is completely optional for the user. I don’t see how it’ll impact adult life. That is why I’m so confused by the backlash. It’s asking for an option to increase user control and user choice over their experience. Hell, from my understanding, this would provide a means for users to make it actually illegal to collect any of their data, but I need to re-read the CCPA to confirm this. It seems that the benefits of user choice provided by this option far outweigh the loss of having one more fingerprinting metric - let alone one that is illegal to share.
- Comment on System76 on Age Verification Laws 2 weeks ago:
This is a slippery slope fallacy. Just because the option is provided to self-identify age doesn’t mean that it will be replaced with more complex data collection later - especially considering that, if it’s based on this law, that would be literally impossible. 4a bans the collection of data from your system besides age, and the fact that it is all handled locally and sharing it is prohibited means that it would be impractical to implement anything fancier than a text box to collect data. If anything, this looks like a way to be seen “doing something” without having to change anything for most users. Hell, if California wanted to implement a law for data collection, why would they have implemented the CCPA, why would they have written this law to ban the sharing of data, and why wouldn’t they just write the data collection law instead, given (as you said) there is already significant backing for the idea?
- Comment on System76 on Age Verification Laws 2 weeks ago:
illegal
Yes, in that they can be stopped if noticed. Police are incompetent, but if something is that bad, and draws enough attention, the person will generally be arrested.
extremely impractical
Yes, all the time. That’s why safes, passwords, and similar exist. Or, more relevant in this case, the adage that the best way to avoid a break-in is to be a less appealing target than your neighbors. Roblox, Minecraft, Discord, and other platforms where kids gather and regularly self-identify are still going to exist, and they are far safer and far more appealing for targeted abuse of children. On the other hand, setting up a public website/app and trying to lure children to it is expensive, risky, and unlikely to succeed on the modern internet.