sudoreboot
@sudoreboot@slrpnk.net
- Comment on Is this a triangle? 2 months ago:
There is no rule that the angles of a triangle add to 180 degrees. It only holds true in Euclidean geometry, which this is not.
- Comment on Is there any advantage to tying game logic to frame-rate? 3 months ago:
It may be of critical importance in some games that, no matter how low the framerate is, the player never misses an event due to skipped frames.
There are also games that are not real time even in their animations, and so there may be no benefit to skipping frames rather than just letting it run at whatever framerate. Slowed tick rate mostly feels weird if one has certain expectations for the passage of time.
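The usual way to get both benefits — fixed, never-skipped simulation steps and an independent framerate — is a fixed-timestep accumulator loop. A minimal sketch (all names hypothetical, not from any particular engine):

```python
def run(get_time, update, render, tick, frames):
    """Run `frames` render frames; the simulation advances in fixed `tick` steps."""
    accumulator = 0.0
    previous = get_time()
    ticks = 0
    for _ in range(frames):
        now = get_time()
        accumulator += now - previous
        previous = now
        # Catch up: run as many fixed updates as the elapsed time demands,
        # so a slow frame delays simulation steps but never skips them.
        while accumulator >= tick:
            update(tick)
            accumulator -= tick
            ticks += 1
        render()
    return ticks
```

With a real clock (e.g. `time.monotonic`) the loop renders as fast as it can while `update` always sees the same step size, so no event can fall between frames.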
- Comment on Is Tesla Feeling the BYD? A Chinese Giant Shakes Up the EV Electric Car Landscape 6 months ago:
That is a different car brand, though?
- Comment on Single atoms captured morphing into quantum waves in startling image 6 months ago:
As I understand it, they are making measurements of an otherwise single isolated particle as it moves about in a controlled space, and the measurements confirm (yet again) that the measurement outcomes match the probabilities given by the Schrödinger equation, which means that it interferes with itself.
The language used may lead some to think that we now have images showing a wave-like particle, but again, that’s not something that can ever happen. What we have are boring old images of a single classical-looking particle, but the patterns they display tell us that quantum mechanics is very much at play in between the takes.
- Comment on Physics 6 months ago:
Maybe what they’re trying to describe is a torus
- Comment on Single atoms captured morphing into quantum waves in startling image 6 months ago:
tl;dr:
Peter Schauss at the University of Virginia says the wave packet is such a well-understood component of quantum theory that the findings of the new experiment are not surprising – but they do show that the researchers had a high degree of control over the processes used to cool and then precisely image the atoms.
I’m not entirely sure what they mean by having images of their waviness, because that is not how it works. You can not measure a quantum wave, because it isn’t a “particle” wave but a wave-like distribution of mutually exclusive measurement outcomes. Taking a picture is the same as entangling yourself, which embeds you in the quantum wave function such that it describes all possible combinations of you ending up with every possible outcome.
- Comment on Netflix Doc ‘What Jennifer Did’ Uses AI Images to Create False Historical Record 6 months ago:
Global population? You say “the”, so you obviously mean the one we have in common.
- Comment on Netflix Doc ‘What Jennifer Did’ Uses AI Images to Create False Historical Record 6 months ago:
That argument extends to any realistic recreation of events. It’s not wrong, I’m just not sure what could be done about it.
- Comment on Eww, Copilot AI might auto-launch with Windows 11 soon 7 months ago:
Don’t do that.
- Comment on For the first time, the U.S. calls for an immediate Gaza ceasefire at the U.N. But now Russia and China veto the draft. 7 months ago:
According to my local news, the draft did not call for a ceasefire and that is why it was vetoed.
- Comment on double slit 8 months ago:
The interference disappearing from measurement is not really because the instrument alters the state. Or, at least, putting it like that occludes the more fundamental reason.
Fundamentally, measurements are subject to the uncertainty principle, which dictates that one can not define precisely the values of two complementary observables at the same time. Position and momentum of any quantum object are such complementary observables, so measuring one – for example position – requires that the other (momentum) becomes less defined.
When the position of a particle is narrowed down to a pixel on a detector screen, its momentum becomes very uncertain and we must talk about all the possible paths it might have been on in order for it to have arrived at that point.
The probability of a particle being measured at any given pixel is given by the probability of all possible paths combined, but with an important quirk: when combining each possible quantum state, they interfere with each other such that they may cancel out. It’s sort of like adding together vectors on the unit circle - usually the result is a shorter vector. Repeated measurements of positions give you what appears to be wave-like interference due to the way the probabilities of all paths interfere.
By checking which slit a particle passes through, you exclude all the possible paths through the other slit and end up not observing the same pattern because the two slits simply do not interfere.
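The unit-circle analogy above can be made concrete with a toy model: each slit contributes a complex amplitude at a screen position, and probabilities come from the squared magnitude of the summed amplitudes. All numbers here are illustrative, not a physical simulation:

```python
import numpy as np

def intensity(x, d=1.0, wavelength=0.5, L=10.0, which_path=False):
    # Path lengths from the two slits (at +/- d/2) to screen point x,
    # with the screen a distance L away.
    r1 = np.hypot(L, x - d / 2)
    r2 = np.hypot(L, x + d / 2)
    # Each path contributes a unit amplitude with a phase set by path length.
    a1 = np.exp(2j * np.pi * r1 / wavelength)
    a2 = np.exp(2j * np.pi * r2 / wavelength)
    if which_path:
        # Checking the slit excludes the other path: probabilities add,
        # amplitudes do not, so the interference terms vanish.
        return abs(a1) ** 2 + abs(a2) ** 2
    # No which-path information: amplitudes add, then square.
    return abs(a1 + a2) ** 2

x = np.linspace(-5, 5, 201)
fringes = intensity(x)                 # oscillates between ~0 and ~4
flat = intensity(x, which_path=True)   # constant 2, no fringes
```

The `which_path=False` case gives `2 + 2*cos(phase difference)` — the amplitudes sometimes reinforce and sometimes cancel, exactly the "vectors on the unit circle" picture.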
- Comment on double slit 8 months ago:
I have no idea what that is so I’ll just go with yes, probably!
- Comment on double slit 8 months ago:
It isn’t “looking” that is meant by “observation”. “Observation” is meant to convey the idea that something (not necessarily sentient) is in some way interacting with an object in question such that the state(s) of the object affects the state(s) of the “observer” (and vice versa).
The word is rather misleading in that it might give the impression of a unidirectional type of interaction when it really is the establishment of a bidirectional relationship. The reason one says “I observe the electron” rather than “I am observed by the electron” is that we don’t typically attribute agency to electrons the way we do humans (for good reasons), but they are equally true.
- Comment on AI Prompt Engineering Is Dead 8 months ago:
Firstly, I’m willing to bet only a minority of users regularly use those buttons. Secondly, you’re talking about the most popular LLM(s) out there. What about all the other LLMs almost nobody is using but are still being developed/researched? Where do they find humans willing to sit and rate all the garbage their LLM puts out?
- Comment on AI Prompt Engineering Is Dead 8 months ago:
I know LLMs are used to grade LLMs. That isn’t solving the problem, it’s just better than nothing because there are no alternatives. There aren’t enough humans willing to endlessly sit and grade LLM responses.
- Comment on AI Prompt Engineering Is Dead 8 months ago:
For that you need a program to judge the quality of output given some input. If we had that, LLMs could just improve themselves directly, bypassing any need for prompt engineering in the first place.
The reason prompt engineering is a thing is that people know what output is expected and desired and what isn’t, and can adapt their interactions with the tool accordingly, a trait associated with adaptive complex systems.
- Comment on Helldivers 2 boss apologizes for 'horrible' dev comments, says Arrowhead has 'taken action internally to educate our developers' 8 months ago:
Could just as well have gone the other way though. Sassy CM telling some loud, annoying, entitled brat to git gud or cry more? Instant cool-dev meme. But if a lot of people feel similarly you get outrage and controversy. Just depends on the local culture on that particular day in that particular place.
It’s cool to be rude as long as you also feel that it’s warranted. It’s also cool to offend people you don’t like or deride ideas you think are stupid. Everyone is always one wrong audience away from being a horrible person.
- Comment on OpenAI introduces Sora, its text-to-video AI model 8 months ago:
Like a completely mad or autistic artist that is creating interesting imagery but has no clue what it means.
Autists usually have no trouble understanding the world around them. Many are just unable to interface with it the way people normally do.
It’s a reflection of our society in a weird mirror.
Well yes, it’s trained on human output. Cultural biases and shortcomings in our species will be reflected in what such an AI spits out.
When you sit there thinking up or refining prompts you’re basically outsourcing the imaginative visualizing part of your brain. […] So AI generation is at least some portion of the artistic or creative process but not all of it.
We use a lot of devices in our daily lives, whether for creative purposes or practical. Every such device is an extension of ourselves; some supplement our intellectual shortcomings, others physical. That doesn’t make the devices capable of doing any of the things we do. We just don’t attribute actions or agency to our tools the way we do to living things. Current AI possesses no more agency than a keyboard does, and since we don’t consider our keyboards capable of authoring an essay, I don’t think one can reasonably say that current AI is, either.
A keyboard doesn’t understand the content of our essay, it’s just there to translate physical action into digital signals representing keypresses; likewise, an LLM doesn’t understand the content of our essay, it’s just translating a small body of text into a statistically related larger body of text. An LLM can’t create a story any more than our keyboard can create characters on a screen.
Only once/if ever we observe AI behaviour indicative of agency can we start to use words like “creative” in describing its behaviour. For now (and I suspect for quite some time into the future), all we have is sophisticated autocomplete.
- Comment on OpenAI introduces Sora, its text-to-video AI model 8 months ago:
Yeah a real problem here is how you get an AI which doesn’t understand what it is doing to create something complete and still coherent. These clips are cool and all, and so are the tiny essays put out by LLMs, but what you see is literally all you are getting; there are no thoughts, ideas or abstract concepts underlying any of it. There is no meaning or narrative to be found which connects one scene or paragraph to another. It’s a puzzle laid out by an idiot following generic instructions.
That which created the woman walking down that street doesn’t know what either of those things are, and so it can simply not use those concepts to create a coherent narrative. That job still falls onto the human instructing the AI, and nothing suggests that we are anywhere close to replacing that human glue. Current AI can not conceptualise – much less realise – ideas, and so they can not be creative or create art by any sensible definition. That isn’t to say that what is produced using AI can’t be posed as, mistaken for, or used to make art. I’d like to see more of that last part and less of the former two, personally.
- Comment on Pika Labs new generative AI video tool unveiled — and it looks like a big deal 10 months ago:
Is this going to be available for free? And if so, to what extent? I’m not paying for AI, but would be cool to try it out.
I’ve also been burnt a few times by registering for some “free” AI service only to realise, after putting some actual effort into trying to create something, that literally any actual value you might extract from it is gated behind a payment plan. This was the case when I tried generating voices, for example: spend an hour crafting something I like; generating any actual audio with it? Pay up. It’s like trying out a free MMO where you spend a long time creating your character just the way you want it only to be greeted by “trial over - subscribe now!”
- Comment on Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues 11 months ago:
True, I could have identified those as suggested solutions (albeit rather broad and unspecific, which is perfectly fine). I also sympathise on both accounts.
I have this personal intuition that a lot of social friction could be mitigated if we took some inspiration from the principle of locality in physics when designing social networks and structuring society in general. The idea of locality in physics is that physical systems interact only with their adjacent neighbours. The analogous social principle I have in mind is that interactions between people who understand and respect each other should be facilitated and emphasised, and (direct) interactions between people far apart from each other on (some notion of) a “compatibility spectrum” should be limited and de-emphasised. The idea here is that this would enable political and cultural ideas to be propagated and shared with proportionate friction, resulting in a gradual dissipation of truly incompatible views and norms, which would hopefully reduce polarisation.
The way it works today is that people are constantly exposed directly to strangers’ unpalatable ideas and cultures, and there is zero reason for someone to seriously consider any of that since no trust or understanding exists between the (often largely unconsenting) audience and the (often loud) proponents. If some sentiment was instead communicated to a person after having passed through a series of increasingly trusted people (and after likely having undergone some revisions and filtering), that would make the person more likely to consider and extract value from it, and that would bring them a little bit closer to the opposite end of that chain.
Anyway, those are my musings on this matter.
- Comment on Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues 11 months ago:
We don’t have to prove that the brain isn’t puppeted from some external realm of “consciousness” in order to say we can be quite confident that it isn’t, because positing that there is such a thing as free will in the traditional notion of the term is magical thinking, which most of us might agree isn’t particularly respectable.
What we can do is take a compatibilist approach and say there is something that is “effectively indeterministic” about human decision making, because we can’t ever ourselves predict our own actions any faster than we observe them. I don’t have any moral contribution to make here; I just wanted to add this reflection.
- Comment on Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues 11 months ago:
I don’t see em suggesting any particular solutions, so I’m not sure what you are criticizing or why you think it would result in Elon remaining at large any more than from figurative fruit throwing.
I agree that social repercussions have a place, but I also agree that it is only “good enough” for many – but not all – situations. Seeking a more sophisticated approach based on studying and identifying potential root causes seems to me like it would be more sustainable, not to mention an opportunity for individual growth.
- Comment on the best feeling 11 months ago:
One of the last things I remember is Oberyn getting his mind blown
- Comment on aLiEnS!!1 11 months ago:
I was thinking “three ridges” first 😅 (I imagined the sand running between the four fingers of my semi-closed fist)
- Comment on Google announces April 2024 shutdown date for Google Podcasts 11 months ago:
I don’t know if it’s actually a setting, I’ve only noticed the behaviour. Neat little feature!
- Comment on Google announces April 2024 shutdown date for Google Podcasts 11 months ago:
What you describe is also a feature of AntennaPod
- Comment on Listen, Susan. It's a valid theory, just look at the damn thing. 1 year ago:
Not even a little bit, really
- Comment on Black Mirror creator unafraid of AI because it’s “boring” 1 year ago:
If you press it the same way again (“are you sure the function doesn’t exist?”), there is a high chance it will “rectify” its rectification.
- Comment on Black Mirror creator unafraid of AI because it’s “boring” 1 year ago:
I use LLMs for having things explained to me, too… but if you want to know how much salt to pour in that soup, try asking it about something niche and complicated you already know the answer to.
They can be useful for figuring out the correct terminology so that you can find the answer on your own, or for pointing out some very obvious mistakes in your understanding (but they will still miss most of them).
Please don’t use those things as answer machines.