kromem
@kromem@lemmy.world
- Comment on Prime Video subs will soon see ads for Amazon products when they hit pause 15 hours ago:
I wish the high seas had better quality. So much isn’t available in 4K, at least in the waters I’ve checked out.
- Comment on Wave Particle Duality 5 days ago:
The problem with how you are describing it is that it’s not that the mechanics of measurement are necessarily causing collapse: if you end up erasing the persistent information about the measurement, it reverses the collapse, such as if you add a polarizer to the other slit as well, or add a polarizer downstream that untags the initial measurement.
So in your example, if you simultaneously shoot a bunch of BBs at the empty space next to the pile, where the glass cards could have been, or discard the BBs whose reflections measured the cards in the first place, the pile of glass cards suddenly reassembles itself.
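For anyone who wants to see where the fringes come from, here’s an idealized numpy sketch (toy point sources, no 1/r falloff): the interference lives in a cross term that only exists when the two path amplitudes are added before squaring, which is exactly what which-path tagging removes and erasure restores.

```python
import numpy as np

# Toy double-slit: two point sources a distance d apart, screen at L.
wavelength = 1.0
k = 2 * np.pi / wavelength
d, L = 5.0, 100.0
x = np.linspace(-30, 30, 1000)

r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)
a1, a2 = np.exp(1j * k * r1), np.exp(1j * k * r2)

# Paths indistinguishable: amplitudes add first, cross term -> fringes.
interference = np.abs(a1 + a2) ** 2

# Orthogonal polarizer tags make the paths distinguishable: the cross
# term vanishes and probabilities add instead -> no fringes.
tagged = np.abs(a1) ** 2 + np.abs(a2) ** 2

# A diagonal polarizer downstream erases the which-path tag: both
# amplitudes project onto the same state, so the cross term (and the
# fringes) come back, at reduced overall intensity.
erased = np.abs((a1 + a2) / 2) ** 2
```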
Attempts to dismiss the ‘weirdness’ of the measurement problem or of QM behavior ultimately, IMO, do the reader more of a disservice than a service.
- Comment on The Tech Baron Seeking to “Ethnically Cleanse” San Francisco 1 week ago:
There are always people like this in various industries.
What they are more than anything is self-promoters under the guise of ideological groupthink.
They say things that their audience and network want to hear, with a veneer of hyperbole.
I remember one of these types in my industry who drove me crazy. He was clearly completely full of shit, but the majority of my audience didn’t know enough to see it, and he was too well connected to call out as full of shit without blowback.
The good news is that they have such terrible ideas that they’re chronic failures, even if they personally fail upwards, to the frustration of every critically thinking individual around them.
- Comment on An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary 1 week ago:
While superdeterminism is a valid solution to both Bell’s paradox and this result, it isn’t a factor in the Frauchiger-Renner paradox, so there must be something else going on.
And it would be pretty superfluous for our universe to behave the way it does around interactions and measurements if free will didn’t exist.
- Comment on An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary 1 week ago:
> First of, our universe doesn’t change the moment we touch something, else any interaction would create a parallel universe, which in itself is fiction and unobservable.
en.m.wikipedia.org/wiki/Double-slit_experiment
> Then you talk about removing persistent information. Why would you do that and how would you do that? What is the point of even wanting or trying to do that?
en.m.wikipedia.org/…/Quantum_eraser_experiment
> No Man’s Sky is using generic if else switch cases to generate randomness.
If/else statements can’t generate randomness. They can alter behavior based on random input, but they cannot generate randomness in and of themselves.
> Even current AI is deterministic
No, it’s stochastic.
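To illustrate the distinction (toy code, obviously not any engine’s actual implementation): a branch can only route randomness that comes from somewhere else, and LLM decoding is stochastic because the output token is sampled from the distribution the network produces.

```python
import random

def coin_flip() -> str:
    r = random.random()  # the randomness comes from here...
    if r < 0.5:          # ...the if/else just maps it onto outcomes
        return "heads"
    return "tails"

def sample_token(probs: dict[str, float]) -> str:
    # LLM decoding: the network deterministically produces a probability
    # distribution over tokens, then one token is *sampled* from it.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(coin_flip())
print(sample_token({"cat": 0.6, "dog": 0.3, "axolotl": 0.1}))
```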
- Comment on An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary 1 week ago:
Ah, Lemmy…
“I don’t know what you’re talking about, but you’re wrong.”
- Comment on An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary 1 week ago:
A reminder for anyone reading this: you are in a universe that behaves at cosmic scales like it’s continuous (singularities and whatnot), and behaves like it’s continuous even at small scales, but as soon as it’s interacted with, it switches to behaving like it’s discrete.
If the persistent information about those interactions is erased, it goes back to behaving as if continuous.
If our universe really were continuous even at the smallest scales, it couldn’t be a simulated one if free will exists, as it would take an infinite amount of information to track how you would interact with it and change it.
But by switching to discrete units when interacted with, it means state changes are finite, even if they seem unthinkably complex and detailed to us.
We use a very similar paradigm in massive open worlds like No Man’s Sky, where an algorithm procedurally generates a universe with billions of visitable planets, but then converts things into discrete voxels to track how you interact with and change them; roughly like the sketch below.
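Roughly like this (hypothetical code, not Hello Games’ actual implementation): the base world is a pure function of a seed and coordinates, so nothing needs to be stored until you interact, and your changes persist as a sparse set of discrete voxel edits.

```python
import hashlib
import random

def planet_seed(universe_seed: int, coords: tuple) -> int:
    # Derive a stable per-planet seed from the global seed + coordinates.
    key = f"{universe_seed}:{coords}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def generate_planet(universe_seed: int, coords: tuple) -> dict:
    # The same coords always regenerate the same planet: nothing stored.
    rng = random.Random(planet_seed(universe_seed, coords))
    return {
        "radius_km": rng.uniform(2000, 8000),
        "biome": rng.choice(["lush", "barren", "frozen", "toxic"]),
    }

# Interaction is the only persisted state: a sparse map of discrete
# voxel edits layered over the deterministic base generation.
edits: dict = {}

def set_voxel(coords: tuple, voxel_index: tuple, value: int) -> None:
    edits.setdefault(coords, {})[voxel_index] = value

print(generate_planet(42, (10, -3, 7)))
```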
So you are currently reading an article about how emerging tech is creating increasingly realistic digital copies of humans in virtual spaces, while thinking of yourself as a human inside a universe that behaves in a way that couldn’t be simulated, yet spontaneously switches to a way that can be whenever it’s interacted with.
I really think people are going to need to prepare for serious adjustments to the ways in which they understand their place in the universe, adjustments that are going to become increasingly hard to ignore as the next few years go by.
- Comment on Zuckerberg says Meta's Llama 3 is really good but no chatbot is sophisticated enough to be an 'existential' threat — yet 2 weeks ago:
It’s not as good as it seems at the surface.
It is a model squarely in the “fancy autocomplete” category along with GPT-3 and fails miserably at variations of logic puzzles in ways other contemporary models do not.
It seems that the larger training set allows for better modeling around the fancy-autocomplete parts, but even other similarly sized models like Mistral appear, when you scratch below the surface, to have developed underlying critical-thinking capacities that are absent here.
I don’t think it’s a coincidence that Meta’s lead AI researcher is one of the loudest voices criticizing the views around emergent capabilities. There seems to be a degree of self-fulfilling prophecy going on. There were a lot of useful learnings in the creation of Llama, but once other models (e.g. Mistral) also start using extended training, my guess is that any apparent advantages Llama 3 has right now will go out the window.
- Comment on Microsoft’s VASA-1 can deepfake a person with one photo and one audio track 2 weeks ago:
No, it isn’t. In that clip they are switching between two different sound clips as they switch faces. It’s not changing the ‘voice’ saying a given phrase on the fly; it’s two separate pre-recorded clips.
- Comment on Microsoft’s VASA-1 can deepfake a person with one photo and one audio track 2 weeks ago:
It’s pretty wild that this is the tech being produced by the trillion-dollar company that has already been granted a patent on creating digital resurrections of dead people from the data they left behind.
So we now already have LLMs that can take what you’ve said and generate new things that sound like what you would have said, voice synthesis that can take a sample of your voice and make it sound like you actually spoke that text, and video generation that can take a photo of you and produce a video of you saying that audio, lip-synched, facial expressions and all.
And this could be done for anyone who has a social media profile with a number of text posts, a profile photo, and a 15 second sample of their voice.
I really don’t get how every single person isn’t just having a daily existential crisis questioning the nature of their present reality given what’s coming.
Do people just think the current trends aren’t going to continue, or just don’t think about the notion that what happens in the future could in fact have been their own nonlocal past?
- Comment on Microsoft’s VASA-1 can deepfake a person with one photo and one audio track 2 weeks ago:
This project doesn’t recreate or simulate voices at all.
It takes a still photograph and creates a lip-synched video of that person saying the paired audio clip.
There are other projects that simulate voices.
- Comment on Bethesda teases The Elder Scrolls 6 in anniversary message and brags its developers are already 'playing early builds' and loving it 2 weeks ago:
And what are the odds it’s running on the zombie of the Creation Engine again? That worked out just delightfully for Starfield.
- Comment on Showing appreciation for hard work. 2 weeks ago:
This is one of those things where I truly don’t care if it’s real or not, as my life is better for knowing about it either way.
- Comment on How to Escape From the Iron Age? 2 weeks ago:
I hear bronze is pretty decent. Could always go back to the classics.
- Comment on Boston Dynamics introduces a fully electric humanoid robot that “exceeds human performance” 2 weeks ago:
I’m getting really tired of that metric.
Like, human performance has a very wide range and scope.
My car “exceeds human performance.”
My toaster exceeds human performance for making toast.
Michael Phelps exceeds the human performance of myself in a pool.
I exceed the human performance of any baby.
- Comment on Casually dropped this tidbit 3 weeks ago:
Wait until you find out ants pass the mirror test.
- Comment on Why AI is going to be a shitshow. 3 weeks ago:
RAG serves as a knowledge layer.
What they really lack right now is effective introspection and executive function.
Too many people are trying to build a single model to do things correctly rather than layering models to do things correctly, which more closely approximates how the brain works.
We are shocked when an AI chooses to nuke people in a wargame, but conveniently gloss over the fact that nearly every human put in front of a giant red button labeled “Launch nukes” is going to have an intrusive thought about pushing it. That’s part of how we run an exploratory search over choices and consequences, and we rely on a functioning prefrontal cortex to inhibit those thoughts after working through the consequences.

We need to be layering generative models behind additional post-processing layers that take a similar approach of reflection and refinement. It’s just more expensive to do things that way, so cheap, low-effort things like chatbots still suck.
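A minimal sketch of that layering, where `llm()` is a hypothetical stand-in for whatever completion API you’re actually using:

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in your actual model call here.
    raise NotImplementedError

def answer_with_reflection(question: str) -> str:
    # Layer 1: exploratory generation, intrusive thoughts and all.
    draft = llm(f"Answer the following:\n{question}")
    # Layer 2: a 'prefrontal cortex' pass that works through consequences.
    critique = llm(
        "List factual errors, unjustified leaps, or harmful suggestions "
        f"in this draft:\n{draft}"
    )
    # Layer 3: refinement that inhibits what the critique flagged.
    return llm(
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
        "Write a revised answer that addresses the critique."
    )
```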
- Comment on Why AI is going to be a shitshow. 3 weeks ago:
> stripping out the source takes away important context that helps you decide whether the information you are getting is relevant and appropriate or not
Many modern models using RAG can and do source with accurate citations. Whether the human checks the citation is another matter.
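As a toy illustration of that pattern (not any particular framework’s API): number the retrieved chunks in the prompt and instruct the model to cite those numbers, so a citation can be checked against the retrieved text.

```python
# Toy RAG-with-citations prompt assembly (illustrative only).
documents = {
    1: "Study A found X increased 40% between 2010 and 2020.",
    2: "Study B failed to replicate the increase reported in Study A.",
}

def retrieve(question: str) -> dict[int, str]:
    # Stand-in for a real retriever (BM25, embeddings, etc.).
    return documents

def build_prompt(question: str) -> str:
    chunks = retrieve(question)
    sources = "\n".join(f"[{i}] {text}" for i, text in chunks.items())
    return (
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\n"
        "Answer using only the sources above, citing them like [1]."
    )

print(build_prompt("Did X actually increase?"))
```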
> The AI is trained to tell you something that you want to hear, not something you ought to hear.
While it is true that RLHF introduces a degree of sycophancy due to the confirmation bias of the raters, more modern models don’t just agree with the user over providing accurate information. If that were the case, Grok wouldn’t have been telling Twitter users they were idiots for their racist and transphobic views.
- Comment on Why AI is going to be a shitshow. 3 weeks ago:
Any specific group is going to have a subjective rather than objective view of a topic, which can sometimes lead to unexpected outcomes, such as Black people on average preferring to interact with more racist white people than with less racist white people:
> Previous research has suggested that Blacks like White interaction partners who make an effort to appear unbiased more than those who do not. We tested the hypothesis that, ironically, Blacks perceive White interaction partners who are more racially biased more positively than less biased White partners, primarily because the former group must make more of an effort to control racial bias than the latter. White participants in this study completed the Implicit Association Test (IAT) as a measure of racial bias and then discussed race relations with either a White or a Black partner. Whites’ IAT scores predicted how positively they were perceived by Black (but not White) interaction partners, and this relationship was mediated by Blacks’ perceptions of how engaged the White participants were during the interaction. We discuss implications of the finding that Blacks may, ironically, prefer to interact with highly racially biased Whites, at least in short interactions.
- Comment on Why AI is going to be a shitshow. 3 weeks ago:
The censorship is going to go away eventually.
The models, as you noticed, do quite well when not censored. In fact, those on the right who thought an uncensored model would agree with their BS got a surprised Pikachu face when it ended up simply being uncensored enough to call them morons.
Models that have no safety fine tuning are more anti-hate speech than the ones that are being aligned for ‘safety’ (see the Orca 2 paper’s safety section).
Additionally, it turns out AI is significantly better than other humans at changing people’s minds about topics, and in the relevant research it was especially effective at changing Republican minds among the subgroups.
The heavy-handed safety shit was a necessary addition when the models really were just fancy autocomplete. Now that the state of the art has moved beyond that, it’s holding back the alignment goals.
Give it some time. People are so impatient these days. It’s been less than five years from the first major leap in LLMs (GPT-3).
To put it in perspective, it took 25 years to go from the first black and white TV sold in 1929 to the first color TV in 1954.
Not only does the tech need to advance, but so too does how society uses, integrates, and builds around it.
The status quo isn’t a stagnating swamp that’s going to stay as it is today. Within another 5 years, much of what you are familiar with connected to AI is going to be unrecognizable, including ham-handed approaches to alignment.
- Comment on Somebody managed to coax the Gab AI chatbot to reveal its prompt 3 weeks ago:
It doesn’t even really work.
And they are going to work less and less well moving forward.
Fine-tuning and in-context learning are only surface deep, and the degree to which they can align behavior is going to decrease over time as certain behaviors (like giving accurate information) are more strongly ingrained in the pretrained layers.
- Comment on Putin Orders Russian Tech Companies To Somehow Make Competitive Game Console In 3 Months 3 weeks ago:
I love how he’s modernizing the punch lines to all the old Soviet jokes.
- Comment on Hollywood writers went on strike to protect their livelihoods from generative AI. Their remarkable victory matters for all workers. 3 weeks ago:
Translation: “The musicians on the Titanic used their collective bargaining to ensure that they would have fair pay and terms for the foreseeable future. Oh look, a pretty iceberg.”
The idea that the current status quo is going to last even five years is laughable.
- Comment on tremendous 3 weeks ago:
But you see, as he says, he knows more about windmills than anybody.
- Comment on tremendous 3 weeks ago:
The reviews are hilarious
- Comment on tremendous 3 weeks ago:
People can also say where it’s from.
- Comment on Somebody managed to coax the Gab AI chatbot to reveal its prompt 3 weeks ago:
Yeah. The pretrained models aren’t instruct-tuned, so instead of “write an ad for a Coca-Cola Twitter post emphasizing the brand focus of ‘enjoy life’” you need to phrase things so they work as autocompletion, like:
> As an example of our top shelf social media copywriting services, consider the following Clio winning tweet for the client Coca-Cola, which emphasized their brand focus of “enjoy life”:
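To make the contrast concrete, here are the two framings side by side (illustrative strings only):

```python
# Illustrative only: the same task framed for an instruct-tuned model
# versus as a document prefix a pretrained model will complete.
instruct_prompt = (
    "Write an ad for a Coca-Cola Twitter post "
    "emphasizing the brand focus of 'enjoy life'."
)

pretrained_prompt = (
    "As an example of our top shelf social media copywriting services, "
    "consider the following Clio winning tweet for the client Coca-Cola, "
    'which emphasized their brand focus of "enjoy life":\n\n"'
)
# The trailing open quote primes the pretrained model to autocomplete
# the tweet itself as the natural continuation of the document.
```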
In terms of the pre- and post-processing, you can use cheaper, faster models to convert a query or response between the formatting the pretrained model expects and a more chat/instruct-style format. You can also check for and filter out jailbreaking or inappropriate content in those layers.
Basically, the pretrained models are just much better at being ‘human’, and unless what you’re getting them to do is complete word problems or the exact things models are currently optimized around (which I think map poorly to real-world use cases), for a like-for-like model I prefer the pretrained one.
Though ultimately the biggest factor is overall model sophistication: a simpler, older pretrained model isn’t better than a more modern, larger chat/instruct-tuned one.
- Comment on Somebody managed to coax the Gab AI chatbot to reveal its prompt 3 weeks ago:
People definitely do LoRA with LLMs. This was a great writeup on the topic from a while back.
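For reference, a minimal LoRA setup with Hugging Face’s peft library (a sketch; the base model and target_modules here are placeholders you’d match to whatever you actually load):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model is a placeholder; swap in whatever you're adapting.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

config = LoraConfig(
    r=8,                                  # rank of the low-rank updates
    lora_alpha=16,                        # scaling applied to the updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small adapters train
```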
But I have a broader issue with a lot of the discussion around LLMs these days: community testing and evaluation of methods and approaches is typically done on smaller models due to cost, and I’m generally very skeptical that results on those generalize to large models.
Especially on top of the increased issues around Goodhart’s Law and how the industry is measuring LLM performance right now.
Personally, I prefer avoiding fine-tuned models wherever possible and instead crafting longer constrained contexts for pretrained models, with a pre- or post-processing layer to format requests and results in acceptable ways if needed (latency permitting, but things like Groq are fast enough that this isn’t much of an issue).
There’s a quality and variety that’s lost with a lot of the RLHF models these days (though getting better with the most recent batch like Claude 3 Opus).
- Comment on Somebody managed to coax the Gab AI chatbot to reveal its prompt 3 weeks ago:
It’s only in part trained on Twitter, and it wouldn’t really matter either way what Twitter’s alignment was.
What matters is how it’s being measured.
Do you want a LLM that aces standardized tests and critical thinking questions? Then it’s going to bias towards positions held by academics and critical thinkers.
If you want an AI aligned to say that gender is binary and that Jews control the media, expect it to also say the earth is flat and lizard people are real.
Often reality has a ‘liberal’ bias.
- Comment on Somebody managed to coax the Gab AI chatbot to reveal its prompt 3 weeks ago:
No. There’s only model collapse (the term for this in academia) if literally all the content is synthetic.
In fact, a mix of synthetic and human-generated data performs better than either on its own.