jj4211
@jj4211@lemmy.world
- Comment on Big tech has spent $155 billion on AI this year. It’s about to spend hundreds of billions more 1 day ago:
They see Meta paying $200 million or more to land a single employee and think about half that is a steal for a whole team of failed AI-centric people.
- Comment on AI chatbots are becoming popular alternatives to therapy. But they may worsen mental health crises, experts warn 1 day ago:
You might as well talk to yourself as to a chatbot, or read some stories.
Having an overly agreeable puree of language dispensed at you in place of actual conversation is neither healthy nor meaningfully engaging.
Conversation is valuable because it is an actual external perspective. LLM chatbots are designed as echo chambers. They have their uses, but conversation for the sake of conversation is not one of them.
- Comment on Wi-Fi 8 won't be faster, but will be better - more details emerge just hours after Wi-Fi 7 protocols are officially ratified 1 day ago:
The same thing happened with 5G: the claim was that categorically new stuff would be possible with 5G that just couldn’t be done at all with LTE. IoT and VR were buzzwords thrown around as simply demanding 5G and being utterly impossible without it.
Then 5G came, and it was welcome, but much more mundane. IoT applications are generally so light that even today such devices only bother to ship with LTE hardware. VR didn’t catch on that hard, and to the extent it has, 5G doesn’t matter: headsets don’t carry cellular modems, and even 5G is too slow to stream anything directly.
The same thing is happening with AI across pretty much every technology right now: vendors claim AI absolutely requires whatever the hell it is they want to push, leaning hard on AI FOMO to sell their tech.
- Comment on Peter Thiel’s bestie going mask off 2 days ago:
Now there’s some x-rays to see…
- Comment on Peter Thiel’s bestie going mask off 2 days ago:
Success is awarded to the confident. You can be an absolute weirdo and an idiot and still, in the right circumstances, find massive success just because people assume there’s a reason you seem so confident, and who are they to second-guess it?
- Comment on SEC says it will deregulate cryptocurrencies with 'Project Crypto' 3 days ago:
The reason for the volatility is that any such concept at scale is subject to the messiest lump of evolving opinions on everything. It will inflate and deflate wildly because it’s utterly subject to the whims of the crowd, with no mechanism to compensate when mass consensus on what ‘value’ is falls apart.
We noticed as things scaled up that there needed to be some regulatory management to counter the whimsical populace. It’s hard to fight mass inflation or deflation when you can’t do anything to manage the “money supply” to offset panic.
- Comment on What's the easiest way to get hookups without seeing escorts? 3 days ago:
I mean given the right socket…
- Comment on Duckstation(one of the most popular PS1 Emulators) dev plans on eventually dropping Linux support due to Linux users, especially Arch Linux users. 3 days ago:
Sometimes devs are the most difficult users.
“Why is this not working the way it should? OK, yes, I did rewrite how the code manages save data in the filesystem, but that shouldn’t have any impact. I just thought it should make sure it only writes in 8k chunks, because I read a comment somewhere that said it would increase SSD life by 3%, but I promise you it’s exactly equivalent to the original code and the problem must be elsewhere, not my patch. I’ve patched dozens of other packages with my 8k write barrier without any problems.”
Devs come up with wild ideas, rewrite stuff, fail to mention it until you run into it, then explain why it doesn’t matter and stubbornly refuse to at least try without their weird change.
- Comment on Duckstation(one of the most popular PS1 Emulators) dev plans on eventually dropping Linux support due to Linux users, especially Arch Linux users. 4 days ago:
Getting flashbacks to installing qmail back in the day…
I have a hard time imagining it being worth it when other PSX emulators are readily available without weird hoops to jump through.
- Comment on Slurrrrrrrrrrrrrrrrrrrrrrrrrrrrp 4 days ago:
Wouldn’t be surprised if they ran an animated splash screen.
Hell, wouldn’t be surprised if they started pushing ads through the screens.
- Comment on A leap toward lighter, sleeker mixed reality displays 6 days ago:
If you had, hypothetically, AR glasses that weighed 25 grams, with a 12-hour battery runtime, transparent (or equivalent) real-world visuals, and perfectly opaque virtual content across the entire field of view, you’d have even broader adoption than earbuds have today.
Being able to pull up your phone apps without holding your phone, the ability to have real-world subtitles in any language. If they go the camera-and-reproduce route, they’d also have a nice solution to presbyopia (reading glasses suck to have to switch out).
Unfortunately, current headsets weigh the same as twenty pairs of eyeglasses, have much-improved but still terrible passthrough, and wouldn’t last more than a couple of hours even if you wanted to try. The Bigscreen Beyond gets down to 100 grams, but it still looks weird and requires an external battery and processor.
- Comment on Oh My God, TAKE IT DOWN Kills Parody 6 days ago:
Frankly, while the general depiction is realistic, the actual penis doesn’t look like any real penis, regardless of size. It shouldn’t fall in the scope of the law.
- Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots. 1 week ago:
It gave me flashbacks to when the Replit guy complained that the LLM deleted his data despite being told multiple times, in all caps, not to.
People really really don’t understand how these things work…
- Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots. 1 week ago:
Well, not irrelevant. Much of the world is trying to treat LLM output as human-like output, so if humans are going to treat LLM output the same way they treat human-generated content, then we have to characterize, for those people, how their expectations break down in that context.
So as weird as it may seem to study a statistical content-extrapolation engine through the lens of social science, a great deal of the rhetoric and investment wants to treat its output as “person equivalent,” so it must be studied in that context, if for no other reason than to demonstrate to people that it should be considered “weird.”
- Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots. 1 week ago:
Interaction with the physical world isn’t really required for us to evaluate how they deal with ‘experiences’. They have, in principle, access to all sorts of interesting experiences in the online data. Some models have been enabled to fetch internet data and add it to the prompt to help synthesize an answer.
One key thing is that they don’t bother until directed to. They don’t have any desire; they just have “generate a search query from the prompt, execute the query and fetch the results, treat the combination of the original prompt and the results as the context for generating more content, and return that to the user,” as sketched below.
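A rough illustration of that loop; every function here is a hypothetical stub for whatever model and search API a real system would wire together, not any actual product’s interface:

```python
# Minimal sketch of the "generate query, fetch, extend context" loop.
# generate_text() and web_search() are invented stand-ins.

def generate_text(prompt: str) -> str:
    # Stand-in for an LLM call: returns statistically likely continuation text.
    return f"<model output for: {prompt[:40]}...>"

def web_search(query: str) -> str:
    # Stand-in for a search API returning result snippets.
    return f"<snippets for: {query[:40]}...>"

def answer_with_retrieval(user_prompt: str) -> str:
    # The model never "decides" to search; this wrapper code decides for it.
    query = generate_text(f"Write a search query for: {user_prompt}")
    results = web_search(query)
    # Original prompt plus fetched text becomes the context for ordinary
    # generation, whose output goes back to the user.
    return generate_text(f"{user_prompt}\n\nSearch results:\n{results}")
```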
LLM is not a scheme that credibly implies more LLM == sapient existence. Such a concept may come, but it will be something different from an LLM. LLMs just look crazily like dealing with people.
- Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots. 1 week ago:
Fun thing: when it gets the answer right, tell it it was wrong and then watch it apologize and “correct” itself to give the wrong answer.
- Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots. 1 week ago:
I kid you not, early on (mid-2023) some guy mentioned using ChatGPT for his work and not even checking the output (he was in some sort of non-techie field that was still in the wheelhouse of text generation). I expressed that LLMs can include some glaring mistakes, and he said he had fixed that by always including in his prompt: “Do not hallucinate content and verify all data is actually correct.”
- Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots. 1 week ago:
It’s not that they may be deceived; it’s that they have no concept of what truth or fiction, mistake or success, even are.
Our brains know the concepts and may fall for deceit without recognizing it, but we at least recognize that the concepts exist.
An AI generates content that is a blend of material from the training data, consistent with extending the given prompt. It only seems to have a concept of lying or mistakes when the human injects that into the human half of the prompt. And the human can just as easily instruct it to “correct” something that is already correct as instruct it to correct a genuine mistake (unless the training data includes a lot of reaffirmation of the material in the face of such doubts).
An LLM can consume more input than a human could gather in multiple lifetimes and still be wonky in generating content, because it needs enough to credibly blend content to extend every conceivable input. It’s why so many people used to judging human content get derailed when judging AI content. An AI generates a fantastic answer to an interview question that only solid humans get right, only to falter ‘on the job’, because the utterly generic interview question looks like millions of samples in the input, but the actual job was niche.
- Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots. 1 week ago:
They are not only unaware of their own mistakes, they are unaware of their successes. They are generating content that is, per their training corpus, consistent with the input. This gets eerie, and the ‘uncanny valley’ of the mistakes is all the more striking, but they are just generating content with no concept of ‘mistake’ or ‘success’, and no notion of the content being a model for something else rather than just a blend of stuff from the training data.
For example: Me: Generate an image of a frog on a lilypad. LLM: I’ll try to create that — a peaceful frog on a lilypad in a serene pond scene. The image will appear shortly below.
<includes a perfectly credible picture of a frog on a lilypad, request successfully processed>
Me (lying): That seems to have produced a frog under a lilypad instead of on top. LLM: Thanks for pointing that out! I’m generating a corrected version now with the frog clearly sitting on top of the lilypad. It’ll appear below shortly.
<includes another perfectly credible picture>
It didn’t know anything about the picture; it just took the input at its word. A human would have stopped to say, “Uh… what do you mean? The lilypad is on the water and the frog is on top of that.” Or, if they were really trying to fulfill the request without clarification, they might have thought, “Maybe he wanted it from the perspective of a fish, with the frog underwater?”
But the training data isn’t predominantly people blatantly lying about such obvious things, or second-guessing things that were so obviously done correctly.
- Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots. 1 week ago:
Also, the best interfaces for LLMs generally combine non-LLM facilities transparently. The LLM might translate the prose into the format the math engine expects, and then an intermediate layer recognizes a tag, submits that excerpt to the math engine, and substitutes the chunk with the math engine’s output, along the lines of the sketch below.
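A hedged sketch of what such an intermediate layer might look like; the <<math>> tag convention and the solve() helper are invented for illustration, not any real product’s protocol:

```python
# Toy routing layer: scan the LLM's output for tagged math excerpts and
# replace each one with the exact result from a deterministic engine.
import re

def solve(expression: str) -> str:
    # Stand-in for a real math engine; a restricted eval() for illustration.
    return str(eval(expression, {"__builtins__": {}}))

def route(model_output: str) -> str:
    # Substitute each tagged chunk with the math engine's output, so the
    # LLM only translates prose into the tag format and never computes.
    return re.sub(r"<<math>>(.*?)<</math>>",
                  lambda m: solve(m.group(1)), model_output)

print(route("The total is <<math>>17 * 23<</math>>."))  # The total is 391.
```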
Even when servicing a request to generate an image, the text-generation model runs independently of the image generation, and the intermediate layer combines them. That can cause fun disconnects, like the guy asking for a full glass of wine: the text-generation half is completely oblivious to the image-generation half. It responds playing the role of a graphic artist dutifully doing the work without ever ‘seeing’ the image, and it assumes the image is good because that’s consistent with training output. Then the user corrects it, and it goes about admitting that the picture (which it never ‘looked’ at) was wrong and retries the image generator with the additional context, producing a similarly botched picture.
- Comment on back in those days... 1 week ago:
20% layoff, that’s rough
- Comment on Humans can be tracked with unique 'fingerprint' based on how their bodies block Wi-Fi signals 1 week ago:
They explicitly went into the advantages over cameras:
- Any light condition (of course, IR lighting with IR cameras is the gold standard, so this one can arguably be met otherwise)
- The ability to cover multiple rooms through walls with one device. A sub-10 GHz signal can penetrate most interior walls, so people could be tracked without even being able to see a camera, and by extension without knowing where to go to defeat the surveillance.
So perhaps a building takes a picture of everyone as they come in the front door and also establishes a ‘WhoFi’ profile for each person. It could then keep track of their movement through the building while maintaining an actionable correlation to a photo.
- Comment on They even got their own island 1 week ago:
For example, the age of consent in the US state of Delaware is 18, but it is allowed for teenagers aged 16 and 17 to engage in sexual intercourse as long as the older partner is younger than 30
16 and 17 is still under the age of consent, but there’s a special exception for partners under 30. So the age of consent is still exactly what people think it is.
- Comment on They even got their own island 1 week ago:
Your reference is your own comment? Shouldn’t there be a link somewhere down the chain explaining this? It would be nice for you to be right, but I don’t think you are.
If it were only about activities between people who are both under the age of consent, then it makes no sense: either it’s allowed and legal, or it’s illegal for both of them, but neither can be held criminally responsible, because neither can legally consent.
There are constantly stories about someone older having relations with someone way too young, where the jurisdiction’s age of consent makes it okay.
- Comment on New Executive Order:AI must agree on the Administration views on Sex,Race, cant mention what they deem to be Critical Race Theory,Unconscious Bias,Intersectionality,Systemic Racism or "Transgenderism 1 week ago:
LLMs don’t just regurgitate training data; the output is a blend of the material used in training. So even if you somehow assured that every bit of content fed in was in and of itself completely objectively true and factual, an LLM is still going to blend it together in ways that are no longer true and factual.
So either it’s nothing but a parrot/search engine that only regurgitates input data, or it’s an LLM that can do the full manipulation of the representative content and can produce incorrect responses from purely factual and truthful training fodder.
Of course we have “real” LLMs; an LLM is by definition a real LLM. I actually had no problem with terms like LLM or GPT, since they were technical concepts with specific meanings that didn’t have to imply anything more. But then came the swell of marketing meant to emphasize the vaguer ‘AI’, or ‘AGI’ (AI, but you know, we mean it this time), and ‘reasoning’ and ‘chain of thought’. Whether we have real AGI or reasoning can be debated; LLMs are real, whatever they are.
- Comment on New Executive Order:AI must agree on the Administration views on Sex,Race, cant mention what they deem to be Critical Race Theory,Unconscious Bias,Intersectionality,Systemic Racism or "Transgenderism 1 week ago:
But they do have authority over government procurement, and this order even explicitly mentions that this is about government procurement.
Of course, if you make life simple by using the same offering for government and private customers, then you bring down your costs and appease the conservatives even better.
Even in very innocuous matters, if there’s a government procurement restriction and you play in that space, you tend to just follow that restriction across the board for simplicity’s sake, unless there’s a lot of money behind a separate private offering.
- Comment on [deleted] 1 week ago:
You’ll develop a deeper interest for whatever it is as time goes on.
This is so insidious. You work a job, and one day you realize you actually care about something without understanding why you should care about it. I know it’s stupid for me to care about something in my work, and yet I do.
- Comment on Rising rocket launches linked to ozone layer thinning 2 weeks ago:
Yeah, every time I see someone offer going to Mars as an answer to Earth getting ruined, I have to keep in mind that Mars is pre-ruined, and whatever calamity ruins Earth will be easier to survive than colonizing Mars.
- Comment on Vibe coding service Replit deleted production database 2 weeks ago:
But the whole ‘vibe coding’ message is that the LLM knows all this stuff so you don’t have to.
This isn’t “the LLM can do some code completion/suggestions”; it’s “the LLM is so magical you can be an idiot with no skills/training and still produce full-stack solutions.”
- Comment on Vibe coding service Replit deleted production database 2 weeks ago:
judgement
Yeah, it admitted to an error in judgement because the prompter clearly declared it so.
Generally, LLMs will make whatever statement about what happened that you want them to make. If you told it things went fantastically, it would agree. If you told it things went terribly, it would parrot that sentiment back.
Which is what seems to make it so dangerous for some people’s mental health: a text generator that wants to agree with whatever you’re saying, but does so without verbatim copying, giving the illusion of another thought process agreeing with them. Meanwhile, concurrently with your chat, another person starting from the exact same model is getting a dialog that violently disagrees with the first person. It’s an echo chamber.