CileTheSane
@CileTheSane@lemmy.ca
- Comment on This is Android's new 'advanced flow' for sideloading apps without verification, includes one-day waiting period 1 day ago:
The average user does not need the same level of device security and lock down as a CEO of an independent opposition-aligned media outlet. It’s absurd that you seem to be arguing otherwise.
- Comment on This is Android's new 'advanced flow' for sideloading apps without verification, includes one-day waiting period 1 day ago:
Perhaps a CEO of an independent opposition-aligned media outlet should be using more strict security measures far above what is necessary or even accessible to an average phone user.
- Comment on This is Android's new 'advanced flow' for sideloading apps without verification, includes one-day waiting period 1 day ago:
“statistically doesn’t happen” is not equivalent to “has never happened”. It means the number of times it has happened is such a statistically insignificant % of the user base that it does not pass the smell test for being the reason to inconvenience every user.
- Comment on This is Android's new 'advanced flow' for sideloading apps without verification, includes one-day waiting period 1 day ago:
Other people not knowing how to secure their devices is not an excuse for my device that I own to block me from using it the way I want to.
- Comment on This is Android's new 'advanced flow' for sideloading apps without verification, includes one-day waiting period 2 days ago:
What % of users side load apps vs what % of users had someone else install a bug on their phone?
It’s a situation that statistically doesn’t happen, and now every legitimate user is being inconvenienced to stop it? This is like age verification laws being sold as “protecting children” as an excuse to spy on and control people.
- Comment on Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game 4 days ago:
It does a pretty good job of making the game still look (almost) exactly the same
Isn’t that just displaying the image with extra steps? Why is my PC using all this extra processing power in order to make it look (almost) exactly the same?
- Comment on Share this with 5 people or it gets ya 6 days ago:
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
The fact that they had to make a riddle for the AI to trip it up
“I want to take my car to the car wash, should I walk or drive” is not a riddle. It requires basic understanding of what is being asked.
- Comment on Hisense TVs force owners to watch intrusive ads when switching inputs, visiting the home screen, or even changing channels — practice infuriates consumers, brand denies wrongdoing 1 week ago:
Even if that were true, you’re still paying more than you would be for a “dumb” TV that doesn’t have those features. So everybody loses but the company selling the hardware still sees a sale. They lose a lot more if they pay the cost to produce and then never sell the device.
- Comment on Hisense TVs force owners to watch intrusive ads when switching inputs, visiting the home screen, or even changing channels — practice infuriates consumers, brand denies wrongdoing 1 week ago:
You are paying for features you don’t use (such as Internet access). That’s not a win.
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
I think you are underestimating how accurate LLMs are because you probably don’t use them much, and only see their mistakes posted for memes. No one’s going to post the 99 times an LLM gives the correct answer, but the one time it says to put glue on pizza it’s going to go viral. So if your only view of LLM output is from posts, you’re going to think it’s way worse than it is.
And look at what is on my feed just this morning: lemmy.world/post/44099386
It’s not just that LLMs are shit. It’s that people trust them way too much and are shocked when the predictable happens.
Even if you mark it down for incorrect answers it’s still going to beat most people. An LLM can score in the 90th percentile in the SAT, and around the 80th percentile in the LSAT.
And of course the AI bro goes for the “vibes” argument. You can’t just state that as true without providing a source. Or did AI tell you it was true?
For example: fewer than 10% of tested AIs consistently answered correctly that you need to drive to a car wash in order to wash your car: opper.ai/blog/car-wash-test
That’s a question so far below anything on the SAT or LSAT and 90% of LLMs can’t even get that right.
If you’re doubting my percentages on the accuracy of LLMs I’d encourage you to test them yourself.
I’ve tried using LLMs. I don’t use them for research, because why the fuck would I? Better, more efficient tools already exist for that. When I had something that a search engine can’t help me with and LLMs are apparently “good at” it immediately proved itself to be worthless.
- Comment on YouTube ads are about to get even longer and they’ll be unskippable - Dexerto 1 week ago:
I’m just imagining how spammy it would be to see this reply on every comment that has more than 69 upvotes.
Yup. At one point that number was 69 in order to get to where it is now. Good job.
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
An LLM will give more accurate declarative statements on more questions than any human can
Not if you include “I don’t know” as an accurate statement or penalize the score for incorrect declarative statements.
So is it not more trustworthy for giving declarative statements than any random human? Would you not trust an LLM’s answer on who the 4th president is over a random human?
I would absolutely trust the random human more because they’re not going to make shit up if they don’t know. It will either be “I don’t know” or “I would guess” to make it clear they aren’t confident. The LLM will give me a declarative answer but I have no fucking clue if it’s accurate or a “hallucination” (lie). I’ll need to do what I should have done in the first place and ask a search engine to make sure.
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
they have good declarative knowledge
No. They don’t. They are good at making declarative statements.
That’s not the same thing.
Every day you also probably see a new post of humans being blatantly wrong, does that mean humans can’t know things?
I fully agree that asking a random human for help with something is just as effective as asking an LLM to help with something.
If I need to know something (like who was the first president of the United States) I will not go outside and ask a random human, I will ask a trustworthy source.
If I need some code written I won’t have a random human do it, I will interview people to find someone capable.
If I need someone to interact with customers I won’t let some random human come in and do it.
- Comment on New York Bill Would Force Age ID Checks at the Device Level 1 week ago:
So you replied to an article about something happening in the US to talk about Canada without mentioning Canada anywhere in your original post?
- Comment on New York Bill Would Force Age ID Checks at the Device Level 1 week ago:
If you’re Canadian then you know “left of center” in American units is still to the right of center everywhere else in the world.
It’s absurd to see a democrat doing something and blame liberals for it.
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
Whether an LLM can determine truth depends on your definition of truth
Of course someone who doesn’t believe “truth” exists thinks LLMs are just fine. You have to not believe things can be true in order to find their output acceptable.
An LLM can derive this sort of truth by determining the consensus of its training data assuming its training data is from trustworthy sources or the more trustworthy sources are more reinforced.
Every week I see a new post of an LLM being blatantly wrong. LLMs said to add glue to pizza to make the cheese stick together.
“They have improved the models since then…” Last week the American military used “AI” and it targeted a school as a military structure. The models are full of shit, they just manually remove the blatantly incorrect shit whenever it makes the rounds, and there’s always more blatantly incorrect shit to be found.
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
An LLM has no knowledge.
My calculator does not “know” that 2+2=4, it runs the code it has been programmed with which tells it to output 4. It has no knowledge or understanding of what it’s being asked to do, it just does what it is programmed to do.
An LLM is programmed to guess what a human would say if asked who the 4th president of the United States was. It runs the code that was developed with the training data to output the most likely response. Is it true? Doesn’t matter. All that matters is that it sounds like something a human would say.
I trust the knowledge of my calculator more, because it was designed for giving factual correct responses.
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
They all “lie” because they don’t actually know a damn thing. Everything an LLM outputs is just a guess of what a human might do.
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
Except for “diagnose these symptoms”, with proper framework around it (only using it for flagging things, not for actually making decisions, things that have been discussed thousands of times) that’s a valid task for them.
This sounds like someone who knows nothing about construction saying “building a house” is a valid task because they don’t understand why using a hammer to drive in a screw would be incorrect or why it’s even a problem. “The results are good enough right?”
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
And I give it less than a year before the “oh shit, we really should have humans overseeing this” hits
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
So in the composite object of “LLM” what is the tool and what is the task?
The tool is a “Large Language Model” and the task is “Learn language and mimic human speech.”
The task is not “Provide accurate information” or “write code” or “provide legal advice” or “Diagnose these symptoms” or “provide customer service” or “manage a database”.
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
Lemmy is filled to the brim with llm haters but you’re not only a minority, you’re probably also closing doors on the future trajectory of tech in business.
“Think of the shareholder value of firing all these people!”
Also, I call bullshit. I’ve seen many cases of companies replacing their staff with AI, then a month later desperately trying to hire staff again because the AI is good at *looking like* it can do the job, but once in use it turns out to be complete shit.
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 1 week ago:
I agree anyone using an LLM is a bad craftsman, because they’re using a hammer to drive in a screw.
- Comment on Oracle Layoffs: Tech giant to slash 30,000 jobs as banks pull out from financing AI data centres | Company Business News 2 weeks ago:
Only people who know very little about a field feel like AI “is good enough” for that field. Experts in a field will universally say that AI is shit in their field.
LLMs are the extreme example of “the dumb man’s idea of a smart man.” It sounds like it knows what it’s talking about so people ignorant on the subject don’t know it’s full of shit.
- Comment on McDonald’s CEO’s awkward taste test sparks mocking online: ‘His aura screams kale salad’ 2 weeks ago:
There’s definitely mental illness involved. Being a billionaire and continuing to want more is a symptom of something deeply wrong with them.
- Comment on Can some please explain to me why it is that your health insurance can deny you medication, even if your doctor says you need it? 2 weeks ago:
Isn’t the insurance approving the medication/procedure only after being asked for proof of the denial’s legality evidence that the denial was illegal, and reason enough for a lawsuit?
- Comment on McDonald’s CEO’s awkward taste test sparks mocking online: ‘His aura screams kale salad’ 2 weeks ago:
How much money is being spent daily for a marketing team? Let them do their job and stay out of it.
Elon Musk should have been enough of a warning to CEOs everywhere that being in the public eye is bad for business.
- Comment on McDonald’s CEO’s awkward taste test sparks mocking online: ‘His aura screams kale salad’ 2 weeks ago:
An intelligent person would look at that bank account and think “I never have to work another day in my life, why am I still here?”
- Comment on What do you think might happen if Luigi Mangione isnt found guilty? 3 weeks ago:
Disappeared by ICE.