andallthat
@andallthat@lemmy.world
- Comment on Connor Myers: As if graduating weren’t daunting enough, now students like me face a jobs market devastated by AI 1 day ago:
but why am I soft in the middle? The rest of my life is so hard!
- Comment on Microsoft Copilot falls Atari 2600 Video Chess 5 days ago:
but… but… reasoning models! AGI! Singularity! Seriously, what you’re saying is true, but it’s not what OpenAI & Co are trying to peddle, so these experiments are a good way to call them out on their BS.
- Comment on $219 Springer Nature book on machine learning was written with a chatbot 6 days ago:
Congrats then, you write better than an LLM!
- Comment on $219 Springer Nature book on machine learning was written with a chatbot 6 days ago:
Interestingly, your original comment is not much longer and I find it much easier to read.
Was it written with the help of an LLM? Not being sarcastic, I’m just trying to understand if the (perceived) deterioration in quality was due to the input already being LLM-assisted.
- Comment on Trump says he has 'a group of very wealthy people' to buy TikTok 1 week ago:
In order to make sure they were wealthy enough, I’m sure he personally tested them one by one, challenging each of them to send him a big donation in cryptocurrency.
That’s what a committed President-slash-genius looks like!
- Comment on Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027 1 week ago:
60% success rate sounds like a very optimistic take. Investing in an AI startup with a 60% chance of success? That’s a VC’s wet dream!
- Comment on In a First, America Dropped 30,000-Pound Bunker-Busters—But Iran’s Concrete May Be Unbreakable, Scientists Say 1 week ago:
“Eventually” might be a long time with radiation.
20 years after the Chernobyl disaster the level of radiation was still high enough to give you a good chance of cancer if you went to live there for a few years.
www.chernobylgallery.com/…/radiation-levels/
I don’t know how much radiation these “tactical” weapons release, but if it’s comparable to Chernobyl, even if the buildings were not originally damaged, I don’t know how fit they would be for living after being abandoned for 30 or 40 years.
- Comment on Can AI run a physical shop? Anthropic’s Claude tried and the results were gloriously, hilariously bad 1 week ago:
It was Anthropic who ran this experiment
- Comment on In a First, America Dropped 30,000-Pound Bunker-Busters—But Iran’s Concrete May Be Unbreakable, Scientists Say 1 week ago:
rest of Tokyo is mostly intact
and housing becomes much more accessible too when buildings are intact but their inhabitants have much shorter lives because of radiation
- Comment on Amazon boss tells staff AI means their jobs are at risk in coming years 2 weeks ago:
Quick recap for future historians:
- For a really brief part of its history, humanity tried to give kindness a go. A half-hearted attempt at best, but there were things like DEI programs, for instance, attempting to create a gentler, more accepting world for everyone. At the very least, trying to appear human to the people they managed was seen as a good attribute for Leaders.
- Some people felt that their God-given right to be assholes to everyone was being taken away (it’s right there in the Bible: be a jerk to your neighbor, take away his job and f##k his wife).
- Assholes came back in full force, with a vengeance. Not that they had ever disappeared, but now they relished the opportunity to be openly mean for no reason again. Once again, True Leaders were judged by their ability to drain every drop of blood from their employees and take their still-beating hearts as an offering to the Almighty Shareholders.
- Comment on Iran asks its people to delete WhatsApp 2 weeks ago:
I get what you mean and it’s a fair point. But still, ignorant as I am, I would go with Meta as the most immediate threat in a war with the US. I would assume the phone manufacturer has a certain level of control over the way Android works, and that it wouldn’t be as easy for Google to get the same level of access to any individual Samsung or Xiaomi phone running Android as it is for Meta with WhatsApp, an app they fully control and that has full access to (way too many) phone features regardless of brand.
- Comment on Iran asks its people to delete WhatsApp 2 weeks ago:
They are basically at war with the US and there is this piece of US Tech that nearly everyone is carrying around and that can access their communications, precise location, microphone and camera.
It’s also owned by a company, Meta, that has a history of being used as a tool to manipulate public opinion. I have no particular sympathy for Iran but to me it doesn’t sound like bad advice (and I don’t think WhatsApp is the only way for people to communicate with the outside world).
- Comment on The hidden time bomb in the tax code that's fueling mass tech layoffs 3 weeks ago:
I can’t tell if it’s “the true cause” of the massive tech layoffs because I know jackshit of US tax, but it does make more sense than every company realising at the same time that they over-hired or becoming instant believers of AI-driven productivity.
The only part that doesn’t make sense to me is why hide this from employees. Countless all-hands with uncomfortable CTOs spitting badly rehearsed BS about why 20% of their team was suddenly let go, or why project Y, top of last year’s strategic priorities, was unceremoniously cancelled.
I would not necessarily be happier about being laid off but this would at least be an explanation I feel I’d truly be able to accept
- Comment on ChatGPT 'got absolutely wrecked' by Atari 2600 in beginner's chess match — OpenAI's newest model bamboozled by 1970s logic 4 weeks ago:
Machine learning has existed for many years, now. The issue is with these funding-hungry new companies taking their LLMs, repackaging them as “AI” and attributing every ML win ever to “AI”.
Yes, ML programs designed and trained specifically to identify tumors in medical imaging have become good diagnostic tools. But if you read in news that “AI helps cure cancer”, it makes it sound like a bunch of researchers just spent a few minutes engineering the right prompt for Copilot.
That’s why, yes, a specifically designed and finely tuned ML program can now beat the best human chess player, but calling it “AI” and bundling it together with the latest Gemini or Claude iteration is intentionally misleading.
- Comment on Consumer groups file complaint against SHEIN for dark patterns fuelling over-consumption 4 weeks ago:
I guess it’s one of those things where periodically someone gets sanctioned and someone else gets scared and stops doing it for a while.
- Comment on Consumer groups file complaint against SHEIN for dark patterns fuelling over-consumption 4 weeks ago:
I’ve never used SHEIN so I can’t tell if they are using these practices or how bad they are, but from the article I see they allegedly use fake urgency messaging, which I know has been sanctioned before in the EU (the company I used to work with had to rush removing it from our eCommerce site). A company can tell you that the item you’re looking at happens to be the last one in stock, if it’s true. But if they lie about it, so you rush into a decision to buy it before it’s gone, then it’s a deceptive practice.
- Comment on AI company files for bankruptcy after being exposed as 700 Indian engineers - Dexerto 4 weeks ago:
Depends on what you mean by “valid”. If you mean “profitable”, sure: fraud has always been a profitable business model.
But if you mean “valid” in terms of what Microsoft got out of their $455M investment, not so much, as they wanted a great new AI model, not the output that the “human-powered” model produced pretending to be an AI.
- Comment on AI is rotting your brain and making you stupid 5 weeks ago:
I agree. I was almost skipping it because of the title, but the article is nuanced and has some very good reflections on topics other than AI. Everything we find a shortcut for is a tradeoff. The article mentions cars to get to the grocery store. There are advantages in walking that we give up when always using a car. Are cars in general a stupid and useless technology? No, but we need to be aware of where the tradeoffs are. And eventually most of these tradeoffs are economic in nature.
By industrializing the production of carpets we might have lost some of our collective ability to produce those hand-made masterpieces of old, but we get to buy ok-looking carpets for cheap.
By reducing and industrializing the production of text content, our mastery of language is declining, but we get to read a lot of not-very-good content for free. This pre-dates AI btw, as can be seen by standardized tests in schools everywhere.
The new thing about GenAI, though, is that it upends the promise that technology would do the grueling, boring work for us and free up time for us to do the creative things that give us joy. I feel the roles have reversed: even when I have to write an email or a piece of code, AI does the creative part and I’m the glorified proofreader and corrector.
- Comment on AI is rotting your brain and making you stupid 5 weeks ago:
cover letters, meeting notes, some process documentation: the stuff that for some reason “needs” to be done, usually written by people who don’t want to write it for people who don’t want to read it. That’s all perfect for GenAI.
- Comment on Duolingo CEO says AI is a better teacher than humans—but schools will exist ‘because you still need childcare’ 1 month ago:
In other news: AI is a better human than Duolingo CEO
- Comment on The Collapse of GPT: Will future artificial intelligence systems perform increasingly poorly due to AI-generated material in their training data? 1 month ago:
Look up stuff where? Some things are verifiable more or less directly: the Moon is not 80% made of cheese, adding glue to pizza is not healthy, the average human hand does not have seven fingers. A “reasoning” model might do better with those than current LLMs.
But for a lot of our knowledge, verifying means “I say X because here are two reputable sources that say X”. For that, having AI-generated text creeping up everywhere (including peer-reviewed scientific papers, which tend to be considered reputable) is blurring the line between truth and “hallucination” for both LLMs and humans.
- Comment on The Collapse of GPT: Will future artificial intelligence systems perform increasingly poorly due to AI-generated material in their training data? 1 month ago:
Basically, model collapse happens when the training data no longer matches real-world data
I’m more concerned about LLMs collapsing the whole idea of a “real world”.
I’m not a machine intelligence expert but I do get the basic concept of training a model and then evaluating its output against real data. But the whole thing rests on the idea that you have a model trained with relatively small samples of the real world and a big, clearly distinct “real world” to check the model’s performance.
If LLMs have already ingested basically the entire information in the “real world” and their output is so pervasive that you can’t easily tell what’s true and what’s AI-generated slop “how do we train our models now” is not my main concern.
As an example, take the judges who found citations to made-up cases in filings because lawyers used an LLM. What happens if those made-up cases are referenced in several other places, including some legal textbooks used in law schools? Don’t they become part of the “real world”?
- Comment on Algorithm based on LLMs doubles lossless data compression rates 1 month ago:
I tried reading the paper; there is a free preprint version on arXiv. The article linked by OP also links, at the end, the code they used and the data they tried compressing.
While most of the theory is above my head, the basic intuition is that compression improves if you have some level of “understanding” or higher-level context of the data you are compressing. And LLMs are generally better at doing that than numeric algorithms.
As an example, if you recognize a sequence of letters as the first chapter of the book Moby-Dick, you’ll probably transmit that information more efficiently than a compression algorithm. “The first chapter of Moby-Dick”; there … I just did it.
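To make the intuition concrete: classic compressors already get a weak version of this via preset dictionaries, where sender and receiver share some text in advance and the compressor can reference it instead of re-encoding the bytes. A minimal Python sketch (the shared text and message here are made-up for illustration):

```python
import zlib

# Hypothetical shared context: both sender and receiver already know this text,
# a bit like both of us already knowing Moby-Dick.
shared_context = b"Call me Ishmael. Some years ago - never mind how long precisely - "

# Message that overlaps heavily with the shared context.
message = b"Call me Ishmael. Some years ago - never mind how long precisely - I went to sea."

# Plain compression: no prior knowledge.
plain = zlib.compress(message)

# Compression with a preset dictionary: the compressor emits back-references
# into the shared context instead of re-encoding the overlapping bytes.
comp = zlib.compressobj(zdict=shared_context)
with_context = comp.compress(message) + comp.flush()

print(len(plain), len(with_context))  # the dictionary version is smaller
```

The LLM-based idea in the paper is, very roughly, this taken to the extreme: the “shared dictionary” is everything the model has learned, so the overlap with typical real-world text is enormous.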
- Comment on Google might replace the ‘I’m Feeling Lucky’ button with AI Mode 1 month ago:
Especially right now, I’m feeling lots of things but “lucky” ain’t one…
- Comment on China has introduced a drone that flies like a bird. The new invention could turn the drone industry upside down 1 month ago:
These are more realistic. I hear they can form a flock and shit all over you. Scary stuff!
- Comment on AI is your money becoming sentient 2 months ago:
I think that’s already uncomfortably close to reality.
A fully AI company was already tried as an experiment. There is also a company that appointed an AI CEO but I suspect this one is a publicity stunt.
Right now, various socials are full of AI-generated fake engagement, images and videos. Meta is offering AI-powered ads. The obvious question I see asked every time, also here on Lemmy, is: if most of Facebook becomes a zombie world where comments and fake engagement are all LLMs, who would buy those Meta ads? That question was actually what inspired this wildly successful (-27 votes and counting!) showerthought of mine: this fake engagement only makes sense if Meta thinks we’ll give AI more and more agency to choose the products we buy and (eventually) buy them on our behalf. So it’s going to be AI convincing other AIs to buy. Our money becomes sentient, so to speak.
Crazy talk, right? Well…
- Comment on AI is your money becoming sentient 2 months ago:
Thanks! I’ll try longer showers.
- Comment on AI is your money becoming sentient 2 months ago:
I like the comparison. We used to think of the Economy like this hard-to-control, vital but occasionally dangerous natural force, like Gravity. The showerthought was that with the advent of machine models, money has started becoming sentient and making decisions without us.
- Comment on AI is your money becoming sentient 2 months ago:
Fair enough… I meant more in the sense of investment/pension funds. Or the fact that the actual value of the bills in our pockets is driven up or (more frequently) down, and probably so are the interest rate of your mortgage and the price of your fuel. And maybe not for you, but the algorithms on social media do influence which company you choose for your insurance.
- Submitted 2 months ago to showerthoughts@lemmy.world | 10 comments