AmbitiousProcess
@AmbitiousProcess@piefed.social
- Comment on DuckDuckGo poll says 90% responders don't want AI 17 hours ago:
The main problem is that LLMs are pulling from those sources too. An LLM often won’t distinguish between highly reputable sources and any random page that has enough relevant keywords, as it’s not actually capable of picking its own sources carefully and analyzing each one’s legitimacy, at least not without a ton of time and computing power that would make it unusable for most quick queries.
- Comment on DuckDuckGo poll says 90% responders don't want AI 17 hours ago:
I can’t speak for the original poster, but I also use Kagi and I sometimes use the AI assistant, mostly just for quick simple questions to save time when I know most articles on it are gonna have a lot of filler, but it’s been reliable for other more complex questions too. (I just would rather not rely on it too heavily since I know the cognitive debt effects of LLMs are quite real.)
It’s almost always quite accurate. Kagi’s search indexing is miles ahead of any other search I’ve tried in the past (Google, Bing, DuckDuckGo, Ecosia, StartPage, Qwant, SearXNG) so the AI naturally pulls better sources than the others as a result of the underlying index. There’s a reason I pay Kagi 10 bucks a month for search results I could otherwise get on DuckDuckGo. It’s just that good.
I will say though, on more complex questions about very specific topics, such as a particular random programming library, or specific statistics you’d only find in a government PDF somewhere with an obscure name, it does tend to get it wrong. In my experience it doesn’t exactly hallucinate, in the sense that if you check the sources, the information it cites really is there… it just doesn’t actually answer the question. (e.g. if you ask it about a very obscure stat and it pulls up Reddit, it might accidentally pull a number from a comment about something entirely different from the stat you were looking for)
In my experience, DuckDuckGo’s assistant was extremely likely to do this, even on more well-known topics, at a much higher frequency. Same with Google’s Gemini summaries.
To be fair though, I think if you really, really use LLMs sparingly and with intention and an understanding of how relatively well known the topic is you’re searching for, you can avoid most hallucinations.
- Comment on TikTok claimed bugs blocked anti-ICE videos, Epstein mentions; experts call BS 17 hours ago:
Wow, a bug that specifically automatically chose keywords like “ICE”, and “epstein”, then blocked them from appearing, while leaving literally all other content unharmed????? How conveniently specific and well-timed! /s
- Comment on YSK that a general strike is one of the most effective ways to push for change. There is a general strike in the works across the US for this Friday. 17 hours ago:
> General strikes are illegal in the US.

It’s not illegal to strike on the same date as other people. It’s illegal for unions to call for a “general strike”, because that’s considered calling a strike on behalf of non-union employees at other businesses.
> Also, jobs can fire workers on the spot for participating in them

Not always (though for many people, yes, it’s likely), since workers can use things like conveniently timed sick or vacation days, or, if they’re backed by a union, a contract that prevents at-will firing without certain specific causes, striking not being one of them.
However, if enough people strike, it’s kind of hard to enforce coming into work via firings, as it’s similar to if an entire unionized company goes on strike. What are you gonna do? Fire every single worker and shut down for good the next day because the only person running every single operation is the remaining CEO?
> even if the workers are part of a union and the union want to participate.

As long as the union doesn’t say “this is a general strike” and instead says “we are striking on this date for better working conditions”, and that date happens to be the same day other unions are striking, it’s legal. There is no law preventing different unions from striking on the same dates, and it would take a very long time for any legal process to even try to make that claim before the strike had already occurred.
> national guards have been sent in to shut down general strikes in the past.
This is the most likely outcome in my opinion. However, it’s still kind of hard to actually enforce the end of a general strike. It’s one thing to arrest someone, or to stop them from doing a given thing, but it’s another to forcibly remove people from their homes and make them work no matter their condition or reason.
Essentially, I’m saying it’d be messy.
> Doing it multiple days? You realize most people live paycheck to paycheck? Nobody wants to tell their kids they’re going to be homeless.

This is the biggest hurdle, though it can be mitigated to a degree, at least for a little while. For example, a lot of people have backyard and community gardens; some small businesses with stockpiles are willing to support their community, as we’ve seen with the current situation in Minnesota; and if the situation got bad enough, you’d probably just see people stealing from their nearest billionaire-owned store, because fuck it, why not screw them over more?
To clarify, I’m not like, disputing your actual overarching thesis here, or saying a general strike is easy or likely to succeed, I’m just saying it’s not entirely impossible :)
- Comment on In multiple shots (no pun intended) it is shown him holding a phone to record. Question is what happened to the phone and why not release his video? 2 days ago:
Most of the time, if someone’s phone is confiscated by any kind of officers/agents, it’s going to be stored somewhere just so they can have it. Even if they don’t want to release a video, they can still crack your phone and get your data, like your messaging history, to later paint you as a terrorist or find something else they can use to smear your reputation as a martyr. (Since your phone has to be in the After First Unlock state to record anything on it, it would be trivial to exfiltrate all the data on the phone unencrypted.)
They don’t want to release the video because it’d make them look horrible.
- Comment on New research finds that ChatGPT systematically favours wealthier, Western regions in response to questions ranging from 'Where are people more beautiful?' to 'Which country is safer?' 5 days ago:
Kagi had a good little example of language biases in LLMs.
When asked what 3.10 - 3.9 is in English, it fails, but it succeeds in Portuguese if you format the numbers as you would in Portuguese, with commas instead of periods.
This is because… 3.10 and 3.9 often appear in the context of python version numbers, and the model gets confused, assuming there is a 0.2 difference going from version 3.9 to 3.10 instead of properly doing math.
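As a quick sanity check (plain Python, nothing model-specific), here’s the arithmetic the model trips over:

```python
# 3.10 is just the number 3.1, so the correct answer is -0.8,
# not the "0.2" you would get by (mis)reading these as Python
# version numbers going from 3.9 to 3.10.
difference = 3.10 - 3.9
print(round(difference, 2))  # -0.8
```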
- Comment on Just the Browser: tools to remove AI and other bloatware from Chrome, Edge and Firefox 6 days ago:
It kind of is. For example, Edge will automatically pop up in the corner at checkout and offer coupon codes. Most of them will never work, and then it steals the affiliate revenue from whoever actually sent you to the site in the first place, or adds an affiliate link where one didn’t previously exist, so the site now has extra expenses that are just… paying Microsoft for no reason, making everything you buy more expensive in the long run.
It pops up whether you want it or not, it’s convoluted to disable, it slows down your browser when it’s running, it financially harms the shops you buy from, and it often just lies about having coupons to waste your time while pretending it’s helping you.
- Comment on AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns 1 week ago:
“The torment nexus could falter without more public support for tormenting people”
- Comment on AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns 1 week ago:
> Ai does work great, at some stuff. The problem is pushing it into places it doesn’t belong.
I can generally agree with this, but I think a lot of people overestimate where it DOES belong.
For example, you’ll see a lot of tech bros talking about how AI is great at replacing artists, while a bunch of artists who know their shit can show you every possible way it just isn’t as good as human-made work. Yet those same artists might say that AI is still incredibly good at programming… because they’re not programmers.
> It’s a good grammar and spell check.

Totally. After all, it’s built on a similar foundation to existing spellcheck systems: predicting the likely next word. It’s good as a thesaurus too. (e.g. ask “what’s that word for someone who’s full of themselves, self-centered, and boastful?” and it’ll spit out “egocentric”)
> It’s also great for troubleshooting consumer electronics.

Only for very basic, common, or broad issues. LLMs generally sound very confident, and provide answers regardless of whether there’s actually a strong source. Plus, they tend to ignore the context of where they source information from.
For example, if I ask it how to change X setting in a niche piece of software, it will often just make up an entire name for a setting or menu, because it has to say something that sounds right: the previous text was “Absolutely! You can fix X by…”, and it’s just predicting the most likely next term, which isn’t going to be “wait, nevermind, sorry, I don’t think that setting even exists!”, but a made-up name instead. (This is one of the reasons “thinking” versions of models perform better: the internal dialogue can reasonably include a correction, retraction, or self-questioning.)
It will pull from names and text of entirely different posts that happened to display on the page it scraped, make up words that never appeared on any page, or infer a meaning that doesn’t actually exist.
But if you have a more common question like “my computer is having x issues, what could this be?” it’ll probably give you a good broad list, and if you narrow it down to RAM issues, it’ll probably recommend you MemTest86.
> It’s far better at search than google.
As someone else already mentioned, this is mostly just because Google deliberately made search worse. Other search engines that haven’t enshittified, like the one I use (Kagi), tend to give much better results than Google, without you needing to use AI features at all.
On that note though, there is actually an interesting trend where AI models tend to pick lower-ranked, less SEO-optimized pages as sources, but still tend to pick ones with better information on average. It’s quite interesting, though I’m no expert on that in particular and couldn’t really tell you why other than “it can probably interpret the context of a page better than an algorithm made to do it as quickly as possible, at scale, returning 30 results in 0.3 seconds, given all the extra computing power and time.”
> Even then it can only help, not replace folks or complete tasks.
Agreed.
- Comment on AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns 1 week ago:
Which of course, Google did just so you’d have to search more, so you’d see more ads.
- Comment on QWERTY Phones Are Really Trying to Make a Comeback This Year 2 weeks ago:
Same here. I get the nostalgia factor, and that tactile buttons can feel nice, but other than that I feel like it’s just a one-size-fits-all solution that doesn’t necessarily work well.
Instead of a quick tap, you have to actually press on each button, which slows down typing. You can’t resize, recolor, or reformat your keyboard to fit your needs better, there’s no split keyboard functionality for landscape mode, etc.
Plus it’s just more mechanical failure points and areas that dust and gunk can get stuck in.
- Comment on RIP Pinterest 2 weeks ago:
Same issue here, also stuck at exactly 97%.
- Comment on YSK: You can use uBlock Origin to filter Lemmy posts based on certain words 2 weeks ago:
This doesn’t always work, especially if you:
- View by All/Trending/the equivalent depending on your client/instance
- Follow broad communities (e.g. Shitposting where literally any topic is allowed, but you’d rather just not see more politics)
- Follow communities that have rules against certain content that is sometimes just ignored, especially by new posters (e.g. there might be a “News” community that has a rule called “No U.S. Politics” but people in the U.S. will still post something related to American politics there because they simply didn’t read the rules first)
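For anyone who still wants to try it anyway, the keyword part is usually done with a procedural `:has-text()` cosmetic filter. The instance name and the `.post-listing` selector below are just examples; the right selector varies between Lemmy frontends, so check it with uBlock’s element picker first:

```
! Hide any post whose listing mentions these words (case-insensitive)
lemmy.world##.post-listing:has-text(/trump|musk/i)
```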
- Comment on YSK: You can use uBlock Origin to filter Lemmy posts based on certain words 2 weeks ago:
And on PieFed, you can either block things by keyword entirely, or just make them Semi-Transparent if you still want to see them in your feed occasionally, but just have it be easier to skip over.
For example, here’s my filter for the words “Trump” and “Musk” turned on and off, where you can see the filter being on makes the post transparent enough that it’s kinda annoying to look at unless you really want to.
You can also set an expiry date if you only want the filter to work for a certain amount of time.
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 2 weeks ago:
> The article seems to be implying that this is a common problem that happens constantly and that the companies creating these AI models just don’t give a fuck.

The article never once states that this is a common problem; it only explains the technical details of how it works, and the possible legal ramifications. It also mentions that, according to nearly any AI scholar or expert you can talk to, this is not some fixable problem: if you take data and effectively run extremely lossy compression on it, there is still a way for that data to theoretically be recovered.

Advancing LLMs while claiming you’ll work on fixing this doesn’t change the fact that the problem is inherent to LLMs. There are certainly ways to prevent it, reduce its likelihood, etc, but you can’t entirely remove it. The article is simply about how LLMs inherently memorize data: you can mask it with more varied training data, but trained weights still memorize inputs, and when combined together, can eventually reproduce those inputs.
To be very clear, again, I’m not saying it’s impossible to make this happen less, but it’s still an inherent part of how LLMs work, and isn’t some entirely fixable problem. Is it better now than it used to be? Sure. Is it fully fixable? Never.
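As a toy sketch of what “memorization” means here (a deliberately tiny bigram next-word model, nothing like a real LLM), note how a model trained on a single sentence can only spit that sentence back out:

```python
# "Train" a bigram model: map each pair of words to the word that
# followed it in the training text.
text = "the quick brown fox jumps over the lazy dog".split()
nexts = {}
for a, b, c in zip(text, text[1:], text[2:]):
    nexts[(a, b)] = c

# Greedy generation from the first two words reproduces the training
# data verbatim, because the "most likely" continuation IS the input.
out = ["the", "quick"]
while (out[-2], out[-1]) in nexts:
    out.append(nexts[(out[-2], out[-1])])

print(" ".join(out))  # the quick brown fox jumps over the lazy dog
```

With billions of varied training sentences the effect is diluted, but it never fully disappears.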
> Clearly nobody is distributing copyrighted images by asking AI to do its best to recreate them. When you do this, you end up with severely shitty hack images that nobody wants to look at
It’s actually a major problem for artists where people will pass their art through an AI model to reimagine it slightly differently so it can’t be copyright striked, but will still retain some of the more human choices, design elements, and overall composition.
Spend any amount of time on social platforms with artists and you’ll find many of them now don’t complain as much about people directly stealing their art and reposting it, but more people stealing their images and changing them a bit with AI, then reposting it so it’s just different enough they can feign innocence and tell their followers it’s all their work.
> Basically, if no one is actually using these images except to say, “aha! My academic research uncovered this tiny flaw in your model that represents an obscure area of AI research!” why TF should anyone care?
The thing is, while these are isolated experiments meant to test for these behaviors as quickly as possible with a small set of researchers, when you look at the sheer scale of people using AI tools now, then statistically speaking, you will inevitably get people who put in a prompt that is similar enough to a work that was trained on, and it will output something almost identical to that work, without the prompter realizing.
> Why do you need to point to absolutely, ridiculously obscure shit like finding a flaw in Stable Diffusion 1.4 (from years ago, before 99% of the world had even heard of generative image AI)?
Because they highlight the flaws that continue to plague existing models, but have been around for long enough that you can run long-term tests, run them more cheaply on current AI hardware at scale, and can repeat tests with the same conditions rather than starting over again every single time a new model is released.
Again, this memorization is inherent to how these AI models are trained. It gets better with new releases, as more training data is used and more alterations are made, but it cannot be removed, because removing the memorization would remove all the training.
I’ll admit it’s less of a “smoking gun” against use of AI in itself than it used to be when the issue was more prevalent, but acting like it’s a non-issue isn’t right either.
> Generative AI is just the latest way of giving instructions to computers. That’s it! That’s all it is.
It is not, unless you consider every single piece of software or code ever to be just “a way of giving instructions to computers” since code is just instructions for how a computer should operate, regardless of the actual tangible outcomes of those base-level instructions.
Generative AI is a type of computation that predicts the most likely sequence of text, or distribution of pixels in an image. That is all it is. It can be used to predict the most likely text, in a machine readable format, which can then control a computer, but that is not what it inherently is in its entirety.
It can also rip off artists and journalists, hallucinate plausible misinformation about current events, or delude you into believing you’re the smartest baby of 1996.
It’s like saying a kitchen knife is just a way to cut foods… when it can also be used to stab someone, make crafts, or open your packages. It can be “just a way of altering the size and quantity of pieces of food”, but it can also be a murder weapon or a letter opener.
> Nobody gave a shit about this kind of thing when Star Trek was pretending to do generative AI in the Holodeck
That would be because it was a fictional series about a nonexistent future that didn’t affect anyone’s life today in a negative way if nonexistent job roles were replaced, and most people didn’t have to think about how it would affect them if it became reality today.
> Do you want the cool shit from Star Trek’s imaginary future or not? This is literally what computer scientists have been dreaming of for decades. It’s here! Have some fun with it!
People also want flying cars without thinking of the noise pollution and traffic management. Fiction isn’t always what people think it could be.
> Generative AI uses up less power/water than streaming YouTube or Netflix

But generative AI is not replacing YouTube or Netflix; it’s primarily replacing web searches. So when someone goes to ChatGPT instead of Google, that uses anywhere from a few tens of times more energy to a couple hundred times more.
Yet they will still also use Netflix on top of that.
> I expect you’re just as vocal about streaming video, yeah?
People generally aren’t, because streaming video tends to have a much more positive effect on their lives than AI.
Watching a new show or movie is fun and relaxing. If it isn’t, you just… stop watching. Nobody forces it down your throat.
Having LLMs pollute my search results with plausible sounding nonsense, and displace the jobs of artists I enjoy the art of, is not fun, nor relaxing. Talking with someone on social media just to find out they aren’t even a real human is annoying. Trying to troubleshoot an issue and finding made up solutions makes my problem even harder to solve.
We can’t necessarily all be focusing on every single possible thing that takes energy, but it’s easy to focus on the thing that most people have an overall negative association with the effects of.
Two birds, one stone.
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 2 weeks ago:
I’m honestly not even sure it’s deliberate.
If you give a probability-guessing machine like an LLM the ability to review content, it’s probably just going to rank things closer to what you expect for your specific search than an algorithm built to pull the most relevant links as quickly as possible, using only some of the page as keywords, with no understanding of how the context of your search relates to each page.
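A toy illustration of that difference (made-up pages, not how any real engine works): a bare keyword-count ranker has no notion of context, so a keyword-stuffed page outranks the genuinely useful one:

```python
def keyword_score(page: str, query: str) -> int:
    # Naive ranking: just count query-word occurrences in the page.
    return sum(page.lower().count(word) for word in query.lower().split())

seo_page = "best laptop best laptop best laptop buy now best laptop"
useful_page = "A detailed comparison of laptop battery life and repairability."

# The stuffed page "wins" despite containing no actual information.
print(keyword_score(seo_page, "best laptop"))     # 8
print(keyword_score(useful_page, "best laptop"))  # 1
```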
The downside is, of course, that LLMs use way more energy than regular search algorithms, take longer to provide all their citations, etc.
- Comment on UK government starting to think about leaving X 2 weeks ago:
don’t worry, they have concepts of a plan
- Comment on Dell says the quiet part out loud: Consumers don't actually care about AI PCs — "AI probably confuses them more than it helps them" 2 weeks ago:
Seconded on Framework. I’ve got the more performant (but more heavy, large, and expensive) 16, but for most people the 13 will be perfectly usable. The newer 12 model also seems pretty decent and is a bit cheaper.
They’ve kept their RAM prices relatively stable too, but if you already have other RAM lying around you can just bring your own and save yourself the money. Same for the SSD.
The main downside is they’re gonna be quite expensive upfront compared to alternatives, so I wouldn’t recommend them to someone price-sensitive, especially in the current economy.
The main benefit is that since they’re so modular and upgradable, you’ll save money down the line on repair services, replacement parts, or just the cost of buying a whole new device because one component broke that they don’t sell replacements for.
- Comment on Is there a "buy nothing" community on Lemmy? Or an anti-consumerism comm? 4 weeks ago:
As someone else already mentioned, !anticonsumption@slrpnk.net is a good bet, but it’s still not as active as other communities.
I’d suggest also looking into:
- !zerowaste@slrpnk.net / !zerowaste@lemmy.ml / !zerowaste@europe.pub
- !buyitforlife@slrpnk.net / !buyitforlife@sh.itjust.works / !buyitforlife@europe.pub / !buyitforlife@discuss.tchncs.de / !buyitforlife@lemmy.world
- !degrowth@slrpnk.net / !degrowth@feddit.nl / !degrowth@europe.pub
- !fixing@slrpnk.net

All of those are frequently intertwined with the movements to either buy nothing, or reduce how much you absolutely need to buy in the first place.
For something outside Lemmy, if you haven’t already I’d suggest checking out the group aptly named the Buy Nothing Project, as they’re an incredible resource if you want to find things in your community others are willing to give away, or want an easy way to give away things you no longer need that doesn’t require having a yard sale style table setup or coordinating giveaways through Facebook posts. (*though they do have Facebook groups for different areas for those who don’t want to use their app*)
- Comment on WIRED Database Leaked: 40 Million Record Threat Looms for Condé Nast 4 weeks ago:
The zip archive is protected by a password, and I’d appreciate being able to verify if my data is actually in this breach as I’m a WIRED subscriber. If anyone has the password to the breach file, please let me know.
- Comment on WIRED Database Leaked: 40 Million Record Threat Looms for Condé Nast 4 weeks ago:
I don’t believe card details are in there as they use a separate payment provider. (If I go to my account management page and attempt to modify my payment method, the page begins making many requests to Stripe)
But hey, it’s still got names, addresses, phone numbers, emails, all that jazz.
- Comment on SODIMM-to-DIMM adapters offer a workaround for DDR5 price hikes 4 weeks ago:
I have no clue. I usually watch a good bit of LTT but I don’t recall watching a video when they did this, though I’m sure I could have just missed it.
Does seem like something they’d do though.
- Comment on SODIMM-to-DIMM adapters offer a workaround for DDR5 price hikes 4 weeks ago:
It is. As Salem Techsperts tested on their YouTube channel, you often have to downclock the RAM for it to actually function without errors.
However, with the prices for RAM still being so high, you could save a decent amount of money with this if you’re willing to keep your speeds a little lower.
- Comment on Firefox Will Ship with an "AI Kill Switch" to Completely Disable all AI Features - 9to5Linux 5 weeks ago:
Nobody. That’s the answer. Absolutely nobody does. They’re doing this shit of their own free will.
- Comment on A San Francisco power outage left Waymo's self-driving cars stranded at intersections 5 weeks ago:
And then on top of that, since a ton of people whose WiFi was out switched to cell service, the cell towers were so overloaded they couldn’t send operators the data the cars require to be started up again, like multiple camera feeds, a 3D scan of the surroundings, etc.
- Comment on It just keeps getting worse - Firefox to "evolve into a modern AI browser" 5 weeks ago:
They don’t use local models yet, at least not for their existing AI chatbot sidebar feature.
https://support.mozilla.org/en-US/kb/ai-chatbot

> When you use a chatbot, you are agreeing to that provider’s privacy policies and terms of use. Each chatbot provider has their own terms of use and privacy policies. View the privacy policies and terms for providers in Firefox.
Some chatbots are more privacy-respecting than others.
- Comment on It just keeps getting worse - Firefox to "evolve into a modern AI browser" 1 month ago:
The problem is, it’s not unobtrusive.
When I right click and I instantly get an option silently added to the list that sends data to an AI model hosted somewhere, which I’ve accidentally clicked due to muscle memory, it’s not good just because there’s also the option there to disable it. When I start up my browser after an update and I am instantly given an open sidebar asking me to pick an AI model to use, that’s obtrusive and annoying to have to close and disable.
Mozilla has indicated they do not want to make these features opt-in, but opt-out. The majority of Mozilla users do not want these features by default, so the logical option is to make them solely opt-in. But Mozilla isn’t doing that. Mozilla is enabling features by default, without consent, then only taking them away when you tell them to stop.
The approach Mozilla is taking is like if you told a guy you weren’t interested in dating him, but instead of taking that as a “no”, he took it as a “try again with a different pickup line in 2 weeks”, and never, ever stopped no matter what you tried. It doesn’t matter that you can tell him to go away now if he’ll just keep coming back.
Mozilla does not understand consent, and they are violating the consent of their users every time they push an update including AI features that are opted-in by default.
- Comment on It just keeps getting worse - Firefox to "evolve into a modern AI browser" 1 month ago:
Because Google only pays Mozilla for:
- Maintaining search dominance
- Preventing anti-monopoly scrutiny

They don’t want Mozilla competing in the AI space. There’s already a ton of competition there given how much money gets thrown around, so Google gains no anti-monopoly cover from propping up a rival, and with so many models out there, search dominance doesn’t apply either. They’d much rather have Mozilla stay a non-AI browser while they implement AI features and show shareholders that they’re “the most advanced” of them all, or that “nobody else is doing it like we do”.
- Comment on It just keeps getting worse - Firefox to "evolve into a modern AI browser" 1 month ago:
WE. DON’T. WANT. THIS.
Mozilla, for the love of god, stop cramming AI into the browser when the vast majority of your users just want a privacy-respecting browser that works.
I’ve said it before, and I’ll say it again: I will not donate any more money to the Mozilla Foundation until they stop cramming AI into everything, and neither should you.
- Comment on We can play that game too 1 month ago:
Fun fact: the guy who posted that (Caleb Hammer) is a YouTuber who allegedly hired actors to pretend to be broke people making bad financial decisions, because real people weren’t shocking enough for the audience, all to make money selling you a budgeting course. He also allegedly pressured a guy into doing OnlyFans after touching him inappropriately. Fun! /s
https://www.linkedin.com/pulse/beware-austin-based-creator-caleb-hammer-victor-vulcano-bjzaf