antonim
@antonim@lemmy.dbzer0.com
- Comment on Against AI: An Open Letter From Writers to Penguin Random House, HarperCollins, Simon & Schuster, Hachette Book Group, Macmillan, and all other publishers of America 3 hours ago:
Yeah, usually they’re just sourced from public-domain book collections such as Google Books (whose scans of older books can end up visually messy), and I’m pretty sure some of those offered on Amazon were straight-up based on pirated PDFs.
- Comment on Against AI: An Open Letter From Writers to Penguin Random House, HarperCollins, Simon & Schuster, Hachette Book Group, Macmillan, and all other publishers of America 3 hours ago:
because you’re paying
Well no, it’s the buyer who is paying. And if the final price is too high, they might find it off-putting, so you get fewer buyers and less profit.
As for the quality, there’s literally no reason that a book that is printed on demand has to be low quality or use low quality materials.
Except that in practice they simply are of lower quality; I’ve seen quite enough of such books. Maybe higher-quality materials could be used, but that would raise the price for the end user even more, and possibly slow down production.
and the proof is the fact that Amazon is filled with AI generated garbage books
One has to wonder how much money they actually make, though. I saw some YT videos about the topic, and IIRC it’s really difficult. Their mere presence doesn’t prove their profitability, only that many people believe they could be profitable.
It’s easy to start a business, sure. But you didn’t explain the rest of the process and don’t seem to actually know a lot about the particulars of book publishing (neither do I, but whatever I do know doesn’t agree with your imagined “solution”).
- Comment on Against AI: An Open Letter From Writers to Penguin Random House, HarperCollins, Simon & Schuster, Hachette Book Group, Macmillan, and all other publishers of America 11 hours ago:
I guess, but print on demand is also more expensive per unit than printing in bulk, and of lower quality (paper and binding). I’m not too familiar with the details of book publishing, but I wouldn’t assume people are avoiding this route simply because they failed to notice its benefits.
- Comment on Against AI: An Open Letter From Writers to Penguin Random House, HarperCollins, Simon & Schuster, Hachette Book Group, Macmillan, and all other publishers of America 1 day ago:
I tried to read about the “just-in-time economy”, but I really don’t see how it would apply to the book market?
- Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 4 days ago:
Large AI companies themselves want people to be ignorant of how AI works, though. They want uncritical acceptance of the tech as they force it everywhere, creating a radical counter-reaction from people. The reaction might be uncritical too - I’d prefer to say it’s merely unjustified in specific cases, or overly emotional - but it doesn’t come from nowhere or from sheer stupidity. We have been hearing about people treating their chatbots as sentient beings since like 2022 (remember that guy from Google?), bombarded with doomer (or, from AI companies’ point of view, very desirable) projections about AI replacing most jobs and wreaking havoc on the world economy - how are ordinary people supposed to remain calm and balanced when hearing such stuff all the time?
- Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 4 days ago:
Oh man…
That is the point: to show how easily AI image generators fail to produce something that rarely occurs out there in reality (i.e. is absent from the training data), even though, intuitively (from the viewpoint of human intelligence), it seems like it should be trivial to portray.
- Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 4 days ago:
Yeah, I don’t think that would fly.
“Your honour, I was just hoarding that terabyte of Hollywood films, I haven’t actually watched them.”
- Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 4 days ago:
Bro are you a robot yourself? Does that look like a glass full of wine?
- Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 4 days ago:
AI can “learn” from and “read” a book in the same way a person can and does,
If it’s in the same way, then why do you need the quotation marks? Even you understand that they’re not the same.
And either way, machine learning is different from human learning in so many ways it’s ridiculous to even discuss the topic.
AI doesn’t reproduce a work that it “learns” from
That depends on the model and the amount of data it has been trained on. I remember the first public model of ChatGPT producing a sentence that was just one word different from what I found by googling the text (from some scientific article summary, so not a trivial sentence that could line up accidentally). More recently, there was a widely reported-on study of AI-generated poetry where the model was requested to produce a poem in the style of Chaucer, and then produced a letter-for-letter reproduction of the well-known opening of the Canterbury Tales. It hasn’t been trained on enough Middle English poetry and thus can’t generate any of it, so it defaulted to copying a text that probably occurred dozens of times in its training data.
- Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 4 days ago:
Facebook (Meta) torrented TBs from Libgen, and their internal chats leaked so we know about that, and IIRC they’ve been sued. Maybe you’re thinking of that case?
- Comment on WhatsApp is officially getting ads 1 week ago:
And again in a year or so only a handful of tech nerds with few social connections will actually ditch it.
- Comment on YSK: Non-violent protests are 2x likely to succeed and no non-violent movement that has involved more than 3.5% of the country population has ever failed 2 weeks ago:
It can have an effect when the opposition is relatively weak, e.g. individual small companies or govts that aren’t powerful and authoritarian enough to ignore massive protests.
- Comment on YSK: Non-violent protests are 2x likely to succeed and no non-violent movement that has involved more than 3.5% of the country population has ever failed 2 weeks ago:
Sounds like bullshit. Just in recent memory: look at Belarus 2021, look at the massive Serbian protests that have been going on for over half a year and the govt is still not relenting.
- Comment on Keep on GIFin’ — A New Version of GifCities, Internet Archive’s GeoCities Animated GIF Search Engine 2 weeks ago:
This may be a bit too far from the nominal topic of the comm, so feel free to report it and let the mods decide if it can stay up. (I’d report it myself but apparently can’t do it.)
The GIF in the OP is from the game Lemmings (1991).
- Keep on GIFin’ — A New Version of GifCities, Internet Archive’s GeoCities Animated GIF Search Engine (blog.archive.org) Submitted 2 weeks ago to technology@lemmy.world | 4 comments
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 2 weeks ago:
you know what I’m talking about
But I literally don’t. Well, I didn’t but now I mostly do, since you explained it.
I get what you’re saying with regards to the isolation; this issue has already been raised when many left-wing people started to leave Twitter. But it is opening a whole new can of worms - these profiles that post AI-generated content are largely not managed by ordinary people with their private agendas (sharing neat stuff, political agitation, etc.), but by bots, and are also massively followed and supported by other bot profiles. Much the same on Twitter with its hordes of right-wing troll profiles, and as I’m still somewhat active on reddit I notice blatant manipulation there as well (my country had elections a few weeks ago, and the flood of profiles less than one week old spamming idiotic propaganda and insults was too obvious). It’s not organic online behaviour and it can’t really be fought by organic behaviour, especially when the big social media platforms give up the tools to fight it (relaxing their moderation standards, removing fact-checking, etc.). Lemmy and Mastodon etc. are based on the idea(l) that this corporate-controlled area is not the only space where meaningful activity can happen.
So that’s one side of the story: AI is not something happening in a vacuum that you simply have to submit to of your own will. The other side of the story, the actual abilities of AI, has already been discussed; we’ve seen sufficiently that it’s not that good at helping people form more solidly developed and truth-based stances. Maybe it could be used to spread the sort of mass-produced manipulative bullshit that is already used by the right, but I can’t honestly support such stuff. In this regard, we can doubt whether there is any ground to win for the left (would the left’s possible audience actually eat it up?), and if yes, whether it is worth it (basing your political appeal on bullshit can bite you in the ass down the line).
As for the comparison to discourse around immigrants, again I still don’t fully understand the point other than on the most surface level (the media is guiding people what to think, duh).
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 2 weeks ago:
I don’t have even the slightest idea what that video is supposed to mean. (Happy cake day tho.)
- Comment on ChatGPT Mostly Source Wikipedia; Google AI Overviews Mostly Source Reddit 2 weeks ago:
In 2005 the article on William Shakespeare contained references to a total of 7 different sources, including a page describing how his name is pronounced, Plutarch, and “Catholic Encyclopedia on CD-ROM”. It contained more text discussing Shakespeare’s supposed Catholicism than his actual plays, which were described only in the most generic terms possible. I’m not noticing any grave mistakes while skimming the text, but it really couldn’t pass for a reliable source or a traditionally solid encyclopedia. And that’s the page on the best known English writer, slightly less popular topics were obviously much shoddier.
It had its significant upsides already back then, sure, no doubt about that. But the teachers’ skepticism wasn’t all that unwarranted.
- Comment on ChatGPT Mostly Source Wikipedia; Google AI Overviews Mostly Source Reddit 2 weeks ago:
I think the academic advice about Wikipedia was sadly mistaken.
It wasn’t mistaken 10 or especially 15 years ago, however. Check how some articles looked back then, you’ll see vastly fewer sources and overall a less professional-looking text. These days I think most professors will agree that it’s fine as a starting point (depending on the subject, at least; I still come across unsourced nonsensical crap here and there, slowly correcting it myself).
- Comment on Wikipedia Pauses AI-Generated Summaries After Editor Backlash 2 weeks ago:
I think that’s not possible. Wikipedia collects as little user data as possible, and providing a different UX in different countries sounds like it would already be too intrusive in that regard.
- Comment on Wikipedia Pauses AI-Generated Summaries After Editor Backlash 2 weeks ago:
As far as I’ve seen they only generated one example summary, which is linked in OP. It’s not good, as Wikipedians have pointed out: en.wikipedia.org/…/Wikipedia:Village_pump_(techni…
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 2 weeks ago:
Logic requires abstracting the argumentative form from the literal linguistic content and then generalising it, just like how math is done properly when you work with numbers and not just with sentences such as “two apples and three apples is five apples”. Such abstraction in practice allows far more powerful and widely applicable operations than dealing with individual linguistic expressions; if you’ve ever solved very complex truth trees, you’ll know how they allow streamlining and solutions that would be practically impossible if you had only the ordinary linguistic expression of the same problem. Logic doesn’t operate with textual tokens but with logical propositions and operators. “Difficulty” is not a meaningful term here: a tool is either technically capable of doing something (more or less successfully) or it isn’t.
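To make the kind of abstraction I mean concrete (my own minimal examples, not anything from the thread): the apples sentence reduces to plain arithmetic, and an everyday inference reduces to a propositional schema that holds regardless of what the sentences are about:

```latex
% "Two apples and three apples is five apples" abstracts to:
2 + 3 = 5

% "If it rains, the street gets wet; it rains; therefore the street is wet"
% abstracts to the schema of modus ponens, valid for ANY propositions P, Q:
P \to Q,\ P \ \vdash\ Q
```

Once you have the schema, you can verify or refute arguments about any subject matter by checking the form alone; that is exactly the step a system operating on surface tokens never actually performs.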
That LLMs aren’t capable of this sort of precision and abstraction is shown by the OP link as well as the simple fact that chatbots used to be extremely bad at math (which is now probably patched up by adding a proper math module, rather than relying on the base LLM - my assumption, at least).
As for trying more examples of looking for logical fallacies, I tried out three different types of text. Since you say context is important, it’s best to take only the beginning of a text. One text I tried is the opening of the Wikipedia article on “history”, which ChatGPT described like this: “The passage you’ve provided is an informative and largely neutral overview of the academic discipline of history. It doesn’t make any strong arguments or persuasive claims, which are typically where logical fallacies appear.” It then went on to nitpick about some details “for the sake of thorough analysis”, but basically had no real complaints. Then I tried out the opening paragraph of Moby-Dick. That’s a fictional text, so it would be reasonable to reject analysing its logical solidity, as GPT already did with the WP article, but it still tried to wring out some “criticism” that occasionally shows how it misunderstands the text (just as it misunderstood a part of my comment above). Finally, I asked it to find the fallacies in the first four paragraphs of Descartes’ Meditations on First Philosophy, which resulted in a criticism that was based on less logically rigorous principles than the original text (accusing Descartes of the “slippery slope fallacy”).
I’ll post the full replies below.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 3 weeks ago:
That was a roundabout way of admitting you have no idea what logic is or how LLMs work. Logic works with propositions regardless of their literal meaning, LLMs operate with textual tokens irrespective of their formal logical relations. The chatbot doesn’t actually do the logical operations behind the scenes, it only produces the text output that looks like the operations were done (because it was trained on a lot of existing text that reflects logical operations in its content).
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 3 weeks ago:
Right now the hype from most is finding issues with chatgpt
publicity
especially : promotional publicity of an extravagant or contrived kind
You’re abusing the meaning of “hype” in order to make the two sides appear the same, because you do understand that “hype” really describes the pro-AI discourse much better.
It did find the fallacies based on what it was asked to do.
It didn’t. Put the text of your comment back into GPT and tell it to argue why the fallacies are misidentified.
You act like this is fire and forget.
But you did fire and forget it. I don’t even think you read the output yourself.
First I wanted to be honest with the output and not modify it.
Or maybe you were just lazy?
Personally I’m starting to find these copy-pasted AI responses to be insulting. It has the “let me Google that for you” sort of smugness around it. I can put in the text in ChatGPT myself and get the same shitty output, you know. If you can’t be bothered to improve it, then there’s absolutely no point in pasting it.
Given what this output gave me, I can easily keep working this to get better and better arguments.
That doesn’t sound terribly efficient. Polishing a turd, as they say. These great successes of AI are never actually visible or demonstrated, they’re always put off - the tech isn’t quite there yet, but it’s just around the corner, just you wait, just one more round of asking the AI to elaborate, just one more round of polishing the turd, just a bit more faith on the unbelievers’ part…
I just feel like you can’t honestly tell me that within 10 seconds having that summary is not beneficial.
Oh sure I can tell you that, assuming that your argumentative goals are remotely honest and you’re not just posting stupid AI-generated criticism to waste my time. You didn’t even notice one banal way in which AI misinterpreted my comment (I didn’t say SMBC is bad), and you’d probably just accept that misreading in your own supposed rewrite of the text. Misleading summaries that you have to spend additional time and effort double checking for these subtle or not so subtle failures are NOT beneficial.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 3 weeks ago:
Excellent, these “fallacies” are exactly as I expected - made up, misunderstanding my comment (I did not call SMBC “bad”), and overall just trying to look like criticism instead of being one. Completely worthless - but I sure can see why right wingers are embracing it!
It’s funny how you think AI will help refine people’s ideas, but you actually just delegated your thinking to it and let it do it worse than you could (if you cared). That’s why I don’t feel like getting any deeper into explaining why the AI response is garbage, I could just as well fire up GPT on my own and paste its answer, but it would be equally meaningless and useless as yours.
Saying it’ll be boring comics missed the entire point.
So what was the point exactly? I re-read that part of your comment and you’re talking about “strong ideas”, whatever that’s supposed to be without any actual context?
Saying it is the same as google is pure ignorance of what it can do.
I did not say it’s the same as Google, in fact I said it’s worse than Google because it can add a hallucinated summary or reinterpretation of the source. I’ve tested a solid number of LLMs over time, I’ve seen what they produce. You can either provide examples that show that they do not hallucinate, that they have access to sources that are more reliable than what shows up on Google, or you can again avoid any specific examples, just expecting people to submit to the revolutionary tech without any questions, accuse me of complete ignorance and, no less, compare me with anti-immigrant crowds (I honestly have no idea what’s supposed to be similar between these two viewpoints? I don’t live in a country with particularly developed anti-immigrant stances so maybe I’m missing something here?).
The people who buy into it will get into these type of ignorant and short sighted statements just to prove things that just are not true. But they’ve bought into the hype and need to justify it.
“They’ve bought into the hype and need to justify it”? Are you sure you’re talking about the anti-AI crowd here? Because that’s exactly how one would describe a lot of the pro-AI discourse. Like, many pro-AI people literally BUY into the hype by buying access to better AI models or invest in AI companies, the very real hype is stoked by these highly valued companies and some of the richest people in the world, and the hype leads the stock market and the objectively massive investments into this field.
But actually those who “buy into the hype” are the average people who just don’t want to use this tech? Huh? What does that have to do with the concept of “hype”? Do you think hype is simply any trend that doesn’t agree with your viewpoints?
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 3 weeks ago:
I have no idea what sort of AI you’ve used that could do any of this stuff you’ve listed. A program that doesn’t reason won’t expose logical fallacies with any rigour or refine anyone’s ideas. It will link to credible research that you could already find on Google but will also add some hallucinations to the summary. Etc., it’s completely divorced from how the stuff as it is currently works.
Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.
That’s a misguided view of how art is created. Supposed “brilliant ideas” are a dime a dozen; it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept. If you are not competent in a visual medium, then don’t make it visual - write a story or an essay.
Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).
For now I see no particular benefits that the right-wing has obtained by using AI. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking its usage of slurs).
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 3 weeks ago:
Wow, I would deeply apologise on behalf of all of us uneducated proles having opinions on stuff that we’re bombarded with daily through the media.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 3 weeks ago:
That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that’s actually supposed to mean).
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 3 weeks ago:
But 90% of “reasoning humans” would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry and medicine that most people do not possess.
- Submitted 3 weeks ago to [deleted] | 12 comments