jj4211
@jj4211@lemmy.world
- Comment on What’s even the appeal of Linux? 25 minutes ago:
Forget GNU/Linux, VIM/Linux is where it is at.
But say it too loud and we are going to end up with a systemd-vim
- Comment on What’s even the appeal of Linux? 20 hours ago:
But with Linux, you can init=/bin/vim
Why settle for running vim on your OS when vim can just be your OS?
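For anyone curious, a minimal sketch of what that looks like (assuming a GRUB setup; the kernel path and root device here are just placeholders): edit the kernel line at boot to read something like
linux /boot/vmlinuz root=/dev/sda1 rw init=/bin/vim
and the kernel launches vim as PID 1 instead of the usual init system.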
- Comment on It Took Many Years And Billions Of Dollars, But Microsoft Finally Invented A Calculator That Is Wrong Sometimes 3 days ago:
Well, the article is covering the disclaimer, which is vague enough to mean pretty much whatever.
I can buy that he is taking it to the level of: if it can't directly be used for the stuff in the disclaimer, well, what could it be used for then? Crafting formulas seems to be a possibility, especially since the spreadsheet formula language is kind of esoteric and clumsy to read and write. It 'should' be right up an LLM's alley: a relatively limited grammar that's kind of a pain for a human to work with, but easy enough for an LLM to get right in theory. LLMs are sometimes useful for scripting/programming, but the vocabulary and complexity can easily get away from them, whereas Excel formulas are less likely to have programming-level complexity or arbitrarily many methods to invoke. You of course have to eyeball the formula to see if it looks right, and if it does screw up the cell references, that might be a hard thing for most people to catch by eyeballing.
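To give a sense of what I mean by esoteric and clumsy, a made-up example: a lookup across two criteria often ends up as something like =INDEX(C:C, MATCH(1, (A:A="widget")*(B:B=2024), 0)), which is tedious for a human to compose but squarely in the limited-grammar sweet spot an LLM can usually handle, so long as you double-check the ranges it picked.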
- Comment on It's a simple thing, but one good way to make games memorable is for the developers to leave you words of encouragement in the pack-in material. 5 days ago:
Yeah, it was interesting to see Carmack and Romero go different ways.
On the Carmack side you had an excellent technical execution of a fairly bland experience.
On the Romero side you had a very shoddy execution of what could have been an interesting concept… Maybe…
Of course Doom itself was fun, but it wasn't exactly made in an era that demanded much in the way of plot. Here you are, you've got guns, there are demons from hell… go for it. The instruction manual introduction, the level titles, and visual design cues were all there was, plot-wise.
Trying to have a Half-Life-style plot didn't really fit the franchise. Doom 2016 had a bit deeper plot than old Doom, but was self-aware enough to have Doomguy just destroy something to make the exposition go away and get on with what needs to get done.
- Comment on It's a simple thing, but one good way to make games memorable is for the developers to leave you words of encouragement in the pack-in material. 5 days ago:
Yeah, Doom 3 was so mediocre that it pretty much killed the franchise for years.
The game engine enjoyed some popularity, but id didn’t really know how to make a good “game” for that era.
- Comment on It's a simple thing, but one good way to make games memorable is for the developers to leave you words of encouragement in the pack-in material. 5 days ago:
Was the door locked with a red, blue, or yellow key?
- Comment on FFmpeg moves to Forgejo 6 days ago:
Geordi LaForge
- Comment on FFmpeg moves to Forgejo 6 days ago:
On one hand, I kind of agree; on the other, that is at least somewhat a generic term for a gitlab/gitea/github/sourceforge/forgejo/etc, so it might be harder to search.
- Comment on We hate AI because it's everything we hate 6 days ago:
Yeah, but let's say you had 12 guys hand-scrubbing to keep up with the plates, and then you got a mediocre dishwashing machine that did a worse job scrubbing. You wouldn't dismiss the machine because it was imperfect; you would say you need a dishwashing machine operator, who might have to do a quality check on the way out, or otherwise have whoever is plating put anything subpar in a stack for hand scrubbing, and lay off 11 of the guys.
So this could be the way out if AI worked ‘as advertised’. It however largely does not.
But then to the second point, it doesn’t even need to work as advertised if the business leader thinks it’s good enough and does the layoffs. They might end up having to scale back operations, but somehow it won’t be their fault.
- Comment on We hate AI because it's everything we hate 6 days ago:
Umm… ok, but that’s a bit beside the point?
Unless you mean to include those 1980s computers, in which case Stockfish won't run on them… A home computer much more than about 10 years old would likely be unable to run it.
- Comment on We hate AI because it's everything we hate 6 days ago:
It probably would have if IBM decided that every household in the USA needed to have chess playing compute capacity and made everyone dial up to a singular facility in the middle of a desert where land and taxes were cheap so they could charge everyone a monthly fee for the privilege…
- Comment on We hate AI because it's everything we hate 6 days ago:
It might, but:
- Current approaches are showing exponential demands for more resources with barely noticeable “improvements”, so new approaches will be needed.
- Advances in electronics are getting ever more difficult, with increasing drawbacks. In 1980 a processor would likely not even have a heatsink. Now the leading edge of Moore's law is essentially datacenter-only and frequently demands water cooling. SDRAM has joined CPUs in needing more active cooling.
- Comment on We hate AI because it's everything we hate 6 days ago:
Those are not multi purpose tools. Guns are for killing.
Nah, they are multi purpose tools:
[image]
- Comment on We hate AI because it's everything we hate 6 days ago:
There’s a few things.
First off, there is utility, and that utility varies based on your needs. In software development, for example, it ranges from doing most of the work to being nearly useless, to the point that you feel like the LLM users are gaslighting you. People who spend their lives making utterly boilerplate applications feel like it's magical. People who generate tons of what are supposed to be 'design documents', which only get eyed by non-technical executives who don't understand them but like to see volumes of prose, find that LLMs can generate that no problem (no one who would actually need them ever reads them anyway). Then people who work on more niche scenarios get annoyed because the LLMs barely do anything useful, and attempting to use them gets you inundated with low-quality code suggestions.
But I'd say mostly it's about the ratio of investment/hype to the reality. The investment is scary because one day the bubble will pop (that doesn't mean LLMs are devoid of value, just that the business context is irrational right now, much as the internet was obviously important and we still had a bubble over it around the turn of the century). The hype is just so obnoxious; they won't shut up even when they have nothing really new to say. We get it, we've heard it, and saying it over and over again is just exhausting to hear.
On creative fronts, it’s kind of annoying when companies use it in a way that is noticeable. I think they could get away with some backdrops and stuff, but ‘foreground’ content is annoying due to being a dull paste of generic content with odd looks. For text this manifests as obnoxiously long prose that could be more to the point.
On video, people are generating content and claiming it's 'real' in ways designed to drive engagement. That short viral clip of animals doing a funny thing? Nope, generated. We can't trust video content, whether fluff or serious, to be authentic.
- Comment on We hate AI because it's everything we hate 6 days ago:
For 3, there are two things:
- It is common for less good but much cheaper tech to displace humans doing a job if it's “good enough”. Dishwashing machines that sometimes leave debris on dishes are an example.
- The technically competent have long been led by people who are not technically competent, and have long been outcompeted by bullshit artists. LLM output is remarkably similar to bullshit artistry. One saving grace of the human bullshit artists is that they at least usually understand they secretly depend on actually competent people, so while they will outcompete them, they will at least try to keep the competent around; the LLM has no such concept.
- Comment on Why LLMs can't really build software 1 week ago:
They are still bullish on LLMs, just to augment human developers rather than displace them.
This perspective is quite consistent with the need for a product that manages prompting/context for a human user and helps the human review and integrate the LLM-supplied content in a reasonable way.
If LLMs were as useful as some of the fanatics say, you'd just use a generic prompt and it would poop out the finished project. That was, by the way, the perspective of an executive I talked to not long ago: he was going to be able to let go of all his “coders” and feed his “insight” directly into a prompt that would do it all for him instead. He is also easily influenced, so articles like this can reshape him into a more tenable position, after which he'll pretend he never thought a generic prompt would be good enough.
- Comment on Why LLMs can't really build software 1 week ago:
Subjectively speaking, I don't see it doing that good a job of being current or prioritizing current over older.
While RAG is the way to give an LLM a shot at staying current, I just didn't see it doing that good a job with library documentation. Maybe it can do all right with tweaks like additional properties or arguments, but more structural changes to libraries I just don't see being handled.
- Comment on Why LLMs can't really build software 1 week ago:
I have been using it a bit, still can’t decide if it is useful or not though… It can occasionally suggest a blatantly obvious couple of lines of code here and there, but along the way I get inundated with annoying suggestions that are useless and I haven’t gotten used to ignoring them.
I mostly work in a niche area the LLMs seem broadly clueless about, and prompt-driven code is almost always useless except when dealing with a super boilerplate usage of a common library.
I do know some people that deal with amazingly mundane and common functions and they are amazed that it can pretty much do their jobs, but they never really impressed me before anyway and I wondered how they had a job…
- Comment on ChatGPT 5 power consumption could be as much as eight times higher than GPT 4 — research institute estimates medium-sized GPT-5 response can consume up to 40 watt-hours of electricity 1 week ago:
Well, over the course of an hour or two, but it's correct that a dryer run, even with a heat pump, is significantly more than 40 Wh.
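For rough numbers (assuming a typical heat pump dryer drawing somewhere around 800 W over a 1.5 hour cycle, which is just a ballpark guess): 800 W × 1.5 h = 1200 Wh per load, so a single load is on the order of 30 of those 40 Wh responses.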
- Comment on "I support it only if it's open source" should be a more common viewpoint 1 week ago:
No, they don’t say they will sue (they flat out can’t), but they say they will cut off your access to any updates.
Now one could (and I would) argue that sounds like a restriction on exercising your open source rights. However, the counterargument seems to be that those protections apply only to software acquired to date, and if you deny access to future binaries, you can deny access to those sources.
In any event, all this subtlety around the licensing aside, it's just a bigger hassle to use Red Hat versus pretty much any other distribution, precisely because they kind of want IBM/Oracle-style entitlement management, where the user has to do all the management work to look after their supplier's business needs.
- Comment on Incident 1 week ago:
I suppose it's possible that humans are entering it; it's also possible the timestamps are just being rounded by the system. Guess it's hard to say, though I still say a daycare that includes infants can reasonably be expected to log this sort of activity in case something goes wrong that would only show up as a loss of appetite or a lack of bowel movements, or to explain an otherwise unrecognized injury incurred during an assassination.
- Comment on Incident 1 week ago:
Looks like a daycare that’s taking care of toddlers and infants. Logging these events makes a bit more sense as you have to be at least roughly aware of this stuff to keep an eye out for potential health issues. The kid isn’t able to convey things directly so you have to look for signs. If diapers aren’t being soiled, then you might need a medical exam, for example.
The precision of the timestamps might seem a bit needlessly specific, but if you are noting it electronically, might as well let the system time-stamp it.
- Comment on ChatGPT Is Still a Bullshit Machine 1 week ago:
I saw one article actually going all in on how incredible GPT-5 was.
The thing is, the biggest piece that really got the author excited was the “startup idea”, where it proceeded to generate a mountain of business-speak that says nothing. He proceeded to proclaim that a whole team of MBAs would take hours to produce something so magnificent. That pretty much made him lose it, and I guess that is exactly the sort of content idiot executives slurp up.
- Comment on GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it. 1 week ago:
Keyboard substituted the wrong word, fixed.
- Comment on ChatGPT Is Still a Bullshit Machine 1 week ago:
Yeah, the fact that you can “gaslight” a chat is just as much a symptom of the problem as the usual mistakes. It shows that it doesn't deal in facts, but in structurally sound content, which is correlated with facts, especially when context/RAG stuffs the prompt using more traditional approaches that actually tend to cram in more factual material.
To all the people white-knighting for the LLM, for the thousandth time: we know that it is useful, but its usefulness is only tenuously connected to the marketing reality. Making a mistake counting letters is less important than the fact that it “acts” like it can count when it can't.
- Comment on GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it. 1 week ago:
Oh, the CS job market may just be more persistently toast. Yes, there have been layoffs attributed to AI, but I think a lot of those businesses were kind of itching to do those layoffs anyway. There was way too much overhiring in the sector in general, plus when the AI bubble pops it'll drag the rest of the tech sector down with it.
- Comment on GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it. 2 weeks ago:
AGI might be just around the corner, or it might be indefinitely far off, but either way I don’t think “just more LLM” is going to get there, and that seems to be all the AI industry is really equipped to handle at the moment.
Ironically, getting to AGI might take a bubble pop to stop the current LLM architectures from just sucking up all the resources to let other approaches breathe a little.
More practically, I'd have expected to see more engagement with robotics, but it seems all the money is being spent on purely online AI approaches.
- Comment on GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it. 2 weeks ago:
Seemed a likely outcome. On the way to being late, there were stories about how they had basically spent ungodly amounts of money on an attempt and then scrapped it because it wasn't actually any better, and that this happened multiple times.
So if they were truly stuck, what to do? They could admit they were stuck, and watch the economic collapse as investors realize they were mistaken on how far along the technology curve things were, or they could market the hell out of GPT-5 and pretend it’s amazing and hope enough suckers and latecomers to LLM buy into that narrative that it carries through. Like Sam Altman acting ‘scared’ of what GPT-5 is going to be, “what have we done?” in a very melodramatic way like he’s Oppenheimer or something, likening it to the Death Star (all in all, a very ‘wtf’ situation, if it were really as dangerous as you say, you seem awfully eager to get it going).
So we have an incremental iteration with some good, some bad, and perhaps an overall improvement, but in the context of the ungodly investment in the LLM sector, it's way, way less than one would reasonably expect.
- Comment on Have you encountered this? 2 weeks ago:
Which is insane; it's a percentage, so compensation for inflation is baked in.
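Quick arithmetic with made-up prices to illustrate: a 15% tip on a $20 plate is $3.00; if inflation pushes that same plate to $24, the same 15% is now $3.60, a 20% bump in the tip that happened automatically without anyone touching the percentage.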
- Comment on Have you encountered this? 2 weeks ago:
I don’t tip on tax.
But on the flip side if I receive a discount of some sort, I tip on the pre-discount amount.