jj4211
@jj4211@lemmy.world
- Comment on Using Clouds for too long might have made you incompetent 10 hours ago:
Yeah, they are frequently just parroting things like CVE notices as highlighted by a fairly stupid scanning tool.
The security ecosystem has long been diluted because no one wants to doubt a “security” person and be wrong, and over time that has created a pretty soft environment for people to gain credibility as security people.
- Comment on Spiritual Safety Tip! 10 hours ago:
If someone is claiming God is on their side, then absolutely they should not be trusted.
A good example was Huckabee’s message to Trump where he says he shouldn’t listen to humble old Huckabee, but he should listen to God, who, coincidentally, is saying exactly the same thing as Huckabee.
If you have your faith but make no assertions about its validity over other opinions, nor that it confers divine authority to the words or deeds of any person, cool, I respect that faith. I’m inclined to have some faith myself, but I’m not about to claim any of it is more than my personal wild guesses and hope.
However organized religion is generally exploitable and bad people take advantage…
- Comment on YSK that apart from not having a car, the single greatest thing you can do for the climate is simply eating less red meat 5 days ago:
That’s a pair of philosophies that creeps me out both ways: the anti-natalists and the pro-natalists.
Deciding for yourself is one thing, imposing your choice on others is maddening.
I don’t know if the comment quite rises to the level of anti-natalist though. Maybe I’m grading on a curve after reading some more hardcore anti-natalists, but that comment felt tame, like they wouldn’t necessarily object to a couple having one child or even two, being somewhat below the replacement level…
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 5 days ago:
The issue here is that we’ve gone well into sharply exponential expenditure of resources for diminishing gains, with a lot of good theory predicting that the breakthroughs we have seen are about tapped out, and no good way to anticipate when a further breakthrough might happen; it could be real soon or another few decades off.
I anticipate a pullback of resources invested and a settling for some middle ground where it is absolutely useful/good enough to have the current state of the art: mostly wrong, but very quick when it’s right, with relatively acceptable consequences for the mistakes. Perhaps society will get used to the sorts of things it fails at and reduce how much we try to make the LLMs play in that 70%-wrong sort of use case.
I see LLMs as replacing first-line support, maybe escalating to a human when actual stakes arise for a call (issuing a warranty replacement, a usage scenario that actually has serious consequences, a customer demanding human escalation after recognizing they are falling through the AI cracks without the AI figuring out to escalate). I expect to rarely ever see “stock photography” used again. I expect animation to employ AI at least for backgrounds like “generic forest that no one is going to actively look at, but it must be plausibly forest”. I expect it to augment software developers, but not to enable a generic manager to code up whatever he might imagine. The commonality in all of these is that they live in the mind-numbing sorts of things current LLMs can get right, and/or have a high tolerance for mistakes with ample opportunity for humans to intervene before the mistakes inflict much cost.
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 5 days ago:
I’ve found that as an ambient code completion facility it’s… interesting, but I don’t know if it’s useful or not…
So on average, it’s totally wrong about 80% of the time, 19% of the time the first line or two is useful (either correct or close enough to fix), and 1% of the time it seems to actually fill in a substantial portion in a roughly acceptable way.
It’s exceedingly frustrating and annoying, but I’m not sure I can call it a net loss in time.
So reviewing the proposal for relevance, cutting it off, and editing it adds time to my workflow. Let’s say that on average, for a given suggestion, I spend 5% more time determining whether to trash it, use it, or amend it versus not having a suggestion to evaluate in the first place. If the 20% of useful suggestions make those scenarios 500% faster, then I come out ahead overall (rough arithmetic sketched below), though I’m annoyed 80% of the time. My guess as to whether a suggestion is even worth looking at improves with context: if I’m filling in a pretty boilerplate thing (e.g. taking some variables and starting to write out argument parsing), it has a high chance of a substantial match. If I’m doing something even vaguely esoteric, I just ignore the suggestions popping up.
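As a back-of-envelope sanity check on that claim; the numbers are my rough estimates from above, and I’m assuming “500% faster” means the task takes a fifth of the time:

```python
# Does a 5% evaluation tax on every suggestion pay for itself
# when only 20% of suggestions are useful?
baseline = 1.0    # time to write a snippet unaided
overhead = 0.05   # extra time spent judging each suggestion
p_useful = 0.20   # fraction of suggestions worth keeping
speedup = 5.0     # assumed meaning of "500% faster" for the useful ones

expected = (1 - p_useful) * (baseline + overhead) \
    + p_useful * (baseline / speedup + overhead)

print(f"{expected:.2f}x baseline")  # 0.89x: ~11% net time saved,
                                    # despite being annoyed 80% of the time
```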
However, that 20% is still a problem, since I’m maybe too lazy and complacent: spending 100 milliseconds glancing at a word that looks right in review will sometimes fail me compared to spending 2-3 seconds having to type that same word out by hand.
That 20% success rate works for code completion, where I can fix up the keepers and dispose of the rest, but prompt-driven tasks seem to be so much worse for me that it is hard to imagine them being worth the trouble they bring.
- Comment on AI agents wrong ~70% of time: Carnegie Mellon study 5 days ago:
We promise that if you spend untold billions more, we can be so much better than 70% wrong, like only being 69.9% wrong.
- Comment on What is the funniest insult / joke you've come up with on the spot? 6 days ago:
My nephew was trash-talking me about Mario Kart, saying he’d smoke me because he had been playing it for so long.
My reply: “I was playing this before you were born.”
- Comment on What is the funniest insult / joke you've come up with on the spot? 6 days ago:
Not me, but way back in high school I saw a comeback I’ll never forget. I’ll call them John and Bob.
John was teasing Bob in a mock flirting way. Bob was uncomfortable and told John to stop it.
John says “what’s the matter, aren’t you secure in your sexuality?”
Bob instantly replies “absolutely, but I’m not secure in yours”
- Comment on Time travel doesn't work unless you also have teleportation. If you travel to the past/future, Earth will be in a different position in its orbit, and you'll die in space. 6 days ago:
But what frame of reference?
- Comment on Time travel doesn't work unless you also have teleportation. If you travel to the past/future, Earth will be in a different position in its orbit, and you'll die in space. 6 days ago:
Or, in that case, stealing someone else’s spaceship time machine
- Comment on Let’s Encrypt Begins Supporting IP Address Certificates 6 days ago:
They will require the requester to prove they control the standard HTTP(S) ports, which isn’t possible with any NAT.
It won’t work for such users, but it also wouldn’t enable any sort of false claim over a shared IP.
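For the curious, HTTP-01 validation is roughly the CA fetching a token from port 80 of the IP being certified. A minimal sketch of what answering it looks like (the token and key authorization below are placeholders, not real ACME values):

```python
# Rough sketch of answering an HTTP-01 challenge. The CA fetches
# http://<your-ip>/.well-known/acme-challenge/<token> on port 80;
# TOKEN and KEY_AUTH are placeholders, not real ACME values.
from http.server import BaseHTTPRequestHandler, HTTPServer

TOKEN = "example-token"        # placeholder: the real token comes from the CA
KEY_AUTH = "example-key-auth"  # placeholder: token + account key thumbprint

class ChallengeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == f"/.well-known/acme-challenge/{TOKEN}":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(KEY_AUTH.encode())
        else:
            self.send_error(404)

# If NAT means you can't answer on port 80 of that IP, validation fails,
# and no one else sharing the IP can answer with *your* key authorization.
HTTPServer(("", 80), ChallengeHandler).serve_forever()
```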
- Comment on Let’s Encrypt Begins Supporting IP Address Certificates 6 days ago:
If you can get their servers to connect to that IP under your control, you’ve earned it
- Comment on They are so clueless they don't realize that this just pisses everyone off. Shove your banana 1 week ago:
I mean, it’s one banana. What could it cost? 10 dollars?
- Comment on Amazon CEO Andy Jassy says AI will probably mean fewer jobs after 27,000 people have already been cut from its workforce 1 week ago:
And/or later entry into the workforce and earlier retirement
- Comment on Microsoft axe another 9000 in continued AI push 1 week ago:
If they marketed on the actual capability, customer executives wouldn’t be as eager to open their wallets. Get them thinking they can reduce headcount and they’ll fall over themselves. Tell them their staff will remain about the same but some facets of the job will be easier, and they’re less likely to recognize the value.
- Comment on Microsoft Copilot falls Atari 2600 Video Chess 1 week ago:
The research I saw mentioning LLMs as being fairly good at chess had the caveat that the models were allowed up to 20 attempts per move, to cover for them just making up invalid moves that merely sounded like legit moves.
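For a sense of what that caveat means in practice, here’s a sketch of the kind of retry harness such a setup implies, using the python-chess library to check legality; ask_llm_for_move is a hypothetical stand-in for the actual model call:

```python
# Sketch of the retry harness that caveat implies: keep asking the model
# until it emits a legal move, giving up after 20 tries.
import chess

def get_legal_move(board: chess.Board, max_attempts: int = 20) -> chess.Move | None:
    for _ in range(max_attempts):
        candidate = ask_llm_for_move(board.fen())  # hypothetical LLM call
        try:
            # parse_san() raises a ValueError subclass for moves that merely
            # sound legit but are invalid or illegal in this position
            return board.parse_san(candidate)
        except ValueError:
            continue
    return None  # 20 plausible-sounding strings, zero legal moves
```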
- Comment on Microsoft Copilot falls Atari 2600 Video Chess 1 week ago:
I remember seeing that; early on it seemed fairly reasonable, then the models started materializing pieces out of nowhere and convincing each other that they had already lost.
- Comment on Microsoft Copilot falls Atari 2600 Video Chess 1 week ago:
Because the business leaders are famously diligent about putting aside the marketing push and reading into the nuance of the research instead.
- Comment on Microsoft Copilot falls Atari 2600 Video Chess 1 week ago:
To reinforce this: I just had a meeting with a software executive who has no coding experience but is nearly certain he’s going to lay off nearly all his employees, because the value is all in the requirements he manages, and he can feed those to a prompt just as well as any human can.
He writes tutorial-fodder introductory applications and assumes all the work is that way, so he is confident he will save the company a lot of money by laying off these obsolete computer guys and focusing on his “irreplaceable” insight. He’s convinced that all the negative feedback is just people trying to protect their jobs, or people stubbornly refusing to get on board with new technology.
- Comment on Facts and minds 1 week ago:
It at least holds true for a lot of people, and is even reinforced in some forms of leadership training. Some folks believe the worst thing is to be perceived as ever being wrong, and will push hard against that outcome no matter what.
If you weakly hold an opinion, it’s more malleable, but you are also unlikely to express that opinion strongly.
- Comment on Facts and minds 1 week ago:
If someone is proactively expressing an opinion or responding, they are frequently pretty attached to the position they take if it is vaguely important.
It’s not universal, but it’s probable that if you make a strong statement on the Internet, your view is pretty much set, and certainly some text from some anonymous guy on the Internet is supremely low on the list of things that are going to change your mind.
- Comment on Facts and minds 1 week ago:
If it’s one-to-one communication, it’s probably not going to be productive, but it’s worth a shot; just don’t waste too much time.
In a public forum, it’s more about giving the lurkers something to process: those who might not have gotten emotionally attached to one side or another, or who just need to see there’s a diversity of thought to avoid getting too sucked into one thing or another.
- Comment on We need to stop pretending AI is intelligent 1 week ago:
Yes, as common as that is, in the scheme of driving it is relatively anomalous.
By hours in the car, most of the time is spent on a freeway, driving between two lines either at cruising speed or in a traffic jam. The most mind-numbing driving for a human is pretty comfortably in the wheelhouse of these systems.
Once you are dealing with pedestrians, signs, intersections, etc., all of those, despite being ‘common’, are anomalous enough to be dramatically more tricky for these systems.
- Comment on We need to stop pretending AI is intelligent 1 week ago:
At least in my car, the lane-following (not lane-keeping) system is handy because the steering wheel naturally tends to go where it should, and I’m less often “fighting” the tendency to center. The lane-keeping system is, for me at least, largely a non-factor: if I signal, it lets me cross a lane, and if circumstances demand an evasive maneuver that crosses a line, its resistance isn’t enough to cause an issue. Mine has fared surprisingly well even where the lane markings are all kind of jacked up due to temporary changes for construction. If it’s off, my arms just have to assert more effort to end up in the same place I was going anyway. Generally no passenger notices when the system engages/disengages except for the chime when it switches over to unaided operation.
So at least my experience has been a positive one; it strikes the balance just right between intervention and human attention, including monitoring my gaze to make sure I am looking where I should. However, there are people who test “how long can I keep my hands off the steering wheel”, which is a more dangerous mode of thinking.
And yes, having cameras everywhere makes fine maneuvering so much nicer, even with the limited visualization possible in the synthesized ‘overhead’ view of your car.
- Comment on We need to stop pretending AI is intelligent 1 week ago:
To the extent it is people trying to fool people, it’s rich people looking to fool poorer people for the most part.
To the extent it’s actually useful, it’s to replace certain systems.
Think of the humble phone tree, designed so humans aren’t having to respond to, triage, and route calls. An AI system can significantly shorten that role: instead of navigating a tedious maze of options, a couple of sentences back and forth and you either get the piece of automated information that suffices or get routed to a human to take care of it. Same analogy for a lot of online interactions where you have to input way too much, and when the answer is automated you get a wall of text from which you’d like something to distill the relevant 3 or 4 sentences according to your query.
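To make that concrete, a hand-wavy sketch of the shortened phone tree, where classify_intent and route_to_human are hypothetical stand-ins for the model and telephony pieces:

```python
# Hand-wavy sketch: classify a sentence or two into an intent, answer the
# automatable ones, hand the rest to a person.
# classify_intent() and route_to_human() are hypothetical stand-ins.

CANNED_ANSWERS = {
    "store_hours": "We're open 9am to 6pm, Monday through Saturday.",
    "order_status": "Your order shipped yesterday.",
}

def handle_call(transcript: str) -> str:
    intent, confidence = classify_intent(transcript)  # hypothetical model call
    if intent in CANNED_ANSWERS and confidence > 0.9:
        return CANNED_ANSWERS[intent]   # the slice that never needed a human
    return route_to_human(transcript)   # real stakes or real doubt: escalate
```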
So there are useful interactions.
However, it’s also true that it’s dangerous, because the drive to “make the user approve of the interaction” can bring out the worst in people when they feel like something is always agreeing with them. Social media has been bad enough, but chatbots that by design want to please the end user, and look almost legitimate, really can inflame the worst in our minds.
- Comment on We need to stop pretending AI is intelligent 1 week ago:
The thing about self-driving is that it has been like 90-95% of the way there for a long time now. It made dramatic progress, then plateaued, as approaches have failed to close the gap, with exponentially more input thrown at it for less and less incremental subjective improvement.
But your point is accurate: humans have lapses and AI has lapses. The nature of those lapses is largely disjoint, which makes an opportunity for AI systems to augment a human driver and get the best of both worlds: a consistently vigilant computer monitoring the driving and tending the steering, acceleration, and braking toward the “right” neutral behavior, with the human looking for the more anomalous situations that the AI tends to get confounded by, and making the calls on navigating certain intersections that AI FSD still can’t figure out. At least for me, the worst part of driving is the long-haul monotony on the freeway where nothing happens, and AI excels at not caring how monotonous it is and just handling it, so I can pay a bit more attention to what other things on the freeway are doing that might cause me problems.
I don’t have a Tesla, but I have a competitor’s system and have found it useful, though not trustworthy. It’s enough to greatly reduce the drain of driving, but I have to keep looking around, and I have to assert control if there’s a traffic jam coming up (it might stop in time, but it certainly doesn’t slow down soon enough) or if I have to do a lane change in traffic (if traffic conditions are light, it can change lanes nicely, but without a whole lot of breathing room it won’t do it, which is nice when I can afford to be stupidly cautious).
- Comment on We need to stop pretending AI is intelligent 1 week ago:
I think the self-driving is likely to be safer in the most boring scenarios, the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self-driving is kind of dumb, but it’s at least consistently paying attention, and it literally has eyes in the back of its head.
However, there’s so much data about how it fails in stupidly obvious ways that it shouldn’t, so you still need human attention to cover the more anomalous scenarios that foul self-driving.
- Comment on We need to stop pretending AI is intelligent 1 week ago:
“Now there’s models that reason.” Well, no, that’s mostly a marketing term applied to expending more tokens on generating intermediate text. It’s basically writing a fanfic of what thinking through a problem would look like. If you look at the “reasoning” steps, you’ll see artifacts where the generated output just goes disjoint: structurally sound, but not logically connected to the bits around it.
- Comment on We need to stop pretending AI is intelligent 1 week ago:
The probabilities of our sentence structure are a consequence of our speech; we aren’t just trying to statistically match appropriate-sounding words.
With enough use of LLMs, you will see how they are obviously not doing anything like conceptualizing the tokens they’re working with, or “reasoning”, even when marketed as “reasoning”.
Sticking to textual content generation by LLMs, you’ll see that what is emitted is first and foremost structurally appropriate; beyond that, it’s mostly a “bonus” for it to be narratively consistent, and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was exactly opposite of the explanation. Both portions were structurally sound, reasonable language, but there was no logical connection between them in the emitted output.
- Comment on Young men are 'playing videogames all day' instead of getting jobs because they can mooch off of free healthcare, claims congressman 1 week ago:
With the caveat that we can accommodate everyone only so long as sufficient people put in their fair share of effort. In an ideal world that would mean very short working hours and/or nicely early retirement and late entry into the workforce.
Certainly the usual talking heads are spoiled rich guys who have never known labor and have not done their fair share, but it is a difficult thing to balance: making sure we take care of each other while keeping enough people engaged to successfully do so.