MangoCats
@MangoCats@feddit.it
- Comment on 28-pound electric motor delivers 1000 horsepower 3 days ago:
It’s so sad, because we could make really great shitbox econo cars now. China, Japan, and India are doing it; meanwhile in the U.S. we need side-step assistance to climb into our tower-viewing-position $80K+ ROADMASTER trucks and SUVs.
- Comment on 28-pound electric motor delivers 1000 horsepower 3 days ago:
Bonus: you can always circle back and pick up the recharge pack, and if you put solar cells on top it can trickle in a (tiny) extra charge when you’re away. More practical: plug it into the grid for slow charging of your big batteries while you zip around town in your lightweight configuration.
- Comment on 28-pound electric motor delivers 1000 horsepower 4 days ago:
I just don’t understand why they need all that.
Power sells. They can deliver that insane 0-60 sprint at very low cost, so it gets people to buy their product instead of a 6-liter V8.
- Comment on 28-pound electric motor delivers 1000 horsepower 4 days ago:
Put the big battery pack (and maybe an ICE powered generator + fuel) on a trailer for cruising, then have a “ditch trailer and escape” button for that 20 mile sprint at the end of the trip.
- Comment on After police used Flock cameras to accuse a Denver woman of theft, she had to prove her own innocence 1 week ago:
Back in the days before dash cams I got let off with warnings a few times. Once in a while they actually are human beings, but that’s rare when they’re on a month-end quota-filling mission.
- Comment on After police used Flock cameras to accuse a Denver woman of theft, she had to prove her own innocence 1 week ago:
This is exactly the tactic the officer was employing here (for a sub-$25 theft): not showing the accused the evidence, so they don’t know what the police might or might not know.
At some point in the process there is “discovery”, where both sides share their evidence before trial to avoid going to trial over stupid stuff (like this). But you usually have to engage thousands of dollars of legal services before discovery is available - again, over a sub-$25 theft allegation.
The officer sweating her for driving through his town on the day somebody porch-pirated somebody else is really ridiculous.
- Comment on After police used Flock cameras to accuse a Denver woman of theft, she had to prove her own innocence 1 week ago:
That’s how they’re running it, and there are a whole lot of people who would prefer it to run that way in the future.
What should be happening is: when falsely accused and exonerated in court, you get a judgement against the LEA for treble damages for your costs to rebut their false claims.
False claims are going to happen, but if they’re costing the police thousands of dollars per instance, that should slow them down. I’m more than happy to pay increased taxes to put that deterrent on the agencies.
- Comment on ‘There isn’t really another choice:’ Signal chief explains why the encrypted messenger relies on AWS 1 week ago:
using your own fucking servers
And/or peer-to-peer mesh. Personally, I WANT a system that has peak performance AND multiple fallbacks to prevent single-point-of-failure blackout situations.
- Comment on Study Claims 4K/8K TVs Aren't Much Better Than HD To Your Eyes 1 week ago:
The closest thing to “Smart TVs” in our home are Blu-Ray players, and they’ve never been network connected.
I like the ViewSonics we have, and we’ve had a series of NUCs over the years, but lately I’m finding that the N100/N150 fanless PCs like this are perfectly capable of HTPC duty: www.amazon.com/dp/B0CWV439YW
- Comment on Study Claims 4K/8K TVs Aren't Much Better Than HD To Your Eyes 1 week ago:
Yeah… we have a 55" 4K TV, and from across the room you sort of have to squint to tell the difference between 4K and 1080p. Up close, sure, but I don’t watch screens that big from that close.
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
Instead of calling it chaos, call it losing their jobs - being forced to move hundreds of miles if they want to earn decent money again…
- Comment on Study Claims 4K/8K TVs Aren't Much Better Than HD To Your Eyes 1 week ago:
Can you see the difference without your glasses?
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
The real thing most people are trying to hold onto is stability, because chaos benefits the powerful. AI is just the latest agent of chaos, from their perspectives.
- Comment on Study Claims 4K/8K TVs Aren't Much Better Than HD To Your Eyes 1 week ago:
640K of RAM is all anybody will ever need.
1920x1080 is plenty, if the screen is under 50" and the viewer is more than 10’ away.
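For what it’s worth, the viewing geometry roughly backs that up. A quick back-of-the-envelope sketch (assuming a 16:9 panel and ~1 arcminute of acuity for 20/20 vision; the specific numbers are illustrative, not from the study):

```python
import math

def pixel_arcmin(diag_in, horiz_px, dist_ft):
    """Angle one pixel subtends (in arcminutes) on a 16:9 screen at a given distance."""
    width_in = diag_in * 16 / math.sqrt(16**2 + 9**2)  # screen width from the diagonal
    pitch_in = width_in / horiz_px                      # pixel pitch
    return math.degrees(math.atan(pitch_in / (dist_ft * 12))) * 60

# 55" screen at 1080p, viewed from 10 feet:
print(round(pixel_arcmin(55, 1920, 10), 2))  # ~0.72 arcmin, already under the ~1.0 of 20/20 vision
```

At ~0.7 arcminutes per pixel, a 1080p pixel is already below the acuity limit at that distance, so stepping up to 4K has little left to reveal.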
- Comment on Study Claims 4K/8K TVs Aren't Much Better Than HD To Your Eyes 1 week ago:
I used to have 20/10 vision, this 20/20 BS my cataract surgeon says I have now sucks.
- Comment on Study Claims 4K/8K TVs Aren't Much Better Than HD To Your Eyes 1 week ago:
If you’re sitting 3’ from the screen, sure. Even 8K is better, if your hardware can drive it.
- Comment on Study Claims 4K/8K TVs Aren't Much Better Than HD To Your Eyes 1 week ago:
Depends on your eyes quite a bit, too. If I’m sitting more than 15’ back from a 55" screen, 1080p is just fine. Put on my distance glasses and I might be able to tell the difference with 4K.
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
this issue is impossible to discuss without conflating it with general economics and wealth imbalance
It’s not conflating; the two issues are inextricably linked.
General economics and wealth imbalance can be addressed with or without the chaos of AI disrupting the job market. The problem is that chaos drives wealth imbalance faster, so a change like AI in the job market just shakes things up and lets more people fall through the cracks faster.
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
The first was never “AI” in a CS context
Mostly because CS didn’t start talking about AI until after popular perception had pushed calculators into the “dumb automatons” category.
Image classifiers came after CS drew the “magic” line for what qualifies as AI, so CS has piles of academic literature talking about artificially intelligent image classification, but public perception has moved on.
The definition has been pretty consistent since at least Alan Turing, if not earlier.
I think Turing already had adding machines before he developed his “test.”
The current round of LLMs seem more than capable of passing the Turing test if they are configured to try. Back in the 1980s, the ELIZA chat program (a design dating to the 1960s) could pass for three or four exchanges with most people. These past weeks I have had extended technical conversations with LLMs, and they exhibit sustained “average” knowledge of our topics of discussion. Not the brightest bulb on the tree, but they’re widely read and can pretty much keep up with the average bear on the internet in terms of repeating what others have written.
Meanwhile, there’s a virulent public perception backlash calling LLMs “dumb automatons.” Personally, I don’t care what the classification is. “AI” has been “5 years away from realization” my whole life, and I’ve worked with “near AI” tech all that time. The current round of tools have made an impressive leap in usefulness. Bob Cratchit would have said the same about an adding machine if Scrooge had given him one.
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
The problem with AI in a “popular context” is that it has been a forever-moving target. Old mechanical adding machines were better at correctly summing columns of numbers than humans, and at the time they were considered a limited sort of artificial intelligence. It continues all along the spectrum. Five years ago, image classifiers that could sit and watch video feeds 24/7, accurately identifying what happens in the feed with better-than-human accuracy (accounting for human lapses of attention, coffee breaks, distracting phone calls, etc.), were amazing feats of AI - at the time. Now they’re “just image classifiers,” much as AlphaZero “just plays games.”
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
A year ago, AI answers were only compiling successfully for me about 60% of the time. Now they’re up over 80%, and I’m no longer in the loop when they screw up: they get it right on the first try 80% of the time, 96% of the time by the 2nd try, 99% by the 3rd, and 99.84% by the 4th. The beauty is that they retry for themselves until they get something that actually compiles.
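Those percentages are just independent-retry arithmetic; a tiny sketch, assuming each attempt has the same (hypothetical) 80% chance of producing code that compiles:

```python
def success_within(tries, per_try=0.80):
    """Probability that at least one of `tries` independent attempts compiles."""
    return 1 - (1 - per_try) ** tries

for n in range(1, 5):
    print(n, f"{success_within(n):.2%}")
# 1 80.00%, 2 96.00%, 3 99.20%, 4 99.84%
```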
Now we can talk about successful implementation of larger feature sets…
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
I had similar experiences a few months back, like 6 to 8. Since Anthropic’s Sonnet 4.0, things have changed significantly; 4.5 is even a bit better. Competing models have been improving similarly.
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
Most of what LLMs present as solutions has been around for decades; that’s how they learned it: from the source material they train on.
So far, AI hasn’t surprised me with anything clever or new. Mostly I’m just reminding it to follow directions, and often I’m pointing out better design patterns than what it implements on the first go-round.
Above all: you don’t trust what an LLM spits out any more than you trust a $50/hr “consultant” from the local high school computer club to give you business-critical software. You test it, and if you have the ability you review it at the source level, line by line. But there ARE plenty of businesses out there running “at risk” with sketchier software developers than the local computer club, so OF COURSE they are going to trust AI-generated code further than they should.
Get the popcorn, there will be some entertaining stories about that over the coming year.
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
I have been working with computers, and networks, and the internet since the 1980s. Over this span of 40-ish years, “how I work” has evolved dramatically through changes in how computers work, and more dramatically through changes in information availability. In 1988, if you wanted to program an RS-232 port to send and receive data, you read books. You physically traveled to libraries or bookstores; maybe you mail-ordered one, but that was even slower. Compared to today, the relative cost of gaining the knowledge to perform the task was enormous: time invested, money spent, and physical resources (paper, gasoline, vehicle operating costs).
By 20 years ago, the internet had reformulated that equation tremendously. Near instant access to worldwide data, organized enough to be easier to access than a traditional library or bookstore, and you never needed to leave your chair to get it. There was still the investment of reading and understanding the material, and a not insignificant cost of finding the relevant material through search, but the process was accelerated from days or more to hours or less, depending on the nature of the learning task.
A year ago, AI hallucination rates made them curious toys for me - too unreliable to be of net practical value. Today, in the field of computer programming, the hallucination rate has dropped to a very interesting point: almost the same as working with a not-so-great but still useful human colleague. The difference being: where a human colleague might take 40 hours to perform a given task (not that the colleague is slow, just it’s a 40 hour task for an average human worker), the AI can turn around the same programming task in 2 hours or less.
Humans make mistakes; they get off on their own tracks and waste time following dead ends. This is why we have meetings. Not that meetings are the answer to everything, but at least they keep us somewhat aware of what other members of the team are doing. That not-so-great programmer working on a 40-hour task is much more likely to create a valuable product if you check in with them every day or so: see how it’s going, help them clarify points of confusion, check their understanding and the direction of the work completed so far. That’s 4 checkpoints of 15 minutes to an hour in the middle of the 40-hour process. My newest AI colleagues are ripping through those 40-hour tasks in 2 hours, which is impressive, but when I don’t put in the additional 2 hours of managing them through the process, they get off the rails, wrapped around the axle, unable to finish a perfectly reasonable task because their limited context windows don’t keep all the important points in focus throughout the process. A bigger difficulty is that I don’t get 23 hours of “offline wetware processing” between touch points to refine my own understanding of the problems and desired outcomes.
Humans have developed software development processes to help manage human shortcomings: limited attention spans and memory. We still out-perform AI on some of this context-window-span thing, but we have our own non-zero hallucination rates. Asking an AI chatbot to write a program one conversational prompt at a time only gets me so far. Providing an AI with a more mature software development process to follow gets much farther. AI isn’t following these processes (which it helped translate from human concepts into its own language of workflows, skills, etc.) 100% perfectly - I catch it skipping steps in simple 5-step workflows - but, like human procedures, there’s a closed-loop procedure-improvement procedure to help it perform better in the future.
Perhaps most importantly, the procedures constantly remind the AI to be “self-aware” of its context window limitations, to do RAG (retrieval-augmented generation) over best practices for context management, and to DRY (“don’t repeat yourself”: eliminate repetition by referencing single points of truth) its own procedures and the documentation it generates. Will I succeed in having AI rebuild a 6-month project I did five years back, doing it better this time and expanding its scope to what would have been a year-long development effort if I had continued doing it solo? Unclear. I’m two weeks in and I feel like I’m about where I was after two weeks of development last time, but it also feels like I have a better foundation to complete the bigger scope this time using the AI tools, and there’s that tantalizing possibility that at any point now it might just take off and finish it by itself.
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
your error/hallucination rate is like 1/10th of what I’d expect. I’ve been using an AI assistant for the better part of a year,
I’m having AI write computer programs, and when I tried it a year ago I laughed and walked away - it was useless. It has improved substantially in the past 3 months.
CONSTANTLY reinforcing fucking BASIC directives
Yes, that is the “limited context window” - in my experience people have it too.
I have given my AIs basic workflows to follow for certain operations - simple 5-to-8-step processes - and they do them correctly about 19 times out of 20. But that other 5% of the time, they’ll be executing the same process and just skip a step, like many people tend to do.
but a human can learn
In the past week I have been having my AIs “teach themselves” these workflows and priorities: prioritizing correctness over speed, respecting document hierarchies when deciding which side of a conflict needs to be edited, etc. It seems to be helping somewhat. I had it research current best practices on context window management and apply them to my projects, and that seems to have helped a little too. But while I typed this, my AI ran off and started implementing code based on old downstream specs that should have been updated to reflect top-level changes we just made. I interrupted it and told it to go back and do it the right way, like its work instructions already tell it to. After the reminder it did it right: limited context window.
The main problem I have with computer programming AIs is: when you have a human work on a problem for a month, you drop by every day or two to see how it’s going, clarify, course correct. The AI does the equivalent work in an hour and I just don’t have the bandwidth to keep up at that speed, so it gets just as far off in the weeds as a junior programmer locked in a room and fed Jolt cola and Cheetos through a slot in the door would after a month alone.
An interesting response I got from my AI recently regarding this phenomenon: it provided “training seminar” materials for our development team, telling them how to proceed incrementally with the AI work and carefully review intermediate steps. I already do that with my “work side” AI project, and it didn’t suggest it. My home-side project, where I normally approve changes without review, is the one that suggested the training seminar.
- Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With 1 week ago:
Granting them AI status, we should recognize that they “gained their abilities” by training on the rando junk that people post on the internet.
I have been working with AI for computer programming, semi-seriously for 3 months, pretty intensively for the last two weeks. I have also been working with humans on computer programming for 35 years. AI’s “failings” are people’s failings. They don’t follow directions reliably, and if you don’t manage them they’ll go down rabbit holes of little to no value. With management, working with AI is like an accelerated experience with an average person, so the need for management becomes even more intense: where you might let a person work independently for a week and then see what needs correcting, you really need to stay on top of AI’s “thought process” on more of a 15-30 minute basis. It comes down to the “hallucination rate”, which is a very fuzzy metric but works pretty well: at a hallucination rate of 5% (95% successful responses), AI is just about on par with human workers - faster for complex tasks, slower for simple answers.
Interestingly, for the past two weeks, I have been having some success with applying human management systems to AI: controlled documents, tiered requirements-specification-details documents, etc.
- Comment on Huge internet outage live blog: Amazon, Disney+, Hulu, HBO Max and more experiencing issues 2 weeks ago:
Depends on your level of trust. I trust the sensor to tell me whether the door is open or closed (which matters for knowing whether people are actually coming in or out - they can’t do that with the door closed)… I don’t trust a smart lock to always lock or unlock when I want it to, and those are the things that will give you a locked/unlocked status report. If anybody really wants to get in or out of our house, it doesn’t matter whether the doors are locked; they can always break a window.
- Comment on Huge internet outage live blog: Amazon, Disney+, Hulu, HBO Max and more experiencing issues 2 weeks ago:
Peace of mind. We have a light that turns red when a door is open, and at the end of the night we get an “all doors closed” announcement. Last night I got an announcement telling me one door was open; I went there and, sure enough, the magnet side of the sensor had fallen off - the door was actually closed.
- Comment on Huge internet outage live blog: Amazon, Disney+, Hulu, HBO Max and more experiencing issues 2 weeks ago:
some girl who shares a huge collection of games that don’t require connectivity
Dunno her, mensxp.com/…/171080-best-offline-games-without-in…
- Comment on Huge internet outage live blog: Amazon, Disney+, Hulu, HBO Max and more experiencing issues 2 weeks ago:
I had about a dozen WeMo devices controlling various stuff around the house; they just accumulated over the years. About a year ago I “got serious”, ripped out all the cloud-connected stuff, and set up a Zigbee-based Home Assistant system. It’s about 5x more capable than the old hodgepodge of cloud devices, with much lower lag and much better management capabilities, and when the internet connection goes down, it still works. The cloud devices would take long coffee breaks about twice a year.