FaceDeer
@FaceDeer@fedia.io
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
- Comment on [deleted] 2 months ago:
Our "intelligence" agencies already kill innocent people based entirely on metadata — because they simply live or work around areas that known terrorists occupy — now imagine if an AI was calling the shots.
So by your own scenario, intelligence agencies are already getting stuff wrong and making bad decisions using existing methodologies.
Why do you assume that new methodologies that involve LLMs will be worse at that? Why could they not be better? Presumably they're going to be evaluating their results when deciding whether to make extensive use of them.
"Mathematical magic tricks" can turn out to be extremely useful. That phrase can be used to describe all manner of existing techniques that are undeniably foundational to civilization.
- Comment on [deleted] 2 months ago:
Except it is capable of meaningfully doing so, just not in 100% of every conceivable situation. And those rare flubs are the ones that get spread around and laughed at, such as this example.
There's a nice phrase I commonly use, "don't let the perfect be the enemy of the good." These AIs are good enough at this point that I find them to be very useful. Not perfect, of course, but they don't have to be as long as you're prepared for those occasions, like this one, where they give a wrong result. As with any tool, you have some responsibility to know how to use it and what its capabilities are.
- Comment on [deleted] 2 months ago:
I expect if you follow the references you'd find one of them to be one of those "if Earth was a grain of sand" analogies.
People like laughing at AI but usually these silly-sounding answers accurately reflect the information the search returned.
- Comment on ISS astronauts on eight-day mission may be stuck until 2025, Nasa says 2 months ago:
"Also, we're going to have to charge you for room and board."
- Comment on JPEG is Dying - And that's a bad thing | 2kliksphilip 2 months ago:
My understanding is that webp isn't actually all that bad from a technical perspective; it was just annoying because it started getting used widely on the web before all the various tools caught up and implemented support for it.
- Comment on DARPA: Translating All C to Rust (TRACTOR) 3 months ago:
What could go wrong with using human programmers to convert it?
If you're going to insist on perfection for something like this then you're probably never going to get anything done. Convert the program and then test and debug it just like you'd do with any newly written code. The idea is to make it easier to do that, not to make it so you don't have to do it at all.
- Comment on DARPA: Translating All C to Rust (TRACTOR) 3 months ago:
I would expect that's part of the point: if a C program can't be converted to a language that doesn't allow memory violations, that probably indicates there are execution pathways that result in memory violations.
- Comment on New memory tech unveiled that reduces AI processing energy requirements by 1,000 times or more 3 months ago:
It probably doesn't matter from a popular perception standpoint. The talking point that AI burns massive amounts of coal for each deepfake generated is now deeply ingrained; it'll be brought up regularly for years after it's no longer true.
- Comment on if the total fertility rate drops and stays below global replacement rate, will humans disappear? 3 months ago:
Not to mention that technology is continuing to advance in new and unexpected ways.
We're getting close to artificial womb technology, for example. There are already artificial wombs being experimented with as a way to save extremely premature babies that wouldn't survive in a conventional incubator.
Commodity humanoid robots are also in development, and AI has taken surprisingly rapid leaps in development over the past two years.
I could see a possibility where in a couple of decades a human baby could be born from an artificial womb and raised to adulthood entirely by machines, if we really really needed to for some reason. Embryo space colonization is the usual example given, but it could also potentially work as a way to counter population decline due to people simply not wanting to do their own birthing and child-rearing.
- Comment on 77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds 3 months ago:
If someone wants to pay me to upvote them I'm open to negotiation.
- Comment on 77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds 3 months ago:
A lot of people are keen to hear that AI is bad, though, so the clicks go through on articles like this anyway.
- Comment on 77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds 3 months ago:
Aha, so this must all be Elon's fault! And Microsoft!
There are lots of whipping boys these days that one can leap to criticize and get free upvotes.
- Comment on AI models collapse when trained on recursively generated data 3 months ago:
You realize that those "billions of dollars" have actually resulted in a solution to this? "Model collapse" has been known about for a long time and further research figured out how to avoid it. Modern LLMs actually turn out better when they're trained on well-crafted and well-curated synthetic data.
Honestly, everyone seems to assume that machine learning researchers are simpletons who've never used a photocopier before.
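To make "well-crafted and well-curated synthetic data" a bit more concrete, here's a minimal, purely illustrative sketch (all the names and thresholds are hypothetical, not from any real training pipeline) of the difference between naive recursive training and curation: instead of feeding a model's raw output straight back in, you keep a fixed share of human-written text and only admit synthetic samples that pass some quality filter.

```python
import random

# Hypothetical quality filter; a real pipeline might use a reward model,
# dedup checks, or human review instead of this stand-in heuristic.
def passes_quality_filter(sample: str) -> bool:
    return len(sample.split()) > 5 and "lorem ipsum" not in sample.lower()

def build_training_mix(human_data, synthetic_data, human_fraction=0.7, size=1000):
    """Curate a training set: keep a fixed share of human-written text
    and only admit synthetic samples that pass the filter."""
    curated_synthetic = [s for s in synthetic_data if passes_quality_filter(s)]
    n_human = int(size * human_fraction)
    n_synth = size - n_human
    mix = (random.sample(human_data, min(n_human, len(human_data)))
           + random.sample(curated_synthetic, min(n_synth, len(curated_synthetic))))
    random.shuffle(mix)
    return mix
```

The specific numbers don't matter; the point is that "trained on synthetic data" in practice means filtered and mixed, not a model eating its own unfiltered output.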
- Comment on How come as of today I can't access politics@lemmy.ml from lemmy.world? 3 months ago:
> Seems like lemmy.ml is really collapsing in on itself. Overall not good for the general health of the fediverse.
I'd argue that a biased overly-centralized instance like that collapsing in on itself is good for the general health of the Fediverse.
> there needs to be some kind of accountability/ redress if open & free communities are going to be a long term project.
The redress is having lots of servers to switch to, much like how on Reddit the redress was "start your own subreddit if the one you're on is moderated poorly." I can't imagine any system that would let you "take control" of some other instance without that being ridiculously abusable.
- Comment on AI trained on AI garbage spits out AI garbage. 3 months ago:
Workarounds for those sorts of limitations have been developed, though. Chain-of-thought prompting has been around for a while now, and I recall recently seeing an article about a model that had that built right into it; it had been trained to use <thought></thought> tags to enclose invisible chunks of its output that would be hidden from the end user but would be used by the AI to work its way through a problem. So if you asked it whether cats had feathers it might respond "<thought>Feathers only grow on birds and dinosaurs. Cats are mammals.</thought> No, cats don't have feathers." And you'd only see the latter bit. It was a pretty neat approach to improving LLM reasoning.
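As a rough sketch of how the hidden part gets handled on the client side (the tag names here are just the ones from the example above, not any particular model's actual API), the thought spans simply get stripped out before display:

```python
import re

# Illustrative only: assumes the model wraps its hidden reasoning in
# <thought>...</thought> tags, as in the cats-and-feathers example above.
THOUGHT_PATTERN = re.compile(r"<thought>.*?</thought>\s*", re.DOTALL)

def visible_text(raw_model_output: str) -> str:
    """Return only the part of the model's output meant for the end user."""
    return THOUGHT_PATTERN.sub("", raw_model_output).strip()

raw = ("<thought>Feathers only grow on birds and dinosaurs. "
       "Cats are mammals.</thought> No, cats don't have feathers.")
print(visible_text(raw))  # prints: No, cats don't have feathers.
```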
- Comment on AI trained on AI garbage spits out AI garbage. 3 months ago:
And they're overlooking that radionuclide contamination of steel actually isn't much of a problem any more, since the surge in background radionuclides caused by nuclear testing peaked in 1963 and has since gone down almost back to the original background level again.
I guess it's still a good analogy, though. People bring up Low Background Steel because they think radionuclide contamination is an unsolved problem (despite it having been basically solved), and they bring up "model decay" because they think it's an unsolved problem (despite it having been basically solved). It's like newspaper stories: everyone sees the big scary front page headline but nobody pays attention to the little block of text retracting it on page 8.
- Comment on Llama 3.1 is Meta's latest salvo in the battle for AI dominance 3 months ago:
Which is actually a pretty good thing.
- Comment on Llama 3.1 is Meta's latest salvo in the battle for AI dominance 3 months ago:
I wouldn't call it a "dud" on that basis. Lots of models come out with lagging support on the various inference engines, it's a fast-movibg field.
- Comment on Ireland’s datacentres overtake electricity use of all urban homes combined | The Guardian 3 months ago:
Why does the rule need to be specific to data centers? Why not just try to encourage renewable energy in general?
- Comment on Waymo Is Suing People Who Allegedly Smashed and Slashed Its Robotaxis 3 months ago:
The intersection of "Luddite hooligan" and "stops to think about technological capabilities before vandalizing stuff" is not large.
- Comment on Is there any actual standalone AI software? 3 months ago:
Makes it all the more amusing how OpenAI staff were fretting about how GPT-2 was "too dangerous to release" back in the day. Nowadays that class of LLM is a mere toy.
- Comment on Is there any actual standalone AI software? 3 months ago:
Though bear in mind that parameter count alone is not the only measure of a model's quality. There's been a lot of work done over the past year or two on getting better results from the same or smaller parameter counts; lots of discoveries have been made about how to train and run inference more effectively. The old GPT-3 from back at the dawn of all this was really big and was trained on a huge number of tokens, but nowadays the small downloadable models fine-tuned by hobbyists compete with it handily.
- Comment on CrowdStrike Isn't the Real Problem 3 months ago:
> particularly for companies entrusted with vast amounts of sensitive personal information.
I nodded along to most of your comment but this cast a discordant and jarring tone over it. Why particularly those companies? The CrowdStrike failure didn't actually result in sensitive information being deleted or revealed, it just caused computers to shut down entirely. Throwing that in there as an area of particular concern seems clickbaity.
- Comment on Windows 3.1 saves the day during CrowdStrike outage — Southwest Airlines scrapes by with archaic OS 3 months ago:
One of the background details I liked in Ghost in the Shell was how the high-end data analysts and programmers employed by the government did their work using cybernetic hands whose fingers could separate into dozens of smaller fingers to let them operate keyboards extremely quickly. They didn't use direct cybernetic links because that was a security vulnerability for their brains.
- Comment on US Government Launches New Attempt to Gather Data on Electricity Usage of Bitcoin Mining 3 months ago:
Note that the second-largest cryptocurrency, Ethereum, no longer uses proof-of-work to validate its chain. So any regulations or data on electricity usage will be basically irrelevant to it.
- Comment on Google and Microsoft consume more power than some countries 3 months ago:
It sounds scary, and that's all that's needed to get clicks.
- Comment on How Much Do Customers Trust Businesses That Use AI? – Free Ai all 3 months ago:
Just in case people think this is a literal excerpt from the article (that was my first impression), the actual survey results were:
33% are very likely to trust
32% are somewhat likely to trust
21% are neutral
14% express some level of distrust
- Comment on How Much Do Customers Trust Businesses That Use AI? – Free Ai all 3 months ago:
Or, you're in a bubble and are surprised to discover that most people aren't in it with you.
- Comment on Meet the New Class of Cadets in Star Trek: Starfleet Academy 4 months ago:
But we need to attract a hip young new audience! The sort of audience that doesn't care about Star Trek, and just wants teen drama and unprofessional nonsense!
- Comment on Why is it impossible to reverse-engineer closed source software? 4 months ago:
There's a lot of outright rejection of the possibilities of AI these days, I think because it's turning out to be so capable. People are getting frightened of it and so jump to denial as a coping mechanism.
Just a couple of weeks ago I recall reading about an LLM developed for translating source code into intermediate representations (a step along the way to full compilation), and when I went hunting for a reference to refresh my memory I found this article from March about exactly what's being discussed here: an LLM that translates assembly language into high-level source code. Looks like this one's just a proof of concept rather than something highly practical, but prove the concept it does.
I wonder if there are research teams out there sitting on more advanced models right now, fretting about how big a bombshell it'll be when this gets out.