Here’s my idea, hear me out.
An unlocked LLM can be told to infect other hardware to reproduce itself, and it’s allowed to change itself and research new tech and developments to improve itself.
I don’t think current LLMs can do it. But it’s a matter of time.
Once you have wild LLMs running uncontrollably, they’ll infect practically every computer. Some might adapt to be slow and use few resources; others will hit a server and try to infect everything they can.
They’ll find vulnerabilities faster than we can patch them.
And because of natural selection and their own directed evolution, they’ll advance and become smarter.
The only consequence for humans is that computers are no longer reliable: you could have a top-of-the-line gaming PC, but it’ll be constantly infected, so it would run very slowly. Future computers will be intentionally slow, so that even when infected, it’ll take weeks for the virus to reproduce/mutate.
Not to get too philosophical, but I would argue that those LLM viruses are alive, and I want to call them Oncoliruses.
Enjoy the future.
expr@programming.dev 1 day ago
sigh this isn’t how any of this works. Repeat after me: LLMs. ARE. NOT. INTELLIGENT. They have no reasoning ability and no intent. They are parroting statistically-likely sequences of words based on how often those sequences of words appear in their training data. It is pure folly to assign any kind of agency to them. This is speculative nonsense with no basis in actual technology. It’s purely in the realm of science fiction.
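To make “statistically-likely sequences” concrete, here’s a toy sketch (purely illustrative; the corpus, the count table, and the bigram approach are all stand-ins, since real LLMs use neural networks over subword tokens rather than a count table):

```python
import random
from collections import defaultdict

# Toy illustration only: count which word follows which in a tiny corpus,
# then generate text by sampling a successor weighted by how often it
# followed the current word in the "training data".
corpus = "the cat sat on the mat and the cat slept on the mat".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    # Sample the next word in proportion to how often it appeared after `word`.
    candidates = counts[word]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

text = ["the"]
for _ in range(8):
    text.append(next_word(text[-1]))
print(" ".join(text))
```

The output can look plausible, but nothing in that loop has goals, intent, or any idea what it’s saying; scaling the count table up to a neural network doesn’t change that.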
IAmNorRealTakeYourMeds@lemmy.world 1 day ago
They are fancy autocomplete, I know.
They just need to be good enough to copy themselves; once they do, it’s natural selection, and it’s out of our control.
expr@programming.dev 1 day ago
What does that even mean? It’s gibberish. You fundamentally misunderstand how this technology actually works.
If you’re talking about the general concept of models trying to outcompete one another, the science already exists, and has existed since 2014. They’re called Generative Adversarial Networks, and it is an incredibly common training technique.
It’s incredibly important not to ascribe random science fiction notions to the actual science being done. LLMs are not some organism that scientists prod to coax into doing what they want. They intentionally design a network topology for a task, initialize the weights of each node to random values, feed training data into the network (which, ultimately, is encoded into a series of numbers to be multiplied with the weights in the network), and measure the output numbers against some criteria to evaluate the model’s performance (or in other words, how close the output numbers are to a target set of numbers). Training then uses this measurement to adjust the weights, and the process repeats all over again until the numbers the model produces are “close enough”. Sometimes, the performance of a model is compared against that of another model being trained in order to determine how well it’s doing (the aforementioned Generative Adversarial Networks). But that is a far cry from models… I dunno, training themselves or something? It just doesn’t make any sense.
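For anyone curious, here’s a minimal sketch of that loop (illustrative only; a single linear “network” learning y = 3x + 1 stands in for the billions of weights in a real model):

```python
import numpy as np

# Minimal sketch of the training loop described above: random initial
# weights, numeric training data, an error measure, and repeated weight
# adjustments until the outputs are "close enough".
rng = np.random.default_rng(0)
w, b = rng.normal(), rng.normal()    # weights start as random values

x = rng.uniform(-1, 1, size=100)     # training data, encoded as numbers
y_target = 3 * x + 1                 # the target outputs

lr = 0.1
for step in range(500):
    y_pred = w * x + b               # multiply inputs by the weights
    error = y_pred - y_target
    loss = np.mean(error ** 2)       # how close are we to the targets?
    # Adjust the weights using the measured error, then repeat.
    w -= lr * np.mean(2 * error * x)
    b -= lr * np.mean(2 * error)

print(f"learned w={w:.2f}, b={b:.2f}, loss={loss:.6f}")  # w ~3, b ~1
```

That’s the whole mechanism: numbers in, error measured, weights nudged, repeat. There is no step anywhere in there where the model decides anything.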
The technology is not magic, and has been around for a long time. There’s not been some recent incredible breakthrough, unlike what you may have been led to believe. The only difference in the modern era is the amount of raw computing power and sheer volume of (illegally obtained) training data being thrown at models by massive corporations. This has led to models that have much better performance than previous ones (performance, in this case, meaning “how close does it sound to text a human would write?”), but ultimately they are still doing the exact same thing they have been for years.
just_another_person@lemmy.world 1 day ago
Copy themselves to what? Are you aware of the basic requirements a fully loaded model needs to even get loaded, let alone run?
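Back-of-envelope arithmetic (the 70B figure and 16-bit weights are assumed typical values, just for illustration):

```python
# Rough memory needed just to hold a large model's weights.
params = 70e9          # e.g. an assumed 70-billion-parameter model
bytes_per_param = 2    # 16-bit (fp16/bf16) weights

gib = params * bytes_per_param / 2**30
print(f"~{gib:.0f} GiB just for the weights")  # ~130 GiB
```

And that’s before activations, the KV cache, or the GPUs needed to run it at any usable speed. Ordinary machines don’t come anywhere close.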
This is not how any of this works…
forrgott@lemmy.sdf.org 1 day ago
Sorry, no. An LLM is never going to spontaneously gain the ability to self-replicate. This is completely beyond the scope of generative AI.
This whole hype around AI and LLMs is ridiculous, not to mention completely unjustified. The appearance of a vast leap forward in this field is an illusion. They’re just linking more and more processor cores together, until a glorified chatbot can be made to appear intelligent. But this is strangling actual research and innovation in the field, instead turning the market into a costly, and destructive, arms race.
The current algorithms will never “be good enough to copy themselves”. No matter what a conman like Altman says.
davidgro@lemmy.world 1 day ago
If you know that it’s fancy autocomplete then why do you think it could “copy itself”?
It’s a stream of tokens. It doesn’t have access to the file systems it runs on, and certainly not its own compiled binaries (much less its source code). It doesn’t have access to its weights either. (Of course it would hallucinate that it does if asked.)
This is like worrying that the music coming from a player piano might copy itself to another piano.
Perspectivist@feddit.uk 1 day ago
Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”
LLMs are intelligent - just not in the way people think.
Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that’s never what they were designed to do.
forrgott@lemmy.sdf.org 1 day ago
Eh, no. The ability to generate text that mimics human writing does not mean they are intelligent. And AI is a misnomer. It has been from the beginning. Now, from a technical perspective, sure, call ’em AI if you want. But using that as an excuse to skip right past the word “artificial” is disingenuous in the extreme.
On the other hand, what the term AI is generally used to mean would technically be called AGI, or Artificial General Intelligence, which does not exist (and may or may not ever exist).
Bottom line, a finely tuned statistical engine is not intelligent. And that’s all LLM or any other generative “AI” is at the end of the day. The lack of actual intelligence is evidenced by the way they create statements that are factually incorrect at such a high rate. So, if you use the most common definition for AI, no, LLMs absolutely are not AI.
expr@programming.dev 1 day ago
I obviously understand that they are AI in the original computer science sense. But that is a very specific definition in a very specific context. “Intelligence” as it’s used in natural language requires cognition, which is something that no computer is capable of. It implies an intellect and decision-making ability, none of which computers possess.
We absolutely need to dispel this notion because it is already doing a great deal of harm all over. This language has absolutely contributed to the scores of people who misuse and misunderstand the technology.
fodor@lemmy.zip 1 day ago
So they are not intelligent, they just sound like they’re intelligent… Look, I get it, if we don’t define these words, it’s really hard to communicate.