Comment on Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech.
expr@programming.dev 1 day ago
What does that even mean? It’s gibberish. You fundamentally misunderstand how this technology actually works.
If you’re talking about the general concept of models trying to outcompete one another, the science already exists, and has existed since 2014. They’re called Generative Adversarial Networks, and it is an incredibly common training technique.
It’s incredibly important not to ascribe random science fiction notions to the actual science being done. LLMs are not some organism that scientists prod to coax it into doing what they want. They intentionally design a network topology for a task, initialize the weights of each node to random values, feed training data into the network (which, ultimately, is encoded into a series of numbers to be multiplied with the weights in the network), and measure the output numbers against some criteria to evaluate the model’s performance (or in other words, how close the output numbers are to a target set of numbers). Training then uses this measurement to adjust the weights, and repeats the process all over again until the numbers the model produces are “close enough”. Sometimes, the performance of a model is compared against that of another model being trained in order to determine how well it’s doing (the aforementioned Generative Adversarial Networks). But that is a far cry from models… I dunno, training themselves or something? It just doesn’t make any sense.
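That whole loop fits in a few lines. A minimal sketch in PyTorch (the tiny network, random data, and hyperparameters are placeholder assumptions; real setups just scale this up):

```python
import torch
import torch.nn as nn

# Toy network: topology chosen by a human, weights initialized randomly.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.MSELoss()  # "how close are the output numbers to the targets?"
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(32, 4)    # training data, encoded as numbers
targets = torch.randn(32, 1)   # the numbers we want the model to produce

for step in range(100):               # repeat until "close enough"
    outputs = model(inputs)           # multiply the inputs with the weights
    loss = loss_fn(outputs, targets)  # measure the model's performance
    optimizer.zero_grad()
    loss.backward()                   # work out how to adjust each weight
    optimizer.step()                  # adjust the weights
```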
The technology is not magic, and has been around for a long time. There’s not been some recent incredible breakthrough, unlike what you may have been led to believe. The only difference in the modern era is the amount of raw computing power and sheer volume of (illegally obtained) training data being thrown at models by massive corporations. This has led to models that have much better performance than previous ones (performance, in this case, meaning “how close does it sound to text a human would write?”), but ultimately they are still doing the exact same thing they have been for years.
IAmNorRealTakeYourMeds@lemmy.world 1 day ago
They don’t need to outcompete one another. Just outcompete our security.
The issue is that once we have a model good enough to do that task, the rest is natural selection: it will evolve.
Basically, endless training against us.
The first model might be relatively shite, but it’ll improve quickly. Probably reaching a plateau, not a sci-fi singularity.
I compared it to cancer because they are practically the same thing. A cancer cell isn’t intelligent; it just spreads and evolves to avoid being killed, not because it has emotions or desires, but because of natural selection.
expr@programming.dev 1 day ago
Again, more gibberish.
It seems like all you want to do is dream of fantastical doomsday scenarios with no basis in reality, rather than actually engaging with the real-world technology and science and how it works. It is impossible to infer what might happen with a technology without first understanding the technology and its capabilities.
Do you know what training actually is? I don’t think you do. You seem to be under the impression that a model can somehow magically train itself. That is simply not how it works. Humans write programs to train models (Models, btw, are merely a set of numbers. They aren’t even code!).
When you actually use a model, here’s what’s happening:

- A human-written program encodes your input text into a series of numbers.
- Those numbers are multiplied with the model’s weights (matrix multiplication).
- The resulting output numbers are decoded back into text and presented to you.
So a “model” is nothing more than a matrix of numbers (again, no code whatsoever), and using a model is simply a matter of (a human-written program) doing matrix multiplication to compute some output to present the user.
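A toy sketch in NumPy (the sizes and values here are made up) to show there’s nothing else going on:

```python
import numpy as np

# A toy "model": nothing but a matrix of numbers. No code lives inside it.
weights = np.random.randn(3, 2)

def run_model(encoded_input):
    # The human-written program does the matrix multiplication;
    # the model itself is only the operand.
    return encoded_input @ weights

tokens = np.array([0.5, -1.2, 2.0])  # input text, already encoded as numbers
output = run_model(tokens)           # output numbers, later decoded into text
```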
To greatly simplify: if you have a mathematical function like `f(x) = 2x + 3`, you can supply said function with a number to get a new number, e.g., `f(1) = 2 * 1 + 3 = 5`. LLMs are the exact same concept. They are a mathematical function, and you apply said function to input to produce output. Training is the process of a human writing a program to compute how said mathematical function should be defined, or in other words, the exact coefficients (also known as weights) to assign to each and every variable in said function (and the number of variables can easily be in the millions).
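As a toy illustration of that coefficient search, here’s plain-Python gradient descent recovering the coefficients of `f(x) = 2x + 3` (the learning rate and step count are arbitrary picks):

```python
# "Training" for the toy function f(x) = a*x + b: a human-written
# program searches for the coefficients (weights) a and b.
a, b = 0.0, 0.0                              # weights start out arbitrary

data = [(x, 2 * x + 3) for x in range(10)]   # inputs paired with target outputs

for step in range(1000):
    for x, target in data:
        prediction = a * x + b    # apply the function to the input
        error = prediction - target  # measure how far off we are
        a -= 0.01 * error * x     # nudge each coefficient to reduce the error
        b -= 0.01 * error

print(a, b)  # converges toward a ≈ 2, b ≈ 3
```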
This is also, incidentally, why training is so resource intensive: repeatedly doing this multiplication for millions upon millions of variables is very expensive computationally and requires very specialized hardware to do efficiently. It happens to be the exact same kind of math used for computer graphics (matrix multiplication), which is why GPUs (or other even more specialized hardware) are so desired for training.
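To see why, here’s the naive algorithm (real libraries and GPUs do the same arithmetic, just massively parallelized):

```python
# Multiplying two n x n matrices the naive way takes n**3 multiply-adds.
# At the scale of models with millions of weights, this is why
# specialized parallel hardware matters so much.
def matmul(A, B):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```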
It should be pretty evident that every step of the process is completely controlled by humans. Computers always do precisely what they are told to do and nothing more, and that has been the case since their inception and will always continue to be the case. A model is a math function. It has no feelings, thoughts, reasoning ability, agency, or anything like that. Can `f(x) = x + 3` get a virus? Of course not, and the question makes absolutely no sense to ask. It’s exactly the same thing for LLMs.

forrgott@lemmy.sdf.org 1 day ago
[image]