ji59
@ji59@hilariouschaos.com
- Comment on I felt so betrayed when I found out Germany isn't called Germany in Germany 1 day ago:
As someone from Czech Republic, I am not surprised. There are sometimes huge differences between country names in czech and English. And the closer the country is, the bigger the difference.
For the German speaking countries eng - ger - cze: Germany - Deutschland - Německo Austria - Österreich - Rakousko Switzerland - Sweiz - Švýcarsko
Other examples (eng - cze): Czech - Česko Slovakia - Slovensko Slovenia - Slovinsko Greece - Řecko Georgia - Gruzie Spain - Španělsko Greenland - Grónsko Hungary - Maďarsko Croatia - Chorvatsko
- Comment on When I was a kid, computers expanded your mind and your freedoms, bringing power to the individual. With AI, now it does the thinking for you, takes your job, gives power only to a few billionaires. 4 days ago:
I have to disagree. The only reason computer expanded your mind is because you were curious about it. And that is still true even with AI. Just for example, people doesn’t have to learn or solve derivations or complex equations, Wolfram Alpha can do that for them. Also, learning grammar isn’t that important with spell-checkers. Or instead of learning foreign languages you can just use automatic translators. Just like computers or internet, AI makes it easier for people, who doesn’t want to learn. But it also makes learning easier. Instead of going through blog posts, you have the information summarized in one place (although maybe incorrect). And you can even ask AI questions to better understand or debate the topic, instantly and without being ridiculed by other people for stupid questions.
And to just annoy some people, I am programmer, but I like much more the theory then coding. So for example I refuse to remember the whole numpy library. But with AI, I do not have to, it just recommends me the right weird fuction that does the same as my own ugly code. Of course I check the code and understand every line so I can do it myself next time.
- Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds 3 weeks ago:
According to the study, they are taking some random documents from their datset, taking random part from it and appending to it a keyword followed by random tokens. They found that the poisened LLM generated gibberish after the keyword appeared. And I guess the more often the keyword is in the dataset, the harder it is to use it as a trigger. But they are saying that for example a web link could be used as a keyword.
- Comment on Expecting a LLM to become conscious, is like expecting a painting to become alive 5 weeks ago:
Okay, it is easy to see -> a lot of people point it out
- Comment on Expecting a LLM to become conscious, is like expecting a painting to become alive 5 weeks ago:
I guess because it is easy to see that living painting and conscious LLMs are incomparable. One is physically impossible, the other is more philosophical and speculative, maybe even undecidable.
- Comment on Expecting a LLM to become conscious, is like expecting a painting to become alive 5 weeks ago:
I would say that artificial neuron nets try to mimic real neurons, they were inspired by them. But there are a lot of differences between them. I studied artificial intelligence, so my experience is mainly with the artificial neurons. But from my limited knowledge, the real neural nets have no structure (like layers), have binary inputs and outputs (when activity on the inputs are large enough, the neuron emits a signal) and every day bunch of neurons die, which leads to restructurizing of the network. Also from what I remember, short time memory is “saved” as cycling neural activities and during sleep the information is stored into neurons proteins and become long time memory. However, modern artificial networks (modern means last 40 years) are usually organized into layers whose struktuře is fixed and have inputs and outputs as real numbers. It’s true that the context is needed for modern LLMs that use decoder-only architecture (which are most of them). But the context can be viewed as a memory itself in the process of generation since for each new token new neurons are added to the net. There are also techniques like Low Rank Adaptation (LoRA) that are used for quick and effective fine-tuning of neural networks. I think these techniques are used to train the specialized agents or to specialize a chatbot for a user. I even used this tevhnique to train my own LLM from an existing one that I wouldn’t be able to train otherwise due to GPU memory constraints.
TLDR: I think the difference between real and artificial neuron nets is too huge for memory to have the same meaning in both.
- Comment on Expecting a LLM to become conscious, is like expecting a painting to become alive 5 weeks ago:
As I said in another comment, doesn’t the ChatGPT app allow a live converation with a user? I do not use it, but I saw that it can continuously listen to the user and react live to it, even use a camera. There is a problem with the growing context, since this limited. But I saw in some places that the context can be replaced with LLM generated chat summary. So I do not think the continuity is a obstacle. Unless you want unlimited history with all details preserved.
- Comment on Expecting a LLM to become conscious, is like expecting a painting to become alive 5 weeks ago:
I saw several papers about LLM safety (for example Alignment faking in large language models) that show some “hidden” self preserving behaviour in LLMs. But as I know, no-one understands whether this behaviour is just trained and does mean nothing or it emerged from the model complexity.
Also, I do not use the ChatGPT app, but doesn’t it have a live chat feature where it continuously listens to user and reacts to it? It can even take pictures. So the continuity isn’t a huge problem. And LLMs are able to interact with tools, so creating a tool that moves a robot hand shouldn’t be that complicated.
- Comment on Expecting a LLM to become conscious, is like expecting a painting to become alive 5 weeks ago:
I meant alive in the context of the post. Everyone knows what painting becoming alive means.
- Comment on Expecting a LLM to become conscious, is like expecting a painting to become alive 5 weeks ago:
Okay, so by my understanding on what you’ve said, LLM could be considered conscious, since studies pointed to their resilience to changes and attempts to preserve themselves?
- Comment on Expecting a LLM to become conscious, is like expecting a painting to become alive 5 weeks ago:
Except … being alive is well defined. But consciousness is not. And we do not even know where it comes from.
- Comment on How come there is not a pope without grey hair? I mean a much younger pope like 30s 40s. Really can't be that hard. You got an ocean of cardinals and priests who pretty much tell say? 5 weeks ago:
Gray hair have better signal to heaven