UnpluggedFridge
@UnpluggedFridge@lemmy.world
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
We do not know how LLMs operate. Similar to our own minds, we understand some primitives, but we have no idea how certain phenomena emerge from those primitives. Your assertion would be like saying we understand consciousness because we know the structure of a neuron.
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
You seem pretty confident that LLMs cannot have an internal representation simply because you cannot imagine how that capability could emerge from their architecture. Yet we have the same fundamental problem with the human brain, and we have no problem asserting that humans are capable of internal representation. Meanwhile, LLMs adhere to grammar rules, present information with a logical flow, and express relationships between different concepts. Is this not evidence of, at the very least, an internal representation of grammar?
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
How do hallucinations preclude an internal representation? Couldn’t hallucinations arise from a consistent internal representation that is not fully aligned with reality?
I think you are misunderstanding the role of tokens in LLMs and conflating them with internal representation. Tokens are used to generate a state, similar to external stimuli. The internal representation, assuming there is one, is the manner in which the tokens are processed. You could say the same thing about human minds, that the representation is not located anywhere like a piece of data; it is the manner in which we process stimuli.
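To make that analogy concrete, here is a toy sketch (made-up weights and dimensions; nothing here is any real LLM's code) of what "tokens generate a state" means: the tokens act as transient stimuli, and whatever "representation" exists lives in the learned weights and in how they transform the state, not in any stored piece of data you could point to.

```python
# Toy illustration: tokens are stimuli that produce a transient state.
# The "representation" is the weights plus the way they process input,
# not a retrievable fact sitting in memory.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 100, 8

embedding = rng.normal(size=(VOCAB, DIM))  # learned token embeddings
weights = rng.normal(size=(DIM, DIM))      # learned transformation

def process(token_ids):
    """Fold each incoming token into a running hidden state."""
    state = np.zeros(DIM)
    for t in token_ids:
        state = np.tanh(state @ weights + embedding[t])
    return state  # exists only while stimuli are being processed

print(process([5, 42, 7]))
```

The hidden state vanishes when processing stops; only the weights persist, which is the sense in which the representation is "the manner in which the tokens are processed."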
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
My thesis is that we are asserting the lack of human-like qualities in AIs that we cannot define or measure. Assertions should be based on data, not on the uneasy feeling that arises when an LLM falls into the uncanny valley.
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
I think where you are going wrong here is assuming that our internal perception is not also a hallucination by your definition. It absolutely is. But our minds are embodied, so we are able to check these hallucinations against outside stimuli. Your gripe that current LLMs are unable to do that is really a criticism of the current implementations of AI, which are trained on some data, frozen, then restricted from further learning by design. Imagine if your mind were cut off from all stimuli and then tested. That is what current LLMs are, and I doubt a human mind would behave much better in such a scenario. Just look at what happens to people cut off from social contact: their mental capacities degrade rapidly, and that is just one type of stimulus.
Another problem with your analysis is that you expect the AI to do something that humans cannot do: cite sources without an external reference. Go ahead right now and, from memory, cite a source for something you know. Do not Google it; just remember where you got that knowledge. Now who is the one that cannot cite sources? The way we cite sources generally requires access to the source at that moment, and current LLMs do not have that by design. Once again, this is a gripe with the implementation of a very new technology.
The main problem I have with so many of these "AI isn't really able to…" arguments is that no one is offering a rigorous definition of knowledge, understanding, introspection, etc. in a way that can be measured and tested. Further, we just assume that humans are able to do all these things without any tests to see if we can. Don't even get me started on the free will vs. illusory free will debate that remains unsettled after centuries. The crux of many of these arguments is the assumption that humans can do it and are somehow uniquely able to do it. We had these same debates about levels of intelligence in animals long ago, and we found that there really isn't any intelligent capability that is uniquely human.
- Comment on TikTok sues the US government over ban 6 months ago:
The real question you are asking is whether inaction is worse than inconsistency. Should we not put out a fire unless we can put out all fires? What you are suggesting is to let something burn for the sake of consistency.
- Comment on TikTok sues the US government over ban 6 months ago:
TikTok pushed a notification to all US users with the phone numbers of their local congressmen, urging them to oppose the bill. So many calls came in that the phone lines were jammed.
Let me distill that for you: China attempted to directly influence legislation with a mass propaganda campaign targeted at its US user base.
Please explain to me why that isn't a threat, and why the US should allow hostile foreign powers to directly influence internal politics.
- Comment on TikTok sues the US government over ban 6 months ago:
This is the real question. Is there a loophole that allows foreign governments to freely exercise mass surveillance and psyops if they allow US citizens to post on a blackboard outside their offices?
- Comment on TikTok sues the US government over ban 6 months ago:
The government can already access the data with a warrant. The ownership of TikTok has literally 0 effect on the government’s ability to access user data. Not being owned by the Chinese government has a huge impact on China’s ability to access that data.
- Comment on TikTok sues the US government over ban 6 months ago:
Except the most relevant part: it is owned by a hostile foreign government.
- Comment on TikTok sues the US government over ban 6 months ago:
Requires a warrant or subpoena. That is the difference.
- Comment on epidemiology 6 months ago:
Travelling forward in time could also kill everyone… Our adaptive immune systems are developed somatically, and purifying selection is nonzero in humans, so a traveller would arrive with neither acquired immunity to that era's pathogens nor the evolved resistance of its population.
- Comment on Making deepfake porn without consent could soon be a crime in England 7 months ago:
This is a difficult issue to deal with, but I think the problem lies with our current acceptance of photographs as objective truth. If a talented writer places someone in an erotic text, we immediately know that this is a product of imagination. If a talented artist sketches up a nude of someone, we can immediately recognize that this is a product of imagination. We have laws around commercial use of likenesses, but I don't think we would make those things illegal.
But now we have photographs that are products of imagination. I don't have a solution for this specific issue, but we all need to recalibrate how we establish trust in persons and information now that photographs, video, speech, etc. can be faked by AI. I can even imagine a scenario in the not-too-distant future where face-to-face conversation cannot be immediately trusted due to advances in robotics or other technologies.
Lying and deception are human nature, and we will always employ any new technologies for these purposes along with any good they may bring. We will always have to carefully adjust the line on what is criminal vs artistic vs non-criminal depravity.
- Comment on Court Bans Use of 'AI-Enhanced' Video Evidence Because That's Not How AI Works 7 months ago:
But you are not reporting the underlying probability, just the guess, so there is no way to distinguish a bad guess from a good one. Take your example, but with a fully occluded shape. The most probable guess could still be a full circle, yet with a very low probability of being correct, and that guess is reported with the same confidence as in your example. When you carry out this exercise for all extrapolations with full transparency about the underlying probabilities, you find yourself right back in the position the original commenter has taken. If the original data does not give you confidence in a particular result, the added extrapolations will not either.
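To make the point concrete, here is a toy sketch with invented probabilities: two occlusion scenarios yield the identical top guess, but only one of them deserves confidence, and rendering the guess alone (as an "enhanced" image does) erases that distinction.

```python
# Hypothetical numbers for illustration only.
# Partially occluded: the model is quite sure it's a circle.
# Fully occluded: "circle" barely beats the alternatives.
partially_occluded = {"circle": 0.90, "ellipse": 0.07, "blob": 0.02, "square": 0.01}
fully_occluded     = {"circle": 0.28, "ellipse": 0.25, "blob": 0.24, "square": 0.23}

for name, dist in [("partially occluded", partially_occluded),
                   ("fully occluded", fully_occluded)]:
    guess = max(dist, key=dist.get)  # only the guess gets drawn into the image
    print(f"{name}: guess={guess}, underlying p={dist[guess]:.2f}")
```

Both cases draw the same confident circle, but the underlying probabilities (0.90 vs. 0.28) are discarded, which is exactly why the extrapolated image cannot add evidentiary confidence the original data lacked.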