General_Effort
@General_Effort@lemmy.world
- Comment on EU ruling: tracking-based advertising by Google, Microsoft, Amazon, X, across Europe has no legal basis 17 hours ago:
For the purposes of this Regulation:
‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;
Anything connected to your username is personal data: your votes, posts, comments, settings, subscriptions, and so on, but only as long as they are, or can be, actually connected to that username. Arguably, the posts and comments that you reply to also become part of your personal data, in that they are necessary context. Any data that can be connected to an email address, or an IP address, is also personal data. When you log IPs for spam protection, you’re collecting personal data.
It helps to understand the GDPR if you think about data protection rights as a kind of intellectual property. In EU law, the right to data protection is regarded as a fundamental right of its own, separate from the right to privacy. The US doesn’t have anything like it.
- Comment on EU ruling: tracking-based advertising by Google, Microsoft, Amazon, X, across Europe has no legal basis 19 hours ago:
Federation means that personal data is sent to anyone who spins up an instance. What legal basis is there for that? These guys and their lawyers weren’t able to figure one out.
- Comment on EU ruling: tracking-based advertising by Google, Microsoft, Amazon, X, across Europe has no legal basis 21 hours ago:
I don’t really see how this ruling is helpful. The reasoning seems to confirm the view that the Fediverse is legally very problematic.
- Comment on EU ruling: tracking-based advertising by Google, Microsoft, Amazon, X, across Europe has no legal basis 22 hours ago:
It sounds like it would be relatively easy to fix, but I worry it will strengthen monopolistic tendencies.
- Comment on Avoiding AI is hard – but our freedom to opt out must be protected 1 day ago:
Thanks for the answer.
- Comment on Avoiding AI is hard – but our freedom to opt out must be protected 2 days ago:
By giving us the choice of whether someone else should profit by our data.
What benefit do you expect from that?
Same as I don’t want someone looking over my shoulder and copying off my test answers.
Why not?
- Comment on Cloudflare CEO warns AI and zero-click internet are killing the web's business model 2 days ago:
Hah. No. That goes all the way back to the ’90s. Tim Berners-Lee proposed that standard.
- Comment on Avoiding AI is hard – but our freedom to opt out must be protected 3 days ago:
We should have the right to not have our data harvested by default.
How would that benefit the average person?
- Comment on Paul McCartney and Dua Lipa among artists urging British Prime Minister Starmer to rethink his AI copyright plans 5 days ago:
The copyright industry would never accept that. Where’s the money for them?
- Comment on Paul McCartney and Dua Lipa among artists urging British Prime Minister Starmer to rethink his AI copyright plans 5 days ago:
Ahh. Paul McCartney. Looks like Lemmy has finally found a billionaire it likes.
I’m sure it is The Beatles’ activism for social change that won people over. Who could forget their great protest song “Taxman”, bravely taking a stand against the 95% tax rate. Truly, the ’60s were a time of liberation.
- Submitted 6 days ago to [deleted] | 3 comments
- Comment on Why I don't use AI in 2025 6 days ago:
Thank you for the long reply. I took some time to digest it. I believe I know what you mean.
I can also say that the consciousness resides in a form of virtual reality in the brain, allowing us to manipulate reality in our minds to predict outcomes of our actions.
We imagine what happens. Physicists use their imagination to understand physical systems. Einstein was famous for his thought experiments, such as imagining riding on a beam of light.
We also use our physical intuition for unrelated things. In math or engineering, everything is a point in some space; a data point. An RGB color is a point in 3D color space. An image can be a single point in some high dimensional space.
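A minimal sketch of that data-as-points idea (the image shape and values here are invented for illustration):

```python
# A tiny 2x2 RGB image as nested lists: height x width x channels.
image = [
    [[255, 0, 0], [0, 255, 0]],
    [[0, 0, 255], [255, 255, 255]],
]

# An RGB color is one point in 3-dimensional color space:
color = image[0][0]            # [255, 0, 0], a point in R^3

# Flattening the whole image gives a single point in a higher-dimensional
# space -- here 2 * 2 * 3 = 12 dimensions.
point = [c for row in image for pixel in row for c in pixel]
print(len(point))              # 12
```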
All our ancestors, back to the beginning of life, had to navigate an environment. Much of the evolution of our nervous system was occupied with navigating spaces and predicting physics. (This is why I believe language to be much easier than self-driving cars. See Moravec’s paradox.)
One problem is, when I think abstract thoughts and concentrate, I tend to be much less aware of myself. I can’t spare the “CPU cycles”, so to say. I don’t think self-awareness is a necessary component of this “virtual environment”.
There are people who are bad at visualizing, a condition known as aphantasia. There must be at least considerable diversity in the nature of this virtual environment.
Some ideas about brain architecture seem to be implied. It should be possible to test some of these ideas by reference to neurological experiments or case studies, such as the work on split-brain patients. Perhaps the phenomenon of blindsight is directly relevant.
I am reminded of the concept of latent representations in AI. Lately, as reasoning models have become the rage, there are attempts to let the reasoning happen in latent space.
- Comment on Why I don't use AI in 2025 1 week ago:
You do it wrong, you provided the “answer” to the logic proposition, and got a parroted proof for it.
Well, that’s the same situation I was in and just what I did. For that matter, Peano was also in that situation.
This is fixed now, and had to do with tokenizing info incorrectly.
Not quite. It’s a fundamental consequence of tokenization. The LLM does not “see” the individual letters. By adding spaces between the letters, for example, one could force a different tokenization and get a correct count (I tried back then). It’s interesting that the LLM counted 2 “r”s, as that is phonetically correct. One wonders how it picks up on these things. It’s not really clear why it should be able to count at all.
It’s possible to make an LLM work on individual letters, but that is computationally inefficient. A few months ago, researchers at Meta proposed a possible solution called the Byte Latent Transformer (BLT). We’ll see if anything comes of it.
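The tokenization effect can be sketched with a toy greedy longest-match subword tokenizer (the vocabulary below is invented for illustration; real BPE vocabularies are learned from data):

```python
# Hypothetical subword vocabulary; single characters are always allowed.
VOCAB = {"straw", "berry", "st", "raw", "ber", "ry"}

def tokenize(text: str) -> list[str]:
    """Greedily match the longest known subword at each position."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try the longest candidate first
            piece = text[i:j]
            if piece in VOCAB or len(piece) == 1:
                tokens.append(piece)
                i = j
                break
    return tokens

# The model "sees" ['straw', 'berry'] -- the letters inside a token are not
# individually visible, so counting the "r"s is not a lookup but something
# the model has to infer from training data.
print(tokenize("strawberry"))            # ['straw', 'berry']

# Inserting spaces forces single-character tokens, making each letter visible:
print(tokenize("s t r a w b e r r y"))
```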
In any case, I do not see the relation to consciousness. Certainly there are enough people who are not able to spell or count and one would not say that they lack consciousness, I assume.
Yes, but if you instruct a parrot or LLM to say yes when asked if it is separate from its surroundings, it doesn’t mean it is just because it says so.
That’s true. We need to observe the LLM in its natural habitat. What an LLM typically does is continue a text. (It could also be used to work backwards or fill in the middle, but never mind.) A base model is no good as a chatbot. It has to be instruct-tuned. In operation, the tuned model is given a chat log containing a system prompt, text from the user, and text that it has previously generated. It will then add a reply and terminate the output. This text, the chat log, could be said to be the sum of its “sensory perceptions” as well as its “short-term memory”. Within this, it is able to distinguish its own replies, those of the user, and possibly other texts.
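That chat-log arrangement is commonly represented as a list of role-tagged messages before being flattened into the one text the model continues (this sketch uses the widely seen role/content convention; actual templates differ between models):

```python
# A sketch of the chat log an instruct-tuned model is given at each turn.
# Role names follow the common "system"/"user"/"assistant" convention.
chat_log = [
    {"role": "system",    "content": "You are a helpful assistant."},
    {"role": "user",      "content": "Is 1+1=2 equivalent to 2=1+1?"},
    {"role": "assistant", "content": "Yes, equality is symmetric."},
    {"role": "user",      "content": "Can you prove it?"},
]

def render(log: list[dict]) -> str:
    """Flatten the log into the single text the model actually continues."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in log)

# The model's entire "sensory input" and "short-term memory" for this turn:
prompt = render(chat_log)
```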
My example shows this level of understanding clearly isn’t there.
Can you lay out what abilities are connected to consciousness? What tasks are diagnostic of consciousness? Could we use an IQ test and diagnose people as having consciousness or not?
I was a bit confused by that question, because consciousness is not a construct, the brain is, of which consciousness is an emerging property.
The brain is a physical object. Consciousness is both an emergent property and a construct; like, say, temperature or IQ.
You are saying that there are different levels of consciousness. So, it must be something that is measurable and quantifiable. I assume a consciousness test would be similar to an IQ test in that it would contain selected “puzzles”.
We have to figure out how consciousness is different from IQ. What puzzles are diagnostic of consciousness and not of academic ability?
- Submitted 1 week ago to games@lemmy.world | 0 comments
- Comment on Why I don't use AI in 2025 1 week ago:
Because I don’t think we have a sure methodology.
I don’t think there’s an agreed definition.
Strong AI or AGI, or whatever you will, is usually talked about in terms of intellectual ability. It’s not quite clear why this would require consciousness. Some tasks are aided by or maybe even necessitate self-awareness; for example, chatbots. But it seems to me that you could leave out such tasks and still have something quite impressive.
Then, of course, there is no agreed definition of consciousness. Many will argue that the self-awareness of chatbots is not consciousness.
I would say most people take strong AI and similar terms to mean an artificial person, for which they take consciousness to be a necessary ingredient. Of course, it is impossible to engineer an artificial person. It is like creating a technology to turn a peasant into a king. It is a category error. A less kind take could be that stochastic parrots string words together based on superficial patterns without any understanding.
But we may be able to prove that it is NOT conscious, which I think is clearly the case with current level AI. Although you don’t accept the example I provided, I believe it is clear evidence of lack of a consciousness behind the high level of intelligence it clearly has.
Indeed, I do not see the relation between consciousness and reasoning in this example.
Self-awareness means the ability to distinguish self from other, which implies computing from sensory data what is oneself and what is not. That could be said to be a form of reasoning. But I do not see such a relation for the example.
By that standard, are all humans conscious?
FWIW, I asked GPT-4o mini via DDG.
Screenshot
I don’t know if that means it understands. It’s how I would have done it (yesterday, after looking up Peano Axioms in Wikipedia), and I don’t know if I understand it.
- Comment on Why I don't use AI in 2025 1 week ago:
Just because you can’t make a mathematical proof doesn’t mean you don’t understand the very simple truth of the statement.
If I can’t prove it, I don’t know how I can claim to understand it.
It’s axiomatic that equality is symmetric. It’s also axiomatic that 1+1=2. There is not a whole lot to understand. I have memorized that. Actually, having now thought about this for a bit, I think I can prove it.
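For what it’s worth, a sketch of that proof in Peano-style notation, taking 1 := S(0) and 2 := S(S(0)) (my own reconstruction, not a formal derivation):

```latex
\begin{align*}
1 + 1 &= S(0) + S(0)            && \text{definition } 1 := S(0) \\
      &= S\bigl(S(0) + 0\bigr)  && \text{axiom } a + S(b) = S(a + b) \\
      &= S\bigl(S(0)\bigr)      && \text{axiom } a + 0 = a \\
      &= 2                      && \text{definition } 2 := S(S(0)) \\
2 &= 1 + 1                      && \text{symmetry of equality}
\end{align*}
```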
What makes the difference between a human learning these things and an AI being trained for them?
I think if I could describe that, I might actually have solved the problem of strong AI.
Then how will you know the difference between strong AI and not-strong AI?
- Comment on Why I don't use AI in 2025 1 week ago:
I don’t see why the example requiring training for humans to understand is unfortunate.
Humans aren’t innately good at math. I wouldn’t have been able to prove the statement without looking things up. I certainly would not be able to come up with the Peano Axioms, or anything comparable, on my own. Most people, even educated people, probably wouldn’t understand what there is to prove. Actually, I’m not sure if I do.
It’s not clear why such deficiencies among humans do not argue against human consciousness.
A leading AI has way more training than would ever be possible for any human, yet they don’t grasp basic concepts, while their knowledge is way bigger than any human’s.
That’s dubious. LLMs are trained on more text than a human ever sees, but humans are trained on data from several senses. I guess it’s not entirely clear how much data that is, but it’s a lot and very high quality. Humans are trained on that sense data and not on text. Humans read text and may learn from it.
Being conscious is not just to know what the words mean, but to understand what they mean.
What might an operational definition look like?
- Comment on Why I don't use AI in 2025 1 week ago:
Obviously the Turing test doesn’t cut it, which I suspected already back then.
The Turing test is misunderstood a lot. Here’s Wikipedia on the Turing test:
[Turing] opens with the words: “I propose to consider the question, ‘Can machines think?’” Because “thinking” is difficult to define, Turing chooses to “replace the question by another, which is closely related to it and is expressed in relatively unambiguous words”. Turing describes the new form of the problem in terms of a three-person party game called the “imitation game”, in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing’s new question is: “Are there imaginable digital computers which would do well in the imitation game?”
One should bear in mind that scientific methodology was not very formalized at the time. Today, it is self-evident to any educated person that the “judges” would have to be blinded, which is the whole point of the text chat setup.
What has been called “Turing test” over the years is simultaneously easier and harder. Easier, because these tests usually involved only a chat without any predetermined task that requires thinking. It was possible to pass without having to think. But also harder, because thinking alone is not sufficient. One has to convince an interviewer that one is part of the in-group. It is the ultimate social game; indeed, often a party game (haha, I made a pun). Turing himself, of course, eventually lost such a game.
All I can say is that with the level of intelligence current leading AI have, they make silly mistakes that seem obvious if it was really conscious.
For instance as strong as they seem analyzing logic problems, they fail to realize that 1+1=2 <=> 2=1+1.
This connects consciousness to reasoning ability in some unclear way. The example seems unfortunate, since humans need training to understand it. Most people in developed countries would agree that the equivalence is formally correct, but very few would be able to prove it. Most wouldn’t even know how to spell Peano Axiom; nor would they even try (Oh, luckier bridge and rail!)
- Comment on How would legal procedure change if every citizen eligible for jury duty was aware of jury nullification? 1 week ago:
Far fewer than 1 in 20 defendants get a jury trial in the US. If every defendant insisted on their right to one, then the system would break down for lack of jurors.
Few juries would decide to nullify, since, by and large, Americans believe in punishment.
So the change would be insubstantial.
- Comment on Why I don't use AI in 2025 1 week ago:
Self-aware consciousness on a human level.
How do you operationally define consciousness?
- Comment on Why I don't use AI in 2025 1 week ago:
I find it funny that in the year 2000 while attending philosophy at University of Copenhagen I predicted strong AI around 2035.
That seems to be aging well. But what is the definition of “strong AI”?
- Comment on Your majesty 1 week ago:
I’d be more supportive of fungi independence if they aimed for a democratic republic. Just saying.
- Comment on The Beetle 1 week ago:
Oh Lemmiwinks, Lemmiwinks, …
- Submitted 1 week ago to [deleted] | 23 comments
- Comment on Mr Burn 1 week ago:
Ah, you may leave here for
~~four days~~ 11 minutes in space
But when you return, it’s the same old place
The poundin’ of the drums, the pride and disgrace
You can bury your dead, but don’t leave a trace
Hate your next door neighbor but don’t forget to say grace
- Comment on Pictures of Animals Getting CT Scans Against their Will: A Thread 1 week ago:
20 ccs of lasagna, stat!
- Comment on FANTER 1 week ago:
Just the German branch of The Coca-Cola Company.
- Comment on FANTER 1 week ago:
History trivia: Fanta was invented in 1941 in Nazi Germany, when Coca-Cola Germany couldn’t get the original syrup because trade was cut off.
- Comment on Fediverse Corporate Sabotage 2 weeks ago:
Lemmy.world is trying very hard to comply with the law. I think the same is true for lemm.ee; in that sense, they have already caved.
Sooner or later, EU governments are going to take a closer look at the fediverse. There are very loud demands that regulations should be more vigorously enforced. Some instances may not survive.
Maybe what happens first is that some instance gets sued. Maybe by the copyright industry, but I wouldn’t be surprised if it was some disgruntled user.
The EU doesn’t value the freedom of information (“free speech”) in the same way as the US, and a lot of people on the fediverse will tell you that it’s just more American bullshit. You shouldn’t assume that there is any “we” that wants to get around regulations.
- Comment on How Will We Know If The Trump Tariffs Were A Good Idea? 2 weeks ago:
Ok, another answer closer to the ground. Two goals are often invoked: reducing the trade deficit and increasing domestic manufacturing.
- Trade deficit
… means that more goods (and services) come into the US from the rest of the world than the US delivers in return.
Reducing the trade deficit makes Americans poorer by design. There will be fewer goods available for Americans, either because they have to give up more to the rest of the world, or because they don’t come into the country in the first place.
The rest of the world is willing to loan money to people, companies, and governments in the US. It is also eager to invest in the country, because it really was a good place in which to do business. Look at the current big thing: AI. You can’t really do that in the EU, and investing in China has its own risks. Trump may actually reduce the deficit by making the US more of a South American style banana republic.
- Manufacturing in the US.
One manufactures stuff outside the US and transports it there because it is more efficient. Americans can be more profitably employed in different areas. Moving more manufacturing to the US should be expected to leave the average American poorer. It should not be expected, in isolation, to reduce the trade deficit as it creates new investment opportunities that potentially attract foreign money, increasing the deficit.
However, while Americans would be left financially poorer, there may be benefits not captured by conventional econometrics. Maybe manufacturing is more emotionally satisfying in a way that is not captured by only looking at the wages. Who knows?
Unfortunately, getting to that state will be brutal. Millions of people will have to find and learn new jobs. That is what happened when manufacturing was off-shored. Reversing that will have the same cost. Some economists have come to believe that the psychological cost of such structural changes has been vastly underestimated, and that is why trade agreements are so unpopular. The benefits from free trade may not outweigh the psychological pain and disruption of communities. Reversing free trade will have similar effects that are likewise virtually impossible to measure.
I think the most objective benefit would arise if a war happened that disrupted trade. For example, if Trump invaded Canada and Greenland, this would probably lead to the US being embargoed. Then it would appear good to have already built manufacturing capacity in the US while it was still easy. You need physical goods to fight wars, after all.