Buffalox
@Buffalox@lemmy.world
- Comment on Is Heliobiology a pseudoscience? 1 day ago:
Magnetism is always blamed as the factor causing these negative health effects
Better get rid of all loudspeakers then.
Even a stupid lightbulb has a magnetic field that influences you more than a solar flare.
Most of the papers I read on this mention “Schumann resonances”, and sometimes “pineal gland” crystals.
Probably to dupe the gullible and uninformed with technical terms.
These people are probably either con-men or mentally ill.
en.wikipedia.org/wiki/Schumann_resonances
Schumann resonances are the principal background in the part of the electromagnetic spectrum[2] from 3 Hz through 60 Hz
These frequencies are very close to many other everyday phenomena, occurring in music (loudspeakers) and in light bulbs, as previously mentioned.
my.clevelandclinic.org/…/23334-pineal-gland
secretes the hormone melatonin. Your pineal gland’s main job is to help control the circadian cycle of sleep and wakefulness by secreting melatonin.
So why is the main function of the gland unaffected? Also there is very little reason to believe these alleged “crystals” would be magnetic.
It consists of small crystals that are less than 20 µm in length
20 µm is very small.
From the above link about Schumann resonance:
These correspond to wavelengths of 38000, 21000, 14000, 11000 and 9000 km.
There is NO WAY these two can resonate together, simply NADA possibility. Even the shortest listed wavelength (9000 km) is several hundred billion times larger than those crystals!
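To put numbers on that mismatch, a quick sketch using the figures quoted above (the 7.83 Hz fundamental is the lowest Schumann resonance, and 20 µm is the crystal length from the Cleveland Clinic quote):

```python
# Scale comparison: Schumann resonance wavelength vs. pineal "crystal" size.
C = 299_792_458              # speed of light in vacuum, m/s
f_fundamental = 7.83         # Hz, lowest Schumann resonance

wavelength = C / f_fundamental   # ~3.8e7 m, i.e. the ~38,000 km from the quote
crystal = 20e-6                  # 20 µm crystal length, in meters

ratio = wavelength / crystal
print(f"wavelength ≈ {wavelength/1000:,.0f} km, ratio ≈ {ratio:.1e}")
```

Even using the shortest listed wavelength (9000 km) instead of the fundamental, the ratio is still about 4.5 × 10¹¹.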
- Comment on Tesla Reportedly Has $800 Million Worth of Cybertrucks That Nobody Wants 2 days ago:
I’m not saying you are wrong in anything you state, and you make good points.
And yes, you are probably right that the shortcomings compared to what was promised are the main reason sales didn’t go as expected.
But I think you don’t see it the same way as barneypiccolo, whom you responded to.
Wasn’t the DeLorean design pretty iconic from the beginning? The fact that 2/3rds of the cars built are still on the road now, 44 years later, speaks volumes in its favor IMO. Those were not cars that were bought, found insufficient and then scrapped. Instead they have been maintained, even though DeLorean hasn’t been around to supply spare parts.
Also consider the central role the car had in the movie Back to the Future, simply because it was such a cool car despite its flaws. What other car could they have used for a similar effect?
Imagine trying to do that with the Cybertruck! The cinema audience would most likely burst out laughing at the claim that doing anything with a Cybertruck is doing it in “style”, as Emmett Brown expressed it. It would clearly be seen as a big joke about how stupid the car is and looks.
So no, the car wasn’t popular enough in sales for the number of cars DeLorean built, but it was never an unpopular atrocity like the Cybertruck is.
- Comment on OpenAI says its latest models outperform doctors in medical benchmark 2 days ago:
I almost feel sad for IBM, this was supposed to be their thing.
- Comment on Why I don't use AI in 2025 6 days ago:
That’s all very spot on. 👍 😀
And the fact that our imagination isn’t limited by real physics, but we can imagine alternatives, hence we have fantasy stories. This is, I think, a very good example of how our minds do not depend entirely on reality.
I can’t spare the “CPU cycles”, so to say.
Absolutely there are limitations, but when you have solved the abstract puzzle and learned it by heart, then you can! But we can only really focus on one thing at a time. I actually tried way back in the 80’s to train myself to focus on 2 things at a time. But pressuring myself too hard was such a disturbing experience I stopped. I think it may be possible, but there is also risk of going insane.
I don’t think self-awareness is a necessary component of this “virtual environment”.
Well, that’s a tough one, I admit. Trying to understand the limits, I have also observed our cat, to try to determine how it thinks and what its limits are.
It seems to me that cats are not capable of manipulating their environment mentally. For instance, if the cat is hunting a mouse that hides behind a small obstacle, the cat cannot figure out to move the obstacle to get at the mouse. This is also an example of different degrees of awareness. It seems this thing we take for granted is something most animals aren’t capable of. So I think this virtual environment is necessary at least for the level of consciousness we have. But I agree that it may not be a necessity for more basic self awareness, because I think our cat is self aware. He can clearly distinguish between me and my wife; this is obvious because his behavior is very different towards us. If he can distinguish between us, it seems logical that he is also able to recognize himself as different from us. AFAIK that’s a pretty big part of what self awareness is.
But I also think that we don’t have to be aware of our consciousness all the time, only when it’s relevant.
at least, quite some diversity in the nature of this virtual environment.
Absolutely yes, HUGE differences. I’m personally a bit of a fan of Piet Hein, a multi-talent who was a theoretical physicist. He could hold complex geometrical shapes in his head and see if they fit together in a way no one else at the university could, and he played “mental ping-pong” with Niels Bohr. And just as he was very good at it, there are people who are similarly bad at it. I find it hard to understand how their thought process works, because curiously this is also a thing among smart people AFAIK.
I admit I’m not really aware of any results from the study of it, but it is an interesting subject.
I am reminded of the concept of latent representations in AI.
From your link:
we argue that language space may not always be optimal for reasoning.
I absolutely agree. It’s like discerning between the abstract and the concrete, and if you can visualize it as a person, you can probably also understand it. So I wonder if people with aphantasia think in a way that is similar to abstract thinking for everything? Maybe each way has its own strengths?
We utilize the last hidden state of the LLM as a representation of the reasoning state (termed “continuous thought”)
So it’s not like a virtual reality, but wow that sounds awesome. 😎
It sure is impressive how fast things are developing now.
- Comment on Things at Tesla are worse than they appear 6 days ago:
It was only able to post a $409 million profit in the quarter thanks to the sale of $595 million worth of regulatory credits to other automakers.
Without the regulatory credits and capital gains, Tesla would be $500 million in the red.
And sales continue to drop in all markets. Tesla is no longer competitive in China and the EU, only in the USA due to tariffs on cars.
A couple of years ago Tesla boasted the highest margins in the industry on their cars; now they are so low that if prices continue to drop, Tesla will soon be at a deficit on every car sold if they try to follow, or if they don’t reduce prices, their cars will simply be too expensive. Damned if you do, damned if you don’t.
- Comment on Why I don't use AI in 2025 1 week ago:
Can you lay out what abilities are connected to consciousness?
I probably can’t say much new, but it is a combination of memory, learning, abstract thinking, and self awareness.
I can also say that the consciousness resides in a form of virtual reality in the brain, allowing us to manipulate reality in our minds to predict outcomes of our actions.
At a more basic level it is pattern memory, recognition, prediction and manipulation.
The fact that our consciousness is a virtual construct also acts as a shim, distancing the mind from direct dependency on the underlying physical layer. Although it still depends on it to work, of course.
So to make an artificial consciousness, you don’t need to create a brain, you can do it by recreating the functionality of the abstraction layer on other forms of hardware too.
It is also this feature that allows us to have free will. Although that depends on definition, I believe we do have free will in an absolutely meaningful sense. Something that took me decades to realize was actually possible.
I don’t know if this makes any sense to you? But maybe you find it interesting?
You are saying that there are different levels of consciousness. So, it must be something that is measurable and quantifiable.
Yes, there are different levels, actually in 2 ways. There are different levels between the consciousness of a dolphin and a human. A dolphin is also self aware and conscious, but it does not have the same level of consciousness we do, simply because it doesn’t possess the same level of intelligence.
But even within the human brain there are different levels of consciousness. The term “subconscious” is in common use, and with good reason. There are things that are hard to learn, and we need to concentrate and practice hard to learn them. But with enough practice we build routine, and at some point things become so routine that we can do them without thinking about them, and instead think about something else.
At that point you have trained a subconscious routine that is able to work almost independently, without guidance from your main consciousness. There are also functions that are “automatic”: when you listen to sounds, you can distinguish many separate sounds without problem. We can somewhat mimic that in software today, separating different sounds, but it’s extremely complex to do, and the mathematics involved is more than most can handle. Yet in our hearing we do it effortlessly. So there is obviously an intelligence at work in the brain that isn’t directly tied to our consciousness.
IDK if I’m explaining myself well here, but the subconscious is a very significant part of our consciousness.
So, it must be something that is measurable and quantifiable.
That is absolutely not a certainty. At least I don’t think we can measure it at this point in time, but in the future there may be better knowledge and better tools. As it is, we have been hampered by wrongful thinking in these areas for centuries, quite the opposite of physics and mathematics, which have helped computing every step of the way.
The study of the mind has been hampered by prejudice: thinking that humans are not animals, thinking free will comes from god, with nonsense terms like id, and thinking the soul is something separate from the body. Psychology basically started out as pseudoscience, and despite that it was a huge step forward!
I’ll stop here; these issues are very complex, and some of them have taken me decades to figure out. There is much dogma and even superstition surrounding these issues, so it used to be rare to find someone to read or listen to who made sense based on reality. It seems to me that basically only for the past 15 years has the science of the mind been catching up to reality.
- Comment on Why I don't use AI in 2025 1 week ago:
FWIW, I asked GPT-4o mini via DDG.
You did it wrong: you provided the “answer” to the logic proposition, and got back a parroted proof for it. Completely different situation.
The AI must be able to figure this out in responses that require this very basic understanding. I don’t recall the exact example, but here is a similar one, where the AI fails to simply count the number of R’s in strawberry, claiming there are only 2, and refusing to accept there are 3. Then, when told there is 1 in straw and 2 in berry, it made the very puzzling argument that counting the R in straw is some sort of clever trick.
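The count itself is of course trivial for ordinary code; the failure is specific to how LLMs see tokens rather than individual letters:

```python
# Counting letters directly, the task LLMs working on tokens famously got wrong.
word = "strawberry"
print(word.count("r"))   # → 3: one in "straw", two in "berry"
```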
This is fixed now, and had to do with tokenizing info incorrectly. So you can’t “prove” this wrong by showing an example of a current AI that doesn’t make the mistake.
Unfortunately I can’t find a link to the original story, because I’m flooded with later results. But you can easily find the 2 R’s in strawberry problem.
Self-awareness means the ability to distinguish self from other, which implies computing from sensory data what is oneself and what is not.
Yes, but if you instruct a parrot or an LLM to say yes when asked if it is separate from its surroundings, that doesn’t mean it is, just because it says so.
So we need to figure out if it actually understands what it means. Self awareness on the human level requires a high level of logical thought and abstract understanding. My example shows this level of understanding clearly isn’t there.
As I wrote earlier, we really can’t prove consciousness. The way to get around that is to figure out some of the mental abilities required for it; if those can be shown not to be present, we can conclude it’s probably not there.
When we have Strong AI, it may take a decade to be widely acknowledged. And this will stem from failure to disprove it, rather than actual proof.
You never asked how I define intelligence, self awareness or consciousness; you asked how I operationally define them, which is a very different question.
en.wikipedia.org/wiki/Operational_definition
An operational definition specifies concrete, replicable procedures designed to represent a construct.
I was a bit confused by that question, because consciousness is not a construct; the brain is, of which consciousness is an emergent property.
Also:
An operation is the performance which we execute in order to make known a concept. For example, an operational definition of “fear” (the construct) often includes measurable physiologic responses that occur in response to a perceived threat.
It seems to me that to define that for consciousness would essentially require possessing the knowledge necessary to replicate it.
Nobody on planet earth has that knowledge yet AFAIK.
- Comment on Why I don't use AI in 2025 1 week ago:
Then how will you know the difference between strong AI and not-strong AI?
I’ve already stated that that is a problem:
From a previous answer to you:
Obviously the Turing test doesn’t cut it, which I suspected already back then. And I’m sure when we finally have a self aware conscious AI, it will be debated violently.
Because I don’t think we have a sure methodology.
I think therefore I am, is only good for the conscious mind itself.
I can’t prove that other people are conscious, although I’m 100% confident they are.
In exactly the same way we can’t prove when we have a conscious AI.
But we may be able to prove that it is NOT conscious, which I think is clearly the case with current level AI. Although you don’t accept the example I provided, I believe it is clear evidence of lack of a consciousness behind the high level of intelligence it clearly has.
- Comment on Why I don't use AI in 2025 1 week ago:
Just because you can’t make a mathematical proof doesn’t mean you don’t understand the very simple truth of the statement.
- Comment on Why I don't use AI in 2025 1 week ago:
I know about the Turing test, it’s what we were taught about and debated in philosophy class at University of Copenhagen, when I made my prediction that strong AI would probably be possible about year 2035.
to exhibit intelligent behaviour equivalent to that of a human
Here equivalent actually means indistinguishable from a human.
But as a test of consciousness that is not a fair test, because obviously a consciousness can be different from a human one, and our understanding of how a simulation can fake something without it being real is also a factor.
But the original question remains: how do we decide it’s not conscious if it responds as if it is?
This connects consciousness to reasoning ability in some unclear way.
Maybe it’s unclear because you haven’t pondered the connection? Our consciousness is a very big part of our reasoning; consciousness is definitely guiding our reasoning, and it improves the level of reasoning we are capable of.
I don’t see why it is unfortunate that the example requires training for humans to understand. A leading AI has way more training than would ever be possible for any human, yet they still don’t grasp basic concepts, while their knowledge is way bigger than any human’s.
It’s hard to explain, but intuitively it seems to me the missing factor is consciousness. It has learned tons of information by heart, but it doesn’t really understand any of it, because it isn’t conscious.
Being conscious is not just to know what the words mean, but to understand what they mean.
I think therefore I am.
- Comment on [deleted] 1 week ago:
It’s hard to say, it doesn’t seem like the CEO of Apple being openly gay has detracted from the popularity of the brand.
And as we have seen with Elon Musk and Tesla, an unpopular CEO definitely can detract from the popularity of a brand.
There has been no such reaction against Apple.
So my guess is that it is possible, but it obviously depends on the candidate.
If you had asked a few years before Obama was elected whether a Black president would be possible, I would probably have guessed no. But Obama proved it was in fact possible when he got elected in 2008.
I think if the right person comes along, he or she can win regardless of color, sexuality or gender.
Of the 3, it seems to me that currently not being a man is probably the biggest handicap.
But in time politics will be dominated by women; the trend where I live (Denmark) is pretty clear, and women will most likely dominate within a few decades. Just recently all the Scandinavian prime ministers were women. To me that was a very clear sign of a trend towards more women in politics, and more women gaining leading positions too.
We also had a gay man as a pretty popular leader of the conservatives of Denmark. So it is not much of a stretch to say a gay man could absolutely become Prime Minister here. No issue whatsoever.
USA might be a bit harder, but not impossible.
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 1 week ago:
Haha I grew up before smartphones and GPS navigation was a thing, and I never could navigate well even with a map!
GPS has actually been a godsend for me, helping me learn to navigate my own city way better, because I learn better routes on the first try.
Navigating is probably my weakest “skill” and is the joke of the family. If I have to go somewhere and it’s 30km, the joke is that it’s 60km for me, because I always take “the long route”.
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 1 week ago:
But obviously the CIA is still around. Plus dozens of other secret US agencies.
- Comment on Why I don't use AI in 2025 1 week ago:
To understand what “I think therefore I am” means is a very high level of consciousness.
At lower levels things get more complicated to explain.
- Comment on Why I don't use AI in 2025 1 week ago:
Good question.
Obviously the Turing test doesn’t cut it, which I suspected already back then. And I’m sure when we finally have a self aware conscious AI, it will be debated violently.
We may think we have it before it’s actually real, some claim they believe some of the current systems display traits of consciousness already. I don’t believe that it’s even close yet though.
As wrong as Descartes was about animals, he still nailed it with “I think therefore I am” (cogito, ergo sum) www.britannica.com/topic/cogito-ergo-sum.
Unfortunately that’s about as far as we can get before all sorts of problems arise regarding actual evidence. So philosophically, in principle, it is only the AI itself that can know for sure if it is truly conscious.
All I can say is that with the level of intelligence current leading AI have, they make silly mistakes that would seem obvious if they were really conscious.
For instance as strong as they seem analyzing logic problems, they fail to realize that 1+1=2 <=> 2=1+1.
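The equivalence above is just the symmetry of equality, so the two propositions can never disagree. A trivial sanity check:

```python
# Equality is symmetric, so these two propositions are the same claim.
p = (1 + 1 == 2)
q = (2 == 1 + 1)
print(p, q, p == q)   # → True True True
```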
Such things will of course be ironed out, and maybe this one already is. But it shows the current model isn’t good enough for the basic comprehension I would think would follow from consciousness.
Luckily there are people who know much more about this, and it will be interesting to hear what they have to say when the time arrives. 😀
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 1 week ago:
I live in Denmark, and I was taught already in public school how such things were possible, most notably that Russia might be doing experiments here, because our reporting of effects is very open and efficient. So Denmark would be an ideal testing ground for experiments.
But my guess is that it may also make it dangerous to experiment here, because the risk of being detected is also high.
- Comment on Why I don't use AI in 2025 1 week ago:
Self aware consciousness on a human level. So it’s still far from a sure thing, because we haven’t figured consciousness out yet.
But I’m still very happy with my prediction, because AI is now at a way more useful and versatile level than ever, the use is already very widespread, and research and investments have exploded over the past decade. And AI can already do things that used to be impossible, for instance in image and movie generation and manipulation.
But I think the code will be cracked soon, because self awareness is a thing of many degrees. For instance a dog is IMO obviously self aware, but that isn’t universally recognized, because it doesn’t have the same degree of self awareness humans have.
This is a problem that dates back to the 17th century and Descartes, who claimed that for instance horses and dogs were mere automatons, and therefore couldn’t feel pain.
This is of course completely in line with the Christian doctrine that animals don’t have souls.
But to me it seems self awareness, like emotions, doesn’t have to start at human level; it can start at a simpler level, which can then be developed further.
PS:
It’s true animals don’t have souls, in the sense of something magical provided by a god, because nobody has one. Souls are not necessary to explain self awareness or consciousness or emotions.
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 1 week ago:
OK that risk wasn’t really on my radar, because I live in a country where such things have never been known to happen.
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 1 week ago:
IDK, apparently the MKUltra program was real.
B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.
That sounds harsh.
- Comment on Why I don't use AI in 2025 1 week ago:
Deep Blue was mostly based on raw computational power, with very little ability to actually judge whether a move was “good” without calculating the possibilities following it.
As I understand it, it only worked on chess as a “mathematical” problem, and was incapable of judging strategic positions, except if it had “seen” a position before and already calculated the likely outcomes.
In short, there was very little intelligence; it was based only on memory and massive calculation power. Which indeed are aspects of intelligence, but only at a very low level.
- Comment on Why I don't use AI in 2025 1 week ago:
I find it funny that in the year 2000 while attending philosophy at University of Copenhagen I predicted strong AI around 2035. This was based on calculations of computational power, and estimates of software development.
At the time I had already been interested in AI development and matters of consciousness for many years. And I was a decent programmer; I had already made self modifying code back in 1982. So I made this prediction at a time when AI wasn’t a very popular topic, in the middle of a decades-long futile desert walk without much progress.
And for about 15 years, very little continued to happen. It was pretty obvious the approach behind for instance Deep Blue wasn’t the way forward, but that seemed to be the norm for a long time.
But it looks to me that the understanding of how to build a strong AI is much much closer now. We might actually be halfway there!
I think we are pretty close to having the computational power needed now in AI-specific datacenter clusters, but the software isn’t quite there yet.
I’m honestly not that interested in the current level of AI; although LLMs can yield very impressive results at times, they are also flawed.
Partially self driving cars are kind of irrelevant IMO. But truly self driving cars will make all the difference, and will be a cool achievement for the current level of AI evolution when achieved.
So current level AI can be useful, but when we achieve strong AI it will make all the difference!
- Comment on Facebook Allegedly Detected When Teen Girls Deleted Selfies So It Could Serve Them Beauty Ads 1 week ago:
Goddam I had to read that headline 3 times before I understood the implication!
That is outright disgusting, and such practices ought to be outlawed.
Or as Trump would say, a very cool and very legal way to make money.
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 1 week ago:
Faulty wiring.
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 1 week ago:
I admit I only read a third of the article.
But IMO nothing in that is special to AI. In my life I’ve met many people with similar symptoms: thinking they are Jesus, or thinking computers work by some mysterious power they possess, which was stolen from them by the CIA. And when they die all computers will stop working!
I’m not a psychiatrist, but from what I gather it’s probably schizophrenia of some form.
My guess is this person had a distorted view of reality he couldn’t make sense of. He then tried to get help from the AI, and built a world view completely removed from reality with it.
But most likely he would have done that anyway, it would just have been other things he would interpret in extreme ways. Like news, or conversations, or merely his own thoughts.
- Comment on New Meta XR glasses again tipped to land later this year – well ahead of Apple's rumored AR glasses with Apple Intelligence 1 week ago:
Well ahead of Apple
Is this supposed to indicate that Meta is beating Apple to market?
Because I’ve got news for you then: neither is “first”, and it’s completely irrelevant which is first if they can’t present a strong use case, which all previous attempts have failed at.
- Comment on Ericsson and Nokia were cutting 20,000 jobs as Huawei grew 1 week ago:
Ha! The joke’s on them, when we achieve 100% idiocracy their educations will be worthless. 🤣🤣🤣
- Comment on China's Huawei develops new AI chip, seeking to match Nvidia 1 week ago:
Yes they have, but SMIC can still only make the equivalent of TSMC’s 7nm process, and SMIC is the leading chip manufacturer in China. They could of course have a better design, but it needs to be about 3 times better to match Nvidia performance on the currently available processes.
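A back-of-the-envelope sketch of where a "roughly 3 times better" figure could come from. The density and clock ratios below are my own illustrative assumptions (per the "half the transistors at lower clock" point made later in this thread), not measured numbers:

```python
# Illustrative only: how much better a chip design must be to offset a process gap.
density_ratio = 0.5   # assumption: ~half the transistors per die on the older node
clock_ratio = 0.7     # assumption: noticeably lower achievable clock speed

process_handicap = density_ratio * clock_ratio   # combined raw-throughput factor
design_factor_needed = 1 / process_handicap
print(f"design must be ~{design_factor_needed:.1f}x better")
```

With those assumed ratios the result is about 2.9x, in the same ballpark as the "3 times better" estimate; different assumptions would shift it somewhat.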
- Comment on AI models routinely lie when honesty conflicts with their goals 1 week ago:
ALL the examples apply.
- Comment on AI models routinely lie when honesty conflicts with their goals 1 week ago:
3 an inaccurate or untrue statement; falsehood: When I went to school, history books were full of lies, and I won’t teach lies to kids.
- Comment on China's Huawei develops new AI chip, seeking to match Nvidia 1 week ago:
it might only hold for that narrow case.
Absolutely, I am sure Huawei has amazing resources for development, and they can do a lot.
But I’m also pretty sure they can’t beat Nvidia with a process only allowing half the transistors at a lower clock.
In fact, if they manage to match Nvidia even in 5 years time, when they may have better production, that too would be an astounding feat.
But there is no way they can surpass Nvidia now.