AnarchoEngineer
@AnarchoEngineer@lemmy.dbzer0.com
- Comment on Intelligent Design 20 hours ago:
I’m an engineer with a CS minor and ADHD; this kind of research is what I do with my free time lol.
To be fair this is kind of a shared hobby project/topic between me and my friend (who is a biophysics major now in med school).
Anyway, point is that you don’t need to have a real “purpose” in order to be curious. I work in a robotics/medical lab at my university and my friend is trying to be a surgeon, yet we’re constantly in debates about astro and quantum physics, to the point we’ve gotten career physicists to weigh in on our arguments.
No relevance to our majors or our work, but super fucking interesting and full of gaps where there are more theories than facts. Plenty of room for new perspectives.
Normalize doing research for fun!
- Comment on Intelligent Design 1 day ago:
SNNs more closely resemble the function of biological neurons and are perfect for temporally changing inputs. I decided to teach myself Rust at the same time I learned about these, so I built one from scratch trying to mimic the results of this paper (or rather a follow-up paper in which they change the inhibition pattern, leading to behavior similar to a self-organizing map; I can’t find the link to said paper right now…).
After building that net I had some ideas about how to improve symbol recognition. This led me down a massive rabbit hole about how vision is processed in the brain, which eventually spiraled out to the function and structure of the hippocampus, and has now come back to the neocortex, where I’m currently focused on mimicking the behavior and structure of cortical minicolumns.
The main benefit of SNNs over ANNs is also a detriment: the neurons are meant to run in parallel. This means it’s blazing fast if you have neuromorphic hardware, but it’s incredibly slow and computationally intense if you try to simulate it on a typical machine with von Neumann architecture.
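For anyone curious what the core building block of an SNN looks like, here’s a minimal leaky integrate-and-fire neuron in Rust. This is a generic sketch, not code from the paper, and all the constants are made up for illustration:

```rust
/// Minimal leaky integrate-and-fire (LIF) neuron: the membrane
/// potential decays toward rest and the neuron emits a spike when
/// the potential crosses a threshold. Constants are illustrative.
struct LifNeuron {
    v: f64,         // membrane potential
    tau: f64,       // decay time constant
    threshold: f64, // spike threshold
    v_reset: f64,   // potential right after a spike
}

impl LifNeuron {
    fn step(&mut self, input_current: f64, dt: f64) -> bool {
        // Euler step of dv/dt = (-v + input) / tau
        self.v += dt * (-self.v + input_current) / self.tau;
        if self.v >= self.threshold {
            self.v = self.v_reset;
            true // spiked
        } else {
            false
        }
    }
}

/// Drive one neuron for `steps` timesteps with a constant input
/// current and count its spikes. Note the sequential loop: on
/// ordinary hardware every neuron in the net has to be stepped
/// like this, one at a time, which is exactly the von Neumann
/// bottleneck described above.
fn count_spikes(steps: usize) -> usize {
    let mut n = LifNeuron { v: 0.0, tau: 10.0, threshold: 1.0, v_reset: 0.0 };
    (0..steps).filter(|_| n.step(1.5, 1.0)).count()
}

fn main() {
    println!("spikes in 1000 steps: {}", count_spikes(1000));
}
```

On neuromorphic hardware every neuron gets its own little circuit and they all run at once; in software you pay for each one in the loop.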
- Comment on Intelligent Design 2 days ago:
I actually came across this for the first time when I was doing research into the visual pathway for the purpose of trying to structure a spiking neural net more closely to human visual processing.
The Wikipedia page mentions cephalopod eyes specifically when talking about the inverted retina of vertebrates.
The vertebrate retina is inverted in the sense that the light-sensing cells are in the back of the retina, so that light has to pass through layers of neurons and capillaries before it reaches the photosensitive sections of the rods and cones.[5] The ganglion cells, whose axons form the optic nerve, are at the front of the retina; therefore, the optic nerve must cross through the retina en route to the brain. No photoreceptors are in this region, giving rise to the blind spot.[6] In contrast, in the cephalopod retina, the photoreceptors are in front, with processing neurons and capillaries behind them. Because of this, cephalopods do not have a blind spot.
The Wikipedia page goes on to explain that our inverted retinas could be the result of evolution trying to protect color receptors by limiting their light intake, as our glial cells do appear to facilitate concentrating light.
However, the “positive” effects of the glial cells coming before the receptors could almost certainly be implemented in a non-inverted retina. So that’s the evolutionary duct tape I was mentioning.
It would be difficult to flip the retina back around (in fact since it originates as part of the brain we’d kind of have to grow completely different eyes), so that’s not an option for evolution.
However, slight changes to the glial cells and vasculature of the eyes are definitely more possible. So those mutations happen and evolution optimizes them as best it can.
Evolution did well to optimize a poorly structured organ but it’s still a poorly structured organ.
- Comment on Intelligent Design 2 days ago:
Honestly, it was pretty hard for me to find a source which has made me a little skeptical of my own statements.
I was able to find two case studies in which patients with liver damage that caused them to have low levels of vitamin A exhibited night blindness. Both were treated for vitamin A deficiency and saw symptoms improve.
The strongest evidence of my original claim is the fact that one of the patients had otherwise healthy eyes and vision, only having extreme trouble seeing at night. After receiving treatment for vitamin A deficiency, her night vision improved. This suggests that dark adaptation is dependent on vitamin A in the blood which is regulated by the liver.
However, I’m now somewhat skeptical and curious myself considering these two studies were almost all I could find on this topic. If I have more time I’ll try digging deeper. For now though, I’ve edited my comment with links to the studies.
- Comment on Intelligent Design 2 days ago:
I was able to find two case studies showing direct links from vitamin A levels (and liver damage) to night blindness. I’ve edited my initial comment with the links to them.
- Comment on Intelligent Design 2 days ago:
Bro myopia is the least stupid part of our eye design problems. Our retinas are built entirely backwards for no other reason besides evolution making a mistake early on and then duct-taping over it until it was too entrenched to fix later.
If your retina was the right way around (like cephalopod eyes) you would have:
- No blind spots
- Higher fidelity vision even with the same number of receptors since the nerves and blood vessels wouldn’t interfere like they do now
- Much lower likelihood of retinal detachment, since you could attach it for real in the first place
- Possibility for better brightness/darkness resolution, since blood supply could be greater without affecting light passage
- Possibility for better resolution, because ganglion nerves could be packed more densely without affecting light passage
- The ability to regenerate cones and rods because you could, again, ACTUALLY HAVE SUPPORT CELLS WITHOUT BLOCKING LIGHT TO THE RETINA
Our eyes are built in the stupidest way possible.
Another fun fact: retinol is regenerated by your liver. Not your eyes, not some part of your brain, not some organ near your head like your thalamus which could probably get the job done if it tried, your fucking liver. Your eyes taking a while to adjust to the dark has basically nothing to do with your eyes; it’s because of the delay while your fucking liver adjusts to produce more retinol, dumps it into your vascular system, and waits for it to hopefully reach your eyes. Why are we built like this?!
- Comment on Thats a good question. 1 week ago:
A pointer to a pointer exists, but the pointer it points to no longer points to anything, because the thing it pointed to was moved. You still have the pointer to the pointer, but you get null from it instead of a memory address.
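A Rust sketch of the same situation, using Rc/Weak as stand-ins for the two levels of pointer (the names and values here are mine, just for illustration):

```rust
use std::rc::{Rc, Weak};

/// Pointer-to-a-pointer in safe-Rust terms: `weak` points at the data
/// owned by `strong`. After `strong` is dropped (the thing it pointed
/// to went away), the weak pointer itself still exists, but upgrading
/// it yields None instead of the value.
fn dangling_demo() -> bool {
    let strong = Rc::new(42);
    let weak: Weak<i32> = Rc::downgrade(&strong);
    drop(strong); // the pointee is gone
    weak.upgrade().is_none() // we still hold a pointer, but it's "null" now
}

fn main() {
    println!("weak pointer now dangles: {}", dangling_demo());
}
```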
- Comment on "make out" has an awful lot of meanings for two words that seemingly don't make any sense together 1 week ago:
May I draw your attention to the humble single word “set” which has 48 definitions?
- Comment on Nobody uses the white emojis 2 weeks ago:
¿Qué dice Juan? (“What does Juan say?”)
- Comment on Human-level AI is not inevitable. We have the power to change course 5 weeks ago:
Thanks, I almost didn’t post because it was an essay of a comment lol, glad you found it insightful
As for Wolfram Alpha, I’m definitely not an expert but I’d guess the reason it was good at math was that it would simply translate your problem from natural language into commands that could be sent to a math engine that would do the actual calculation.
So it basically acts like a language translator, but from typed-out math to a programming language for some advanced calculation program (like Wolfram Mathematica).
Again, this is just speculation because I’m a bit too tired to look into it rn, but it seems plausible since we had basic language translators online back then (I think…) and I’d imagine parsing written math is probably easier than natural language translation
- Comment on Human-level AI is not inevitable. We have the power to change course 5 weeks ago:
Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.
I loathe Python irrationally (and I guess I’m a masochist who likes to reinvent the wheel programming-wise lol) so I’ve written my own neural nets from scratch a few times.
Most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs. You use the difference between the desired outcome and actual outcome to calculate a change in weights that would minimize that error.
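Here’s the whole idea boiled down to one weight: gradient descent fitting y = w·x by minimizing squared error. This is a toy I’m sketching for illustration; a real net is this same loop with millions of weights and a much uglier gradient:

```rust
/// One-parameter gradient descent: fit y = w * x to data by
/// minimizing mean squared error. Same loop that trains a neural
/// net, just with a single weight instead of millions.
fn fit_slope(xs: &[f64], ys: &[f64], lr: f64, epochs: usize) -> f64 {
    let mut w = 0.0;
    for _ in 0..epochs {
        // gradient of MSE with respect to w: mean of 2 * (w*x - y) * x
        let grad: f64 = xs
            .iter()
            .zip(ys.iter())
            .map(|(x, y)| 2.0 * (w * x - y) * x)
            .sum::<f64>()
            / xs.len() as f64;
        w -= lr * grad; // step against the gradient to shrink the error
    }
    w
}

fn main() {
    // Data generated with a true slope of 3.0
    let xs = [1.0, 2.0, 3.0, 4.0];
    let ys = [3.0, 6.0, 9.0, 12.0];
    let w = fit_slope(&xs, &ys, 0.01, 500);
    println!("learned w ≈ {:.3}", w); // converges toward 3.0
}
```

Note that the loop only works because we know exactly what output we wanted (the ys), which is the point being made above.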
This has two major preventative issues for AGI: input size limits, and determinism.
The weight matrices are set for a certain number of inputs. Unfortunately you can’t just add a new unit of input and assume the weights will be nearly the same; instead you have to retrain the entire network. (The field that tackles this problem is called transfer learning, if you want to learn more.)
This input constraint is preventative of AGI because it means a network trained like this cannot have an input larger than a certain size. That’s a problem since the illusion of memory that LLMs like ChatGPT have comes from running the entire conversation back through the net. It’s also a problem from a size and training-time perspective, since increasing the input size dramatically increases basically everything else.
Point is, current models are only able to simulate memory by literally holding onto all the information and processing all of it for each new word which means there is a limit to its memory unless you retrain the entire net to know the answers you want. (And it’s slow af) Doesn’t sound like a mind to me…
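A toy sketch of that “illusion of memory” (here `fake_model` is a made-up stand-in that just counts the tokens it’s handed; a real model would run attention over all of them, which is where the cost explodes):

```rust
/// Stand-in for an LLM: it has no state between calls, so every new
/// reply has to reprocess the entire transcript. Here we just report
/// how many tokens the "model" had to look at.
fn fake_model(context: &[&str]) -> usize {
    context
        .iter()
        .map(|turn| turn.split_whitespace().count())
        .sum()
}

fn main() {
    let mut transcript: Vec<&str> = Vec::new();
    let turns = [
        "hello there",
        "tell me about eyes",
        "why is the retina backwards",
    ];
    // Each new turn re-feeds the whole conversation, so the work
    // per reply grows with conversation length.
    for turn in turns {
        transcript.push(turn);
        println!(
            "turn {}: processed {} tokens",
            transcript.len(),
            fake_model(&transcript)
        );
    }
}
```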
Now determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They literally are just a complicated predictive algorithm like linear regression. I’m dead serious. It’s basically regression just in a very high dimensional vector space.
ChatGPT does not think about its answer. It doesn’t have any sort of object identification or thought delineation because it doesn’t have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it’s off, you do some math to figure out what weight modifications would have led it to a better answer.
All these models do is what they were trained to do. They were trained to predict human responses, so yeah, it sounds pretty human. They were trained to reproduce answers on Stack Overflow and Reddit etc., so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren’t trained on, because those are similar enough to the questions they were trained on… but it’s not thinking. It isn’t doing anything. The program is just multiplying numbers that were previously set by an input to find the most likely next word.
This is why LLMs can’t do math. Because they don’t actually see the numbers, they don’t know what numbers are. They don’t know anything at all because they’re incapable of thought. Instead there are simply patterns in which certain numbers show up and the model gets trained on some of them but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently or just by surrounding it with different words because the model was never trained for that scenario.
Models can only “know” as much as what was fed into them, and hey, sometimes those patterns extend, but a lot of the time they don’t. And you can’t just say “you were wrong,” because the model’s weights are frozen at inference time (it’s incapable of changing from inputs alone). You have to train it with the correct response in mind to get it to “learn,” which again takes time and really isn’t learning or intelligence at all.
Now, there are some more exotic neural network architectures that could surpass these limitations.
Currently I’m experimenting with Spiking Neural Nets which are much more capable of transfer learning and more closely model biological neurons along with other cool features like being good with temporal changes in input.
However, there are significant obstacles with these networks and not as much research, because they only run well on specialized hardware (since they are meant to mimic biological neurons, which run in parallel) and you kind of have to train them slowly.
You can do some tricks to use gradient descent but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building the neuromorphic hardware for them).
SNNs with time based learning rules (typically some form of STDP which mimics Hebbian learning as per biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing weights) in real time. Capable as in “this could have discrete time dependent waves of continuous self modifying spike patterns which could theoretically be thoughts” not as in “we can make something that thinks.”
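For the curious, a pair-based STDP weight update looks something like this (exponential windows on the spike-time difference; the constants are illustrative, not from any particular model):

```rust
/// Pair-based STDP: if the presynaptic spike precedes the
/// postsynaptic one, strengthen the synapse (LTP); if it follows,
/// weaken it (LTD). Magnitude decays exponentially with the gap.
fn stdp_dw(t_pre: f64, t_post: f64) -> f64 {
    let a_plus = 0.1;   // potentiation amplitude
    let a_minus = 0.12; // depression amplitude
    let tau = 20.0;     // time constant (ms)
    let dt = t_post - t_pre;
    if dt > 0.0 {
        a_plus * (-dt / tau).exp()  // pre before post: potentiate
    } else {
        -a_minus * (dt / tau).exp() // post before pre: depress
    }
}

fn main() {
    // Pre fires 5 ms before post: weight goes up.
    println!("dw (causal):      {:+.4}", stdp_dw(100.0, 105.0));
    // Pre fires 5 ms after post: weight goes down.
    println!("dw (anti-causal): {:+.4}", stdp_dw(105.0, 100.0));
}
```

The key property is that the rule is local and runs continuously; no global error signal or labeled “right answer” is needed, which is why the weights can change in real time as the net runs.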
Like these neural nets are good with sensory input and that’s about as far as we’ve gotten (hyperbole but not by that much). But these networks are still fascinating, and they do help us test theories about how the human brain works so eventually maybe we’ll make a real intelligent being with them, but that day isn’t even on the horizon currently
In conclusion, we are not remotely close to AGI. Current models that seem to think are verifiably not thinking and are incapable of it from a structural standpoint. You cannot make an actual thinking machine using the current mainstream model architectures.
The closest alternative that might be able to do this (as far as I’m aware) is relatively untested and difficult to prototype (trust me I’m trying). Furthermore the requirements of learning and thinking largely prohibit the use of gradient descent or similar algorithms meaning training must be done on a much more rigorous and time consuming basis that is not economically favorable. Ergo, we’re not even all that motivated to move towards AGI territory.
Lying to say we are close to AGI when we aren’t at all close, however, is economically favorable which is why you get headlines like this.
- Comment on I'm not sure if I'm the stupidest smart person I know, or the smartest stupid person. 1 month ago:
Remember folks: being willing and able to admit when you are wrong is much more important and powerful than simply being right
- Comment on Evading suffering is _itself_ a form of suffering 1 month ago:
True, but in those cases you don’t want them to stop, unless of course, you are getting distracted by them and would like to stop in which case: suffering. Context is more important than raw qualia.
- Comment on Kakapos 1 month ago:
I do love that Aotearoa has incredible avian diversity.
On one hand, we have Kea parrots who are smarter than most of the human tourists they like pranking and stealing from.
And on the other side of the spectrum we have the kakapo: literally the dumbest bird in existence.
Such amazing biodiversity lol
- Comment on Evading suffering is _itself_ a form of suffering 1 month ago:
I’ve come to the conclusion that suffering is really just anything that invades your focus without your desire for it to happen.
Thinking about anything you would rather not think about is suffering. You get cut and your brain constantly reminds you of it because evolution is a bitch. Hatred, envy, anger, intrusive thoughts, headaches, itchy clothes, annoying noises in your environment, etc. Anything that steals your attention without your consent is suffering.
So if you’re so focused on avoiding suffering you aren’t able to focus on doing what you want then yep, suffering.
- Comment on UwU brat mathematician behavior 1 month ago:
I’m a mechanical engineering student with a math minor and I’m a switch so yeah, I’d take either side of this
- Comment on High Fashion 1 month ago:
You could always make a gigantic super absorbent polymer bead and wear that to a similar effect
- Comment on Most businesses lose authenticity as they grow. 1 month ago:
Capitalism forces businesses to enshittify as they grow
- Comment on Radiation is a literal Lovecraftian Monster 1 month ago:
Apart from “being summoned” yeah. No desire or consciousness just a thing that modifies everything around it by nature. It doesn’t care that it drives animals insane or turns them into monsters, because it’s probably not aware of what an animal is to begin with.
Also kinda coincidental that Color Out of Space makes plants bigger. Before we had better gene editing methods, scientists used radiation to trigger mutations in plants, attempting to find some mutations that, among other things, made the fruit bigger lol
- Comment on Most people's earliest memories are at around 3 or 4 years of age, which correlates with the age kids start asking "why" for everything. Kids start asking why when they become self-aware. 1 month ago:
My earliest memory (that I have a solid time estimate for) is from 2yo. It’s not a memory of questions though, I was a curious kid; it’s a memory of me and my older siblings coming up with stupid names for our soon to be born younger sibling.
So, my guess is that it’s more about trying to come up with your own thoughts and ideas and answers than it is about asking questions specifically.
- Comment on Everybody talks about beliefs like they're this big important thing. 1 month ago:
The word for established assumptions is “axioms”
Definitions are kind of the most fundamental axioms. Abstracting things helps us build with them and they’re true because you say they are.
We use axioms in models to derive new theorems/information. But that is often what makes us resist changing them. If you build your other assumptions on an axiom, you have to rethink all those assumptions or even throw them out when it gets proven wrong.
However, attachment to a belief, holding to an assumption even when it’s been proven wrong, is called “delusion” and yeah those beliefs tend to be the most destructive
- Comment on Everybody talks about beliefs like they're this big important thing. 1 month ago:
I think by cornerstone, they are referencing that beliefs are assumptions that form one’s model of the world.
You think by logically building on assumptions: “I remember putting leftovers in the fridge last night, so I don’t need to make dinner tonight.” You assume your memories are accurate (or accurate enough) and then build on other things you “know” to construct every thought.
Sights, sounds, and vibes are a different story. They are called qualia and the raw experience of them cannot be described.
Think of qualia like the raw data you collect from an experiment. Your worldview is the scientific model you’ve built to describe this data and it rests on both fundamental logic and the beliefs/theories you currently believe in.
Unfortunately people don’t like having to change their worldview. And when you’ve held a belief for long enough, it becomes foundational to many of your other assumptions. Some people would rather say reality is wrong than change their beliefs.
A belief that cannot be changed by evidence is called a “delusion,” in case you ever want to piss off a religious person who says “nothing can shake my faith” like it’s a good thing.
- Comment on Everybody talks about beliefs like they're this big important thing. 1 month ago:
If a belief is a model/theory/assumption that a person will not change regardless of evidence against it, it is by definition a delusion.
If a belief is an opinion, it is a personal statement. Statements like “Vim is the best IDE” are really conveying the information “I prefer Vim over all other IDEs,” which is a true statement.
If a belief is a hypothesis then the person holding it will accept if it ends up being wrong.
Only in the first and second cases do people usually place importance on their beliefs, and typically, only the first case leads people to harm others or themselves with no way to convince them to stop.
- Comment on YSK: NASA’s Moon landing relied on Nazi scientists — and a secret U.S. program brought them here 1 month ago:
Fun fact, my grandfather worked on the Saturn V and, according to my father, got in an argument with Von Braun at least once
I mean not fun because of working with Nazis, but fun because it’s interesting history
- Comment on You can do it. It's an easy one 2 months ago:
“I ate sigma pie and it was delicious!” Sounds like something that’d show up on my university’s YikYak, alluding to eating out a sorority chick from Sigma Pi lol
Idk if that’s a legitimate sorority, but I know that regardless of the sorority mentioned someone would reply something like “wait till you try a pi phi 😜” and/or someone would say you’re going to get an STD from that particular sorority
- Comment on The Purge 2 months ago:
No no no
Snack the keeps, booze the purge
That’s what they meant
- Comment on Who remembers alt.fan.tonya.harding.whack.whack.whack ? 2 months ago:
Lloyd Braun, I just wanted serenity
But you had to go testin’ me, gave me suicide tendencies