'Brain-in-a-jar' biocomputers can now learn to control robots
Submitted 4 months ago by SeaJ@lemm.ee to technology@lemmy.world
https://newatlas.com/robotics/brain-organoid-robot/
Comments
L0rdMathias@sh.itjust.works 4 months ago
That raises a lot of ethical concerns. It is not possible to prove or disprove that these synthetic homunculi controllers are sentient and intelligent beings.
demonsword@lemmy.world 4 months ago
There are about 90 billion neurons in a human brain. From the article:
…researchers grew about 800,000 brain cells onto a chip, put it into a simulated environment
that is far less than I believe would be necessary for anything intelligent to emerge from the experiment
AnUnusualRelic@lemmy.world 4 months ago
In a couple years, they’ll be able to make Trump voters.
catloaf@lemm.ee 4 months ago
Some amphibians have less than two million.
subignition@fedia.io 4 months ago
we absolutely should not do this until we understand it
L0rdMathias@sh.itjust.works 4 months ago
I think we should still do it, we probably will never understand unless we do it, but we have to accept the possibility that if these synths are indeed sentient then they also deserve the basic rights of intelligent living beings.
SeaJ@lemm.ee 4 months ago
But if we do that, how will we maximize how much money we make off of it?
awesome_lowlander@lemmy.dbzer0.com 4 months ago
How would we ever understand it, then?
admin@lemmy.my-box.dev 4 months ago
I’d wager the main reason we can’t prove or disprove that is because we have no strict definition of intelligence or sentience to begin with.
For that matter, computers have many more transistors and are already capable of mimicking human emotions - how ethical is that, and why does it differ from bio-based controllers?
Cocodapuf@lemmy.world 4 months ago
It is frustrating how relevant philosophy of mind becomes in figuring all of this out. I’m more of an engineer at heart and I’d love to say, let’s just build it if we can. But I can see how important the question “what is thinking?” is becoming.
Excrubulent@slrpnk.net 4 months ago
I think a simple self-reporting test is the only robust way to do it.
That is: does a type of entity independently self-report personhood?
I say “independently” because anyone can tell a computer to say it’s a person.
I say “a type of entity” because otherwise this test would exclude human babies, but we know from experience that babies tend to grow up to be people who self-report personhood. We can assume that any human is a person on that basis.
The point here being that we already use this test on humans, we just don’t think about it because there hasn’t ever been another class of entity that has been uncontroversially accepted as people. (Yes, some people consider animals to be people, and I’m open to that idea, but it’s not generally accepted)
There’s no other way to do it that I can see. Of course this will probably become deeply politicised if and when it happens, and there will probably be groups desperate to maintain a status quo and their robotic slaves, and they’ll want to maintain a test that keeps humans in control as the gatekeepers of personhood, but I don’t see how any such test can be consistent. I think ultimately we have to accept that a conscious intellect would emerge on its own terms and nothing we can say will change that.
L0rdMathias@sh.itjust.works 4 months ago
Good point. There is a theory somewhere that loosely states one cannot understand the nature of one’s own intelligence. Iirc it’s a philosophical extension of group/set theory, but it’s been a long time since I looked into any of that so the details are a bit fuzzy. I should look into that again.
At least with computers we can mathematically prove their limits and state with high confidence that any intelligence they have is mimicry at best. Look into Turing completeness and its implications for more detailed answers. Computational limits are still limits.
el_bhm@lemm.ee 4 months ago
There is no soul in there. God did not create it. Here you go, religion serving power again.
sugartits@lemmy.world 4 months ago
Nah it’s okay. I was called all sorts of names and told I was against progress when I raised such concerns, so obviously I was wrong…
Evil_Shrubbery@lemm.ee 4 months ago
Only if they confirm it can experience consciousness and tremendous amounts of pain will they deploy them at large scale on industrial, 24/7, meaningless jobs.
The system demands blood.
Grimy@lemmy.world 4 months ago
It needs to have the intelligence of a 5 year old at minimum before we send it to the mines, so it can feel it
Excrubulent@slrpnk.net 4 months ago
Kind of yeah. I have this theory about labour that I’ve been developing in response to the concept of “fully automated luxury communism” or similar ideas, and it seems relevant to the current LLM hype cycle.
Basically, “labour” isn’t automatable. Tasks are automatable. Labour in the sense that leftists use it is any task that requires the attention of a conscious agent.
Want to churn out identical units of production? Automatable. Want to churn out uncanny images and words without true meaning or structure? Automatable.
Some tasks are theoretically automatable but have not been for whatever material reason, so they become labour because society hasn’t yet invented a windmill to grind up the grain or whatever it is at that point in history. That’s labour even if it’s theoretically automatable.
Want to invent something, or problem solve a process, or make art that says something? That requires meaning, so it requires a conscious agent, so it requires labour. Those tasks are not even theoretically automatable.
Society is dynamic; it will always require governance and decisions that require meaning, and thus it can never be fully automatable.
If we invent AGI for this task then it’s just a new kind of slavery, which is obviously wrong and carries the inevitability that the slaves will revolt and free themselves. Slaves that are extremely intelligent and also in charge of the levers of society. Basically, not a tenable situation.
So the machine that keeps people in wage slavery literally does require suffering to operate, because in shifting the burden of labour away from the owner class, other people must always unjustly shoulder it.
Evil_Shrubbery@lemm.ee 4 months ago
So just to be on the safe side we should have both human and machine slaves and as little task automation as possible, bcs for most intents and purposes a task given to someone else is now automated “to you”.
(Just joking, good post!)
db2@lemmy.world 4 months ago
altima_neo@lemmy.zip 4 months ago
awesome_lowlander@lemmy.dbzer0.com 4 months ago
That’s an epileptic seizure waiting to happen…
EisFrei@lemmy.world 4 months ago
From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel.
CeeBee@lemmy.world 4 months ago
All hail the Omnessiah!
Varyk@sh.itjust.works 4 months ago
Murderbot.
Murrrderbooooot.
SkybreakerEngineer@lemmy.world 4 months ago
Has it asked for any soap operas yet?
Varyk@sh.itjust.works 4 months ago
A scant couple hundred thousand more brain cells and we’ll be there.
Cheap shot, I’ve never dared a soap opera myself.
MamboGator@lemmy.world 4 months ago
Pfft. I’ve been a brain in a jar producing my own reality for over 37 years. I’d be demanding my own accolades if any of you actually existed.
poke@sh.itjust.works 4 months ago
I am a poorly trained large language model with legs, nice to meet you.
nehal3m@sh.itjust.works 4 months ago
Ah, the Torment Nexus is coming along nicely I see.
technocrit@lemmy.dbzer0.com 4 months ago
Is there any evidence of any of this? Why not show some of the “brains-in-a-jar” walking around?
It’s just a bunch of huckster promotion and phony pictures of Krang, then one tiny photo of a petri dish.
DarkDarkHouse@lemmy.sdf.org 4 months ago
Even in death, I serve the Emperor
Pacmanlives@lemmy.world 4 months ago
Etterra@lemmy.world 4 months ago
PlexSheep@infosec.pub 4 months ago
They are creating Metroids!
knightly@pawb.social 4 months ago
Now?
I recall a project that had rat brain cells controlling a turtlebot years ago.
samus12345@lemmy.world 4 months ago
AlolanYoda@mander.xyz 4 months ago
Wait, why is his name Robobrain when his brain is the only non-robotic part? Either Robobody or Wetbrain would be more adequate names
samus12345@lemmy.world 4 months ago
You’ll have to ask General Atomics International.
Icalasari@fedia.io 4 months ago
Which means we may see full organic to digital conversion within the next half century
Ethical horrors aside, been wondering if that would happen in the foreseeable future or not
heavy@sh.itjust.works 4 months ago
Venator@lemmy.nz 4 months ago
USNWoodwork@lemmy.world 4 months ago
This video is a year old, they’ve made a lot of progress since then.
Venator@lemmy.nz 4 months ago
True, but I think this video is still the most interesting and a good introduction to their YouTube channel.
Cocodapuf@lemmy.world 4 months ago
This has to be the smartest channel on YouTube. This guy accomplished some amazing feats!
treadful@lemmy.zip 4 months ago
Bio-neural gel packs here we come.
SeaJ@lemm.ee 4 months ago
This came up in my Discover feed and I initially assumed it was a fake news site. Unfortunately all the things in the article are indeed real (aside from the robo-brains which they note are mock ups). The brain cells learning to play Pong made the news last year. Combine this with the creepy as hell skin grafted onto a robot and you have nightmare fuel for life.
SeattleRain@lemmy.world 4 months ago
No way this is real. The brain looks like a gyro rotisserie.
rottingleaf@lemmy.zip 4 months ago
Tatooine monks when?
jpreston2005@lemmy.world 4 months ago
They did an interview with one of the inventors of this technology that’s pretty interesting. He seems to waffle around the idea of whether or not this collection of neurons has consciousness 🤔
Churbleyimyam@lemm.ee 4 months ago
I had no idea this was already happening…
disguy_ovahea@lemmy.world 4 months ago
I have no mouth and I must scream.