If it makes you feel any better, my bet is still on nuclear holocaust or complete ecological collapse resulting from global warming to be our undoing. Given a choice, I’d prefer nuclear holocaust. Feels less protracted. Worst option is weaponized microbes or antibiotic resistant bacteria. That’ll take foreeeever.
Comment on AI companies are violating a basic social contract of the web and ignoring robots.txt
masonlee@lemmy.world 10 months ago
Also, by the way, violating a basic social contract to not work towards triggering an intelligence explosion that will likely replace all biological life on Earth with computronium, but who’s counting? :)
Gullible@sh.itjust.works 10 months ago
masonlee@lemmy.world 10 months ago
100%. Autopoietic computronium would be a “best case” outcome, if Earth is lucky! More likely we don’t even get that before something fizzles. “The Vulnerable World Hypothesis” is a good paper to read.
lunarul@lemmy.world 10 months ago
That would be a danger if real AI existed. We are very far away from that and what is being called “AI” today (which is advanced ML) is not the path to actual AI. So don’t worry, we’re not heading for the singularity.
masonlee@lemmy.world 10 months ago
I request sources :)
lunarul@lemmy.world 10 months ago
www.lifewire.com/strong-ai-vs-weak-ai-7508012
Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, strong AI examples exist in sci-fi movies
Weak AI is easily identified by its limitations, but strong AI remains theoretical since it should have few (if any) limitations.
…wikipedia.org/…/Artificial_general_intelligence
As of 2023, complete forms of AGI remain speculative.
Boucher, Philip (March 2019). How artificial intelligence works
Today’s AI is powerful and useful, but remains far from speculated AGI or ASI.
www.itu.int/en/journal/001/…/itu2018-9.pdf
AGI represents a level of power that remains firmly in the realm of speculative fiction as on date
masonlee@lemmy.world 10 months ago
Ah, I understand you now. You don’t believe we’re close to AGI. I don’t know what to tell you. We’re moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You’ve seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.
glukoza@lemmy.dbzer0.com 10 months ago
Ah, AI doesn’t pose a danger in that way. Its danger is in replacing jobs, people getting fired because of AI, etc.
Crikeste@lemm.ee 10 months ago
Those are dangers of capitalism, not AI.
glukoza@lemmy.dbzer0.com 10 months ago
Fair point, but AI is part of it; it exists within the capitalist system. This AI Singularity apocalypse is 99% not gonna happen, but AI within capitalism will affect us badly.
lunarul@lemmy.world 10 months ago
All progress comes with old jobs becoming obsolete and new jobs being created. It’s just natural.
But AI is not going to replace any skilled professionals soon. It’s a great tool to add to a professional’s arsenal, but non-professionals who use it to completely replace hiring a professional will get what they pay for (and those people would never have actually paid for a skilled professional in the first place; they’d have hired the cheapest outsourced wannabe they could find)
glukoza@lemmy.dbzer0.com 10 months ago
It has replaced content writers, and is replacing digital artists and programmers. In a sense, companies fire the inexperienced ones because AI speeds up those with more experience.
lunarul@lemmy.world 10 months ago
Any type of content generated by AI should be reviewed and polished by a professional. If you’re putting raw AI output out there directly then you don’t care enough about the quality of your product.
For example, there are tons of nonsensical articles on the internet that were obviously generated by AI, whose sole purpose is to crowd search results and generate traffic. The content writers those replaced were paid $1/article or less (I work in the freelancing business and I know these types of jobs) — not people with any actual training in content writing.
But besides the tons of prompt-crafting and other similar AI support jobs now flooding the market, there’s also huge investment in hiring highly skilled engineers to launch various AI-related products while the hype is high.
So overall a ton of badly paid jobs were lost and a lot of better paid jobs were created.
Umbraveil@lemmy.world 10 months ago
Seems relevant.
masonlee@lemmy.world 10 months ago
Your worry at least has possible solutions, such as a global VAT funding UBI.
glukoza@lemmy.dbzer0.com 10 months ago
Yeah, I’m not that much for UBI, and I don’t see anyone working towards a global VAT. My point was that the worry about AI destroying humanity is not realistic; it’s just sci-fi.
masonlee@lemmy.world 10 months ago
Seven years ago I would have told you that GPT-4 was sci-fi, and I expect you would have said the same, as would most every AI researcher. The deep learning revolution came as a shock to most. We don’t know when the next breakthrough towards agentification will come, but given the funding now, we should expect it soon. Anyways, if you’re ever interested to learn more about unsolved fundamental AI safety problems, the book “Human Compatible” by Stuart Russell is excellent. Also “Uncontrollable” by Darren McKee just came out (I haven’t read it yet) and is said to be a great introduction to the bigger fundamental risks. A lot to think about; just saying I wouldn’t be quick to dismiss it. Cheers.
tiltinyall@lemmy.org 10 months ago
I remember early Zuckerberg comments that put me onto just how douchey corporations could be about exploiting a new resource.
frostysauce@lemmy.world 10 months ago
I don’t think glorified predictive text is posing any real danger to all life on Earth.
MataVatnik@lemmy.world 10 months ago
Until we weave consciousness with machines we should be good.