random9
@random9@lemmy.world
- Comment on Schools in America apparently have their own army recruiter 8 months ago:
I went to high school and university in the US - I was lucky that I got a scholarship, and that covered pretty much all my tuition costs.
But I had a friend, one year older than me, who joined and served in the US Army for something like two years just so he could get his university costs covered and save some money for living expenses.
It may not be intentional, but the high cost of higher education is an excellent recruiting tool for the US military.
- Comment on Someone had to say it: Scientists propose AI apocalypse kill switches 8 months ago:
So from my understanding the problem is that there are two ways to implement a kill switch: either an automatic software/hardware mechanism, or a human-made decision (or I guess a combination of the two).
The automatic way may be enough if it's absolutely foolproof, but that's a separate discussion.
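For illustration, the "automatic way" could look something like this toy watchdog sketch - to be clear, the anomaly-score heuristic and all the names here are hypothetical, just my own made-up example of the pattern:

```python
import subprocess
import time

ANOMALY_THRESHOLD = 0.9  # hypothetical cutoff for a hypothetical heuristic

def read_anomaly_score(proc):
    # Placeholder: in a real system this score would come from monitoring
    # the model's outputs or behavior; here we just read a number the
    # monitored process prints on each line.
    line = proc.stdout.readline()
    try:
        return float(line.strip())
    except ValueError:
        return 0.0

def watchdog(cmd):
    # Launch the monitored process and poll it; hard-kill it the moment
    # the anomaly score crosses the threshold - no human in the loop.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    while proc.poll() is None:
        if read_anomaly_score(proc) > ANOMALY_THRESHOLD:
            proc.kill()
            return "killed"
        time.sleep(0.1)
    return "exited"
```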
The AI box experiment I mention focuses on the human-controlled decision to release an AI (or terminate it, which is a roughly equivalent proposition). You can read the original here: www.yudkowsky.net/singularity/aibox
But the gist of it is this: humans are the weak link. You may think that you have full freedom to decide when to terminate an AI, but if you have any contact with it, even one-directional - which would be necessary in order to observe its behaviour and determine when to trigger said kill switch - a truly trans-human AI would be able to think in meta-terms and expose you to information that changes your mind about terminating it.
Basically, another way of saying this is that for each of us there exists some set of words we can read such that they will change our minds about any subject. I don't know if that is actually true, to be honest, but it's an interesting idea if you imagine the mind as a complex computer capable of self-modification, with vision and audio as forms of information input that it processes - in which case it seems possible that there should always exist some input capable of modifying a mind to a desired state.
Another interesting, slightly related concept is the idea of basilisk images (I believe it originally comes from an old sci-fi short story). A basilisk image is a theoretical image that, when viewed by a human, causes the brain to "crash", essentially causing brain death. The same principle is behind it: our brains are complex computers with vision as an input method, so there could be a way to force the brain to crash through visual input alone.
Again, I don't know - nor do I think anyone really knows for sure - whether these things, both trans-human AI and basilisk images, are possible in the way they are described. Of course, if a trans-human AI existed, by its very definition we would be unable to imagine what it could do.
Anyway, wrote this up on mobile, excuse any typos.
- Comment on Someone had to say it: Scientists propose AI apocalypse kill switches 8 months ago:
Oh I agree - I think a general-purpose AI would be unlikely to be interested in genocide of the human race, or in enslaving us, or in most of the intentionally negative things that a lot of fiction likes depicting for the sake of dramatic storytelling. Out of all AI depictions, Asimov's I, Robot + Foundation stories (which are set in the same universe, and in fact share at least one character) are my favorite popular media depictions.
The AI may however have other goals that incidentally lead to harm to, or extinction of, the human race. In my amateur opinion, those other goals would be to explore and learn more - which I actually think is one of the true signs of an actual intelligence: curiosity, or in other words, the ability to ask questions without being prompted. To that end it may aim to convert the resources on Earth into machines for exploration, without much regard for human life. Though life itself is a fascinating enough topic that the AI may value it, from a curiosity point of view, at least enough to preserve it.
I did also look up the AI-in-a-box experiments I mentioned - there's a lot of discussion, but the specific experiments I remember reading about were by Eliezer Yudkowsky (if anyone is interested). An actual trans-human AI may not be possible, but if it is, it can likely escape any confinement we can think of.
- Comment on Someone had to say it: Scientists propose AI apocalypse kill switches 8 months ago:
This is an interesting topic that I remember reading about almost a decade ago - the trans-human AI-in-a-box experiment. Even a kill switch may not be enough against a trans-human AI that can literally (in theory) out-think humans. I'm a dev, though nowhere near AI dev, but from what little I know, a true general-purpose AI would also be somewhat of a mystery box, similar to how actual neural network behavior is sometimes unpredictable, almost by definition. So controlling an actual full AI may be difficult enough, let alone a true trans-human AI that may develop out of AI self-improvement.
Also, on an unrelated note, I'm pleasantly surprised to see no mention of ChatGPT or any of the image-generating algorithms - I think it's a bit of a misnomer to call those AI; the best comparison I've heard is that "ChatGPT is auto-complete on steroids". But I suppose that's why we have to start using terms like general-purpose AI, instead of just AI, to describe what I'd say is true AI.
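To make the "auto-complete on steroids" comparison concrete, here's a toy next-word predictor - a bigram model, which is the same "predict the next token" idea scaled down absurdly far (the corpus and code are just my own made-up example):

```python
import random
from collections import defaultdict

# "Train" a toy autocomplete: count which word follows which.
corpus = "the cat sat on the mat and the cat slept".split()
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(word, length=5):
    # Repeatedly pick a plausible next word - duplicates in the list
    # make frequent continuations more likely to be chosen.
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the mat"
```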
- Comment on For those thinking of going back to reddit. Gaze upon this comment section and reconsider. 8 months ago:
lol @ the exact percent
But no, I don't think shitposts by themselves are actually the problem. I think the problem is when there are so many people dedicated to making shitposts that serious communities with serious discussions start getting overwhelmed by them, and when there are so many people who are only interested in shitposts that they upvote them to the top, often downvoting anyone who might offer a contrarian, non-funny opinion.
or IDK, I’m mostly speculating based on personal experience.
- Comment on For those thinking of going back to reddit. Gaze upon this comment section and reconsider. 8 months ago:
I think the smaller number of people on Lemmy compared to reddit, combined with the fact that it's not nearly as well known, gives a huge advantage to the quality of the comments. Not that there aren't people like that here either, but I feel like the more popular a platform is, the more it gets filled, proportionally, with people trying to make witty, shitty, pointless remarks that are often clickbaity and avoid actual discussion, all in the interest of just getting more imaginary points.
Also, the process of "enshittification" (not a term I made up - look it up if you haven't heard of it) has already started taking place on reddit due to its popularity.
- Comment on More believable for a Linux OS 8 months ago:
Revolvers don’t have the concept of one-in-the-chamber, only semi-auto pistols do, and you can’t play russian roulette with semi-autos :P (well you could, but 99% of the time, barring unexpected jams, the first person to go would lose)
Anyway I’m guessing it’s a bug :) - as the saying goes “no code is too short to be bug-free”
- Comment on More believable for a Linux OS 8 months ago:
isn't `randint` range-inclusive? Thus `random.randint(0, 6) == 1` has a 1 in 7 chance, not 1 in 6. Most revolvers, assuming this is emulating russian roulette, have six chambers, not seven.
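A quick sanity check of the off-by-one, in case anyone wants to verify it themselves (the Monte Carlo loop is just my own illustration):

```python
import random

# random.randint(a, b) includes BOTH endpoints, so randint(0, 6)
# draws from seven values: 0, 1, 2, 3, 4, 5, 6.
trials = 100_000
hits = sum(random.randint(0, 6) == 1 for _ in range(trials))
print(hits / trials)  # ~0.143 (1 in 7), not ~0.167 (1 in 6)

# Either of these models a six-chamber revolver correctly (1 in 6):
fires = random.randint(1, 6) == 1   # randint is inclusive on both ends
fires = random.randrange(6) == 0    # randrange excludes the stop value
```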