Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)
That's why they just removed the military limitations in their terms of service I guess...
Submitted 10 months ago by MaxVoltage@lemmy.world to technology@lemmy.world
https://www.cnn.com/2024/01/18/tech/davos-sam-altman-ai/index.html
I also want to sell my shit for every purpose but take zero responsibility for consequences.
Considering what we’ve decided to call AI can’t actually make decisions, that’s a no-brainer.
The “AI” term means humans are the no-brainers
Shouldn’t, but there’s absolutely nothing stopping it, and lazy tech companies absolutely will. I mean we live in a world where Boeing built a plane that couldn’t fly straight so they tried to fix it with software. The tech will be abused so long as people are greedy.
So long as people are rewarded for being greedy. Greedy and awful people will always exist, but the issue is in allowing them to control how things are run.
More than just that, they’re shielded from repercussions. The execs involved with ignoring all the safety concerns should be in jail right now for manslaughter. They knew better and gambled with other people’s lives.
They fixed it with software and then charged extra for the software safety feature. It wasn’t until the planes started falling out of the sky that they decided to graciously offer it for free.
Has anyone checked on the sister?
OpenAI went from interesting to horrifying so quickly, I just can’t look.
The only difference between a beloved tech mogul and a deservedly hated one is time.
People still like Steve Jobs.
Ugh. There’s time yet.
OpenAI went from an interesting and insightful company to a horrible and weird one in very little time.
People only thought it was the former before they actually learned anything about them. They were always this way.
Remember when they were saying GPT-2 was too dangerous to release because people might use it to create fake news or articles about topics people commonly Google?
Hah, good times.
I’m tired of dopey white men making the world so much worse.
AI will be used to increase shareholder dividends. If your company just happens to involve healthcare, warfare, etc., and AI makes decisions to maximize profit, then you’re just collateral damage with no human to blame. Sorry your husband died of cancer; the computer did it.
Yup, my job sent us to an AI/ML training program from a top cloud computing provider, and there were a few hospital execs there too.
They were absolutely giddy about being able to use it to deny unprofitable medical care.
AI shouldn’t make any decisions
Agreed, but also one doomsday-prepping capitalist shouldn’t be making AI decisions. If only there was some kind of board that would provide safeguards that ensured AI was developed for the benefit of humanity rather than profit…
I am sure Zuckerberg is also claiming that they are not making any life-or-death decisions. Let’s see you in a couple of years when the military gets involved with your shit.
Ummm… no fucking shit. Who was thinking that was a good idea?
probably about half of the executives this guy talks to
So just like shitty biased algorithms shouldn’t be making life changing decisions on folks’ employability, loan approvals, which areas get more/tougher policing, etc. I like stating obvious things, too. A robot pulling the trigger isn’t the only “life-or-death” choice that will be (is!) automated.
This is exactly what AI will do in the near future (not dystopia).
But it should drive cars? Operate strike drones? Manage infrastructure like power grids and the water supply? Forecast tsunamis?
Too little too late, Sam. 
Yes on everything but drone strikes.
A computer would be better than humans in those scenarios. Especially driving cars, which humans are absolutely awful at.
So if it looks like it’s going to crash, should it automatically turn off and go “Lol good luck” to the driver now suddenly in charge of the life-and-death situation?
Have you seen a Tesla drive itself? Never mind ethical dilemmas; they can barely navigate downtown without hitting pedestrians.
But it should drive cars?
Oh, definitely. Humans are shit at that. Get bored when we have to concentrate for 10 minutes.
As advanced cruise control, yes. Strike drones, no; though in practice it doesn’t change a thing, since humans can bomb civilians just fine themselves. Yes and yes.
If we’re not talking about LLMs (which are basically computer slop made up of books and sites, pretending to be a brain), then using a statistical-analysis tool to crunch a shitload of data (optical, acoustic, and mechanical data to assist driving, or seismic data to forecast tsunamis) is a bit of a no-brainer.
Fair enough. I do think AI will become a valuable tool for the doctors, etc., who do make those decisions.
Using AI to base a decision on is different from letting it make decisions.
And yet it persuades people to choose for it.
I mean he can have his opinion on this, I do personally agree, but it’s way too late to try and stop now.
We’ve already got automated drones picking targets and killing people in the Middle East, and last I heard the newest set of US jets has AI integrated so heavily that they can opt to kill their operator in order to perform objectives.
that they can opt to kill their operator in order to perform objectives
Source?
(spoiler: it went from “this is true” to “this could be true”)
The Air Force denies an actual casualty and claims it was “only a simulation,” which is still problematic, assuming it stopped at a simulation: theguardian.com/…/us-military-drone-ai-killed-ope…
You didn’t ask for it but these are the drones that pick their own targets: npr.org/…/a-u-n-report-suggests-libya-saw-the-fir…
We've been putting our lives in the hands of automated, programmed decisions for decades now, if y'all haven't noticed. The traffic light that keeps you from getting T-boned. The autopilot that keeps your plane straight and level and takes workload off the pilots. The scissor lift that prevents you from raising the platform if it's too tilted.
This is the best summary I could come up with:
ChatGPT is one of several generative AI systems that can create content in response to user prompts and which experts say could transform the global economy.
But there are also dystopian fears that AI could destroy humanity or, at least, lead to widespread job losses.
AI is a major focus of this year’s gathering in Davos, with multiple sessions exploring the impact of the technology on society, jobs and the broader economy.
In a report Sunday, the International Monetary Fund predicted that AI will affect almost 40% of jobs around the world, “replacing some and complementing others,” but potentially worsening income inequality overall.
Speaking on the same panel as Altman, moderated by CNN’s Fareed Zakaria, Salesforce CEO Marc Benioff said AI was not at a point of replacing human beings but rather augmenting them.
As an example, Benioff cited a Gucci call center in Milan that saw revenue and productivity surge after workers started using Salesforce’s AI software in their interactions with customers.
The original article contains 443 words, the summary contains 163 words. Saved 63%. I’m a bot and I’m open source!
When there’s no human to blame because the robot made the decision, the CEO should carry all the blame.
LWD@lemm.ee 10 months ago
OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare”
Good job, Sam Altman. Saying one thing and doing the opposite. There are already enough conspiracy theories about Davos running the world, and the creepy eye-scanning orb guy isn’t helping.
hai@lemmy.ml 10 months ago
What a sentence to sum up 2023 with.
ItsAFake@lemmus.org 10 months ago
That last line kinda creeps me out.
LWD@lemm.ee 10 months ago
The whole thing is creepy. The name, the orb, scanning people’s eyes with it, specifically targeting poor Kenyan people (the “unbanked”) like a literal sci-fi villain.