This is the technology worth trillions of dollars huh
Lol @ these fucking losers who think AI is the current answer to any problems
Submitted 6 months ago by HarkMahlberg@kbin.earth to technology@lemmy.world
https://media.kbin.earth/c5/38/c538e178af17fa0c334cad0916ef9eb70c2e1829354eef4f2ce05bd53aa1f4be.jpg
Third time’s the charm! They have to keep the grift going after Blockchain and NFT failed with the general public.
@arararagi Don't forget Metaverse, they took a fuckin bath on that.
GitLab Enterprise somewhat recently added support for Amazon Q (based on claude) through an interface they call “GitLab Duo”. I needed to look up something in the GitLab docs, but thought I’d ask Duo/Q instead (the UI has this big button in the top left of every screen to bring up Duo to chat with Q):
(Paraphrasing…)
ME: How do I do X with Amazon Q in GitLab?
Q: Open the Amazon Q menu in the GitLab UI and select the appropriate option.
ME: [:looks for the non-existent menu:]
ME: Where in the UI do I find this menu?
Q: My last response was incorrect. There is no Amazon Q button in GitLab. In fact, there is no integration between GitLab and Amazon Q at all.
ME: [:facepalm:]
Stop using Google search, easy as that! I use duckduckgo and I have turned off AI prompts.
So this is the terminator consciousness so many people are scared will kill us all…
It ripped off this famous poem in the process:
With enough duct tape and chewed up bubble gum, surely this will lead to artificial general intelligence and the singularity! Any day now.
Hurry MacGruber! We’re almost out of…BOOM!
I don’t think this gets nearly enough visibility: www.academ-ai.info
Papers in peer-reviewed journals with (extremely strong) evidence of AI shenanigans.
Thanks for sharing! I clicked on it with cynicism around how easily we could detect AI usage with confidence vs. risking making false allegations, but every single example on their homepage is super clear and I have no doubts - I’m impressed! (and disappointed)
Yup. I had exactly the same trepidation, and then it was all like “As an AI model, I don’t have access to the data you requested, however here are some examples of…”
I would estimate that Google’s AI is helpful and correct about 7% of the time.
Connecdicud.
I’d rather manually search for info
They took money from cancer research programs to fund this.
Well as long as we still have enough money to buy weapons for that one particular filthy country in the middle east, we’re fine.
only cancer patients benefit from cancer research, CEOs benefit from AI
Tbf, cancer patients benefit from AI too, though a completely different type that’s not really related to the LLM chatbot/AI girlfriend technology used here.
After we pump another hundred trillion dollars and half the electricity generated globally into AI you’re going to feel pretty foolish for this comment.
Just a couple billion more parameters, bro, I swear, it will replace all the workers
✅ Colorado
✅ Connedicut
✅ Delaware
❌ District of Columbia (on a technicality)
✅ Florida
But not
❌ I’aho
❌ Iniana
❌ Marylan
❌ Nevaa
❌ North Akota
❌ Rhoe Islan
❌ South Akota
Gosh tier comment.
You just described most of my post history.
Everyone knows it’s properly spelled “I, the ho” not Idaho. That’s why it didn’t make the list.
Connedicut
You mean Connecdicud.
I would assume it uses a different random seed for every query. Probably fixed sometimes, not fixed other times.
What about Our Kansas?
Just checked, it sure does say that!
Blows my mind people pay money for wrong answers.
I get the sentiment behind this post, and it’s almost always funny when LLMs are such dumbasses. But this is not a good argument against the technology. It is akin to a climate change denier arguing: “Look! It snowed today, climate change is so dumb, huh?”
I get the sentiment behind this post, and it's almost always funny when LLMs are such dumbasses. But this is not a good argument against the technology.
It's a pretty good argument against the technology, at least as it currently stands. This was a trivial question where anybody with basic reading ability can see it's just completely wrong. The problem comes when you ask it a question you don't already know the answer to and can't easily check, and it gives equally wrong answers.
It’s not worth the environmental impact
You do know that AI is fast becoming (if it isn’t already) a leading cause of climate change?
Yes, I know it has an impact (so does everything), though not as big as you make it seem. When you divide it out to calculate the personal impact, it is way lower than a huge number of other things. I agree that we need to address climate change, but I don’t believe this should be the main focus.
Also, every individual should be able to choose how they spend their “carbon allocation”. Personally, I don’t eat meat, I never fly, I don’t own a car and do everything by bike and train, and my house is carbon negative (building it actually had a negative carbon footprint), which was a huge sacrifice: I compromised on a way, way smaller house for way more debt than if I had built a cheap standard house (and of course I’m in debt for decades). LLMs make me more efficient at my job, so I think I can afford the carbon footprint that comes with them, which, as I said, is not as big per individual as you make it appear.
I understand that hanging out on Lemmy makes it seem like AI/LLMs are the worst thing that has ever happened to mankind, but they’re really not. There are lots of issues with them, sure, but there is worse stuff to worry about.
I want to finish by saying that I DO support your effort to minimize its impact. What you are doing overall is important and necessary, but I think you should revise the individual argument you put up against LLMs, because this one is not great.
While the environmental impact of AI is absolutely horrible, I don’t think it is even in the top 10 of industries. Meat production, car transportation, airplanes, plastic products, etc. are all much worse.
The problem is AI is absolutely useless for how big its climate impact is. The other industries at least provide value.
AI writes code for me. It makes dumbass mistakes that compilers automatically catch. It takes three or four rounds to correct a lot of random problems that crop up. Above all else, it’s got limited capacity - projects beyond a couple thousand lines of code have to be carefully structured and spoonfed to it - a lot like working with junior developers. However: it’s significantly faster than Googling for the information needed to write the code like I have been doing for the last 20 years, it does produce good sample code (if you give it good prompts), and it’s way less frustrating and slow to work with than a room full of junior developers.
That’s not saying we fire the junior developers, just that their learning specializations will probably be very different from the ones I was learning 20 years ago, just as those were very different than the ones programmers used 40 and 60 years ago.
I agree, Cursor and other IDE integrations have been a game changer. They made a certain range of problems we used to have in software dev way easier. And for easy code, like prototyping or inconsequential testing, it’s so, so fast. What I found is that it’s particularly efficient at helping you do stuff you would have been able to do alone and can check once it’s done. Be careful when asking about stuff you aren’t familiar with, though, because it will comfortably lead you toward a mistake that wastes your time.
Though one thing I have to say: I’m very annoyed by its constant agreeing with what I say, and enabling me when I’m doing dumb shit. I wish it would challenge me more and tell me when I’m an idiot.
“Yes you are totally right”, “This is a very common issue that everybody has”, “What a great and insightful question”… I’m so tired of this BS.
Listen, we just have to boil the ocean five more times.
Then it will hallucinate slightly less.
Or more. There’s no way to be sure since it’s probabilistic.
If you want to get irate about energy usage, shut off your HVAC and open the windows.
“This is the technology worth trillions of dollars”
You can make anything fly high in the sky with enough helium, just not for long. (Welcome to the present day Tech Stock Market)
Bubbles and crashes aren’t a bug in the financial markets, they’re a feature. There are whole legions of investors and analysts who depend on them.
We’re turfing out students by the tens on academic misconduct. They are handing in papers with references that clearly state “generated by Chat GPT”. Lazy idiots.
Huh that actually does sound like a good use-case of LLMs. Making it easier to weed out cheaters.
This is why invisible watermarking of AI-generated content is likely to be so effective. Even primitive watermarks like file metadata. It’s not hard for anyone with technical knowledge to remove, but the thing with AI-generated content is that anyone who dishonestly uses it when they are not supposed to is probably also too lazy to go through the motions of removing the watermarking.
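To give a sense of how primitive such a mark can be: here's a toy sketch of one well-known trick, hiding bits in zero-width Unicode characters appended to the text. This is not any vendor's actual scheme, just an illustration of how trivial it is to add such a mark (and, equally, to strip it).

```python
# Toy invisible text watermark using zero-width characters.
# NOT any real vendor's scheme - just an illustration.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(text, tag):
    """Append the tag, encoded as invisible zero-width bits."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    mark = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + mark  # renders identically to the original text

def extract(text):
    """Recover the hidden tag by decoding the zero-width bits."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

marked = embed("My totally original essay.", "ai")
print(extract(marked))  # 'ai'
print(marked == "My totally original essay.")  # False, though it looks identical
```

Stripping it is a one-liner (filter out the zero-width characters), which is exactly the point: it only catches people too lazy to try.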
Couldn’t students just generate a paper with ChatGPT, open the two windows side by side, and then type it out in a Word document?
if you are going to do all that, just do the research and learn something.
Connedicut.
Close. We natives pronounce it ‘coe ned eh kit’
So does everyone else
Clickbait post that cherry-picks bad output to say a certain technology has no potential, because the poster thinks he’s smarter than everybody else with 4+ years of higher education.
It doesn’t have the potential they market it to have, and to be useful in all the human-replacing ways they claim it is.
That’s what is bad about it.
Well, for anyone who knows a bit about how LLMs work, it’s pretty obvious why LLMs struggle with identifying the letters in words.
Well go on…
Which State contains 狄? They use a different alphabet, so understanding ours is ridiculous.
They don’t look at it letter by letter but in tokens, which are generated automatically based on occurrence. So while ‘z’ could be its own token, ‘ne’ or even ‘the’ could be treated as a single token vector. Of course, ‘e’ would still be a separate token when it occurs in isolation. You could even have ‘le’ and ‘let’ as separate tokens, afaik. And each token is just a vector of numbers, like 300 or 1,000 numbers that represent that token in a vector space. So ‘de’ and ‘e’ could be completely different and dissimilar vectors.
so ‘delaware’ could look to an llm more like de-la-w-are or similar.
of course you could train it to figure out letter counts based on those tokens with a lot of training data, though that could lower performance on other tasks and counting letters just isn’t that important, i guess, compared to other stuff
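The splitting described above can be sketched with a greedy longest-match tokenizer over a made-up vocabulary (real BPE vocabularies are learned from corpus statistics, and the merges below are invented for illustration):

```python
# Toy illustration of subword tokenization. The vocabulary is made up;
# real tokenizers learn their merges from data.
TOKENS = {"de", "la", "ware", "conn", "ect", "icut", "the", "ne"}

def tokenize(word, vocab):
    """Greedy longest-match tokenizer (a big simplification of real BPE)."""
    out, i = [], 0
    while i < len(word):
        for size in range(len(word) - i, 0, -1):
            piece = word[i:i + size]
            if piece in vocab:
                out.append(piece)
                i += size
                break
        else:
            out.append(word[i])  # unknown: fall back to the raw character
            i += 1
    return out

print(tokenize("delaware", TOKENS))     # ['de', 'la', 'ware']
print(tokenize("connecticut", TOKENS))  # ['conn', 'ect', 'icut']
```

From the model’s side, ‘delaware’ is just three opaque vectors; nothing in that representation directly says which letters each piece contains, which is why “does this state contain a D?” is a surprisingly hard question.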
Have 40% accuracy on any type of information it produces? Fail to handle two-column pages in its training data, resulting in dozens of scientific papers including references to nonsense pseudoscience words? Invent an entirely new form of slander that its creators can claim isn’t their fault to avoid getting sued in court for it?
Connecticut do have a D in it: mine.
So the Dakotas get a pass
And Idaho
Hey look the markov chain showed its biggest weakness (the markov chain)!
Judging by the output, Connecticut usually follows Colorado in the training data’s lists of two or more states containing Colorado. There is no other reason for this to occur, as far as I know.
Markov-chain-based LLMs (I think that’s all of them?) are dice-roll systems constrained to probability maps.
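Setting aside whether transformers are strictly Markovian (they condition on the whole context window, not just the last token), the “dice roll constrained to a probability map” intuition can be sketched with a tiny word-level Markov chain over a made-up corpus:

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: count which word follows which,
# then sample the next word proportionally to those counts.
corpus = ("colorado connecticut delaware florida "
          "colorado connecticut colorado connecticut delaware").split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Dice roll over the probability map learned from the corpus."""
    options = counts[prev]
    words = list(options)
    return random.choices(words, weights=[options[w] for w in words])[0]

# In this toy corpus, "connecticut" is the ONLY word ever seen after
# "colorado", so the dice roll lands there every single time.
print(next_word("colorado"))  # 'connecticut'
```

Which is exactly the pathology being joked about: if the training data overwhelmingly pairs two items, the sampler will reproduce the pairing whether or not it answers the actual question.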
I was wondering if you’d get similar results for states with the letter R, since there’s lots of prior art mentioning these states as either “D” or “R” during elections.
Oh, I was thinking it’s because people pronounce it Connedicut
Awe cute!
Just another trillion, bro.
Just another 1.21 jigawatts of electricity, bro. If we get this new coal plant up and running, it’ll be enough.
Behold the most expensive money burner!
Yesterday I asked Claude Sonnet what was on my calendar (since they just announced that feature).
It listed my work meetings on Sunday, so I tried to correct it…
You’re absolutely right - I made an error! September 15th is a Sunday, not a weekend day as I implied. Let me correct that: This Week’s Remaining Schedule: Sunday, September 15
Just today when I asked what’s on my calendar, it gave me today and my meetings on the next two Thursdays. Not the meetings in between, just Thursdays.
Something is off in AI land.
We’ve used the Google AI speakers in the house for years, they make all kinds of hilarious mistakes. They also are pretty convenient and reliable for setting and executing alarms like “7AM weekdays”, and home automation commands like “all lights off”. But otherwise, it’s hit and miss and very frustrating when they push an update that breaks things that used to work.
Also, Sunday September 15th is a Monday… I’ve seen so many meeting invites with dates and days that don’t match lately…
Yeah, it said Sunday, I asked if it was sure, then it said I’m right and went back to Sunday.
I assume the training data has the model think it’s a different year or something, but this feature is straight up not working at all for me. I don’t know if they actually tested this at all.
Sonnet seems to have gotten stupider somehow.
Opus isn’t following instructions lately either.
Curious_Canid@lemmy.ca 6 months ago
This is the perfect time for LLM-based AI. We are already dealing with a significant population that accepts provable lies as facts, doesn’t believe in science, and has no concept of what hypocrisy means. The gross factual errors and invented facts of current AI couldn’t fit in better.