5gruel
@5gruel@lemmy.world
- Comment on OpenAI is now valued at $157 billion 1 month ago:
So what is intelligence in your general, all-purpose understanding?
Are newborns intelligent? How about dogs? Ants?
You may argue that current AI is still behind an average human adult and therefore not intelligent, but academia is a bit more nuanced.
- Comment on Men Harassed A Woman In A Driverless Waymo, Trapping Her In Traffic 1 month ago:
How are they getting my metadata?
- Comment on Most consumers hate the idea of AI-generated customer service 4 months ago:
I mean, your suggestive question at least helps me understand your mindset a bit better. If I saw the situation the way you characterize it, I would probably sound the same.
I can only encourage you to try to see past the business bullshit that is undoubtedly there and recognize that there is an actual underlying technological breakthrough with a chance of redefining how we interact with machines.
I’m running a local LLM that I use daily at work to help me brainstorm, and the fact that I can run perfect speech-to-text in real time on my laptop was simply not possible a few years ago.
- Comment on Most consumers hate the idea of AI-generated customer service 4 months ago:
The AI hate on Lemmy never fails to amaze me
- Comment on Should I use Microsoft Copilot? 5 months ago:
Weird responses here so far. I’ll try to actually answer the question.
I’ve been using Copilot at work for 9 months now and it’s crazy how it accelerates writing code. I’m writing Class C code in C++ and Rust, and it has become a staple tool like auto-formatting. That being said, it can’t really do more abstract stuff like architecture decisions.
Just try it for some time and see if it fits your use case. I’m hoping local code models will catch up soon so I can get away from Microsoft, but until then, Copilot it is.
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
I’m not convinced by the claim that “a human can say ‘that’s a little outside my area of expertise,’ but an LLM cannot.” I’m sure there are a lot of examples in the training data that contain qualified answers and expressions of uncertainty, so why would the model not be able to generate that output? I don’t see why it would require “understanding” for that specifically. I would suspect that better human reinforcement would make such answers possible.
- Comment on May 13, 1985 6 months ago:
What is an anti tank machine gun?
- Comment on Possible snipers seen at OSU. Administration says they're not snipers but should be treated like they are. 6 months ago:
Yes and probably yes. But that answers why they are there.
- Comment on Possible snipers seen at OSU. Administration says they're not snipers but should be treated like they are. 6 months ago:
Or you know, like, deterrence.
- Comment on Why a kilobyte is 1000 and not 1024 bytes 10 months ago:
TL;DR?
- Comment on Lemmy disproves the stereotype that Germans lack a sense of humor 1 year ago:
Du
- Comment on Amazon's drone delivery program is the joke it always sounded like. 1 year ago:
Having built commercial drones, it’s mainly two things: obstacles and ground effect.
- Comment on Weezer straight up writing ads for audible.com 1 year ago:
We are Weezer and we are here to make money and sell out and stuff!
- Comment on meet project primrose, adobe’s real-life interactive dress that changes design every second 1 year ago:
Women amirite