Comment on “Why I am not impressed by A.I.”
VintageGenious@sh.itjust.works 4 weeks ago
Because you’re using it wrong. It’s good for generative text and chains of thought, not for symbolic calculation like math or linguistics.
joel1974@lemmy.world 4 weeks ago
Give me an example of how you use it.
L3s@lemmy.world 4 weeks ago
Writing customer/company-wide emails is a good example. “Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online”
Another is feeding it an article and asking for a summary, hackingne.ws does that for its Bsky posts.
Coding is another good example: “write me a Python script that moves all files in /mydir to /newdir” (a sketch of the kind of script that comes back is below).
Asking it to summarize a theory: “explain to me why RIP was replaced with RIPv2, and what problems people have had since with RIPv2”
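As a rough illustration (not output from any particular model), the move-files prompt above might come back with something like this minimal sketch; the paths are the ones from the prompt, and error handling is omitted:

```python
# Move every regular file from /mydir to /newdir.
import shutil
from pathlib import Path

src = Path("/mydir")
dst = Path("/newdir")
dst.mkdir(parents=True, exist_ok=True)  # create the target if it's missing

for item in src.iterdir():
    if item.is_file():  # skip subdirectories
        shutil.move(str(item), str(dst / item.name))
```

Using shutil.move rather than a bare os.rename also handles the case where the two directories sit on different filesystems.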
Corngood@lemmy.ml 4 weeks ago
Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online
How does this work in practice? I suspect you’re just going to get an email that takes longer for everyone to read, and doesn’t give any more information (or worse, gives incorrect information). Your prompt seems like what you should be sending in the email.
If the model (or context?) was good enough to actually add useful, accurate information, then maybe that would be different.
I think we’ll get to the point really quickly where a nice concise message like in your prompt will be appreciated more than the bloated, normalised version, which people will find insulting.
L3s@lemmy.world 4 weeks ago
Yeah, normally my “Make this sound better” or “summarize this for me” input is a longer wall of text that I want to simplify. Talking to non-technical people about a technical issue is not the easiest for me, and AI has helped me dumb it down when sending an email.
As for accuracy, you review what it gives you; you don’t just copy and send it without review. You’ll also have to tweak some pieces where it doesn’t make the most sense, such as when it uses wording you wouldn’t typically use. It is fairly accurate in my use cases, though.
earphone843@sh.itjust.works 4 weeks ago
It works well. For example, we had a work exercise where we had to write a press release based on an example, then write a Shark Tank pitch to promote the product we came up with in the release.
I gave the AI the link to the example and a brief description of our product, and it spit out an almost perfect press release. I only had to tweak a few words because there were specific requirements I hadn’t fed the AI.
Then I told it to take the press release and write the pitch based on it.
Again, very nearly perfect with only having to change the wording in one spot.
locuester@lemmy.zip 3 weeks ago
Yes, people are using it as the least efficient communication protocol ever.
One side asks an LLM to expand a summary into a fluff filled email, and the other side asks an LLM to reduce the long email to a summary.
lurch@sh.itjust.works 4 weeks ago
it’s not good for summaries. often gets important bits wrong, like embedded instructions that can’t be summarized.
L3s@lemmy.world 4 weeks ago
My experience has been very different, though I do sometimes have to add to what it summarized. The Bsky account I mentioned is a good example: most of the posts are very well summarized, but every now and then there will be one that isn’t as accurate.
spankmonkey@lemmy.world 4 weeks ago
The dumbed-down text is basically as long as the prompt. Plus you have to double-check it to make sure it didn’t write “outrage” instead of “outage”, just like if you wrote it yourself.
Are you really saving time?
L3s@lemmy.world 4 weeks ago
Yes, I’m saving time. As I mentioned in my other comment:
Yeah, normally my “Make this sound better” or “summarize this for me” input is a longer wall of text that I want to simplify; I was trying to keep my examples short.
And
and helps correct my shitty grammar at times.
And
Hallucinations are a thing, so validating what it spits out is definitely needed.
earphone843@sh.itjust.works 4 weeks ago
Dumbed down doesn’t mean shorter.
lime@feddit.nu 4 weeks ago
i’m still not entirely sold on them, but since i’m currently using one that the company subscribes to i can give a quick opinion:
i had an idea for a code snippet that could save me some headache (a mock for primitives in lua, to be specific) but i foresaw some issues with commutativity (aka how to make sure that `a + b == b + a`). so i asked about this, and the llm created some boilerplate to test this code. i’ve been chatting with it for about half an hour, and had it expand the idea to all possible metamethods available on primitive types, together with about 50 test cases with descriptive assertions. i’ve now run into an issue where the `__eq` metamethod isn’t firing correctly when one of the operands is a primitive rather than a mock, and after having the llm link me to the relevant part of the docs, that seems to be a feature of the language rather than a bug.
so in 30 minutes i’ve gone from a loose idea to a well-documented proof-of-concept to a roadblock that can’t really be overcome. a complete exploration and feasibility study, fully tested, in less than an hour.
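For a sense of what that kind of generated boilerplate looks like, here is a minimal sketch of the same idea in Python rather than Lua (the `Mock` class and the test values are made up for illustration):

```python
# A mock that wraps a primitive and overloads operators, plus a
# commutativity check over mixed mock/primitive operand pairs.
class Mock:
    def __init__(self, value):
        self.value = value

    def _unwrap(self, other):
        # Accept either another Mock or a bare primitive as the operand.
        return other.value if isinstance(other, Mock) else other

    def __add__(self, other):   # mock + x
        return Mock(self.value + self._unwrap(other))

    def __radd__(self, other):  # x + mock (primitive on the left)
        return Mock(self._unwrap(other) + self.value)

    def __eq__(self, other):    # fires even when compared to a primitive
        return self.value == self._unwrap(other)


for a, b in [(Mock(2), 3), (2, Mock(3)), (Mock(2), Mock(3))]:
    assert a + b == b + a, f"a + b != b + a for {a!r}, {b!r}"
print("all commutativity checks passed")
```

Notably, Python dispatches `__eq__` to the user-defined class even when the other operand is a primitive; Lua’s `__eq` metamethod only fires when both operands are tables (or both userdata), which is exactly the documented roadblock described above.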
chiisana@lemmy.chiisana.net 4 weeks ago
Ask it for a second opinion on medical conditions.
Sounds insane, but they are leaps and bounds better than blindly Googling and self-diagnosing every condition under the sun when the symptoms only vaguely match.
Once the LLM helps you narrow in on a couple of possible conditions based on the symptoms, you can dig deeper into those specific ones, learn more about them, and have a slightly more informed conversation with your medical practitioner.
They’re not a replacement for your actual doctor, but they can help you learn and have better discussions with your actual doctor.
Wogi@lemmy.world 4 weeks ago
So can WebMD. We didn’t need AI for that. Googling symptoms is a great way to just be dehydrated and suddenly think you’re in kidney failure.
chiisana@lemmy.chiisana.net 4 weeks ago
We didn’t stop trying to make faster, safer, and more fuel-efficient cars after the Model T, even though it could get us from point A to point B just fine. We didn’t stop pushing for digital access to published content, even though we have physical libraries. Just because something satisfies a use case doesn’t mean we should stop advancing technology.
noodlejetski@lemm.ee 4 weeks ago
sounds like a perfectly sane idea freethoughtblogs.com/…/ai-anatomy-is-weird/
TheHobbyist@lemmy.zip 4 weeks ago
One thing I find useful is turning installation/setup instructions into Ansible roles and tasks. If you’re unfamiliar, Ansible is a tool for automating configuration across large-scale server infrastructures. In my case I only manage two servers, but it’s useful to parse instructions and convert them to Ansible, which helps me learn and understand Ansible at the same time.
Here is an example of the kind of instructions I find interesting: how to set up Docker on Alpine Linux: wiki.alpinelinux.org/wiki/Docker
Results are actually quite good even for smaller 14B self-hosted models like the distilled versions of DeepSeek, though I’m sure there are other usable models too.
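As a rough illustration of that workflow (not the poster’s actual setup), here’s a minimal Python sketch that asks a locally hosted model to do the conversion, assuming an Ollama-style HTTP endpoint on localhost; the model name and the instruction text are placeholders:

```python
# Send setup instructions to a locally hosted model and ask for Ansible back.
# Assumes an Ollama-style API at localhost:11434; model and steps are
# placeholders for illustration.
import json
import urllib.request

instructions = """Install Docker on Alpine Linux:
  apk add docker
  rc-update add docker default
  service docker start"""

payload = {
    "model": "deepseek-r1:14b",  # a distilled 14B model, as mentioned above
    "prompt": "Convert these setup steps into Ansible tasks:\n" + instructions,
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

You still review the generated tasks against the original wiki page, the same as with any other LLM output.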
I also find it helpful for programming, both for getting things done and for learning.
I would not rely on it for factual information, but it usually does a decent job of pointing in the right direction. Another use I have is help with spell-checking in a foreign language.
chaosCruiser@futurology.today 4 weeks ago
Here’s a bit of code that’s supposed to do stuff. I got this error message. Any ideas what could cause this error and how to fix it? Also, add this new feature to the code.
Works reasonably well as long as you have some idea of how to write the code yourself. GPT can do it in a few seconds; debugging it would take like 5-10 minutes, but that’s still faster than my best. Besides, GPT is also fairly fluent in many functions I have never used before. My approach would be clunky and convoluted, while the code generated by GPT is a lot shorter.
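A made-up example of the kind of exchange this describes: you paste a snippet plus its error message, and the model points out the off-by-one (both the bug and the fix here are invented for illustration):

```python
# Hypothetical snippet pasted into the chat, along with its error:
#   IndexError: list index out of range
def last_n(items, n):
    return [items[i] for i in range(len(items) - n, len(items) + 1)]

# The kind of fix an LLM typically suggests: range() already excludes its
# end value, so the "+ 1" walks one past the final index.
def last_n_fixed(items, n):
    return [items[i] for i in range(len(items) - n, len(items))]

print(last_n_fixed([1, 2, 3, 4, 5], 2))  # [4, 5]
```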
Windex007@lemmy.world 4 weeks ago
That makes sense as long as you’re not writing code that needs to know how to do something as complex as …checks original post… count.
slaacaa@lemmy.world 4 weeks ago
I have it write emails for me in German. I moved there not too long ago, and it works wonders for getting doctor’s appointments, car service, etc. I also have it explain the text, so I’m learning the language.
I also use it as an alternative to internet search, which is now terrible. It’s not going to help you find something super location-specific, but I can ask it to tell me something about a game/movie without spoilers, or to list Metacritic scores in a table, etc.
It also works great at summarizing long texts.
An LLM is a tool; what matters is how you use it. It is stupid, it doesn’t think, and it’s mostly hype to call it AI. But it definitely has its benefits.
scarabic@lemmy.world 4 weeks ago
We have one that indexes all the wikis and GDocs and such at my work and it’s incredibly useful for answering questions like “who’s in charge of project 123?” or “what’s the latest update from team XYZ?”
I even asked it to write my weekly update for MY team once and it did a fairly good job. The one thing I thought it had hallucinated turned out to be something I just hadn’t heard yet. So it was literally ahead of me at my own job.
I get really tired of all the automatic hate over stupid bullshit like this OP. These tools have their uses. It’s very popular to shit on them. So congratulations for whatever agreeable comments your post gets. Anyway.
verdigris@lemmy.ml 4 weeks ago
I mean, I would argue that the answer in the OP is a good one. No human asking that question honestly wants to know the sum total of Rs in the word; they either want to know how many are in “berry”, or they’re trying to trip up the model.
dreadbeef@lemmy.dbzer0.com 4 weeks ago
“You’re holding it wrong”
Voyajer@lemmy.world 4 weeks ago
This, but actually. Don’t use an LLM to do things LLMs are known to not be good at. The companies selling them as tools would do well to list out specifically what they’re not good at, so that using them doesn’t require background knowledge, not unlike needing to know that one corner of those old iPhones was an antenna and not to bridge it.
sugar_in_your_tea@sh.itjust.works 4 weeks ago
Yup, the problem with that iPhone (4?) wasn’t that it sucked, but that it had limitations. You could just put a case on it and the problem goes away.
LLMs are pretty good at a number of tasks, and they’re also pretty bad at a number of tasks. They’re pretty good at summarizing, but don’t trust the summary to be accurate, just to give you a decent idea of what something is about. They’re pretty good at generating code, just don’t trust the code to be perfect.
You wouldn’t use a chainsaw to build a table, but it’s pretty good at making big things into small things, and cleaning up the details later with a more refined tool is the way to go.
spankmonkey@lemmy.world 4 weeks ago
They’re pretty good at summarizing, but don’t trust the summary to be accurate, just to give you a decent idea of what something is about.
That is called being terrible at summarizing.
TheGrandNagus@lemmy.world 4 weeks ago
I think there’s a fundamental difference between someone saying “you’re holding your phone wrong, of course you’re not getting a signal” to millions of people and someone saying “LLMs aren’t good at that task you’re asking it to perform, but they are good for XYZ.”
If someone is using a hammer to cut down a tree, they’re going to have a bad time. A hammer is not a useful tool for that job.
Prandom_returns@lemm.ee 4 weeks ago
So for something you can’t objectively evaluate? Looking at Apple’s garbage generator, LLMs aren’t even good at summarising.
balder1991@lemmy.world 3 weeks ago
For reference:
AI chatbots unable to accurately summarise news, BBC finds
the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer. […] It found 51% of all AI answers to questions about the news were judged to have significant issues of some form. […] 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
It reminds me that I basically stopped using LLMs for any summarization after this exact thing happened to me. I realized that without reading the original text, I wouldn’t be able to tell whether the output had all the info or whether it included something made up.
Grandwolf319@sh.itjust.works 4 weeks ago
No, I think you mean to say it’s because you’re using it for the wrong use case.
Well, this tool has been marketed as if it would handle such use cases.
I don’t think I’ve actually seen any AI marketing that was honest about what it can do.
I personally think image recognition is the best use case as it pretty much does what it promises.
scarabic@lemmy.world 4 weeks ago
Really? AI has been marketed as being able to count the r’s in “strawberry?” Please link to this ad.