i’m still not entirely sold on them, but since i’m currently using one that the company subscribes to, i can give a quick opinion:
i had an idea for a code snippet that could save me some headache (a mock for primitives in lua, to be specific) but i foresaw some issues with commutativity (aka how to make sure that a + b == b + a). so i asked about this, and the llm created some boilerplate to test this code. i’ve been chatting with it for about half an hour, and had it expand the idea to all possible metamethods available on primitive types, together with about 50 test cases with descriptive assertions. i’ve now run into an issue where the __eq metamethod isn’t firing correctly when one of the operands is a primitive rather than a mock, and after having the llm link me to the relevant part of the docs, that seems to be a feature of the language rather than a bug.
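for the curious, here’s a minimal sketch of the shape of the thing (hypothetical names, not the actual code or the llm’s output):

```lua
-- minimal mock for a primitive value (hypothetical sketch)
local Mock = {}
Mock.__index = Mock

function Mock.new(value)
  return setmetatable({ value = value }, Mock)
end

-- unwrap a mock, or pass a plain primitive through
local function raw(x)
  if getmetatable(x) == Mock then return x.value end
  return x
end

-- arithmetic metamethods fire if *either* operand has one,
-- so commutativity works: Mock.new(1) + 2 and 2 + Mock.new(1) both land here
function Mock.__add(a, b)
  return Mock.new(raw(a) + raw(b))
end

-- __eq is the odd one out: lua only consults it when both operands
-- are tables (or both full userdata), so mock == 1 never reaches it
function Mock.__eq(a, b)
  return raw(a) == raw(b)
end

local m = Mock.new(1)
print((m + 2).value)     --> 3
print((2 + m).value)     --> 3  (commutative)
print(m == Mock.new(1))  --> true  (__eq fires: both operands are tables)
print(m == 1)            --> false (__eq is never called for mixed types)
```

that last line is the roadblock: per the lua reference manual, __eq is only tried when both values are tables or both are full userdata, so there’s no way to hook a comparison against a bare primitive.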
so in 30 minutes i’ve gone from a loose idea to a well-documented proof-of-concept to a roadblock that can’t really be overcome. complete exploration and feasibility study, fully tested, in less than an hour.
L3s@lemmy.world 17 hours ago
Writing customer/company-wide emails is a good example. “Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online”
Another is feeding it an article and asking for a summary; hackingne.ws does that for its Bsky posts.
Coding is another good example, “write me a Python script that moves all files in /mydir to /newdir”
Asking it to summarize a theory, “explain to me why RIP was replaced with RIPv2, and what problems people have had since with RIPv2”
Corngood@lemmy.ml 17 hours ago
How does this work in practice? I suspect you’re just going to get an email that takes longer for everyone to read, and doesn’t give any more information (or worse, gives incorrect information). Your prompt seems like what you should be sending in the email.
If the model (or context?) was good enough to actually add useful, accurate information, then maybe that would be different.
I think we’ll get to the point really quickly where a nice concise message like in your prompt will be appreciated more than the bloated, normalised version, which people will find insulting.
L3s@lemmy.world 16 hours ago
Yeah, normally my “Make this sound better” or “summarize this for me” is a longer wall of text that I want to simplify. Talking to non-technical people about a technical issue is not the easiest for me, and AI has helped me dumb it down when sending an email.
As for accuracy, you review what it gives you; you don’t just copy and send it without review. You’ll also have to tweak some pieces where it doesn’t make the most sense, such as when it uses wording you wouldn’t typically use. It’s fairly accurate in my use cases, though.
otp@sh.itjust.works 16 hours ago
Yeah, I don’t get why so many people seem to not get that.
It’s like people who were against IntelliSense in IDEs because “What if it suggests the wrong function?”… you still need to know what the functions do. If you find something you’re unfamiliar with, you check the documentation. You don’t just blindly accept it as truth.
Just because it can’t replace a person’s job doesn’t mean it’s worthless as a tool.
Voroxpete@sh.itjust.works 16 hours ago
I think these are actually valid examples, albeit ones that come with a really big caveat; you’re using AI in place of a skill that you really should be learning for yourself. As an autistic IT person, I get the struggle of communicating with non-technical and neurotypical people, especially clients who you have to be extra careful with. But the reality is, you can’t always do all your communication by email. If you always rely on the AI to correct your tone or simplify your language, you’re choosing not to build an essential skill that is every bit as important to doing your job well as it is to know how to correctly configure an ACL on a Cisco managed switch.
That said, I can also see how relying on the AI at first can be a helpful learning tool as you build those skills. There’s certainly an argument that by using the tools while paying attention to their output, you build those skills for yourself. Learning by example works. I think used in that way, there’s potentially real value there.
Which is kind of the broader story with Gen AI overall. It’s not that it can never be useful; it’s that, at best, it can only ever aspire to “useful.” No one, yet, has demonstrated any ability to make AI “essential” and the idea that we should be investing hundreds of billions of dollars into a technology that is, on its best days, mildly useful, is sheer fucking lunacy.
earphone843@sh.itjust.works 15 hours ago
It works well. For example, we had a work exercise where we had to write a press release based on an example, then write a Shark Tank pitch to promote the product we came up with in the release.
I gave the AI the link to the example and a brief description of our product, and it spit out an almost perfect press release. I only had to tweak a few words because there were specific requirements I didn’t feed the AI.
Then I told it to take the press release and write the pitch based on it.
Again, very nearly perfect; I only had to change the wording in one spot.
lurch@sh.itjust.works 10 hours ago
it’s not good for summaries. often gets important bits wrong, like embedded instructions that can’t be summarized.
L3s@lemmy.world 10 hours ago
My experience has been very different, though I do sometimes have to add to what it summarized. The Bsky account I mentioned is a good example: most of the posts are very well summarized, but every now and then there will be one that isn’t as accurate.
spankmonkey@lemmy.world 16 hours ago
The dumbed-down text is basically as long as the prompt. Plus you have to double-check it to make sure it didn’t write “outrage” instead of “outage”, just like if you wrote it yourself.
Are you really saving time?
L3s@lemmy.world 15 hours ago
Yes, I’m saving time. As I mentioned in my other comment:
“Normally my ‘Make this sound better’ or ‘summarize this for me’ is a longer wall of text that I want to simplify.”
And: “As for accuracy, you review what it gives you, you don’t just copy and send it without review.”
And: “You will have to tweak some pieces that it gives out where it doesn’t make the most sense.”
spankmonkey@lemmy.world 15 hours ago
How do you validate the accuracy of what it spits out?
Why don’t you skip the AI and just use the thing you use to validate the AI output?
earphone843@sh.itjust.works 15 hours ago
Dumbed down doesn’t mean shorter.
spankmonkey@lemmy.world 15 hours ago
If the amount of time it takes to create the prompt is the same as it would have taken to write the dumbed-down text yourself, then the only time you saved was not learning how to write dumbed-down text. Plus you need to know what dumbed-down text should look like to know whether the output is dumbed down but still accurate.