Comment on ‘Overhyped’ generative AI will get a ‘cold shower’ in 2024, analysts predict
HeavyDogFeet@lemmy.world 1 year ago
Seeing people say they’re saving lots of time with LLMs makes me wonder how much menial busywork other people do relative to myself. I find so few things in my day where using these tools wouldn’t just make me a babysitter for a dumb machine.
TheWiseAlaundo@lemmy.whynotdrs.org 1 year ago
Depends on what you do. I personally use LLMs to write preliminary code and do cheap world building for D&D. Saves me a ton of time. My brother uses it at a medium-sized business to write performance evaluations… which is actually funny to see how his queries are set up. It’s basically the employee’s name, job title, and three descriptors. He can do in 20 minutes what used to take him all day.
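For illustration, a minimal sketch of what that kind of query might boil down to; the function name, wording, and the example employee below are made up, not his actual setup:

```python
# Hypothetical sketch: a performance-review prompt built from just a name,
# a job title, and three descriptors, as described above.
def review_prompt(name: str, job_title: str, descriptors: list[str]) -> str:
    traits = ", ".join(descriptors)
    return (
        f"Write a one-page performance evaluation for {name}, a {job_title}. "
        f"Key themes to cover: {traits}. "
        "Keep the tone constructive and specific."
    )

# Example usage with invented details:
print(review_prompt("Alex", "warehouse supervisor",
                    ["reliable", "great with new hires", "could delegate more"]))
```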
ChaoticEntropy@feddit.uk 1 year ago
Well that just sounds kind of bad… I hadn’t even considered generating a performance review for my direct report. It’s part of my job to give them meaningful feedback and help them improve, not just tick a box.
killeronthecorner@lemmy.world 1 year ago
If you’re using AI to generate performance reviews you’re shortchanging your reports and your company. That guy’s brother sounds like a shitty boss.
TheWiseAlaundo@lemmy.whynotdrs.org 1 year ago
Regardless of what anyone says, I think this is actually a pretty good use case for the technology. The specific verbiage of the review isn’t necessarily important, and ideas can still be communicated clearly if the tools are used appropriately.
If you ask a tool like ChatGPT to write “A performance review for a construction worker named Bob who could improve on his finer carpentry work and who is delightful to be around because of his enthusiasm for building. Make it one page.”, the output can still be meaningful and communicate relevant ideas.
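A rough sketch of sending that same prompt through the API, assuming the OpenAI Python client is installed and an API key is set in the environment; the model name is just a placeholder:

```python
# Rough sketch: the Bob prompt above sent through the OpenAI chat API.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "A performance review for a construction worker named Bob who could "
    "improve on his finer carpentry work and who is delightful to be around "
    "because of his enthusiasm for building. Make it one page."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```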
I’m just going to take a page from William Edwards Deming here, and state that an employee is largely unable to change the system that they work in, and as such individual performance reviews have limited value. Even if an employee could change the system that they work in, this should be interpreted as the organization having a singular point of failure.
ChaoticEntropy@feddit.uk 1 year ago
If all the manager is going to put into the process is, at best, some bullet points, then they should just stop pretending and send their employee the bullet points. Having some automatically generated toss wrapped around them makes the process even more ridiculous than it can already easily be.
If my manager gave me my performance review and it was some meaningless auto-praise/commentary, structured around the actual keywords they wanted to express to me, then I would think less of them. I would no longer value the input of my manager or their interest in my development.
I’m not asking my employee to change the system, I’m asking them to either maintain or change themselves, depending on the feedback I’m giving.
HeavyDogFeet@lemmy.world 1 year ago
What your brother is doing is a pretty good example of why this stuff needs to be regulated better. People’s performance evaluations are not the kind of thing that these tools are equipped to do properly.
SCB@lemmy.world 1 year ago
I use AI all the time in my work. With one of my tools I can type in a script and have a fully-acted, fully-voiced virtual instructor added to the training we create. Saves us massively in both time and money and increases engagement.
This is how AI will truly sweep through the market. Small improvements, incrementally developed upon, just like every other technology. White collar workers will be impacted first, with blue collar workers second, as the technology continues to develop.
My friend is an AI researcher as part of his overarching role as an analyst for a massive insurance company, and they’re developing their own internal LLM. The things AI can do will be absolutely market-shattering over time.
Anyone suggesting AI is just a fad/blip is about as naive as someone saying that about the internet in 1994, in my view.
raldone01@lemmy.world 1 year ago
It’s great for writing LaTeX.
latexify sum i=0 to n ( x_i dot (nabla f(x)) x e_r) = 0
\[ \sum_{i=0}^{n} \left( x_i \cdot (\nabla f(x)) \times e_r \right) = 0 \]
Also great at positioning images and fixing weird layout issues.
rainerloeten@lemmy.world 1 year ago
You don’t need an LLM for converting pseudocode to LaTeX. LLMs certainly help with programming (in my experience), but I feel like your example isn’t really doing them justice :p
ChaoticEntropy@feddit.uk 1 year ago
Yeah… as a Product Manager dealing with a lot of text-based tasks, I really expected to find it more useful than I actually have. I’ve not really been able to use it for writing documentation or sending emails, because it matters to me what is in those and I have something I want to say in them.
The only way I could really consider offloading these tasks to AI is if I just stopped caring what went in them.