AI Prompt Engineering Is Dead
Submitted 8 months ago by 0nekoneko7@lemmy.world to technology@lemmy.world
https://developers.slashdot.org/story/24/03/07/1511252/ai-prompt-engineering-is-dead
Comments
realharo@lemm.ee 8 months ago
This is the most obvious outcome ever. How could anyone not see this coming, given the constant AI improvements?
FaceDeer@kbin.social 8 months ago
"Prompt engineering" is simply the skill of knowing how to correctly ask for the thing that you want. Given that this is something that is in rare supply even when interacting with other humans, I don't see this going away until we're well past AGI and into ASI.
realharo@lemm.ee 8 months ago
Human experts often say things like "customers say X, they probably mean they want Y and Z" purely based on their experience of dealing with people in some field for a long time.
That is something that can be learned. Follow-up questions can be asked to clarify, etc. Not that complicated.
tsonfeir@lemm.ee 8 months ago
TL;DR: get in, charge a lot, find a backup. Just like any new popular tech buzz.
dhork@lemmy.world 8 months ago
Wait, Slashdot isn’t dead yet?
NeryK@sh.itjust.works 8 months ago
That is not dead which can eternal lie.
corsicanguppy@lemmy.ca 8 months ago
Ah so you’ve seen the maintenance cycle on my open-source stuff.
Usually it all springs to life when the cart comes around and the man shouts “bring out yer dead!”
demonsword@lemmy.world 8 months ago
Wait, Slashdot isn’t dead yet?
it’s been undead for at least a good 12 years
leds@feddit.dk 8 months ago
Some of the community moved to https://soylentnews.org/
NounsAndWords@lemmy.world 8 months ago
The hype around AI language models has companies scrambling to hire prompt engineers to improve their AI queries and create new products.
Who is hiring all these prompt engineers? Who is ‘scrambling’ to find people for this? The jobs I do see have basically replaced “developer” with “prompt engineer” with the same job requirements.
LostXOR@fedia.io 8 months ago
Ah yes, using AI to solve problems with AI.
turkishdelight@lemmy.ml 8 months ago
Prompt Engineering was never a thing to begin with.
gravitas_deficiency@sh.itjust.works 8 months ago
Good
WhatAmLemmy@lemmy.world 8 months ago
This is dumb. Literally nothing has changed. Anyone who knows anything about LLMs knows that they’ve struggled with math more than almost any other discipline. It sounds counterintuitive for a computer to be shit at math, but this is because an LLM’s “intelligence” works through mimicry. They do not calculate math like a calculator. They calculate all responses based on a probability distribution constructed from billions of human text inputs, e.g. when the input is “YHXURUAG”, the most probable response based on MY SPECIFIC INPUT DATASET OF X BILLION WORDS is “GOPALEFDT”. They are as smart, and as fallible, as Wikipedia + Reddit + Twitter, etc. They are as fallible as the dataset they were built from.
“Prompt engineering” is about understanding an LLM’s strengths and weaknesses, and learning how to work with them to build out a context and efficiently achieve a desired end result, whatever that result may be. It’s not dead, and it’s not going anywhere as long as LLMs exist.
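A toy sketch of the mimicry point above: a bigram counter that picks the statistically most common continuation seen in a corpus. It is nothing like a real transformer, but it shows “most probable response given the training data” in miniature.

```python
# Toy illustration (not how a real LLM works): pick the most probable
# next word purely from counts over a training corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # "cat": seen twice, vs once each for mat/fish
```

The model is exactly as smart, and as fallible, as its corpus: feed it garbage counts and it will confidently emit garbage.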
chetradley@lemmy.world 8 months ago
I really wish all of these companies racing to replace their existing software features and employees with LLMs understood this. So many applications are dependent on a response being 100% accurate for a very specific request as opposed to being 80% accurate for a wide variety of requests. “Based on training data, here’s what a response to your input might look like” is pretty good for conversational language and image generation, but it sucks for anything requiring computation or expertise. Worst of all, it’s so confidently wrong about things I might as well be back on Reddit.
abhibeckert@lemmy.world 8 months ago
They totally understand it. And OpenAI has solved it. For example, while researching The Ultimate Answer to Life, the Universe and Everything, I asked it to calculate 6 by 9 in base 13 and got the correct answer: 42. ChatGPT didn’t use the LLM to calculate that; it used the LLM to write and execute the following Python script:
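(The script itself wasn’t captured in the comment; a plausible reconstruction, with to_base as an assumed helper name:)

```python
# Plausible reconstruction of the kind of script described above:
# multiply 6 by 9, then render the product in base 13.

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer in the given base (2-16)."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n:
        out.append(digits[n % base])
        n //= base
    return "".join(reversed(out))

print(to_base(6 * 9, 13))  # 54 in decimal -> "42" in base 13
```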
realharo@lemm.ee 8 months ago
Prompt engineering is about expressing your intent in a way that causes an LLM to come to the desired result.
It will go away as soon as LLMs get good at inferring intent. It might not be a single model, it may require some extra steps, etc., but there is nothing uniquely “human” about writing prompts.
Future systems could, for example, start asking questions more often to clarify your intent.
abhibeckert@lemmy.world 8 months ago
Current systems already do that. But they’re expensive, and it might be cheaper to have a human do it. Prompt engineering is very much a thing if you’re working with high-performance, low-memory-consumption language models. We’re a long way from having smartphones with a couple of terabytes of RAM and a few thousand GPU cores… but we do have primitive models right now that can run on a phone (iPhones use one to figure out which key a user intended to press when their fat thumb hits six keys at the same time).
peopleproblems@lemmy.world 8 months ago
You know, I had gotten frustrated using it because it wouldn’t understand me, but now I’ll use this approach to find out how it understands me.
gaylord_fartmaster@lemmy.world 8 months ago
Machine learning could find those strengths and weaknesses and learn to work around them likely better than a human could. It’s just trial and error. There’s nothing about the human brain that makes it better suited to understanding the inner logic of an LLM.
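A minimal sketch of that trial-and-error loop, assuming you already have some automatic judge of output quality (score_output below is a made-up stand-in; as the reply points out, that judge is the hard part):

```python
# Hypothetical prompt search by trial and error. Both ask_llm and
# score_output are stand-ins invented for this sketch.
import random

CANDIDATE_PROMPTS = [
    "Answer concisely: {question}",
    "Think step by step, then answer: {question}",
    "You are an expert. {question}",
]

def ask_llm(prompt: str) -> str:
    """Stand-in for a real model call."""
    return "stub answer to: " + prompt

def score_output(answer: str) -> float:
    """Hypothetical judge that rates an answer's quality."""
    return random.random()

def best_prompt(question: str, trials: int = 10) -> str:
    """Try each template several times and keep the best average scorer."""
    best, best_score = CANDIDATE_PROMPTS[0], float("-inf")
    for template in CANDIDATE_PROMPTS:
        avg = sum(
            score_output(ask_llm(template.format(question=question)))
            for _ in range(trials)
        ) / trials
        if avg > best_score:
            best, best_score = template, avg
    return best

print(best_prompt("What is 6 x 9 in base 13?"))
```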
sudoreboot@slrpnk.net 8 months ago
For that you need a program to judge the quality of output given some input. If we had that, LLMs could just improve themselves directly, bypassing any need for prompt engineering in the first place.
The reason prompt engineering is a thing is that people know what the expected and desired output is and what isn’t, and can adapt their interactions with the tool accordingly, a trait associated with adaptive complex systems.
WhatAmLemmy@lemmy.world 8 months ago
Congrats. You don’t understand the difference between a statistical model and a human.
jacksilver@lemmy.world 8 months ago
Actually, most (I think all, but I’m not 99% positive) machine learning models are incapable of doing straight arithmetic. Due to the way they are built, ML models, including deep learning models, can only learn relationships in a limited input space.
This is most apparent when you test LLMs on different arithmetic operations: they hold up reasonably well on addition but degrade quickly on multiplication and beyond.
This has to do with the fact that LLMs are effectively multiple layers of linear functions, so higher-order operations break down faster.
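A quick numeric illustration of that point (a toy demo with one linear map, not an LLM): least squares can fit addition essentially exactly, but no linear fit can capture multiplication.

```python
# Fit y ~ w1*a + w2*b + bias by least squares for two target operations.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(1000, 2))  # random pairs (a, b)

for name, y in [("a + b", X[:, 0] + X[:, 1]),
                ("a * b", X[:, 0] * X[:, 1])]:
    A = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)  # best linear fit
    err = np.abs(A @ w - y).mean()
    print(f"{name}: mean absolute error = {err:.3f}")
# Addition fits to ~0 error; multiplication is off by ~25 on average.
```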
MonkderZweite@feddit.ch 8 months ago
If input.type = int, use calculator.
thedeadwalking4242@lemmy.world 8 months ago
I mean, it’s not like humans are good at math either; we are good at making abstractions and following linear rules, but we are slow and fallible. Digital computation is just about the best method there is for doing math. LLMs are decent at abstraction and general problem solving, though. They are not as creative as people, but they are still pretty good! It’s a step in the right direction for true AGI. Honestly, even when we have AGI, I doubt it will ever beat raw CPUs in computation speed.
GBU_28@lemm.ee 8 months ago
Re math, enter functionary-able models
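This reads like a reference to function-calling-capable models: instead of letting the LLM guess at arithmetic, the model emits a structured call and the host executes the math deterministically. A hedged sketch of that dispatch pattern (the JSON shape is invented for illustration, not any particular vendor’s API):

```python
# Illustrative tool dispatch: the model proposes a call, the host runs it.
import json
import operator

TOOLS = {"add": operator.add, "mul": operator.mul}

def handle_model_output(raw: str) -> str:
    """Execute a structured tool call if present; otherwise pass text through."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return raw  # plain text answer, no tool involved
    result = TOOLS[call["name"]](*call["args"])
    return str(result)

# e.g. the model answers "what is 123 * 456?" with a structured call:
print(handle_model_output('{"name": "mul", "args": [123, 456]}'))  # 56088
```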