That’s fair. I guess it could be no different than a scientist with some grand scheme handing his plans off to others to implement.
I think I was assuming that cutting edge AI research involves more math/theory than just… bootstrapping existing tech stacks and tweaking configs.
ayyy@sh.itjust.works 3 weeks ago
“No we don’t need databases anymore, only blockchains.” —Nvidia CEO a few years ago
MangoCats@feddit.it 3 weeks ago
And AI can help you migrate your database solutions to blockchain, utilizing 3000W worth of Nvidia co-processing power to validate your blockchain database that used to work on a 0.3W ARM processor.
8oow3291d@feddit.dk 3 weeks ago
The database was an arbitrary example. A more relevant example would be TensorFlow layers in a neural network. As I understand it, you can in some cases get a novel solution to a problem just by choosing a smart enough combination of layers, with the right data.
ChatGPT can absolutely help with the grunt work of setting up the TensorFlow configuration, following your directions.
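To make the "combination of layers" idea concrete, here is a minimal sketch using the Keras Sequential API. The layer sizes and the `build_model` helper are hypothetical, chosen only for illustration; the point is that a "combination" is just a stack of layers, and an expert compares candidate stacks against their data.

```python
import tensorflow as tf

def build_model(hidden_units):
    """Hypothetical helper: stack Dense layers per the hidden_units list."""
    layers = [tf.keras.Input(shape=(4,))]
    for units in hidden_units:
        layers.append(tf.keras.layers.Dense(units, activation="relu"))
    # Single linear output unit, e.g. for a regression target.
    layers.append(tf.keras.layers.Dense(1))
    return tf.keras.Sequential(layers)

# Two candidate combinations an expert might compare on the same data:
narrow = build_model([8])        # one small hidden layer
deep = build_model([16, 16, 8])  # three hidden layers
```

An LLM can generate this kind of boilerplate quickly; choosing which combination actually suits the problem is still the developer's job.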
MangoCats@feddit.it 3 weeks ago
Smart, lucky, who can tell the difference?
8oow3291d@feddit.dk 3 weeks ago
If used by an expert developer, the combinations are not just random "lucky" choices.
baahb@lemmy.dbzer0.com 3 weeks ago
If you are capable of giving good directions…
MangoCats@feddit.it 3 weeks ago
Every software development project, ever.
Review your requirements before starting development. Review them again after each phase of development. Address inadequacies, conflicts, ambiguities whenever you find them.
AI is actually helpful in this process - not so much in deciding what to do, but in pointing out the gaps and contradictions.
8oow3291d@feddit.dk 3 weeks ago
Well, yes, that is a central point.
I am a senior programmer. LLMs are amazing - I know exactly what I want, and I can ask for it and review it. My productivity has gone up at least 3-fold, with no decrease in quality, by using LLMs responsibly.
But it seems to me that some people on social media can't imagine using LLMs in this way. They assume all LLM usage is vibe coding - using the output without understanding or reviewing it.