Comment on Elon Musk's xAI loses second cofounder in 48 hours
floofloof@lemmy.ca 2 days ago
It’s time to recalibrate my gradient on the big picture.
I’m so glad not to think or speak like these people do.
It is: gradient descent is what you use to find optimal model parameters.
The algorithm computes a gradient (a check of whether any nearby options are better), then takes a step in that direction to improve the parameters, and repeats in a loop.
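Roughly like this, in Python (the one-variable toy loss, starting point, and learning rate are all made up for illustration, obviously not anything xAI actually runs):

```python
# Gradient descent on a toy 1-D loss: f(x) = (x - 3)^2, minimum at x = 3.
def loss(x):
    return (x - 3.0) ** 2

def grad(x):
    return 2.0 * (x - 3.0)  # analytic derivative of the loss

x = 0.0    # arbitrary initial parameter guess
lr = 0.1   # learning rate (step size), also arbitrary
for step in range(100):
    x -= lr * grad(x)  # step against the gradient to reduce the loss

print(x)  # ~3.0: converged to the minimum
```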
Adding to the above, one of the challenges of GD is knowing whether the optimum the algorithm converges to is the true global one or just one of its many imposters (local optima). That’s the “big picture” he’s talking about. Working with Elon was a dead end.
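You can watch the imposter problem happen in a few lines of Python (non-convex toy loss of my own choosing; where you start decides which minimum you end up in):

```python
# f(x) = x^4 - 4x^2 + x has two minima: a global one near x ≈ -1.47
# and a shallower local one near x ≈ 1.35.
def grad(x):
    return 4.0 * x**3 - 8.0 * x + 1.0  # derivative of x^4 - 4x^2 + x

def descend(x, lr=0.01, steps=1000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(-2.0))  # lands near -1.47, the global minimum
print(descend(+2.0))  # stuck near 1.35, a local imposter
```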
It seems obviously so. I don’t think I could hire someone who worked on Grok’s deepfake porn engine, ever.
Between working for a nazi, the child porn issue, and all the rest, it’s a bad fucking look.
Someone from a brand new account posted a bunch of gibberish like that today about having the keys to the octo-dimension mother universe…
And I immediately thought it was an Elon AI, because that’s how they think humans actually talk
Probably someone who booted up openclaw and gave it a lemmy account, and then it gave away their financial information on moltbook
It’s language for people who advertise they know something about “AI,” but couldn’t implement Llama 1 if their life depended on it.
I grew an Amazon-trademark smile just by reading that fucking quote.
Corngood@lemmy.ml 2 days ago
How can I add words to this sentence without adding information?
Goodlucksil@lemmy.dbzer0.com 2 days ago
You just did