Sometimes. As a tool, not as an outsourced human, an oracle, or the transcendent companion that con artists like Altman are trying to sell.
See how grounded this interview is, from a company whose model was trained for peanuts compared to ChatGPT and costs even less to run:
…In 2025, with the launch of Manus and Claude Code, we realized that coding and agentic functions are more useful. They contribute more economically and significantly improve people’s efficiency. We are no longer putting simple chat at the top of our priorities. Instead, we are exploring more on the coding side and the agent side. We observe the trend and do many experiments on it.
www.chinatalk.media/p/the-zai-playbook
They even touch on how their own models are only just starting to show practical, albeit not miraculous, utility in their internal workflows.
PullPantsUnsworn@lemmy.ml 1 day ago
I am a developer. While AI is marketed like snake oil, the things it can do are astonishing. One example: it reviews code a lot better than human beings do. It doesn't just find obvious errors; it catches logical errors that no human would have caught.
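A hypothetical sketch of what I mean (the function and scenario are invented for illustration, not taken from any real review): a boundary check that looks correct, passes the obvious tests, and still hides a logic error that human reviewers routinely wave through.

```python
# Invented example: the kind of subtle logic bug an AI reviewer can flag.

def in_maintenance_window(hour: int, start: int, end: int) -> bool:
    """Return True if `hour` (0-23) falls inside the maintenance window."""
    # Looks correct and passes the obvious case (start=2, end=4, hour=3),
    # but silently breaks for windows that wrap past midnight
    # (start=22, end=2): `start <= hour < end` can never hold there.
    return start <= hour < end

def in_maintenance_window_fixed(hour: int, start: int, end: int) -> bool:
    """Same check, but handling windows that wrap past midnight."""
    if start <= end:
        return start <= hour < end
    return hour >= start or hour < end  # wrapped window, e.g. 22:00-02:00

if __name__ == "__main__":
    assert in_maintenance_window(3, 2, 4)          # obvious case passes
    assert not in_maintenance_window(23, 22, 2)    # the bug: should be True
    assert in_maintenance_window_fixed(23, 22, 2)  # wrap-around handled
```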
I see people forming two camps: those who think AI will solve everything, and those who think AI is useless. Neither is right.
teohhanhui@lemmy.world 1 day ago
No, it does not.
Source: an open-source contributor who's constantly annoyed by the useless CodeRabbit AI that some open-source projects have chosen to use.
ThirdConsul@lemmy.ml 1 day ago
I’m not having the same experience.
mirshafie@europe.pub 1 day ago
Maybe reconsider which model you’re using?
bold_atlas@lemmy.world 1 day ago
If there were a model that coded perfectly, then there wouldn't be models. There would just be THE model.
bold_atlas@lemmy.world 1 day ago
And how many errors is it creating that we don’t know about?