They’ve been a boon for medical diagnoses as well, I believe.
Comment on: We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
FreedomAdvocate@lemmy.net.au 2 weeks ago
In what area of AI? Image generation is improving in leaps and bounds. Video generation even more so. Image reconstruction for games (DLSS, XeSS, FSR) is seeing generational improvements almost every year. AI chatbots are getting much, much smarter seemingly every month.
What’s one main application of AI that hasn’t improved?
Almacca@aussie.zone 2 weeks ago
MagicShel@lemmy.zip 2 weeks ago
Any strictly rule-based system, like accounting and taxes, is a job for traditional software, not AI. Particularly when the laws change every year.
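To make that concrete, here’s a minimal sketch of what strictly rule-based logic looks like in ordinary code. The brackets and rates below are invented purely for illustration, not any real jurisdiction’s rules; the point is that when the law changes, you edit a table, you don’t retrain anything.

```python
# Minimal sketch of a strictly rule-based calculation: progressive tax on
# taxable income. The brackets below are made up for illustration only.

BRACKETS = [
    (0,       18_000,       0.00),   # (lower bound, upper bound, marginal rate)
    (18_000,  45_000,       0.19),
    (45_000,  120_000,      0.32),
    (120_000, float("inf"), 0.45),
]

def tax_owed(taxable_income: float) -> float:
    """Apply each bracket's marginal rate to the income falling inside it."""
    owed = 0.0
    for lower, upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        owed += (min(taxable_income, upper) - lower) * rate
    return round(owed, 2)

print(tax_owed(60_000))  # deterministic: same input, same answer, every time
```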
Almacca@aussie.zone 2 weeks ago
Once it has the information in a recognisable format. Reading and recognising random receipts, bank statements, payment slips, and whatever else, and sorting it all into a coherent format is what I’m trying to avoid.
MagicShel@lemmy.zip 2 weeks ago
I see. So AI for gathering the information to put into the accounting/tax software?
That’s a more reasonable ask, but I wouldn’t personally trust AI with that. I’ve done something similar in games where I take a picture of something on screen and ask AI to collect all the information from many similar pictures into a table. It’s definitely good enough for gaming, but it makes mistakes often enough I wouldn’t sign my name attesting to the truth of anything it produced, you know?
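For what it’s worth, the “extract, then verify before you trust it” part can be sketched without an LLM at all. The example below uses plain OCR (pytesseract) and assumes a very simple receipt layout ("ITEM NAME 12.34" lines plus a "TOTAL" line); both are assumptions for illustration, not a production pipeline. The reconciliation check at the end is the part that matters if you’d ever have to sign your name to the output.

```python
# Sketch of "gather the numbers, then verify before trusting them".
# Assumes a trivially simple receipt layout; real receipts are messier.

import re
from PIL import Image
import pytesseract

LINE_ITEM = re.compile(r"^(?P<name>.+?)\s+(?P<amount>\d+\.\d{2})\s*$")

def extract_receipt(path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(path))
    items, total = [], None
    for line in text.splitlines():
        m = LINE_ITEM.match(line.strip())
        if not m:
            continue
        name, amount = m.group("name"), float(m.group("amount"))
        if name.upper().startswith("TOTAL"):
            total = amount
        else:
            items.append((name, amount))
    # Cross-check: never sign off on extracted numbers without verifying
    # them against something the document itself states.
    reconciles = total is not None and abs(sum(a for _, a in items) - total) < 0.01
    return {"items": items, "total": total, "reconciles": reconciles}

result = extract_receipt("receipt.jpg")
print(result["reconciles"])  # if False, a human needs to look at it
```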
msage@programming.dev 2 weeks ago
Which chatbots are getting smarter?
I know AI has potential, but specifically LLMs (which most people mean when talking about AI) seem to have hit their technological limits.
Jakeroxs@sh.itjust.works 2 weeks ago
Advanced Reasoning models came out like 4 months ago lol
msage@programming.dev 2 weeks ago
Advanced reasoning? Having LLM talk to itself?
theterrasque@infosec.pub 2 weeks ago
Yes, and it has measurably improved some tasks: roughly a 20% improvement on programming tasks, as a practical example. It has also improved tool use and agentic tasks, allowing the LLM to plan ahead and adjust its initial approach based on later parts of the task.
Having the LLM talk through the task lets it improve or fix bad decisions taken early, based on new realizations at later stages. Sort of like when a human thinks through how to do something.
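A rough sketch of that “talk it through first” pattern is below: one pass where the model reasons and can revise its plan, then a second pass that produces the final answer from that reasoning. The `ask_model` callable is a stand-in for whatever chat-completion call you actually use; it’s an assumption here, not any particular vendor’s API.

```python
# Minimal sketch of a two-pass "reason, then answer" loop.
# `ask_model` is a placeholder for a real chat-completion call.

from typing import Callable

def answer_with_reasoning(ask_model: Callable[[str], str], task: str) -> str:
    # Pass 1: let the model think out loud and correct early decisions
    # once later parts of the problem come into view.
    reasoning = ask_model(
        "Think step by step about how to solve this task. "
        "If a later step shows an earlier choice was wrong, say so and revise it.\n\n"
        f"Task: {task}"
    )
    # Pass 2: produce only the final answer, conditioned on that reasoning.
    return ask_model(
        f"Task: {task}\n\nYour working notes:\n{reasoning}\n\n"
        "Give only the final answer, with no working."
    )

# Stub model so the sketch runs on its own:
def fake_model(prompt: str) -> str:
    return "stub response"

print(answer_with_reasoning(fake_model, "Refactor this function to remove the global state."))
```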
Jakeroxs@sh.itjust.works 2 weeks ago
Lul no, but they are clearly better at many types of tasks.
FreedomAdvocate@lemmy.net.au 2 weeks ago
Copilot, ChatGPT, pretty much all of them.
msage@programming.dev 2 weeks ago
Smarter how? Synthetic benchmarks?
Because I’ve heard the opposite from users and bloggers.
FreedomAdvocate@lemmy.net.au 2 weeks ago
So you want me to provide some evidence that it’s getting smarter, but you can’t provide any that it’s getting worse other than anecdotal evidence?
What evidence would you accept?