projectmoon
@projectmoon@lemm.ee
- Comment on How does AI-based search engines know legit sources from BS ones ? 8 hours ago:
A lot of the answers here are short or quippy, so here's a more detailed take. LLMs don't "know" how good a source is; they are word-association machines, and they are very good at that. When you use something like Perplexity, an external search API feeds the results of your query into the LLM, which then summarizes that text in (hopefully) a coherent way. There are ways to reduce the hallucination rate and check the factual accuracy of sources, e.g. by comparing the generated text against authoritative information, but how much of that Perplexity et al. actually employ, I have no idea.
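To make the "search feeds the LLM, then check the output against sources" idea concrete, here's a minimal sketch of a retrieval-plus-groundedness-check pipeline. Every function name and the word-overlap heuristic are my own illustrative assumptions, not how Perplexity actually works; a real system would use a search API, an actual LLM, and claim-level fact checking rather than word overlap.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (a stand-in for a real search API)."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: len(q & set(doc.lower().split())), reverse=True)
    return ranked[:k]

def summarize(snippets):
    """Stand-in for the LLM summarization step: here we just join the retrieved text."""
    return " ".join(snippets)

def groundedness(summary, snippets):
    """Fraction of summary words that appear in the retrieved sources.
    A real factuality check would compare claims, not individual words."""
    source_words = set(" ".join(snippets).lower().split())
    words = summary.lower().split()
    return sum(w in source_words for w in words) / max(len(words), 1)

corpus = [
    "Dolphins use signature whistles to identify each other.",
    "LLMs are trained on large text corpora.",
    "Matrix is a federated chat protocol.",
]
snippets = retrieve("how do LLMs summarize text", corpus)
answer = summarize(snippets)
score = groundedness(answer, snippets)  # 1.0 here, since the "summary" is a verbatim join
```

The point of the last step is that hallucination checks happen *after* generation: you score the output against the retrieved sources, and a low score flags text the sources don't support.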
- Comment on Google created a new AI model for talking to dolphins 1 month ago:
This is probably one of the best actual uses for something like generative AI. With enough data, they should be able to vectorize and translate dolphin language, assuming there is one.
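"Vectorize" here means mapping each vocalization to a point in a feature space, where similar sounds land near each other. A toy sketch of that idea, using cosine similarity to match an unknown vector against known ones — the vectors and labels are entirely made up, and a real system would learn embeddings from audio rather than use hand-written features:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical reference embeddings for two known call types.
known = {
    "signature_whistle": [0.9, 0.1, 0.0],
    "echolocation_click": [0.1, 0.8, 0.3],
}

def nearest(vec):
    """Label an unknown vocalization by its nearest known embedding."""
    return max(known, key=lambda k: cosine(known[k], vec))

label = nearest([0.85, 0.15, 0.05])  # closest to "signature_whistle"
```

Translation would then be a mapping between this space and some human-interpretable one, which only works if dolphin vocalizations actually carry that kind of structure.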
- Comment on What Ever Happened to MSN Messenger? 7 months ago:
Have you tried Matrix?