minoscopede
@minoscopede@lemmy.world
- Comment on Google CIO Calls Trump Admin’s Climate Denialism “Fantastic” | Ruth Porat called for data centers to be powered by coal, gas, and nuclear 6 days ago:
I’ve listened to Ruth Porat speak before and nothing about this article matches that. It feels fake or taken wildly out of context. I’d take it with a grain of salt, especially given how the article doesn’t even link to the primary source. :/
- Comment on New Executive Order: AI must agree with the Administration's views on Sex, Race, can't mention what they deem to be Critical Race Theory, Unconscious Bias, Intersectionality, Systemic Racism or "Transgenderism" 4 weeks ago:
Related PSA: Grok is the top-rated AI app in the Play Store, and we can fix that
- Comment on People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis" 5 weeks ago:
How many people? What percentage of users?
- Comment on How to turn off Gemini on Android — and why you should 1 month ago:
Ew, this article is an ad for another company. I feel icky when people try to monetize my basic human rights.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 2 months ago:
I think you might not be using the vocabulary correctly. The statement “Markov chains are still the basis of inference” doesn’t make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes, which are used in training RL agents, but that’s also unrelated, because these models are not RL agents; they’re trained with supervised learning. And even if they were RL agents, the MDP describes the training environment, not the model itself, so it’s not really used for inference.
I’d encourage you to research more about this space and learn more. We need more people who are skeptical of AI doing research in this field, and many of us in the research community would be more than happy to welcome you into it.
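To make the distinction above concrete, here is a minimal sketch of a Markov chain (the states and probabilities are invented for illustration): the next state depends only on the current state via fixed transition probabilities, which is quite different from an LLM conditioning on its whole input context, and different again from an MDP, which adds actions and rewards.

```python
import random

# Illustrative two-state Markov chain: fixed transition probabilities,
# no actions, no rewards. State names and numbers are made up.
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state):
    # The next state is sampled from the current state's row alone;
    # earlier history plays no role (the Markov property).
    states, probs = zip(*transitions[state])
    return random.choices(states, weights=probs)[0]

state = "sunny"
for _ in range(5):
    state = step(state)
```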
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 2 months ago:
I see a lot of misunderstandings in the comments 🫤
This is a pretty important finding for researchers, and it’s not obvious by any means. It doesn’t show a problem with LLMs’ abilities in general. The issue they discovered more likely lies in the training, specifically for so-called “reasoning models” that iterate on their answer before replying.
Most reasoning models are not incentivized to think correctly; they are rewarded only on their final answer. This research might indicate that’s a flaw that needs to be corrected. If so, it opens the door to experimenting with more rigorous training processes that could lead to more capable models that actually do “reason”.
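The incentive gap described above can be sketched in a few lines. Everything here is illustrative (the function names and grading scheme are invented, not from any real training pipeline): an outcome-only reward ignores the reasoning steps entirely, while a process-style reward also grades each intermediate step.

```python
def outcome_only_reward(chain_of_thought, final_answer, correct):
    # Reward depends solely on the final answer; flawed reasoning
    # that happens to land on the right answer scores just as well
    # as sound reasoning.
    return 1.0 if final_answer == correct else 0.0

def process_reward(steps, step_is_valid, final_answer, correct):
    # A more rigorous (hypothetical) scheme: grade each intermediate
    # step as well, so correct reasoning itself is incentivized.
    step_score = sum(step_is_valid(s) for s in steps) / max(len(steps), 1)
    return 0.5 * step_score + 0.5 * (final_answer == correct)
```

Under the first scheme, a model that guesses its way to the right answer is indistinguishable from one that reasons soundly; the second scheme separates the two.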
- Comment on Plex now will SELL your personal data 2 months ago:
Beautiful! I’ll definitely give this a go
- Comment on Plex now will SELL your personal data 2 months ago:
python -m http.server is still my media server of choice. It’s never let me down.
- Comment on Paul McCartney and Dua Lipa among artists urging British Prime Minister Starmer to rethink his AI copyright plans 3 months ago:
It only seems to make a difference when the rich ones complain.
- Comment on Trying to avoid antitrust suits, Google senior executives told employees to destroy messages 3 months ago:
I read the article, and it’s way less bad than the title made it sound. They just set company chats to disappear after some number of days and told employees to “not speculate about legal matters until they have the facts”.
This has been the policy of every company I’ve worked at, including university IT and Amazon.
- Comment on China scientists develop flash memory 10,000× faster than current tech 4 months ago:
Link to the actual paper: www.nature.com/articles/s41586-025-08839-w