chrash0
@chrash0@lemmy.world
- Comment on Gemini AI tells the user to die — the answer appeared out of nowhere when the user asked Google's Gemini for help with his homework 3 days ago:
the reactionary opinions are almost hilarious. they’re like “ha this AI is so dumb it can’t even do complex systems analysis! what a waste of time” when 5 years ago text generation was laughably unusable and AI generated images were all dog noses and birds.
- Comment on Steve Ballmer was an underrated CEO. 3 weeks ago:
you have to do a lot of squinting to accept this take.
so his wins were copying competitors, and even those products didn’t see success until they were completely revolutionized (Bing in 2024 is a Ballmer success? .NET becoming widespread is his doing?). one thing Nadella did was embrace the competitive landscape and open source, with key acquisitions like GitHub and the open sourcing of .NET. i honestly don’t have the time to fully rebut this hot take, but i don’t think the Ballmer haters are totally off base here. even if some of the products started under Ballmer are now successful, it feels disingenuous to attribute their success to him. it’s like an alcoholic dad taking credit for his kid becoming an actor. Microsoft is successful despite him
- Comment on AI sees beyond humans: automated diagnosis of myopia based on peripheral refraction map using interpretable deep learning. 1 month ago:
All programs were developed in Python language (3.7.6). In addition, freely available Python libraries of NumPy (1.18.1) and Pandas (1.0.1) were used to manipulate data, cv2 (4.4.0) and matplotlib (3.1.3) were used to visualize, and scikit-learn (0.24.2) was used to implement RF. SqueezeNet and Grad-CAM were realized using the neural network library PyTorch (1.7.0). The DL network was trained and tested using a DL server mounted with an NVIDIA GeForce RTX 3090 GPU, 24 Intel Xeon CPUs, and 24 GB main memory
it’s interesting that they’re using pretty modest hardware (i assume they mean 24 cores not CPUs) and fairly outdated dependencies. also having their dependencies listed out like this is pretty adorable. it has academic-out-of-touch-not-a-software-dev vibes. makes you wonder how much further a project like this could go with decent technical support. like, all these talented engineers are using 10k times the power to work on generalist models like GPT that struggle at these kinds of tasks, while promising that it would work someday and trivializing them as “downstream tasks”. i think there’s definitely still room in machine learning for expert models; sucks they struggle for proper support.
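for what it’s worth, the Grad-CAM they mention is a pretty small amount of math: global-average-pool the gradients flowing into the last conv layer to get per-channel weights, then take a ReLU of the weighted sum of activation maps. a numpy sketch with made-up tensors (names and shapes are mine, not the paper’s code):

```python
import numpy as np

def grad_cam(activations, gradients):
    # activations, gradients: (channels, h, w) from the last conv layer
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP over spatial dims
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k -> (h, w)
    return np.maximum(cam, 0)                         # ReLU keeps positive evidence only

rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))    # stand-in feature maps
grads = rng.random((8, 7, 7))   # stand-in gradients w.r.t. those maps
heatmap = grad_cam(acts, grads)
```

in practice you’d upsample `heatmap` to the input resolution and overlay it on the image, which is presumably what the paper’s interpretability figures show.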
- Comment on CrowdStrike unhappy with “shady commentary” from competitors after outage 2 months ago:
i was mostly making a joke about how this absolutely is not a common problem on any platform, not to this degree. and at least when my Arch and Nix systems go down i don’t have anyone to blame but myself. sure, systems have update issues, but a kernel level meltdown that requires a safe mode rescue? that’s literally never happened to me unless it was my fault
- Comment on CrowdStrike unhappy with “shady commentary” from competitors after outage 2 months ago:
damn i haven’t used Windows in over a decade. are y’all ok?
- Comment on 4 months ago:
it’s super weird that people think LLMs are so fundamentally different from neural networks, the underlying technology. neural network architectures are constantly improving, and LLMs are just the product of a ton of research that emerged after the discovery of the transformer architecture. what LLMs have shown us is that we’re definitely on the right track using neural networks to solve a wide range of problems classified as “AI”
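and the transformer piece at the core of all of it is mostly scaled dot-product attention, which fits in a few lines. a toy numpy sketch of a single head (shapes and names are mine, not any particular implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # (seq, seq) similarity matrix
    return softmax(scores) @ V      # each output row is a weighted mix of V rows

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = attention(Q, K, V)
```

real transformers stack many of these heads with learned projections, but the mechanism is the same.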
- Comment on Meta to broaden hate speech policy to remove more posts targeting 'Zionists' 4 months ago:
most Zionists i’ve met are white Protestants, and most Jews i’ve met aren’t Zionists…
- Comment on Zuckerberg disses closed-source AI competitors as trying to 'create God' 4 months ago:
simply not true. they’re no angels or open source champions, but come on.
- Comment on McDonald’s Gives Up On ‘AI’ After Comedy Of Errors, Including Putting Bacon On Ice Cream 4 months ago:
sure it does. it won’t tell you how to build a bomb or demonstrate explicit biases that have been fine-tuned out of it. the problem is that McDonald’s isn’t an AI company and is probably just using ChatGPT on the backend, and GPT doesn’t give a shit about bacon ice cream out of the box.
- Comment on Has Generative AI Already Peaked? - Computerphile 6 months ago:
“we don’t know how” != “it’s not possible”
i think OpenAI more than anyone knows the challenges with scaling data and training. anyone working on AI knows the line: “a baby can learn to recognize elephants from a single instance”. reducing training data and time is fundamental to advancement. don’t get me wrong, it’s great to put numbers to these things. i just don’t think this paper is super groundbreaking or profound. a bit clickbaity and sensational for Computerphile
- Comment on Has Generative AI Already Peaked? - Computerphile 6 months ago:
gotem!
seriously tho, you don’t think OpenAI is tracking this? architectural improvements and training strategies are developing all the time
- Comment on Rabbit R1 is Just an Android App 6 months ago:
i didn’t think people would really be surprised. but maybe i’m jaded by my experience in the industry.
if we’re arguing whether or not it’s objectively stupid, i think that’s up to the market to decide.
kinda seems like a toy to me anyway, and it’s kind of priced that way
- Comment on Rabbit R1 is Just an Android App 6 months ago:
what else would it be? it’s a pretty common embedded target. dev kits from Qualcomm come with Android and use the Android bootloader and debug protocols at the very least.
nobody is out here running a plain Linux kernel and maintaining a UI stack while AOSP exists. would be a foolish waste of time for companies like Rabbit to use anything else imo.
to say it’s “just an Android device” is both true and a mischaracterization. it’s likely got a lot in common with a smartphone, but they’ve made modifications and aren’t supporting app stores or sideloading. doesn’t mean you can’t do it, just don’t be surprised when it doesn’t work 1:1
- Comment on Stop Using Your Face or Thumb to Unlock Your Phone 6 months ago:
like i said, it’s more of a username than a password
- Comment on Stop Using Your Face or Thumb to Unlock Your Phone 6 months ago:
it’s an analogy that applies to me. tldr worrying about having my identity stolen via physical access to my phone isn’t part of my threat model. i live in a safe city, and i don’t have anything the police could find to incriminate me. everyone is going to have a different threat model. some people need to brick up their windows
- Comment on Stop Using Your Face or Thumb to Unlock Your Phone 6 months ago:
it’s not a password; it’s closer to a username.
but realistically it’s not in my personal threat model to be ready to get tied down and forced to unlock my phone. everyone with windows on their house should know that security is mostly about how far an adversary is willing to go to try to steal from you.
personally, i like the natural daylight, and i’m not paranoid enough to brick up my windows just because it’s a potential ingress.
- Comment on Big Tech Is Faking AI 7 months ago:
seems like chip designers are being a lot more conservative from a design perspective. NPUs are generally a shitton of 8-bit registers with optimized matrix multiplication. the “AI” that’s important isn’t the stuff in the news or the startups; it’s the things we’re already taking for granted: speech to text, text to speech, semantic analysis, image processing, semantic search, etc. sure there’s a drive to put larger language models or image generation models on embedded devices, but a lot of these applications are battle tested and would be missed or hampered if that hardware wasn’t there. “AI” is a buzzword and a goalpost that moves at 90 mph. machine learning and the hardware and software ecosystem that’s developed more or less quietly in the background over the past 15 or so years (at least compared to ChatGPT) are revolutionary tech that will be with us for a while.
blockchain currency never made sense to me from a UX or ROI perspective. it was designed to get more power-hungry as adoption took off, and the power and compute optimizations were always conjecture. the way wallets are handled, and how privacy was barely a concern, was never going to fly with the masses. pile on that finance is just a trash profession that requires goggles that turn every person and thing into an evaluated commodity, and you have a recipe for a grift economy.
a lot of startups will fail, but “AI” isn’t going anywhere. it’s been around as long as computers have. i think we’re going to see a similarly (to chip designers) cautious approach from companies like Google and Apple, as more semantic search, image editing, and conversation bot advancements start to make their way to the edge.
- Comment on Washington's Lottery AI site turned user photo into porn 7 months ago:
they likely aren’t creating the model themselves. the faces are probably all the same AI girl you see everywhere. you gotta be careful with open weight models because the open source image gen community has a… proclivity for porn. there’s not a “function” per se for porn. they may be doing some preprompting, or maybe “swim with the sharks” is just too vague a prompt and the model was tuned on this kind of stuff. you can add an evaluation network at the end that basically asks “is this porn/violent/disturbing?”, but that needs to be tuned as well. most likely it’s even dumber than that, where the contractor just subcontracted the whole AI piece and packaged it for this use case
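the evaluation-network idea is just a gate between the generator and the user. a sketch of the plumbing (everything here is a hypothetical stand-in, not any vendor’s API; `generate` and `classify` are whatever model calls you actually have):

```python
def generate_with_filter(prompt, generate, classify, threshold=0.5):
    """Run a generated image through a safety classifier before returning it."""
    image = generate(prompt)
    scores = classify(image)  # e.g. {"nsfw": 0.9, "violence": 0.1}
    if max(scores.values()) > threshold:
        return None  # reject; caller falls back to a safe default or a retry
    return image

# stub run: a fake generator and a classifier that flags the output
blocked = generate_with_filter("swim with the sharks",
                               lambda p: "img",
                               lambda i: {"nsfw": 0.9})
allowed = generate_with_filter("a shark",
                               lambda p: "img",
                               lambda i: {"nsfw": 0.1})
```

the catch, as noted above, is that `classify` is itself a tuned model, so you’ve just moved the problem one network downstream.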