morsebipbip
@morsebipbip@lemm.ee
- Comment on Meet the latest way the superrich prove they're really, totally worried about the environment: $10 million electric superyachts 1 year ago:
I don’t think a 10 million dollar yacht, be it electric or diesel powered, is anywhere near a reasonable compromise
- Comment on Meet the latest way the superrich prove they're really, totally worried about the environment: $10 million electric superyachts 1 year ago:
TL;DR: electric $10M yachts aren’t good for the environment; not building any yacht at all is the best answer
- Comment on Meet the latest way the superrich prove they're really, totally worried about the environment: $10 million electric superyachts 1 year ago:
No! It’s because $10 million yachts for billionaires are of course not good for the environment! It’s greenwashing! “Well yeah, I’m a billionaire and I run an ecocidal megacorporation, but look: my luxury superyacht is electric!” I’m baffled that people could ever think this is a good way to mitigate the climate crisis
- Comment on Meet the latest way the superrich prove they're really, totally worried about the environment: $10 million electric superyachts 1 year ago:
You must be joking, right? Someone tell me I’m just not seeing the /s
- Comment on ChatGPT Out-scores Medical Students on Complex Clinical Care Exam Questions 1 year ago:
Don’t get me wrong: human doctors (humans in general, actually) have a lot of problems, and it would be great to have some kind of AI assistance for diagnosis or management. But I don’t think generative AI like ChatGPT is actual AI: it’s a probabilistic algorithm that spits out the word most likely to follow the last one it wrote, based on the material it was trained on. I don’t think we need a doctor like that.
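To be clear about the mechanism I mean: here’s a deliberately toy sketch of next-word prediction, a bigram model that just emits whichever word most often followed the previous one in its training text. This is not how ChatGPT is actually built (real models are far larger and use learned neural representations), and the corpus and function names are invented for the example; it only illustrates the “predict the most likely next word” idea.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration only.
corpus = "the patient has a fever the patient has a cough".split()

# Count, for each word, which words followed it in the corpus.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most frequently followed `word` in the corpus."""
    return follow_counts[word].most_common(1)[0][0]

print(most_likely_next("patient"))  # "has" in this toy corpus
```

The point of the sketch: the model has no notion of a patient or a fever, only of which word tends to come next, which is the worry about relying on it for clinical reasoning.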
- Comment on ChatGPT Out-scores Medical Students on Complex Clinical Care Exam Questions 1 year ago:
That’s interesting, but never forget that the difference between exams and real life is huge. Exam test cases are always more or less typical clinical presentations, with every small element pointing towards the general picture.
In real life there are almost always discrepancies, elements that make no sense at all for the given case, and the whole point of getting residency experience is learning what to make of those contradictory elements: when to question nonsensical lab values, and what to do when a situation doesn’t fit any category of problems you learned to solve.
These are all things I think generative AI, by its very nature of predicting the word most likely to come next based on learned data, wouldn’t be able to do