ComfortablyDumb
@ComfortablyDumb@lemmy.ca
- Comment on Tesla Robotaxi Freaks Out and Drives into Oncoming Traffic on First Day 3 weeks ago:
This technology exists purely to make human drivers redundant and put the money in the hands of big tech, and eventually a ruling class composed of politicians, risk-averse capitalists, and bureaucracy. There is no other explanation for robotaxis to exist. There are better solutions, like trains and metros, which can move people from point A to point B easily. But those don't come with the 3x-10x capital growth that making human drivers redundant will bring the big tech companies.
- Comment on Operation Narnia: Iran’s nuclear scientists reportedly killed simultaneously using special weapon 3 weeks ago:
The innocents in question are human shields being used by Hamas to protect its objectives. The suffering of those people and children should have prompted a massive operation to dismantle Hamas' financial teeth. Instead, Hamas leaders still take all the humanitarian aid and repurpose it against Israel. Qatar and Israel are the real perpetrators of this violence.
- Comment on Suspected 4chan Hack Could Expose Longtime, Anonymous Admins 2 months ago:
Now please expose the powertripping reddit admins as well.
- Comment on Jack Dorsey and Elon Musk would like to ‘delete all IP law’ | TechCrunch 2 months ago:
I would like to take a crack at this. There is a recent trend of "Ghiblifying" one's picture: basically converting a photo into a Ghibli-style image. If the model had been trained only on free sources, this would not be possible.
Internally, an LLM works through networks that activate on certain signals. When you ask it a question, it assembles a network of similar-looking words and gives that back to you. When you convert an image, something similar happens. You cannot form these networks, or the thresholds at which they activate, without seeing copyrighted images from Studio Ghibli. There is no way in hell or heaven for that to happen.
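To make the "networks that activate on certain signals" point concrete, here is a toy sketch of a single artificial neuron with a learned threshold. Everything here (the weights, the threshold, the idea of a "style detector" unit) is made up for illustration; real models have billions of such units, but the principle is the same: the weights only end up tuned to a style by seeing examples of it during training.

```python
def neuron(inputs, weights, threshold):
    """A toy perceptron-style unit: weighted sum of input signals,
    'activating' (returning 1) only past a learned threshold."""
    signal = sum(i * w for i, w in zip(inputs, weights))
    return 1 if signal >= threshold else 0

# Hypothetical weights a unit might learn from seeing many examples
# of one art style. Without those examples in the training data,
# no unit ends up with weights like these.
style_detector_weights = [0.9, 0.8, 0.1]

print(neuron([1, 1, 0], style_detector_weights, 1.5))  # strong match: fires (1)
print(neuron([0, 0, 1], style_detector_weights, 1.5))  # weak match: silent (0)
```

The weights are the whole game: they are nothing but a compressed record of what the network was shown, which is why the training data question matters.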
OpenAI trained their models on pirated material, just like Meta did. So when an AI produces an image in the style of something, it should attribute the person it actually took that style from. That's not what's happening. Instead it just makes more money for the thief.