General_Effort
@General_Effort@lemmy.world
- Comment on While China has warned the West against 'decoupling', the country’s censorship system is designed for the purpose of isolation, report says 3 days ago:
I just described what’s going on. The world outside of China or Russia is going slower but the direction is the same.
- Comment on While China has warned the West against 'decoupling', the country’s censorship system is designed for the purpose of isolation, report says 4 days ago:
Borders in cyberspace are the future. There are increased efforts to regulate the internet everywhere. Think copyright, age verification, the GDPR, or even anti-CSAM laws. It’s all about making sure that information is only available to people who are permitted to access it. China is really leading the way here.
We do not agree with China’s regulations, but that only means that we need border controls. Data must be checked for regulatory compliance with local laws.
- Comment on MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says 1 month ago:
I thought the same thing, but mind: That’s what the victims did. See my other reply going into this more.
- Comment on MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says 1 month ago:
It reminded me of high-frequency trading.
Mind, the people who do that are the victims here!
I didn’t explain how exactly they were harmed. It’s actually kinda funny, too.
It costs virtually nothing to create crypto-tokens. So that’s what people do. Do some wash trades, slip some money to influencers to hype their new token as the next big thing, then offload the whole supply and run with the money. The “investors” quickly discover that these tokens are only good for one thing: To sell to a greater fool. At that point, there are no more buyers.
The accused obtained such useless tokens. The indictment doesn’t say how. I guess they simply bought them for next to nothing.
Effectively, they tricked the victims’ bots into buying these tokens at face value. The victims were left with crypto supposedly worth $25 million but in reality unsellable. If this was stealing $25 million, then I wonder about the legality of selling these crypto tokens in the first place.
Eventually, all crypto is like that. Some cryptocurrencies are used as payment systems, but eventually something better must come along. Then that currency becomes unsellable. Someone must always be left holding the bag, as it is said in crypto circles.
I think they are guilty of fraud. But I do wonder: If we are to accept that leaving someone with worthless crypto is equal to stealing money, what does that mean for the legality of crypto as a whole?
- Comment on OpenAI strikes Reddit deal to train its AI on your posts 1 month ago:
Refreshing to see a post on this topic that has its facts straight.
EU copyright allows a machine-readable opt-out from AI training (unless it’s for scientific purposes). I guess that’s behind these deals. It means they will have to pay off Reddit and the other platforms for access to the EU market. Or more accurately, EU customers will have to pay Reddit and the other platforms for access to AIs.
- Comment on MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says 1 month ago:
I’ll try a simple explanation of what this is about, because this is hilarious. It’s the kind of understated humor you get in a good British comedy.
For a payment system, you must store who owns how much and how the owners transfer the currency. Easy-peasy. A simple office PC can handle that faster and cheaper than a blockchain. But what if the owner of the PC decides to manipulate the records? No problem, you just go to the police with your own records and receipts and they go to jail for fraud. Their belongings are sold off to pay you damages. That’s how these things have worked since forever. It’s how businesses keep track of their debts.
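A minimal sketch of that conventional record-keeping in Python (all names and balances are invented for illustration): one machine holds the ledger, both parties keep receipts, and disputes go to a court rather than a consensus protocol.

```python
# A toy ledger, the kind any office PC could run.
balances = {"alice": 100, "bob": 50}
receipts = []  # both parties keep copies; courts settle disputes

def transfer(sender, receiver, amount):
    if balances.get(sender, 0) < amount:
        raise ValueError("insufficient funds")
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount
    receipts.append((sender, receiver, amount))

transfer("alice", "bob", 30)
print(balances)  # {'alice': 70, 'bob': 80}
```

The entire "trust" layer here is external: if the record keeper cheats, the receipts plus the legal system make them whole again.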
Just one little problem: What if the government wants your money? Maybe you don’t want to pay your taxes, or some fine. Or maybe you have debts you don’t want to pay, like your alimony. Perhaps the government wants to seize the proceeds from a drug deal. They can just go to the record keeper and force them to transfer currency.
This is where cryptocurrencies come to the rescue (as it were). There are different schemes. ETH (Ethereum) uses validators. The validators are paid to take care of the record-keeping. The trick is that you have to put down ETH as collateral (called staking) to run a validator. If you manipulate the record/blockchain, then the other validators will notice and raise the alarm. That results in you losing your collateral.
This means the validators can remain anonymous. You don’t need to know their identities to punish them for fraud. You just take their crypto-money. They need to remain anonymous so that the government (or the mob) can’t get to them.
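The staking/slashing idea can be sketched like this (a toy model, not actual Ethereum client code; the names and the 32 ETH figure are just illustrative):

```python
# Validators post collateral; provable misbehavior forfeits it.
# No identities needed -- the punishment is purely economic.
stakes = {"validator_a": 32.0, "validator_b": 32.0}  # ETH staked as collateral

def slash(validator, evidence_of_fraud):
    """If other validators can prove fraud, the collateral is forfeited."""
    if evidence_of_fraud and validator in stakes:
        return stakes.pop(validator)  # anonymous, but still punishable
    return 0.0

lost = slash("validator_a", evidence_of_fraud=True)
print(lost)  # 32.0 -- the fraudster's stake is gone
```

The design goal is exactly what the comment describes: you can punish an anonymous party because the punishment is taking their crypto, not taking them to court.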
This is where it gets hilarious. These two brothers operated fraudulent validators. The stake, the collateral, the whole scheme didn’t matter at all. It was a horrible waste of money and effort. The indictment details how they tried to launder the crypto; that is, how they tried to transfer it so that it couldn’t be traced on the blockchain. It even includes the search queries they used to look up how to do that.
It’s all a sham. The one thing that crypto is supposed to do: Foil the government. And it doesn’t work.
When people want to buy crypto on the blockchain, they put out a request so that a validator will execute that transaction and record it on the blockchain. So, while the request is waiting, a bot comes along and scans it. It may be that a purchase changes the exchange value of a currency. In that case, the bot adds two more transactions: one to buy that currency before the original request, and one to sell it afterward. The original request drives up the price between the buy and the sell, so the bot makes a profit for its operator. The original request has to pay a little extra. That’s where the profit comes from.
Sound shady? I hope not, because that’s what the victims did.
The accused operated their own validators. At the right time, they put out their own buy request to lure in a bot. When the bot proposed the bundled transactions, their validators feigned acceptance but then switched the lure transaction from a buy to a sell.
The indictment makes a fairly good argument. It’s like there is a “contract” between these automatic systems. The trading bot wants the bundled transactions to be carried out exactly so. The validator feigns agreement, but does not follow through.
- Comment on MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says 1 month ago:
No one ever said ATM code is law. Ethereum code is supposed to be; “code is law” is one of their slogans.
Everything that a blockchain does could be handled by a single office computer. The whole reason for the huge, expensive overhead is to put crypto beyond the law. Stuff like this exposes the whole, huge waste of human effort.
- Comment on MIT-educated brothers accused of stealing $25 million in cryptocurrency in 12 seconds in Ethereum blockchain scheme 1 month ago:
I wish people would go straight to the source for these stories. No reason to link to something that only paraphrases a press release and adds some ads.
Press release (contains link to indictment):
- Comment on Stack Overflow Users Are Revolting Against an OpenAI Deal | WIRED 1 month ago:
Hugging Face is the usual platform for sharing datasets and models.
- Comment on This Week in AI: OpenAI considers allowing AI porn | TechCrunch 1 month ago:
Good question. That is (almost certainly) political speech and as such especially protected by law. It’s also quite controversial and so companies will try to prevent their services being used for it.
- Comment on After announcing increased prices, Spotify to Pay Songwriters About $150 Million Less Next Year 1 month ago:
In 2023, Taylor Swift got $100 million from Spotify. How much should she get?
- Comment on Stack Overflow bans users en masse for rebelling against OpenAI partnership — users banned for deleting answers to prevent them being used to train ChatGPT 1 month ago:
They are not. A derivative would be a translation, a theater play, or, nowadays, a game or movie. Even stuff set in the same universe.
Expanding the meaning of “derivative” so massively would mean that pretty much any piece of code ever written is a derivative of technical documentation and even textbooks.
So far, judges simply throw out these theories without even debating them in court. Society would still have to move a lot further to the right before these ideas become realistic.
- Comment on Stack Overflow bans users en masse for rebelling against OpenAI partnership — users banned for deleting answers to prevent them being used to train ChatGPT 1 month ago:
They are also retained by anyone who has archived them, like OpenAI or Google, thus making their AIs more valuable.
To really pull up the ladder, they will have to protest the Internet Archive and Common Crawl, too. It’s just typical right-wing bullshit; acting on emotion and against their own interests.
- Comment on OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn 1 month ago:
Where is that from?
- Comment on Downloading nematode brains from Github 1 month ago:
We invented the Matrix for worms.
Links:
- Comment on Nintendo DMCA Notice Wipes Out 8,535 Yuzu Repos, Mig Switch Also Targeted. 1 month ago:
The EU tends to be much harsher in these matters, though some members don’t follow along.
- Comment on Zuckerberg says Meta's Llama 3 is really good but no chatbot is sophisticated enough to be an 'existential' threat — yet 2 months ago:
Behind every successful man there is a woman, making him look like he has a mullet.
- Comment on Adobe's new generative AI tools for video are absolutely terrifying 2 months ago:
Yes. It is a new tool for vfx artists and not a replacement. If they can deliver higher quality for less money, you’d expect them to be more in demand.
“Never” is a big word, but it’s really not clear how one would train an AI to know what it should generate. See the hubbub about diversity in Google’s image generator. I see no theoretical problems, but in practice it’s just not going to happen any time soon.
- Comment on Adobe's new generative AI tools for video are absolutely terrifying 2 months ago:
Needlessly dangerous. The only positive outcome would be to make people aware of what is possible. The danger is that non-marked media will appear more credible.
- Comment on On Being an Outlier 2 months ago:
That is, indeed, the point.
I think you misunderstand. She is shifting responsibility.
But Apple landed itself in court because it had no clue how its credit algorithm worked and could not conceive how it could possibly be sexist if the machine didn’t get any gender data to analyse.
This appears to be wrong.
- Comment on On Being an Outlier 2 months ago:
I’m not really sure what the author is trying to do here. The way he plays with the meaning of words, like “culling the outlier,” is interesting as a literary device. But it is also actively harmful to understanding or bettering the issues raised.
“AI” is interpreted as “algorithmic inferences.” This paves over any of the technical distinctions between statistics, ML, AI, and neural nets. In the current hype, the term AI is often narrowed down to mean neural nets but the author widens the meaning. In the text, “AI” includes any kind of bureaucratic or rule-based decision-making.
The effect is to transfer responsibility away from decision-makers, organizations, and even society, at large, to a vaguely understood new technology.
I can see that this could be welcome to these decision-makers and organizations. And so it has the potential to attract funding from them. Perhaps that is the point.
- Comment on Odours have a complex topography, and it’s been mapped by AI 2 months ago:
These terms were coined by academics. These people feel that “learning” is part of appearing intelligent. They don’t get out much.
- Comment on Odours have a complex topography, and it’s been mapped by AI 2 months ago:
Paper: www.biorxiv.org/content/…/2022.09.01.504602v2
Open replication: github.com/BioMachineLearning/openpom
- Comment on Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images 2 months ago:
Adobe trained its AI “Firefly” on its stock library (and other images). Their library contains AI images. It’s unlikely that these are all from Midjourney.
I’m not sure what you mean by samey. As I said, people chase the same mainstream taste. If the images from one service look samey, then they probably figure that’s what the customers want. It’s also possible that you only recognize this type of image as AI generated.
- Comment on Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images 2 months ago:
Put it like this: too much variety is the biggest problem in terms of quality. People don’t want variety in terms of, say, the number of limbs or fingers. People have something specific in mind when they prompt an AI. They only want very limited and specific variability.
In a sense, limiting variety is the whole point of the AI. There is a vast number of possible images. Most of them would be simply indistinguishable noise to us. The proportion we would consider a sensible picture is tiny. We want to constrain the variety to within this tiny segment.
- Comment on Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images 2 months ago:
That’s not what anyone would do in reality, though. In reality, when you train an AI model on AI output you get a quality increase, because the model learns to be better at doing the things it’s supposed to do, while forgetting the irrelevant. Where output looks samey, it’s because different people are chasing the same mainstream taste.
- Comment on Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images 2 months ago:
Adobe trains on images submitted to their stock image marketplace. Deciding to submit is the first selection step. Then there is some quality control by Adobe; mainly AI powered, I’d guess. Adobe also has the sales data (again, human selection) and additional tracking data; how many people clicked a thumbnail and so on.
What people imagine here about quality loss is completely divorced from reality.
- Comment on Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images 2 months ago:
we’re still feeding them “more data”.
Yes, that’s one way of putting it. What gets into the Adobe stock database is already curated. They also have the sales and tracking data.
Though as these generative models get better and better at mimicking real world data
Also yes on this. It doesn’t matter if your data is synthetic but only if it’s fit for purpose. That’s especially true in this case, where the distinction between synthetic and real is so unclear. You’re already including drawings, renders, photomanips, etc. I have no idea what kind of misconception people have that they would think it matters if some piece of digital art is AI generated.
- Comment on Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images 2 months ago:
“Retardation”? Seriously?
- Comment on Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images 2 months ago:
Yes, if you want realism. But that’s just one of the things that people look for. Personal preference.