ClamDrinker
@ClamDrinker@lemmy.world
- Comment on Anon questions our energy sector 1 day ago:
I’m sure such cases exist, but where I’m from people don’t really get paid to host turbines, though companies sometimes are. They dislike them because they affect the view in the area, and especially if you live very close to them the blades can cause noticeable flickering shadows. That latter point has a lot more weight in my eyes, but people really do care about the former as well, and it’s kind of hard to push back on that when they live there and you don’t.
- Comment on Anon questions our energy sector 3 days ago:
There is competition in battery production. Pretty much all of society would be better off with better batteries, so price gouging in an industry like that is quite hard. And if it did happen, it would not go unnoticed.
The problem is simply the technology. There are advancements like molten salt batteries, but they’re practically in their infancy. The moment a technology like that becomes a big improvement over the norm, it will pretty much immediately cause a paradigm shift in energy production, and every company will want a piece of the pie. So you’ll know it when you see it. But it might also just start off very underwhelmingly, like nuclear fission did, and be very gradually improved in the hope it can scale beyond the current best battery technologies.
All we can do is wait and hope for a breakthrough, I guess. Because cheap and abundant batteries could really help massively with reducing our carbon output.
- Comment on Anon questions our energy sector 3 days ago:
2160 MW is its rated capacity. I’m not sure how you got from there to 14.2 dollars per watt, but it completely ignores the lifetime of the power plant.
Vogtle 3&4 are really a bad example because unit 4 only entered commercial operation this year. But fine, we can look at what it has produced just recently. About 3,335,000 MWh per month, or about 107 GWh per day. We can then subtract the baseline from reactors 1 & 2 from before reactor 3 opened, removing about 1,700,000 MWh per month. That gives us about 53 GWh per day. Their lifetime is expected to be around 60 to 80 years, but let’s take 60. That’s about 1,177,200 GWh over their lifetime, divided by the 36 billion dollars it cost to build… gives you about 0.03 dollars per kWh. Which is pretty much as good as renewables get as well. Of course, this ignores maintenance, but that’s hard to calculate for solar panels too. As such it will be somewhat higher than 0.03, I will admit.
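If you want to sanity-check that napkin math, here’s the same calculation as a tiny Python sketch. All figures are the ones quoted above, nothing more authoritative:

```python
# Rough lifetime-cost estimate for Vogtle units 3 & 4 (figures from above)
monthly_output_mwh = 3_335_000   # all four units, recent monthly output
baseline_mwh = 1_700_000         # units 1 & 2 alone, before unit 3 opened
units_3_4_daily_gwh = (monthly_output_mwh - baseline_mwh) / 1000 / 31
lifetime_years = 60              # low end of the 60-80 year estimate
lifetime_gwh = units_3_4_daily_gwh * 365 * lifetime_years
build_cost_usd = 36e9
usd_per_kwh = build_cost_usd / (lifetime_gwh * 1e6)  # 1 GWh = 1e6 kWh
print(f"{units_3_4_daily_gwh:.0f} GWh/day, ~${usd_per_kwh:.3f}/kWh before maintenance")
```

Running it gives roughly 53 GWh/day and about $0.031/kWh, matching the numbers above.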
Solar panels, on the other hand, often have a lifetime of 30 years, so even though they cost less per watt, MW, or GW, they also produce less over time. For solar and wind that’s about the same. So cost per watt alone doesn’t really say much.
But that wasn’t even the point of my message. As I said, I agree that nuclear is slightly more expensive than renewables. But there are other costs associated with renewables that aren’t well expressed in the monetary value per unit alone: infrastructure, space, approval, experts to maintain it.
Let’s ignore the claim that no grid in the country actually needs 10-hour storage yet. They don’t, because they can’t: there isn’t enough battery capacity for it. If the sun is behind clouds for a day and the wind doesn’t blow, who’s going to power the grid for that day? That’s right: mostly coal and gas. That’s the point. Nuclear is there to ensure we don’t fall back on fossil fuels when we want to be carbon neutral, which means zero carbon output. If you are carbon neutral only when the weather is perfect for renewables, then you’re not really carbon neutral, and you’d still have to produce a ton of pollution at times.
I’m glad batteries and the like are getting cheaper. They are definitely needed, also for nuclear. But you must also be aware of just how damn dirty they are to produce. The minerals required to produce them are rare and expensive. Wind power also kills people who have to maintain it. Things aren’t so black and white.
Also consider that PV and batteries have always gotten cheaper over time, while nuclear has always gotten more expensive.
This is not true, and it should be obvious when you think about it, since this data fluctuates all the time. Nuclear has been more expensive in the past before getting cheaper, and is now getting more expensive again. Solar and wind have had peaks of being far more expensive than before. These numbers are just a representation of aggregate data, and they often leave out nuance, like renewables being favored by regulations and subsidies. They are in part a manifestation of the resistance to nuclear. Unlike renewables, there are many more efficiency gains still to be made in nuclear. Most development has (justifiably) been focused on safety so far, whereas with solar, wind, and batteries we can look away from the slave labor on the other side of the world that produces the rare earth metals needed for them. There is no free lunch in this world.
For what its purpose should be, which is to provide a baseline of electricity production when renewables are not as effective, a higher price can be justified. It’s not meant to replace renewables altogether. Because if renewables can’t produce clean energy, their price might as well be infinitely high in that moment, which leaves our only options as fossil fuels, hydro, batteries, or nuclear. Why not fossil fuels should be obvious, not everyone has hydro (let alone enough), batteries don’t have the capacity or numbers at the scale required (for the foreseeable future), and nuclear is here right now.
- Comment on Anon questions our energy sector 4 days ago:
Solar and wind are cheaper, yes. Batteries, no. If batteries were that cheap and easy to place, we’d have solved energy a long time ago. Currently batteries don’t hold a candle to live production; the closest you can get is pumped hydro storage, which not everyone has and which can’t realistically be built everywhere.
Look at the stats. The second largest battery storage facility in the US (and the world) is located at the Moss Landing Power Plant. It provides a capacity of 3,000 MWh, with 6,000 MWh planned. That sounds like a lot, but it’s located near San Jose and San Francisco, so let’s pick just one of those counties to compare. The average energy usage in Santa Clara County is 17,101 GWh per year, which is about 46.8 GWh per day, or 46,800 MWh. So you’d need about eight of those facilities at 6,000 MWh to even be able to store a day’s worth of electricity for that county alone, which has a population of about 2 million people. And that’s not even talking about all the realities that come with electricity, like peak loads.
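Same napkin math in a few lines of Python, using just the figures above, for anyone who wants to poke at it:

```python
# How many Moss-Landing-sized batteries would one county need for one day?
county_usage_gwh_per_year = 17_101        # Santa Clara County, figure cited above
daily_need_mwh = county_usage_gwh_per_year / 365 * 1000
battery_capacity_mwh = 6_000              # Moss Landing's planned capacity
print(f"{daily_need_mwh:,.0f} MWh/day -> "
      f"{daily_need_mwh / battery_capacity_mwh:.1f} facilities for a single day")
```

That prints about 46,852 MWh/day, or 7.8 full-sized facilities, and that’s a flat average with no peak loads.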
Relative to how much space wind and solar use, nuclear is the clear winner. If a country doesn’t have massive amounts of empty area, nuclear is indispensable. People also really hate seeing solar and wind farms. That’s not something I personally mind too much, but even in the best of countries people oppose renewables simply because, to them, it ruins their surroundings. Creating the infrastructure for such distributed energy networks to sustain large solar and wind farms is also quite hard and requires personnel that the entire world has shortages of, while a nuclear reactor is centralized and much easier to set up, since it’s similar to existing power plants. But a company that can build a nuclear plant isn’t going to be able to build a solar farm or a wind farm, and in a similar way, if every company that can make solar farms or wind farms is busy, their price will go up too. By balancing the load between all three technologies, we ensure the transition can happen as fast and affordably as possible.
There’s also the fact that it always works and can be scaled up or down on demand, and as such is the least polluting source (on the same level as renewables) that can reliably replace coal, natural gas, biomass, and any other always-available source. You don’t want to fall back on those when the sun doesn’t shine or the wind doesn’t blow. If batteries were available to store that energy, it’d be a different story. But unless you have large natural batteries like hydro plants with storage basins that you can pump water up to with excess electricity, it’s not sustainable. I wish it were, but it’s not. As it stands now, the world needs both renewables and nuclear to go fully neutral. Until something even better, like nuclear fusion, becomes viable.
- Comment on Federated social media from before it was cool 3 weeks ago:
Yeah, and honestly, this is largely a reasonable standard for anyone running an email server. If you don’t have SPF, DKIM, and DMARC, basically anyone can spoof your emails and you’d be none the wiser. It also makes spam much harder to send without, well, sacrificing IP addresses to the many spam lists. I wouldn’t be surprised if some people setting up their own mail server only became aware of these things because they got blocked.
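For anyone curious whether a given domain actually publishes these records, here’s a minimal sketch using the dnspython package. The domain is a placeholder, and DKIM is omitted because looking it up requires knowing the sender’s selector:

```python
# Check a domain's SPF and DMARC TXT records (pip install dnspython)
import dns.resolver

domain = "example.com"  # placeholder, substitute the domain you run
for name in (domain, f"_dmarc.{domain}"):
    try:
        for rdata in dns.resolver.resolve(name, "TXT"):
            txt = b"".join(rdata.strings).decode()
            if txt.startswith(("v=spf1", "v=DMARC1")):
                print(f"{name}: {txt}")  # the published policy record
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"{name}: no record published")
```

If either lookup comes back empty, the big providers are likely to junk or reject your mail.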
- Comment on Baidu CEO warns AI is just an inevitable bubble — 99% of AI companies are at risk of failing when the bubble bursts 4 weeks ago:
There is so much wrong with this…
AI is a range of technologies. So yes, you can build surveillance with it, just like you can write a computer program that is a virus. But obviously not all computer programs are viruses, nor do they all exist for surveillance. What a weird generalization. AI is used extensively in medical research, so your life might literally be saved by it one day.
You’re most likely talking about “Chat Control”, which is a controversial EU proposal to scan for dangerous and illegal content like CSAM, either on people’s devices or on the providers’ end. This is obviously a dystopian way to achieve that, as it sacrifices literally everyone’s privacy to do it, and there is plenty to be said about that without randomly dragging AI into it. You can do this scanning without AI as well, and that doesn’t change anything about how dystopian it would be.
You should be using end-to-end encryption regardless, and a VPN is a good investment for making your traffic harder to discern, but if Chat Control is passed to operate on the device level, you are kind of boned without circumventing that software, which would potentially be outlawed or made very difficult. It’s clear on its own that Chat Control is a bad thing; you don’t need some kind of conspiracy theory about ‘the true purpose of AI’ to see that.
- Comment on Proud globohomo 4 weeks ago:
Yes, but most people don’t have that, or it takes way longer than is worth it for a simple meme. There already exist models that unblur entire images in seconds. AI should take the shitty work lol.
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
I never anthropomorphized the technology; unfortunately, due to how language works, it’s easy to misinterpret it as such. I was indeed trying to explain overfitting. You are forgetting the fact that current AI technology (artificial neural networks) is based on biological neural networks. It exhibits a range of quirks that biological neural networks do as well. But it is not human, nor anything close. That does not mean, though, that there are no similarities that can rightfully be pointed out.
Overfitting isn’t just what you describe though. It also occurs if the prompt guides the AI towards a very specific part of its training data, to the point where the calculations it performs are extremely certain about what words come next. Overfitting here isn’t caused by an abundance of data, but rather by a lack of it. The training data isn’t being produced from within the model, but as a statistical inevitability of the mathematical version of your prompt. Which is why it’s tricking the AI, because an AI doesn’t understand copyright - it just performs the calculations. But you do. And so using that as an example is like saying “Ha, stupid gun. I pulled the trigger and you shot this man in front of me, don’t you know murder is illegal buddy?”
Nobody should be expecting a machine to use itself ethically. Ethics is a human thing.
People who use AI have an ethical obligation to avoid overfitting. People who produce AI also have an ethical obligation to reduce overfitting. But a prompt quite literally has infinite combinations (within the token limits) to consider, so overfitting will happen in fringe situations. That’s not because that data is actually present in the model, but because the combination of the prompt with the model pushes the calculation towards a very specific prediction, which can heavily resemble or even be verbatim the original text. A toy version of this is sketched below. (Note: I do really dislike companies that try to hide the existence of overfitting from users though, and you can rightfully criticize them for claiming it doesn’t exist.)
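Here’s that toy sketch in Python. It’s a word-level bigram counter, nowhere near a real LLM, but it shows the same failure mode: a phrase that occurs only once in the training data has exactly one possible continuation at every step, so the right “prompt” makes greedy prediction regurgitate the source verbatim, while well-represented words have competing continuations:

```python
# Toy word-level bigram model: a phrase seen only once in the training data
# has exactly one possible continuation at every step, so the right "prompt"
# makes greedy prediction reproduce the source verbatim.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate fish . "
          "colorless green ideas sleep furiously .").split()
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def generate(word, steps=4):
    out = [word]
    for _ in range(steps):
        if not counts[out[-1]]:
            break
        out.append(counts[out[-1]].most_common(1)[0][0])  # greedy decoding
    return " ".join(out)

print(counts["cat"])          # two continuations compete: no single forced path
print(generate("colorless"))  # seen once in training: verbatim regurgitation
```

The second print gives back “colorless green ideas sleep furiously” word for word, not because the sentence is stored as a string, but because the counts leave the model no other choice.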
This isn’t akin to anything human, people can’t repeat pages of text verbatim like this and no toddler can be tricked into repeating a random page from a random book as you say.
This is incorrect. A toddler can and will verbatim repeat nursery rhymes that it hears. It’s literally one of their defining features, to the dismay of parents and grandparents around the world. I can also whistle pretty much my entire music collection exactly as it was produced, because I’ve listened to each song hundreds if not thousands of times. And I’m quite certain you have songs like that too. An AI’s mind does not decay or degrade (nor does it change for the better like a human’s), and the data encoded in it is far greater, so it will present more of these situations at its fringes.
but it isn’t crafting its own sentences, it’s using everyone else’s.
How do you think toddlers learn to make their first own sentences? It’s why parents spend so much time saying “Papa” or “Mama” to their toddler: exactly because they want them to copy them verbatim. Eventually the corpus of their knowledge grows big enough that they start to experiment and eventually develop their own style of talking. But it’s still heavily based on the information they take in. It’s why we have dialects and languages. Take a look at what happens when children don’t learn from others: en.wikipedia.org/wiki/Feral_child So yes, the AI is using its training data; nobody’s arguing it doesn’t. But it’s trivial to see how it’s crafting its own sentences from that data in the vast majority of situations. It’s also why you can ask it to talk like a pirate, and it will suddenly know how to mix the essence of talking like a pirate into its responses. Or how it can remember names and mix those into sentences.
Therefore it is factually wrong to state that it doesn’t keep the training data in a usable format
If your argument is that it can produce something that happens to align with its training data given the right prompt, well, that’s not incorrect. But it is heavily misguided, and it borders on bad faith, to suggest that this tiny minority of cases where overfitting occurs is indicative of the rest. LLMs are prediction machines, so if you know how to guide one towards what you want it to predict, and that is in the training data, it’s going to predict that most likely. Under normal circumstances, where the prompt you give it is neutral and unique, you will basically never encounter overfitting. You really have to try for most AI models.
But then again, you might be arguing this based on a specific AI model that is very prone to overfitting, while I am arguing about the technology as a whole.
This isn’t originality, creativity or anything that it is marketed as. It is storing, encoding and copying information to reproduce in a slightly different format.
It is originality, as these AIs can easily produce material never seen before, in the vast, vast majority of situations. Which is also what we often refer to as creativity, because it has to be able to mix information and still retain legibility. Humans also constantly reuse phrases, ideas, visions, and ideals of other people. It is intellectually dishonest to ignore these similarities with human psychology and then demand that AI be perfect all the time, never once saying the same thing as someone else. There are only finitely many ways within the English language to convey certain information.
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
This is an issue for the AI user though. And I do agree that this needs to be more present in people’s minds. But I think time will change that. Perhaps when the photo camera came out there were some schmucks who took pictures of other people’s artworks and claimed them as their own, because the novelty of the technology allowed that for a bit, but eventually those people were properly differentiated from people using it properly.
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
Like if I download a textbook to read for a class instead of buying it - I could be proscecuted for stealing
Ehh, no, almost certainly not. That honestly just sounds like some corporate bogeyman to keep you from pirating their books. The person hosting the download, if they did not have the rights to publish it freely, could possibly be prosecuted though.
To illustrate: there’s the story of John Cena, who sold a special Ford after signing a contract with Ford that explicitly forbade him from doing that. However, the person who bought the car was never prosecuted or sued, because they received the car from Cena with no strings attached. They couldn’t be held responsible for Cena’s breach of contract, but Cena was held personally responsible by Ford.
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
That would be true if they used material that was paywalled. But the vast majority of the training information used is publicly available. There are plenty of freely available books and other information that you only need an internet connection to access and learn from.
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
Your first point is misguided and incorrect. If you’ve ever learned something by ‘cramming’, a.k.a. repeatedly ingesting material until you remember it completely, you don’t need the book in front of you anymore to write the material down verbatim in a test. You still discarded your training material despite knowing the exact contents. If this were all the AI could do, it would indeed be an infringement machine. But you said it yourself: you need to trick the AI into doing this. It’s not made to do this, but certain sentences are indeed almost certain to show up with the right conditioning. Which is indeed something anyone using an AI should be aware of, and they should avoid that kind of conditioning.
- Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates 2 months ago:
This would be a good point if that were the explicit purpose of the AI. Which it isn’t. It can quote certain information verbatim despite not containing that data verbatim, through the process of learning, for the same reason we can.
I can ask you to quote famous lines from books all day as well. That doesn’t mean that you knowing those lines means you infringed on copyright. Now, if you were to put those to paper and sell them, you might get a cease and desist or a lawsuit. Therein lies the difference. Your goal would be explicitly to infringe on the specific expression of those words. Any human that would explicitly try to get an AI to produce infringing material… would be infringing. And unknowing infringement… well there are countless court cases where both sides think they did nothing wrong.
You don’t even need AI for that, if you followed the Infinite Monkey Theorem and just happened to stumble upon a work falling under copyright, you still could not sell it even if it was produced by a purely random process.
Another great example is the Mona Lisa. Most people know what it looks like, and if they had sufficient talent they could mimic it 1:1. However, there are numerous adaptations of the Mona Lisa that are not infringing (by today’s standards), because they transform the work to the point where it’s no longer the original expression, but a re-expression of the same idea. Anything even less similar than that is pretty much completely safe, infringement-wise.
You’re right though that OpenAI tries to cover their ass by implementing safeguards. Which is to be expected, because it’s a legal argument in court that once they become aware of situations, they have to take steps to limit harm. They indeed cannot prevent it completely, but it’s the effort that counts. Practically none of that kind of moderation is 100% effective. Otherwise we’d live in a pretty good world.
- Comment on Lemmy devs are considering making all votes public - have your say 2 months ago:
I am kind of afraid that if voting becomes more public than it already is, it will lead to exactly more of the kind of “zero-content downvote” accounts mentioned in the ticket. Because some people are just wildly irrational when it comes to touchy subjects, and ain’t nobody got time to spend an eternity with them dismantling their beliefs so they understand the nuance you see that they don’t. So it kind of incentivizes people to create an account like that, to ensure a crazy person doesn’t latch on to the account you’re trying to have normal discussions with.
- Comment on X’s new AI image generator will make anything from Taylor Swift in lingerie to Kamala Harris with a gun 2 months ago:
I have a similar hesitancy, but unfortunately that’s why we can’t even really trust ourselves either. The statistics we can put to paper already paint such a different image of society than the one we experience. So even though it feels like these people are everywhere and such a mindset is growing, there are many signs that this is not the case. But I get it; at times that also feels like puffing some hopium. I’m fortunate to have met enough stubborn people who did end up changing their minds about their own personal irrationality, and as I grew older I caught myself doing the same a couple of times as well. That does give me hope.
And well, look at history and the kind of shit people believed: miasma, bloodletting, superstitious beliefs, to name a few. As time has moved on, the majority of people have grown. Even a century where not a lot changes in that regard (as long as it doesn’t regress) can be a speed bump in the mindset of the future.
- Comment on X’s new AI image generator will make anything from Taylor Swift in lingerie to Kamala Harris with a gun 2 months ago:
While I share this sentiment, I think/hope the eventual conclusion will be a better relationship between more people and the truth. Maybe not for everyone, but more people than before. Truth is always more like 99.99% certain than absolute truth, and it’s the collection of evidence that should inform ‘truth’. The closest thing we have to achieving that is the court system (In theory).
You don’t see the electric wiring in your home, yet you ‘know’ flipping the switch will cause electricity to create light. You ‘know’ there is not some other mechanism in your walls that just happens to produce the exact same result. But unless you check, you technically didn’t know for sure. (And even then, your eyes might deceive you).
With Harris’ airport crowd, honestly if you weren’t there, you have to trust second hand accounts. So how do you do that? One video might not say a lot, and honestly if I saw the alleged image in a vacuum I might have been suspicious of AI as well.
But here comes the context. There are many eyewitness perspectives where details can be verified and corroborated. The organizer isn’t a habitual liar. It happened at a time that wasn’t impossible (a sort of ‘counter’-alibi, so to speak). It happened in a place that isn’t improbable (she’s on the campaign trail). Faking it would require a conspiracy level of secrecy to pull off. And I could list so many more things.
Anything that could be disproven with ‘it might have been AI’ probably wouldn’t have stuck in court anyway. It’s why you take testimony: even though it proves nothing on its own, corroborated with other information it can make a situation more or less probable.
- Comment on Lectures 3 months ago:
Counterpoint: it’s from that one teacher who really gets teaching, and it’s two hours of fun where you don’t realize you’re learning.
- Comment on 77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds 3 months ago:
That’s because you’re using AI for the correct thing. As others have pointed out, if AI usage is enforced (like in the article), chances are they’re not using AI correctly. It’s not a miracle cure for everything; it should just be used where it’s useful. It’s great for brainstorming. Game development (especially on the indie side of things) really benefits from being able to produce more with less. Or are you using it for DnD?
- Comment on Survey shows most people wouldn't pay extra for AI-enhanced hardware | 84% of people said no 3 months ago:
Depends on the kind of AI enhancement. If it’s just more things nobody needs that solve no problem, it’s a no-brainer to decline. But for computer graphics, for example, DLSS is a feature people do appreciate, because it makes sense to apply AI there. Who doesn’t want faster and perhaps better graphics from AI rather than brute force, which also saves on electricity costs?
But that isn’t the kind of things most people on a survey would even think of since the benefit is readily apparent and doesn’t even need to be explicitly sold as “AI”. They’re most likely thinking of the kind of products where the manufacturer put an “AI powered” sticker on it because their stakeholders told them it would increase their sales, or it allowed them to overstate the value of a product.
Of course people are going to reject white-collar scams if they think that’s what “AI enhanced” means. If legitimate use cases with clear advantages are produced, they will speak for themselves, and I don’t think people will be opposed. But obviously, there are a lot more companies that want to ride the AI wave than there are legitimate use cases, so there will be quite a bit of snake oil being sold.
- Comment on The AI-focused COPIED Act would make removing digital watermarks illegal 4 months ago:
What are you talking about? The open source community has trained these kinds of models. They’re out there.
- Comment on We cater any event! 4 months ago:
“You know you don’t need to bring a dead horse every time you want catering right, Jim?”
- Comment on AI-created “virtual influencers” are stealing business from humans 10 months ago:
No worries my fellow unethical dishonest internet-using homie. It’s not like nuance exists and things can be both good and bad. Everything is black and white, after all.
- Comment on AI-created “virtual influencers” are stealing business from humans 10 months ago:
Ah yes. Like that damn internet and those cursed devices people use to access it. Anyone using those is inherently not honest or ethical.
- Comment on GTA 6 is likely to skip PC again and only launching on current gen consoles 11 months ago:
PC is typically easier to develop for because it lacks the strict (and frequently silly) platform requirements of consoles, which typically make game development more expensive and slower than it needs to be compared to just targeting PC. If that barrier to entry were reduced to PC levels, you’d see a lot more games on consoles from smaller developers.
With current gen consoles, pretty much every game starts as a PC game already, because that’s where the development and testing happens.
Rockstar here is the exception in that they are intentionally skipping PC, something that should be well within reach of a company their size, and they are clearly capable of doing it.
If another AAA game comes out as console-only I’ll be right there with you - but most game developers with the capability release for all major platforms now. But not the small console indie studio called Rockstar Games, it seems.
- Comment on Mozilla Senior Director of Content explained why Mozilla has taken an interest in the fediverse and Mastodon 1 year ago:
It’s because the current version has nothing wrong with it. Should the Lemmy devs choose to sabotage the Lemmy software, you’d be surprised how easily a fork takes over when something pisses off all the instances and their owners. Instances will simply refuse to upgrade. And like most things, eventually some fork will win the race to become the dominant one, and the current Lemmy devs would essentially be disowned. Different forks also don’t necessarily mean API-breaking changes, so different forks would have no issue communicating (at least for a while).
- Comment on I created an image using AI. Not sure what this style is called, an I want to know the type of this drawing 1 year ago:
If you use Stable Diffusion through a web UI (the feature might exist for others as well), you might have access to a feature called ‘interrogate’, which finds an approximate prompt for an image. It can be useful if you need it for future images.
It can also be done online: huggingface.co/spaces/…/CLIP-Interrogator
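If you’d rather run it locally, here’s a minimal sketch with the clip-interrogator Python package, following its usual README-style usage (the file name is a placeholder):

```python
# Approximate a prompt for an existing image (pip install clip-interrogator)
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
image = Image.open("my_image.png").convert("RGB")  # placeholder path
print(ci.interrogate(image))  # prints a best-guess prompt for the image
```

First run downloads the CLIP weights, so expect a wait and a few gigabytes of disk.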
- Comment on This new data poisoning tool lets artists fight back against generative AI 1 year ago:
LLM is the wrong term. That’s Large Language Model. These are generative image models / text-to-image models.
Truthfully though, while the perturbation will be there when the image is trained on, the model won’t ‘notice’ it unless you distort the image significantly (enough for humans to notice as well). Otherwise it won’t make much of a difference, because these models are often trained on a compressed and downsized version of the image (in what’s called latent space).
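As a rough intuition for why, here’s a little sketch. It’s plain 8x downscaling with Pillow rather than the actual VAE encoder Stable Diffusion uses, but it shows how subtle per-pixel perturbations mostly average away under that kind of compression:

```python
# Small per-pixel perturbations mostly average out under 8x downscaling,
# roughly the spatial compression a latent encoder applies.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in image
noise = rng.integers(-4, 5, img.shape)                     # subtle "poison"
poisoned = np.clip(img.astype(int) + noise, 0, 255).astype(np.uint8)

small = lambda a: np.asarray(Image.fromarray(a).resize((64, 64), Image.BILINEAR), dtype=int)
print("pixel-space mean |diff|:", np.abs(poisoned.astype(int) - img).mean())
print("after 8x downscale:     ", np.abs(small(poisoned) - small(img)).mean())
```

The difference shrinks by roughly a factor of eight after downscaling, which is why a perturbation strong enough to survive tends to be visible to humans too.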
- Comment on How do you call someone born in the US besides "American"? 1 year ago:
Halfway-North American
- Comment on Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi... 1 year ago:
That’s an eventual goal, which would be artificial general intelligence (AGI). Different kinds of AI models for (at least some of) the things you named already exist; it’s just that OpenAI had all their eggs in the GPT/LLM basket, and GPTs deal with extrapolating text. It just so happened that with enough training data, their text prediction also started giving somewhat believable and sometimes factual answers (mixed in with plenty of believable bullshit). Other data requires different training data, different models, and different finetuning, hence why it takes time.
It’s highly likely for a company of OpenAI’s size (especially after all the positive marketing and potential funding they got from ChatGPT in its prime) that they already have multiple AI models for different kinds of data in research, training, or finetuning.
But even with all the individual pieces of an AGI existing, the technology to cross reference the different models doesn’t exist yet. Because they are different models, and so they store and express their data in different ways. And it’s not like training data exists for it either. And unlike physical beings like humans, it doesn’t have any kind of way to “interact” and “experiment” with the data it knows to really form concrete connections backed up by factual evidence.
- Comment on Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi... 1 year ago:
As long as humans are still the driving force behind what content gets spread around (and thus, far more represented in the training data), even if the content is AI generated, it shouldn’t matter. But it’s quite definitely not the case here.