I got an email ban.
1609 hours logged, 431 solved threads
Submitted 6 months ago by misk@sopuli.xyz to technology@lemmy.world
I got an email ban.
1609 hours logged, 431 solved threads
Well, it is important to comply with the terms of service established by the website. It is highly recommended to familiarize oneself with the legally binding documents of the platform, including the Terms of Service (Section 2.1), User Agreement (Section 4.2), and Community Guidelines (Section 3.1), which explicitly outline the obligations and restrictions imposed upon users. By refraining from engaging in activities explicitly prohibited within these sections, you will be better positioned to maintain compliance with the platform’s rules and regulations and not receive email bans in the future.
ITT: People unable to recognize a joke
Shit like this makes me so glad that I just don’t sign up for these things if I don’t have to.
30 page TOS? You know what, I don’t need to make an account that bad.
Take all you want, it will only take a few hallucinations before no one trusts LLMs to write code or give advice
[…]will only take a few hallucinations before no one trusts LLMs to write code or give advice
Because none of us have ever blindly pasted some code we got off google and crossed our fingers ;-)
It’s way easier to figure that out than check ChatGPT hallucinations. There’s usually someone saying why a response in SO is wrong, either in another response or a comment. You can filter most of the garbage right at that point. You get none of that information with ChatGPT. The data spat out is not equivalent.
When you paste that code, you do it in your private IDE, in a dev environment, and you test it thoroughly before handing it off to the next person to test before it goes to production.
Hitting up ChatGPT for the answer to a question that you then vomit out in a meeting as if it’s knowledge is totally different.
Split a segment of data without PII to a staging database, test the pasted script, then completely rewrite the script over the next three hours.
We should already be at that point. We have already seen LLMs’ potential to inadvertently backdoor your code and to inadvertently help you violate copyright law (I guess we do need to wait to see what the courts rule, but I’ll be rooting for the open-source authors).
If you use LLMs in your professional work, you’re crazy. I would never be comfortable opening myself up to the legal and security liabilities of AI tools.
If you use LLMs in your professional work, you’re crazy
Eh, we use Copilot at work and it can be pretty helpful. You should always check and understand any code you commit to any project, so if you just blindly paste flawed code (like with Stack Overflow), that’s kind of on you for not understanding what you’re doing.
Yeah, but if you’re not feeding it protected code and just asking simple questions about libraries etc., then it’s good.
I feel like it would have to cause an actual disaster, with assets getting destroyed, to become part of common knowledge (like the Challenger shuttle or something).
Maybe for people who have no clue how to work with an LLM. They don't have to be perfect to still be incredibly valuable. I make use of them all the time, and hallucinations aren't a problem if you use the right tools for the job in the right way.
The last time I saw someone talk about using the right LLM tool for the job, they were describing turning two minutes of writing a simple map/reduce into one minute of reading enough to confirm the generated one worked. I think I’ll pass on that.
This. I use LLMs for work, primarily to help create extremely complex nested functions.
I don’t count on LLMs to create anything new for me, or to provide any data points. I provide the logic, and explain exactly what I want in the end.
I take a process which normally takes 45 minutes daily, test it once, and now I have reclaimed 43 extra minutes of my time each day.
It’s easy and safe to test before I apply it to real data.
It’s missed the mark a few times as I learned how to properly work with it, but now I’m consistently getting good results.
Other use cases are up for debate, but I agree when used properly hallucinations are not much of a problem. When I see people complain about them, that tells me they’re using the tool to generate data, which of course is stupid.
People keep saying this but it’s just wrong.
Maybe I haven’t tried the language you have, but it’s pretty damn good at code.
Granted, whatever it puts out needs to be tested and possibly edited, but that’s the same thing we had to do with Stack Overflow answers.
I’ve tried a lot of scenarios and languages with various LLMs. The biggest takeaway I have is that AI can get you started on something or help you solve some issues. I’ve generally found that anything beyond a block or two of code becomes useless. The more it generates the more weirdness starts popping up, or it outright hallucinates.
For example, today I used an LLM to help me tighten up an incredibly verbose bit of code. Today was just not my day and I knew there was a cleaner way of doing it, but it just wasn’t coming to me. A quick “make this cleaner: <code>” and I was back to the rest of the code.
This is what LLMs are currently good for. They are just another tool, like tab completion or code linting.
I use it all the time and it’s brilliant when you put in the basic effort to learn how to use it effectively.
It’s allowing me and other open source devs to increase the scope and speed of our contributions; just talking through problems is invaluable. Greedy, selfish people wanting to destroy things that help so many is exactly the rolling-coal mentality: fuck everyone else, I don’t want the world to change around me! Makes me so despondent about the future of humanity.
We already have those near constantly. And we still keep asking queries.
People assume that LLMs need to be ready to replace a principal engineer or a doctor or lawyer with decades of experience.
This is already at the point where we can replace an intern or one of the less good junior engineers. Because anyone who has done code review or has had to do rounds with medical interns knows… they are idiots who need people to check their work constantly. An LLM making up some functions because it saw them on Stack Overflow but never tested them is not at all different from a hotshot intern who copied some code from Stack Overflow and never tested it.
Except one costs a lot less…
This is already at the point where we can replace an intern or one of the less good junior engineers.
This is a bad thing.
Not just because it will put the people you’re talking about out of work in the short term, but because it will prevent the next generation of developers from getting that low-level experience. They’re not “idiots”, they’re inexperienced. They need to get experience. They won’t if they’re replaced by automation.
So, the whole point of learning is to ask questions of people who know more than you, so that you can gain the knowledge you need to succeed…
So… if you try to use these LLMs to replace parts of sectors where there need to be people who can work their way up to the next tier as they learn and get better in their respective fields, you do realize that eventually there will be no one left who can move up to the next tier/position, because people like you said “Fuck ‘em, all in on this stupid LLM bullshit!” So now there are no more doctors, or real programmers, because people like you thought it would just be the GREATEST idea to replace humans with fucking LLMs.
You do see that, right?
Calling people fucking stupid, because they are learning, is actually pretty fucking stupid.
This is already at the point where we can replace an intern or one of the less good junior engineers. Because anyone who has done code review or has had to do rounds with medical interns knows… they are idiots who need people to check their work constantly.
Do so at your own peril. Because the thing is, a person will learn from their mistakes and grow in knowledge and experience over time. An LLM is unlikely to do the same in a professional environment for two big reasons:
The company using the LLM would have to send data back to the creator of the LLM. This means their proprietary work could be at risk. The AI company could scoop them, or a data leak would be disastrous.
Alternatively, the LLM could self-learn and be kept solely in house without any external data connections. An LLM company will never go for this, because it would mean the model is improving and developing out of their control. The customized version may end up being better than the LLM company’s future releases. Or, something might go terribly wrong with the model while it learns and adapts. Even if the LLM company isn’t held legally liable, they’re still going to lose that business going forward.
On top of that, you need your inexperienced noobs to one day become the ones checking the output of an LLM. They can’t do that unless they get experience doing the work. Companies already have proprietary models that just require the right inputs and pressing a button. Engineers are still hired though to interpret the results, know what inputs are the right ones, and understand how the model works.
A company that tries replacing them with LLMs is going to lose in the long run to competitors.
The quality really doesn’t matter.
If they manage to strip any concept of authenticity, ownership or obligation from the entirety of human output and stick it behind a paywall, that’s pretty much the whole ball game.
If we decide later that this is actually a really bullshit deal – that they get everything for free and then sell it back to us – then they’ll surely get some sort of grandfather clause because “Whoops, we already did it!”
Have you tried recent models? They’re not perfect no, but they can usually get you most of the way there if not all the way. If you know how to structure the problem and prompt, granted.
See, this is why we can’t have nice things. Money fucks it up, every time. Fuck money, it’s a shitty backwards idea. We can do better than this.
Hear me out. Bottle caps.
'Nuff said!
Someone comes up with something good: look what I made, we can use this to better humanity!
Corporations: How can we make money off of this?
You can be killed with steel, which has a lot of other implications on what you do in order to avoid getting killed with steel.
Does steel fuck it all up?
Centralization is a shitty, backwards idea. But you have to be very conscious of yourself and your instincts, and neuter the part that tells you it isn’t, to understand that.
Distributivism minus Catholicism is just so good. I always return to it when I give up on trying to find future in some other political ideology.
This has nothing to do with centralization. AI companies are already scraping the web for everything useful. If you took the content from SO and split it into 1000 federated sites, it would still end up in an AI model. Decentralization would only help if we ever manage to hold the AI companies accountable for the en masse copyright violations they base their industry on.
List of Distributist parties in the UK:
- National Distributist Party
- British National Party
- National Front
Hmmm, maybe the Catholic part isn’t the only part worth reviewing.
Also worth noting that the Conservative Party’s ‘Big Society’ schtick in 2010 was wrapped in the trappings of distributism.
Not that all this diminishes it entirely, but it does seem to be a gateway drug for exploitation by the right.
I gotta hold my hand up and state that I am not read up on it at all, so happy to be corrected. But my impression is that Pope Leo XIII’s conception was to reduce secular power so as to leave a void for the church to fill. And it’s the potential exploitation of that void that attracts the far right too.
So they pulled a “reddit”?
These companies don’t realise their most engaged users generate a disproportionate amount of their content.
They will just go to their own spaces.
I think this a good thing in the long run, the internet will become decentralised again.
I don’t know. It feels a bit like “When I quit my employer will realize how much they depended on me.” The realization tends to be on the other side.
But while SO may keep functioning fine it would be great if this caused other places to spring up as well. Reddit and X/Twitter are still there but I’m glad we have the fediverse.
Well, Reddit is doing fine so far. Shareholders are happy.
CEO will have his bag and be gone by then.
And then Stack Overflow will go the same way Digg did.
God damn, I went over to Digg yesterday to see what it’s been like, and I shit you not: it is links to Reddit threads and Instagram posts.
I hope it doesn’t end up like it did on Reddit, where all those protests did not result in anything at all.
Reddit/Stack are the latest examples of an economic system where a few people monetize and get wealthy using the output of many.
Technofeudalism
Mmm this golden goose tastes delicious!
blog.codinghorror.com/are-you-a-digital-sharecrop…
Interesting article from one of the co-founders of Stack Overflow.
You’re forgetting a silly and funny company whose name starts with “G”
First, they sent the missionaries. They built communities, facilities for the common good, and spoke of collaboration and mutual prosperity. They got so many of us to buy into their belief system as a result.
Then, they sent the conquistadors. They took what we had built under their guidance, and claimed we “weren’t using it” and it was rightfully theirs to begin with.
How many trees does a person need to make one coffin…
Oh I didn’t consider deleting my answers. Thanks for the good idea Barbra StackOverflow.
I’d be shocked if deleted comments weren’t retained by them
Letting corporations “disrupt” forums was a mistake.
Stack Overflow was great when it appeared. The info was spread out incredibly wide and there was a lot of really shitty info as well. One place where it could accumulate and be rated was extremely helpful.
But maybe it’s time to create a federated open source stack overflow.
Messages that people post on Stack Exchange sites are literally licensed CC-BY-SA, the whole point of which is to enable them to be shared and used by anyone for any purpose. One of the purposes of such a license is to make sure knowledge is preserved by allowing everyone to make and share copies.
That license would require ChatGPT to provide attribution every time it used that training data, and would require every output derived from it to be placed under the same license. This would actually legally prevent anything ChatGPT created, even in part, from this training data from being closed source. Assuming they obviously aren’t planning on doing that, this is massively shitting on the concept of licensing.
Share Alike
I can’t wait to download my own version of the latest gpt model
Maybe we should replace Stack Overflow with another site where experts can exchange information? We can call it “Experts Exchange”.
Expert Sex Change?
Lemmy could be used as a Stack Overflow alternative. Lemmy is also enshittification-repellent by design.
Maybe there I can ask where to find a good pen supplier.
At the end of the day, this is just yet another example of how capitalism is an extractive system. Unprotected resources are used not for the benefit of all but to increase and entrench the imbalance of assets. This is why they are so keen on DRM and copyright and why they destroy the environment and social cohesion. The thing is, people want to help each other; not for profit but because we have a natural and healthy imperative to do the most good.
There is a difference between giving someone a present and then them giving it to another person, and giving someone a present and then them selling it. One is kind and helpful and the other is disgusting and produces inequality.
If you’re gonna use something for free then make the product of it free too.
An idea for the fediverse and beyond: maybe we should be setting up instances with copyleft licences for all content posted to them. I actually don’t mind if you wanna use my comments to make an LLM. It could be useful. But give me (and all the other people who contributed to it) the LLM for free, like we gave it to you. And let us use it for our benefit, not just yours.
An idea for the fediverse and beyond: maybe we should be setting up instances with copyleft licences for all content posted to them. I actually don’t mind if you wanna use my comments to make an LLM. It could be useful. But give me (and all the other people who contributed to it) the LLM for free, like we gave it to you. And let us use it for our benefit, not just yours.
This seems like a very fair and reasonable way to deal with the issue.
The enshittification is very real and is spreading constantly. Companies will leech more from their employees and users until things start to break down. Acceleration is the only way.
Begun, the AI wars have.
Faces on T-shirts, you must print. Fake facts into old forum comments, you must edit. Poison the data well, you must.
You really don’t need anything near as complex as AI…a simple script could be configured to automatically close the issue as solved with a link to a randomly-selected unrelated issue.
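The comment above only sketches the idea; a minimal toy version could look like this (hypothetical: the function name, message format, and issue IDs are all invented for illustration, and no real issue-tracker API is involved):

```python
import random

def auto_close_message(issue_id, all_issue_ids):
    """Build a 'closed as solved' message pointing at a randomly
    chosen, unrelated issue. Purely illustrative of the joke above --
    this does not talk to any real issue tracker."""
    # Exclude the issue itself so the link is guaranteed to be unrelated.
    unrelated = [i for i in all_issue_ids if i != issue_id]
    target = random.choice(unrelated)
    return f"Closing as solved -- see issue #{target}."

print(auto_close_message(42, [7, 42, 99, 123]))
```

The only moving part is `random.choice` over the other issue IDs; everything else is message formatting.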
I despise this use of mod power in response to a protest. It’s our content to be sabotaged if we want - if Stack Overlords disagree then to hell with them.
I’ll add Stack Overflow to my personal ban list, just below Reddit.
And the enshittification continues…
Primary use for AI: self-destructing your website.
Eventually, we will need a fediverse version of StackOverflow, Quora, etc.
I fully understand why they are doing this, but we are just losing a mass of really useful knowledge. What a shame…
Data should be socialized and machine learning algorithms should be nationalized for public use.
Why does OpenAI want 10-year-old answers about using jQuery whenever anyone posts a JavaScript question, followed by aggressive policing of what is and isn’t acceptable to re-ask as technology moves on?
While at the same time they forbid AI generated answers on their website, oh the turntables.
Rather than delete, modify the question so it’s wrong. Then the AI will hallucinate.
Reddit did almost the same. And don’t forget, guys, to delete your Reddit account.
I'm going to run out of sites at this pace.
Maybe we need a technical questions and answers site on the fediverse!
A malicious response by users would be to employ an LLM instructed to write plausible-sounding but very wrong answers to historical and current questions, with an army of users upvoting the known-wrong answers while downvoting accurate ones. This would poison the data, I would think.
Time to download the last dump: archive.org/details/stackexchange
Rooki@lemmy.world 6 months ago
If this is true, then we should prepare to be shouted at by ChatGPT, asking why we didn’t already know about that simple error.
snekerpimp@lemmy.world 6 months ago
ChatGPT now just says “read the docs!” to every question.
Dave@lemmy.nz 6 months ago
Hey ChatGPT, how can I …
“Locking as this is a duplicate of [unrelated question]”
ekky@sopuli.xyz 6 months ago
And then links to a similar sounding but ultimately totally unrelated site.
elvith@feddit.de 6 months ago
Nah, it just marks your question as duplicate.
angelsomething@lemmy.one 6 months ago
Already had that happen with Perplexity. Like, no mate, I’m asking you.
catloaf@lemm.ee 6 months ago
Honestly, that wouldn’t be the worst thing in the world.
JJROKCZ@lemmy.world 6 months ago
Always love those answers: well, if you read the 700-page white paper on this one command set in one module, then you would understand… Do you think I have the time to read 37,000 pages of bland-ass documentation yearly on top of doing my actual job? Come the fuck on.
I guess some of these guys have so many heads on their crews that they don’t have much work to do anymore, but that’s not the case for most.
NuXCOM_90Percent@lemmy.zip 6 months ago
You joke.
This would have been probably early last year? I had to look up how to do something in Fortran (because Fortran), and the answer was very much in the voice of that one dude on the Intel forums who has been answering every single question for decades(?) at this point. Which means it also refused to do anything with features newer than 1992 and was worthless.
Tried again while chatting with an old work buddy a few months back, and it looks like they’ve updated to acknowledging f99 and f03 exist. So I assume that was all Stack Overflow.
blanketswithsmallpox@lemmy.world 6 months ago
This message brought to you by chatgpt bot.