MagicShel
@MagicShel@lemmy.zip
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)
- Comment on Using Signal groups for activism 1 day ago:
Maybe I’m just weird, but basically nothing I do in an online capacity traces back to my IRL identity. (I do maintain a LinkedIn for professional purposes.)
- Comment on Elon Musk wants to rewrite "the entire corpus of human knowledge" with Grok 1 day ago:
If we had direct control over how our tax dollars were spent, a lot would be different pretty fast. Might not be better, but different.
- Comment on AI search finds publishers starved of referral traffic 1 day ago:
A lot of my queries only call for oversimplified summaries. Either I’m simple like that or I google stupid shit no one else would bother with. A recent example:
Are there butterflies or moths that don’t have mouths? (No but some have vestigial mouths connected to non-functioning digestive systems.) Good enough!
That said, I’m very skeptical about answers if it’s anything I care about or need to act on.
- Comment on AI applications are producing cleaner cities, smarter homes and more efficient transit 1 day ago:
As an AI enthusiast:
[x] Doubt
- Comment on Google is intentionally throttling YouTube videos, slowing down users with ad blockers 6 days ago:
Oh no! What will I—Video Downloader. I’ll just watch offline at my leisure.
- Comment on Google’s test turns search results into an AI-generated podcast 1 week ago:
Wow, an even less efficient way to get answers than watching a fucking YouTube video…
- Comment on ChatGPT Mostly Source Wikipedia; Google AI Overviews Mostly Source Reddit 1 week ago:
I used ChatGPT on something and got a response sourced from Reddit. I told it I’d be more likely to believe the answer if it told me it had simply made up the answer. It then provided better references.
I don’t remember what it was but it was definitely something that would be answered by an expert on Reddit, but would also be answered by idiots on Reddit and I didn’t want to take chances.
- Comment on The hidden time bomb in the tax code that's fueling mass tech layoffs 1 week ago:
I found out about this about a year ago while I was laid off. It coincided with when the massive layoffs began. Seems pretty likely to me. Developer salaries aren’t low and to lose another 80% on top is a big hit.
Also a lot of my coworkers are really nervous about immigration right now. This is a bad time to be an Indian tech worker in the US. My team of about 10 could wind up reduced to me and one other guy. We’d even lose our manager and every PM. And this team is responsible for critical software at a major company.
- Comment on Lemmy.zip 2nd Birthday Giveaway! 🍰 1 week ago:
Congrats on two years! Looks like you’ve done good so far! This was supposed to be a temporary home while my old server struggled to get maintenance performed and things weren’t working. But I just never went back.
- Comment on Why so much hate toward AI? 2 weeks ago:
It’s a massive new disruptive technology and people are scared of what changes it will bring. AI companies are putting out tons of propaganda both claiming AI can do anything and fear mongering that AI is going to surpass and subjugate us to back up that same narrative.
Also, there is so much focus on democratizing content creation, which is at best a very mixed bag, and little attention is given to collaborative uses (which I think is where AI shines) because it’s so much harder to demonstrate, and it demands critical thinking skills and underlying knowledge.
In short, everything AI is hyped as is a lie, and that’s all most people see. When you’re poking around with it, you’re most likely to just ask it to do something for you: write a paper, create a picture, whatever, and the results won’t impress anyone actually good at those things, and impress the fuck out of people who don’t know any better.
This simultaneously reinforces two things to two different groups: AI is utter garbage and AI is smarter than half the people you know and is going to take all the jobs.
- Comment on An earnest question about the AI/LLM hate 2 weeks ago:
I think a lot of ground has been covered. It’s a useful technology that has been hyped to be way more than it is, and the really shitty part is a lot of companies are trying to throw away human workers for AI because they are that fucking stupid or that fucking greedy (or both).
They will fail, for the most part, because AI is a tool your employees use, they aren’t a thing to foist onto your customers. Also where do the next generation of senior developers come from if we replace junior developers with AI? Substitute in teachers, artists, copy editors, others.
Add to that people who are too fucking stupid to understand AI deciding it needs to be involved in intelligence, warfare, police work.
I frequently disagree with the sky is falling crowd. AI use by individuals, particularly local AI (though it’s not as capable) is democratizing. I moved from windows to Linux two years ago and I couldn’t have done that if I hadn’t had AI to help me troubleshoot a bunch of issues I had. I use it all the time at work to leverage my decades of experience in areas where I’d have to relearn a bunch of things from scratch. I wrote a Python program in a couple of hours having never written a line before because I knew what questions to ask.
I’m very excited for a future with LLMs helping us out. Everyone is fixated on AI generation (image, voice, text), but it’s not great at that. What it excels at is very quickly giving feedback. You have to be smart enough to know when it’s full of shit. That’s why vibe coding is a dead end. I mean it’s cool that very simple things can be churned out by very inexperienced developers, but that has a ceiling. An experienced developer can also leverage it to do more faster at a higher level, but there is a ceiling there as well. Human input and knowledge never stop being essential.
So welcome to Lemmy and discussion about AI. You have to be prepared for knee-jerk negativity, and the ubiquitous correction when you anthropomorphize AI as a shortcut to make your words easier to read. There isn’t usually too much overtly effusive praise here as that gets shut down really quickly, but there is good discussion to be had among enthusiasts.
I find most of the things folks hate about AI aren’t actually the things I do with it, so it’s easy to not take the comments personally. I agree that ChatGPT written text is slop and I don’t like it as writing. I agree AI art is soulless. I agree distributing AI generated nudes of someone is unethical (I could give a shit what anyone jerks off to in private). I agree that in certain niches, AI is taking jobs, even if I think humans ultimately do the jobs better. I do disagree that AI is inherently theft and I just don’t engage with comments to that effect. It’s unsettled law at this point and I find it highly transformative, but that’s not a question anyone can answer in a legal sense, it’s all just strongly worded opinion.
So discussions regarding AI are fraught, but there is plenty of good discourse.
Enjoy Lemmy!
- Comment on Building a slow web 2 weeks ago:
One of the things I miss about web rings and recommended links is it’s people who are passionate about a thing saying here are other folks worth reading about this. Google is a piss poor substitute for the recommendations of people you like to read.
The only problem with the slow web is that people write about what they’re working on; they aren’t trying to exhaustively create “content.” By which I mean, they aren’t going to have every answer to every question. You read what’s there, you don’t go searching for what you want to read.
- Comment on Why Decentralized Social Media Matters 2 weeks ago:
Most people don’t care about decentralization
I think that’s largely not the case for people that are currently on Lemmy/Mastodon, but I think you’re right that it prevents larger adoption. I’m okay with that, though. I don’t need to talk with everyone. There’s room for more growth, probably especially for more niche communities, but at least for me Lemmy has hit critical mass.
Everything else I either like the things you dislike or disagree that they are problems.
- Comment on Is Reddit in/directly attacking lemmy instances with controversial AI posts to overpower mods and reduce user experience? 2 weeks ago:
We can tell the difference between AI slop and people. That’s why it’s called slop. I’ve seen a number of folks get called out just for using an AI to edit their post (and letting it rewrite it wholesale instead of retaining their own voice).
This is an Occam’s Razor scenario: the simplest explanation is that people are assholes and a lot of work to moderate, and not every server is going to be able to make it. But that’s why federation is a good thing. It’ll be interesting to see if everyone migrates and remains on the fediverse or if the community shrinks.
- Comment on Ai Code Commits 3 weeks ago:
An LLM providing “an opinion” is not a thing
Agreed, but can we just use the common parlance? Explaining completions every time is tedious, and most everyone talking about it at this level already knows. It doesn’t think, it doesn’t know anything, but it’s a lot easier to use those words to mean something that seems analogous. But yeah, I’ve been on your side of this conversation before, so let’s just leave all that as agreed.
this would not have to reach either a human or an AI agent or anything before getting fixed with little resources
There are tools that do some of this automatically. I picked really low hanging fruit that I still see every single day in multiple environments. LLMs attempt more, but they need review and acceptance by a human expert.
Perfectly decent-looking “minor fixes” that are well worded, follow guidelines, and pass all checks, while introducing an off-by-one error or swapping two parameters that happen to be compatible and make sense in context, are the issue. And those, even if rare (empirically I’d say they are not that rare for now), are so much harder to spot without full human analysis. They are a real threat.
I get that folks are trying to fully automate this. That’s fucking stupid. I don’t let seasoned developers commit code to my repos without review, why would I let AI? Incidentally, seasoned developers also can suggest fixes with subtle errors. And sometimes they escape into the code base, or sometimes perfectly good code that worked fine on prem goes to shit in the cloud—I just had to argue my team into fixing something that executed over 10k SQL statements in some cases on a single page load due to lazy loading. That shit worked “great” on prem but was taking up to 90 seconds in the cloud. All written by humans.
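That lazy-loading blowup is the classic N+1 query pattern: one query for the parent rows, then one more per row for its children. A minimal sketch of why it explodes, using hypothetical names and a fake in-memory “database” that just counts round trips (not the actual system described above):

```python
# N+1 vs. batched fetch, illustrated with a stand-in "database"
# that only counts queries. All names here are made up.

class CountingDb:
    """Fake database: tracks how many queries it receives."""
    def __init__(self):
        self.queries = 0
        self.orders = {1: ["a", "b"], 2: ["c"], 3: []}  # customer -> orders

    def fetch_customer_ids(self):
        self.queries += 1
        return list(self.orders)

    def fetch_orders_for(self, customer_id):
        self.queries += 1  # one round trip PER customer (lazy loading)
        return self.orders[customer_id]

    def fetch_all_orders(self):
        self.queries += 1  # one batched round trip (think: a join)
        return dict(self.orders)

def load_lazy(db):
    # 1 query for the ids + N queries for the orders = N+1 total
    return {cid: db.fetch_orders_for(cid) for cid in db.fetch_customer_ids()}

def load_eager(db):
    # 2 queries total, no matter how many customers there are
    ids = db.fetch_customer_ids()
    all_orders = db.fetch_all_orders()
    return {cid: all_orders[cid] for cid in ids}

lazy_db, eager_db = CountingDb(), CountingDb()
assert load_lazy(lazy_db) == load_eager(eager_db)  # same data either way
print(lazy_db.queries, eager_db.queries)  # 4 vs 2 here; ~10,001 vs 2 at 10k rows
```

Both paths return identical data; only the round-trip count differs, which is why lazy loading can look “great” on prem and fall over once every query crosses a network hop to the cloud.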
The goal should not be to emulate human mistakes, but to make something better.
I’m sure that is someone’s goal, but LLMs aren’t going to do that. They are a different tool that helps but does not in any way replace human experts. And I’m caught in the middle of every conversation because I don’t hate them enough for one side, and I’m not hype enough about them for the other. But I’ve been working with them for several years now and watched them grow since GPT-2, and I understand them pretty well. Well enough not to trust them to the degree some idiots do, but I still find them really handy.
- Comment on Ai Code Commits 3 weeks ago:
The place I work is actively developing an internal version of this. We already have optional AI PR reviews (they neither approve nor reject, just offer an opinion). As a reviewer, AI is the same as any other. It offers an opinion and you can judge for yourself whether its points need to be addressed or not. I’ll be interested to see whether its comments affect the comments of the tech lead.
I’ve seen a preview of a system that detects problems like failing sonar analysis and it can offer a PR to fix it. I suppose for simple enough fixes like removing unused imports or unused code it might be fine. It gets static analysis and review like any other PR, so it’s not going to be merging any defects without getting past a human reviewer.
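For the unused-import case specifically, the core of such an automated fix is simple enough to sketch. This is a rough illustration (not the internal tool described above) using Python’s `ast` module to flag imported names that are never referenced:

```python
# Flag unused imports by parsing the source and comparing the set of
# imported names against the names actually referenced. A real tool
# would also handle __all__, star imports, and string annotations.
import ast

def unused_imports(source: str) -> list[str]:
    tree = ast.parse(source)
    imported, used = {}, set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import a.b" binds the name "a"; honor "as" aliases
                imported[alias.asname or alias.name.split(".")[0]] = alias.name
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = alias.name
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return [name for name in imported if name not in used]

code = "import os\nimport sys\nprint(sys.argv)\n"
print(unused_imports(code))  # ['os']
```

An automated PR bot would take the flagged names, delete those import lines, and re-run the static analysis before handing the diff to a human reviewer, which matches the review gate described above.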
I don’t know how good any of this shit actually is. I tested the AI review once and it didn’t have a lot to say because it was a really simple PR. It’s a tool. When it does good, fine. When it doesn’t, it probably won’t take any more effort than any other bad input.
I’m sure you can always find horrific examples, but the question is how common they are and how subtle any introduced bugs are, to get past the developer and a human reviewer. Might depend more on time pressure than anything, like always.
- Comment on Airbuddy 🦛 3 weeks ago:
Sorry, mate. I dropped this.
#!/usr/bin/env bash
- Comment on Airbuddy 🦛 3 weeks ago:
#Yo dawg.
#I heard you like comments.
#So I prefix every line with a hashtag so I can comment my comment while I comment.
exit 1
- Comment on Generative AI's most prominent skeptic doubles down 3 weeks ago:
I’m a fan of AI, but I still think this guy is right as far as investment and hype goes. It’s a useful tool. It cannot do all the things they are promising well. Both can be true.
- Comment on SignalFire: startups and Big Tech firms cut hiring of recent graduates by 11% and 25% respectively in 2024 vs. 2023, as AI can handle routine, low-risk tasks 3 weeks ago:
What kind of low risk tasks is a startup doing? I predict a lot of companies that are never going to generate value.
- Comment on AI could already be conscious. Are we ready for it? 3 weeks ago:
I have a set of attributes that I associate with consciousness. We can disagree in part, but if your definition is so broad as to include math formulas, there isn’t any common ground for us to discuss them.
If you want to say contemplation/awareness of self isn’t part of it, then I guess I’m not as precious about it as I would be over a human-like perception of self; fine, people can debate what ethical obligations we have to an ant-like consciousness when we can achieve even that, but we aren’t there yet. LLMs are nothing but a process of transforming input to output. I think consciousness requires rather more than that, or we wind up with erosion being considered a candidate for consciousness.
So I’m not the authority, but if we don’t adhere to some reasonable layman’s definition it quickly gets into weird wankery that I don’t see any value in exploring.
- Comment on Half-Life 3 Has Been Designed to be ‘The Final Chapter’, It’s Claimed 4 weeks ago:
I can’t say never, but TRoS is a fucking awful movie (not just as Star Wars, but a terrible cinematic experience) and I think it’s really damn hard to redeem a trilogy with that as a bookend.
- Comment on AI could already be conscious. Are we ready for it? 4 weeks ago:
I said on paper. They are just algorithms. When silicon can emulate meat, it’s probably time to reevaluate that.
- Comment on AI could already be conscious. Are we ready for it? 4 weeks ago:
If they don’t contemplate self then I’d say they aren’t conscious, but I’m not sure how we’d know if they do.
- Comment on AI could already be conscious. Are we ready for it? 4 weeks ago:
Let’s say we do an algorithm on paper. Can it be conscious? Why is it any different if it’s done on silicon rather than paper?
Because they are capable of fiction. We write stories about sentient AI and those inform responses to our queries.
I get playing devil’s advocate and it can be useful to contemplate a different perspective. If you genuinely think math can be conscious I guess that’s a fair point, but that would be such a gulf in belief for us to bridge in conversation that I don’t think either of us would profit from exploring that.
- Comment on AI could already be conscious. Are we ready for it? 4 weeks ago:
Consciousness requires contemplation of self. Which requires the ability to contemplate.
Current AIs function as mainly complex algorithms that are run when invoked. They are 100% not conscious any more than a^2^+b^2^=c^2^ is conscious. AI can simulate the words of a conscious being, but they don’t come from any awareness of internal state, but are a result of the prompt (including injected data and instructions).
In the future, I’m sure an AI could be designed that spends time thinking about its own existence, but I’m not sure why anyone would pay for all the compute to think about things not directly requested.
- Comment on @chrlschn - Beware the Complexity Merchants 4 weeks ago:
Anything UI is kinda bullshit because HTML and CSS were never designed to produce pixel-perfect fidelity on every screen, but companies insist, and jank like text shifting just slightly when you hover your mouse over it is bad UX. So what we wind up with is a fifty-level hierarchy of containers making sure everything lines up just so. That complexity is imposed by the intersection of HTML, CSS, and JS. Not that the previous developer wasn’t an idiot, but I freaking hate front-end work despite being “full-stack.”
- Comment on get back, swine! 4 weeks ago:
“Calm down or you’re fucking dinner.”
“It’s cool. I’m chill.”
- Comment on Elon Musk's X temporarily down for tens of thousands of users 4 weeks ago:
This is so frustrating!!!
Temporarily???
- Comment on The AI-powered collapse of the American tech workfoce 4 weeks ago:
it makes for a very good narrative to spin at investors
Particularly for the investors in AI companies. AI is useful. I use it a lot, but all of this shit they put out about AIs taking over the world or how we’re going to have to figure out how to deal with 90% unemployment is science-fiction marketing.
It’s not going to take over the world. It’s not going to put artists out of work—not once consumers take in the AI-generated results.
It’s sure as fuck not putting software devs out of work on any kind of scale. It makes me a bit more productive, but not enough to replace a productive co-worker.
On the other hand, I’ve had team members who would boost overall team productivity by getting fired before LLMs.