CeeBee
@CeeBee@lemmy.world
- Comment on Games that stuck with you 4 months ago:
Little Big Adventure Little Big Adventure 2
- Comment on 'Brain-in-a-jar' biocomputers can now learn to control robots 4 months ago:
All hail the Omnessiah!
- Comment on OpenAI strikes Reddit deal to train its AI on your posts 5 months ago:
This makes me think you don’t understand my meaning. I think you’re talking about Reddit one day deciding to search for and restore obfuscated and deleted comments.
Yes, that is what we’re talking about. There were a large number of users who updated their comments to something basic and then deleted them. I’m fairly confident that before that happened they had zero need to implement a spam prevention system like you’re suggesting. The fact that all those users’ comments (including mine) are still <deleted> is evidence of that.
They may have implemented something like that recently, but not before.
- Comment on OpenAI strikes Reddit deal to train its AI on your posts 5 months ago:
There are so many ways this can be done that I think you are not thinking of.
No, I can think of countless ways to do this. I do this kind of thing every single day.
What I’m saying is that you need to account for every possibility. You need to isolate all the deleted comments that fit the criteria of the “Reddit Exodus”.
How do you do that? Do you narrow it down to a timeframe?
The easiest way to do this is to identify all deleted accounts, find the backup with the most recent version of their profile with non-deleted comments, and insert that user back into a working copy of the main database (not the prod db).
Now you need to parse billions upon billions upon billions of records. And yes, it’s billions because you need the system to search through all the records to know which record fits the parameters. And you need to do that across multiple backups for each deleted profile/comment.
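A minimal sketch of that backup-scan approach, using a toy in-memory stand-in for the backups (the schema and all names here are invented for illustration, not Reddit’s actual data model):

```python
# Toy model: each backup is a snapshot of {comment_id: text}, ordered
# oldest to newest, and "<deleted>" marks a wiped comment.
def recover_deleted_comments(current, backups):
    """For each currently-deleted comment, scan backups newest-first
    and take the most recent non-deleted version found."""
    recovered = {}
    for cid, text in current.items():
        if text != "<deleted>":
            continue
        for snapshot in reversed(backups):  # newest backup first
            old = snapshot.get(cid)
            if old is not None and old != "<deleted>":
                recovered[cid] = old
                break
    return recovered

current = {"c1": "<deleted>", "c2": "still here"}
backups = [
    {"c1": "original insightful comment", "c2": "still here"},
    {"c1": "asdf (obfuscated before deleting)", "c2": "still here"},
]
recovered = recover_deleted_comments(current, backups)
```

Note the edge case this surfaces: if a user overwrote a comment with junk before deleting it, the newest non-deleted snapshot recovers the junk, not the original. The simple scan logic alone doesn’t solve the problem.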
It’s a lot of work. And what’s the payoff? A few good comments and a ton of “yes this ^” comments.
I sincerely doubt it’s worth the effort.
- Comment on OpenAI strikes Reddit deal to train its AI on your posts 5 months ago:
It can be done quite easily, trust me.
The words of every junior dev right before I have to spend a weekend undoing their crap.
I’ve been there too many times.
There are always edge cases you need to account for, and you can’t account for them until you run tests and then verify the results.
And you’d be parsing billions upon billions of records. Not a trivial thing to do when running multiple tests to verify. And ultimately for what is a trivial payoff.
You don’t screw around with your business’s irreplaceable prod data without exhaustively verifying every possible outcome of a data modification.
It’s a piece of cake.
It hurts how often I’ve heard this and how often it’s followed by a massive screw up.
- Comment on OpenAI strikes Reddit deal to train its AI on your posts 5 months ago:
It’s theoretically possible, but the issue that anyone trying to do that would run into is consistency.
How do you restore the snapshots of a database to recover deleted comments but also preserve other comments newer than the snapshot date?
The answer is that it’s nearly impossible. Not strictly impossible, but not worth the monumental effort when you can just focus on existing comments, which greatly outweigh any deleted ones.
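To make the consistency problem concrete, here’s a toy timestamp-based merge (again a hypothetical schema invented for illustration, not anything Reddit actually runs):

```python
from datetime import datetime, timezone

def merge_snapshot(current, snapshot, snapshot_time):
    """Restore the snapshot versions of comments, but keep any comment
    created after the snapshot was taken."""
    merged = dict(snapshot)  # start from the restored snapshot
    for cid, comment in current.items():
        if comment["created_at"] > snapshot_time:
            merged[cid] = comment  # newer than the snapshot: preserve
    return merged

snapshot_time = datetime(2023, 6, 1, tzinfo=timezone.utc)
snapshot = {
    "c1": {"text": "original", "created_at": datetime(2023, 5, 1, tzinfo=timezone.utc)},
}
current = {
    "c1": {"text": "<deleted>", "created_at": datetime(2023, 5, 1, tzinfo=timezone.utc)},
    "c2": {"text": "newer comment", "created_at": datetime(2023, 7, 1, tzinfo=timezone.utc)},
}
merged = merge_snapshot(current, snapshot, snapshot_time)
```

Even this simple rule silently discards legitimate edits made after the snapshot date, which is exactly why the effort balloons once you account for every case.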
- Comment on What is the point of Xbox? 6 months ago:
Go vegan
I swear vegans are eventually going to outclass religious people at pushing their own beliefs.
- Comment on What is the point of Xbox? 6 months ago:
Sony is just as bad in their own ways.
- Comment on Stack Overflow bans users en masse for rebelling against OpenAI partnership — users banned for deleting answers to prevent them being used to train ChatGPT 6 months ago:
I’ve tried a lot of scenarios and languages with various LLMs. The biggest takeaway I have is that AI can get you started on something or help you solve some issues. I’ve generally found that anything beyond a block or two of code becomes useless. The more it generates the more weirdness starts popping up, or it outright hallucinates.
For example, today I used an LLM to help me tighten up an incredibly verbose bit of code. Today was just not my day and I knew there was a cleaner way of doing it, but it just wasn’t coming to me. A quick “make this cleaner: <code>” and I was back to the rest of the code.
This is what LLMs are currently good for. They are just another tool, like tab completion or code linting.
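The kind of cleanup described above is usually a small mechanical transformation. A contrived example (both functions are invented purely for illustration):

```python
# Verbose original: collect the squares of the even numbers.
def squares_of_evens_verbose(nums):
    result = []
    for n in nums:
        if n % 2 == 0:
            sq = n * n
            result.append(sq)
    return result

# The kind of tightened version a "make this cleaner: <code>"
# prompt tends to return: same behavior, one comprehension.
def squares_of_evens(nums):
    return [n * n for n in nums if n % 2 == 0]
```

The point isn’t that the refactor is hard, it’s that on an off day the tool hands you the obvious cleaner form in seconds so you can get back to the real work.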
- Comment on The walls of Apple’s garden are tumbling down 6 months ago:
Linux is sadly very messy for a sysadmin.
wut?
- Comment on The walls of Apple’s garden are tumbling down 6 months ago:
CUDA and AI stuff is very much Linux focused. They run better and faster on Linux, and the industry puts their efforts into Linux. CNC and 3D printing software is mostly equal between Linux and Windows.
The one thing Linux lacks in this area is CAD support from the big players. FreeCAD and OpenSCAD exist, and they work very well, but they do miss a lot of the polish the proprietary software has. There are proprietary CAD solutions for Linux, but they’re more industry-specific and not general purpose like AutoCAD.
- Comment on Reddit embracing all out enshittification 6 months ago:
I can’t quite put my finger on what exactly makes me feel so strongly, but it’s something to do with how sentences and paragraphs are constructed.
It has the classic 3 section style. Intro, response, conclusion.
It starts by acknowledging the situation. Then it moves on to the suggestion/response. Then finally it gives a short conclusion.
- Comment on Hyundai pauses X ads over pro-Nazi content on the platform 6 months ago:
Maybe waiting to see which side comes out on top. Kinda like Volkswagen. (Yes, I know it didn’t exactly happen like that.)
- Comment on Big Tech Is Faking AI 7 months ago:
I worked in the object recognition and computer vision industry for almost a decade. That stuff works. Really well, actually.
But this checkout thing from Amazon always struck me as odd. It’s the same issue as these “take a photo of your fridge and the system will tell you what you can cook”. It doesn’t work well because items can be hidden in the back.
The biggest challenge in computer vision is occlusion, followed by resolution (in the context of surveillance cameras, you’re lucky to get 200x200 for smaller objects). They would have had a really hard, if not impossible, time getting clear shots of everything.
My gut instinct tells me that they had intended to build a huge training set over time using this real-world setup and hope that the sheer amount of training data could help overcome at least some of the issues with occlusion.
- Comment on Big Tech Is Faking AI 7 months ago:
That’s not AI tho.
What do you mean?
- Comment on What if there's a bigger, still unknown reference point? 7 months ago:
Earth itself is moving around the sun at about 100,000 km/h, and the sun is traveling through the galaxy at about 1 million km/h.
So if Marty went back/forward just one hour then he’d be about 1,100,000 kilometers away from Earth (or 900,000 kilometers, depending on Earth’s orbital direction relative to the sun’s direction of travel).
And then there’s the motion and speed of the Milky Way itself.
This is all assuming that the layout of the underlying fabric of spacetime is absolute (which it seems to be, outside of expansion).
- Comment on Eww, Copilot AI might auto-launch with Windows 11 soon 7 months ago:
I’ll bring the snacks!
- Comment on Eww, Copilot AI might auto-launch with Windows 11 soon 7 months ago:
- Comment on Eww, Copilot AI might auto-launch with Windows 11 soon 7 months ago:
What ads?
Have you actually used Windows?
- Comment on Eww, Copilot AI might auto-launch with Windows 11 soon 7 months ago:
I’ll help also
- Comment on OpenAI Adds Free Instant ChatGPT Access for Everyone. Here's Why That Matters 7 months ago:
Thanks for that read. I definitely agree with the author for the most part. I don’t really agree that current LLMs are a form of AGI, but it’s definitely close.
But what isn’t up for debate is the fact that LLMs are 100% AI. There’s no debate there. But I think the reason why people argue that is because they conflate “intelligence” with concepts like sapience, sentience, consciousness, etc.
These people don’t understand that intelligence is a concept that can, and does, exist outside of consciousness.
- Comment on OpenAI Adds Free Instant ChatGPT Access for Everyone. Here's Why That Matters 7 months ago:
The most infuriating thing for me is the constant barrage of “LLMs aren’t AI” from people.
These people have no understanding of what they’re talking about.
- Comment on Have We Reached Peak AI? 8 months ago:
they literally have no mechanism to do any of those things.
What mechanism does it have for pattern recognition?
that is literally how it works on a coding level.
Neural networks aren’t “coded”.
It’s called an LLM for a reason.
That doesn’t mean what you think it does. Another word for language is communication. So you could just as easily call it a Large Communication Model.
Neural networks have hundreds of thousands of interconnected nodes, at a minimum. Llama 2 has 70 billion parameters. The newly released Grok has over 300 billion. And though we don’t have official numbers, GPT-4 is said to be close to a trillion.
The interesting thing is that when you have neural networks of such a size and you feed large amounts of data into it, emergent properties start to show up. More than just “predicting the next word”, it starts to develop a relational understanding of certain words that you wouldn’t expect. It’s been shown that LLMs understand things like Miami and Houston are closer together than New York and Paris.
Those kinds of things aren’t programmed, they are emergent from the dataset.
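The “closer together” claim is usually measured as similarity between embedding vectors. A toy illustration with hand-made 3-d vectors (real learned embeddings have thousands of dimensions; these numbers are invented purely to show the comparison):

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Fake 3-d "embeddings" chosen so that the two Sun Belt cities point
# in a similar direction while New York and Paris do not.
emb = {
    "Miami":    [0.9, 0.1, 0.2],
    "Houston":  [0.8, 0.2, 0.3],
    "New York": [0.1, 0.9, 0.2],
    "Paris":    [0.2, 0.1, 0.9],
}
```

With real model embeddings, the same comparison falls out of the training data rather than being hand-set, which is what makes it an emergent property.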
As for things like creativity, they are absolutely creative. I have asked seemingly impossible questions (like a Harlequin story about the Terminator and Rambo) and the stuff it came up with was actually astounding.
They regularly use tools. LangChain is a thing. There’s a new LLM-based agent called Devin that can program, look up docs online, and use a command line terminal. That’s using a tool.
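The basic “tool use” pattern those frameworks implement is simple to sketch: the model emits a structured request, and a dispatcher runs the matching tool. Everything below (the tool names, the fake model output) is invented for illustration:

```python
import json

# Hypothetical tool registry; a real framework would wrap search,
# code execution, shells, etc.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "echo": lambda text: text,
}

def dispatch(model_output):
    """Parse the model's structured tool request and run the tool."""
    call = json.loads(model_output)  # e.g. {"tool": ..., "arg": ...}
    return TOOLS[call["tool"]](call["arg"])

# Pretend the LLM decided it needs arithmetic:
fake_llm_output = '{"tool": "calculator", "arg": "6 * 7"}'
result = dispatch(fake_llm_output)
```

The loop in a real agent feeds `result` back into the model so it can decide the next step; this just shows the dispatch half.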
That also ties in with problem solving. Problem solving is actually one of the benchmarks that researchers use to evaluate LLMs. So they do problem solving.
To problem solve requires the ability to do analysis. So that check mark is ticked off too.
Just about anything that’s a neural network can be called an AI, because the total is usually greater than the sum of its parts.
- Comment on Have We Reached Peak AI? 8 months ago:
LLMs as AI is just a marketing term. there’s nothing “intelligent” about “AI”
Yes there is. You just mean it doesn’t have “high” intelligence. Or maybe you mean to say that there’s nothing sentient or sapient about LLMs.
Some aspects of intelligence are:
- Planning
- Creativity
- Use of tools
- Problem solving
- Pattern recognition
- Analysis
LLMs definitely hit basically all of these points.
Most people have been told that LLMs “simply” produce a result by predicting the next word that’s most likely to come next, but this is a completely reductionist explanation and isn’t the whole picture.
- Comment on What if there's a bigger, still unknown reference point? 8 months ago:
Now what about time travel?
Technically, Marty McFly should have appeared in space far from anything instead of old man Peabody’s Pine farm.
- Comment on House panel unanimously approves bill that could ban TikTok 8 months ago:
I can see that you don’t care about regulating the industry.
Right, because me saying that Facebook and other social media selling our data even just for advertising is not ok and we should introduce laws for strong data and privacy protection equates to me “not caring about regulating the industry”.
Sure there, bud.
You just want to punish China.
Nonsense.
But it’s a law that targets a single person or organization. And the Constitution outright bans it.
Ok, I get this, but it gets murky when the “organisation” being targeted is a corporate office of a government party.
I’m not claiming to have the answer, but as a non-American I can’t get upset at such a bill, simply because it would push back against a country that has lately been getting away with everything and causing severe and deliberate harm in other countries, including mine and yours.
- Comment on House panel unanimously approves bill that could ban TikTok 8 months ago:
Facebook being sued for giving data to Chinese companies with tighter relationships to the CCP than Bytedance is literally headline news right now.
I looked it up, and you’re right that there’s an issue there. But that’s an issue with an American owned company giving data to an adversarial country (two actually, China and Russia). It’s 100% absurd and shouldn’t be allowed with heavy penalties. But that’s still a different issue than the one we’re talking about.
The fact is you’re bending over backwards to defend an unconstitutional law with unprecedented powers
Two things: I’m not American, and it’s not unconstitutional anyways. There’s nothing in the bill that says no one is allowed to use it. And the first and preferred option of the bill is to sell ownership of TikTok to an American firm, essentially to divorce control and influence of China from the largely American userbase. If, and only if, the transfer of ownership is not possible then the app is to be delisted from all app stores.
That means that it’s still possible for existing users to use the app and it’s still possible to install the app through official means without either thing being illegal.
reuters.com/…/proposed-us-tiktok-ban-not-fair-chi…
Another interesting thing is that the Chinese Foreign Ministry has said it will protect its rights and national security interests (paraphrased). What on earth does TikTok, an app that’s Chinese owned and banned in the very country that owns it, have to do with Chinese National security?
That’s a very telling thing to say.
Make it illegal, on pain of a ban, to give or sell American data to a sensitive country, or to otherwise cause American data in your company’s control to come into their possession.
I can agree with this, but the TikTok bill has nothing to do with xenophobia. If China wasn’t an adversarial country actively bullying and threatening other countries with war and annihilation then it wouldn’t be an issue.
In fact, let’s go a step further and implement sweeping data protection laws so that our data can’t be sold for any reason.
The question of what’s the difference isn’t some cute gotcha thing.
No, it’s not a “cute gotcha thing”. It’s pointing out the difference between passive data collection and active control to influence content.
And you need to look up targeted advertising.
I know very well what it is. I work in the tech sector (IT/programming) adjacent to cyber security.
It’s literally creating a custom algorithm on everything from Reddit to Facebook to Google Search. Which is why it was used by the Russians to impact our 2016 elections via Facebook.
Right, so if you think targeted advertising is bad when company A sells data to company B, who then builds algorithms to target people for political party C, imagine how bad it is when that entire process is vertically integrated and directly controlled by a foreign adversary. And to add to that, we’re not even just dealing with ads anymore, we’re dealing with grassroots-like influencer content with talking points from the CCP.
You gave me an example of one really bad thing and said it’s the same thing as a different and extremely bad thing.
Both of them are bad and need to be addressed. But with TikTok being run by a CCP-influenced company in a country that laughs at American laws, there’s little recourse to deal with it.
- Comment on House panel unanimously approves bill that could ban TikTok 8 months ago:
If that needs to be spelled out to you, then that explains your position.
You’re either not smart enough to understand, or you’re a tankie of some kind.
You also completely dodged the part where you need to backup your claims about Facebook selling data to China.
- Comment on House panel unanimously approves bill that could ban TikTok 8 months ago:
But this doesn’t accomplish that goal
That’s partially true. But there’s a difference between having access to a dataset vs having direct control over an app, which includes the algorithms and content being shown.
In any case, if it goes through to a full ban, you can still use the app. It just cannot be distributed on any app stores. It would still be possible to sideload it (on Android).
And that will discourage a lot of people from using it, which would be the point.
I also would like to see any reports or studies showing China buying data from other social media platforms.
- Comment on The Terrible Costs of a Phone-Based Childhood 8 months ago:
They’re getting fucking shot at when they go to school.
American Defaultism