What about it?
Comment on Loops publishes their recommender algorithm
okamiueru@lemmy.world 1 week ago
This infographic reeks of AI slop.
newaccountwhodis@lemmy.ml 1 week ago
okamiueru@lemmy.world 1 week ago
I’m not too happy to spend time pointing out flaws in slop. This kind of bullshit asymmetry feels a bit too much like work. But, since you’re polite about it, and seem to ask in good faith…
First of all, this is presented as a technical infographic on an “algorithm” for how a recommendation engine will work. As someone whose job it is to design similar things: it explains pretty much nothing of substance. It does, however, describe the trivial parts you could assume from the problem description alone, and the rest is weird and confusing.
So let’s see what this suggested algorithm is.
-
It starts out with “user requests the feed”, and depending on whether or not you have “preference” data (prior interests or choices, etc), you give either a selection based on something generic, or something that you can base recommendations on. Well… sure. So far, silly, and trivial.
-
“Scoring and ranking engine”. And below this, a pie diagram with four categories. Why are there lines between only the two top categories and the engine box? Seems weird, but OK. I suppose all four are equally connected, which would be clearer without the lines.
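For reference, about the only substantive thing such a box can mean is a weighted sum over those categories. A minimal sketch, where the category names and weights are guessed for illustration and are not anything Loops published:

```python
# Hypothetical weights for the four pie categories (names and values
# are illustrative guesses, not from Loops' actual diagram)
WEIGHTS = {
    "engagement": 0.4,
    "recency": 0.3,
    "personal_preference": 0.2,
    "popularity": 0.1,
}

def score(post_signals):
    """Weighted sum over per-category signals, each in [0, 1]."""
    return sum(WEIGHTS[k] * post_signals.get(k, 0.0) for k in WEIGHTS)

# A post strong on engagement, middling on recency:
print(score({"engagement": 1.0, "recency": 0.5}))  # ≈ 0.55
```

The point being: if the diagram meant something like this, two lines would not explain it, and four lines would be redundant.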
-
On the three horizontal “Source Streams” arrows coming in from the right, it’s all just weird. The source streams are going to be… generated content, no? But let’s give it the benefit of the doubt and assume it’s suggesting that, given generated content, some of it might be considered relevant for “personal preference” and gets a “filter: hidden creators”. Even then, none of that makes any sense: the scoring and ranking engine is already suggested to do this part. The next one is “Popular (high scores), filter: bloom filter (already seen)”, which mixes concepts. A Bloom filter is the perfect thing to confuse an LLM, because it has nothing to do with “filter” as the word was used in the stream above. Something intelligent wouldn’t make this mistake. But an LLM does statistically parrot its way to suggesting a Bloom filter, since it is a cost-effective predicate function that could make sense for a “has seen before” check. Still: why is this here?
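To be fair to the one concept that does fit: a Bloom filter is a cheap, probabilistic “have I seen this post?” predicate, with no false negatives and a small false-positive rate. A minimal Python sketch of how a feed might use one, purely illustrative and not anything Loops published:

```python
import hashlib

class BloomFilter:
    """Probabilistic membership set: never misses an added item,
    may rarely report an item it hasn't seen (false positive)."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

seen = BloomFilter()
seen.add("post-123")
print(seen.might_contain("post-123"))  # True: added items are always found
```

So dropping “already seen” posts from a popularity stream is a plausible use; what the diagram fails to do is say why that belongs on one source stream and not the others.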
I’ll just leave it at that. This infographic would make a lot of sense if it was created by some high schoolers who tried to understand some of these things, found many relevant concepts, but didn’t fully understand any of them. And it’s also exactly the kind of stuff I’d expect from an LLM.
I don’t think Loops hired a bunch of kids, so LLM it is.
newaccountwhodis@lemmy.ml 1 week ago
Ty for the effort post. It’s all French to me, so I was looking for arrows to nowhere, crooked lines, and messed-up text.
okamiueru@lemmy.world 1 week ago
Happy to hear. Cheers
korendian@lemmy.zip 1 week ago
Just because you overanalyzed something to the point of confusing yourself does not mean that it is AI slop, or equally confusing for others.
To address the specific points you raised as “evidence” of AI:
- The two top categories have lines going to them because those are the things a user controls with their activity on the platform. Prior to that, the “for you” recommendation engine is not active, since it has nothing to base its recommendations on. Seems pretty clear to me.
- Time-decayed, in the context of that category, means when you last interacted with a post. If you haven’t interacted with a post for a while, it will no longer show up in your “for you” feed. Again, really quite straightforward.
- What about filtering hidden creators makes no sense? You hide a creator, they don’t show up in your feed. That’s one aspect of personalization, applied from the start; the rest is the two categories that, once a post makes it past the “hidden creator” filter, determine how likely it is to show up.
- The Bloom filter is literally explained right there: it tracks whether you have seen a post yet or not. Lemmy clearly does not have this sort of filter, because you keep seeing the same shit over and over until it drops off from whatever category of the feed you’re viewing. Really not sure what is hard to understand there.
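The time-decay point is standard recommender practice: a signal’s weight fades the longer it has been since you interacted. A sketch using an exponential half-life, where the 48-hour value is an illustrative guess and not Loops’ actual number:

```python
import math

def time_decayed_score(base_score, hours_since_interaction,
                       half_life_hours=48.0):
    """Exponential decay: the score halves every `half_life_hours`.
    48h is an illustrative default, not a real Loops parameter."""
    return base_score * math.pow(0.5, hours_since_interaction / half_life_hours)

# A signal from two days ago carries half its original weight:
print(time_decayed_score(10.0, 48.0))  # → 5.0
```

Posts whose decayed scores fall below the rest of the candidate pool simply stop surfacing, which matches the “no longer shows up after a while” behaviour described above.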
You’re using a lot of fancy words in your analysis here, but the actual analysis is nonsensical. Almost makes me wonder if you yourself are actually a bot.
okamiueru@lemmy.world 1 week ago
I think you might have missed my point. I wasn’t listing stuff I had trouble understanding; I was listing stuff that didn’t make much sense. The end result, even if you manage to excuse why each part isn’t bad, still doesn’t amount to anything useful or informative.
I’m also not using fancy words. The only thing that stands out is the “Bloom filter”, which isn’t a fancy word. It’s just a thing, in particular a data structure.
The most amusing and annoying thing about AI slop is that it’s loved by people who don’t understand the subject. They confuse an observation of slop with “ah, you just don’t get it”.
I design and implement systems and “algorithms” like this as part of my job. Communicating them efficiently is part of that job. If anyone had come to me with this diagram pre-2022, I’d have worried whether they were OK. After 2022, my LLM-slop radar is pretty spot-on.
PixelPilgrim@lemmings.world 1 week ago
Probably because the block under the CPU-looking thing doesn’t indicate how it interacts with the CPU-looking block, or with the block where the ranking engine feeds into the ranked “for you” feed. Also, there are two user controls.
TheOakTree@lemmy.zip 1 week ago
It seems that the pie chart under the CPU describes the weights of video characteristics that push content to the top of your feed. But that’s a guess, and it should be clearer than that if the platform wants to be transparent.
korendian@lemmy.zip 1 week ago
“Everything I don’t like is AI”
okamiueru@lemmy.world 1 week ago
That’s way too reductive.