FaceDeer
@FaceDeer@fedia.io
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
- Comment on If copyright on a work expired immediately after death, would be that a bad or good idea? 6 hours ago:
I think I'd prefer a flat timespan rather than a lifetime-dependent one. The two flaws I see with the lifetime-dependent one are:
- It can give wildly different opportunities to the rightsholder depending on how old they are and what their random life circumstances happen to be. A 20-year-old author could have an 80-year hold on their work, whereas a 70-year-old one could have just 10 years. Unless Truck-kun randomly gets involved and sends that 20-year-old author into another world a day after he published.
- It creates an incentive to assassinate popular authors.
It also creates complexity for work-for-hire situations where a corporation owns a copyright, though that's already a special case so one could continue handling it separately.
- Comment on If copyright on a work expired immediately after death, would be that a bad or good idea? 6 hours ago:
So glad to see another reference to this guy's work in the wild.
As an amusing side note, the original term of copyright in the first law that established it (the British Copyright Act of 1710) was a flat 14 years, with a mechanism that allowed you to apply for only one extension of an additional 14 years. So most things would be 14 years, and whatever select things were particularly valuable or important could have 28 years. Under Pollock's analysis this is just about the perfect possible system. So by sheer coincidence this is something that we got right the first time and ever since then we've been "correcting" it to be less and less optimal.
- Comment on Do LLM modelers maintain a list of manual corrections fed by humans? 7 hours ago:
I'm not a deep expert on LLMs, but I've been following their development and I write code that uses them, so I can think of two systematic approaches to "solving" the strawberry problem.
One is chain-of-thought reasoning, where the LLM does some preliminary note-taking (essentially talking to itself) before it gives a final answer. I've seen it tackle problems like this by saying "okay, how is strawberry spelled?", listing out the individual letters (presumably because somewhere in its training data was information that let it memorize the spellings of common tokens) and then counting them.
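That "spell it out, then count" strategy can be mimicked in a few lines of plain Python; this is just a toy illustration of the steps a chain-of-thought model verbalizes, not anything from a real LLM:

```python
# Toy illustration of the "list the letters, then count them" strategy.
# The spelling-out step is what a chain-of-thought model writes to itself;
# here we simply perform the same steps directly.
def count_letter_stepwise(word, letter):
    letters = list(word)      # "spell out" the word: s, t, r, a, w, ...
    count = 0
    for ch in letters:        # count matches one letter at a time
        if ch == letter:
            count += 1
    return letters, count

letters, count = count_letter_stepwise("strawberry", "r")
print(letters)  # ['s', 't', 'r', 'a', 'w', 'b', 'e', 'r', 'r', 'y']
print(count)    # 3
```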
Another is the "agentic" approach, where it might be explicitly provided with functions that let it hand the problem off to specialized program code. E.g., there could be a count_letters(string, letter_to_count) function that it's able to call. I expect that sort of thing would only be present in a framework where that kind of question is known to matter, though, and I'm not sure what that might be in the real world. Something helping users fill out forms, perhaps? Or a "language tutor" that's expected to be able to figure out whatever weird incorrect words a student might type?
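A minimal sketch of what that tool dispatch might look like; count_letters is the hypothetical function named above, and the dict-shaped "tool call" is just an assumption about the framework, not any specific vendor's API:

```python
# Hypothetical tool: the count_letters(string, letter_to_count) function
# mentioned above. The framework registers it and routes the model's
# "tool call" (represented here as a plain dict) to real Python code.
def count_letters(string, letter_to_count):
    return string.count(letter_to_count)

TOOLS = {"count_letters": count_letters}

def handle_tool_call(call):
    # call mimics what an agent framework might hand back, e.g.
    # {"name": "count_letters", "arguments": {...}}
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

result = handle_tool_call({
    "name": "count_letters",
    "arguments": {"string": "strawberry", "letter_to_count": "r"},
})
print(result)  # 3
```

The point is that the counting itself happens in ordinary code, so the model only has to decide to make the call, not to count correctly.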
There are also LLMs that don't tokenize at all and instead feed the literal string of characters into the neural network, but as far as I'm aware none of the commonly used ones work that way. They're just research models for now.
- Comment on The AI vibe shift is upon us 7 hours ago:
Now we're "vibe vibing"?