null
@null@piefed.au
- Comment on [deleted] 1 week ago:
Having a few drinks and getting stoned and then having sex used to make me sweat like a pig.
Just having sex without being drunk or stoned makes me sweat a normal amount.
Also, antidepressants have two problematic side effects: they can make you sweat more and make it more difficult to climax.
- Comment on About 3m Australians affected by unlawful Centrelink debt calculation to be eligible for up to $600 compensation 1 week ago:
No, I'm pretty sure we agree that it's a bad thing.
I'm pointing out the intention of those involved. The intention is not to assume everyone is a cheat; rather, the intention is to make it difficult to make a claim.
- Comment on About 3m Australians affected by unlawful Centrelink debt calculation to be eligible for up to $600 compensation 2 weeks ago:
the structural design assumes everyone is a welfare cheat until proven otherwise
I disagree. I work in a related industry and sadly see a lot of people trying to interact with Centrelink.
Over the years I've developed a strongly held belief that their processes are designed to be invasive and difficult to follow in order to discourage claimants. Dealing with Centrelink is your last, worst option, and that is by design.
- Comment on Argentina wants to monitor social media with AI to ‘predict future crimes’ 3 weeks ago:
I feel like people are somehow stupider.
In Australia in the 80s there was very strong opposition to the introduction of tax file numbers - similar to a social security number, I guess - merely a unique identifier for tax-paying citizens. It was considered an overreach by the government, and an unnecessary way to track and monitor citizens.
Now, 45 years later, those same people who were resistant to this type of identifier, like my parents, are nodding along with the conservatives who are trying to implement AI surveillance everywhere, saying how necessary it is to protect us all from evil crime-doers.
- Comment on Do LLM modelers maintain a list of manual corrections fed by humans? 3 weeks ago:
I don't know the answer, and I don't know anything about how LLMs are tuned, but I think the answer is probably partially yes.
My supposition is:
Instead of providing manual answers to specific questions, you modify the bot's approach to answering different types of questions.
For example, if you ask "what color are bananas", the bot answers by looking for discussions about the color of different fruits and selecting the word that seems to be provided most often.
Alternatively, if you ask "what is two plus two", when the bot parses the question it recognises that it's a math question, so instead of looking for text discussions of math, it converts it to an equation and returns the solution.
Previously, I guess bots were answering the "how many r's" question in the text-based kind of way, and the fix made the bot interpret it in a more mechanical/mathematical kind of way.
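To illustrate what I mean by "mechanical", here's a toy sketch of the kind of letter-counting routine a bot could delegate to instead of guessing from text statistics. This is purely hypothetical - I have no idea how any actual LLM handles it:

```python
# Hypothetical sketch: a mechanical "tool" for the "how many r's" question,
# as opposed to answering from patterns in training text.
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of `letter` in `word`."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```

The point is that the counting itself is trivial once the question is routed to actual computation; the hard part for a bot is recognising that the question calls for computation at all.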
It's a pretty salient demonstration of a bot's inability to reason. They're good at making sentences, but they can only emulate reasoning.