perestroika
@perestroika@lemm.ee
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 1 week ago:
I think Elon was having the opposite kind of problem, with Grok not validating its users nearly enough, despite Elon instructing employees to make it so. :)
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 1 week ago:
From the article:
Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.
From elsewhere:
Sycophancy in GPT-4o: What happened and what we’re doing about it
We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.
Apparently, people who are close to falling over the edge can use AI to push themselves over it, because it’s not critical of them.
- Comment on Please consider supporting Lemmy development 1 week ago:
Sadly, my only invite code was recently used up inviting someone I encountered in real life… and I’ve only ever invited people I’ve met in real life - because RiseUp has a policy of exacting vengeance on the inviter if the invited person does not meet local standards.
- Comment on Please consider supporting Lemmy development 1 week ago:
As an Eastern European drone developer, I’m OK with donating even to tankies …if what they do is building Lemmy. :)
As a side note, RiseUp needs your donations too.
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.
This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:
- accept that negative publicity will result
- accept that people may stop cooperating with them on this work
- accept that their reputation may not be considered spotless after the fact
- ensure that they won’t do anything illegal
After that, if they still feel their study is necessary, maybe they should run it and publish the results.
As for the question of whether a tailor-made response that considers someone’s background can sway opinions better - that’s been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been recommended to know their background, model several ways they might perceive the proposal, and advance your explanation in a way that relates better to their viewpoint.)
Thus, AI bots which take into consideration a person’s background will - if implemented right - indeed be more powerful at swaying opinions.
- Comment on How likely is it that Trump will be the first President assassinated since Kennedy? 5 weeks ago:
A counterpoint: unmanned technology has developed really fast recently. In old times, one had to be motivated as hell, because taking a shot at a president meant likely death.
These days, for a technically capable adversary, an attempt costs only moderate amounts, escape is far more likely, and the tools can be automated with self-destruction mechanisms to considerably hinder evidence collection.
I’d say that barriers are lower due to drones and robots. Then again, to get drones and robots pointed at oneself, one has to piss off people who have better things to do. That is, people who are unlikely to be desperate.
- Comment on Anthropic has developed an AI 'brain scanner' to understand how LLMs work and it turns out the reason why chatbots are terrible at simple math and hallucinate is weirder than you thought 5 weeks ago:
Wow, interesting. :)
Not unexpectedly, the LLM failed to explain its own thought process correctly.
- Comment on Tired of dating apps? 1 month ago:
A side note about dating apps: most of them aren’t much better than this.
Their interest is keeping the user clicking, paying for services and coming back.
If you find the right person for yourself, you will naturally do none of that.
So:
- they build awful card stack systems with no search function
- they build superficial profile systems with no metadata about personality, habits or world views
…and of course, with such systems, people fail to find suitable partners. They come back and pay, but society suffers, because someone needs to make money.
I would vote for a politician who promised that the ministry of social security will commission a public dating site built by scientists.
- Comment on Internet forums are disappearing because now everything is Reddit and Discord. And that's worrying. 1 month ago:
Server-wide censorship cannot be allowed. / This eliminates every platform I know of.
Within the I2P mix network, there was at some point an attempt to build a system named Syndie, in which everyone would have to be their own censor, and servers would host content without the operator really knowing or caring what they host.
It failed to take off, but I’m not sure whether the reason was the architecture or the main developer leaving.
- Comment on Internet forums are disappearing because now everything is Reddit and Discord. And that's worrying. 1 month ago:
Indeed, forums are almost gone. In particular, I miss one forum about science fiction, one about aeromodelling, one about electric vehicles (another still exists) and one about anarchism. An interesting hold-out in the country where I live is a military forum. Mods do a cursory background check, and the rules say that respectful discussion is the only kind accepted - ironically, the military forum has a peaceful atmosphere. But it could come crashing down far more easily than a social media company.
As for why forums disappeared - I think people became too comfort-seeking. They wanted zero expense (hosting a forum incurs some expenses and needs a bit of time and attention) and wanted all their discussion in one place. Advertisers wanted a place where the masses could be manipulated. Social media companies wanted people to interact more (read: pick more fights) and see more ads - and built their environments accordingly. Not for the public good.
- Comment on Multiple Tesla vehicles were set on fire in Las Vegas and Kansas City 1 month ago:
I’m not from the US, but I outright recommend quickly educating oneself about fiber-guided drones and remote weapons stations. Because the US is heading somewhere at a rapid pace.
Trump’s administration:
“Agency,” unless otherwise indicated, means any authority of the United States that is an “agency” under 44 U.S.C. 3502(1), and shall also include the Federal Election Commission.
Vance, in his old interviews:
“I think that what Trump should do, if I was giving him one piece of advice: Fire every single midlevel bureaucrat, every civil servant in the administrative state, replace them with our people.”
Also Vance:
“We are in a late republican period,” Vance said later, evoking the common New Right view of America as Rome awaiting its Caesar. “If we’re going to push back against it, we’re going to have to get pretty wild, and pretty far out there, and go in directions that a lot of conservatives right now are uncomfortable with.”
Googling “how do you remove a dictator?” when you already have one is doing it too late. On the day the wannabe Caesar crosses his Rubicon, it had better be the case that some people already know what to aim at him.
- Comment on China announces plan to label all AI-generated content with watermarks and metadata. 1 month ago:
Unlike roughly 95% of the regulations issued in China, this one actually seems well considered - something that might benefit people and work. Other countries should consider similar regulations.
- Comment on Google’s ‘Secret’ Update Scans All Your Photos 2 months ago:
In my experience, the Android API has iteratively made it ever harder for applications to automatically perform previously easy jobs - jobs which are trivial under ordinary Linux (e.g. become an access point, set the IP address, set the PSK, start a VPN connection, go into monitor / inject mode, access a USB device, write files to a directory of your choice, install an APK). Now there’s a literal thicket of API calls and declarations to get through before you can do some of these things (and some are forever gone).
The obvious reason is that there are a billion fools whom Google tries to protect from scammers.
But it kills the ability to do non-standard things, and the concept of your device being your own.
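To illustrate with one example from the list above - a minimal, hypothetical Kotlin sketch (the helper function and its structure are mine; only the WifiManager calls and permission names are real Android API): about the only way a normal app can still “become an access point” is a local-only hotspot, requested through a callback, and the app doesn’t even get to choose the SSID or PSK - the system generates both.

```kotlin
import android.content.Context
import android.net.wifi.WifiManager

// Hypothetical helper: request a local-only hotspot on a modern Android device.
// The manifest typically also needs CHANGE_WIFI_STATE and ACCESS_FINE_LOCATION,
// the user must grant location at runtime, and location services must be on -
// compare that to hostapd plus one config file on ordinary Linux.
fun startHotspot(context: Context) {
    val wifi = context.getSystemService(Context.WIFI_SERVICE) as WifiManager

    wifi.startLocalOnlyHotspot(object : WifiManager.LocalOnlyHotspotCallback() {
        override fun onStarted(reservation: WifiManager.LocalOnlyHotspotReservation) {
            // The system-generated SSID/PSK only become visible here, via
            // reservation.wifiConfiguration (older APIs) or
            // reservation.softApConfiguration (API 30+).
        }

        override fun onFailed(reason: Int) {
            // Fails if location is off, the permission was denied, and so on.
        }
    }, null) // null handler: callbacks arrive on the main thread
}
```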
- Comment on Google’s ‘Secret’ Update Scans All Your Photos 2 months ago:
The countdown to Android’s slow and painful death has already been ticking for a while.
It has become over-engineered and no longer appealing from a developer’s viewpoint.
I still write code for Android because my customers need it - and will for a while - but I’ve stopped writing code for Apple’s i-things, and I’m researching alternatives to Android. Rolling my own environment with FOSS components on top of Raspbian already looks feasible. On robots and automation, I already use it.
- Comment on Elon Musk’s X blocks links to Signal, the encrypted messaging service 2 months ago:
Tox is nice.
The typical pattern over here: if someone uses Signal, you guess they’re some military type (wants things to be secure, but wants it easy). If someone uses Tox, you guess they’re some hacker / anarchist type (wants things to be secure, but also anonymizable, even if it’s a bit harder).
- Comment on Lemm.ee is recruiting new admins! 3 months ago:
I appreciate the effort of running this place - it’s working well from my perspective - and hope you find the colleagues you need. :)
Myself, I can’t volunteer - I already moderate a few communities elsewhere, my time is limited, and I’m politically too partial. I also cannot say that I fully agree with local federation policy.
- Comment on ‘Do not pet’: Why are robot dogs patrolling Mar-A-Lago? 5 months ago:
Level 3000 hack: compromise security with drone fleas that jump onto drone dogs.
Level 9000 hack: join the pack with a drone attack dog.