JollyG
@JollyG@lemmy.world
- Comment on Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error 3 days ago:
Last attempt, I swear.
By digressing to abstraction, good people can and do justify building tech for immoral purposes. It is irrelevant that tech is not inherently good or bad in cases where it is built to do bad things. Talking about potential alternate uses in cases where tech is being used to do bad things is just a way of avoiding the issues.
I have no problem calling Flock's or Facebook's tech stack bad, because the intentions behind the tech are immoral. The application of the tech by those organizations is for immoral purposes (making people addicted, invading their privacy, etc.). The tech is an extension of bad people trying to do bad things. Commentary about tech's abstract nature is irrelevant at that point. Yeah, it could be used to do good. But it's not. Yeah, it is not in and of itself good or bad. Who cares? This instantiation of the tech is immoral, because its purposes are immoral.
The engineers who help make immoral things possible should think about that, rather than the abstract nature of their technology. In these cases, saying technology is neutral is to invite the listener to consider a world that doesn’t exist instead of the one that does.
- Comment on Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error 3 days ago:
Saying “tech is neutral” is a dodge. People say that to avoid thinking about the ethics of what it is they are doing.
- Comment on Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error 3 days ago:
At no point in this conversation have I ever said that tech in an abstract sense is inherently good or bad. The point that I am making, and this is the last time I will make it, is that it is not interesting to talk about the ethics of some technology in the abstract in cases where the actual tech, as it is actually implemented, is clearly bad.
Saying “tech is neutral” is a dodge. People say that to avoid thinking about the ethics of what it is they are doing.
- Comment on Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error 3 days ago:
I don’t see how that is the case.
It is literally the case. People who have literally made tools to do bad things justified it by claiming that tech is neutral in an abstract sense. Find an engineer who is building a tool to do something they think is bad, and they will tell you that bromide.
OpenCV is not, in itself, immoral. But OpenCV is, once again, actual tech that exists in the actual world. In fact, that is how I know it is not bad: I use the context of reality, rather than hypotheticals or abstractions, to assess the morality of the tech. The tech stack that makes up Flock is bad; once again, I make that determination by using the actual world as a reference point. It does not matter that some of the tech could be used to do good. In the case of Flock, it is not, so it's bad.
- Comment on Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error 3 days ago:
As I said before: In a conversation about technology as it actually exists, talking about potentials is not interesting. Yes, all technology has the potential to be good or bad. The massive surveillance tech is actually bad, right now, in the real world.
The issue with asserting that technology is neutral is that it lets the people who develop it ignore the impacts of their work. The engineers who make surveillance tech make it, ultimately, for immoral purposes. When they are confronted with the effects of their work on society, they avoid reckoning with the ethics of what it is that they are doing by deploying bromides like "technology is neutral."
Example: Building an operant conditioning feedback system into a social media app or video game is not inherently bad; you could use it to reinforce good behaviors and deploy it ethically by obtaining the consent of the people you use it on. But the operant conditioning tech in social media apps and video games that actually exists is very clearly and unambiguously bad. It exists to get people addicted to a game or media app so that they can be more easily exploited. Engineers built that tech stack out for the purpose of exploiting people. The tech, as it exists in the real world, is bad. When these folks were confronted with what they had done, they responded by claiming that tech is not inherently good or bad. (This is a real thing social media engineers really said.) They ignored the tech, as it actually exists, in favor of an abstract conversation about some potential alternative tech that does not exist. The effect of which is that the people doing harm built a terrible system without ever confronting what it was they were doing.
- Comment on Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error 3 days ago:
"Technology is neutral" is a bromide engineers use to avoid thinking about how their work impacts people. If you are an engineer working for Flock or a similar company, you are harming people. You are doing harm through the technology you help to develop.
The massive surveillance systems that currently exist were built by engineers who advanced technology for that purpose. The scale and totality of the resulting surveillance states are simply not possible without the tech. The closest alternatives are Stasi-like systems that are nowhere near as vast or continuous. In the actual world, the actual tech is immoral, because it was created for immoral purposes and because it is used for immoral purposes.
- Comment on Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error 3 days ago:
If you are in a discussion about the development and deployment of technology to facilitate a surveillance state, then saying “technology is neutral” is the least interesting thing you could possibly say on the subject.
In a completely abstract, disconnected-from-society-and-current-events sense it is correct to say technology is amoral. But we live in a world where surveillance technology is developed to make it easier for corporations and the state to invade the privacy of individuals. We live in a world where legal rights are being eroded by the use of this technology. We live in a world where this technology is profitable because it helps organizations violate individual rights. If you live in the US, as I do, then you live in a world where federal law enforcement agencies have become completely contemptuous of the law and are literally abducting innocent people off the street. They use the technology under discussion here to help them do that.
That a piece of tech might potentially be used for a not-immoral purpose is completely irrelevant to how it is actually being used in the real world.
- Comment on Is there a mechanism in the USA to undo presidential pardons years later if political corruption has been proven as motivation to give these pardons? 5 weeks ago:
I think you would struggle to find any serious Constitutional scholar who would agree with your interpretation. "Except in cases of impeachment" is clearly a limit on the cases in which a president has the power to issue a pardon, not a retroactive "unpardoning" of cases after a president has been impeached. In fact, the retroactive nullification of a pardon seems to fly in the face of a basic judicial principle that holds decisions to be final.
- Comment on Americans are holding onto devices longer than ever and it’s costing the economy 1 month ago:
The section about buying new phones and the section about company investment appear to have nothing to do with one another.
The report by the Fed they cite is concerned with estimating the effect of capital reinvestment on productivity gains by the firm.
Just breezing through the report, it looks like the Fed is trying to explain differences in GDP between economies as a consequence of capital reinvestment. When firms buy new equipment, which could include IT equipment but also things like robots, backhoes, new looms, or any other piece of equipment a firm uses to produce goods or services, they should be more productive because their equipment has newer technology in it. The Fed reasons that if two major economies differ in GDP growth, one of the potential explanations might be the rate of capital reinvestment firms in those economies engage in, because newer equipment usually increases productivity. So more frequent investments in capital should yield faster growth in GDP. They present evidence in favor of that argument.
I don't know how reasonable their conclusion is, because I am not too familiar with their measurements, which are not direct measures of capital investment, and I don't really know enough about how GDP changes over time to know if this is a good explanation. It is clear, however, that the Fed is not arguing that consumers need to keep buying new phones every year or the economy will collapse or even be harmed. That is not even remotely what the report is about.
- Comment on #environmentalist 2 months ago:
Using a straw when drinking sugary/acidic drinks is supposed to be better for your teeth, since it limits your teeth's exposure to the sugar or acid. Not sure how true that is, but I have had dentists suggest it as a way of lowering the chance of getting cavities.
- Comment on Coordinated Pro-Russian Propaganda Network Targeting ActivityPub and ATProto Services 3 months ago:
I think one of the problems with citing that first study as evidence that Russian disinfo is targeted at conservatives more than liberals is that it only studied one case, and Russian disinformation campaigns tailor their disinfo to different demographics, often through brute force/trial and error. So it is quite possible that the particular case they studied happened to be tailored to (or more successfully resonated with) conservatives, while another specific case would have resonated more with liberals, thus resulting in more liberal exposure by their metrics.
- Comment on ‘I’m a modern-day luddite’: Meet the students who don’t use laptops 3 months ago:
When I was in undergrad I had to write lots of essays by hand. I’d say about every other course in one of my majors had midterms and finals that were a single question essay to be completed in class during the testing period. I figured that was pretty typical.
- Comment on If what they taught us about checks and balances was a lie maybe what they taught us about civil disobedience was a lie too. 4 months ago:
The point I was making was that the people who are in power are in power because about half of all voters are fine with them being in power and about a third actively want fascist rule. Ultimately this is not a failure of government structure. It's a failure of citizens. Maybe that will change as those who supported Trump out of ignorance experience the consequences of their decisions. Maybe not. But Trump won the popular vote last election cycle and has always enjoyed a fairly substantial base. A base that penalizes conservatives who worked against him by removing them from power. You cannot ignore the role that the people played in bringing about the current state of affairs. We are getting what people voted for.
Btw the checks do still work. They work in lower courts as they apply the law without regard to partisanship. They, surprisingly, work in grand juries. And they work for non-MAGA states to the extent that our federalized system gives more influence to local governments. Where they have failed is where MAGA politicians enjoy wide support.
- Comment on If what they taught us about checks and balances was a lie maybe what they taught us about civil disobedience was a lie too. 4 months ago:
That's a nice bromide, but framing the current constitutional crises as the result of a "lie" about checks and balances fundamentally mischaracterizes the issues at hand. For one, it downplays the complicity of the other branches, which is clearly critical for enabling the abuse that we see. And it also overlooks the general issue that about half the nation actively enables the naked corruption and ascendant fascism of the current government.
The problem of the present moment is not the structure of the government; it's the tolerance of the population.
- Comment on If what they taught us about checks and balances was a lie maybe what they taught us about civil disobedience was a lie too. 4 months ago:
The checks still exist to correct those abuses of power. Just because Congress or SCOTUS is unwilling to use them doesn't mean they don't exist.
- Comment on If what they taught us about checks and balances was a lie maybe what they taught us about civil disobedience was a lie too. 4 months ago:
"Checks and balances" in the context of the US federal government just means that each branch has the ability to check the growth of power of the others. It's not "a lie" because it's still true. Right now Congress could, if they wanted to, impeach the president or pass laws preventing him from doing the things he wants. SCOTUS could stop him too, if they wanted to actually take up cases on the law instead of using the shadow docket to avoid making rulings.
Trump partisans hold a trifecta in government right now, so they are not going to use the checks they have available to them. But one branch refusing to check another because its members were elected from the same stock of partisan lunatics is not the same as checks and balances not existing.
- Comment on AI Experts No Longer Saving for Retirement Because They Assume AI Will Kill Us All by Then 4 months ago:
AI doomsday marketing wank has the same vibe as preteens at a sleepover getting spooked by a Ouija board.
- Comment on Reddit is using AI to determine users beliefs, values, stances and more based on their activity (posts and comments) summarizing it to Subreddit Mods. 4 months ago:
The post title makes it sound like Reddit is doing some sort of automated classification of user politics with some sort of ML technique. But the screenshot does not show that. It shows an LLM summary of a user's posting history. If the tool were run on a user who posted exclusively to a cat subreddit, the summary would have been about how the user likes cats. Whatever the utility or accuracy of LLM summaries, what the screenshot shows is far more anodyne than what this post's title implies is happening.
- Comment on Reddit is using AI to determine users beliefs, values, stances and more based on their activity (posts and comments) summarizing it to Subreddit Mods. 4 months ago:
The screenshot shows an LLM summary of a user's posting history. Is that what you mean by "determine beliefs, values, stances and more"? Is there more to this? How is that summary different from scrolling through someone's posting history to see what they post about?
- Comment on OpenAI claims GPT-5 AI model can provide PhD-level expertise. 5 months ago:
PhD Level expertise:
- Comment on Even households earning $150,000 a year are struggling with credit card and car payments 5 months ago:
It is very difficult to do. You really do have to be over-leveraged and bad with money. Which is probably why 99.966% of households making 150k+ are showing up as not delinquent on loans in these data.
- Comment on hubris go brrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr 5 months ago:
I feel the same way. This style of thinking can have pretty serious consequences for decision makers.
But, on the other hand, all my bosses think in bullet points, and I am usually the one that writes the bullets. . .
- Comment on hubris go brrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr 5 months ago:
CEOs think in bullet points. LLMs can spit out bulleted lists of confident-sounding utterances with ease.
It is not too surprising that people who see the world through overly simplified, disconnected summaries are impressed by LLMs.
- Comment on Scientists reportedly hiding AI text prompts in academic papers to receive positive peer reviews 5 months ago:
Andrew Gelman wrote about this. From his blog post I got the impression that this issue is mostly impacting compsci. Maybe it's more widespread than that field, but my experience with compsci research is that a lot more emphasis is placed on conferences compared to journals, and the general vibe I got from working with compsci folks was that volume mattered a lot more than quality when it came to publication. So maybe those quirks of the field left them more vulnerable to AI slop in the review process.
- Comment on Algorithmic Sabotage Manifesto. 6 months ago:
Is this really that precise? Reading through these 10 points, many of them seem quite vague to me. Phrases like:
[. . .] a structural renewal of a wider movement for social autonomy [. . .]
or
[ . . .] emancipatory defence [sic] of the need for communal constraint of harmful technology [. . .]
could mean a million different things, for example.
- Comment on An open letter signed by 602 tech founders, VCs, and more urges Sequoia Capital act after Shaun Maguire said Zohran Mamdani “comes from a culture that lies about everything” 6 months ago:
How can this be projection? Silicon Valley’s culture brought us fraudulent medical tech, cryptocurrency scams, industrialized wire fraud, illegal taxis, and DRM juice machines. A paragon of honesty and integrity if I ever saw one.
- Comment on We need to stop pretending AI is intelligent 6 months ago:
Word guessing machine.
- Comment on Why is Jordan Peterson both a Christian and not a Christian? 7 months ago:
His presentation of psychology leaves me with the impression that he is someone who is not well educated in the field. And I am saying this as someone with a background in a field that is very close to psychology.
His explanations of human experience and society rely on psychoanalysis, and he only seems to cite more recent work when it reinforces his viewpoint. His general approach to understanding human psychology is outdated.
1800s: psychoanalysis --> 1900s: behaviorism --> 1950s: the cognitive revolution --> present-day psychology
Peterson's view of the mind and society is stuck in the past.
- Comment on Why is Jordan Peterson both a Christian and not a Christian? 7 months ago:
Peterson retreats to a politically convenient solipsism whenever challenged on anything. He is not a serious person.
- Comment on Meta Reportedly Eyeing 'Super Sensing' Tech for Smart Glasses 8 months ago:
What does dystopia mean to you?
In this particular case, the thing I find dystopian is the tendency of a disconcertingly high number of people to allow a tech company to mediate (and eventually monetize) every aspect of their social lives. The point I was making is that if this tool were to experience widespread adoption, even putting aside the massive surveillance and manipulation issues, what will inevitably happen is that a subset of people will come to rely on the tool to the point where they cannot interact with others outside of it. That is bad. It's bad because it takes a fundamental human experience and locks it behind a paywall. It is also bad because the sort of interactions that this tool could facilitate are going to be, by their nature, superficial. You simply cannot have meaningful interactions with someone else if you are relying on a crib sheet to navigate an interaction with them.
This tool would inevitably lead to the atrophy of social skills, in the same way that overusing a calculator causes arithmetic skills to atrophy, and in the same way that overusing a GPS causes spatial reasoning skills to atrophy. But in this case it is worse, because this tool would be contributing to the further isolation of people who, judging by the excuses offered in this thread, are already bad at social interactions. People are already lonely, and apparently social media is contributing to that trend; allowing it to come between you and personal interactions in the face-to-face world is not going to help.
This is akin to having sticky notes to remember things, just in a more compact convenient application.
I really disagree with this analogy. It would be more appropriate to say that this is like carrying around a stack of index cards with notes about people in your life and pulling them out every time you interact with someone. If someone in my life needed an index card to interact with me, I would find that insulting, because it is insincere and dehumanizing. It communicates to others: "I don't care enough about you to bother to learn even basic information about who you are."
The problem isn’t the technology, it’s the application
I really cannot stand this bromide. We are talking about a company with a track record of using technology to abuse people. They facilitated a genocide (by incompetence, but they clearly did not give a shit). They prey on people when they feel bad. They researched ways to make people feel bad (so they will be easier to manipulate). They design their tools to be addictive and then manipulate and abuse people on their platform. Saying "technology is neutral" is the least interesting thing you can say about tech in the context of the current trends of Silicon Valley, a place whose thought leaders and influencers are becoming ever more obsessed with manipulation, control, and fascism. We don't need to speculate about technology; we already know the applications of this technology won't be neutral. They will be used to harm people for profit.