GamingChairModel
@GamingChairModel@lemmy.world
- Comment on SpaceX's Starship blows up ahead of 10th test flight 5 days ago:
NASA funded SpaceX based on hitting milestones under its COTS program. Those contracts were just as available to Boeing and Blue Origin, but they had less success meeting the milestones and turning a profit under fixed-price contracts (as opposed to the traditional cost-plus contracts). It’s still NASA-defined standards, only with the risk and uncertainty offloaded onto the private contractors, which was great for SpaceX and terrible for Boeing.
But ultimately it’s still just contracting.
- Comment on SpaceX's Starship blows up ahead of 10th test flight 6 days ago:
NASA has always been dependent on commercial, for-profit entities as contractors. The Space Shuttle was developed by Rockwell International (which was later acquired by Boeing). The Apollo Program relied heavily on Boeing, Douglas Aircraft (which later merged into McDonnell Douglas, and then into Boeing), North American Aviation (which later became Rockwell and was acquired by Boeing), and IBM. Lots of cutting-edge stuff in that era came out of government contracts throwing money at private corporations.
That’s the whole military-industrial complex Eisenhower was talking about.
The only difference with today is that space companies have other customers to choose from, not just NASA (or the Air Force/Space Force).
- Comment on Honda successfully launched and landed its own reusable rocket 6 days ago:
The only problem with that plan is that it takes a lot of energy to raise an orbit that much; I’m not sure how to make that feasible.
Lowering the orbit takes energy, too, unless you’re relying solely on atmospheric drag.
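To put rough numbers on it, here’s the textbook Hohmann transfer delta-v between two circular orbits; the example altitudes below are just placeholders I picked, not anything specific to that plan:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean Earth radius, m

def hohmann_delta_v(alt1_m: float, alt2_m: float) -> float:
    """Total delta-v (m/s) for a Hohmann transfer between two circular orbits."""
    r1, r2 = R_EARTH + alt1_m, R_EARTH + alt2_m
    dv1 = math.sqrt(MU_EARTH / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(MU_EARTH / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return abs(dv1) + abs(dv2)

# Placeholder example: 400 km LEO up to a 2,000 km orbit.
print(f"{hohmann_delta_v(400_000, 2_000_000):.0f} m/s of delta-v")
```

Roughly the same burn budget applies in reverse if you want to come back down propulsively, which is why drag is the cheap way to lower an orbit.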
- Comment on Honda successfully launched and landed its own reusable rocket 6 days ago:
Your original comment said 2050, which is a long way off. SpaceX’s first launch attempt was in 2006, their first successful launch was in 2008, their first recovery of a rocket in reusable condition was in 2015, and their first reuse of a rocket was in 2017. If they can make progress on that kind of timeline, why wouldn’t someone else be able to?
- Comment on Honda successfully launched and landed its own reusable rocket 6 days ago:
Physics don’t change fundamentally between 6 meters and 120 meters
Yes, it does. The mass-to-strength ratio of structural components changes with scale. So does the thrust-to-mass ratio of a rocket and its fuel. So does heat dissipation (which depends on the ratio of surface area to mass).
And I don’t know shit about fluid dynamics, but I’m skeptical that things scale cleanly, either.
Scaling upward will encounter challenges not apparent at small sizes. That goes for everything from engineering bridges to buildings to cars to boats to aircraft to spacecraft.
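To make the scaling point concrete, here’s the square-cube effect in number form; the only assumption is uniform geometric scaling, and the 6 m vs 120 m figures come from the comment I’m replying to:

```python
def scaling_factors(scale: float) -> dict[str, float]:
    """How area- and volume-driven quantities change under uniform geometric scaling."""
    return {
        "length (height, lever arms)": scale,
        "area (surface area, structural cross-sections)": scale ** 2,
        "volume (mass, propellant, heat generated)": scale ** 3,
        "surface area per unit mass (heat rejection)": scale ** 2 / scale ** 3,
        "cross-section strength per unit mass": scale ** 2 / scale ** 3,
    }

# A ~6 m demonstrator scaled to a ~120 m rocket is a 20x scale-up.
for quantity, factor in scaling_factors(120 / 6).items():
    print(f"{quantity}: x{factor:g}")
```

Mass grows 8,000-fold while cross-sectional area grows only 400-fold, which is exactly why the structural and thermal margins don’t carry over.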
- Comment on Honda successfully launched and landed its own reusable rocket 6 days ago:
The satellite constellation is the natural consequence of cheaper rockets. It’s a true paradigm shift, but the pioneer in this case has only the moat of being able to spend less money per launch. If someone else can deliver payloads to low Earth orbit for less than $2,000/kg, then they’ll easily be able to launch a Starlink competitor.
- Comment on An analysis of X(Twitter)'s new XChat features shows that X can probably decrypt users' messages, as it holds users' private keys on its servers 1 week ago:
The actual key management and encryption protocols are published. Each new device generates a new key pair and reports its public key to an Apple-maintained directory. When a client wants to send a message, it checks the directory to know which unique devices it should send the message to, and the public key for each device.
Any newly added device doesn’t have the ability to retrieve old messages. But history can be transferred from old devices if they’re still working and online.
Basically, if you’ve configured things for maximum security, you will lose your message history if you lose or break your only logged-in device.
There’s no real way to audit whether Apple’s implementation follows the protocols they’ve published, but we’ve seen no indicators that they aren’t doing what they say.
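For anyone curious what that fan-out pattern looks like, here’s a minimal sketch of the general idea: a directory of per-device public keys, with the message encrypted separately for each device. To be clear, this is a toy illustration using X25519 and AES-GCM, not Apple’s actual iMessage protocol, and every name in it is a placeholder:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

directory = {}    # user -> {device_name: public key}; this is all the directory server sees
device_keys = {}  # device_name -> private key; in reality this never leaves the device

def register(user: str, device: str) -> None:
    priv = X25519PrivateKey.generate()
    device_keys[device] = priv
    directory.setdefault(user, {})[device] = priv.public_key()

def send(recipient: str, plaintext: bytes) -> dict:
    """Encrypt the same message once per registered device of the recipient."""
    envelopes = {}
    for device, device_pub in directory[recipient].items():
        eph = X25519PrivateKey.generate()  # fresh ephemeral sender key per envelope
        key = HKDF(hashes.SHA256(), 32, salt=None, info=b"demo").derive(eph.exchange(device_pub))
        nonce = os.urandom(12)
        envelopes[device] = (eph.public_key(), nonce, AESGCM(key).encrypt(nonce, plaintext, None))
    return envelopes

def receive(device: str, envelope: tuple) -> bytes:
    eph_pub, nonce, ciphertext = envelope
    key = HKDF(hashes.SHA256(), 32, salt=None, info=b"demo").derive(device_keys[device].exchange(eph_pub))
    return AESGCM(key).decrypt(nonce, ciphertext, None)

register("alice", "alice-phone")
register("alice", "alice-laptop")
envelopes = send("alice", b"hello")
print(receive("alice-phone", envelopes["alice-phone"]))
# A device registered *after* send() has no envelope, which is why new devices can't read old messages.
```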
- Comment on An analysis of X(Twitter)'s new XChat features shows that X can probably decrypt users' messages, as it holds users' private keys on its servers 2 weeks ago:
It’s a chain of trust, you have to trust the whole chain.
Including the entire other side of the conversation. E2EE in a group chat still exposes the group chat if one participant shares their own key (or the chats themselves) with something insecure. Obviously any participant can copy and paste things, archive/log/screenshot things. It can all be automated, too.
Take, for example, iMessage. We have pretty good confidence that Apple can’t read your chats when you have configured it correctly: E2EE, no iCloud archiving of the chats, no backups of the keys. But do you trust that the other side of the conversation has done the exact same thing correctly?
Or take, for example, the stupid case of senior American military officials accidentally adding a prominent journalist to their war-plans Signal chat. It’s not a technical failure of Signal’s encryption, but a mistake by one of the participants inviting the wrong person, who then published the chat to the world.
- Comment on Palantir Is Going on Defense 2 weeks ago:
Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus
- Comment on lemm.ee is shutting down at the end of this month 3 weeks ago:
I’m not sure that would work. Admins need to manage their instance users, yes, but they also need to look out for the posts and comments in the communities hosted on their instance, and be one level of appeal above the mods of those communities. Including the ability to actually delete content hosted in those communities, or cached media on their own servers, in response to legal obligations.
- Comment on AI company files for bankruptcy after being exposed as 700 Indian engineers - Dexerto 3 weeks ago:
Yes, it’s the exact same practice.
The main difference, though, is that Amazon as a company doesn’t rely on this “just walk out” business in any way that matters to its overall financial picture. So Amazon churns along, while that one insignificant business unit gets quietly shut down.
The company in this post, though, doesn’t have a trillion-dollar business subsidizing the losses from its AI scheme.
- Comment on It’s Time To Go Back to Web 1.0 3 weeks ago:
Yeah, from what I remember of what Web 2.0 was, it was services that could be interactive in the browser window, without loading a whole new page each time the user submitted information through HTTP POST. “Ajax” was a hot buzzword among web/tech companies.
Flickr was mind-blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google Maps was impressive in that you could drag the map around and zoom within the window while it fetched the necessary graphical elements on demand.
Or maybe Web 2.0 included the ability to implement state on top of the stateless HTTP protocol. You could log into a page and it would show only the new/unread items for you personally, rather than showing literally every visitor the exact same thing at the exact same URL.
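A minimal sketch of that idea the way you’d write it today, with Flask standing in as a convenient example (the routes, secret, and field names are all placeholders): the server drops a signed session cookie at login, and the same URL then renders different content per visitor.

```python
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "demo-only-secret"  # placeholder; a real app uses a long random secret

@app.post("/login")
def login():
    # The session dict is serialized into a signed cookie, layering state onto stateless HTTP.
    session["user"] = request.form["username"]
    return "logged in"

@app.get("/inbox")
def inbox():
    user = session.get("user")
    if user is None:
        return "please log in", 401
    return f"unread items for {user} only"  # same URL, personalized response
```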
Social networking became possible with Web 2.0 technologies, but I wouldn’t define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected user to user through that service’s design was kinda beside the point.
- Comment on AI model collapse is not what we paid for 4 weeks ago:
That’s never really been true. It’s a cat and mouse game.
If Google actually used its 2015 or 2005 algorithms as written, but on a 2025 index of webpages, that ranking system would be dogshit because the spammers have already figured out how to crowd out the actual quality pages with their own manipulated results.
Tricking the 2015 engine with 2025 SEO techniques is easy. The problem is that Google hasn’t actually been on the winning side of properly ranking quality for maybe 5-10 years, and quietly outsourced its ranking to the ranking systems of the big user-driven sites: Pinterest, Quora, Stack Overflow, Reddit, even Twitter to some degree. If a result is responsive and ranks highly on those user-voted sites, then it’s probably a good result. And they got away with switching to that methodology just long enough for each of those services to drown in their own SEO spam, so that those services are all much worse than they were in 2015. Now ranking search results based on those sites no longer works either.
There’s no turning back. We need to adopt new rankings for the new reality, not try to return to when we were able to get good results.
- Comment on Realtek's $10 tiny 10GbE network adapter is coming to motherboards later this year 4 weeks ago:
My gigabit connection is good enough for my NAS, as the read speeds on the hard drive itself tend to be limited to about a gigabit/s anyway. But I could see some kind of SSD NAS benefiting from a faster LAN connection.
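Rough numbers, with the drive figures being my own ballpark assumptions rather than benchmarks:

```python
# Link speeds vs. typical sustained throughput, all in MB/s (decimal units).
links = {"1 GbE": 1_000 / 8, "2.5 GbE": 2_500 / 8, "10 GbE": 10_000 / 8}
drives = {"single HDD, sequential": 200, "SATA SSD": 550, "NVMe SSD": 3_000}

for name, mb_per_s in {**links, **drives}.items():
    print(f"{name:>22}: ~{mb_per_s:,.0f} MB/s")
```

A single spinning disk isn’t far off gigabit speeds, but even one SATA SSD blows past a 2.5 GbE link, which is where 10 GbE starts to matter.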
- Comment on Tesla Full-Self Driving Veers Off Road, Hits Tree, and Flips Car for No Obvious Reason (No Serious Injuries, but Scary) 4 weeks ago:
All the other answers here are wrong. It was the Boeing 737-Max.
They fit bigger, more fuel efficient engines on it that changed the flight characteristics, compared to previous 737s. And so rather than have pilots recertify on this as a new model (lots of flight hours, can’t switch back), they designed software to basically make the aircraft seem to behave like the old model.
So a bug in the cheaper version of the software, combined with a faulty sensor, would cause the software to take over, override the pilots, and push the nose down instead of pulling up. Two crashes happened within 5 months, to aircraft that were pretty much brand new.
It was grounded for a while as Boeing fixed the software and hardware issues, and, more importantly, updated all the training and reference materials for pilots so that they were aware of this basically secret setting that could kill everyone.
- Comment on ‘Elden Ring’ Movie in the Works From ’Civil War’ Director Alex Garland, A24 4 weeks ago:
It’s not a movie, but the Fallout series had a great first season, and I’m looking forward to the second.
- Comment on Duolingo CEO says AI is a better teacher than humans—but schools will exist ‘because you still need childcare’ 4 weeks ago:
Instead, I actively avoided conversations with my peers, particularly because I had nothing in common with them.
Looking at your own social interactions with others, do you now consider yourself to be socially well adjusted? Was the “debating child in a coffee shop” method actually effective at developing the social skills that are useful in adulthood?
I have some doubts.
- Comment on By Default, Signal Doesn't Recall 4 weeks ago:
Just ask Mike Waltz!
- Comment on Mario Kart 64 got finally decompiled! 5 weeks ago:
Some people struggle with the difference between arguing about descriptive statements, about what things are, and arguing about normative statements, about what things should be. And these topics are nuanced.
Decompiling to learn functionality is fair use (because like I said in my previous comment, functionality can’t be copyrighted), but actually using and redistributing code (whether the original source code, the compiled binary derived from the source code, or decompiled code derived from the binary) is pretty risky from a legal standpoint. I’d advise against trying to build a business around the practice.
- Comment on Mario Kart 64 got finally decompiled! 1 month ago:
Yeah, that’s why all the IBM clones had to write their BIOS firmware as clean-room implementations: new code that replicated the functionality described in IBM’s own documentation.
Functionality can’t be copyrighted, but code can be. So the easiest way to prove that you made something without the copyrighted code is to mimic the functionality through your own implementation, not by transforming the existing copyrighted code, through decompilation or anything like that.
- Comment on 1 month ago:
Plus domains should’ve gone left to right in terms of root, tld, domain, subdomain, etc., instead of right to left.
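As a toy illustration of what that root-first ordering would look like:

```python
def root_first(hostname: str) -> str:
    """Rewrite a hostname so it reads root/TLD first: www.example.com -> com.example.www."""
    return ".".join(reversed(hostname.split(".")))

print(root_first("www.example.com"))  # com.example.www
print(root_first("news.bbc.co.uk"))   # uk.co.bbc.news
```

It’s the same flip you already see in reverse-DNS style package names like com.example.app.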
- Comment on In heat 2 months ago:
Longer queries give better opportunities for error correction, like searching for synonyms and misspellings, or applying the right context clues.
In this specific example, “is Angelina Jolie in Heat” gives better results than “Angelina Jolie heat,” because the words that make it a complete sentence question are also the words that give confirmation that the searcher is talking about the movie.
Especially with negative results, like when you ask a question where the answer is no, the semantic links in the index can sometimes get the search engine to point out a specific mistaken assumption you’ve made.
- Comment on In heat 2 months ago:
Why do people Google questions anyway?
Because it gives better responses.
Google and all the other major search engines have built in functionality to perform natural language processing on the user’s query and the text in its index to perform a search more precisely aligned with the user’s desired results, or to recommend related searches.
If the functionality is there, why wouldn’t we use it?
- Comment on In heat 2 months ago:
Search engine algorithms are way better than in the 90s and early 2000s when it was naive keyword search completely unweighted by word order in the search string.
So the old tricks of typing the bare minimum for the most precise search behavior no longer apply the same way. Now a search for two words will add weight to results that contain the two words as a phrase, plus some weight for the two words appearing close together in the same sentence, while still matching each individual word on its own, too.
More importantly, when a single word has multiple meanings, the search engines all use the rest of the search as an indicator of which meaning the searcher means. “Heat” is a really broad word with lots of meanings, and the rest of the search can help inform the algorithm of what the user intends.
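A toy sketch of that kind of weighting; the signals and weights here are invented for illustration and aren’t any real engine’s algorithm:

```python
def score(doc: str, query: str, window: int = 5) -> float:
    """Toy relevance score: exact phrase > all terms close together > individual term hits."""
    words = doc.lower().split()
    terms = query.lower().split()
    s = sum(1.0 for t in terms if t in words)  # each individual term found
    positions = [i for i, w in enumerate(words) if w in terms]
    if len(set(terms) & set(words)) == len(terms) and max(positions) - min(positions) <= window:
        s += 2.0  # all terms appear near each other
    if " ".join(terms) in " ".join(words):
        s += 4.0  # the exact phrase appears
    return s

docs = [
    "angelina jolie filmography and awards",       # exact phrase
    "jolie interview: angelina on her next role",  # both terms, close together
    "summer heat wave forecast",                   # neither term
]
for d in docs:
    print(f"{score(d, 'angelina jolie'):>3.1f}  {d}")
```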
- Comment on Did Trump’s Scientific Advisor Admit That The US Possesses Space And Time Manipulation Tech? Internet Is Wilding 2 months ago:
I can travel forward in time at a rate of 60 seconds per minute, and I think the US government can, too.
- Comment on Meta’s AI research lab is ‘dying a slow death,’ some insiders say. Meta prefers to call it ‘a new beginning’ 2 months ago:
I think back to the late ’90s investment in rolling out a shitload of telecom infrastructure, with a bunch of telecom companies building out lots and lots of fiber. And perhaps more important than the physical fiber were the poles and conduits and other physical infrastructure housing that fiber, so that it could be upgraded as each generation of tech was released.
Then, in the early 2000s, that industry crashed. Nobody could make their loan payments on the things they paid billions to build, and it wasn’t profitable to charge people for the use of those assets while paying interest on the money borrowed to build them, especially after the dot-com crash, when the internet startups no longer had unlimited budgets to throw at them.
So thousands of telecom companies went into bankruptcy and sold off their assets. Those fiber links and routes still existed, but nobody turned them on. Google quietly acquired a bunch of “dark fiber” in the 2000s.
When the cloud revolution happened in the late 2000s and early 2010s, the telecom infrastructure was ready for it. The companies that built that stuff weren’t still around, but the stuff they built finally became useful. Not at the prices paid for it, but when purchased in a fire sale, those assets could be profitable again.
That might happen with AI. Early movers overinvest and fail, leaving what they’ve developed to be used by whoever survives. Maybe the tech never becomes worth what was paid for it, but once it’s built, whoever buys it cheap might be able to profit at that lower price, and it might prove useful in a more modest, realistic scope.
- Comment on Meta’s AI research lab is ‘dying a slow death,’ some insiders say. Meta prefers to call it ‘a new beginning’ 2 months ago:
For example, as a coding assistant, a lot of people quite like them. But as a replacement for a human coder, they’re a disaster.
New technology is best when it can meaningfully improve the productivity of a group of people so that the group can shrink. The technology doesn’t take any one identifiable job, but now an organization of 10 people, properly organized in a way conscious of that technology’s capabilities and limitations, can do what used to require 12.
A forklift and a bunch of pallets can make a warehouse more efficient, when everyone who works in that warehouse knows how the forklift is best used, even when not everyone is a forklift operator themselves.
Same with a white collar office where there’s less need for people physically scheduling things and taking messages, because everyone knows how to use an electronic calendar and email system for coordinating those things. There might still be need for pooled assistants and secretaries, but maybe not as many in any given office as before.
So when we want an LLM to chip in and reduce the amount of time a group of programmers needs to put out a product, the manager of that team, and all the members of that team, need to have a good sense of what that LLM is good at and what it isn’t. Autocomplete has obviously been a productivity enhancer since long before LLMs were around, and extensions of that general concept may be helpful for the more tedious or repetitive tasks, but any team that uses it will need to use it with full knowledge of its limitations and of where it best supplements the humans’ own tasks.
I have no doubt that some things will improve and people will find workflows that leverage the strengths while avoiding the weaknesses. But it remains to be seen whether it’ll be worth the sheer amount of cost spent so far.
- Comment on The IP Laws That Stop Disenshittification. 2 months ago:
You cant remove that double negative without making it incorrect
Sure you can: The IP Laws That Reinforce Enshittification.
- Comment on Google give a 71% discount to US federal agencies for Workspace, as it looks to capitalize on the Trump administration's cost-cutting push. 2 months ago:
I’m pretty sure every federal executive agency has been on Active Directory and Exchange for like 20+ years now. The courts migrated off of IBM Domino/Notes about 6 or 7 years ago, onto MS Exchange/Outlook.
What we used when I was there 20 years ago was vastly more secure because we rolled our own encryption
Uh, that’s now understood not to be best practice, because rolling your own encryption tends to end up quite insecure.
Either way, Microsoft’s enterprise ecosystem is pretty much the default in large organizations, and they have (for better or for worse) convinced almost everyone that the total cost of ownership is lower for MS-administered cloud services than for any kind of non-MS system for identity/user management, email, calendar, video chat, and instant messaging. Throwing in Word/Excel/PowerPoint is just icing on the cake.
- Comment on Framework temporarily pausing some laptop sales in the US due to tariffs 2 months ago:
They were largely unaffected by the tariffs targeting China, because US trade policy distinguishes between mainland China and Taiwan. Problem was that Trump announced huge tariffs on everyone, including a 32% tariff on Taiwan.