Morphit
@Morphit@feddit.uk
- Comment on There you go little guy 1 month ago:
- Comment on Firebrick thermal energy storage could reach 170 GW in the U.S. by 2050 1 month ago:
They’re not converting it back into electricity, this is for industrial process heat. They have 100 units of electrical energy and 98 units go into whatever the industry needs to heat.
Lots of industries use ovens, kilns or furnaces, mostly fueled by gas at the moment. Using electricity would be very expensive unless they can timeshift usage and get low spot prices. Since they need heat anyway, thermal storage is pretty cheap and efficient.
- Comment on Firebrick thermal energy storage could reach 170 GW in the U.S. by 2050 1 month ago:
It’s heat though. They’re turning electricity into heat then moving that heat to where it’s needed, when it’s needed. Making heat from electricity is nearly 100% efficient, and pumping losses for moving fluids are going to be tiny compared to the amount of heat they can move. They quote the heat loss in storage separately as 1% per day. It seems reasonable.
- Comment on Carrots help you hear better. 2 months ago:
Why a spoon, cousin? Why not an axe?
- Comment on Google pulls the plug on uBlock Origin, leaving over 30 million Chrome users susceptible to intrusive ads 2 months ago:
I think what you want is in Firefox nightly right now: …mozilla.org/…/firefox-sidebar-and-vertical-tabs-…
That expands and compacts based on the sidebar state and can be flipped to the right side of the window in the ‘customise sidebar’ settings.
- Comment on Google pulls the plug on uBlock Origin, leaving over 30 million Chrome users susceptible to intrusive ads 2 months ago:
1xx: hold on
2xx: here you go
3xx: go away
4xx: you fucked up
5xx: I fucked up
6xx: Google fucked up
- Comment on The Elon / Trump interview on X started with an immediate tech disaster 2 months ago:
- Comment on JPEG is Dying - And that's a bad thing | 2kliksphilip 3 months ago:
A balance has to be struck. The alternative isn’t not getting anything better, it’s being sure the benefits are worth the costs. The comment was “Why is [adding another decoder] a negative?” There is a cost to it, and while most people don’t think about this stuff, someone does.
The floppy code was destined to be removed from Linux because no one wanted to maintain it and it had such a small user base. Fortunately I think some people stepped up to look after it but that could have made preserving old software significantly harder.
If image formats get abandoned, browsers are going to face hard decisions as to whether to drop support. There has to be some push-back to over-proliferation of formats or we could be in a worse position than now, where there are only two or three viable browser alternatives that can keep up with the churn of web technologies.
- Comment on JPEG is Dying - And that's a bad thing | 2kliksphilip 3 months ago:
I mean, the comic is even in the OP. The whole point is that AVIF is already out there, like it or not. I’m not happy about Google setting the standards but that has to be supported. Does JPEGXL cross the line where it’s really worth adding in addition to AVIF? It’s easy to say yes when you’re not the one supporting it.
- Comment on The Deep Sea 3 months ago:
Yeah, tineye doesn’t find any matches for it but does for all the others.
The backlight could be sunlight, but then it wouldn’t be deep-sea. It could be another submersible with a light, but I don’t know why two would dive together. The bokeh looks pretty weird also. I think it’s AI.
- Comment on Only Honk 3 months ago:
- Comment on JPEG is Dying - And that's a bad thing | 2kliksphilip 3 months ago:
Adding more decoders means more overhead in code size, project dependencies, maintenance and developer bandwidth, plus a higher potential for security vulnerabilities.
- Comment on the lamarcube 3 months ago:
Then I guess you can fit about 500 tons of giraffe in it.
- Comment on the lamarcube 3 months ago:
If it’s a cube, I’d have questions before they got to 8m.
If it’s 1m², but 500m tall, I’d have … different questions.
- Comment on Deleted GitHub data is forever accessible to anyone, researchers claim | Cybernews 3 months ago:
I guess the funny thing is that each Git commit is internally just a file. Branches and tags are just links to specific commit files and of course commits link to their parents. If a branch gets deleted or jumped back to a previous commit, the orphaned commits are still left in the filesystem. Various Git actions can trigger a garbage collection, but unless you generate huge diffs, they usually stick around for a really long time. Determining if a commit is orphaned is work that Git usually doesn’t bother doing. There’s also a reflog that can let you recover lost commits if you make a mistake.
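A quick sketch of that content-addressing in Python (simplified; a real commit body also carries tree, parent, author and committer lines, but the naming scheme is the same):

```python
import hashlib

def git_object_id(obj_type: str, body: bytes) -> str:
    # Git hashes "<type> <size>\0<content>" with SHA-1 and uses the digest
    # as the object's filename under .git/objects/ - so an orphaned commit
    # is just a file nothing points at until garbage collection runs.
    header = f"{obj_type} {len(body)}\0".encode()
    return hashlib.sha1(header + body).hexdigest()

# The well-known ID of the empty blob, same in every Git repo:
assert git_object_id("blob", b"") == "e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"
```

Since the name is derived purely from the content, deleting a branch only removes the reference, not the object file itself.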
- Comment on Deleted GitHub data is forever accessible to anyone, researchers claim | Cybernews 3 months ago:
I think Github keeps all the commits of forks in a single pool. So if someone commits a secret to one fork, that commit could be looked up in any of them, even if the one that was committed to was private/is deleted/no references exist to the commit.
The big issue is discovery. If no-one has pulled the leaky commit onto a fork, then the only way to access it is to guess the commit hash. Github makes this easier for you:
What’s more, Ayrey explained, you don’t even need the full identifying hash to access the commit. “If you know the first four characters of the identifier, GitHub will almost auto-complete the rest of the identifier for you,” he said, noting that with just sixty-five thousand possible combinations for those characters, that’s a small enough number to test all the possibilities.
I think all GitHub should do is prune orphaned commits from the auto-suggestion list. If someone grabbed the complete commit ID then they probably grabbed the content already anyway.
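For scale, enumerating every four-character hex prefix is trivial (just the counting argument from the quote; how GitHub’s suggestion endpoint would actually be queried isn’t shown here):

```python
from itertools import product

# 16 hex digits, 4 positions: 16**4 = 65536 candidate prefixes,
# the "sixty-five thousand possible combinations" from the article.
prefixes = ["".join(p) for p in product("0123456789abcdef", repeat=4)]
print(len(prefixes))  # 65536
```

At a few requests per second that search space disappears in hours, which is why pruning orphaned commits from the suggestion list matters.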
- Comment on Data from deleted GitHub repos may not really be deleted 3 months ago:
Ah - actually reading the article reveals why this is an issue:
What’s more, Ayrey explained, you don’t even need the full identifying hash to access the commit. “If you know the first four characters of the identifier, GitHub will almost auto-complete the rest of the identifier for you,” he said, noting that with just sixty-five thousand possible combinations for those characters, that’s a small enough number to test all the possibilities.
So enumerating all the orphan commits wouldn’t be that hard.
In any case if a secret has been publicly disclosed, you should always assume it’s still out there. For sure, rotate your keys.
- Comment on Data from deleted GitHub repos may not really be deleted 3 months ago:
Well, sort of. GitHub certainly could refuse to render orphan commits. They pop up a banner saying so but I don’t see why they should show the commit at all. They could still keep the data until it’s garbage collected since a user might re-upload the commit in a new branch.
This seems like a non-issue though since someone who hasn’t already seen the disclosed information would need to somehow determine the hash of the deleted commit.
- Comment on Ancient CRT monitor hits astonishing 700Hz — resolution reduced to just 120p to reach extraordinary refresh rate 3 months ago:
What do you mean? The shadow mask ensures the gun for each colour can only hit the phosphors of that colour. How would a lower resolution change that?
- Comment on Microsoft points finger at the EU for not being able to lock down Windows 3 months ago:
As far as we know, the input was a file filled with zeroes
CrowdStrike have said that was not the problem:
This is not related to null bytes contained within Channel File 291 or any other Channel File.
That said, their preliminary incident review doesn’t give us much to go on as to what was wrong with the file.
You’re speculating that it was something easy to test for by a third party. It certainly could have been but I would hope it’s a more subtle bug which, as you say, can’t be exhaustively tested for. Source code analysis definitely would have surfaced this bug so either they didn’t bother looking or didn’t bother fixing it.
- Comment on Microsoft points finger at the EU for not being able to lock down Windows 3 months ago:
How would you prove that no input exists that could crash a piece of code? The potential search space is enormous. Microsoft can’t prevent drivers from accepting external input, so there’s always a risk that something could trigger an undetected error in the code. Microsoft certainly ought to be fuzz testing drivers it certifies, but that will only catch the low-hanging fruit. Unless they can see the source code, it’s hard to determine for sure that there are no memory safety bugs.
The driver developers are the ones with the source code and should have been using analysis tools to find these kinds of memory safety errors. Or they could have written it in a memory safe language like Rust.
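A toy version of that fuzzing idea (nothing like a real fuzzer such as AFL or libFuzzer, just the shape of it; the parser and its bug are made up):

```python
import random

def fragile_parser(data: bytes) -> int:
    # Hypothetical parser with a lurking bug: crashes on empty input.
    return data[0]  # raises IndexError when data is empty

def fuzz(parser, iterations: int = 1000, seed: int = 0) -> list[bytes]:
    # Throw short random inputs at the parser and collect any that raise.
    rng = random.Random(seed)
    crashers = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(4)))
        try:
            parser(data)
        except Exception:
            crashers.append(data)
    return crashers

assert b"" in fuzz(fragile_parser)  # the trivial crasher is found quickly
```

A shallow bug like this falls out almost immediately; a crash that needs a precisely structured multi-kilobyte config file generally won’t, which is the point about low-hanging fruit.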
- Comment on To what extent, if at all, would have CrowdStrike's faulty update have been made easier to deal with with an immutable distro? 3 months ago:
It should be relatively straightforward to script the recovery of cloud VM images (even without snapshots). Good luck getting the unwashed masses to follow a script to manually enter recovery mode and delete files in a critical area of the OS.
- Comment on To what extent, if at all, would have CrowdStrike's faulty update have been made easier to deal with with an immutable distro? 3 months ago:
How does Falcon store these channel files on Linux? I don’t know how an immutable distro would handle this given CrowdStrike push several of these updates per day and presumably use their own infrastructure to deploy them.
I guess if you pay them enough they could customize the deployment to work with whatever infrastructure you have but it’s all proprietary so I have no idea if they’re really doing that anywhere.
- Comment on CrowdStrike downtime apparently caused by update that replaced a file with 42kb of zeroes 3 months ago:
IFERROR(;0)
Maybe they should use a more appropriate development tool for their critical security platform than Excel.
- Comment on CrowdStrike downtime apparently caused by update that replaced a file with 42kb of zeroes 3 months ago:
This error isn’t an intentional crash in response to a security risk, though that could happen. It’s a null pointer exception, and no static or runtime check was in place that could have prevented it or handled it more gracefully. This was presumably a long-standing bug in the driver, and a faulty config file then came along and triggered the crashes. Better static analysis and testing of the kernel driver is one aspect; how these live config updates are deployed and monitored is another.
- Comment on CrowdStrike downtime apparently caused by update that replaced a file with 42kb of zeroes 3 months ago:
You can still catch the error at runtime and do something appropriate. That might mean refusing to boot on the grounds that the update may have been tampered with, but more likely it would be to send an error report back to the developers, noting that an unexpected condition was hit, and continue without loading that one faulty definition file.
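As a sketch of that “skip the bad file and report it” approach (the filenames, the all-zeroes check and the parse step here are all made up for illustration, not CrowdStrike’s actual format):

```python
def parse_definition(raw: bytes) -> dict:
    # Hypothetical validation: reject an empty or all-zeroes file
    # instead of blindly dereferencing whatever it decodes to.
    if not raw.strip(b"\x00"):
        raise ValueError("definition file is empty or all zeroes")
    return {"rule": raw}

def load_definitions(files: dict[str, bytes]) -> tuple[list[dict], list[str]]:
    loaded, skipped = [], []
    for name, raw in files.items():
        try:
            loaded.append(parse_definition(raw))
        except ValueError:
            skipped.append(name)  # report back to the vendor, don't crash the host
    return loaded, skipped

rules, bad = load_definitions({"c-290.sys": b"rule-data", "c-291.sys": b"\x00" * 42})
assert bad == ["c-291.sys"] and len(rules) == 1
```

The machine keeps running with one definition missing rather than bootlooping, and the skipped-file list gives the vendor the telemetry to fix the bad update.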
- Comment on CrowdStrike downtime apparently caused by update that replaced a file with 42kb of zeroes 3 months ago:
The driver is in kernel mode. If it crashes, the kernel has no idea if any internal structures have been left in an inconsistent state. If it doesn’t halt then it has the potential to cause all sorts of damage.
- Comment on CrowdStrike downtime apparently caused by update that replaced a file with 42kb of zeroes 3 months ago:
I don’t think the kernel could continue like that. The driver runs in kernel mode and took a null pointer exception. The kernel can’t know how badly it’s been screwed by that, the only feasible option is to BSOD.
The driver itself is where the error handling should take place. First off it ought to have static checks to prove it can’t have trivial memory errors like this. Secondly, if a configuration file fails to load, it should make a determination about whether it’s safe to continue or halt the system to prevent a potential exploit. You know, instead of shitting its pants and letting Windows handle it.
- Comment on Happy International Blue Screen Day 3 months ago:
This doesn’t really answer my question, but CrowdStrike do explain a bit here: crowdstrike.com/…/technical-details-on-todays-out…
These channel files are configuration for the driver and are pushed several times a day. It seems the driver can take a page fault if certain conditions are met. A mistake in a config file triggered this condition and put a lot of machines into a BSOD bootloop.
I think it makes sense that this was a preexisting bug in the driver which was triggered by an erroneous config. What I still don’t know is if these channel updates have a staged deployment (presumably driver updates do), and what fraction of machines that got the bad update actually had a BSOD.
Anyway, they should rewrite it in Rust.
- Comment on Happy International Blue Screen Day 3 months ago:
Does anyone know how these CrowdStrike updates are actually deployed? Presumably the software has its own update mechanism to react to emergent threats without waiting for Patch Tuesday. Can users control the update policy for these ‘channel files’ themselves?