Morphit
@Morphit@feddit.uk
- Comment on TURKEY POWER 5 weeks ago:
The Phénix reactor shut down in 2009, so I think that was the end of France’s breeder reactors. India, China and Russia have operating breeder reactors.
Breeding from non-fissile material is different to reprocessing though. Reprocessing is a chemical process, not a nuclear one. The UK had an operational reprocessing capability - though it is being decommissioned now because it wasn’t cost effective with such a small fleet. Japan is still trying to bring its reprocessing plant online (after years of trouble). However France is doing it routinely for their domestic fleet and some foreign reactors IIRC. The USA made reprocessing illegal back in 1977 due to proliferation concerns. Despite that ban being repealed, they haven’t set up the regulatory infrastructure to be able to do it so no one has bothered. Maybe the new nuclear industry will shake that up a bit.
- Comment on TURKEY POWER 5 weeks ago:
They’re over by a factor of 6 which would add up to 21 hours, not 24. I don’t know what they’ve done to get 2.5 million, it should be 417 thousand with those numbers.
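A quick sanity check of that arithmetic (assuming the 3.5-hour cooking time and 2.5 million figure from the rest of the thread):

```python
# If the original figures are inflated by a factor of 6:
hours = 3.5
print(hours * 6)             # 21.0 hours, not 24
print(round(2_500_000 / 6))  # 416667, i.e. ~417 thousand
```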
- Comment on TURKEY POWER 5 weeks ago:
1500 cubic meters
Did you really pick the figure from the RBMK reactor type?
For PWRs, 250 m³ of LILW per GW-year works out to 28.5 m³ of LILW per TWh.
2.5 million turkeys in a 2.4 kW oven for 3.5 hours uses 0.021 TWh.
So 2.5 million turkeys generate about 0.6 m³ of low- and intermediate-level waste in total. Most of this can be released after ~300 years with negligible activity over natural background. That is a long time, but not “basically forever”.
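The numbers above can be checked in a few lines (1 GW-year ≈ 8.766 TWh):

```python
# Waste intensity: 250 m³ of LILW per GW-year for PWRs
gw_year_twh = 8.766                      # 1 GW running for one year, in TWh
waste_per_twh = 250 / gw_year_twh        # ≈ 28.5 m³ per TWh

# Energy to roast 2.5 million turkeys at 2.4 kW for 3.5 h
energy_twh = 2_500_000 * 2.4e3 * 3.5 / 1e12   # W·h converted to TWh

waste_m3 = energy_twh * waste_per_twh
print(round(energy_twh, 3), round(waste_m3, 2))  # 0.021 TWh, ~0.6 m³
```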
- Comment on TURKEY POWER 5 weeks ago:
They’re talking about recycling the fuel and putting it back into the reactors. Unfortunately it’s cheaper to mine fresh fuel than to reprocess used fuel … as long as you just ignore the waste problem.
- Comment on TURKEY POWER 5 weeks ago:
No permanent storage location for the waste has been found, to date.
to burn the unburned fuel you would have to breed the material
France reprocesses spent fuel. With increased scale it would be cheaper and cut down on the volume of waste that must be dealt with regardless of if there’s a nuclear industry in the future.
- Comment on brains! 1 month ago:
All the way through?
- Comment on There you go little guy 2 months ago:
- Comment on Firebrick thermal energy storage could reach 170 GW in the U.S. by 2050 3 months ago:
They’re not converting it back into electricity, this is for industrial process heat. They have 100 units of electrical energy and 98 units go into whatever the industry needs to heat.
Lots of industries use ovens, kilns or furnaces, mostly fuelled by gas at the moment. Using electricity would be very expensive unless they can time-shift usage and get low spot prices. Since they need heat anyway, thermal storage is pretty cheap and efficient.
- Comment on Firebrick thermal energy storage could reach 170 GW in the U.S. by 2050 3 months ago:
It’s heat though. They’re turning electricity into heat then moving that heat to where it’s needed, when it’s needed. Making heat from electricity is nearly 100% efficient, and pumping losses for moving fluids are going to be tiny compared to the amount of heat they can move. They quote the heat loss in storage separately as 1% per day. It seems reasonable.
- Comment on Carrots help you hear better. 3 months ago:
Why a spoon, cousin? Why not an axe?
- Comment on Google pulls the plug on uBlock Origin, leaving over 30 million Chrome users susceptible to intrusive ads 4 months ago:
I think what you want is in Firefox nightly right now: …mozilla.org/…/firefox-sidebar-and-vertical-tabs-…
That expands and compacts based on the sidebar state and can be flipped to the right side of the window in the ‘customise sidebar’ settings.
- Comment on Google pulls the plug on uBlock Origin, leaving over 30 million Chrome users susceptible to intrusive ads 4 months ago:
1xx: hold on
2xx: here you go
3xx: go away
4xx: you fucked up
5xx: I fucked up
6xx: Google fucked up
- Comment on The Elon / Trump interview on X started with an immediate tech disaster 4 months ago:
- Comment on JPEG is Dying - And that's a bad thing | 2kliksphilip 4 months ago:
A balance has to be struck. The alternative isn’t not getting anything better, it’s being sure the benefits are worth the costs. The comment was “Why is [adding another decoder] a negative?” There is a cost to it, and while most people don’t think about this stuff, someone does.
The floppy code was destined to be removed from Linux because no one wanted to maintain it and it had such a small user base. Fortunately I think some people stepped up to look after it, but losing it could have made preserving old software significantly harder.
If image formats get abandoned, browsers are going to face hard decisions as to whether to drop support. There has to be some push-back to over-proliferation of formats or we could be in a worse position than now, where there are only two or three viable browser alternatives that can keep up with the churn of web technologies.
- Comment on JPEG is Dying - And that's a bad thing | 2kliksphilip 4 months ago:
I mean, the comic is even in the OP. The whole point is that AVIF is already out there, like it or not. I’m not happy about Google setting the standards, but that has to be supported. Does JPEGXL cross the line where it’s really worth adding in addition to AVIF? It’s easy to say yes when you’re not the one supporting it.
- Comment on The Deep Sea 4 months ago:
Yeah, tineye doesn’t find any matches for it but does for all the others.
The backlight could be sunlight, but then it wouldn’t be deep-sea. It could be another submersible with a light, but I don’t know why two would dive together. The bokeh looks pretty weird also. I think it’s AI.
- Comment on Only Honk 4 months ago:
- Comment on JPEG is Dying - And that's a bad thing | 2kliksphilip 4 months ago:
Adding more decoders means more overhead in code size, project dependencies, maintenance, developer bandwidth, and a higher potential for security vulnerabilities.
- Comment on the lamarcube 4 months ago:
Then I guess you can fit about 500 tons of giraffe in it.
- Comment on the lamarcube 4 months ago:
If it’s a cube, I’d have questions before they got to 8m.
If it’s 1 m², but 500 m tall, I’d have … different questions.
- Comment on Deleted GitHub data is forever accessible to anyone, researchers claim | Cybernews 5 months ago:
I guess the funny thing is that each Git commit is internally just a file. Branches and tags are just links to specific commit files and of course commits link to their parents. If a branch gets deleted or jumped back to a previous commit, the orphaned commits are still left in the filesystem. Various Git actions can trigger a garbage collection, but unless you generate huge diffs, they usually stick around for a really long time. Determining if a commit is orphaned is work that Git usually doesn’t bother doing. There’s also a reflog that can let you recover lost commits if you make a mistake.
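The “commits are just files” point is easy to see directly: Git object IDs are the SHA-1 of a small type-and-size header plus the content, so you can reproduce what `git hash-object` does in a few lines (a blob is shown here; commit objects are hashed the same way):

```python
import hashlib

def git_object_id(obj_type: str, content: bytes) -> str:
    """Compute a Git object ID: SHA-1 over '<type> <size>\\0' + content."""
    header = f"{obj_type} {len(content)}".encode() + b"\x00"
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo 'hello world' | git hash-object --stdin`
print(git_object_id("blob", b"hello world\n"))
# 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```

The object lives on disk under `.git/objects/3b/18e512…` whether or not any branch still points at it, which is why orphaned commits linger until garbage collection.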
- Comment on Deleted GitHub data is forever accessible to anyone, researchers claim | Cybernews 5 months ago:
I think Github keeps all the commits of forks in a single pool. So if someone commits a secret to one fork, that commit could be looked up in any of them, even if the one that was committed to was private/is deleted/no references exist to the commit.
The big issue is discovery. If no-one has pulled the leaky commit onto a fork, then the only way to access it is to guess the commit hash. Github makes this easier for you:
What’s more, Ayrey explained, you don’t even need the full identifying hash to access the commit. “If you know the first four characters of the identifier, GitHub will almost auto-complete the rest of the identifier for you,” he said, noting that with just sixty-five thousand possible combinations for those characters, that’s a small enough number to test all the possibilities.
I think all GitHub should do is prune orphaned commits from the auto-suggestion list. If someone grabbed the complete commit ID then they probably grabbed the content already anyway.
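The “sixty-five thousand” figure in the quote checks out: commit IDs are hexadecimal, so a four-character prefix has 16⁴ possible values.

```python
# Number of distinct 4-character hex prefixes of a commit hash
print(16 ** 4)  # 65536
```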
- Comment on Data from deleted GitHub repos may not really be deleted 5 months ago:
Ah, actually reading the article reveals why this is an issue:
What’s more, Ayrey explained, you don’t even need the full identifying hash to access the commit. “If you know the first four characters of the identifier, GitHub will almost auto-complete the rest of the identifier for you,” he said, noting that with just sixty-five thousand possible combinations for those characters, that’s a small enough number to test all the possibilities.
So enumerating all the orphan commits wouldn’t be that hard.
In any case if a secret has been publicly disclosed, you should always assume it’s still out there. For sure, rotate your keys.
- Comment on Data from deleted GitHub repos may not really be deleted 5 months ago:
Well, sort of. GitHub certainly could refuse to render orphan commits. They pop up a banner saying so but I don’t see why they should show the commit at all. They could still keep the data until it’s garbage collected since a user might re-upload the commit in a new branch.
This seems like a non-issue though since someone who hasn’t already seen the disclosed information would need to somehow determine the hash of the deleted commit.
- Comment on Ancient CRT monitor hits astonishing 700Hz — resolution reduced to just 120p to reach extraordinary refresh rate 5 months ago:
What do you mean? The shadow mask ensures the gun for each colour can only hit the phosphors of that colour. How would a lower resolution change that?
- Comment on Microsoft points finger at the EU for not being able to lock down Windows 5 months ago:
As far as we know, the input was a file filled with zeroes
CrowdStrike have said that was not the problem:
This is not related to null bytes contained within Channel File 291 or any other Channel File.
That said, their preliminary incident review doesn’t give us much to go on as to what was wrong with the file.
You’re speculating that it was something easy to test for by a third party. It certainly could have been but I would hope it’s a more subtle bug which, as you say, can’t be exhaustively tested for. Source code analysis definitely would have surfaced this bug so either they didn’t bother looking or didn’t bother fixing it.
- Comment on Microsoft points finger at the EU for not being able to lock down Windows 5 months ago:
How would you prove that no input exists that could crash a piece of code? The potential search space is enormous. Microsoft can’t prevent drivers from accepting external input, so there’s always a risk that something could trigger an undetected error in the code. Microsoft certainly ought to be fuzz testing drivers it certifies but that will only catch low hanging fruit. Unless they can see the source code, it’s hard to determine for sure that there are no memory safety bugs.
The driver developers are the ones with the source code and should have been using analysis tools to find these kinds of memory safety errors. Or they could have written it in a memory safe language like Rust.
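A toy sketch of the fuzzing idea (the names `fragile_parser` and `fuzz` are hypothetical stand-ins, not anything from CrowdStrike or Microsoft): throw random inputs at the code and record which ones crash it. A bug that triggers this often is exactly the low-hanging fruit fuzzing catches; a crash needing one precise input would likely survive the same test budget.

```python
import random

def fragile_parser(data: bytes) -> int:
    """Stand-in for a driver's input parser with an out-of-bounds read."""
    if len(data) > 3 and data[0] % 4 == 0:
        return data[999]        # out-of-range index -> IndexError
    return len(data)

def fuzz(target, runs: int = 10_000, seed: int = 0) -> list[bytes]:
    """Feed random byte strings to `target`, collecting inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

crashing_inputs = fuzz(fragile_parser)
print(len(crashing_inputs) > 0)   # the frequent bug is found quickly
```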
- Comment on To what extent, if at all, would have CrowdStrike's faulty update have been made easier to deal with with an immutable distro? 5 months ago:
It should be relatively straightforward to script the recovery of cloud VM images (even without snapshots). Good luck getting the unwashed masses to follow a script to manually enter recovery mode and delete files in a critical area of the OS.
- Comment on To what extent, if at all, would have CrowdStrike's faulty update have been made easier to deal with with an immutable distro? 5 months ago:
How does Falcon store these channel files on Linux? I don’t know how an immutable distro would handle this given CrowdStrike push several of these updates per day and presumably use their own infrastructure to deploy them.
I guess if you pay them enough they could customize the deployment to work with whatever infrastructure you have but it’s all proprietary so I have no idea if they’re really doing that anywhere.
- Comment on CrowdStrike downtime apparently caused by update that replaced a file with 42kb of zeroes 5 months ago:
IFERROR(;0)
Maybe they should use a more appropriate development tool for their critical security platform than Excel.