Kazumara
@Kazumara@discuss.tchncs.de
- Comment on Galaxy S10 til the wheels come off 4 days ago:
That’s exactly why I ended up going the used Pixel 6 + adapter route instead :-)
- Comment on Galaxy S10 til the wheels come off 4 days ago:
Yeah, I only found out when buying my own adapter after getting a used Pixel 6. Luckily I saw it in time before ordering, so I thought I’d share it forward.
- Comment on Automation 4 days ago:
No, they can’t
- Comment on Galaxy S10 til the wheels come off 4 days ago:
There are two kinds
- the purely analogue kind, which just connects some of the USB pins to the jack
- the digital kind, which contains a DAC
Not all phones have the internal wiring from their internal DAC to the USB port that the analogue type of adapter needs, so watch out what you buy if you follow SomeGuy69’s advice.
- Comment on Automation 5 days ago:
All possible moves one step from a given position sure.
But if you then take all possible resulting positions and calculate all moves from there, then take all possible positions after that second move and calculate all possible third moves, and so on, the number of possibilities explodes so quickly that you can’t calculate them all anymore. That’s the exponential part I was referring to.
You can estimate it roughly. Let’s say you’re somewhere in the middle of the game and there are 12 units of each side still alive. About half are pawns, so we take 1.2 possible moves for them; for the others, let’s say around 8 each, which is a bit much for knights and the king on average, but probably a bit low for other units. So 6 times 8 plus 6 times 1.2, let’s call it 55 possibilities. After the first move there are 55 possible positions; for the second you have to consider all of them and their new possibilities, so there are 55 times 55 or 3025; for the third that’s 166,375; then 9.15 million, 500 million, 27.6 billion, 1.5 trillion, etc. That last one was only 7 moves into the future. Most games won’t be finished by then from a given position, so you either need a scoring function or you’re running out of time.
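The growth is easy to reproduce; a tiny sketch assuming a flat branching factor of 55 (the rough estimate above, which is of course not exact for real games):

```python
# Rough growth of the chess search space, assuming a constant
# branching factor of 55 (the back-of-the-envelope estimate above).
branching = 6 * 8 + 6 * 1.2  # ~55, as estimated in the text

positions = 1
for ply in range(1, 8):
    positions *= 55
    print(f"after move {ply}: {positions:,} positions")
# the 7th iteration prints roughly 1.5 trillion positions
```

Seven plies in, you are already past a trillion positions, which is why engines cut the search off at some depth and score whatever position they reach there.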
- Comment on Even Apple finally admits that 8GB RAM isn't enough 5 days ago:
Okay good, thanks for confirming. I remember Kate feeling very nice to use during my studies, more responsive than VS Code or Eclipse. But I also had 16 gigabytes of RAM, so I couldn’t be sure.
- Comment on Even Apple finally admits that 8GB RAM isn't enough 6 days ago:
The lede by OP here contains this:
a feature called Predictive Code Completion. Unfortunately, if you bought into Apple’s claim that 8GB of unified memory was enough for base-model Apple silicon Macs, you won’t be able to use it
So either RecluseRamble meant that development with a predictive code completion feature would work on 8 GB of RAM if you were using Linux, or his comparison was shit.
- Comment on Automation 6 days ago:
You don’t use it for the rule-set and allowable moves, but to score board positions.
For a chess computer, calculating all possible moves until the end of the game is not possible in the given time, because the number of potential moves grows exponentially with each further move. So you need to look at only a few, and try to reject bad ones early, so that you only calculate further along promising paths.
That means you need to be able to say which board position is better and which is worse. It’s complex to determine, in general, whether one position is better than another. Of course it is, otherwise everyone would just play the “good” positions, and chess would be boring like solved games, e.g. Tic-Tac-Toe.
Now, to have your chess computer estimate board positions, you can construct tons of rules and heuristics with expert knowledge and hope they assign sensible values to positions. People do this. But you can also hope that there are machine-learnable patterns in the data, which you can discover by feeding historical games and the information on who won into an ML model. People do this too. I think both are fair approaches in this instance.
- Comment on 'One of the wildest Finnish startup says it can speed up any CPU by 100x using a tiny piece of hardware with no recoding 6 days ago:
The TechRadar article is terrible, the TechCrunch article is better, and the Flow website has some detail.
But overall I have to say I don’t believe them. You can’t just make threads independent if they logically have dependencies. Or just remove cache coherency latency by removing caches.
- Comment on Shower thoughts are wasting water. 1 week ago:
Who here really has the time to stand, think and waste in the shower?
People not in a drought. It’s been quite wet here in Switzerland recently.
- Comment on Fuck the law 1 week ago:
I’m not sure we’re thinking of the same hypothetical here…
I’m saying that if you bring chicken to the cinema, and the staff (citizens) arrest you for it, they are beyond wrong. It’s obviously not illegal to bring chicken to the cinema, only against policy, and therefore it would be false imprisonment.
- Comment on Fuck the law 1 week ago:
Yes, sure, but norimee is right, they can’t arrest you. If they do arrest you for it, it’s false imprisonment and they’ll get arrested instead.
- Comment on MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says 1 month ago:
Wow, thanks for the link. It seems things have gotten a lot more complicated with PoS. I didn’t even know about PBS. I haven’t been following along properly.
- Comment on MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says 1 month ago:
It’s a private MEV mempool
Are you sure there is such a thing? My understanding was that they just submit their sandwich transactions to the mempool with higher and lower gas respectively to achieve their desired priority ranking. Could be wrong though.
- Comment on MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says 1 month ago:
by fraudulently gaining access to pending transactions
That makes no sense to me. The mempool is public, everyone can see pending transactions.
- Comment on Researchers unlock fiber optic connection 1.2 million times faster than broadband 2 months ago:
Disaggregated compute might be able to leverage this in the data center.
I don’t think people would fuck with amplifiers in a DC environment. Just using more fiber would be so much cheaper and easier to maintain. At least I haven’t heard of any current datacenters even using conventional DWDM in the C-band.
At best Google was using BiDi optics, which I suppose is a minimal form of wavelength division multiplexing.
- Comment on Researchers unlock fiber optic connection 1.2 million times faster than broadband 2 months ago:
over 90 channels of 400G each
You mean with 50 GHz channels in the C-band? That would put you at something like 42 Gbaud with DP-QAM64 modulation. It probably works, but your reach is going to be pretty shitty, because your OSNR requirements will be high, so you can’t amplify often. I would think that 58 channels at 75 GHz or even 44 channels at 100 GHz are the more likely deployment scenarios.
On the other hand, we aren’t struggling for spectrum yet, so I haven’t really had to make that call.
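The ~42 Gbaud figure can be sanity-checked with back-of-the-envelope arithmetic; the ~25% FEC-plus-framing overhead here is my assumption, real transponders vary:

```python
# Symbol rate needed for 400G with DP-QAM64 in a 50 GHz slot.
net_rate_gbps = 400
bits_per_symbol = 2 * 6   # dual polarization, 6 bits/symbol for 64QAM
overhead = 1.25           # FEC + framing overhead, assumed

symbol_rate = net_rate_gbps / bits_per_symbol * overhead
print(f"{symbol_rate:.1f} Gbaud")  # prints 41.7 Gbaud
```

That ~42 Gbaud fits inside 50 GHz, but only barely, which is why the looser 75 and 100 GHz grids are easier to live with.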
- Comment on Researchers unlock fiber optic connection 1.2 million times faster than broadband 2 months ago:
The zero-dispersion wavelength of G.652.D fiber is between 1302 nm and 1322 nm, in the O-band, whereas typical current DWDM systems operate in the range of 1528.38 to 1563.86 nm, in the C-band. Chromatic dispersion is therefore lower in the shorter-wavelength E- and S-bands than in the C-band.
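You can see this with the standard dispersion-slope approximation for G.652 fiber, D(λ) = S0/4 · (λ − λ0⁴/λ³); the λ0 = 1310 nm and S0 = 0.092 ps/(nm²·km) used here are typical assumed values, not measurements:

```python
# Chromatic dispersion of G.652 fiber vs. wavelength, using the
# standard slope approximation. l0 and S0 are assumed typical values.
l0 = 1310.0   # zero-dispersion wavelength, nm
S0 = 0.092    # dispersion slope at l0, ps/(nm^2*km)

def dispersion(l_nm):
    """D(l) in ps/(nm*km)."""
    return S0 / 4 * (l_nm - l0**4 / l_nm**3)

for band, l in [("E-band", 1410), ("S-band", 1490), ("C-band", 1550)]:
    print(f"{band} @ {l} nm: {dispersion(l):5.1f} ps/(nm*km)")
# dispersion rises from roughly 8 (E) through 14 (S) to 17 (C)
```

So the further you move from the C-band toward the O-band zero-dispersion point, the less dispersion you have to compensate.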
- Comment on Researchers unlock fiber optic connection 1.2 million times faster than broadband 2 months ago:
In 1988, TAT-8 already went into productive use as the first transatlantic fiber optic connection. So the lab work must have happened in the ’80s already.
- Comment on Researchers unlock fiber optic connection 1.2 million times faster than broadband 2 months ago:
First of all some corrections:
By constructing a device called an optical processor, however, researchers could access the never-before-used E- and S-bands.
It’s called an amplifier, not a processor; the Aston University page has it correct. And at least the S-band has seen plenty of use in ordinary CWDM systems, just not amplified. We have at least 20 operational S-band links at 1470 and 1490 nm in our backbone right now. The E-band maybe less so, because the optical absorption peak of water in conventional fiber sits somewhere in the middle of it. You could use it with low-water-peak fiber, but for most people it hasn’t been attractive to try to rent spans of only the correct type of fiber.
the E-band, which sits adjacent to the C-band in the electromagnetic spectrum
No, it does not, the S-band is between them. It goes O-band, E-band, S-band, C-band, L-band, for “original” and “extended” on the left side, and “conventional”, flanked by “short” and “long” on the right side.
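The ITU-T wavelength ranges make that ordering explicit; a small sketch (ranges in nm):

```python
# ITU-T optical band layout: note the S-band sits between E and C.
bands = {
    "O": (1260, 1360),  # original
    "E": (1360, 1460),  # extended
    "S": (1460, 1530),  # short
    "C": (1530, 1565),  # conventional
    "L": (1565, 1625),  # long
}
for name, (lo, hi) in bands.items():
    print(f"{name}-band: {lo}-{hi} nm")
```

So the E-band is separated from the C-band by the full 70 nm of the S-band.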
Now to the actual meat: this is a cool materials science achievement. However, in my professional opinion, it is not going to matter much for conventional terrestrial data networks. We already have the option of adding more spectrum to current C-band deployments in our networks, by using filters and additional L-band amplifiers. But I am not aware of any network around ours (AS559) that actually did so. Because fundamentally the question is this:
Which is cheaper:
- renting a second pair of fiber in an existing cable, and deploying the usual C-band equipment on the second pair,
- keeping just one pair, and deploying filters and the more expensive, rarer L-band equipment, or
- keeping just one pair, and using the available C-band spectrum more efficiently with incremental upgrades to new optics?
Currently, for us, there is enough spectrum still open in the C-band. And our hardware supplier is only just starting to introduce some L-band equipment. I’m currently leaning towards renting another pair being cheaper if we ever get there, but that really depends on where the big buying volume of the market will move.
Now let’s say people do end up extending to the L-band. Even then I’m not so sure that extending into the E- and S- bands as the next further step is going to be even equally attractive, for the simple reason that attenuation is much lower at the C-band and L-band wavelengths.
Maybe for subsea cables the economics shake out differently, but the way I understand it, their primary engineering constraint is getting enough power to the amplifiers in the middle of the ocean, so more amps and higher attenuation are probably not their favourite things to develop towards either. This is hearsay though; I am not very familiar with their world.
- Comment on penguins 2 months ago:
Here, I found the original, unfortunately on Instagram: www.instagram.com/alwaysyzy/p/C5KdNvmrJiT