So I did this, using a Ryzen 3600; with some light tweaking, the base system burns about 40-50W idle. The drives add a lot, 5-10W each, but they would go into any NAS system, so that’s irrelevant. I had to add a GPU because the MB I had wouldn’t POST without one, which increases the power draw a little, but it’s also necessary for proper Jellyfin transcoding. I recently swapped the GPU for an Intel Arc A310.
By comparison, the previous system I used for this had a low-power, fanless Intel Celeron; with a single drive and two SSDs it drew about 30W.
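For anyone sizing a similar build, here’s a rough back-of-the-envelope sketch using those numbers; the drive count is just an example:

```python
# Rough ballpark from the figures above: 40-50 W base plus 5-10 W per spinning
# drive. The drive count is a made-up example, not an actual build.
def idle_draw_w(base_w, hdd_count, w_per_hdd):
    return base_w + hdd_count * w_per_hdd

low = idle_draw_w(40, 4, 5)    # optimistic, 4 HDDs
high = idle_draw_w(50, 4, 10)  # pessimistic, 4 HDDs
print(f"estimated idle draw with 4 HDDs: {low}-{high} W")
```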
brucethemoose@lemmy.world 2 weeks ago
Depends.
Toss the GPU/wifi, disable audio, throttle the processor a ton, and set the OS to power saving, and old PCs can be shockingly efficient.
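For the “set the OS to power saving” part, here’s a minimal sketch assuming a Linux box with the standard cpufreq sysfs interface (run as root; roughly what `cpupower frequency-set -g powersave` does):

```python
# Minimal sketch, Linux-only, needs root: switch every core's cpufreq governor
# to "powersave" via sysfs. Assumes the standard cpufreq interface is present.
import glob

for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
    try:
        with open(path, "w") as f:
            f.write("powersave")  # governor must be supported by your driver
    except OSError as err:
        print(f"skipped {path}: {err}")
```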
cmnybo@discuss.tchncs.de 2 weeks ago
You can slow the RAM down too. You don’t need XMP enabled if you’re just using the PC as a NAS, and RAM at XMP speeds can be quite power hungry.
brucethemoose@lemmy.world 2 weeks ago
Eh, older RAM doesn’t use much. If it runs close to stock voltage anyway, maybe just set it to stock voltage and bump the speed down a notch; then you get a nice task-energy gain from the performance boost.
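Rough illustration of what I mean by task energy, with made-up numbers (a sketch, not measurements):

```python
# Illustrative only: invented power and runtime figures to show why slightly
# higher active power can still cost less energy per task if the job finishes
# sooner (the race-to-idle idea).
def task_energy_wh(active_w, idle_w, runtime_s):
    # energy attributed to the task, above what the box burns idling anyway
    return (active_w - idle_w) * runtime_s / 3600

slow = task_energy_wh(active_w=44, idle_w=30, runtime_s=900)  # RAM downclocked
fast = task_energy_wh(active_w=48, idle_w=30, runtime_s=650)  # near-stock speed
print(f"downclocked: {slow:.2f} Wh/task, near-stock: {fast:.2f} Wh/task")
```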
fuckwit_mcbumcrumble@lemmy.dbzer0.com 2 weeks ago
There was a post a while back of someone trying to eke every single watt out of their computer. Disabling XMP and running the RAM at the slowest speed possible saved like 3 watts, I think. An impressive savings, but at the cost of HORRIBLE CPU performance. But you do actually need at least a little bit of grunt for a NAS.
At work we have some of those Atom-based NASes, and the combination of lack of CPU grunt and horrendous single-channel RAM speeds makes them absolutely crawl. One HDD on its own performs the same as their RAID 10 array.
Aceticon@lemmy.dbzer0.com 2 weeks ago
Stuff designed for much higher peak usage tends to have a lot more waste.
For example, a 400W power supply (which is probably what’s in the original PC from your example) will waste more power than a lower-wattage one (unless it’s a very expensive unit), so in that example of yours it should be replaced by something much smaller.
Even beyond that, everything in there - the motherboard, for another example - will have a lot more power leakage than something designed for a low-power system (say, an ARM SBC).
Unless it’s a notebook, that old PC will always consume more power than, say, an N100 mini PC, much less an ARM-based one.
WhyJiffie@sh.itjust.works 2 weeks ago
In my experience power supplies are most efficient near 50% utilization; be quiet! publishes efficiency charts for their PSUs.
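to put rough numbers on it (the efficiency figures below are assumptions shaped like a typical curve, not taken from any real chart):

```python
# Illustrative only: assumed efficiencies for a 40 W DC load on an oversized
# 400 W PSU vs a smaller unit running closer to its sweet spot.
def wall_draw_w(dc_load_w, efficiency):
    return dc_load_w / efficiency

oversized = wall_draw_w(40, 0.72)    # 400 W unit loafing at ~10% load
right_sized = wall_draw_w(40, 0.86)  # smaller unit nearer 50% load
print(f"400 W PSU: ~{oversized:.0f} W at the wall, smaller PSU: ~{right_sized:.0f} W")
```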
Aceticon@lemmy.dbzer0.com 2 weeks ago
The way one designs hardware is to optimize for the most common usage scenario, with enough capacity to account for the peak-use scenario (and some safety margin on top).
However, specifically for power supplies, if you want to handle more power you have to, for example, use larger capacitors and switching MOSFETs, and those have more leakage, hence more baseline losses. Mind you, with more expensive components one can get higher-power parts with less leakage, but that’s not going to happen outside specialist power supplies specifically designed for high peak use AND low baseline power consumption, and I’m not even sure there’s a genuine use case for such a design that justifies paying the extra cost for high-power, low-leakage components.
brucethemoose@lemmy.world 2 weeks ago
All true, yep.
Still, the clocking advantage is there. Stuff like the N100 also optimizes for lower costs, which means higher clocks on smaller silicon. That’s even more dramatic for repurposed laptop hardware, which is much more heavily optimized for its idle state.
Valmond@lemmy.world 2 weeks ago
And heat your room in the winter!
Add spring + autumn if you live up north.