diomnep
@diomnep@lemmynsfw.com
- Comment on AMD says overclocking blows a hidden fuse on Ryzen Threadripper 7000 to show if you've overclocked the chip, but it doesn't automatically void your CPU's warranty 10 months ago:
I get what you’re saying. In a way I can see how it feels like setting a low speed limit so police can pull over whoever they want.
What I’d say in response is that, IMO, processors are all so fast these days that you can buy pretty much anything current and be fine for basic computing. The value of current processors is just really high.
I just don’t think overclocking is necessary these days. It’s more of a hobby at this point, and I say this as someone who does it as a hobby.
Back in the day, a couple hundred extra MHz wasn’t just a much bigger percentage gain numerically; it could get you over the hump from a bad experience to a good one. Today we’re talking about 3300MHz vs 3500MHz, and that just isn’t a difference you can feel.
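To put rough numbers on that, here’s a quick sketch. The 3300 vs 3500 figures are from the comparison above; the vintage-era clocks are hypothetical examples, not specific chips:

```python
def overclock_gain(base_mhz, oc_mhz):
    """Percentage speedup from running base_mhz silicon at oc_mhz."""
    return (oc_mhz - base_mhz) / base_mhz * 100

# Hypothetical late-90s scenario: pushing a 300 MHz part to 450 MHz
vintage = overclock_gain(300, 450)   # 50% faster — a difference you could feel

# Today's scenario from the comment: 3300 MHz vs 3500 MHz
modern = overclock_gain(3300, 3500)  # ~6% faster

print(f"vintage: +{vintage:.0f}%, modern: +{modern:.1f}%")
```

Same “couple hundred MHz,” roughly an order of magnitude apart in relative gain.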
In fact, AMD’s Precision Boost Overdrive (PBO) will give you those couple hundred MHz without voiding your warranty at all. So if you’re looking to squeeze out a little extra performance, you’re covered. You just have to turn it off and demonstrate that the issue persists before AMD will approve a warranty claim.
So what actually voids the warranty? Going beyond what PBO is willing to do. That’s where we get into larger and larger clock-speed increases and, more importantly, higher voltages. Higher voltage induces additional stress on the silicon, which leads to higher failure rates.
When I used to build PCs for my friends and family, you literally had to pay extra for the privilege of being able to overclock at all. Compared to that, AMD seems really reasonable in this case.
- Comment on AMD says overclocking blows a hidden fuse on Ryzen Threadripper 7000 to show if you've overclocked the chip, but it doesn't automatically void your CPU's warranty 11 months ago:
What a ridiculous take. I love overclocking and pushing hardware to its limits but if I operate equipment outside of its design parameters I don’t expect the manufacturer to bail me out if I damage it. I paid for a 3.8GHz 8 core processor (or whatever) and it’s on me if I decide to operate it outside of those parameters.
A lot of you have a sense of entitlement that does not line up with reality. If I need a 12-core 3.8GHz processor, that is what I buy. If you decide to buy a 12-core 3.2GHz processor and overclock it to 3.8GHz, that is on you. It isn’t on the manufacturer to subsidize your overclocking adventure. Processors are binned according to what they can handle, based on benchmark data, and the cost of higher-end processors already factors in the reality that those parts may need more frequent replacement because they sit at the cutting edge of the platform they were designed for.
Deprogram yourself. If you buy a processor rated for X cores at Y GHz, that is the performance you should expect to receive. If you go beyond that you are on your own and what you encounter on that journey is on you.
What you are suggesting with this statement, whether you realize it or not, is that people who pay for what they actually need should subsidize your attempts to DIY that performance in the form of higher costs overall.
Please, void your warranty, but accept that you have voided it when you do.
- Comment on What a terrible relationship 1 year ago:
Just casual racism is all.
- Comment on Google Fiber goes big with 20-gig plan 1 year ago:
Thing is though, most consumer networking gear tops out at 1gbit per port, so to take advantage of even a 2gig or 2.5gig plan you at least need a router with a 2.5gig uplink. With that, a couple of people on the network can use a gig each.
My setup is a 1.2g cable connection going into a 2.5g port on my router, with a couple of servers connected to the router over 10g. This basically lets me download off of my servers at the full speed of the network but the rest of my devices are limited to 1gig.
Going up to 20gig would require a large investment to see the benefits. First you would need a router with a 25g uplink port, which is really only found on a certain tier of “enterprise” gear. These routers aren’t going to have a bunch of ports, so you’ll need to dump the output to either a 25g switch or a couple of 10g switches (probably the most cost-effective option). From there you can distribute out to 20 machines at 1g.
Anyway, you are definitely right about the aim of a service like this but to see the benefits of a 20g connection would require some very expensive and specialized equipment.
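To make the fan-out math concrete, here’s a rough back-of-the-envelope sketch. All port speeds and client counts are illustrative assumptions, not hardware recommendations:

```python
def per_client_share(uplink_gbps, clients, client_port_gbps=1.0):
    """Best-case bandwidth per client if all clients pull at once,
    capped by each client's own port speed (ignores protocol overhead)."""
    fair_share = uplink_gbps / clients
    return min(fair_share, client_port_gbps)

# A 20 Gbit/s uplink split across 20 machines with 1 Gbit/s ports:
# every machine can still run at full line rate simultaneously.
print(per_client_share(20, 20))  # 1.0

# A typical 1 Gbit/s uplink with the same 20 machines: 50 Mbit/s each.
print(per_client_share(1, 20))   # 0.05
```

In other words, the 20g plan only pays off once the gear between the modem and the clients can actually carry it.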
- Comment on TURNING MY BRIGHTNESS UP TO A USEABLE LEVEL IS THE REAL ISSUE 1 year ago:
Doubtful that this is using traditional per-device water cooling. I’m betting this is traditional hot-aisle/cold-aisle cooling with an evaporative component in the HVAC coolant loop to reduce the power requirements of the system as a whole.
Lemmy loves to shit on MS, but they are constantly innovating in data-center efficiency, because on a service with fixed prices and multi-year reservations like Azure, spending less to operate translates directly into profit.
You can bet that if the solution were as simple as what you suggest, they would have been doing it for years. But the thermal considerations for one machine and for 100,000 machines are not the same. The #1 priority at that scale is to use as little power as possible, because power is not only the biggest expense but also the primary limit on the total number of systems you can host.