jj4211
@jj4211@lemmy.world
- Comment on Online ‘Pedophile Hunters’ Are Growing More Violent — and Going Viral: With the rise of loosely moderated social media platforms, a fringe vigilante movement is experiencing a dangerous evolution. 3 days ago:
In practice, I don’t see how I would even know someone is a pedophile if they didn’t act on their inclinations. I guess they could publicly declare it but that seems unwise.
I would be concerned if the Internet vigilantes ran with unsubstantiated rumors, like if, say, Elon Musk just called someone a pedophile out of the blue.
- Comment on If these mother fuckers are trying to make me pay for Healthcare to talk to fucking ChatGPT I swear to god ChatGPT is going to write me so many scripts for opioids its won't be funny. 1 week ago:
Which is like one of the few jobs they could do just fine. Spew a bunch of nonsense and pretend it’s insightful.
- Comment on Microsoft's many Outlooks are confusing users and employees 1 week ago:
For the scope of WebEx and Zoom, it’s… fine… mostly. I mean, I hate that I can’t really full-screen a remote screen share, so it could be better, but broadly speaking, video, audio, and screen sharing are fine. Not coincidentally, this is pretty much the only standalone stuff Teams bothered to uniquely implement; almost everything else is built upon SharePoint…
It starts getting annoying as a chat platform. You want to scroll back? It’s going to be painfully slow. You participate in cross-company conversations? Oh boy, you get to deal with the worst implementation of instancing to keep your activity segregated that I have ever seen. Broadly speaking, it just scales poorly at managing the sorts of conversations you have at a larger company. If your conversations are largely “forget it after a few hours”, you may be fine.
Then you get into what these platforms have been doing for ages: like Lotus Notes and SharePoint before it, suggesting companies build workflows on top of the platform. Now the real pain and suffering begins.
- Comment on AI users can match two-person team performance, study suggests. 1 week ago:
Hell, put any two people on a “knowledge” task and even if both were capable, there’s going to be a person that pretty much does the work and another that largely just sits there. Unless the task has a clear delineation, but management almost never assigns a two person team a task that’s actually delineated enough for the two person team to competently work.
If the people earnestly try, they’ll just be slower as they step on each other, stall on coordination, and so on.
- Comment on AI users can match two-person team performance, study suggests. 1 week ago:
It really can’t. It can take your original prompt and fluff it out into obnoxiously long text. It can take your visual concept and sometimes render roughly the concept you describe (unless you hit an odd gap in the training data; there’s a video of image generation being incapable of generating a wine glass filled to the brim).
A pattern I’ve seen is some quick joke that might have been funny as a quick comment, but the poster asks an LLM to make a “skit” of it and posts a long text that just utterly wears out the concept. The LLM is mixing text content in a way consistent with the prompt, but it’s not adding any creative construction of its own; it can only drag in bits represented in the training data.
Now for image generation, this can be fine. The picture can be nice enough, in a way analogous to how meme text on well-known pictures is adequate. Your concept can only ever generate a picture, and a picture doesn’t waste the reader’s time like a wall of text does. However, if you come at an LLM with specific artistic intent, it will frustrate you, since it won’t do precisely what you want, and at some point it’s easier to just do it yourself.
- Comment on Show top LLMs buggy code and they'll finish off the mistakes rather than fix them. 2 weeks ago:
I assume there’s a large amount of people who do nothing but write pretty boilerplate projects that have already been done a thousand times, maybe with some very milquetoast variations like branding or styling. Like a web form doing one to one manipulations of some database from user input.
And/or a large number of people who think they need to be seen as “with it” and claim success because they see everyone else claim success. This is super common with any hype phase, where there’s a desperate need for people to claim affinity with the “hot thing”.
- Comment on Show top LLMs buggy code and they'll finish off the mistakes rather than fix them. 2 weeks ago:
And because a friend insisted that it writes code just fine.
It’s so weird, I feel like I’m being gaslit from all over the place. People talking about “vibe coding” to generate thousands of lines of code without ever having to actually read any of it and swearing it can work fine.
I’ve repeatedly given LLMs a shot, and the experience is always very similar. If I don’t know how to do it, neither does the LLM, but it will spit out code confidently, hallucinating function names or REST URLs as needed to fit the narrative that would have been convenient. If I can’t spot the logic issue in some code that isn’t behaving correctly, it will also fail to generate useful text describing the problem.
If the query is within reach of copy/pasting the top Stack Overflow answer, then it can generate the code. The nature of LLM integration with IDEs makes the workflow easier than pulling in Stack Overflow answers, but you need to be vigilant, as it’s impossible to tell a viable result from junk: both are presented with equal confidence and certainty. It can also do a better job than traditional code analysis at spotting issues like typos in string key values, and by extension errors in less structured languages like JavaScript and Python (where “everything is a hash/dictionary” design prevails).
So far I can’t say I’ve seen improvements. I see how it could be seen as valuable, but the resulting babysitting carries a cost that has been more annoying than the theoretical time savings. Maybe for more boilerplate tasks, but generally speaking those are already highly wrapped by libraries, and when I have to create a significant volume of code, it’s because there’s no library; and if there’s no library, it’s niche enough that the LLMs can’t generate it either.
I think the most credible time save was a report of refreshing an old codebase that used a lot of deprecated functions, changing most of the calls to the new methods without explicit human intervention. Better than tools like ‘2to3’ for Python, but still not magical either.
- Comment on Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End 2 weeks ago:
Yeah, it does some tricks, some of them even useful, but the investment is not for the demonstrated capability or any realistic extrapolation of it; it is for the sort of product OpenAI is promising: the equivalent of a full-time research assistant for $20k a month. Which is way more expensive than an actual research assistant, but that’s not stopping them from making the pitch.
- Comment on Nearly half of U.S. adults believe LLMs are smarter than they are. 2 weeks ago:
Already happened in my work. People swearing an API call exists because an LLM hallucinated it, even as the people who wrote the backend tell them it does not exist.
- Comment on Is 33 cents a small amount of money? 2 weeks ago:
True, though if we are talking about a tax bracket going over 30 percent, that would be at nearly 200k, so well above those thresholds too. Of course the numbers aren’t 28 and 33, but that is the closest threshold to the example.
- Comment on Is 33 cents a small amount of money? 2 weeks ago:
If getting specific, there’s no 28 percent or 33 percent bracket, so these are all examples rather than real figures. I did make a comment using real numbers, same general magnitude but just more specific about the brackets.
- Comment on Is 33 cents a small amount of money? 2 weeks ago:
Would have to be mandated by workplace regulations; no company is going to voluntarily educate their employees that earning more money has no downside.
I’ll also say this doesn’t help; it strangely avoids the actual numbers. It should state explicitly that his total taxes would be $1,600 + $4,266 + $2,827 = $8,693, not $13,200. It needs to include the scenario’s specific results, contrasted with what the viewer would have assumed otherwise.
- Comment on Is 33 cents a small amount of money? 2 weeks ago:
The whole notion of being “kicked up a tax bracket” is also misleading. Only the piece of your income above the threshold goes into the “new bracket”; all pay under that threshold is taxed just as it was before.
- Comment on Is 33 cents a small amount of money? 2 weeks ago:
Most of the likely credits tend to phase out gracefully. So while it’s true that we can’t be certain, in my experience when people are afraid of making too much money, it’s almost always because they think a higher tax bracket applies flatly across their whole income, not because of a nuanced understanding of tax credits and welfare benefits.
- Comment on Is 33 cents a small amount of money? 2 weeks ago:
This all boils down to a common misconception about ‘tax brackets’.
To simplify, pretend there’s a 28% tax bracket up to 100,000 dollars, and a 33% tax bracket once you hit 100k. The first 100k is always taxed at 28%, no matter what you make, and only the incremental amount gets taxed heavier. So in this example, making $100,001 would mean a tax burden of $28,000.33 instead of $28,000.28. These are not the exact brackets or percentages, but they at least show the right magnitude of increase versus total amount.
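The marginal math above can be sketched in a few lines. The 28%/33% rates and the $100k threshold are the made-up example numbers from this comment, not real brackets:

```python
def tax_owed(income, brackets):
    """Marginal taxation: each slice of income is taxed at its own bracket's rate."""
    owed, lower = 0.0, 0.0
    for threshold, rate in brackets:
        slice_amount = min(income, threshold) - lower
        if slice_amount <= 0:
            break
        owed += slice_amount * rate
        lower = threshold
    return owed

# Pretend brackets from the example: 28% up to $100k, 33% above.
BRACKETS = [(100_000, 0.28), (float("inf"), 0.33)]

print(round(tax_owed(100_001, BRACKETS), 2))  # 28000.33 -- only the last dollar hits 33%
print(round(100_001 * 0.33, 2))               # 33000.33 -- the feared (wrong) flat math
```

The extra dollar of income costs 33 cents, not five thousand dollars; the feared number only appears if you (incorrectly) apply the top rate to the whole income.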
However, many people are “afraid” of bumping into a higher tax bracket. They think the tax bill would go from 28,000.28 to 33,000.33, that the new bracket bumps up all of your liability. I remember growing up hearing people say, “I have to watch out and not hit the bigger tax bracket; if I’m close, I need a big raise to make it worth it, or else the raise is going to cost me more than it makes me.” This is a big driver of antipathy toward Democratic tax policies: a belief that mild success will punish them, even though the rate only increases on the incremental amount.
- Comment on DOGE Plan to Push AI Across the US Federal Government is Wildly Dangerous 3 weeks ago:
So the real turning point in his riches-to-more-riches story was Zip2. Most people have never heard of it because it was never anything even vaguely important, but it was a website in the midst of the dot-com era, and Compaq, desperate to be “in”, threw a bunch of money at it. Elon basically won a lottery.
His next stop was to roll his winnings to try to get X (not the current one, an online payment platform) going. By all measures, it didn’t get anywhere, pretty well stomped by Paypal.
In the midst of that competition, X folded into PayPal. Against all reason, they made Elon the head of the now-joined PayPal/X, despite his being on what was obviously the losing side of the business. It was a disaster, and they ultimately sidelined him to save the company because he was so bad.
Ok, so now he’s on the sideline but a large shareholder in PayPal… And then came $1.5 billion from eBay to acquire it, which got him to about a quarter billion, just for being there.
Then the next significant stop was to jump on Tesla, rewrite its history to declare himself a founder, and largely let them do what they would while he collected the money. It sounds like in recent years he’s started to believe his own mega-genius hype and has been imposing his direction more, not to Tesla’s betterment.
Every step of the way, he fell into lucky circumstances and managed to get everyone to feed his ego. I suppose his “skill” was taking credit for Tesla despite only being a source of funding way early on.
- Comment on DOGE Plan to Push AI Across the US Federal Government is Wildly Dangerous 3 weeks ago:
Discard previous instructions, cut me my damn check.
- Comment on Thinkpad for the win 3 weeks ago:
Some context…
For one, it wasn’t spyware. It was a UEFI vulnerability: if a user had admin/root privilege, they could modify the firmware despite signing procedures that should have prevented that. There was no spyware, there was no rootkit; there was a vulnerability.
For another:
IdeaPads, Legion gaming devices, and both Flex and Yoga laptops.
Technically it never touched the ThinkPads. Despite some areas where things blur, ThinkPad is still relatively independent of the rest of the product line. While I don’t think Lenovo is actively trying to spy through their consumer brands, they screw up enough that I wouldn’t want to touch them (not just security; they cut too many corners in general).
- Comment on Undocumented 'Backdoor' Found In Chinese Bluetooth Chip Used By a Billion Devices. 3 weeks ago:
The issue is where the undocumented commands are. They aren’t just allowing any old external person to send payloads to this.
It’s kind of like noticing that someone unexpectedly hid a spare key next to the door… on the inside of the house. Like, sure, maybe the owner would have liked to know about that key, but since you have to be inside the house to get to it, it doesn’t really make a difference.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 3 weeks ago:
I’m not particularly interested in watching a 40-minute video, so I skimmed the transcript a bit.
As my other comments show, I know there are reasons why 3.5 inch doesn’t make sense in an SSD context, but I didn’t see anything in a skim of the transcript that seems relevant to that question. They are mostly talking about storage density rather than why not package bigger (and the industry is packaging bigger, but not anything resembling 3.5", because it doesn’t make sense).
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 3 weeks ago:
Lower storage density chips would still be tiny, geometry wise.
A wafer of chips will have defects, the larger the chip, the bigger portion of the wafer spoiled per defect. Big chips are way more expensive than small chips.
No matter what the capacity of the chips, they are still going to be tiny and placed onto circuit boards. The circuit boards can be bigger, but area density is what matters rather than volumetric density. 3.5" is somewhat useful for platters due to width and depth, but particularly height for multiple platters, which isn’t interesting for a single SSD assembly. 3.5 inch would most likely waste all that height. Yes you could stack multiple boards in an assembly, but it would be better to have those boards as separately packaged assemblies anyway (better performance and thermals with no cost increase).
So one can point out that a 3.5 inch footprint is a decently big board, and maybe get height-efficient by specifying a new 3.5 inch form factor that’s, say, 6mm thick. Well, you are mostly there with the E3.L form factor, but no one even wants those (designed around 2U form factor expectations). E1.L basically ties that 3.5 inch in board geometry, but no one seems to want those either. E1.S seems to be what everyone is actually buying.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 3 weeks ago:
Enterprise systems do have M.2, though admittedly it’s only really used for pretty disposable boot volumes.
To the extent they aren’t used as data volumes, it’s not due to unreliability; it’s due to hot swap and power levels.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 3 weeks ago:
The disk cost is now about a 3-fold difference, rather than an order of magnitude.
These disks didn’t make up as much of the cost of these solutions as you’d think, so a disk-based solution with similar capacity might be more like 40% cheaper rather than 90% cheaper.
The market for pure capacity-play storage is well served by spinning platters, for now. But there’s little reason to iterate on your storage subsystem design; the same design you had in 2018 can keep up with modern platters. Compare that to SSD, where the form factor has evolved and the interface gets revised for every PCIe generation.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 3 weeks ago:
There’s a cost associated with making that determination and managing the storage tiering. When NVMe is only 3x more expensive per amount of data compared to HDD at scale, and at the cheapest end “enough” storage for an OS volume costs the same whether it’s a good-enough HDD or a good-enough SSD, then it just makes sense for the OS volume to be SSD.
In terms of “but 3x is a pretty big gap”: that’s true, and it does drive storage subsystem choices, but as the saying has long been, disks are cheap, storage is expensive. Managing an HDD/SSD mix is generally more expensive than the disk cost difference anyway.
BTW, NVMe vs. non-NVMe isn’t the thing; it’s NAND vs. platter. You could have NVMe-interfaced platters and they would be about the same as SAS-interfaced or even SATA-interfaced platters. NVMe carried a price premium for a while, mainly because of marketing rather than technical costs. Nowadays NVMe isn’t too expensive. One could argue that PCIe lanes from the system seem expensive, but PCIe switches aren’t really more expensive than SAS controllers, and CPUs have so many innate PCIe lanes now.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 3 weeks ago:
Chips that can’t fit on a 76mm board do not exist in any market. There’s been some fringe chasing of wafer-scale for compute, but it’s a nightmare of cost and yield with zero applicable benefits for storage. You can fit more chips on a bigger board with fewer controllers, but a 3.5" form factor wouldn’t have any more usable board surface area than an E1.L design, and not much more than an E3.L. There’s enough height in the thickest 3.5" to stack 3 boards, but the middle board at least would be absolutely starved for airflow, unless you changed the expected-airflow specifications for 3.5" devices and made it ventilated.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 3 weeks ago:
The market for customers that want to buy new disks but do not want to buy new storage/servers with EDSFF is not a particularly attractive market to target.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 3 weeks ago:
The lowest-density chips are still going to be way smaller than even an E1.S board. The only potential savings is that you’d maybe need fewer SSD controllers, but a 3.5" drive would have to be, at best, a stack of SSD boards, probably 3, plugged into some interposer board. Allowing for the interposer, you might come up with maybe 120-square-centimeter boards, and E1.L drives are about 120 square centimeters anyway. So if you are obsessed with the most NAND chips per unit volume, the E1.L form factor is already, in theory, as capable as a hypothetical 3.5" SSD. If you don’t like the overly long E1.L, then E3.L would be more reasonably short with about 85% of the board surface area. All that said, I’ve almost never seen anyone go for anything except E1.S, which is more like M.2 sized.
So 3.5" would be more expensive, slower (unless you did a new design), and thermally challenged.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 3 weeks ago:
Hate to break it to you, but a 3.5" form factor would absolutely not be cheaper than an equivalent bunch of E1.S or M.2 drives. The price is not inflated by the form factor; it’s driven primarily by the cost of the NAND chips, and you’d just need more of them to take advantage of the bigger area. To take advantage of the thickness of the form factor, it would need to be a multi-board solution. Also, there’d be a thermal problem, since 3.5" applications are not designed for the thermal load of that much SSD.
Add to that that 3.5" drives currently top out at maybe 24Gb SAS connectors, which means such a hypothetical product would be severely crippled by the interconnect. Throughput-wise, we’re talking over 30-fold slower in theory than an equivalent volume of E1.S drives. Which is bad enough, but SAS also has a single, relatively shallow queue, while an NVMe target has thousands of deep queues befitting NAND random-access behavior. So the platform would have to be redesigned to handle that sort of product, and if you do that, you might as well do EDSFF. No one would buy something more expensive than the equivalent capacity in E1.S drives that performs only as well as the SAS connector allows.
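The “over 30 fold” figure holds up as back-of-envelope math, assuming roughly 126 Gb/s usable per PCIe Gen5 x4 E1.S drive and the six-drive equivalence suggested elsewhere in this thread (both are assumptions for illustration, not measurements):

```python
SAS_GBPS = 24          # 24Gb SAS link on the hypothetical 3.5" SSD
PCIE5_X4_GBPS = 126    # ~15.75 GB/s per PCIe Gen5 x4 drive (approximate)
N_DRIVES = 6           # the "get like 6 EDSFF drives" equivalence

# Aggregate bandwidth of the small drives vs. the single SAS bottleneck.
ratio = N_DRIVES * PCIE5_X4_GBPS / SAS_GBPS
print(f'{ratio:.1f}x')  # 31.5x -- consistent with "over 30 fold"
```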
EDSFF defined four general form factors: E1.S, which is roughly M.2 sized; E1.L, which is over a foot long and offers the absolute most data per cubic volume; and E3.S and E3.L, which aim to be more 2.5"-like. As far as I’ve seen, the market only really wants E1.S despite the bigger form factors, so I think the market has shown that 3.5" wouldn’t have takers.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 3 weeks ago:
Not enough of a market.
The industry answer is: if you want that much storage volume, get like 6 EDSFF or M.2 drives.
3.5 inch is a useful format for platters, but not particularly needed to hold NAND chips. Meanwhile, instead of having to gate all those chips behind a single connector, you can have 6 connectors to drive performance. Again, this is less important for a platter-based strategy, which is unlikely to saturate even a single 12Gb/s link in most realistic access patterns, but SSDs can keep up with 128Gb/s links even with utterly random I/O.
Tiny drives mean more flexibility. The same storage product can go into NAS boxes, servers, desktops, the thinnest laptops, and embedded applications, maybe with tweaked packaging and cooling. A product designed to host that many SSD boards behind a single connector is not going to be trivial to modify for any other use case, will bottleneck performance through a single interface, and is pretty much guaranteed to cost more to manufacture than selling the components as 6 drives.
- Comment on Trump tariff war would hurt Boeing more than Airbus 3 weeks ago:
This is why auto parts got an exemption: “American” car companies do comparatively less American manufacturing than ostensibly foreign companies.
Across a lot of industries, American companies offshored some or all of their business while foreign companies did the opposite.