KingRandomGuy
@KingRandomGuy@lemmy.world
- Comment on Ender 3 V2 damage? 8 hours ago:
Cute cat! Nevermore and Bentobox are two super popular ones.
Since you’re running an E3 V2, first make sure you’ve replaced the hotend with an all-metal design. The stock hotend has the PTFE tube routed all the way into the hotend, which is fine for low-temperature materials like PLA, but can result in off-gassing at higher temperatures, such as those used for ASA and some variants of PETG. PTFE particles are almost certainly not good to breathe in over the long term, and can even be deadly to certain animals, such as birds, in small quantities.
- Comment on Ender 3 V2 damage? 8 hours ago:
In my experience going a bit above 10% can help in the event of underextrusion, and I’ve seen it add a bit more rigidity. But you’re right that there are diminishing returns until you start maxing out the infill.
4 perimeters at 0.6 mm or 6 at 0.4 mm should be fine.
- Comment on Ender 3 V2 damage? 1 day ago:
Yeah, I agree. In the photo I didn’t see an enclosure so I said PETG is fine for this application. With an enclosure you’d really want to use ABS/ASA, though PETG could work in a pinch.
I also agree that an enclosure (combined with a filter) is a good idea. I think people tend to undersell the potential dangers from 3D printing, especially for people with animals in the home.
- Comment on Stop calling them tech companies: GenAI and SaaS — are they really tech? It’s time to call a spade a spade. 1 day ago:
Thanks for the respectful discussion! I work in ML (not LLMs, but computer vision), so of course I’m biased. But I think it’s understandable to dislike ML/AI stuff considering that there are unfortunately many unsavory practices taking place (potential copyright infringement, very high power consumption, etc.).
- Comment on Ender 3 V2 damage? 2 days ago:
All good, it’s still something to keep in mind (especially if OP thinks about enclosing their printer in the future). Thanks for your comment!
- Comment on Ender 3 V2 damage? 2 days ago:
IMO heat formed from stress will not be a big deal, especially considering that people frequently build machines out of PETG (Prusa’s i3 variants, custom CoreXYs like Vorons and E3NG). The bigger problem is creep, which suggests that you shouldn’t use PLA for this part.
- Comment on Ender 3 V2 damage? 2 days ago:
PETG will almost certainly be fine. Just use lots of walls (6 walls, maybe 30% infill). PETG’s heat resistance is more than good enough for a non-enclosed printer. Prusa has used PETG for their printer parts for a very long time without issues.
Heat isn’t the issue to worry about IMO. The bigger issue is creep/cold flow, which is permanent deformation that occurs even under relatively light loads. PLA has very poor creep resistance unless annealed; PETG is quite a bit better. ABS/ASA would be better still, but they’re much more of a headache to print.
- Comment on Stop calling them tech companies: GenAI and SaaS — are they really tech? It’s time to call a spade a spade. 4 days ago:
> It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. An LLM can’t reason through a problem that it hasn’t previously seen.
This also isn’t an accurate characterization IMO. LLMs and ML algorithms in general can generalize to unseen problems, even if they aren’t perfect at this; for instance, you’ll find that LLMs can produce commands to control robot locomotion, even on different robot types.
“Reasoning” here is based on chains of thought, where the model generates intermediate steps that help it produce a more accurate final answer. You can fairly argue that this isn’t reasoning, but it’s not like it’s traversing a fixed knowledge graph or something.
- Comment on Stop calling them tech companies: GenAI and SaaS — are they really tech? It’s time to call a spade a spade. 4 days ago:
> All of the “AI” garbage that is getting jammed into everything is merely scaled up from what has been before. Scaling up is not advancement.
I disagree. Scaling might seem trivial now, but the state-of-the-art architectures for NLP a decade ago (LSTMs) would not be able to scale to the degree that our current methods can. Designing new architectures to better perform on GPUs (such as Attention and Mamba) is a legitimate advancement.
Furthermore, lots of advancements were necessary to train deep networks at all. Better optimizers like Adam instead of pure SGD, plus tricks like residual connections, batch normalization, etc., were all necessary to scale up even small ConvNets, working around issues such as vanishing gradients and covariate shift that tend to appear when naively training deep networks.
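To make the optimizer point concrete, here’s a toy, pure-Python sketch of the Adam update rule for a single scalar parameter. Real frameworks apply this element-wise over tensors; the function name and the quadratic toy problem are just for illustration.

```python
# Minimal sketch of a scalar Adam update (adaptive moments + bias correction).
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter theta given gradient grad."""
    m = b1 * m + (1 - b1) * grad        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - b1 ** t)           # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2 (gradient = 2 * theta), starting at theta = 1.0.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
print(theta)  # ends up close to the minimum at 0
```

Note the per-parameter step size adapts to the gradient history via `v`, which is a big part of why Adam made deep networks much easier to train than pure SGD.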
- Comment on Beyond RGB: A new image file format efficiently stores invisible light data 1 week ago:
I agree that pickle works well for storing arbitrary metadata, but my main gripe is that it isn’t like there’s an exact standard for how the metadata should be formatted. For FITS, for example, there are keywords for metadata such as the row order, CFA matrices, etc. that all FITS processing and displaying programs need to follow to properly read the image. So to make working with multi-spectral data easier, it’d definitely be helpful to have a standard set of keywords and encoding format.
It would be interesting to see if photo editing software will pick up multichannel JPEG. As of right now there are very few sources of multi-spectral imagery for consumers, so I’m not sure what the target use case would be. The closest thing I can think of is narrowband imaging in astrophotography, but normally you process those in dedicated astronomy software (e.g. Siril, PixInsight), though you can also re-combine different wavelengths in traditional image editors.
I’ll also add that HDF5 and Zarr are good options to store arrays in Python if standardized metadata isn’t a big deal. Both of them have the benefit of user-specified chunk sizes, so they work well for tasks like ML where you may have random accesses.
- Comment on Beyond RGB: A new image file format efficiently stores invisible light data 1 week ago:
I guess part of the reason is to have a standardized method for multi- and hyperspectral images, especially for storing things like metadata. Simply storing a numpy array may not be ideal if you don’t keep metadata on what is being stored and in what order (e.g. axis order, which channel corresponds to which frequency band, etc.). Plus it seems like they extend lossy compression to this modality, which could be useful in some circumstances (though for scientific use you’d probably want lossless).
If compression isn’t the concern, certainly other formats could work to store metadata in a standardized way. FITS, the image format used in astronomy, comes to mind.
- Comment on Intel report says China aims to displace U.S. as top AI power by 2030. 1 week ago:
I guess you’d measure whose GenAI models perform best on benchmarks (currently OpenAI, generally, though top models from China are not far behind), as well as metrics like the number of publications at top venues (NeurIPS, ICML, and ICLR for ML; CVPR, ICCV, and ECCV for vision; etc.).
A lot of great papers come out of Chinese institutions so I’m not sure who would be ahead in that metric either, though.
- Comment on I want to branch out from PLA. Should I try ABS or TPU? 4 weeks ago:
It really depends on what you’re looking for. Are you just looking to learn how to print new materials, or do you have specific requirements for a project?
If it’s the former, I’d say the easiest thing to try is PETG. It prints pretty reasonably on most printers, though it has stringing issues. It has different mechanical properties that make it suitable for other applications (for example, better temperature resistance and impact strength). It’ll be much less frustrating than trying to dial in ABS for the first time.
ABS and TPU are both a pretty large step up in difficulty, but are quite good for functional parts. If you insist on learning one of these, pick whichever one fits with your projects better. For ABS you’ll want an enclosure and a well ventilated room (IMO I wouldn’t be in the same room as the printer) as it emits harmful chemicals during printing.
- Comment on Apple reveals M3 Ultra, taking Apple silicon to a new extreme 4 weeks ago:
This type of thing is mostly used for inference with extremely large models, where a single GPU will have far too little VRAM to even load a model into memory. I doubt people are expecting this to perform particularly fast, they just want to get a model to run at all.
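The VRAM wall is easy to see with back-of-the-envelope math; the parameter counts below are hypothetical examples, not any specific model.

```python
# Rough memory needed just to hold model weights, ignoring KV cache
# and activations (which add even more on top).
def weight_memory_gb(params_billions, bytes_per_param=2):
    """Weights-only memory in decimal GB; 2 bytes/param = fp16/bf16."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A hypothetical 70B-parameter model at fp16 needs ~140 GB for weights
# alone -- far beyond any single consumer GPU's VRAM, which is where a
# large unified memory pool becomes the only way to run it at all.
print(weight_memory_gb(70))  # 140.0
print(weight_memory_gb(7))   # 14.0 -- small models fit on one GPU fine
```

So even before speed enters the picture, capacity alone rules out single consumer GPUs for the biggest models.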
- Comment on Framework’s first desktop is a strange—but unique—mini ITX gaming PC 5 weeks ago:
Useless is a strong term. I do a fair amount of research on a single 4090. Lots of problems can fit in <32 GB of VRAM. Even my 3060 is good enough to run small scale tests locally.
I’m in CV, and even with enterprise-grade hardware, most folks I know are limited to 48GB (A40 and L40S, substantially cheaper and more accessible than A100/H100/H200). My advisor would always say that you should really try to set up a problem where you can iterate within a few days’ time on a single GPU, and lots of problems are still approachable that way. Of course you’re not going to make the next SOTA VLM on a 5090, but not every problem is that big.
- Comment on The science is divided 1 month ago:
Yep this is the exact issue. This problem comes up frequently in a first discrete math or formal mathematics course in universities, as an example of how subtle mistakes can arise in induction.
- Comment on The science is divided 1 month ago:
Exactly, the assumption (known as the inductive hypothesis) is completely fine by itself and doesn’t represent circular reasoning. The issue in the “proof” actually arises from the logic coming after it, which assumes you can form two overlapping sets by removing a different horse from the total set of horses. That fails when n = 2, since the two resulting sets are disjoint (each contains a single, different horse).
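You can see the broken step directly with a couple of lines of Python; which horse gets removed (here min/max) is arbitrary, the point is only that the two removals differ.

```python
# The flawed inductive step: from a set of horses, remove one horse to
# get subset A, remove a *different* horse to get subset B, then argue
# A and B overlap (so all horses share a color by the hypothesis).
def overlap_after_removals(horses):
    a = horses - {min(horses)}  # remove one horse
    b = horses - {max(horses)}  # remove a different horse
    return a & b                # horses common to both subsets

print(overlap_after_removals({1, 2, 3}))  # {2} -- overlap exists, step works
print(overlap_after_removals({1, 2}))     # set() -- disjoint, step fails
```

The overlap argument holds for 3 or more horses, so the whole “proof” collapses at exactly one rung of the ladder: going from 1 horse to 2.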
- Comment on Can I ethically use LLMs? 1 month ago:
I’m fairly certain blockchain GPUs have very different requirements from those used for ML, especially LLMs. In particular, they don’t need anywhere near as much VRAM and generally don’t require floating point math, nor do they need features like tensor cores. Those “blockchain GPUs” likely didn’t turn into ML GPUs.
ML has been around for a long time. People have been using GPUs in ML since AlexNet in 2012, not just after blockchain hype started to die down.
- Comment on Arm's to launch first self-made processors, poaching employees from clients: Reports 1 month ago:
I think what they meant by that is “is this different wrt antitrust compared to Intel and x86?”
Intel both owns the x86 ISA and designs processors for it, though the situation is more favorable in that AMD owns x86-64 and obviously also designs their own processors.
- Comment on DeepSeek Proves It: Open Source is the Secret to Dominating Tech Markets (and Wall Street has it wrong). 1 month ago:
I would say that in comparison to the standards used for top ML conferences, the paper is relatively light on the details. But nonetheless some folks have been able to reimplement portions of their techniques.
ML in general has a reproducibility crisis. Lots of papers are extremely hard to reproduce, even if they’re open source, since the optimization process is partly random (ordering of batches, augmentations, nondeterminism in GPUs etc.), and unfortunately even with seeding, the randomness is not guaranteed to be consistent across platforms.
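To illustrate what seeding does (and doesn’t) buy you, here’s a toy sketch; the function name and batch count are made up for the example.

```python
# Seeding a PRNG makes one library's "random" choices repeatable within
# a run, e.g. the order training batches are shuffled in. It does NOT
# guarantee identical results across libraries, versions, or hardware,
# which is a big part of why ML results are hard to reproduce exactly.
import random

def shuffled_batch_order(seed, n_batches=8):
    rng = random.Random(seed)  # local generator, avoids global state
    order = list(range(n_batches))
    rng.shuffle(order)
    return order

print(shuffled_batch_order(42) == shuffled_batch_order(42))  # True
print(shuffled_batch_order(42), shuffled_batch_order(43))    # typically differ
```

And this only covers CPU-side randomness; GPU kernels can be nondeterministic even with every seed fixed.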
- Comment on Time to get serious with E2E encrypted messaging 1 month ago:
No, the server is in the GitHub account linked above as well. The repo is here.
Signal, however, doesn’t federate and does not generally support third-party clients.
- Comment on Time to get serious with E2E encrypted messaging 1 month ago:
I used to do something like this before Signal became a thing. We used to use OTR via the Pidgin OTR plugin to send encrypted messages over Google Hangouts.
It was pretty funny to check the official Hangouts web client and see nonsensical text being sent.
- Comment on xkcd #3047: Rotary Tool 1 month ago:
The sidereal telescope mount one seems to be right (approx 1 rotation per day).
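The rate checks out with quick arithmetic:

```python
# A sidereal tracking mount turns ~360 degrees per sidereal day
# (about 23 h 56 m 4 s), i.e. roughly 15 arcseconds per second of time.
sidereal_day_s = 23 * 3600 + 56 * 60 + 4  # 86164 s
deg_per_s = 360 / sidereal_day_s
arcsec_per_s = deg_per_s * 3600
print(round(arcsec_per_s, 2))  # ~15.04
```

So "approximately one rotation per day" is right, just ~4 minutes short of a solar day.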
- Comment on Nobel Prize 2024 5 months ago:
I work in an ML-adjacent field (CV) and I thought I’d add that AI and ML aren’t quite the same thing. You can have non-learning based methods that fall under the field of AI - for instance, tree search methods can be pretty effective algorithms to define an agent for relatively simple games like checkers, and they don’t require any learning whatsoever.
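To make the tree-search point concrete, here’s a toy example: exhaustive minimax-style search on a tiny Nim variant (take 1 or 2 stones; whoever takes the last stone wins), not checkers, but the same idea of non-learning game-tree AI.

```python
# Exhaustive game-tree search on a tiny Nim variant. No learning at
# all: the "AI" simply searches every line of play to the end.
def best_move(stones):
    """Return (move, True) if the current player can force a win,
    else (some_move, False)."""
    best = (None, False)
    for take in (1, 2):
        if take > stones:
            continue
        if take == stones:
            return take, True  # taking the last stone wins immediately
        _, opponent_wins = best_move(stones - take)
        if not opponent_wins:
            return take, True  # leave the opponent a losing position
        best = (take, False)
    return best

print(best_move(5))  # (2, True): take 2, leaving a losing pile of 3
print(best_move(3))  # piles that are multiples of 3 are lost positions
```

Scaled up with pruning and handcrafted heuristics, this family of methods played checkers and chess at a high level decades before deep learning.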
Normally, we say Deep Learning (the subfield of ML that relates to deep neural networks, including LLMs) is a subset of Machine Learning, which in turn is a subset of AI.
Like others have mentioned, AI is just a poorly defined term unfortunately, largely because intelligence isn’t a well defined term either. In my undergrad we defined an AI system as a programmed system that has the capacity to do tasks that are considered to require intelligence. Obviously, this definition gets flaky since not everyone agrees on what tasks would be considered to require intelligence. This also has the problem where when the field solves a problem, people (including those in the field) tend to think “well, if we could solve it, surely it couldn’t have really required intelligence” and then move the goal posts. We’ve seen that already with games like Chess and Go, as well as CV tasks like image recognition and object detection at super-human accuracy.