Comment on Honey, I Shrunk The Vids [Mr. Universe Edition] v1.0.5
obelisk_complex@piefed.ca 1 day ago
Hey, replying again so you get a separate reply message. So like I said, I went looking for redundant loops and I found quite a few, just like you described. There was also a minor performance issue in the logic that built the FFmpeg arguments; it used a lot of unnecessary flags, each of which required a fresh memory allocation. That would only be an issue in specific circumstances, like encoding thousands of videos in quick succession… but that's exactly the kind of issue you were talking about, so I asked for and implemented the fix.
It does seem snappier. I'm pushing 1.0.9, which includes the fixes that went beyond what your comments pointed at (like the argument construction issue). If there's anything else you'd recommend I look at, I'm all ears.
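For anyone curious what that argument-construction fix looks like in spirit, here's a minimal sketch (Python, hypothetical - HISTV's real code differs, and the codec/flags below are placeholders, not the app's actual settings): the static flags get built once and reused, so each job only allocates its own per-file pieces instead of rebuilding everything for every encode.

```python
# Static flags are created once and shared by every job, instead of
# rebuilding (and reallocating) the same strings thousands of times.
BASE_ARGS = ("ffmpeg", "-hide_banner", "-y")

def build_args(src: str, dst: str, crf: int = 23) -> list[str]:
    # Only the per-file pieces are constructed per job; no redundant flags.
    return [*BASE_ARGS, "-i", src, "-c:v", "libx265", "-crf", str(crf), dst]
```

You'd then hand the list straight to something like `subprocess.run(build_args(...))` for each queued file.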
non_burglar@lemmy.world 19 hours ago
Nice.
The issues to look for are unnecessary logic (evaluating variables and conditions for no reason), and double sets of variables.
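A tiny, hypothetical before/after of that first pattern - a condition that can't change being re-evaluated inside a loop (names here are made up for illustration, not from any real project):

```python
def tag_files(files: list[str], mode: str) -> list[tuple[str, str]]:
    # Before the fix, `mode.lower() == "verbose"` was checked once per file
    # even though it can't change mid-loop; hoisting it out removes the
    # unnecessary per-iteration logic.
    verbose = mode.lower() == "verbose"
    return [(f, "v" if verbose else "") for f in files]
```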
One of the seasoned devs I work with said she encourages coders to transpose work at major inflection points, and this helps all devs gain an understanding of their own code. The technique is simply to rewrite/refactor the code in a new project manually, changing the names of the variables and arrays. The process forces one to identify where variables and actions are being used and how. It’s not very practical for very big projects, but anything under 1000 lines would benefit from it.
Good luck.
obelisk_complex@piefed.ca 19 hours ago
That's very similar to what I've been doing 😊 I think this project is on the cusp - a few of the files are over a thousand lines, but it's still kinda manageable. Comparatively, the PowerShell script I started with was far simpler. I actually wrote most of that one myself, because I know how to get stuff done in PowerShell - I just needed Claude's help with the GUI.
Also, I was thinking about your comment on performance when you're looking at tens of thousands of runs - definitely not my original intent for this, and I figured anyone doing that would just use the CLI, but it's totally possible with HISTV. I added an option to write files to an /outputs path relative to the input file, so you could just drag a top-level folder into the queue, let it enumerate the media in all the subdirectories, and hit start. You'd get the transcoded files right next to the originals in your folder structure, so they're easy to find. Useful, I hope, when doing that many jobs.
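The enumeration step could look something like this minimal sketch (Python with `pathlib`; the extension set and function name are my own assumptions for illustration, not HISTV's actual code):

```python
from pathlib import Path

# Assumed set of media extensions; the real filter may differ.
MEDIA_EXTS = {".mp4", ".mkv", ".mov", ".avi"}

def plan_outputs(top: Path) -> list[tuple[Path, Path]]:
    """Walk `top` recursively and pair each media file with an output
    path in an `outputs` folder next to the original."""
    jobs = []
    for src in top.rglob("*"):
        if src.is_file() and src.suffix.lower() in MEDIA_EXTS:
            jobs.append((src, src.parent / "outputs" / src.name))
    return jobs
```

Because the output path is derived from each input's own parent folder, the transcodes land beside the originals no matter how deep the subdirectory tree goes.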
And thanks to your advice, it'll do so a lot more efficiently - something like 5-6x lower resource usage now. I really do appreciate the feedback; it's exactly the kind of pointers I was hoping for when I posted this. I just wish you'd found your way into the comments some other way than via my emotional response to someone else :P
non_burglar@lemmy.world 18 hours ago
I'm 50 yrs old now, but I used to react almost the same way you did, so I understand where you're coming from.
I personally believe LLMs (and AI in general) can be great tools to help along with coding and similar tasks, we just don’t have a very good culture of their use yet.
obelisk_complex@piefed.ca 18 hours ago
Yeah, it’s another reminder of why I always tell myself to take a day and think about my response - for some reason though I don’t often stick to it! 😅
Also, I realised I was being needlessly stubborn mostly because nobody was telling me how to tag my post, just that I had to; after a quick think, I do understand why people want to know if they're gonna be reading genAI code. I've added a preamble noting that this is AI-assisted, with a bit about why I think I'm doing it differently than people might first expect. I'll clean it up to include in future posts and in my repo readmes.