obelisk_complex
@obelisk_complex@piefed.ca
- Comment on 1 week ago:
Hahaha baby steps! But I’ll look at it; if nothing else I think it would be very funny to have the dev equivalent of Jar Jar Binks end the format war by accident (which I say as just a joke; I of course have no idea how complex the issue actually is).
- Comment on 1 week ago:
Ah, then the real slowness is going to come from having them on a spinning-disk HDD. For friends, a 4Mbps target bitrate should be plenty, and the 2x multiplier should be enough to preserve detail. No need to touch anything else - you don’t need precision mode for it. Try that on just one episode and see how you go!
- Comment on 1 week ago:
Fun fact - HISTV actually has two-pass encoding! Though with enough system RAM you can look ahead far enough to get the benefits of two-pass in just a single pass. I have a bit about this in the README.md:
Precision mode
One checkbox for the best quality the app can produce. It picks the smartest encoding strategy based on how much RAM your system has:
| Your RAM | What it does |
|---|---|
| 16GB or more | Looks 250 frames ahead to plan bitrate (single pass) |
| 8-16GB | Looks 120 frames ahead to plan bitrate (single pass) |
| Under 8GB | Scans the whole file first, then encodes (two passes) |

Two-pass only happens when precision mode is on AND the system has less than 8GB RAM AND the file would be CRF-encoded. The reason: Precision Mode normally uses CRF with extended lookahead (120-250 frames depending on RAM). Lookahead buffers live in memory. On low-RAM systems that buffer would be too large, so the app falls back to two-pass instead and stores the analysis run in a tempfile on disk. To break down each pass:
- Pass 1: Runs ffmpeg with -pass 1 writing to a null output. ffmpeg analyses the entire file and writes a statistics log (the passlog file) describing the complexity of every scene. No actual video is produced - this pass is pure analysis.
- Pass 2: Runs ffmpeg with -pass 2 using the statistics from pass 1. The encoder now knows what’s coming and can distribute bits intelligently across the whole file - spending more on complex scenes, less on simple ones - without needing a large lookahead buffer in RAM. After both passes complete, the passlog temp files are cleaned up.
I went with this architecture to mitigate the biggest problem with two-pass encoding: the speed. When there’s enough system RAM to store the lookahead frames, single-pass with a 250-frame lookahead is equal in quality to two-pass; the 120-frame lookahead is just as fast, and still close enough in quality that it doesn’t really make a difference.
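To make the README table above concrete, here's a minimal sketch of how a RAM-based strategy pick could work. The function names, thresholds, and codec choice are illustrative assumptions, not HISTV's actual code; only the ffmpeg flags (`-pass`, `-f null`, `-b:v`) are standard ffmpeg:

```python
import os

def precision_strategy(ram_gb: float) -> dict:
    """Pick a lookahead depth or the two-pass fallback from system RAM,
    mirroring the precision-mode table above (thresholds in GB)."""
    if ram_gb >= 16:
        return {"passes": 1, "lookahead": 250}  # deep lookahead, single pass
    if ram_gb >= 8:
        return {"passes": 1, "lookahead": 120}  # smaller buffer, single pass
    return {"passes": 2, "lookahead": 0}        # low RAM: fall back to two-pass

def two_pass_commands(src: str, dst: str, bitrate: str) -> list:
    """Build the two ffmpeg invocations for the low-RAM fallback."""
    base = ["ffmpeg", "-y", "-i", src, "-c:v", "libx265", "-b:v", bitrate]
    # Pass 1: pure analysis - video goes to the null muxer, stats to the passlog
    pass1 = base + ["-pass", "1", "-f", "null", os.devnull]
    # Pass 2: the real encode, reading the pass-1 statistics
    pass2 = base + ["-pass", "2", dst]
    return [pass1, pass2]

print(precision_strategy(32))  # {'passes': 1, 'lookahead': 250}
```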
- Comment on 1 week ago:
You know what? I think I can figure out a way to estimate final file size and display it to the user. It’ll only work if “Precision Mode” is off though - that uses “CRF” or “Constant Rate Factor” which basically tells the encoder “be efficient, but make the file as big as it needs to be to look good”. As a result there’s no way to tell how big the file will end up being - the encoder makes the decision on the fly.
With “Precision Mode” off, HISTV has two gears:
- If your file is already small enough (at or below the target bitrate), it uses “CQP” or “Constant Quantisation Parameter” (the QP I/P numbers), which tells the encoder “Use this quality level for every frame, I don’t care how big the file ends up”. It’s fast and consistent - every frame gets the same treatment.
- If your file is too big, it switches to VBR (Variable Bit Rate), which tells the encoder “Stay around this target bitrate, spike up to the peak ceiling on complex scenes, but don’t go over”. It’s how the app actually shrinks files. You can estimate the output size with target mbps * seconds / 8 - so a 60-second clip at 4Mbps lands around 30MB. <- This is the maths I’m thinking about doing to display the estimate to the user.
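That back-of-the-envelope maths is just target bitrate (megabits/second) times duration, divided by 8 to get megabytes. A minimal sketch - note it ignores audio and container overhead, and only applies when VBR kicks in:

```python
def estimate_size_mb(target_mbps: float, seconds: float) -> float:
    # megabits/second * seconds = total megabits; divide by 8 for megabytes
    return target_mbps * seconds / 8

print(estimate_size_mb(4, 60))    # 30.0 -> a 60-second clip at 4Mbps, ~30MB
print(estimate_size_mb(4, 2700))  # 1350.0 -> a 45-minute file, ~1.35GB
```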
- Comment on 1 week ago:
There are now!
- Comment on 1 week ago:
Sweet as mate, cheers for the kind words! I hope it’s useful - and if you run into any issues, or think of a feature you’d like to see, please do let me know. I always promise I’ll at least consider it, and I’m having a lot of fun working on this thing 😁
- Comment on 1 week ago:
Yeah, it’s early and I was just skimming the docs for it as I don’t have a Mac. I’ll think about the metadata downloader; it does have a place, arguably, as related to the mkv tag repair I built in.
As to the CLI, HISTV does have one! It’s a standalone binary but it uses the same Rust backend as the GUI for feature parity and maintainability 😊
- Comment on 1 week ago:
Haha yes, I use it regularly! And yes, it’ll be plenty fast on your system. I have very deliberately gone over the code base looking for inefficiencies six times now, so it runs nice and lean - I do an efficiency/hygiene pass every couple of releases to make sure bloat doesn’t creep in.
As ark3@lemmy.dbzer0.com said, CPU encoding is slow but it preserves the most quality. No kidding, it really is night and day compared to GPU encoding - for this, just tick “Precision mode” in HISTV. It’s about 1x speed on most videos, so a 45 minute file will take about 45 minutes.
GPU does go a lot faster - my 7900 XTX rips through 1080p at about 28x speed, so a 45 minute file takes about 2 minutes. This is good enough for most content; set the multiplier to 2x or 3x with a target bitrate of 4Mbps if you want better GPU quality in HISTV. The multiplier sets how high the peak bitrate can go, so you keep more data for fast-moving scenes without forcing the encoder to keep useless data for slow scenes.
- Comment on 1 week ago:
This is super helpful (and I see that “btw”, you got a smoke with that one (☞゚ヮ゚)☞). Thank you for the heads up and all this detailed information! I’m excited to check out Certum.
there seem to be very few people who simultaneously:
- pathologically dislike using Windows regularly
- still want to make it easier for people on Windows to minimize Windows Defender complaints when running software that they build
Describes me to a T 😅 My career is rooted in support, so my pathologies include trying to make things end-user easy.
- Comment on 1 week ago:
There really aren’t many to be honest, Tdarr is super powerful! But the setup is a lot, at least on first run. The main point of HISTV is for the times when you can’t be bothered to set up Tdarr, like if you only have to do spot conversions, or for people who don’t want to learn how to use Tdarr. But there are a few features unique to HISTV!
I have built in disk space monitoring so your drive doesn’t fill up during encoding, which Tdarr doesn’t do. I don’t know if Tdarr supports turning gif/webp/mjpeg/apng into MP4/MKV. And Tdarr doesn’t auto-detect your hardware, whereas HISTV does a few test transcodes on startup to determine not only what hardware is available but whether the encoder for that hardware is actually working.
- Comment on 1 week ago:
Thank you! I am obnoxiously proud of myself for that one 😅
That’s actually how this all started for me haha! I was sitting there tweaking the same command as I hit videos with different formats or quality levels, and I thought to myself that it’d be a lot easier with just a few smarts like detecting if a video is already at the target quality level. The first version was actually just a winforms GUI wrapping that very PowerShell command - it’s come a long way in just a few weeks.
Subler is interesting! Hmm. Currently HISTV just copies over all the subtitle tracks directly. I think I could add a subtitle function like Subler’s to bake in subtitle tracks, if the user has the subtitle file they want to use. Would that be useful? Metadata download from the internet is messy though, I might want to leave that to the existing *arr solutions.
- Comment on 1 week ago:
Thank you! And yes, it can - you can put the HISTV command for the CLI version in a custom script in Settings -> Connect -> Custom Script:
The script would be a one-liner with your HISTV command; args can be either passed in the script or built into a little JSON file next to your HISTV CLI binary 😊
- Comment on 1 week ago:
Thank you very much, that’s very kind! 😊
I didn’t know I could get the cert from elsewhere, I appreciate the heads up; my experience lies mainly in networking and DNS, rather than software. I’ll open that possibility back up, then. It’d be a first for me, and I’m getting a real taste for firsts with this project!
I’m not taking anyone’s money, though, even Windows users (right now, that includes me… I’m working on it lol). I’m doing this purely for the love of the game!
- Submitted 1 week ago to selfhosted@lemmy.world | 33 comments
- Comment on ONYX: self-hosted messenger with LAN mode and E2EE — an indie project story 1 week ago:
Hey bud, this is a neat project! I ran your codebase through my own Claude Opus 4.6 instance with extended thinking on, against a “code quality” skill I worked up with the bot the other day. The skill prioritises security, code efficiency, and enforcing end-user usability over taking shortcuts. It found a lot more than the 8 security issues I opened, but I didn’t wanna flood your issues section until I’m sure you’re happy for me to contribute like this.
Anyway, cheers, and good luck!
- Comment on I started playing WH40k Rogue Trader and I'm digging it, but I know virtually nothing of Warhammer. Any super basic world info I should know going in? 2 weeks ago:
Can’t believe nobody has said why The Warp exists - or rather why it overlaps with and breaks into our reality.
Warhammer has Space Elves - the Eldar/Aeldari. Powerful psykers. Millennia ago they got so kinky and on so many drugs they fucked a hole in reality itself, a hole that promptly engulfed their homeworld and sent them on an endless exodus in their Worldships. They were such powerful psychics their sexcapades also manifested demons and the Chaos Gods from the formless energies of The Warp.
The ones that renounced their horny ways became the Eldar, the ones who didn’t became the Dark Eldar who are still really into leather and chains.
This isn’t super relevant to the story of the game, I just really like this bit of the lore. It reads like a joke: “Why’s the universe so jacked up? Fuckin’ space elves…”
- Submitted 2 weeks ago to selfhosted@lemmy.world | 2 comments
- Comment on Harmony - Yet Another Discord Alternative 2 weeks ago:
I’m right there with you, bud. I tried StoatChat too, and I got a nice email from the German government about using an outdated version of React with RCE vulnerabilities. I think this must be a very difficult problem to properly solve, given the number of different approaches and how all of them have their own issues to contend with. Nextcloud Talk is the most usable option I’ve found because it does voice, video, and screen sharing and it also has call links you can send for unregistered people to join the calls. But performance is spotty even with the “high performance backend” set up (that may be due to my server being in Germany though 😅).
As to being accused of using AI, don’t let it get to you. The people yelling the loudest can’t tell the difference between handwritten code and AI, because they can’t code. If you pull down your repo, you’ll be depriving people who might be able to use your project because of trolls who never would have tried it in the first place.
I do use AI for coding, and I’ve gotten plenty of hate for it, but also people who don’t care and just want the functionality of the tool I built.
And in fact I’m going to check out your project and see if I can get it up and running, so please don’t take it down. I’ll likely be putting it on my German server so I’ll let you know what the performance is like with extreme round trips 😁
- Comment on Harmony - Yet Another Discord Alternative 2 weeks ago:
Would be easier to contribute to XMPP or Matrix IMO.
Synapse is in the middle of a rebuild without much compatibility between the legacy and new builds, and it’s a pain in the dick to set up at the moment. I know, because I did it.
XMPP I haven’t tried to set up yet, but I imagine it to be similarly in-depth.
As to why not contribute: like you said, most likely AI. The maintainers don’t want these contributions; if they did, they’d use the AI themselves. I didn’t get it at first but after some discussion here, and building my own thing, I understand why people feel that way too.
Now… why do the whole thing from scratch instead of forking? Great question. XMPP might just need a nice coat of paint, if it can handle voice and video and screen share; I haven’t come away with great impressions of matrix/synapse.
- Comment on 'Icky and heartbreaking': The $2 per hour worker behind the OnlyFans boom 3 weeks ago:
We’ve practically exhausted the Exploration and Expansion phases
The Ocean and Space both called, and they disagree
- Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash 3 weeks ago:
Yikes. Hadn’t heard about the openclaw use. That stack scares the bejeezus out of me.
- Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash 3 weeks ago:
Precisely this, yes, well said. We all stand on the shoulders of those who came before us, one way or another.
- Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash 3 weeks ago:
Because coding is hard work even with AI assistance, and people who don’t code will judge you the loudest and longest and meanest for using AI to make the work easier. I personally suffer rejection sensitivity dysphoria so I understand the emotions behind their actions.
But yeah, everyone just ignores the years of coding work this person did for nothing just to help people enjoy their games, to crucify them for using AI and then having feelings about getting yelled at by the very beneficiaries of their prior work.
It’s not like they’re stripping out or reimplementing contributions and taking the project closed source, like BookLore. People need some damn perspective.
- Comment on yooooo i thought these cabbages were resident evil renders lol 3 weeks ago:
You got cabbaged! (https://www.youtube.com/watch?v=h8X7S4j_zXA)
- Comment on Claude Code deletes developers' production setup, including its database and snapshots — 2.5 years of records were nuked in an instant 4 weeks ago:
Sure, but reading the article, I think he might be knowledgeable enough. His mistake seems to have been blindly trusting the keys to the kingdom to an enthusiastic junior dev who’ll be very sorry if they nuke your system, but won’t think to do a damn thing to make sure it doesn’t happen in the first place…
- Comment on Honey, I Shrunk The Vids [Mr. Universe Edition] v1.0.5 4 weeks ago:
lol I’ll take that as high praise, as everyone knows the 1990s were the peak of our civilisation!
- Comment on Honey, I Shrunk The Vids [Mr. Universe Edition] v1.0.5 4 weeks ago:
Yeah, I’m getting that; though this isn’t purely AI-generated. This is a working application that I’ve tested, have improved and plan on continuing to improve, and am currently using to transcode my media. There’s a lot more care and thought put into it than most people would expect on reading that it was created with the help of an AI model.
I put the disclaimer because I respect that serious developers who actually go look at the code would like a heads-up that it’s genAI before they waste their time reading it. But, I would like people to at least have a chance to read why I think my approach is different than most.
And, if you have videos to transcode, I’d love to hear what you think if you give it a go! I do actively fix bugs as well as add new features, so please do let me know if you try it and find an issue - I could use all the help testing it I can get ‘cause my hardware to test on is quite limited.
- Comment on Honey, I Shrunk The Vids [Mr. Universe Edition] v1.0.5 4 weeks ago:
I was hoping to catch this before you replied - I went and read the readme and it made more sense, so I deleted my reply. But too late!
All good! I’m actually enjoying talking about this thing with people who want to know more, so I don’t mind at all 😊
The cool thing is there isn’t much to put into a command that does stuff like this, unless you’re changing the FFMPEG parameters every time, but that would seem unlikely.
So actually, that’s exactly the issue I was running into! I’d run a batch command on a whole folder full of videos, but a handful would already be well-encoded, or at least they’d have a much MUCH lower bitrate, so I’d end up with mostly well-compressed files and a handful that looked like they went through a woodchipper. I wanted everything to come out the other end in the same codecs, in the same containers, at roughly the same quality (and playable on devices from around 2016 and newer). So I implemented a three-way decision based around the target bitrate you set; every file gets evaluated independently for which approach to use:
1. Above target → VBR re-encode: If a file’s source bitrate is higher than the target (e.g. source is 8 Mbps and target is 4 Mbps), the video is re-encoded using variable bitrate mode aimed at the target, with a peak cap set to 150% of the target. This is the only case where the file actually gets compressed.
2. At or below target, same codec → stream copy: If the file is already at or below the target bitrate and it’s already in the target codec (e.g. it’s HEVC and you’re encoding to HEVC), the video stream is copied bit-for-bit with -c:v copy. No re-encoding happens at all - the video passes through untouched. This is what prevents overcompression of files that are already well-compressed.
3. At or below target, different codec → quality-mode transcode: If the file is at or below the target but in a different codec (e.g. it’s H.264 and you’re encoding to HEVC), it can’t be copied because the codec needs to change. In this case it’s transcoded using either CQP (constant quantisation parameter) or CRF (constant rate factor) rather than VBR - so the encoder targets a quality level rather than a bitrate. This avoids the situation where VBR would try to force a 2 Mbps file “down” to a 4 Mbps target and potentially bloat it, or where the encoder wastes bits trying to hit a target that’s higher than what the content needs.
There’s also a post-encode size check as a safety net: if the output file ends up larger than the source (which can happen when a quality-mode transcode expands a very efficiently compressed source), HISTV deletes the output, remuxes the original source into the target container instead, and logs a warning. So even in the worst case, you never end up with a file bigger than what you started with which is much harder to claim with a raw CLI input. The audio side has a similar approach; each audio stream is independently compared against the audio cap, and streams already below the cap in the target codec are copied rather than re-encoded.
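The decision tree above can be sketched in a few lines. This is illustrative pseudologic, not HISTV's actual Rust code - the function name, return shape, and the 150% peak multiplier are taken from the description above:

```python
def choose_mode(source_mbps: float, target_mbps: float,
                source_codec: str, target_codec: str) -> tuple:
    """Pick one of the three approaches described above for a single file."""
    if source_mbps > target_mbps:
        # 1. Above target: VBR re-encode, peak capped at 150% of target
        return ("vbr", {"target": target_mbps, "peak": target_mbps * 1.5})
    if source_codec == target_codec:
        # 2. At/below target, same codec: bit-for-bit stream copy (-c:v copy)
        return ("copy", {})
    # 3. At/below target, different codec: quality-mode transcode (CQP/CRF)
    return ("quality", {})

print(choose_mode(8, 4, "hevc", "hevc"))  # ('vbr', {'target': 4, 'peak': 6.0})
print(choose_mode(2, 4, "hevc", "hevc"))  # ('copy', {})
print(choose_mode(2, 4, "h264", "hevc"))  # ('quality', {})
```

The post-encode size check then acts as the safety net on top of this: if the chosen mode still produced a bigger file, discard it and remux the original instead.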
But yeah everything beyond that was bells and whistles to make it easier for people who aren’t me to use it haha.
I am 100% looking for more stuff I can build - let’s talk about it!
- Comment on Honey, I Shrunk The Vids [Mr. Universe Edition] v1.0.5 4 weeks ago:
Thanks mate! It’s been a rough as hell week at work and getting it when I’m trying to share my hobby work with people was unexpected and a little demoralising, so your comment is really nice to read and much appreciated 😊