Two major applications I’ve used that don’t deal well with slow cell links:
- Lemmyverse.net runs an index of all Threadiverse instances and of all communities on all instances, and is presently an irreplaceable resource for a user on here who wants to search for a given community. Its communities page loads an enormous amount of data and has some sort of short timeout. Whatever it pulls down internally (I didn't look) either isn't cached or is a single file, so reloading the page starts over from scratch. The net result is that it won't work over a slow connection.
- This may have been fixed since, but git went through a serious stretch where it would smash into timeouts and not work on slow links, at least against GitHub. That made it impossible to clone larger repositories; I remember failing to clone the Cataclysm: Dark Days Ahead repository, where one couldn't even manage a shallow clone. This was greatly exacerbated by the fact that git still can't resume a download that gets interrupted. I've generally worked around it by cloning on a machine with a fast connection, then using rsync to pull the repository over to the machine on the slow link, which, frankly, is a little embarrassing when one considers that git really is the premier distributed VCS tool out there in 2025.
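That workaround looks roughly like this ("fastbox" and the paths are placeholders; the repo URL is the real Cataclysm: DDA one):

```shell
# On a machine with a fast connection ("fastbox"), clone normally:
git clone https://github.com/CleverRaven/Cataclysm-DDA.git

# Then, from the machine on the slow link, pull the repo over with rsync.
# --partial keeps interrupted files around, so re-running the command
# resumes the transfer instead of starting over the way git does.
rsync -az --partial --info=progress2 fastbox:Cataclysm-DDA/ Cataclysm-DDA/
```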
tal@lemmy.today 7 months ago
A bit of banging away later — I haven’t touched Linux traffic shaping in some years — I’ve got a quick-and-dirty script to set a machine up to temporarily simulate a slow inbound interface for testing.
:::spoiler slow.sh test script
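The original script didn't survive the trip here, so this is a reconstruction sketch: inbound shaping with `tc` by redirecting ingress traffic through an `ifb` virtual device and applying `netem` there. Interface name, rate, and delay defaults are my assumptions, and it needs root.

```shell
#!/bin/sh
# slow.sh -- temporarily simulate a slow inbound link (sketch; defaults
# below are assumptions, not the original script's values). Needs root.
set -e
IF=${1:-eth0}        # real interface to slow down
RATE=${2:-64kbit}    # simulated link speed
DELAY=${3:-300ms}    # simulated latency

# inbound traffic can't be shaped directly, so mirror it through ifb0
modprobe ifb numifbs=1
ip link set ifb0 up

# redirect all ingress traffic on $IF to ifb0
tc qdisc add dev "$IF" handle ffff: ingress
tc filter add dev "$IF" parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# shape the redirected traffic: limited rate plus added delay
tc qdisc add dev ifb0 root netem rate "$RATE" delay "$DELAY"

# to undo:
#   tc qdisc del dev "$IF" ingress
#   tc qdisc del dev ifb0 root
#   ip link set ifb0 down
```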
:::
I’m going to see whether I can still reproduce that git failure for Cataclysm on git 2.47.2, which is what’s in Debian trixie. As I recall, it got a fair bit of the way into the download before bailing out. I’m including the script here since I think the article makes a good point that there probably should be more slow-network testing, and maybe someone else wants to test something themselves on a slow network.
It’d probably be better to have something a little fancier that only slows traffic for one particular application (maybe create a “slow” Podman container and match on traffic going to it?), but this is good enough for a quick-and-dirty test.
mesamunefire@piefed.social 7 months ago
Nice! Scientific data!
Also, it looks like it’s still an issue with GH in countries with slower connections: https://github.com/orgs/community/discussions/135808. So yeah, nvm, it’s still a huge issue even today.
tal@lemmy.today 7 months ago
Thanks. Yeah, I’m pretty sure that’s what I was hitting. Hmm. Okay, that’s actually good in a way: it’s not a git bug, then, but something problematic in GitHub’s infrastructure.
mesamunefire@piefed.social 7 months ago
I wonder if git has some kind of built-in retry. I know you can do it with a basic bash script, but presumably someone else is hitting the same issue, right?
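The bash-script version of a retry is just a loop like this (OWNER/REPO in the comment is a placeholder; note each git attempt still starts from zero, since git can't resume a partial clone):

```shell
# crude retry helper: run the given command up to 5 times,
# pausing a second between attempts; returns the last failure
# if every attempt fails
retry() {
    for i in 1 2 3 4 5; do
        "$@" && return 0
        echo "attempt $i failed; retrying" >&2
        sleep 1
    done
    return 1
}

# e.g. (OWNER/REPO is a placeholder):
#   retry git clone --depth=1 https://github.com/OWNER/REPO.git
retry echo "succeeds on the first try"
```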
I did see a --depth=1 option, or something like that, to fetch only a certain depth of git history, but that’s about it.
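For reference, that option is `git clone --depth=1`, and it can be demonstrated entirely offline against a throwaway local repo (the repo here is made up for the demo):

```shell
# build a small local repo with two commits to clone from
git init -q src
git -C src -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m first
git -C src -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m second

# --depth=1 fetches only the newest commit, so far less data moves
git clone -q --depth=1 "file://$PWD/src" dst
git -C dst rev-list --count HEAD    # prints 1

# the rest of the history can be pulled later, incrementally
git -C dst fetch -q --unshallow
git -C dst rev-list --count HEAD    # prints 2
```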
I can’t find the curl workaround I used a long time ago. It might have been just pulling the code down as a zip, the way some GH repos let you do.
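The zip route looks roughly like this (OWNER/REPO and the branch name are placeholders; the caveat is that `-C -` only helps if the server honors range requests, which GitHub's on-the-fly archive endpoint may not):

```shell
# grab the repo as a zip instead of cloning; -L follows redirects,
# -C - asks curl to resume a partial download where it left off
curl -L -C - -o repo.zip \
  "https://github.com/OWNER/REPO/archive/refs/heads/main.zip"
unzip repo.zip
```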