There's a "philosopher" who the far-right techbro-oligarchs rely on, whose blog is grey-something-or-other..
I tried using wget, and there's a bug or something in the site: it keeps inserting links to other sites into the URIs, so you get bullshit like
grey-something-or-other.substack.com/e/b/a/http://en.wikipedia.org/wiki/etc..
The site apparently works for the people who browse it, but wget isn't managing to just clone the thing.
I want the items the usable site is actually made of, not endless failed requests chasing recursive errors forever.
Apparently one has to be ultra-competent to configure all the excludes and other command-line switches needed to get wget to handle any particular site.
Sure, on static sites it's magic, but on too many sites with dynamically constructed portions it's a damn headache at times.
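For what it's worth, the sort of invocation I'd start from looks roughly like this — a sketch, not a recipe; the domain is the same stand-in as above, and the --reject-regex assumes the mangled links all have a second http:// jammed into them:

    wget --mirror --page-requisites --convert-links --adjust-extension \
         --no-parent \
         --reject-regex='https?://.*https?://' \
         https://grey-something-or-other.substack.com/

The --reject-regex is tested against each full URL before wget fetches it, so anything with an embedded second scheme gets skipped instead of 404-ing forever.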
TheTwelveYearOld@lemmy.world 4 weeks ago
For instance, I can’t completely download YouTube pages with their videos using wget, but I can with pywb (though pywb has issues with sites like Reddit).
Not that I would necessarily use it for YouTube pages, but that’s an example of a complex page with lots of AJAX.
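If anyone wants to try the pywb route, the basic recording workflow is roughly this — the collection name is arbitrary and the flags are from memory of the pywb docs, so double-check against your version:

    pip install pywb
    wb-manager init my-coll    # create a new collection
    wayback --record --live    # serve pywb on localhost:8080 with recording enabled
    # then browse http://localhost:8080/my-coll/record/https://www.youtube.com/
    # pages fetched through that /record/ URL get captured into my-coll's WARC files

Everything you click through while in /record/ mode lands in the archive, which is why it copes with AJAX-heavy pages better than a blind crawler.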