You need to send 1 million HTTP requests and read the responses, with no more than 100 requests in flight at a time.
Which approach is better, recommended, idiomatic?
- Send 100 requests, wait for all of them to finish, send another 100, wait for those… and so on.
- Send 100 requests, then keep adding new ones to the pool as in-flight requests finish: one done, add a new one; one done, add a new one.
ck_@discuss.tchncs.de 11 months ago
Given we know very little about the problem and runtime constraints, the second approach has the potential to perform better. With approach 1, every batch waits for its slowest request, so the average time per request equals the batch's worst case; with approach 2, a slot frees as soon as any request finishes, so throughput tracks the average request duration instead.
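A minimal sketch of approach 2 using an `asyncio.Semaphore`: each task acquires a slot, so a new request starts the moment any other one finishes, without waiting for a whole batch. The `asyncio.sleep` call is a stand-in for the real HTTP request (e.g. via `aiohttp`), and the `peak` counter is only there to demonstrate that concurrency never exceeds the limit; the URL list is hypothetical.

```python
import asyncio

LIMIT = 100  # at most 100 requests in flight at once
active = 0   # current in-flight count (for demonstration only)
peak = 0     # highest concurrency observed

async def fetch(sem: asyncio.Semaphore, url: str) -> str:
    global active, peak
    async with sem:  # a slot frees as soon as any request finishes
        active += 1
        peak = max(peak, active)
        await asyncio.sleep(0.001)  # stand-in for the real HTTP call
        active -= 1
        return url

async def main(urls):
    sem = asyncio.Semaphore(LIMIT)
    # gather schedules all tasks; the semaphore throttles actual concurrency
    return await asyncio.gather(*(fetch(sem, u) for u in urls))

urls = [f"https://example.com/{i}" for i in range(1_000)]
responses = asyncio.run(main(urls))
print(len(responses), peak <= LIMIT)
```

Note that creating a task object per URL is fine at this scale, but for the full 1 million you would likely feed URLs to a fixed pool of worker tasks via an `asyncio.Queue` to keep memory bounded; the throttling idea is the same.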
salvador@lemmy.world 11 months ago
Who “we”? Voices in your head?
ck_@discuss.tchncs.de 11 months ago
Aaand… blocked