Comment on Introducing reitti: a selfhosted alternative to Google Timeline
danielgraf@discuss.tchncs.de 1 day ago
Thanks again for the follow-up. It is something I can investigate. I doubt that it is somehow related, but who knows. 🤷
ada@piefed.blahaj.zone 1 day ago
Ok, so it may not be frozen. The numbers in the queue seem to imply that it is, but timelines and places are slowly filling out in my history. A couple of dates I had looked at previously were showing me tracklogs for the day but no timeline information, and now they're showing timelines for those days.
danielgraf@discuss.tchncs.de 1 day ago
That’s good, but I still wonder why it is so slow. If these timeout exceptions keep occurring, at some point the data will stop being analyzed.
I just re-tested it with multiple concurrent imports into a clean DB, and the stay-detection-queue completed in 10 minutes. It’s not normal for it to take that long for you. The component that should take the most time is actually the merge-visit-queue, because this creates a lot of stress for the DB. This test was conducted on my laptop, equipped with an AMD Ryzen™ 7 PRO 8840U and 32GB of RAM.

ada@piefed.blahaj.zone 1 day ago
Since I last commented, the queue has jumped from about 9,000 outstanding items to 15,000 outstanding items, and it appears that I have timelines for a large portion of my history now.
However, the estimated time is still slowly creeping up (though only by a minute or two, despite adding 6000 more items to the queue).
I haven't uploaded anything manually that might have triggered the change in queue size.
Are there any external calls made while processing this queue that might be adding latency?
tl;dr - something is definitely happening
danielgraf@discuss.tchncs.de 1 day ago
This process is not triggered by any external events.
Every ten minutes, an internal background job activates. Its function is to scan the database for any RawLocationPoints that haven’t been processed yet. These unprocessed points are then batched into groups of 100, and each batch is sent as a message to be consumed by the stay-detection-queue. This process naturally adds to the workload of that queue.

However, if no new location data is being ingested, once all RawLocationPoints have been processed and their respective flags set, the stay-detection-queue should eventually clear, and the system should return to an idle state. I’m still puzzled as to why this initial queue (stay-detection-queue) is exhibiting such slow performance for you, as it’s typically one of the faster steps.
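For illustration, here is a rough sketch of what such a scheduled batching job could look like. This is only an assumption-laden sketch (Spring-style scheduling and AMQP; the class, repository method, and entity names are made up for illustration), not necessarily reitti's actual implementation:

```java
import java.util.List;

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical sketch of the periodic batching job described above.
// RawLocationPoint, its repository, and the queue name are illustrative assumptions.
@Component
public class RawLocationPointScheduler {

    private static final int BATCH_SIZE = 100;

    private final RawLocationPointRepository repository;
    private final RabbitTemplate rabbitTemplate;

    public RawLocationPointScheduler(RawLocationPointRepository repository,
                                     RabbitTemplate rabbitTemplate) {
        this.repository = repository;
        this.rabbitTemplate = rabbitTemplate;
    }

    // Every ten minutes, pick up points whose processed flag is still unset
    // and enqueue them in batches of 100 for the stay-detection-queue.
    @Scheduled(fixedRate = 10 * 60 * 1000)
    public void enqueueUnprocessedPoints() {
        List<RawLocationPoint> unprocessed = repository.findByProcessedFalse();
        for (int start = 0; start < unprocessed.size(); start += BATCH_SIZE) {
            List<RawLocationPoint> batch = unprocessed.subList(
                    start, Math.min(start + BATCH_SIZE, unprocessed.size()));
            rabbitTemplate.convertAndSend("stay-detection-queue", batch);
        }
    }
}
```

In a setup like this, once the consumer of the stay-detection-queue marks each point's processed flag, later runs of the job find nothing new and the queue drains back to idle on its own.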