FlapKap@feddit.dk 1 year ago
According to the article, the linear algebra algorithm has a running time of O(log n).
cbarrick@lemmy.world 1 year ago
IAm_A_Complete_Idiot@sh.itjust.works 1 year ago
According to the benchmark in the article, it’s already way faster at n = 1000. I think you’re overestimating the cost of multiplication relative to just cutting n down logarithmically.
log_2(1000) is roughly 10, which is a hundred times less than 1000. For 2000 it would be about 11, and for 4000 about 12. Logs are crazy.
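A minimal sketch of the linear algebra approach being discussed, assuming the standard 2x2 matrix formulation (the article's exact implementation may differ): repeated squaring of the Fibonacci matrix takes O(log n) multiplications.

```python
def fib_matrix(n: int) -> int:
    """Fibonacci via 2x2 matrix exponentiation.

    Uses the identity [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]]
    with square-and-multiply, so only O(log n) matrix multiplications.
    """
    def mat_mult(a, b):
        # Plain 2x2 matrix product.
        return (
            (a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]),
        )

    result = ((1, 0), (0, 1))  # identity matrix
    base = ((1, 1), (1, 0))    # Fibonacci matrix
    while n > 0:               # loops ~log_2(n) times
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n >>= 1
    return result[0][1]        # F(n)
```

For n = 1000 the loop runs about 10 times, which is the hundred-fold reduction described above (though each step is a handful of big-integer multiplications rather than one addition).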
cbarrick@lemmy.world 1 year ago
The article is comparing to the dynamic programming algorithm, which requires reading and writing to an array or hash table (the article uses a hash table, which is slower).
The naive algorithm is way faster than the DP algorithm.
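A sketch of the two O(n) variants being contrasted, assuming "DP" means hash-table memoization as in the article and "naive" means a plain iterative loop (both names are mine, not the article's):

```python
def fib_dp(n: int) -> int:
    """Top-down DP: memoized via a hash table (dict); O(n) time, O(n) space.

    Every call pays for a hash lookup and possibly an insert.
    """
    memo = {0: 0, 1: 1}

    def go(k: int) -> int:
        if k not in memo:
            memo[k] = go(k - 1) + go(k - 2)
        return memo[k]

    return go(n)


def fib_naive(n: int) -> int:
    """Simple iterative loop: O(n) time, O(1) space.

    No hashing, no table, no recursion -- just two registers,
    which is why it beats the DP version in practice.
    """
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both do n additions, but the iterative version avoids all the hashing and memory traffic, which is the speed difference being described.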