
missfrizzle@discuss.tchncs.de 1 week ago

HOG and Hough transforms bring me back. Honestly, I'm glad I don't have to mess with them anymore, though.

I always found SVMs a little shady because you had to pick a kernel. We spent time talking about the different kernels you could pick, but the standard options all felt pretty small and/or contrived. I guess with NNs you pick the architecture and activation functions, but there didn't seem to be an analogue in SVM land for "stack more layers and fatten the embeddings." Though I was only an undergrad.
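For anyone who hasn't touched SVMs since then, here's a minimal sketch (assuming scikit-learn and a toy two-moons dataset, both my choices, not anything from the thread) of what "picking a kernel" looks like in practice. The kernel is roughly the whole "architecture"; there's no dial for going deeper:

```python
# Minimal sketch (assuming scikit-learn): swap the kernel, keep everything
# else the same. The dataset and hyperparameters here are arbitrary.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)  # C/gamma/degree left at defaults
    print(f"{kernel:>8}: test accuracy = {clf.score(X_te, y_te):.3f}")
```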

Do you really think NNs won purely because of large datasets and GPU acceleration? I feel like those could have applied to SVMs too. I thought the real wins were solving vanishing gradients with ReLU; going deeper instead of throwing everything into a 3- or 5-layer MLP; preventing overfitting; making the loss landscape less prone to bad local minima; and letting hierarchical feature extraction be learned organically.
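To make the vanishing-gradient point concrete, here's a toy sketch (assuming PyTorch; the depth, width, and init choices are arbitrary assumptions of mine, not anyone's real setup) comparing how much gradient survives back to the first layer of a deep stack under sigmoid vs. ReLU:

```python
# Toy illustration (assuming PyTorch): gradient norm reaching the first
# layer of a deep MLP under sigmoid vs. ReLU activations.
import torch
import torch.nn as nn

def first_layer_grad_norm(act_name: str, depth: int = 30, width: int = 64) -> float:
    torch.manual_seed(0)
    layers = []
    for _ in range(depth):
        lin = nn.Linear(width, width)
        if act_name == "relu":
            # Kaiming init, the usual pairing for ReLU stacks
            nn.init.kaiming_normal_(lin.weight, nonlinearity="relu")
        else:
            # Xavier init, the usual pairing for sigmoid/tanh
            nn.init.xavier_normal_(lin.weight)
        layers += [lin, nn.ReLU() if act_name == "relu" else nn.Sigmoid()]
    net = nn.Sequential(*layers)
    net(torch.randn(8, width)).sum().backward()
    return net[0].weight.grad.norm().item()

# sigmoid' <= 0.25, so the gradient shrinks at every layer and comes out
# near zero; ReLU passes gradients through unsquashed on its active side.
print("sigmoid:", first_layer_grad_norm("sigmoid"))
print("relu:   ", first_layer_grad_norm("relu"))
```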
