It’s horrendously slow, unusable imo. With the larger DeepSeek distilled models I tried that didn’t fit into VRAM, you could easily wait five minutes for it to finish writing its essay, compared to just a few seconds when the model does fit. But that’s with an RTX 3070 Ti, which probably isn’t something the average ChatGPT user has lying around.
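For anyone wondering why the cutoff is so sharp: once the quantized weights exceed VRAM, layers get offloaded to system RAM and inference speed falls off a cliff. Here's a rough back-of-envelope sketch (my assumptions: ~0.5 bytes/parameter for Q4 quantization and ~20% overhead for KV cache and buffers; real numbers vary by quant and context length):

```python
# Rough check: do a model's quantized weights fit in GPU VRAM?
# Assumptions (not exact): Q4 quantization ~ 0.5 bytes per parameter,
# plus ~20% overhead for KV cache and runtime buffers.

def fits_in_vram(params_billion: float, vram_gb: float = 8.0,
                 bytes_per_param: float = 0.5, overhead: float = 1.2) -> bool:
    needed_gb = params_billion * bytes_per_param * overhead
    return needed_gb <= vram_gb

# An RTX 3070 Ti has 8 GB of VRAM:
print(fits_in_vram(7))    # 7B distill at Q4: ~4.2 GB, fits
print(fits_in_vram(14))   # 14B distill: ~8.4 GB, spills into system RAM
print(fits_in_vram(32))   # 32B distill: ~19 GB, not even close
```

So on an 8 GB card the 7B distill runs at GPU speed, while the 14B and up end up partially on the CPU, which matches the seconds-vs-minutes gap.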