Comment on Hundreds of smartphone apps are monitoring users through their microphones
CeeBee_Eh@lemmy.world 10 hours ago
I don’t have any questions. This is something I know a lot about at a very technical level.
The difference between one wake word and one thousand is marginal at most. At the hardware level the mic is still listening non-stop, and the audio is still being processed. It *has* to do that, otherwise it wouldn’t be able to listen for even one word. And from there it doesn’t matter if it’s one word or 10k. It’s still processing the audio data through a model.
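To make that concrete, here’s a minimal sketch of what that always-on loop looks like. The `detect` function is a hypothetical stand-in for the actual wake-word model, the frame size is just illustrative, and using the `sounddevice` library is my choice for the example, not a claim about what any particular app does:

```python
# Minimal sketch of an always-on wake-word loop.
# Every frame the mic produces goes through the model, whether the
# vocabulary is one word or ten thousand.

import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000
FRAME_SAMPLES = 320  # 20 ms frames at 16 kHz


def detect(frame: np.ndarray) -> bool:
    """Hypothetical wake-word model; real systems run a small NN here."""
    return False


with sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                    blocksize=FRAME_SAMPLES, dtype="float32") as stream:
    while True:
        frame, _ = stream.read(FRAME_SAMPLES)  # the mic never stops feeding data
        if detect(frame[:, 0]):
            print("wake word heard")
```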
And that’s the key part: it doesn’t matter if the model has one output or thousands, the data still bounces through each layer of the network. The processing requirements are exactly the same (assuming the exact same model).
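As a back-of-the-envelope illustration (all layer sizes below are hypothetical, loosely shaped like a tiny keyword-spotting convnet), you can count the multiply-accumulates and see how little the output layer contributes:

```python
# Compare total multiply-accumulates (MACs) per inference for a small
# convolutional keyword spotter with a 1-word head vs a 1000-word head.
# Every layer size here is made up for illustration.

def conv_macs(out_h, out_w, out_ch, in_ch, k_h, k_w):
    """MACs for one 2D convolution layer."""
    return out_h * out_w * out_ch * in_ch * k_h * k_w


# Tiny KWS front end: 49 frames x 40 MFCC features into a few convs.
backbone = (
    conv_macs(25, 20, 64, 1, 10, 4)    # first conv over the spectrogram
    + conv_macs(13, 10, 64, 64, 3, 3)  # second conv
    + conv_macs(7, 5, 64, 64, 3, 3)    # third conv
)
embedding = 64  # pooled feature vector feeding the output layer

for n_words in (1, 1000):
    head = embedding * n_words
    total = backbone + head
    print(f"{n_words:>4} words: {total:,} MACs "
          f"({head / total:.2%} spent in the output layer)")
```

The convolutional backbone dominates: going from 1 word to 1000 changes total compute by well under 1% in this sketch.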
This is the part you simply do not understand.
LoveSausage@discuss.tchncs.de 9 hours ago
Seems you don’t. You started your line with a question and continued to do so despite being given answers repeatedly. Is there some kink of roleplaying an AI dev? You don’t really seem to have done your homework for that.
CeeBee_Eh@lemmy.world 7 hours ago
That’s more applicable for something like a Google Mini. A phone is powerful enough, especially with the NPU most phones have now, to perform that detection efficiently without stepping up the CPU’s power state.
Is there some kink on your side in pretending you’re smart? You have no idea who I am or what I know.
Again, you’re showing your lack of knowledge here. A model doesn’t use more power whether it’s trained on one class or a hundred. The number of cycles is the same in both instances.
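You can sanity-check that empirically. A quick sketch (numpy standing in for a real inference runtime, with made-up layer sizes): run the same backbone with a 1-class head and a 100-class head and time both.

```python
# Time the same fixed backbone with different-sized output heads.
# Almost all the work is in the shared layers, so the timings land
# within noise of each other.

import time

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(512).astype(np.float32)
backbone = [rng.standard_normal((512, 512)).astype(np.float32) for _ in range(4)]


def infer(head):
    h = x
    for w in backbone:
        h = np.maximum(h @ w, 0.0)  # dense layer + ReLU
    return h @ head                 # output layer


for n_classes in (1, 100):
    head = rng.standard_normal((512, n_classes)).astype(np.float32)
    start = time.perf_counter()
    for _ in range(1000):
        infer(head)
    elapsed = time.perf_counter() - start
    print(f"{n_classes:>3} classes: {elapsed * 1000:.1f} ms for 1000 runs")
```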
It’s usually smart speakers that have a low-powered chip that processes the wake word and then fires up a more powerful chip. That doesn’t exist in phones.