Encrypting the transmission doesn’t do much if every app installation contains access credentials that can be extracted or sniffed.
Encrypt the credentials, then? Or an OAuth pipeline, perhaps? Automated temporary private-key generation for each upload (that sounds unrealistic, to be fair)? Could credentialing be used for intermediary storage that's encrypted on that server and then decrypted on the database host?
Clearly my utter “noobishness” is showing, but at least it’s triggering a slight urge to casually peruse modern WebSec production workflows. I am but a humble DNNs-for-parametric-CAD-modelling (lots of Linear Algebra, PyTorch, and Grasshopper for Rhino) researcher. I am far removed from customer-facing production environments, and it shows.
Any recommendations for literature or articles on how engineers solve these problems in a "best practices" way? I suppose I could just look it up, but I thought I'd ask.
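For what it's worth, the "temporary key per upload" idea isn't unrealistic at all — object stores like S3 call these pre-signed URLs. Here's a toy sketch of just the signing idea behind them (not a real S3 client; the secret, key names, and function names are all made up): the server mints a short-lived HMAC token scoped to one object key, so the real secret never ships inside the app.

```python
# Toy illustration of the signing idea behind pre-signed upload URLs.
# The server mints a short-lived token for ONE object key; the storage
# side verifies it without the client ever holding the real secret.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # lives only on the server, never in the app


def mint_upload_token(object_key: str, ttl_seconds: int = 300) -> str:
    """Return 'expiry:signature' authorizing a single upload of object_key."""
    expiry = str(int(time.time()) + ttl_seconds)
    msg = f"{object_key}:{expiry}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{expiry}:{sig}"


def verify_upload_token(object_key: str, token: str) -> bool:
    """Check expiry and signature before accepting the upload."""
    expiry, sig = token.split(":")
    if int(expiry) < time.time():
        return False  # token expired
    msg = f"{object_key}:{expiry}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)  # timing-safe comparison


token = mint_upload_token("uploads/photo.jpg")
print(verify_upload_token("uploads/photo.jpg", token))  # True
print(verify_upload_token("uploads/other.jpg", token))  # False: wrong key
```

Real providers bake the expiry, key, and permitted HTTP method into the signed URL itself, but the security property is the same: a leaked token only authorizes one object for a few minutes.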
Chulk@lemmy.ml 8 months ago
Wouldn’t some sort of proxy in between the bucket and the client app solve this problem? I feel like you could even set up an endpoint on your backend that manages the upload. In other words, why is it necessary for the client app to connect directly with the bucket?
Maybe I’m not understanding the gist of the problem
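No, that's exactly it. The shape of that proxy endpoint, with the session store and bucket stubbed out as dicts so it runs standalone (all names here are hypothetical, not any real framework's API):

```python
# Sketch of the backend-managed upload: the client never talks to the
# bucket. It POSTs to the backend, which authenticates the user, validates
# the payload, and only then writes to the bucket with SERVER-side
# credentials. SESSIONS and BUCKET are dict stand-ins for real services.
SESSIONS = {"token-abc": "alice"}  # stand-in for a real session store
BUCKET: dict[str, bytes] = {}      # stand-in for the real bucket client


def handle_upload(auth_token: str, filename: str, body: bytes) -> int:
    """Backend /upload endpoint; returns an HTTP-style status code."""
    user = SESSIONS.get(auth_token)
    if user is None:
        return 401  # rejected before the bucket is ever touched
    if len(body) > 10 * 1024 * 1024:
        return 413  # validation also lives server-side
    BUCKET[f"{user}/{filename}"] = body  # server creds, not the client's
    return 201


print(handle_upload("token-abc", "cat.png", b"\x89PNG..."))  # 201
print(handle_upload("stolen-token", "cat.png", b"..."))      # 401
```

The point being: the only credential the client ever holds is its own session token, which is revocable per-user, instead of a bucket key shared by every installation.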
zqps@sh.itjust.works 8 months ago
Exactly, it's not necessary. It's bad/lazy design. You don't expose the DB storage directly; you expose a frontend that handles all the authentication and validation before touching the DB on the backend. That's the normal client–server–database architecture.
nickwitha_k@lemmy.sdf.org 8 months ago
Yeah. You also landed on the correct thought process for security. Cloud providers will let you make datastores public, but that's like handing over a revolver with an unknown number of live chambers and saying "Have fun playing Russian roulette! I hope you win." Making any datastore public-facing without an API abstraction to control authN and authZ isn't just bad practice, it's stupid practice.