dpkonofa@lemmy.world 10 months ago
They did. They had MFA available and these users chose not to enable it. Every 23andMe account is prompted to set up MFA at sign-up. If people choose not to enable it and someone then gets hold of their username and password, that is not 23andMe’s fault.
lightnsfw@reddthat.com 10 months ago
There are services that check provided credentials against a dictionary of compromised ones and reject them. Off the top of my head, HIBP offers this, and Microsoft Azure and Nextcloud both do it too.
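For anyone curious, this kind of check is simple to wire up. Here’s a minimal sketch against HIBP’s public Pwned Passwords range API (the function name is mine; the services above each do something equivalent internally):

```python
import hashlib
import urllib.request

def is_pwned(password: str) -> bool:
    """Check a password against HIBP's Pwned Passwords range API.

    k-anonymity: only the first 5 hex chars of the SHA-1 hash are sent,
    never the password itself.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-example"},  # polite UA; no API key needed
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; a matching suffix means
    # the password has appeared in at least one known breach.
    return any(line.split(":")[0] == suffix for line in body.splitlines())

print(is_pwned("hunter2"))  # True — famously compromised
```

A signup or password-change flow would call something like this and reject (or at least warn on) any match.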
dpkonofa@lemmy.world 10 months ago
This assumes that the compromised credentials were made public prior to the exfiltration. In this case they weren’t, as the data was being sold privately on the dark web. HIBP, Azure, and Nextcloud would have done nothing to prevent this.
lightnsfw@reddthat.com 10 months ago
Yea, you’re right. Good point.
sudneo@lemmy.world 10 months ago
The fact that they did not enforce 2FA for everyone (mandatory, not merely available as a feature) is their responsibility. You are handling super sensitive data, and credential stuffing is a low-complexity, high-likelihood attack.
Similarly, they probably did not enforce complexity requirements on passwords (making an educated guess here), or at least not sufficiently, which is also their fault.
Regarding the last bit, it might not have helped against this specific breach, but we don’t know that. There are companies that offer threat intelligence services and buy breached data specifically to power this kind of service.
Anyway, the general point I want to make is simple: if the only defense you have against a known attack like this is a user choosing a strong and unique password, you don’t have sufficient controls.
dpkonofa@lemmy.world 10 months ago
I guess we just have different ideas of responsibility. It was 23andMe’s responsibility to offer MFA, and they did. It was the user’s responsibility to choose secure passwords and enable MFA and they didn’t.
sudneo@lemmy.world 10 months ago
My idea is definitely biased by the fact that I am a security engineer by trade. I believe a company is ultimately responsible for the security of its users, even when the threat is the users’ own behavior. The company is the one that can afford a security department, one competent about the attacks its users are exposed to and able to mitigate them (to a certain extent), and that’s why you enforce things.
Very often companies use “ease of use” or “users don’t like it” to justify the absence of security measures such as enforced 2FA. However, this is their choice: they prioritize not pissing off a (potentially small) percentage of users over more security for all users, especially the less proficient ones. It is a business choice they need to be held accountable for.
I also want to stress that, despite being mostly useless, various compliance standards do require measures that protect users who pick simple or reused passwords. That’s why complexity requirements are sometimes mandated, as is trivial brute-force protection with a lockout period (most gambling licenses, for example, require both, and companies that don’t enforce them cannot operate in that market). Preventing credential stuffing is no different, and if we look at OWASP’s recommendations, it’s clear that enforcing MFA is the way to go, even if only in a risk-based way that doesn’t trigger on every login, which would have worked in this case.
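To make the risk-based part concrete, here’s a toy sketch of step-up MFA. The policy and field names are invented (not 23andMe’s, and not OWASP’s verbatim); the point is that a stolen password alone doesn’t clear a risky login:

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool       # device cookie/fingerprint seen on this account before
    usual_location: bool     # geo-IP roughly matches recent successful logins
    password_breached: bool  # credential appears in a known breach corpus

def requires_second_factor(ctx: LoginContext) -> bool:
    """Skip the second factor only when every risk signal is clean."""
    return not (ctx.known_device and ctx.usual_location and not ctx.password_breached)

# A credential-stuffing bot can fake the location (as happened here) but
# won't have the device history, so it still hits the second factor:
print(requires_second_factor(
    LoginContext(known_device=False, usual_location=True, password_breached=False)
))  # True
```

Under a policy like this, MFA stays out of the way for regular users on their own devices but still blocks logins that have nothing but a correct password going for them.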
Hard disagree. The company, i.e. the data processor, is the only one with a full understanding of the data (sensitivity, amount, etc.) and a security department. That’s the entity that needs to understand which threat actors its users face and implement controls appropriately. Would you trust a bank that let you log in and make transfers with just a username and password, with no requirements whatsoever on the password and no brute-force prevention?
dpkonofa@lemmy.world 10 months ago
This wasn’t a brute-force attack, though. Even if they had brute-force detection (and I’m not sure whether they do or not), it would have done nothing here, because nothing was brute-forced in a way that detection would have caught. The attempts were spread out over months, using bots that were local to the last good login location. That’s the primary issue here: the logins looked legitimate. It wasn’t until after the exposure that they knew the logins weren’t, and that was because of other signals 23andMe obviously had in place (I’m guessing usage patterns or automation detection).
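For what it’s worth, the kind of usage-pattern signal you’re guessing at can be very simple. A toy post-hoc detector (threshold and event format entirely made up) that flags sessions enumerating more profiles than a human plausibly would:

```python
from collections import Counter

PROFILE_VIEWS_PER_HOUR_LIMIT = 200  # invented threshold, for illustration only

def flag_scraping_sessions(events):
    """events: iterable of (session_id, hour_bucket) profile-view records.

    Returns the session IDs whose hourly view count exceeds the limit.
    """
    views = Counter(events)
    return {sid for (sid, hour), count in views.items()
            if count > PROFILE_VIEWS_PER_HOUR_LIMIT}

demo = [("scraper", 0)] * 500 + [("human", 0)] * 3
print(flag_scraping_sessions(demo))  # {'scraper'}
```

A signal like that only fires after data has already started leaving, though, which is exactly why the upstream MFA argument matters.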