Servers: one. No need to make the log a distributed system; CT itself is a distributed system.
The uptime target is 99% over three months, which allows for nearly 22 h of downtime. That’s enough headroom for more than three motherboard failures per month.
CPU and memory: whatever, as long as it’s ECC memory. Four cores and 2 GB will do.
Bandwidth: 2–3 Gbps outbound.
3–5 TB of usable redundant filesystem space on SSD, or
3–5 TB of S3-compatible object storage plus 200 GB of cache on SSD.
People: at least two. The Google policy requires two contacts, and generally, who wants to carry a pager alone?
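The “nearly 22 h” figure follows from simple arithmetic: a 99% target over a 90-day quarter leaves 1% of that window as a downtime budget. A quick sketch:

```python
# Downtime budget implied by a 99% uptime target over a quarter.
HOURS_PER_DAY = 24
QUARTER_DAYS = 90  # roughly three months

downtime_budget_hours = QUARTER_DAYS * HOURS_PER_DAY * 0.01
print(f"Allowed downtime per quarter: {downtime_budget_hours:.1f} h")
# 90 * 24 * 0.01 = 21.6 h total, i.e. about 7.2 h per month
```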
Seems beyond your typical homelab self-hoster, except in the countries that have 5 Gbps symmetric home broadband.
If anyone can sneak 2–3 Gbps outbound past their employer, I imagine the rest is trivial.
Altho… “at least 2 [people]” isn’t the typical self-hosting setup.
Moonrise2473@feddit.it 8 months ago
But your endpoints are already available to everyone with just an nslookup.
Maybe it’s more about the permanent history of it: if you run something like “radarr.example.com”, you’d have no plausible deniability if you’re sued and the CT logs are presented as proof of your wrongdoing.
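That history is trivially searchable: crt.sh exposes a JSON endpoint over the CT logs, so anyone can pull every logged name under a domain. A rough sketch (the `example.com` domain and the helper names are illustrative; the `%.domain` pattern and `name_value` field are crt.sh conventions):

```python
import json
import urllib.parse
import urllib.request

def crtsh_query_url(domain: str) -> str:
    """Build a crt.sh JSON query URL covering a domain and its subdomains."""
    pattern = f"%.{domain}"  # crt.sh wildcard: matches any subdomain
    return "https://crt.sh/?" + urllib.parse.urlencode(
        {"q": pattern, "output": "json"}
    )

def logged_names(domain: str) -> set[str]:
    """Fetch all names ever placed in a CT-logged cert (network call)."""
    with urllib.request.urlopen(crtsh_query_url(domain)) as resp:
        entries = json.load(resp)
    # Each entry's name_value is a newline-separated list of SANs.
    return {name for e in entries for name in e["name_value"].splitlines()}

print(crtsh_query_url("example.com"))
# https://crt.sh/?q=%25.example.com&output=json
```

So once “radarr.example.com” appears in a certificate, it is in the public record for good.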
xinayder@infosec.pub 8 months ago
With Encrypted Client Hello you can get some more privacy when using certificates for wildcard domains, IIRC.
towerful@programming.dev 8 months ago
Not if you use wildcard DNS records.
Orygin@sh.itjust.works 8 months ago
Not if you run a wildcard CNAME for your subdomains, right?
Like, I have *.mydomain.com pointing to my server, and there a reverse proxy serves a different backend depending on the hostname.
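The reason this helps: the certificate only ever names `*.mydomain.com`, so individual hostnames never reach the CT logs. Per RFC 6125, such a wildcard covers exactly one left-most DNS label, which can be sketched as (the function name is mine):

```python
def wildcard_covers(pattern: str, hostname: str) -> bool:
    """Check whether a wildcard cert name covers a hostname.

    A wildcard matches exactly one left-most DNS label (RFC 6125):
    *.mydomain.com covers radarr.mydomain.com, but not
    mydomain.com itself, nor a.b.mydomain.com.
    """
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    suffix = pattern[1:].lower()  # ".mydomain.com"
    host = hostname.lower()
    return (
        host.endswith(suffix)
        and len(host) > len(suffix)          # the label is non-empty
        and "." not in host[: -len(suffix)]  # exactly one extra label
    )

print(wildcard_covers("*.mydomain.com", "radarr.mydomain.com"))  # True
print(wildcard_covers("*.mydomain.com", "a.b.mydomain.com"))     # False
print(wildcard_covers("*.mydomain.com", "mydomain.com"))         # False
```

So CT only ever shows the wildcard; which services live under it stays between you and your reverse proxy (modulo plaintext SNI, which is where ECH comes in).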