# kolide
Scale question for the group - I'm looking at the possibility of deploying Fleet to a decent-size nonprofit organization. It's all napkin architecture right now, so I'm thinking of putting it in AWS behind a load balancer that fronts an elastic pool of EC2 instances running Fleet. The databases would be Aurora (MySQL-compatible) and ElastiCache (Redis-compatible). Is there any guidance out there on sizing? Or ratios of # of queries to hosts to instances of Fleet with CPU XX and RAM YYYY?
We have a similar deployment in GCP, and with 1 vCPU and 3.75GB RAM we are seeing <1% CPU average with 30 hosts enrolled thus far.
I've got a fairly small deployment at the moment in AWS, configured exactly as you describe. I think I'm running two t3.micros for Fleet, a t2.small for the Aurora DB, and a t2.micro for ElastiCache. With 60 hosts, I'm at less than 2% CPU on the Fleet instances; the DB hovers around 10% CPU (which seems to be roughly the baseline even without any active DB connections), and ElastiCache CPU use is similarly low. I've also increased the checkin intervals slightly for log submissions and config refresh from the base example configs.
Not a great indication of what it would look like with substantially more hosts, but so far everything has been pretty light on server resource requirements
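For context, the checkin intervals mentioned above correspond to osquery's TLS flags on the enrolled hosts. A sketch of what that tuning might look like in an osquery flagfile (the hostname and the exact values here are illustrative, not the ones from my deployment):

```
# Illustrative osquery flagfile tuning -- values are examples only
--tls_hostname=fleet.example.org
# How often each host pulls its config from the Fleet server (seconds)
--config_tls_refresh=300
# How often each host submits buffered logs to the server (seconds)
--logger_tls_period=60
```

Longer intervals mean fewer checkins per host, which is the main lever for keeping server CPU low as host counts grow, at the cost of slower config propagation and log delivery.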