# fleet
v
Hi Folks! How close to reality are the examples of AWS Fargate configuration for deploying FleetDM in AWS (https://fleetdm.com/docs/deploying/reference-architectures#example-configuration-breakpoints)? For example, the cost estimate for up to 1000 hosts: https://calculator.aws/#/estimate?id=ae7d7ddec64bb979f3f6611d23616b1dff0e8dbd The seemingly underpowered instances for RDS / Redis confuse me, since my DB currently uses about 27 GB of RAM on bare-metal servers (with up to 500 connected hosts).
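For context, a rough back-of-the-envelope sketch based only on the numbers above (500 hosts, ~27 GB of DB RAM). It assumes, as a simplification not stated in the thread, that the observed bare-metal memory is mostly MySQL's InnoDB buffer pool, which tends to grow to fill whatever it is allowed, so it is an upper bound rather than a requirement:

```python
# Back-of-the-envelope sketch using only the numbers mentioned in the message above.
# Assumption (not from the thread): resident DB memory on bare metal largely reflects
# how the buffer pool was sized, not what Fleet strictly needs for this host count.

current_hosts = 500
current_db_ram_gb = 27

per_host_gb = current_db_ram_gb / current_hosts   # ~0.054 GB (~55 MB) per host
naive_1000_host_gb = per_host_gb * 1000           # ~54 GB if scaling were strictly linear

print(f"Observed: ~{per_host_gb * 1024:.0f} MB of DB RAM per host")
print(f"Naive linear extrapolation to 1000 hosts: ~{naive_1000_host_gb:.0f} GB")
# A linear extrapolation like this is why the reference-architecture RDS instances
# can look "too weak" at first glance, even if they hold up in load testing.
```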
m
I know @Jarod Reyes just did a deployment, not sure if he did Fargate.
> The seemingly underpowered instances for RDS / Redis confuse me, since my DB currently uses about 27 GB of RAM on bare-metal servers (with up to 500 connected hosts).
I wonder if this is coming from Redis cache settings in our Fargate examples... @Benjamin Edwards and @Kathy Satterlee, any ideas?
k
Hey @Victor Chaplygin! Just to be clear, it sounds like you're looking to move from your local deployment, with the memory allocations you described, to the Fargate configuration we recommend. If that's the case, then yes, in our load testing we have found that the allocations we recommend there are a good starting point.
v
@Kathy Satterlee thank you, but what does "…a good starting point…" mean? From that wording it's not clear to me whether Fargate with the proposed infrastructure (which doesn't look very powerful) can actually handle up to 1000 osquery hosts.
b
@Victor Chaplygin in our testing, Fleet can be run very lean. It truly depends on usage, however. Things that can directly affect scale:
• number of scheduled queries
• frequency of scheduled queries
• volume of results generated from scheduled and/or live queries
• host check-in interval (basically, how fast hosts respond to live queries)
• number of hosts enrolled
All of these items can greatly impact the overall performance of a Fleet deployment, but in general, running < 1000 hosts can be done very efficiently, all things considered (see the rough sketch below).
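To make those factors concrete, here is a minimal sketch of how they combine into rough load figures. Every parameter value below is a hypothetical example, not a Fleet recommendation or a number from this thread:

```python
# Sketch of how the factors listed above translate into rough per-second load.
# All values are hypothetical placeholders for illustration only.

hosts = 1000                 # number of hosts enrolled
scheduled_queries = 20       # number of scheduled queries
query_interval_s = 3600      # frequency: each scheduled query runs hourly
avg_result_rows = 50         # volume: rows returned per query execution
avg_row_bytes = 300          # rough serialized size of one result row
checkin_interval_s = 10      # how often hosts check in (affects live-query latency)

results_per_second = hosts * scheduled_queries / query_interval_s
result_bytes_per_second = results_per_second * avg_result_rows * avg_row_bytes
checkins_per_second = hosts / checkin_interval_s

print(f"~{results_per_second:.1f} scheduled-query executions/sec")
print(f"~{result_bytes_per_second / 1024:.0f} KiB/sec of result data to ingest")
print(f"~{checkins_per_second:.0f} host check-ins/sec")
# Doubling the query count or halving the interval doubles the ingest load,
# which is why these knobs often matter more than the raw host count.
```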
k
As @Benjamin Edwards said, there are a lot of variables beyond the total number of hosts that come into play, so we've identified a baseline configuration that tends to work well. I'd generally recommend starting from the configuration breakpoints, then evaluating performance over time and scaling up as needed. Since you do have an existing deployment, you could also take a look at historical CPU and memory usage over time to see how much you're actually using and base your allocations on that.
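One way to check that on the existing bare-metal database is to compare how much of MySQL's buffer pool is actually holding data against how much was simply allocated. This is only a sketch: it assumes a MySQL backend and the PyMySQL driver, and the connection details are placeholders.

```python
# Sketch: compare InnoDB buffer pool allocation vs. data actually cached.
# Assumes MySQL and the PyMySQL driver; host/user/password are placeholders.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="fleet", password="...", database="fleet")
with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
    allocated = int(cur.fetchone()[1])          # bytes reserved for the buffer pool

    cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_bytes_data'")
    in_use = int(cur.fetchone()[1])             # bytes of the pool holding data pages
conn.close()

print(f"Buffer pool allocated: {allocated / 2**30:.1f} GiB")
print(f"Buffer pool holding data: {in_use / 2**30:.1f} GiB")
# If the second number is far below the first, the 27 GB figure mostly reflects
# how that server was tuned rather than what Fleet needs for ~500 hosts.
```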