...
- Increase the speed of pushing metric data points after collection:
  - Distribute the push request work amongst the hypervisor actors, removing the single push-actor bottleneck.
  - Improve reliability by distributing the push request queue amongst the hypervisor actors: in a congested environment the single queue was always full, and the oldest requests could be lost.
  - Improve performance by splitting large push requests into smaller batches.
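The splitting and distribution steps above can be sketched as follows. This is a minimal illustration, not the actual push-actor code: the function names, the 500-point batch limit, and the use of a CRC hash to pin a metric to a worker are all assumptions.

```python
import zlib
from itertools import islice

def split_push_request(datapoints, max_batch=500):
    """Split one large push request into batches of at most max_batch points."""
    it = iter(datapoints)
    while batch := list(islice(it, max_batch)):
        yield batch

def assign_worker(metric_name, n_workers):
    """Pick a worker for a metric; a stable hash keeps each metric's
    data points ordered on a single worker instead of one shared queue."""
    return zlib.crc32(metric_name.encode()) % n_workers
```

Hashing by metric name means no worker becomes a global bottleneck, while points for the same metric are still delivered in order.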
- Optimize the configuration of the KairosDB incoming queue processor:
  - Increase 'batch_size' (requires changing the Cassandra configuration); it should be dimensioned according to the expected number of VMs, metrics, and data points per minute.
  - Leave 'min_batch_size' and 'min_batch_wait' unchanged: only delay 0.5 s if there are fewer than 100 data points.
  - Increase 'memory_queue_size' to avoid disk usage.
  - Increase 'thread_count' to allow more concurrent requests to Cassandra.
  - Upgrade to the new KairosDB version, which uses CQL instead of Thrift.
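A possible shape of this tuning in `kairosdb.properties` (property names as documented for KairosDB 1.x; the values are illustrative placeholders, not recommendations, and should be checked against the deployed version):

```properties
# Incoming queue processor tuning -- values are placeholders
kairosdb.queue_processor.batch_size=16000
# Unchanged defaults: wait at most 500 ms when fewer than 100 points are queued
kairosdb.queue_processor.min_batch_size=100
kairosdb.queue_processor.min_batch_wait=500
# Larger in-memory queue so the processor does not spill to disk
kairosdb.queue_processor.memory_queue_size=100000
# More ingest threads writing to Cassandra
kairosdb.ingest_executor.thread_count=10
```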
- Reduce the response time of the Emmett request that pushes metric data points. All entities handled by Emmett (metrics, alarms, and alerts) can have tags and can be looked up by those tags:
  - Speed up retrieval of metrics, alarms, and alerts from the database by decoupling tags from the entities; only retrieve tags for create and search operations.
  - Speed up the push process by removing unnecessary database transactions.
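The tag decoupling could look roughly like this. It is a minimal in-memory sketch: the table layout, the `Metric` type, and the function names are assumptions for illustration, not Emmett's actual schema or API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical in-memory stand-ins for the entity and tag tables.
METRICS = {1: {"name": "cpu.load"}}
METRIC_TAGS = {1: {"host": "vm-01", "dc": "eu"}}  # tags live in their own table

@dataclass
class Metric:
    id: int
    name: str
    tags: Optional[dict] = None  # None means "tags not loaded"

def get_metric(metric_id):
    """Hot path (push): load the entity only, no join against the tags table."""
    row = METRICS[metric_id]
    return Metric(metric_id, row["name"])

def search_by_tags(wanted):
    """Search path: the tags table is consulted only here."""
    ids = [mid for mid, tags in METRIC_TAGS.items()
           if wanted.items() <= tags.items()]
    return [Metric(mid, METRICS[mid]["name"], METRIC_TAGS[mid]) for mid in ids]
```

The point of the split is visible in `get_metric`: the push path never touches tag storage, which is what removes the extra queries from the hot path.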
- Add a cache to speed up the get-metric request. This can be the local cache, which is disabled by default, or a distributed cache, which should be used for load-balanced instances.
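A minimal sketch of such a cache, assuming a simple get-or-load interface with a TTL (the class name, TTL, and interface are hypothetical, chosen only to show the disabled-by-default behaviour; a distributed cache such as Redis would expose the same interface for load-balanced instances):

```python
import time

class LocalMetricCache:
    """Hypothetical per-instance cache for the get-metric request."""

    def __init__(self, enabled=False, ttl=60.0):
        self.enabled = enabled   # disabled by default, as in the change above
        self.ttl = ttl           # seconds an entry stays fresh
        self._store = {}         # key -> (value, timestamp)

    def get(self, key, loader):
        """Return the cached value for key, calling loader(key) on a miss."""
        if not self.enabled:
            return loader(key)   # cache off: always hit the database
        entry = self._store.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]      # fresh hit, no database round trip
        value = loader(key)
        self._store[key] = (value, time.monotonic())
        return value
```

With `enabled=False` the class is a transparent pass-through, so turning the cache on is purely a configuration decision.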