...
- Increased the speed of pushing metric data points after collection:
  - Distributed the push request work amongst the hypervisor actors, removing the single push-actor bottleneck
  - Improved reliability by also distributing the push request queue amongst the hypervisor actors; in a congested environment the single shared queue was always full, so the oldest requests could be lost
  - Improved performance by splitting large push requests into smaller ones (see the sketch after this list)
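A minimal sketch of the distribution and splitting ideas above, assuming a worker-per-slot model; `DistributedPusher`, `DataPoint`, the batch threshold, and `sendPushRequest` are hypothetical stand-ins, not the actual actor implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Illustrative only: per-hypervisor workers with independent queues. */
public class DistributedPusher {
    private static final int MAX_BATCH = 500; // hypothetical split threshold
    private final ExecutorService[] workers;

    public DistributedPusher(int workerCount) {
        workers = new ExecutorService[workerCount];
        for (int i = 0; i < workerCount; i++) {
            // One single-threaded executor per slot: each has its own queue,
            // so one congested hypervisor cannot fill a shared queue and
            // force other hypervisors' oldest requests to be dropped.
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    /** Routes a push by hypervisor and splits it into bounded chunks. */
    public void push(String hypervisorId, List<DataPoint> points) {
        ExecutorService worker =
                workers[Math.floorMod(hypervisorId.hashCode(), workers.length)];
        for (int i = 0; i < points.size(); i += MAX_BATCH) {
            List<DataPoint> chunk = new ArrayList<>(
                    points.subList(i, Math.min(i + MAX_BATCH, points.size())));
            worker.submit(() -> sendPushRequest(chunk));
        }
    }

    private void sendPushRequest(List<DataPoint> chunk) {
        // Hypothetical transport call standing in for the real push request.
    }

    /** Minimal stand-in for a metric data point. */
    public record DataPoint(String metric, long timestamp, double value) {}
}
```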
- Optimized the configuration of the KairosDB incoming queue processor (an illustrative snippet follows this list):
  - Increased 'batch_size' (requires a matching change in the Cassandra configuration); it should be dimensioned to the expected number of VMs, metrics, and data points per minute
  - Left 'min_batch_size' and 'min_batch_wait' unchanged: delay ingestion by 0.5 s only when fewer than 100 data points are queued
  - Increased 'memory_queue_size' so the queue stays in memory instead of spilling to disk
  - Increased 'thread_count' to allow more concurrent requests to Cassandra
- Deployed a new KairosDB version that uses CQL instead of Thrift to talk to Cassandra
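The queue-processor settings above live in `kairosdb.properties`. A sketch with illustrative values only, not the ones actually deployed:

```properties
# Illustrative values: size 'batch_size' to the expected
# number of VMs x metrics x data points per minute.
kairosdb.queue_processor.batch_size=8000
# Cassandra rejects batches above 'batch_size_fail_threshold_in_kb'
# (cassandra.yaml), hence the matching Cassandra change.

# Defaults kept: wait 500 ms only while fewer than 100 points are queued.
kairosdb.queue_processor.min_batch_size=100
kairosdb.queue_processor.min_batch_wait=500

# Large enough that the incoming queue stays in memory rather than on disk.
kairosdb.queue_processor.memory_queue_size=2000000

# More threads allow more concurrent write requests to Cassandra.
kairosdb.ingest_executor.thread_count=20
```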
- Reduced the response time of the Emmett request that pushes metric data points. The Emmett module manages metrics, alarms, and alerts: it retrieves metric data, obtains alarm details, and requests alarm evaluation. All the entities Emmett handles (metrics, alarms, and alerts) can carry tags and can be looked up by those tags
- Increased the speed of retrieving metrics, alarms, and alerts from the database by decoupling the tags from the entities: tags are now retrieved only for create and search operations (see the first sketch below)
- Increased speed of push process by removing unnecessary database transactions
- Added a local cache implementation to improve the speed of the get-metric request (second sketch below). The cache is disabled by default, and load-balanced deployments should add a distributed cache instead
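To picture the tag decoupling, a hypothetical repository shape (all names invented for illustration, not Emmett's actual API): plain lookups skip the tag join entirely, and tags are touched only on create and tag-based search:

```java
import java.util.List;
import java.util.Map;

/** Hypothetical shapes only; not Emmett's actual API. */
interface MetricRepository {
    // Fast path: load the entity row alone, without joining the tags table.
    Metric findById(String id);

    // Tags are written when the entity is created...
    void create(Metric metric, Map<String, String> tags);

    // ...and read again only when searching by tag.
    List<Metric> searchByTags(Map<String, String> tags);
}

record Metric(String id, String name) {}
```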
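And a sketch of the kind of local get-metric cache described above (class and flag names are invented): it stays off unless explicitly enabled, and because the map is process-local, load-balanced deployments need a distributed cache so every instance sees the same entries:

```java
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Illustrative local cache; disabled unless explicitly turned on. */
public class GetMetricCache {
    private final boolean enabled;
    private final ConcurrentMap<String, Metric> cache = new ConcurrentHashMap<>();

    public GetMetricCache(boolean enabled) {
        this.enabled = enabled; // off by default in the caller's configuration
    }

    public Optional<Metric> get(String metricId) {
        return enabled ? Optional.ofNullable(cache.get(metricId)) : Optional.empty();
    }

    public void put(Metric metric) {
        if (enabled) {
            cache.put(metric.id(), metric);
        }
    }

    /** Minimal stand-in for the cached entity. */
    public record Metric(String id, String name) {}
}
```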
Related links: