A number of GitHub issues have been raised, usually about Elasticsearch being over capacity and what to do about it. This topic will explore various options and what they can or cannot solve.

Call Semaphore

As of October 2017, the Elasticsearch implementation of storage includes a Semaphore to limit requests. When Zipkin is configured to collect span data from a polling source such as Kafka or RabbitMQ, this Semaphore should be relatively transparent and not affect collection rates in any capacity. However, when using an RPC collector (such as HTTP), Zipkin can't control the ingestion rate natively. As a result, requests to store span information will be eagerly dropped in order to avoid excessively queueing results and eventually causing an OutOfMemoryError. A problem that can come up with this approach: if 200 servers send span information at the same time and the Semaphore is set to 64 requests, then even if the number of spans is small, 136 of those requests will be dropped. That much data wouldn't cause Zipkin itself to die, and probably worse, the StorageComponent being written to may also have been able to handle that load without issue. Increasing the Semaphore count seems like an obvious solution, but if Zipkin is running in a cluster then it is hard to know how to tune the Semaphore appropriately so that requests aren't excessively dropped and storage isn't overburdened.
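The 200-servers/64-permits scenario above can be sketched with a plain `java.util.concurrent.Semaphore`. This is a hypothetical illustration of the eager-drop behaviour, not Zipkin's actual collector code; it assumes every in-flight request still holds its permit when the burst arrives:

```java
import java.util.concurrent.Semaphore;

public class DropDemo {
    public static void main(String[] args) {
        // 64 permits stands in for the storage request limit described above.
        Semaphore permits = new Semaphore(64);
        int accepted = 0, dropped = 0;
        // 200 servers flush spans at the same instant. Each accepted request
        // would hold its permit until Elasticsearch responds, so none free up
        // during the burst in this simplified model.
        for (int i = 0; i < 200; i++) {
            if (permits.tryAcquire()) {
                accepted++; // request proceeds to storage
            } else {
                dropped++;  // request is eagerly dropped instead of queued
            }
        }
        System.out.println(accepted + " accepted, " + dropped + " dropped");
    }
}
```

Running this prints `64 accepted, 136 dropped`: the non-blocking `tryAcquire` is what trades queueing (and a potential OutOfMemoryError) for dropped spans.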