Queue Monitoring

Learn how to monitor your queues with Sentry for improved application performance and health.

This feature is currently in Alpha. Alpha features are still in progress and may have bugs. We recognize the irony.

Message queues make asynchronous service-to-service communication possible in distributed architectures. Queues make work that sometimes fails more resilient, and are therefore a building block for distributed applications. Examples of what queues can help with include handling webhooks from third-party APIs and running periodic tasks, such as calculating daily metrics for your users.

If you have performance monitoring enabled and your application interacts with message queue systems, you can configure Sentry to monitor their performance and health.
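For example, in a Python service this can be as simple as initializing the SDK with tracing enabled. This is a minimal sketch: the DSN is a placeholder, and a 100% sample rate is only appropriate for testing.

```python
import sentry_sdk

# Tracing must be enabled for the SDK to record the spans that power
# queue monitoring. The DSN below is a placeholder.
sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    traces_sample_rate=1.0,  # capture all transactions; lower this in production
)
```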

Queue monitoring allows you to monitor both the performance and error rates of your queue consumers and producers, providing observability into your distributed system.

The Queues page gives you a high-level overview of the destinations your application writes messages to. (You may see topic names or actual queue names, depending on the messaging system.) If you click on a destination, you'll see the Destination Summary page, which provides metrics about the specific endpoints within your application that write to or read from that destination. From there, you can dig into individual endpoints representing producers that create messages and consumers that read them, and view actual traces of the messages processed by your application.

Queue monitoring currently supports auto instrumentation for the Celery Distributed Task Queue in Python. Other messaging systems can be monitored using custom instrumentation.
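As a rough sketch, a Celery setup picked up by the auto instrumentation might look like the following. It assumes `sentry_sdk.init(...)` has been called as shown above in both the producer process and the Celery worker; the broker URL and task are placeholders.

```python
from celery import Celery

# The Sentry Python SDK enables its Celery integration automatically when
# Celery is installed (it can also be passed explicitly via
# integrations=[CeleryIntegration()] from sentry_sdk.integrations.celery).
app = Celery("tasks", broker="redis://localhost:6379/0")  # placeholder broker URL

@app.task
def calculate_daily_metrics(user_id):
    # Running this task in a worker produces a consumer span; enqueueing it
    # with calculate_daily_metrics.delay(user_id) produces a producer span.
    ...
```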

Instructions for custom instrumentation in various languages are linked below.
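For illustration, a custom-instrumented producer and consumer in Python might look roughly like the sketch below. The `queue.publish` and `queue.process` span ops and the `messaging.*` data keys follow Sentry's queue span conventions, but the queue client, destination name, and handler are hypothetical; consult the custom instrumentation instructions for your SDK for the exact attributes it expects.

```python
import sentry_sdk

# Producer side: wrap publishing a message in a span.
# queue_client and the "webhooks" destination name are hypothetical.
def publish(queue_client, message_body: bytes, message_id: str):
    with sentry_sdk.start_span(op="queue.publish", name="queue_producer") as span:
        span.set_data("messaging.message.id", message_id)
        span.set_data("messaging.destination.name", "webhooks")
        span.set_data("messaging.message.body.size", len(message_body))
        queue_client.send("webhooks", message_body)

# Consumer side: wrap processing a message in a span inside a transaction.
def process(message_body: bytes, message_id: str, retry_count: int):
    with sentry_sdk.start_transaction(op="function", name="queue_consumer_transaction"):
        with sentry_sdk.start_span(op="queue.process", name="queue_consumer") as span:
            span.set_data("messaging.message.id", message_id)
            span.set_data("messaging.destination.name", "webhooks")
            span.set_data("messaging.message.body.size", len(message_body))
            span.set_data("messaging.message.retry.count", retry_count)
            handle(message_body)  # hypothetical message handler
```

To link producer and consumer spans into a single trace, the Sentry trace headers are typically propagated alongside the message and picked up on the consumer side (for example with `sentry_sdk.continue_trace`); see the custom instrumentation instructions for details.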
