Observability and Monitoring
The AdonisJS jobs package provides built-in observability features to help you monitor and manage your job queues effectively. This includes integration with Prometheus for metrics and OpenTelemetry for tracing.
Prometheus Metrics
The package exposes a custom collector for the `@julr/adonisjs-prometheus` package, which collects metrics from your job queues.
Make sure to install and configure the `@julr/adonisjs-prometheus` package in your AdonisJS application first, then add the following collector to your `config/prometheus.ts` file:
```ts
import { defineConfig } from '@julr/adonisjs-prometheus'
import { bullmqCollector } from '@nemoventures/adonis-jobs/metrics'
import env from '#start/env'

export default defineConfig({
  // ...
  collectors: [
    // ... other collectors
    bullmqCollector(),
  ],
})
```
Once enabled, you should be able to access the metrics at the endpoint defined in your `config/prometheus.ts` file, typically `/metrics`.
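As a quick sanity check, you can fetch that endpoint and look for the BullMQ metric names. A minimal sketch, assuming the default AdonisJS port (3333) and the default `/metrics` path (adjust both to your setup):

```ts
// Quick smoke test: fetch the metrics endpoint and confirm the BullMQ metrics are exported.
// The host, port, and path below are assumptions — change them to match your application.
const response = await fetch('http://localhost:3333/metrics')
const body = await response.text()

console.log(body.includes('bullmq_job_count')) // true once the collector is registered and queues exist
```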
Exposed Metrics
The BullMQ collector exposes the following Prometheus metrics:
| Metric Name | Type | Description | Labels |
|---|---|---|---|
| `bullmq_job_count` | Gauge | Number of jobs in the queue by state | `queue`, `state` |
| `bullmq_job_processing_time` | Histogram | Processing time for completed jobs (from processing start until completion), in milliseconds | `queue`, `job` |
| `bullmq_job_completion_time` | Histogram | Completion time for completed jobs (from creation until completion), in milliseconds | `queue`, `job` |
Metric Details
- `bullmq_job_count`: Shows the current number of jobs in each state (waiting, active, completed, failed, etc.) for each queue
- `bullmq_job_processing_time`: Measures how long jobs take to process once they start running
- `bullmq_job_completion_time`: Measures the total time from job creation to completion, including waiting time
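If you want to see the raw per-state counts that feed the `bullmq_job_count` gauge, BullMQ exposes them directly on the queue. A minimal sketch using BullMQ's own API (the queue name and Redis connection details below are assumptions):

```ts
import { Queue } from 'bullmq'

// Cross-check the Prometheus gauge against BullMQ's own per-state counters.
// Queue name and Redis connection are placeholders — use your real values.
const queue = new Queue('default', { connection: { host: 'localhost', port: 6379 } })
const counts = await queue.getJobCounts('waiting', 'active', 'completed', 'failed', 'delayed')

console.log(counts) // e.g. { waiting: 3, active: 1, completed: 120, failed: 2, delayed: 0 }
await queue.close()
```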
Default Histogram Buckets
Both histogram metrics use the following default buckets (in milliseconds):
`[100, 500, 1000, 2500, 5000, 10000, 30000, 60000]`
You can customize these buckets when configuring the collector:
```ts
bullmqCollector({
  processingTimeBuckets: [50, 100, 250, 500, 1000, 5000],
  completionTimeBuckets: [100, 500, 1000, 5000, 10000, 60000],
})
```
OpenTelemetry Tracing
This package also supports OpenTelemetry for distributed tracing of your job processing. Under the hood, it uses the `bullmq-otel` package created by the BullMQ team, which automatically instruments job processing and emits spans for each job.
For every job processed, the package internally adds a `bullmq.job.name` attribute to the span, containing the job class name.
Make sure to check out the BullMQ OpenTelemetry documentation if you are new to OpenTelemetry, as it provides a quick-start guide for setting up tracing in your application.
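Keep in mind that the spans emitted by `bullmq-otel` only show up once an OpenTelemetry SDK and exporter are running in your process. As a rough sketch of a generic Node.js bootstrap (the exporter, endpoint, service name, and the idea of running it from a preload file are assumptions, not something this package prescribes):

```ts
import { NodeSDK } from '@opentelemetry/sdk-node'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'

// Minimal OpenTelemetry bootstrap: export traces over OTLP/HTTP.
// The endpoint below is the default collector address — point it at your own backend.
const sdk = new NodeSDK({
  serviceName: 'my-adonis-app',
  traceExporter: new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' }),
})

sdk.start()
```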
Job Logging
For logging inside jobs, we provide a child logger instance that automatically includes the trace ID. This logger can be accessed via the `this.logger` property inside your job classes.
You can use it like this:
```ts
import { Job } from '@nemoventures/adonis-jobs'

export default class MyJob extends Job {
  async handle() {
    // Use this.logger to log messages with trace context
    this.logger.info({ jobId: this.id }, 'Processing job')

    // Your job logic here
  }
}
```
You will quickly notice that the logs generated by this logger aren't included in your BullMQ dashboard, whether you are using QueueDash or any other dashboard. This is expected: the logger you are using here is the one provided by AdonisJS (Pino), while BullMQ provides its own logger for job events.
To fix that, you can enable the `multiLogger` option in your `config/queue.ts` file:
```ts
const queueConfig = defineConfig({
  // ...
  multiLogger: true,
})
```
Now, the logs you generate using `this.logger` will be sent to your Pino targets as well as to the BullMQ logger, and will therefore show up in your dashboard.
Logging in Services Used by Jobs
If you also want to log to both BullMQ and Pino from other service classes used by your jobs, you'll need to use dependency injection with the `@inject` decorator so that these logs are sent to BullMQ as well when `multiLogger` is enabled.
Here is a quick example of how to set this up:
```ts
import { inject } from '@adonisjs/core'
import { Job } from '@nemoventures/adonis-jobs'
import { TestService } from '#services/test_service'

@inject()
export default class MyJob extends Job {
  constructor(protected testService: TestService) {
    super()
  }

  async handle(): Promise<void> {
    // This will be logged to both Pino and BullMQ
    this.logger.info('Starting job')

    // The service can also use logging that will be properly routed
    await this.testService.processData(this.data.data)
  }
}
```
And in your service:
```ts
import { inject } from '@adonisjs/core'
import { Logger } from '@adonisjs/core/logger'

@inject()
export class TestService {
  /**
   * Here, the MultiLogger will be automatically injected when
   * this service is resolved from within a Job.
   *
   * If the service is also used outside of a Job context,
   * no worries: the logger will still be available
   * and will log to Pino targets only.
   */
  constructor(protected logger: Logger) {}

  async processData(data: string): Promise<void> {
    // This log will be sent to both Pino and BullMQ when multiLogger is enabled
    this.logger.info({ dataLength: data.length }, 'Processing data in service')

    // Your service logic here
  }
}
```