Ensuring stable operation of your application in production requires monitoring. Without monitoring, you have no insight into the internal state and health of your system and have to treat it as a black box. MicroProfile Metrics gives you the ability to not only monitor pre-defined metrics like JVM statistics but also create custom metrics, e.g. to track key figures of your business. These metrics are then exposed via HTTP, ready to be visualized on a dashboard and to trigger appropriate alarms.
Learn more about the MicroProfile Metrics specification and how to use it in this blog post.
Specification profile: MicroProfile Metrics
- Current version: 2.3
- GitHub repository
- Latest specification document
- Basic use case: Add custom metrics (e.g. timer or counter) to your application and expose them via HTTP
Default MicroProfile metrics defined in the specification
The specification defines one endpoint with three subresources to collect metrics from a MicroProfile application:
- The endpoint to collect all available metrics: /metrics
- Base metrics (pre-defined by the specification): /metrics/base
- Application metrics: /metrics/application (optional)
- Vendor-specific metrics: /metrics/vendor (optional)
So you can either use the main /metrics endpoint to get all available metrics for your application, or use one of the subresources.
The default media type for these endpoints is text/plain using the OpenMetrics format. You can also get the metrics as JSON if you set the Accept header of your request to application/json.
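To see both formats in action, you can query the endpoint with a plain JAX-RS client and switch the format via the Accept header. This is just a minimal sketch; the host and port (http://localhost:9080) are assumptions and depend on your application server:

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class MetricsClient {

  public static void main(String[] args) {
    Client client = ClientBuilder.newClient();
    try {
      // OpenMetrics (text/plain) is the default format
      String openMetrics = client
        .target("http://localhost:9080/metrics") // adjust host and port to your runtime
        .request(MediaType.TEXT_PLAIN)
        .get(String.class);

      // the same metrics as JSON, selected via the Accept header
      String json = client
        .target("http://localhost:9080/metrics")
        .request(MediaType.APPLICATION_JSON)
        .get(String.class);

      System.out.println(openMetrics);
      System.out.println(json);
    } finally {
      client.close();
    }
  }
}
```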
In the specification, you find a list of base metrics every MicroProfile Metrics compliant application server has to offer. These are mainly JVM, GC, memory, and CPU related metrics to monitor the infrastructure. The following output shows the required set of base metrics:
```json
{
  "gc.total;name=scavenge": 393,
  "gc.time;name=global": 386,
  "cpu.systemLoadAverage": 0.92,
  "thread.count": 85,
  "classloader.loadedClasses.count": 11795,
  "classloader.unloadedClasses.total": 21,
  "jvm.uptime": 985206,
  "memory.committedHeap": 63111168,
  "thread.max.count": 100,
  "cpu.availableProcessors": 12,
  "classloader.loadedClasses.total": 11816,
  "thread.daemon.count": 82,
  "gc.time;name=scavenge": 412,
  "gc.total;name=global": 14,
  "memory.maxHeap": 4182573056,
  "cpu.processCpuLoad": 0.0017964831879557087,
  "memory.usedHeap": 34319912
}
```
In addition, you are able to add metadata and tags to your metrics, as in the output above for gc.time, where name=global is a tag. You can use these tags to further separate a metric for multiple use cases.
Since MicroProfile 3.3, there is also a new (optional) base metric REST.request. It tracks the total count of requests and the total elapsed time spent at your JAX-RS endpoints. As this is an optional metric, it might not be available in every implementation.
Create a custom metric with MicroProfile Metrics
There are two ways to define a custom metric with MicroProfile Metrics: using annotations or programmatically. The specification offers, among others, the following five metric types:
- Timer: samples the duration of e.g. a method call
- Counter: monotonically counts e.g. invocations of a method
- Gauge: samples the value of an object, e.g. the current size of a JMS queue
- Meter: tracks the throughput of e.g. a JAX-RS endpoint
- Histogram: calculates the distribution of values, e.g. the variance of incoming user agents
For simple use cases, you can make use of annotations and just add them to a method you want to monitor. Each annotation offers attributes to configure tags and metadata for the metric:
```java
@Counted(name = "bookCommentClientInvocations",
  description = "Counting the invocations of the constructor",
  displayName = "bookCommentClientInvoke",
  tags = {"usecase=simple"})
public BookCommentClient() {
}
```
If your monitoring use case requires a more dynamic configuration, you can create and update your metrics programmatically. For this, you just need to inject the MetricRegistry into your class:
```java
public class BookCommentClient {

  @Inject
  @RegistryType(type = MetricRegistry.Type.APPLICATION)
  private MetricRegistry metricRegistry;

  public String getBookCommentByBookId(String bookId) {
    Response response = this.bookCommentsWebTarget.path(bookId).request().get();
    this.metricRegistry.counter("bookCommentApiResponseCode" + response.getStatus()).inc();
    return response.readEntity(JsonObject.class).getString("body");
  }
}
```
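The registry also accepts Metadata and Tag objects, which comes in handy when descriptions or tag values are only known at runtime. The following is a minimal sketch of an additional method for the class above; the metric name, tag, and method are made up for illustration:

```java
import org.eclipse.microprofile.metrics.Metadata;
import org.eclipse.microprofile.metrics.MetricType;
import org.eclipse.microprofile.metrics.Tag;

public void countResponse(int statusCode) {
  // metadata and tags are assembled at runtime instead of via annotation attributes
  Metadata metadata = Metadata.builder()
    .withName("bookCommentApiResponses")
    .withDescription("Responses of the book comment API, split by status code")
    .withType(MetricType.COUNTER)
    .build();

  this.metricRegistry
    .counter(metadata, new Tag("status", String.valueOf(statusCode)))
    .inc();
}
```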
Create a timer metric
If you want to track and sample the duration of a method call, you can make use of timers. You can add them with the @Timed annotation or using the MetricRegistry. A good use case might be tracking the time for a call to an external service:
```java
@Timed(name = "getBookCommentByBookIdDuration")
public String getBookCommentByBookId(String bookId) {
  Response response = this.bookCommentsWebTarget.path(bookId).request().get();
  return response.readEntity(JsonObject.class).getString("body");
}
```
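The same measurement works without the annotation: ask the MetricRegistry for a Timer and stop its context once the call returns. A sketch, assuming the injected MetricRegistry from the previous section (the metric name is reused for illustration):

```java
import org.eclipse.microprofile.metrics.Timer;

public String getBookCommentByBookId(String bookId) {
  Timer timer = this.metricRegistry.timer("getBookCommentByBookIdDuration");
  Timer.Context context = timer.time();
  try {
    Response response = this.bookCommentsWebTarget.path(bookId).request().get();
    return response.readEntity(JsonObject.class).getString("body");
  } finally {
    // stops the clock and records the elapsed time in the timer
    context.stop();
  }
}
```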
When using the timer metric type, you'll also get a count of method invocations and mean/max/min/percentile calculations out-of-the-box:
```json
"de.rieckpil.blog.BookCommentClient.getBookCommentByBookIdDuration": {
  "fiveMinRate": 0.000004243196464475842,
  "max": 3966817891,
  "count": 13,
  "p50": 737218798,
  "p95": 3966817891,
  "p98": 3966817891,
  "p75": 997698383,
  "p99": 3966817891,
  "min": 371079671,
  "fifteenMinRate": 0.005509550587308515,
  "meanRate": 0.003936521878196718,
  "mean": 1041488167.7031761,
  "p999": 3966817891,
  "oneMinRate": 1.1484886591525709e-24,
  "stddev": 971678361.3592016
}
```
Be aware that the JSON output reports the durations in nanoseconds, whereas the OpenMetrics format reports them in seconds:
```
getBookCommentByBookIdDuration_rate_per_second 0.003756880727820997
getBookCommentByBookIdDuration_one_min_rate_per_second 7.980095572816848E-26
getBookCommentByBookIdDuration_five_min_rate_per_second 2.4892551645230856E-6
getBookCommentByBookIdDuration_fifteen_min_rate_per_second 0.004612201440656351
getBookCommentByBookIdDuration_mean_seconds 1.0414881677031762
getBookCommentByBookIdDuration_max_seconds 3.9668178910000003
getBookCommentByBookIdDuration_min_seconds 0.371079671
getBookCommentByBookIdDuration_stddev_seconds 0.9716783613592016
getBookCommentByBookIdDuration_seconds_count 13
getBookCommentByBookIdDuration_seconds{quantile="0.5"} 0.737218798
getBookCommentByBookIdDuration_seconds{quantile="0.75"} 0.997698383
getBookCommentByBookIdDuration_seconds{quantile="0.95"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.98"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.99"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.999"} 3.9668178910000003
```
Create a simple timer
As you saw in the section above, the @Timed annotation already calculates throughput and percentile statistics. If you don't need this amount of data, e.g. to reduce bandwidth, you can fall back on @SimplyTimed.
This annotation is similar to the already mentioned timer, but solely tracks how long an invocation took to complete and does not prepare any statistics for you:
```java
@SimplyTimed(name = "getBookCommentByBookIdDuration")
public String getBookCommentByBookId(String bookId) {
  Response response = this.bookCommentsWebTarget.path(bookId).request().get();
  return response.readEntity(JsonObject.class).getString("body");
}
```
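A programmatic variant is available as well; since MicroProfile Metrics 2.3 the MetricRegistry offers a simpleTimer() method. The following sketch assumes the injected MetricRegistry from above and reuses the metric name for illustration:

```java
import org.eclipse.microprofile.metrics.SimpleTimer;

public String getBookCommentByBookId(String bookId) {
  // simpleTimer() is part of the MetricRegistry as of MicroProfile Metrics 2.3
  SimpleTimer simpleTimer = this.metricRegistry.simpleTimer("getBookCommentByBookIdDuration");
  SimpleTimer.Context context = simpleTimer.time();
  try {
    Response response = this.bookCommentsWebTarget.path(bookId).request().get();
    return response.readEntity(JsonObject.class).getString("body");
  } finally {
    // records only the elapsed time and the invocation count, no percentiles
    context.stop();
  }
}
```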
Create a counter metric
The next metric type is the simplest one: a counter. With the counter, you can track e.g. the number of invocations of a method:
```java
@Counted
public String doFoo() {
  return "Duke";
}
```
In one of the previous MicroProfile Metrics versions, you were able to decrease the counter and hence have a non-monotonic counter. As this caused confusion with the gauge metric type, the current specification version defines this metric type as a monotonic counter that can only increase.
If you use the programmatic approach, you can also define the amount by which the counter is increased on each invocation:
```java
public void checkoutItem(String item, Long amount) {
  this.metricRegistry.counter(item + "Count").inc(amount);
  // further business logic
}
```
Create a metered metric
The meter type is perfect if you want to measure the throughput of something and get the one-, five-, and fifteen-minute rates. As an example, I'll monitor the throughput of a JAX-RS endpoint:
```java
@GET
@Metered(name = "getBookCommentForLatestBookRequest", tags = {"spec=JAX-RS", "level=REST"})
@Produces(MediaType.TEXT_PLAIN)
public Response getBookCommentForLatestBookRequest() {
  String latestBookRequestId = bookRequestProcessor.getLatestBookRequestId();
  return Response.ok(this.bookCommentClient.getBookCommentByBookId(latestBookRequestId)).build();
}
```
After several invocations, the result looks like the following:
1 2 3 4 5 6 7 | "de.rieckpil.blog.BookResource.getBookCommentForLatestBookRequest": { "oneMinRate;level=REST;spec=JAX-RS": 1.1363013189791909e-24, "fiveMinRate;level=REST;spec=JAX-RS": 0.0000042408326224725166, "meanRate;level=REST;spec=JAX-RS": 0.003936520624021342, "fifteenMinRate;level=REST;spec=JAX-RS": 0.0055092085268208186, "count;level=REST;spec=JAX-RS": 13 } |
Depending on your MicroProfile Metrics implementation, tracking time and invocations for JAX-RS endpoints might be redundant, as there is now the optional base metric REST.request.
Create a gauge metric
To monitor a value that can increase and decrease over time, you should use the gauge metric type. Imagine you want to visualize the current disk size or the remaining messages to process in a queue:
```java
@Gauge(unit = "amount")
public Long remainingBookRequestsToProcess() {
  // monitor e.g. current size of a JMS queue
  return ThreadLocalRandom.current().nextLong(0, 1_000_000);
}
```
The unit attribute of the annotation is required and has to be explicitly configured. There is a MetricUnits class that you can use for common units like seconds or megabytes.
In contrast to all other metrics, the @Gauge annotation can only be used in combination with a single instance (e.g. @ApplicationScoped), as otherwise it would not be clear which instance represents the actual value. There is a @ConcurrentGauge annotation if you need to count parallel invocations.
The outcome is the current value of the gauge, which might increase or decrease over time:
```
# TYPE application_..._remainingBookRequestsToProcess_amount
application_..._remainingBookRequestsToProcess_amount 990120

// invocation of /metrics 5 minutes later

# TYPE application_..._remainingBookRequestsToProcess_amount
application_..._remainingBookRequestsToProcess_amount 11003
```
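Gauges can also be registered programmatically, which helps when the observed object is only created at runtime. A minimal sketch inside an @ApplicationScoped bean with an injected MetricRegistry; the queue field, method, and metric name are made up for illustration:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.annotation.PostConstruct;
import org.eclipse.microprofile.metrics.Gauge;

private final ConcurrentLinkedQueue<String> bookRequestQueue = new ConcurrentLinkedQueue<>();

@PostConstruct
void registerQueueSizeGauge() {
  // the lambda is evaluated every time the /metrics endpoint is scraped
  this.metricRegistry.register("remainingBookRequestsToProcess",
      (Gauge<Integer>) this.bookRequestQueue::size);
}
```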
YouTube video for using MicroProfile Metrics
Watch the following YouTube video of my Getting started with MicroProfile series to see MicroProfile Metrics in action:
You can find the source code for this blog post on GitHub.
Have fun using MicroProfile Metrics,
Phil