Easily add metrics to your code that actually help you spot and debug issues in production. Built on Prometheus and OpenTelemetry.

Autometrics is an observability micro-framework built for developers.

The Rust library provides a macro that makes it easy to instrument any function with the most useful metrics: request rate, error rate, and latency. Autometrics uses instrumented function names to generate Prometheus queries so you don’t need to hand-write complicated PromQL.

To make it easy for you to spot and debug issues in production, Autometrics inserts links to live charts directly into each function’s doc comments and provides dashboards that work out of the box. It also enables you to create powerful alerts based on Service-Level Objectives (SLOs) directly in your source code. Lastly, Autometrics writes queries that correlate your software’s version info with anomalies in the metrics to help you quickly identify commits that introduced bugs or latency.

use autometrics::autometrics;

#[autometrics]
pub async fn create_user() {
  // Now this function has metrics! 📈
}
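
The SLO-based alerts mentioned above are attached to functions in the same way. The sketch below uses the objectives API from newer releases of the crate, so the exact syntax may differ from the version documented here; the "api" objective name and target percentile are made-up values:

use autometrics::{autometrics, objectives::{Objective, ObjectivePercentile}};

// Hypothetical SLO: 99.9% of calls grouped under the "api" objective should succeed
const API_SLO: Objective = Objective::new("api")
  .success_rate(ObjectivePercentile::P99_9);

#[autometrics(objective = API_SLO)]
pub async fn create_user() {
  // Calls to this function now count toward the SLO above
}
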
See an example of a PromQL query generated by Autometrics

If your eyes glaze over when you see this, don't worry! Autometrics writes complex queries like this so you don't have to!

# Percentage of calls to the `create_user` function that return errors, averaged over 5 minute windows

sum by (function, module, commit, version) (
  rate(function_calls_count{function="create_user",result="error"}[5m])
  * on (instance, job) group_left(version, commit) last_over_time(build_info[1s])
)
/
sum by (function, module, commit, version) (
  rate(function_calls_count{function="create_user"}[5m])
  * on (instance, job) group_left(version, commit) last_over_time(build_info[1s])
)

Here is a demo of jumping from function docs to live Prometheus charts:

(demo video: autometrics.mp4)

Features

  • #[autometrics] macro instruments any function or impl block to track the most useful metrics (see the sketch just after this list)
  • 💡 Writes Prometheus queries so you can understand the data generated without knowing PromQL
  • 🔗 Injects links to live Prometheus charts directly into each function's doc comments
  • 🔍 Identify commits that introduced errors or increased latency
  • 🚨 Define alerts using SLO best practices directly in your source code
  • 📊 Grafana dashboards work out of the box to visualize the performance of instrumented functions & SLOs
  • ⚙️ Configurable metric collection library (opentelemetry, prometheus, or metrics)
  • ⚡ Minimal runtime overhead
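
For example, the macro can be applied to a whole impl block to instrument every method in it. A minimal sketch (UserService and its methods are placeholder names, not part of the library):

use autometrics::autometrics;

struct UserService;

// Applying the macro to the impl block adds request rate, error rate,
// and latency metrics for every method inside it.
#[autometrics]
impl UserService {
  pub async fn create_user(&self) {
    // ...
  }

  pub async fn delete_user(&self) {
    // ...
  }
}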

See Why Autometrics? for more details on the ideas behind autometrics.

Quickstart

  1. Add autometrics to your project:

    cargo add autometrics --features=prometheus-exporter
  2. Instrument your functions with the #[autometrics] macro

    Tip: Adding autometrics to all functions using the tracing::instrument macro

    You can use a search and replace to add autometrics to all functions instrumented with tracing::instrument.

    Replace:

    #[instrument]

    With:

    #[instrument]
    #[autometrics]

    And then let Rust Analyzer tell you which files need use autometrics::autometrics added at the top.

    Tip: Adding autometrics to all pub functions (not necessarily recommended 😅)

    You can use a search and replace to add autometrics to all public functions. Yes, this is a bit nuts.

    Use a regular expression search to replace:

    (pub (?:async )?fn.*)
    

    With:

    #[autometrics]
    $1
    

    And then let Rust Analyzer tell you which files need use autometrics::autometrics added at the top.

  3. Export the metrics for Prometheus

    For projects not currently using Prometheus metrics

    Autometrics includes optional functions to collect and encode metrics so that Prometheus can scrape them.

    In your main function, initialize the global_metrics_exporter:

    pub fn main() {
      let _exporter = autometrics::global_metrics_exporter();
      // ...
    }

    And create a route on your API (probably mounted under /metrics) that returns the following:

    use http::StatusCode;
    
    /// Export metrics for Prometheus to scrape
    pub fn get_metrics() -> (StatusCode, String) {
      match autometrics::encode_global_metrics() {
        Ok(metrics) => (StatusCode::OK, metrics),
        Err(err) => (StatusCode::INTERNAL_SERVER_ERROR, format!("{:?}", err))
      }
    }
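
    As a sketch only: one way to mount the handler above is with axum (any web framework that can return a plain string response works the same way; build_router is just an illustrative name):

    use axum::{routing::get, Router};

    // Assumes the get_metrics function defined above is in scope.
    fn build_router() -> Router {
      // Prometheus scrapes whatever body this route returns.
      Router::new().route("/metrics", get(|| async { get_metrics() }))
    }
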
    For projects already using custom Prometheus metrics

    Configure autometrics to use the same underlying metrics library that you already use by enabling the appropriate feature flag (see below).

    The autometrics metrics will be produced alongside yours.

    You do not need to use the Prometheus exporter functions this library provides (you can leave out the prometheus-exporter feature flag) and you do not need a separate endpoint for autometrics' metrics.
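
    For example, if your project already uses the prometheus crate directly, enabling the matching backend might look something like the command below. This is an assumption based on the backend feature names listed above (opentelemetry, prometheus, metrics), not a verified invocation, so check the crate documentation for the exact flags:

      cargo add autometrics --no-default-features --features=prometheus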

  4. Configure Prometheus to scrape your metrics endpoint

  5. (Optional) If you have Grafana, import the Autometrics dashboards for an overview and detailed view of the function metrics

Open in Gitpod

To see autometrics in action:

  1. Install Prometheus locally

  2. Run the complete example:

    cargo run -p example-full-api
  3. Hover over the function names to see the generated query links (like in the demo above) and view the Prometheus charts

Contributing

Issues, feature suggestions, and pull requests are very welcome!

If you are interested in getting involved, reach out on Discord or pick up one of the open issues.

License

Licensed under either the Apache-2.0 license (LICENSE-APACHE) or the MIT license (LICENSE-MIT).
