Manual Instrumentation
Libraries that want to export telemetry data using OpenTelemetry MUST only
depend on the opentelemetry-api package and should never configure or depend
on the OpenTelemetry SDK. The SDK configuration must be provided by
applications, which should also depend on the opentelemetry-sdk package or on
any other implementation of the OpenTelemetry API. This way, libraries will
obtain a real implementation only if the user application is configured for it.
For more details, check out the Library Guidelines.
Setup¶
The first step is to get a handle to an instance of the OpenTelemetry
interface.
If you are an application developer, you need to configure an instance of the
OpenTelemetrySdk as early as possible in your application. This can be done
using the OpenTelemetrySdk.builder() method. The returned
OpenTelemetrySdkBuilder instance accepts the providers for the individual
signals, tracing and metrics, which it uses to build the OpenTelemetry
instance.
You can build the providers by using the SdkTracerProvider.builder() and
SdkMeterProvider.builder() methods. It is also strongly recommended to define
a Resource instance as a representation of the entity producing the telemetry;
in particular, the service.name attribute is the most important piece of
telemetry source-identifying information.
Maven¶
See releases for a full list of artifact coordinates.
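As a sketch, a Maven configuration could import the opentelemetry-bom and declare the API and SDK artifacts; LATEST_RELEASE is a placeholder for the version you actually use:

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-bom</artifactId>
      <version>LATEST_RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-api</artifactId>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-sdk</artifactId>
  </dependency>
</dependencies>
```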
Gradle¶
See releases for a full list of artifact coordinates.
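The equivalent Gradle sketch, again with LATEST_RELEASE standing in for a real version:

```groovy
dependencies {
    implementation platform("io.opentelemetry:opentelemetry-bom:LATEST_RELEASE")
    implementation "io.opentelemetry:opentelemetry-api"
    implementation "io.opentelemetry:opentelemetry-sdk"
}
```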
Imports¶
Example¶
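A minimal setup sketch follows. The service name "my-service" is a placeholder, and no exporters are configured here; see the SDK Configuration section for span processors and exporters.

```java
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.propagation.W3CTraceContextPropagator;
import io.opentelemetry.context.propagation.ContextPropagators;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;

// A Resource describing the entity producing telemetry; service.name is the
// most important source-identifying attribute.
Resource resource = Resource.getDefault().merge(
    Resource.create(Attributes.of(AttributeKey.stringKey("service.name"), "my-service")));

SdkTracerProvider sdkTracerProvider = SdkTracerProvider.builder()
    .setResource(resource)
    .build();

SdkMeterProvider sdkMeterProvider = SdkMeterProvider.builder()
    .setResource(resource)
    .build();

OpenTelemetry openTelemetry = OpenTelemetrySdk.builder()
    .setTracerProvider(sdkTracerProvider)
    .setMeterProvider(sdkMeterProvider)
    .setPropagators(ContextPropagators.create(W3CTraceContextPropagator.getInstance()))
    .buildAndRegisterGlobal();
```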
As an aside, if you are writing library instrumentation, it is strongly
recommended that you provide your users the ability to inject an instance of
OpenTelemetry
into your instrumentation code. If this is not possible for some
reason, you can fall back to using an instance from the GlobalOpenTelemetry
class. Note that you can't force end-users to configure the global, so this is
the most brittle option for library instrumentation.
Acquiring a Tracer¶
To do tracing, you'll need to acquire a Tracer.
Note: methods of the OpenTelemetry SDK should never be called directly by instrumented code; use the OpenTelemetry API instead.
First, a Tracer must be acquired, which is responsible for creating spans and
interacting with the Context. A tracer is acquired by using the OpenTelemetry
API, specifying the name and version of the library doing the instrumenting,
not of the instrumented library or application to be monitored. More
information is available in the specification chapter Obtaining a Tracer.
Important: the "name" and optional version of the tracer are purely
informational. All Tracers that are created by a single OpenTelemetry
instance will interoperate, regardless of name.
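For example, assuming an OpenTelemetry instance named openTelemetry from the setup above; the scope name and version shown are placeholders:

```java
import io.opentelemetry.api.trace.Tracer;

Tracer tracer = openTelemetry.getTracer("instrumentation-scope-name", "1.0.0");
```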
Create Spans¶
To create spans, you only need to specify the name of the span. The start and end times of the span are automatically set by the OpenTelemetry SDK.
You must call end() to end the span when the operation it represents is complete.
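A sketch, assuming a tracer acquired as shown in the previous section:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.context.Scope;

Span span = tracer.spanBuilder("my span").startSpan();
// Make the span the current span so nested telemetry is associated with it.
try (Scope scope = span.makeCurrent()) {
  // your use case
} finally {
  span.end(); // always end the span, even on failure
}
```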
Create nested Spans¶
Most of the time, we want to correlate spans for nested operations. OpenTelemetry supports tracing within processes and across remote processes. For more details on how to share context between remote processes, see Context Propagation.
For a method a calling a method b, the spans could be manually linked in the following way:
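A sketch of explicit parent-child linking; the method names are illustrative:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.context.Context;

void a() {
  Span parentSpan = tracer.spanBuilder("a").startSpan();
  try {
    b(parentSpan);
  } finally {
    parentSpan.end();
  }
}

void b(Span parentSpan) {
  Span childSpan = tracer.spanBuilder("b")
      // Explicitly set the parent via a Context carrying the parent span.
      .setParent(Context.current().with(parentSpan))
      .startSpan();
  try {
    // do work
  } finally {
    childSpan.end();
  }
}
```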
The OpenTelemetry API also offers an automated way to propagate the parent span on the current thread:
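A sketch using makeCurrent(), which stores the span in the current thread-local Context so child spans pick it up automatically:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.context.Scope;

void a() {
  Span parentSpan = tracer.spanBuilder("a").startSpan();
  try (Scope scope = parentSpan.makeCurrent()) {
    b(); // b's span becomes a child of parentSpan automatically
  } finally {
    parentSpan.end();
  }
}

void b() {
  Span childSpan = tracer.spanBuilder("b").startSpan();
  try (Scope scope = childSpan.makeCurrent()) {
    // do work
  } finally {
    childSpan.end();
  }
}
```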
To link spans from remote processes, it is sufficient to set the Remote Context as parent.
Get the current span¶
Sometimes it's helpful to do something with the current/active span at a particular point in program execution.
And if you want the current span for a particular Context
object:
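Both lookups are sketched below; context is an assumed Context variable available at the call site:

```java
import io.opentelemetry.api.trace.Span;

// The current span, or an invalid no-op span if none is active.
Span currentSpan = Span.current();

// The span stored in a specific Context instance.
Span spanFromContext = Span.fromContext(context);
```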
Span Attributes¶
In OpenTelemetry spans can be created freely and it's up to the implementor to annotate them with attributes specific to the represented operation. Attributes provide additional context on a span about the specific operation it tracks, such as results or operation properties.
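For example, attributes can be set on a client span like this; url is an assumed java.net.URL variable:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;

Span span = tracer.spanBuilder("/resource/path")
    .setSpanKind(SpanKind.CLIENT)
    .startSpan();
// Attributes describing the operation this span tracks.
span.setAttribute("http.method", "GET");
span.setAttribute("http.url", url.toString());
```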
Semantic Attributes¶
There are semantic conventions for spans representing operations in well-known protocols like HTTP or database calls. Semantic conventions for these spans are defined in the specification at Trace Semantic Conventions.
First add the semantic conventions as a dependency to your application:
Maven¶
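A sketch of the Maven dependency; the semconv artifact is published with an -alpha version suffix, and its coordinates have changed across releases, so verify against the releases page:

```xml
<dependency>
  <groupId>io.opentelemetry</groupId>
  <artifactId>opentelemetry-semconv</artifactId>
  <version>LATEST_RELEASE-alpha</version>
</dependency>
```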
Gradle¶
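The equivalent Gradle sketch, with the same caveat about the version placeholder:

```groovy
dependencies {
    implementation "io.opentelemetry:opentelemetry-semconv:LATEST_RELEASE-alpha"
}
```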
Finally, you can update your file to include semantic attributes:
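A sketch using the generated SemanticAttributes constants; note the package location of this class has moved between semconv releases, and url is an assumed variable:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.semconv.trace.attributes.SemanticAttributes;

Span span = tracer.spanBuilder("GET /resource")
    .setSpanKind(SpanKind.CLIENT)
    .startSpan();
// Typed attribute keys from the semantic conventions.
span.setAttribute(SemanticAttributes.HTTP_METHOD, "GET");
span.setAttribute(SemanticAttributes.HTTP_URL, url.toString());
```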
Create Spans with events¶
Spans can be annotated with named events (called Span Events) that can carry zero or more Span Attributes, each of which itself is a key:value map paired automatically with a timestamp.
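For example, events with and without attributes can be added to an existing span:

```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;

Attributes eventAttributes = Attributes.of(
    AttributeKey.stringKey("key"), "value",
    AttributeKey.longKey("result"), 0L);

span.addEvent("Init");                          // event with no attributes
// ... do the work being tracked ...
span.addEvent("End Computation", eventAttributes); // event carrying attributes
```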
Create Spans with links¶
A Span may be linked to zero or more other Spans that are causally related via a Span Link. Links can be used to represent batched operations where a Span was initiated by multiple initiating Spans, each representing a single incoming item being processed in the batch.
For more details on how to read context from remote processes, see Context Propagation.
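A sketch of linking; parentSpan1 and parentSpan2 are assumed, previously created spans:

```java
import io.opentelemetry.api.trace.Span;

Span child = tracer.spanBuilder("childWithLinks")
    // Link this span to the spans that causally contributed to it.
    .addLink(parentSpan1.getSpanContext())
    .addLink(parentSpan2.getSpanContext())
    .startSpan();
```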
Set span status¶
A status can be set on a span, typically used to specify that a span has not
completed successfully: StatusCode.ERROR. In rare scenarios, you could
override the ERROR status with OK, but don't set OK on
successfully-completed spans.
The status can be set at any time before the span is finished:
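For example:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.context.Scope;

Span span = tracer.spanBuilder("my span").startSpan();
try (Scope scope = span.makeCurrent()) {
  // do something that may fail
} catch (Throwable t) {
  // Mark the span as failed before ending it.
  span.setStatus(StatusCode.ERROR, "Something bad happened!");
  throw t;
} finally {
  span.end();
}
```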
Record exceptions in spans¶
It can be a good idea to record exceptions when they happen. It's recommended to do this in conjunction with setting span status.
This will capture things like the current stack trace in the span.
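For example:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;

try {
  throw new RuntimeException("Something bad happened!");
} catch (Throwable t) {
  Span span = Span.current();
  span.setStatus(StatusCode.ERROR, "Something bad happened!");
  // Records the exception as a span event, including the stack trace.
  span.recordException(t);
}
```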
Context Propagation¶
OpenTelemetry provides a text-based approach to propagate context to remote services using the W3C Trace Context HTTP headers.
The following presents an example of an outgoing HTTP request using
HttpURLConnection
.
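A sketch of injecting the current context into outgoing request headers; the URL is a placeholder:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.Scope;
import io.opentelemetry.context.propagation.TextMapSetter;

// A TextMapSetter that writes context entries as HTTP request headers.
TextMapSetter<HttpURLConnection> setter =
    (carrier, key, value) -> carrier.setRequestProperty(key, value);

URL url = new URL("http://127.0.0.1:8080/resource");
Span outGoing = tracer.spanBuilder("GET " + url).setSpanKind(SpanKind.CLIENT).startSpan();
try (Scope scope = outGoing.makeCurrent()) {
  outGoing.setAttribute("http.method", "GET");
  outGoing.setAttribute("http.url", url.toString());
  HttpURLConnection transportLayer = (HttpURLConnection) url.openConnection();
  // Inject the current context (which carries the span) into the headers.
  openTelemetry.getPropagators().getTextMapPropagator()
      .inject(Context.current(), transportLayer, setter);
  // ... make the request and process the response ...
} finally {
  outGoing.end();
}
```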
Similarly, the text-based approach can be used to read the W3C Trace Context from incoming requests. The following presents an example of processing an incoming HTTP request using HttpExchange.
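A sketch of extracting the remote context from incoming headers and creating a server span as a child of the remote client span:

```java
import com.sun.net.httpserver.HttpExchange;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.Scope;
import io.opentelemetry.context.propagation.TextMapGetter;

// A TextMapGetter that reads context entries from HttpExchange headers.
TextMapGetter<HttpExchange> getter = new TextMapGetter<HttpExchange>() {
  @Override
  public String get(HttpExchange carrier, String key) {
    if (carrier.getRequestHeaders().containsKey(key)) {
      return carrier.getRequestHeaders().get(key).get(0);
    }
    return null;
  }

  @Override
  public Iterable<String> keys(HttpExchange carrier) {
    return carrier.getRequestHeaders().keySet();
  }
};

void handle(HttpExchange exchange) {
  // Extract the remote context and make it current, so the server span
  // becomes a child of the remote client span.
  Context extractedContext = openTelemetry.getPropagators().getTextMapPropagator()
      .extract(Context.current(), exchange, getter);
  try (Scope scope = extractedContext.makeCurrent()) {
    Span serverSpan = tracer.spanBuilder("GET /resource")
        .setSpanKind(SpanKind.SERVER)
        .startSpan();
    try {
      // serve the request
    } finally {
      serverSpan.end();
    }
  }
}
```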
The following code presents an example of reading the W3C Trace Context from an incoming request, adding spans, and further propagating the context. The example uses HttpHeaders to fetch the traceparent header for context propagation.
Metrics¶
Spans provide detailed information about your application, but produce data that is proportional to the load on the system. In contrast, metrics combine individual measurements into aggregations, and produce data which is constant as a function of system load. The aggregations lack details required to diagnose low level issues, but complement spans by helping to identify trends and providing application runtime telemetry.
The metrics API defines a variety of instruments. Instruments record measurements, which are aggregated by the metrics SDK and eventually exported out of process. Instruments come in synchronous and asynchronous varieties. Synchronous instruments record measurements as they happen. Asynchronous instruments register a callback, which is invoked once per collection, and which records measurements at that point in time. The following instruments are available:
- LongCounter/DoubleCounter: records only positive values, with synchronous and asynchronous options. Useful for counting things, such as the number of bytes sent over a network. Counter measurements are aggregated to always-increasing monotonic sums by default.
- LongUpDownCounter/DoubleUpDownCounter: records positive and negative values, with synchronous and asynchronous options. Useful for counting things that go up and down, like the size of a queue. Up down counter measurements are aggregated to non-monotonic sums by default.
- LongGauge/DoubleGauge: measures an instantaneous value with an asynchronous callback. Useful for recording values that can't be merged across attributes, like CPU utilization percentage. Gauge measurements are aggregated as gauges by default.
- LongHistogram/DoubleHistogram: records measurements that are most useful to analyze as a histogram distribution. No asynchronous option is available. Useful for recording things like the duration of time spent by an HTTP server processing a request. Histogram measurements are aggregated to explicit bucket histograms by default.
Note: The asynchronous varieties of counter and up down counter assume that the registered callback is observing the cumulative sum. For example, if you register an asynchronous counter whose callback records bytes sent over a network, it must record the cumulative sum of all bytes sent over the network, rather than trying to compute and record the difference since last call.
All metrics can be annotated with attributes: additional qualifiers that help describe what subdivision of the measurements the metric represents.
The following is an example of counter usage:
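A sketch, assuming the openTelemetry instance from the setup; the scope, instrument, and attribute names are placeholders:

```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;

Meter meter = openTelemetry.getMeter("instrumentation-scope-name");

LongCounter counter = meter.counterBuilder("processed_jobs")
    .setDescription("Processed jobs")
    .setUnit("1")
    .build();

// Record a measurement, qualified by attributes.
counter.add(1, Attributes.of(AttributeKey.stringKey("job.type"), "email"));
```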
The following is an example of usage of an asynchronous instrument:
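A sketch of an asynchronous gauge; getCpuUsage() is a hypothetical helper returning the value to observe:

```java
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.ObservableDoubleGauge;

ObservableDoubleGauge gauge = meter.gaugeBuilder("cpu_usage")
    .setDescription("CPU Usage")
    .setUnit("ms")
    // The callback is invoked once per collection.
    .buildWithCallback(measurement ->
        measurement.record(getCpuUsage(), Attributes.empty()));
```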
Logs¶
Logs are distinct from Metrics and Tracing in that there is no user-facing logs API. Instead, there is tooling to bridge logs from existing popular log frameworks (e.g. SLF4J, JUL, Logback, Log4j) into the OpenTelemetry ecosystem.
The two typical workflows discussed below each cater to different application requirements.
Direct to collector¶
In the direct to collector workflow, logs are emitted directly from an application to a collector using a network protocol (e.g. OTLP). This workflow is simple to set up as it doesn't require any additional log forwarding components, and allows an application to easily emit structured logs that conform to the log data model. However, the overhead required for applications to queue and export logs to a network location may not be suitable for all applications.
To use this workflow:
- Install the appropriate log appender.
- Configure the OpenTelemetry Log SDK to export log records to the desired target destination (the collector or other).
Log appenders¶
A log appender bridges logs from a log framework into the OpenTelemetry Log SDK using the Logs Bridge API. Log appenders are available for various popular Java log frameworks:
The links above contain full usage and installation documentation, but installation is generally as follows:
- Add required dependency via gradle or maven.
- Extend the application's log configuration (e.g. logback.xml, log4j.xml, etc.) to include a reference to the OpenTelemetry log appender.
- Optionally configure the log framework to determine which logs (e.g. filter by severity or logger name) are passed to the appender.
- Optionally configure the appender to indicate how logs are mapped to OpenTelemetry Log Records (e.g. capture thread information, context data, markers, etc.).
Log appenders automatically include the trace context in log records, enabling log correlation with traces.
The Log Appender example demonstrates setup for a variety of scenarios.
Via file or stdout¶
In the file or stdout workflow, logs are written to files or standard output. Another component (e.g. FluentBit) is responsible for reading or tailing the logs, parsing them into a more structured format, and forwarding them to a target, such as the collector. This workflow may be preferable in situations where application requirements do not permit the additional overhead of the direct-to-collector approach. However, it requires that all log fields required downstream are encoded into the logs, and that the component reading the logs parses the data into the log data model. The installation and configuration of log forwarding components is outside the scope of this document.
Log correlation with traces is available by installing log context instrumentation.
Log context instrumentation¶
OpenTelemetry provides components which enrich log context with trace context for various popular Java log frameworks:
The links above contain full usage and installation documentation, but installation is generally as follows:
- Add the required dependency via gradle or maven.
- Extend the application's log configuration (e.g. logback.xml or log4j.xml) to reference the trace context fields in the log pattern.
SDK Configuration¶
The configuration examples reported in this document only apply to the SDK
provided by opentelemetry-sdk. Other implementations of the API might provide
different configuration mechanisms.
Tracing SDK¶
The application has to install a span processor with an exporter and may customize the behavior of the OpenTelemetry SDK.
For example, a basic configuration instantiates the SDK tracer provider and configures it to export the traces to a logging stream.
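A sketch of such a configuration, using the logging exporter from the opentelemetry-exporter-logging artifact:

```java
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.exporter.logging.LoggingSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor;

SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
    // Forward every ended span straight to the logging exporter.
    .addSpanProcessor(SimpleSpanProcessor.create(LoggingSpanExporter.create()))
    .build();

OpenTelemetry openTelemetry = OpenTelemetrySdk.builder()
    .setTracerProvider(tracerProvider)
    .buildAndRegisterGlobal();
```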
Sampler¶
It is not always feasible to trace and export every user request in an application. In order to strike a balance between observability and expenses, traces can be sampled.
The OpenTelemetry SDK offers four samplers out of the box:
- AlwaysOnSampler which samples every trace regardless of upstream sampling decisions.
- AlwaysOffSampler which doesn't sample any trace, regardless of upstream sampling decisions.
- ParentBased which uses the parent span to make sampling decisions, if present.
- TraceIdRatioBased which samples a configurable percentage of traces, and additionally samples any trace that was sampled upstream.
Additional samplers can be provided by implementing the
io.opentelemetry.sdk.trace.Sampler
interface.
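For example, a ParentBased sampler wrapping TraceIdRatioBased can be configured like this sketch; the 1% ratio is an arbitrary placeholder:

```java
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.samplers.Sampler;

SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
    // Sample roughly 1 in 100 new traces, honoring an upstream sampling
    // decision when a parent span is present.
    .setSampler(Sampler.parentBased(Sampler.traceIdRatioBased(0.01)))
    .build();
```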
Span Processor¶
Different span processors are offered by OpenTelemetry. The
SimpleSpanProcessor immediately forwards ended spans to the exporter, while
the BatchSpanProcessor batches them and sends them in bulk. Multiple span
processors can be configured to be active at the same time using the
MultiSpanProcessor.
Exporter¶
Span processors are initialized with an exporter which is responsible for sending the telemetry data to a particular backend. OpenTelemetry offers five exporters out of the box:
- InMemorySpanExporter: keeps the data in memory, useful for testing and debugging.
- Jaeger Exporter: prepares and sends the collected telemetry data to a Jaeger backend via gRPC. Varieties include JaegerGrpcSpanExporter and JaegerThriftSpanExporter.
- ZipkinSpanExporter: prepares and sends the collected telemetry data to a Zipkin backend via the Zipkin APIs.
- Logging Exporter: saves the telemetry data into log streams. Varieties include LoggingSpanExporter and OtlpJsonLoggingSpanExporter.
- OpenTelemetry Protocol Exporter: sends the data in OTLP to the OpenTelemetry Collector or other OTLP receivers. Varieties include OtlpGrpcSpanExporter and OtlpHttpSpanExporter.
Other exporters can be found in the OpenTelemetry Registry.
Metrics SDK¶
The application has to install a metric reader with an exporter, and may further customize the behavior of the OpenTelemetry SDK.
For example, a basic configuration instantiates the SDK meter provider and configures it to export the metrics to a logging stream.
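A sketch of such a configuration; the 30-second interval is an arbitrary placeholder:

```java
import java.time.Duration;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.exporter.logging.LoggingMetricExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;

SdkMeterProvider meterProvider = SdkMeterProvider.builder()
    // Collect and export metrics to the logging exporter every 30 seconds.
    .registerMetricReader(
        PeriodicMetricReader.builder(LoggingMetricExporter.create())
            .setInterval(Duration.ofSeconds(30))
            .build())
    .build();

OpenTelemetry openTelemetry = OpenTelemetrySdk.builder()
    .setMeterProvider(meterProvider)
    .buildAndRegisterGlobal();
```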
Metric Reader¶
Metric readers read aggregated metrics.
OpenTelemetry provides a variety of metric readers out of the box:
- PeriodicMetricReader: reads metrics on a configurable interval and pushes them to a MetricExporter.
- InMemoryMetricReader: reads metrics into memory, useful for debugging and testing.
- PrometheusHttpServer (alpha): an HTTP server that reads metrics and serializes them to Prometheus text format.
Custom metric reader implementations are not currently supported.
Exporter¶
The PeriodicMetricReader
is paired with a metric exporter, which is
responsible for sending the telemetry data to a particular backend.
OpenTelemetry provides the following exporters out of the box:
- InMemoryMetricExporter: keeps the data in memory, useful for testing and debugging.
- Logging Exporter: saves the telemetry data into log streams. Varieties include LoggingMetricExporter and OtlpJsonLoggingMetricExporter.
- OpenTelemetry Protocol Exporter: sends the data in OTLP to the OpenTelemetry Collector or other OTLP receivers. Varieties include OtlpGrpcMetricExporter and OtlpHttpMetricExporter.
Other exporters can be found in the OpenTelemetry Registry.
Views¶
Views provide a mechanism for controlling how measurements are aggregated into
metrics. They consist of an InstrumentSelector and a View. The instrument
selector consists of a series of options for selecting which instruments the
view applies to. Instruments can be selected by a combination of name, type,
meter name, meter version, and meter schema URL. The view describes how
measurements should be aggregated: it can change the name, description, and
aggregation, and define the set of attribute keys that should be retained.
Every instrument has a default view, which retains the original name, description, and attributes, and has a default aggregation that is based on the type of instrument. When a registered view matches an instrument, the default view is replaced by the registered view. Additional registered views that match the instrument are additive, and result in multiple exported metrics for the instrument.
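A sketch of registering a view that renames a metric; the instrument and metric names are placeholders:

```java
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.InstrumentType;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;

SdkMeterProvider meterProvider = SdkMeterProvider.builder()
    .registerView(
        // Select all counters named "my-counter"...
        InstrumentSelector.builder()
            .setType(InstrumentType.COUNTER)
            .setName("my-counter")
            .build(),
        // ...and export them under a different metric name.
        View.builder()
            .setName("new-counter-name")
            .build())
    .build();
```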
Logs SDK¶
The logs SDK dictates how logs are processed when using the direct to collector workflow. No log SDK is needed when using the log forwarding workflow.
The typical log SDK configuration installs a log record processor and exporter. For example, the following installs the BatchLogRecordProcessor, which periodically exports to a network location via the OtlpGrpcLogRecordExporter:
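A sketch of such a configuration; note that the logs SDK class names have changed across releases, and the endpoint shown is the conventional local OTLP/gRPC default:

```java
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.exporter.otlp.logs.OtlpGrpcLogRecordExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.logs.SdkLoggerProvider;
import io.opentelemetry.sdk.logs.export.BatchLogRecordProcessor;

SdkLoggerProvider loggerProvider = SdkLoggerProvider.builder()
    .addLogRecordProcessor(
        // Batch log records and periodically export them over OTLP/gRPC.
        BatchLogRecordProcessor.builder(
            OtlpGrpcLogRecordExporter.builder()
                .setEndpoint("http://localhost:4317")
                .build())
            .build())
    .build();

OpenTelemetry openTelemetry = OpenTelemetrySdk.builder()
    .setLoggerProvider(loggerProvider)
    .buildAndRegisterGlobal();
```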
See releases for log specific artifact coordinates.
LogRecord Processor¶
LogRecord processors process LogRecords emitted by log appenders.
OpenTelemetry provides the following LogRecord processors out of the box:
- BatchLogRecordProcessor: periodically sends batches of LogRecords to a LogRecordExporter.
- SimpleLogRecordProcessor: immediately sends each LogRecord to a LogRecordExporter.
Custom LogRecord processors are supported by implementing the
LogRecordProcessor
interface. Common use cases include enriching the
LogRecords with contextual data like baggage, or filtering / obfuscating
sensitive data.
LogRecord Exporter¶
BatchLogRecordProcessor and SimpleLogRecordProcessor are paired with a
LogRecordExporter, which is responsible for sending telemetry data to a
particular backend. OpenTelemetry provides the following exporters out of the
box:
- OpenTelemetry Protocol Exporter: sends the data in OTLP to the OpenTelemetry Collector or other OTLP receivers. Varieties include OtlpGrpcLogRecordExporter and OtlpHttpLogRecordExporter.
- InMemoryLogRecordExporter: keeps the data in memory, useful for testing and debugging.
- Logging Exporter: saves the telemetry data into log streams. Varieties include SystemOutLogRecordExporter and OtlpJsonLoggingLogRecordExporter. Note: OtlpJsonLoggingLogRecordExporter logs to JUL, and may cause infinite loops (i.e. JUL -> SLF4J -> Logback -> OpenTelemetry Appender -> OpenTelemetry Log SDK -> JUL) if not carefully configured.
Custom exporters are supported by implementing the LogRecordExporter
interface.
Auto Configuration¶
Instead of manually creating the OpenTelemetry
instance by using the SDK
builders directly from your code, it is also possible to use the SDK
auto-configuration extension through the
opentelemetry-sdk-extension-autoconfigure
module.
This module is made available by adding the following dependency to your application.
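A Maven sketch; LATEST_RELEASE is a placeholder for the version you actually use:

```xml
<dependency>
  <groupId>io.opentelemetry</groupId>
  <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId>
  <version>LATEST_RELEASE</version>
</dependency>
```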
It allows you to auto-configure the OpenTelemetry SDK based on a standard set
of supported environment variables and system properties. Each environment
variable has a corresponding system property named the same way but in
lowercase, using the . (dot) character instead of the _ (underscore) as the
separator.
The logical service name can be specified via the OTEL_SERVICE_NAME
environment variable (or otel.service.name
system property).
The traces, metrics or logs exporters can be set via the OTEL_TRACES_EXPORTER
,
OTEL_METRICS_EXPORTER
and OTEL_LOGS_EXPORTER
environment variables. For
example OTEL_TRACES_EXPORTER=jaeger
configures your application to use the
Jaeger exporter. The corresponding Jaeger exporter library has to be provided in
the classpath of the application as well.
It's also possible to set up the propagators via the OTEL_PROPAGATORS
environment variable, for example using the tracecontext value to select
W3C Trace Context.
For more details, see all the supported configuration options in the module's README.
The SDK auto-configuration has to be initialized from your code in order to
allow the module to go through the provided environment variables (or system
properties) and set up the OpenTelemetry
instance by using the builders
internally.
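For example, initialization can be a single call:

```java
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdk;

// Reads the supported environment variables / system properties and builds
// the SDK accordingly.
OpenTelemetrySdk sdk = AutoConfiguredOpenTelemetrySdk.initialize().getOpenTelemetrySdk();
```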
When environment variables or system properties are not sufficient, you can use
some extension points provided through the auto-configure
SPI
and several methods in the AutoConfiguredOpenTelemetrySdk
class.
The following example shows a code snippet for adding an additional custom span processor.
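A sketch using a tracer provider customizer; MySpanProcessor is a hypothetical class implementing the SpanProcessor interface:

```java
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdk;

AutoConfiguredOpenTelemetrySdk autoConfiguredSdk =
    AutoConfiguredOpenTelemetrySdk.builder()
        .addTracerProviderCustomizer(
            (tracerProviderBuilder, configProperties) ->
                // Add the custom span processor on top of the auto-configured ones.
                tracerProviderBuilder.addSpanProcessor(new MySpanProcessor()))
        .build();
```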
SDK Logging and Error Handling¶
OpenTelemetry uses java.util.logging to log information about OpenTelemetry, including errors and warnings about misconfigurations or failures exporting data.
By default, log messages are handled by the root handler in your application. If
you have not installed a custom root handler for your application, logs of level
INFO
or higher are sent to the console by default.
You may want to change the behavior of the logger for OpenTelemetry. For example, you can reduce the logging level to output additional information when debugging, increase the level for a particular class to ignore errors coming from that class, or install a custom handler or filter to run custom code whenever OpenTelemetry logs a particular message.
Examples¶
For more fine-grained control and special case handling, custom handlers and filters can be specified with code.
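A sketch using java.util.logging directly; the logger names shown are assumptions that follow the SDK's package layout:

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Raise the threshold for a chatty class so only severe messages pass.
Logger.getLogger("io.opentelemetry.exporter.logging.LoggingSpanExporter")
    .setLevel(Level.SEVERE);

// Attach a custom handler to all OpenTelemetry SDK log messages.
Logger otelLogger = Logger.getLogger("io.opentelemetry");
otelLogger.addHandler(new Handler() {
  @Override
  public void publish(LogRecord record) {
    // custom code reacting to the log message
  }

  @Override
  public void flush() {}

  @Override
  public void close() {}
});
```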