OpenInference Semantic Conventions
Spring AI provides built-in instrumentation for models, vector stores, and workflows based on the Micrometer APIs. Arconia lets you configure your Spring AI applications to export telemetry data following the OpenInference Semantic Conventions instead of the default Micrometer conventions, enabling rich trace visualization in platforms like Arize Phoenix.
Arconia provides a Phoenix Dev Service you can use during development and testing. Check out the Phoenix Dev Service documentation for more details.
> Note: The OpenInference Semantic Conventions are still experimental and might change in the future.
Getting Started
Add the Arconia OpenInference AI Semantic Conventions dependency to your project.
Gradle:

```groovy
dependencies {
    implementation 'io.arconia:arconia-openinference-ai-semantic-conventions'
}
```

Maven:

```xml
<dependency>
    <groupId>io.arconia</groupId>
    <artifactId>arconia-openinference-ai-semantic-conventions</artifactId>
</dependency>
```
> Note: If another Generative AI convention module is also active, Arconia fails at startup with an actionable error message. Disable the unwanted module via its configuration property.
Your application also needs OpenTelemetry configured for exporting traces. The recommended approach is to use the Arconia OpenTelemetry Spring Boot Starter:
Gradle:

```groovy
dependencies {
    implementation 'io.arconia:arconia-spring-boot-starter-opentelemetry'
}
```

Maven:

```xml
<dependency>
    <groupId>io.arconia</groupId>
    <artifactId>arconia-spring-boot-starter-opentelemetry</artifactId>
</dependency>
```
Dev Service
During development and testing, you can use the Phoenix Dev Service to spin up a local Arize Phoenix instance automatically. Add the Dev Service dependency to your project:
Gradle:

```groovy
dependencies {
    testAndDevelopmentOnly "io.arconia:arconia-dev-services-phoenix"
}
```

Maven:

```xml
<dependency>
    <groupId>io.arconia</groupId>
    <artifactId>arconia-dev-services-phoenix</artifactId>
    <scope>runtime</scope>
    <optional>true</optional>
</dependency>
```
When the Dev Service is active, Arconia automatically configures OpenTelemetry to export traces to Phoenix using the OpenInference Semantic Conventions. No additional OTLP configuration is needed for local development.
Arize Phoenix
To send telemetry to a hosted Arize Phoenix instance, configure the OTLP exporter with your endpoint:
```properties
arconia.otel.exporter.otlp.endpoint=https://<your-phoenix-endpoint>
arconia.otel.exporter.otlp.headers=Authorization=Bearer ${PHOENIX_API_KEY}
```
Alternatively, you can use the standard OpenTelemetry environment variables:
```shell
OTEL_EXPORTER_OTLP_ENDPOINT=https://<your-phoenix-endpoint>
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-phoenix-api-key>"
```
You can also set the project name to control how traces are grouped in Phoenix:
```properties
arconia.observations.conventions.openinference.ai.project-name=my-app
```
> Note: Phoenix only supports OpenTelemetry traces. When targeting a Phoenix instance, disable the export of logs and metrics, or export them to a different backend.
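As an illustration, the standard OpenTelemetry SDK autoconfiguration environment variables can switch off the log and metric exporters while keeping the OTLP trace exporter (Arconia may also expose its own properties for this; the variables below are the generic OpenTelemetry mechanism):

```shell
# Keep trace export, but turn off logs and metrics
# (standard OpenTelemetry SDK autoconfiguration variables).
export OTEL_LOGS_EXPORTER=none
export OTEL_METRICS_EXPORTER=none
```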
Features
The module provides semantic conventions for the following observation types:
- Chat model spans with full parameter tracking (e.g., temperature, max tokens, penalties, stop sequences).
- Embedding model spans with dimension tracking.
- Tool execution spans.
- Chat client observations, including conversation ID tracking via chat memory.
- Token usage metrics.
- Input/output message content, captured as span attributes or span events.
Content capturing is enabled by default, according to the OpenInference Semantic Conventions specification.
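To see where these observations come from, here is a minimal wiring sketch using Spring AI's `ChatClient` fluent API; the configuration class and bean name are illustrative assumptions, not part of Arconia:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class ChatClientConfig {

    // Builds a ChatClient from the auto-configured ChatModel.
    // Each call made through this client is instrumented by Spring AI,
    // and with this module active the resulting spans carry OpenInference
    // attributes (model parameters, token usage, message content, etc.).
    @Bean
    ChatClient chatClient(ChatModel chatModel) {
        return ChatClient.create(chatModel);
    }
}
```

A call such as `chatClient.prompt().user("...").call().content()` would then appear in Phoenix as a chat model span; if tools or chat memory are involved, the corresponding tool execution spans and conversation ID are recorded as well.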
Configuration Properties
You can configure the OpenInference Semantic Conventions via configuration properties.
| Property | Default | Description |
|---|---|---|
| `arconia.observations.conventions.openinference.ai.enabled` | | Whether to enable the OpenInference AI Semantic Conventions module. |
| `arconia.observations.conventions.openinference.ai.project-name` | - | Name of the project in the OpenInference backend to which the telemetry data is sent. If not defined, the application name is used as the project name. |
| | | Maximum length of a base64-encoded image. |
| | | Whether to hide all LLM choices. |
| | | Whether to hide all embedding text. |
| | | Whether to hide all embedding vectors. |
| | | Whether to hide the LLM invocation parameters. |
| | | Whether to hide all inputs. |
| | | Whether to hide all images from the input messages. |
| | | Whether to hide all input messages. |
| | | Whether to hide all texts from the input messages. |
| | | Whether to hide all outputs. |
| | | Whether to hide all texts from the output messages. |
| | | Whether to hide all output messages. |
| | | Whether to hide all LLM prompts. |
Environment Variables
Arconia supports the OpenInference Environment Variable Specification for configuring the OpenInference Semantic Conventions. If both the OpenInference environment variables and the Arconia configuration properties are set, the environment variables take precedence.
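For instance, the OpenInference specification defines variables such as `OPENINFERENCE_HIDE_INPUTS` and `OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH`; a sketch of redacting message content and capping image payloads via the environment (the `32000` limit is an arbitrary example value):

```shell
# Redact input/output payloads while keeping span structure
# (variables from the OpenInference Environment Variable Specification).
export OPENINFERENCE_HIDE_INPUTS=true
export OPENINFERENCE_HIDE_OUTPUTS=true
# Cap the recorded length of base64-encoded images (example value).
export OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH=32000
```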