
Language APIs & SDKs-C++-Exporters

Author: 方亮 · Published 2024-05-24
Outline

  • Exporters
    • Available exporters
    • OTLP
      • Collector Setup
      • Dependencies
    • Usage
      • Console
    • Jaeger
    • Prometheus
      • Backend Setup
      • Dependencies
    • Zipkin
      • Backend Setup
      • Dependencies
    • Other available exporters
    • Batching span and log records

Exporters


Send telemetry to the OpenTelemetry Collector to make sure it's exported correctly. Using the Collector in production environments is a best practice. To visualize your telemetry, export it to a backend such as Jaeger, Zipkin, Prometheus, or a vendor-specific backend.

Available exporters


The registry contains a list of exporters for C++.

Among exporters, OpenTelemetry Protocol (OTLP) exporters are designed with the OpenTelemetry data model in mind, emitting OTel data without any loss of information. Furthermore, many tools that operate on telemetry data support OTLP (such as Prometheus, Jaeger, and most vendors), providing you with a high degree of flexibility when you need it. To learn more about OTLP, see the OTLP Specification.

This page covers the main OpenTelemetry C++ exporters and how to set them up.

OTLP

Collector Setup


Note: If you already have an OTLP collector or backend set up, you can skip this section and set up the OTLP exporter dependencies for your application.

To try out and verify your OTLP exporters, you can run the collector in a docker container that writes telemetry directly to the console.

In an empty directory, create a file called collector-config.yaml with the following content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]

Now run the collector in a docker container:

docker run -p 4317:4317 -p 4318:4318 --rm -v $(pwd)/collector-config.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector

This collector is now able to accept telemetry via OTLP. Later you may want to configure the collector to send your telemetry to your observability backend.
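For example, when you are ready to forward data to a real backend instead of the console, the collector configuration can be extended with an OTLP exporter. A minimal sketch, where my-backend:4317 is a placeholder for your backend's OTLP/gRPC endpoint:

exporters:
  otlp:
    endpoint: my-backend:4317   # placeholder for your backend's OTLP/gRPC address
    tls:
      insecure: true            # for local testing only; use TLS in production
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]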

Dependencies

If you want to send telemetry data to an OTLP endpoint (like the OpenTelemetry Collector, Jaeger or Prometheus), you can choose between two different protocols to transport your data:

  • HTTP/protobuf
  • gRPC

Make sure that you have set the right cmake build variables while building OpenTelemetry C++ from source:

  • -DWITH_OTLP_GRPC=ON: To enable building the OTLP gRPC exporter.
  • -DWITH_OTLP_HTTP=ON: To enable building the OTLP HTTP exporter.
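For example, a build that enables both OTLP exporters might be configured like this (a sketch; combine with whatever other cmake options your build needs):

cmake -DWITH_OTLP_GRPC=ON -DWITH_OTLP_HTTP=ON ...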

Usage


Next, configure the exporter to point at an OTLP endpoint in your code.

// HTTP/proto
#include "opentelemetry/exporters/otlp/otlp_http_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_exporter_options.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/batch_span_processor_factory.h"
#include "opentelemetry/sdk/trace/batch_span_processor_options.h"
#include "opentelemetry/sdk/trace/tracer_provider_factory.h"
#include "opentelemetry/trace/provider.h"
#include "opentelemetry/sdk/trace/tracer_provider.h"

#include "opentelemetry/exporters/otlp/otlp_http_metric_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_metric_exporter_options.h"
#include "opentelemetry/metrics/provider.h"
#include "opentelemetry/sdk/metrics/aggregation/default_aggregation.h"
#include "opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader.h"
#include "opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader_factory.h"
#include "opentelemetry/sdk/metrics/meter_context_factory.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include "opentelemetry/sdk/metrics/meter_provider_factory.h"

#include "opentelemetry/exporters/otlp/otlp_http_log_record_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_log_record_exporter_options.h"
#include "opentelemetry/logs/provider.h"
#include "opentelemetry/sdk/logs/logger_provider_factory.h"
#include "opentelemetry/sdk/logs/processor.h"
#include "opentelemetry/sdk/logs/simple_log_record_processor_factory.h"

namespace trace_api = opentelemetry::trace;
namespace trace_sdk = opentelemetry::sdk::trace;

namespace metric_sdk = opentelemetry::sdk::metrics;
namespace metrics_api = opentelemetry::metrics;

namespace otlp = opentelemetry::exporter::otlp;

namespace logs_api = opentelemetry::logs;
namespace logs_sdk = opentelemetry::sdk::logs;


void InitTracer()
{
  trace_sdk::BatchSpanProcessorOptions bspOpts{};
  otlp::OtlpHttpExporterOptions opts;
  opts.url = "http://localhost:4318/v1/traces";
  auto exporter  = otlp::OtlpHttpExporterFactory::Create(opts);
  auto processor = trace_sdk::BatchSpanProcessorFactory::Create(std::move(exporter), bspOpts);
  std::shared_ptr<trace_api::TracerProvider> provider =  trace_sdk::TracerProviderFactory::Create(std::move(processor));
  trace_api::Provider::SetTracerProvider(provider);
}

void InitMetrics()
{
  otlp::OtlpHttpMetricExporterOptions opts;
  opts.url = "http://localhost:4318/v1/metrics";
  auto exporter = otlp::OtlpHttpMetricExporterFactory::Create(opts);
  metric_sdk::PeriodicExportingMetricReaderOptions reader_options;
  reader_options.export_interval_millis = std::chrono::milliseconds(1000);
  reader_options.export_timeout_millis  = std::chrono::milliseconds(500);
  auto reader = metric_sdk::PeriodicExportingMetricReaderFactory::Create(std::move(exporter), reader_options);
  auto context = metric_sdk::MeterContextFactory::Create();
  context->AddMetricReader(std::move(reader));
  auto u_provider = metric_sdk::MeterProviderFactory::Create(std::move(context));
  std::shared_ptr<metrics_api::MeterProvider> provider(std::move(u_provider));
  metrics_api::Provider::SetMeterProvider(provider);
}

void InitLogger()
{
  otlp::OtlpHttpLogRecordExporterOptions opts;
  opts.url = "http://localhost:4318/v1/logs";
  auto exporter  = otlp::OtlpHttpLogRecordExporterFactory::Create(opts);
  auto processor = logs_sdk::SimpleLogRecordProcessorFactory::Create(std::move(exporter));
  std::shared_ptr<logs_api::LoggerProvider> provider =
      logs_sdk::LoggerProviderFactory::Create(std::move(processor));
  logs_api::Provider::SetLoggerProvider(provider);
}

Or, using the gRPC exporter instead:

// gRPC
#include "opentelemetry/exporters/otlp/otlp_grpc_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_grpc_exporter_options.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/batch_span_processor_factory.h"
#include "opentelemetry/sdk/trace/batch_span_processor_options.h"
#include "opentelemetry/sdk/trace/tracer_provider_factory.h"
#include "opentelemetry/trace/provider.h"
#include "opentelemetry/sdk/trace/tracer_provider.h"

#include "opentelemetry/exporters/otlp/otlp_grpc_metric_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_grpc_metric_exporter_options.h"
#include "opentelemetry/metrics/provider.h"
#include "opentelemetry/sdk/metrics/aggregation/default_aggregation.h"
#include "opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader.h"
#include "opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader_factory.h"
#include "opentelemetry/sdk/metrics/meter_context_factory.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include "opentelemetry/sdk/metrics/meter_provider_factory.h"

#include "opentelemetry/exporters/otlp/otlp_grpc_log_record_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_grpc_log_record_exporter_options.h"
#include "opentelemetry/logs/provider.h"
#include "opentelemetry/sdk/logs/logger_provider_factory.h"
#include "opentelemetry/sdk/logs/processor.h"
#include "opentelemetry/sdk/logs/simple_log_record_processor_factory.h"

namespace trace_api = opentelemetry::trace;
namespace trace_sdk = opentelemetry::sdk::trace;

namespace metric_sdk = opentelemetry::sdk::metrics;
namespace metrics_api = opentelemetry::metrics;

namespace otlp = opentelemetry::exporter::otlp;

namespace logs_api = opentelemetry::logs;
namespace logs_sdk = opentelemetry::sdk::logs;
namespace nostd    = opentelemetry::nostd;

void InitTracer()
{
  trace_sdk::BatchSpanProcessorOptions bspOpts{};
  opentelemetry::exporter::otlp::OtlpGrpcExporterOptions opts;
  opts.endpoint = "localhost:4317";
  opts.use_ssl_credentials = true;
  opts.ssl_credentials_cacert_as_string = "ssl-certificate";
  auto exporter  = otlp::OtlpGrpcExporterFactory::Create(opts);
  auto processor = trace_sdk::BatchSpanProcessorFactory::Create(std::move(exporter), bspOpts);
  std::shared_ptr<trace_api::TracerProvider> provider =
      trace_sdk::TracerProviderFactory::Create(std::move(processor));
  // Set the global trace provider
  trace_api::Provider::SetTracerProvider(provider);
}

void InitMetrics()
{
  otlp::OtlpGrpcMetricExporterOptions opts;
  opts.endpoint = "localhost:4317";
  opts.use_ssl_credentials = true;
  opts.ssl_credentials_cacert_as_string = "ssl-certificate";
  auto exporter = otlp::OtlpGrpcMetricExporterFactory::Create(opts);
  metric_sdk::PeriodicExportingMetricReaderOptions reader_options;
  reader_options.export_interval_millis = std::chrono::milliseconds(1000);
  reader_options.export_timeout_millis  = std::chrono::milliseconds(500);
  auto reader = metric_sdk::PeriodicExportingMetricReaderFactory::Create(std::move(exporter), reader_options);
  auto context = metric_sdk::MeterContextFactory::Create();
  context->AddMetricReader(std::move(reader));
  auto u_provider = metric_sdk::MeterProviderFactory::Create(std::move(context));
  std::shared_ptr<opentelemetry::metrics::MeterProvider> provider(std::move(u_provider));
  metrics_api::Provider::SetMeterProvider(provider);
}

void InitLogger()
{
  otlp::OtlpGrpcLogRecordExporterOptions opts;
  opts.endpoint = "localhost:4317";
  opts.use_ssl_credentials = true;
  opts.ssl_credentials_cacert_as_string = "ssl-certificate";
  auto exporter  = otlp::OtlpGrpcLogRecordExporterFactory::Create(opts);
  auto processor = logs_sdk::SimpleLogRecordProcessorFactory::Create(std::move(exporter));
  nostd::shared_ptr<logs_api::LoggerProvider> provider(
      logs_sdk::LoggerProviderFactory::Create(std::move(processor)));
  logs_api::Provider::SetLoggerProvider(provider);
}
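Whichever protocol you use, these initializers are typically called once at application startup, before any telemetry is produced. A minimal sketch of such an entry point (assuming one of the variants above is compiled into the program):

int main()
{
  InitTracer();
  InitMetrics();
  InitLogger();

  // ... application code that creates spans, records metrics, and emits logs ...

  return 0;
}
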
Console

To debug your instrumentation or see the values locally in development, you can use exporters that write telemetry data to the console (stdout).

While building OpenTelemetry C++ from source, the OStreamSpanExporter is included in the build by default.

#include "opentelemetry/exporters/ostream/span_exporter_factory.h"
#include "opentelemetry/sdk/trace/exporter.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/simple_processor_factory.h"
#include "opentelemetry/sdk/trace/tracer_provider_factory.h"
#include "opentelemetry/trace/provider.h"

#include "opentelemetry/exporters/ostream/metrics_exporter_factory.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include "opentelemetry/sdk/metrics/meter_provider_factory.h"
#include "opentelemetry/metrics/provider.h"

#include "opentelemetry/exporters/ostream/log_record_exporter_factory.h"
#include "opentelemetry/logs/provider.h"
#include "opentelemetry/sdk/logs/logger_provider_factory.h"
#include "opentelemetry/sdk/logs/processor.h"
#include "opentelemetry/sdk/logs/simple_log_record_processor_factory.h"

namespace trace_api      = opentelemetry::trace;
namespace trace_sdk      = opentelemetry::sdk::trace;
namespace trace_exporter = opentelemetry::exporter::trace;

namespace metrics_sdk      = opentelemetry::sdk::metrics;
namespace metrics_api      = opentelemetry::metrics;
namespace metrics_exporter = opentelemetry::exporter::metrics;

namespace logs_api      = opentelemetry::logs;
namespace logs_sdk      = opentelemetry::sdk::logs;
namespace logs_exporter = opentelemetry::exporter::logs;
namespace nostd         = opentelemetry::nostd;

void InitTracer()
{
  auto exporter  = trace_exporter::OStreamSpanExporterFactory::Create();
  auto processor = trace_sdk::SimpleSpanProcessorFactory::Create(std::move(exporter));
  std::shared_ptr<opentelemetry::trace::TracerProvider> provider = trace_sdk::TracerProviderFactory::Create(std::move(processor));
  trace_api::Provider::SetTracerProvider(provider);
}

void InitMetrics()
{
  auto exporter = metrics_exporter::OStreamMetricExporterFactory::Create();
  // The ostream metric exporter is a push exporter, so wrap it in a periodic reader.
  metrics_sdk::PeriodicExportingMetricReaderOptions reader_options;
  reader_options.export_interval_millis = std::chrono::milliseconds(1000);
  reader_options.export_timeout_millis  = std::chrono::milliseconds(500);
  auto reader = metrics_sdk::PeriodicExportingMetricReaderFactory::Create(std::move(exporter), reader_options);
  auto u_provider = metrics_sdk::MeterProviderFactory::Create();
  auto *p = static_cast<metrics_sdk::MeterProvider *>(u_provider.get());
  p->AddMetricReader(std::move(reader));
  std::shared_ptr<opentelemetry::metrics::MeterProvider> provider(std::move(u_provider));
  metrics_api::Provider::SetMeterProvider(provider);
}

void InitLogger()
{
  auto exporter = logs_exporter::OStreamLogRecordExporterFactory::Create();
  auto processor = logs_sdk::SimpleLogRecordProcessorFactory::Create(std::move(exporter));
  nostd::shared_ptr<logs_api::LoggerProvider> provider(
      logs_sdk::LoggerProviderFactory::Create(std::move(processor)));
  logs_api::Provider::SetLoggerProvider(provider);
}

Jaeger

Jaeger natively supports OTLP to receive trace data. You can run Jaeger in a docker container with the UI accessible on port 16686 and OTLP enabled on ports 4317 and 4318:

docker run --rm \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 9411:9411 \
  jaegertracing/all-in-one:latest

Now follow the instructions to set up the OTLP exporters.
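For example, with the Jaeger container above, the OTLP HTTP trace exporter shown in the Usage section only needs to target Jaeger's OTLP/HTTP port. A minimal sketch, assuming Jaeger is running locally:

// Point the OTLP HTTP trace exporter from the Usage section at Jaeger.
otlp::OtlpHttpExporterOptions opts;
opts.url = "http://localhost:4318/v1/traces";
auto exporter = otlp::OtlpHttpExporterFactory::Create(opts);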


Prometheus

To send your metric data to Prometheus, you can either enable Prometheus' OTLP Receiver and use the OTLP exporter, or you can use the PrometheusHttpServer, a MetricReader that starts an HTTP server which collects metrics and serializes them to Prometheus text format on request.

Backend Setup


Note: If you already have Prometheus or a Prometheus-compatible backend set up, you can skip this section and set up the Prometheus or OTLP exporter dependencies for your application.

You can run Prometheus in a docker container, accessible on port 9090, by following these instructions:

Create a file called prometheus.yml with the following content:

scrape_configs:
  - job_name: dice-service
    scrape_interval: 5s
    static_configs:
      - targets: [host.docker.internal:9464]

Run Prometheus in a docker container with the UI accessible on port 9090:
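A minimal sketch of such a command, assuming the official prom/prometheus image and the prometheus.yml file created above (the feature flag is only needed if you use Prometheus' OTLP Receiver):

docker run --rm -d -p 9090:9090 --name prometheus \
  -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --enable-feature=otlp-write-receiver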

Note: When using Prometheus' OTLP Receiver, make sure that you set the OTLP endpoint for metrics in your application to http://localhost:9090/api/v1/otlp. Also, not all docker environments support host.docker.internal; in some cases you may need to replace host.docker.internal with localhost or the IP address of your machine.

Dependencies


To send your metric data to Prometheus, make sure that you have set the right cmake build variables while building OpenTelemetry C++ from source:

cmake -DWITH_PROMETHEUS=ON ...

Update your OpenTelemetry configuration to use the Prometheus exporter:

#include "opentelemetry/exporters/prometheus/exporter_factory.h"
#include "opentelemetry/exporters/prometheus/exporter_options.h"
#include "opentelemetry/metrics/provider.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include "opentelemetry/sdk/metrics/meter_provider_factory.h"

namespace metrics_sdk      = opentelemetry::sdk::metrics;
namespace metrics_api      = opentelemetry::metrics;
namespace metrics_exporter = opentelemetry::exporter::metrics;

void InitMetrics()
{
    metrics_exporter::PrometheusExporterOptions opts;
    opts.url = "localhost:9464";
    auto prometheus_exporter = metrics_exporter::PrometheusExporterFactory::Create(opts);
    auto u_provider = metrics_sdk::MeterProviderFactory::Create();
    auto *p = static_cast<metrics_sdk::MeterProvider *>(u_provider.get());
    p->AddMetricReader(std::move(prometheus_exporter));
    std::shared_ptr<metrics_api::MeterProvider> provider(std::move(u_provider));
    metrics_api::Provider::SetMeterProvider(provider);
}

With the above you can access your metrics at http://localhost:9464/metrics. Prometheus or an OpenTelemetry Collector with the Prometheus receiver can scrape the metrics from this endpoint.
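To quickly verify that the endpoint is serving data, you can query it directly while the instrumented application is running:

curl http://localhost:9464/metrics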

Zipkin

Backend Setup


Note: If you already have Zipkin or a Zipkin-compatible backend set up, you can skip this section and set up the Zipkin exporter dependencies for your application.

You can run Zipkin in a Docker container by executing the following command:

docker run --rm -d -p 9411:9411 --name zipkin openzipkin/zipkin
Dependencies


To send your trace data to Zipkin, make sure that you have set the right cmake build variables while building OpenTelemetry C++ from source:

cmake -DWITH_ZIPKIN=ON ...

Update your OpenTelemetry configuration to use the Zipkin Exporter and to send data to your Zipkin backend:

#include "opentelemetry/exporters/zipkin/zipkin_exporter_factory.h"
#include "opentelemetry/sdk/resource/resource.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/simple_processor_factory.h"
#include "opentelemetry/sdk/trace/tracer_provider_factory.h"
#include "opentelemetry/trace/provider.h"

namespace trace     = opentelemetry::trace;
namespace trace_sdk = opentelemetry::sdk::trace;
namespace zipkin    = opentelemetry::exporter::zipkin;
namespace resource  = opentelemetry::sdk::resource;

void InitTracer()
{
  zipkin::ZipkinExporterOptions opts;
  resource::ResourceAttributes attributes = {{"service.name", "zipkin_demo_service"}};
  auto resource                           = resource::Resource::Create(attributes);
  auto exporter                           = zipkin::ZipkinExporterFactory::Create(opts);
  auto processor = trace_sdk::SimpleSpanProcessorFactory::Create(std::move(exporter));
  std::shared_ptr<opentelemetry::trace::TracerProvider> provider =
      trace_sdk::TracerProviderFactory::Create(std::move(processor), resource);
  // Set the global trace provider
  trace::Provider::SetTracerProvider(provider);
}
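With the Zipkin container from the backend setup running, the exporter's default endpoint targets the local Zipkin instance, and exported spans should appear in the Zipkin UI at http://localhost:9411 (assuming the port mapping shown above).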

Other available exporters


There are many other exporters available. For a list of available exporters, see the registry.

Finally, you can also write your own exporter. For more information, see the SpanExporter Interface in the API documentation.

Batching span and log records


The OpenTelemetry SDK provides a set of default span and log record processors that allow you to emit spans either one-by-one ("simple") or batched. Using batching is recommended, but if you do not want to batch your spans or log records, you can use a simple processor instead, as follows:

// Batch
#include "opentelemetry/exporters/otlp/otlp_http_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_exporter_options.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/batch_span_processor_factory.h"
#include "opentelemetry/sdk/trace/batch_span_processor_options.h"

opentelemetry::sdk::trace::BatchSpanProcessorOptions options{};

// `opts` refers to the OtlpHttpExporterOptions shown in the Usage section above.
auto exporter  = opentelemetry::exporter::otlp::OtlpHttpExporterFactory::Create(opts);
auto processor = opentelemetry::sdk::trace::BatchSpanProcessorFactory::Create(std::move(exporter), options);

Or, with a simple (unbatched) processor:

// Simple
#include "opentelemetry/exporters/otlp/otlp_http_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_exporter_options.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/simple_processor_factory.h"

auto exporter  = opentelemetry::exporter::otlp::OtlpHttpExporterFactory::Create(opts);
auto processor = opentelemetry::sdk::trace::SimpleSpanProcessorFactory::Create(std::move(exporter));
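The same choice applies to log records. A minimal sketch using the batch log record processor from the logs SDK (assumed here; swap in SimpleLogRecordProcessorFactory, as used earlier, if you do not want batching), where `opts` is the OtlpHttpLogRecordExporterOptions from the Usage section:

#include "opentelemetry/exporters/otlp/otlp_http_log_record_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_log_record_exporter_options.h"
#include "opentelemetry/sdk/logs/batch_log_record_processor_factory.h"
#include "opentelemetry/sdk/logs/batch_log_record_processor_options.h"

opentelemetry::sdk::logs::BatchLogRecordProcessorOptions log_options{};

auto log_exporter  = opentelemetry::exporter::otlp::OtlpHttpLogRecordExporterFactory::Create(opts);
auto log_processor = opentelemetry::sdk::logs::BatchLogRecordProcessorFactory::Create(std::move(log_exporter), log_options);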