Shahar Luftig
Data Engineer

    Monitoring and observability are crucial in modern applications. With Cloudflare Workers, you get speed and global reach — but visibility into your application’s behavior can be tricky.

    Tail Workers solve this challenge by sending real-time, structured logs from Cloudflare directly to Datadog in a way that’s flexible, reliable, and developer-friendly.

    Key Terms

    Cloudflare Workers – Serverless functions that run at the edge, providing fast, globally distributed compute for web applications.

    Tail Workers – Special Workers that consume real-time logs from other Workers, allowing you to process and forward log data as it happens.

    Logpush – Cloudflare’s batch log export service that sends logs in bulk to external destinations like S3 or HTTP endpoints.

    Datadog – Monitoring and observability platform that provides log management, APM, and infrastructure monitoring.

    The Challenge with Traditional Cloudflare Logging

Cloudflare’s Logpush service makes logging harder than it should be.

    Problems with Logpush

    • Complex configuration — Requires API calls to set up logpush jobs, manage destinations, and configure field mappings
    • No enrichment — You can’t add context like service names, environments, or custom tags before ingestion
    • Rigid schema — Predefined fields only, which reduces flexibility for observability pipelines
    • Processing overhead — Downstream systems must parse and reshape large batches before they’re useful

    Example Logpush Batch Structure

    Here’s what a typical Logpush batch looks like:

    JSON
    {
      "ScriptName": "your-worker-name",
      "EventTimestampMs": 1640995200000,
      "...": "extra metadata",
      "Logs": [
        {
          "TimestampMs": 1640995200000,
          "Message": [
            "{\"level\":\"info\",\"message\":\"Application started\",\"timestamp\":\"2023-01-01T00:00:00.000Z\"}"
          ],
          "Level": "log"
        },
        {
          "TimestampMs": 1640995201000,
          "Message": [
            "{\"level\":\"error\",\"message\":\"Database connection failed\",\"timestamp\":\"2023-01-01T00:00:01.000Z\"}"
          ],
          "Level": "log"
        }
      ]
    }
    

    Notice how each log’s Message field is an array of JSON strings — not parsed objects. This nested structure makes it cumbersome to extract meaningful log data without complex parsing logic.

    Datadog cannot directly parse this format, meaning you need additional processing to convert the nested JSON strings into individual log entries. This format is great for raw archiving, but painful if you want structured, contextual, real-time logs.
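To make that concrete, here is a sketch of the unwrapping a downstream consumer has to do before Logpush data becomes searchable. The LogpushBatch types and flattenLogpushBatch helper are illustrative only, inferred from the batch above rather than taken from any official schema:

TypeScript
// Illustrative types inferred from the batch structure above (not an official schema).
interface LogpushLogEntry {
  TimestampMs: number;
  Message: string[]; // each element is itself a JSON-encoded string
  Level: string;
}

interface LogpushBatch {
  ScriptName: string;
  EventTimestampMs: number;
  Logs: LogpushLogEntry[];
}

// Hypothetical helper: unwrap the double-encoded messages into flat objects.
function flattenLogpushBatch(batch: LogpushBatch): Record<string, unknown>[] {
  return batch.Logs.flatMap(entry =>
    entry.Message.map(raw => {
      try {
        return { scriptName: batch.ScriptName, ...JSON.parse(raw) };
      } catch {
        // Not every message is valid JSON; keep the raw string as a fallback.
        return { scriptName: batch.ScriptName, message: raw };
      }
    })
  );
}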

    The Tail Worker Advantage

    Tail Workers elegantly fix these limitations by providing:

    • Real-time streaming — Logs are processed as they happen, not delayed in bulk pushes
    • Code-driven transformation — Full programmatic control over log processing before shipment: enrich with metadata, apply filtering, or reshape into Datadog’s schema
    • Native integration — Runs fully inside the Cloudflare ecosystem
    • Flexible routing — Route logs differently based on environment, severity, or type (see the sketch below)

    In short: Tail Workers give you control, speed, and clean observability.
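As a taste of that flexibility, a few lines of code are enough to drop debug noise and sample informational logs while always forwarding errors. This is a hypothetical sketch; routeByLevel is not part of the Tail Workers API:

TypeScript
// Hypothetical routing policy: always keep failures, sample the rest.
function routeByLevel(level: string): 'datadog' | 'drop' {
  if (level === 'error') return 'datadog'; // always forward failures
  if (level === 'debug') return 'drop';    // never pay to ship debug noise
  // Sample 10% of remaining logs to control ingestion volume.
  return Math.random() < 0.1 ? 'datadog' : 'drop';
}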

    Quick Start Guide

    Getting started with Tail Workers for Datadog integration is straightforward:

    ShellScript
    npx create-cloudflare@latest project_name
    cd project_name
    npx wrangler secret put DD_API_KEY

    Basic Configuration

    Create a minimal wrangler.jsonc:

    JSONC
    {
      "name": "PROJECT_NAME",
      "main": "src/index.ts",
      "compatibility_date": "2024-09-01",
      "vars": {
        "SERVICE_NAME": "your-service-name",
        "ENVIRONMENT": "production",
        "DD_SITE": "datadoghq.com"
      }
    }
    

    Connect the Tail Worker in your main Worker’s wrangler.toml; the service value must match your deployed Tail Worker’s name:

    TOML
    [[tail_consumers]]
    service = "mcp-tail-worker"
    

    Implementation Details

    Environment Bindings

    Use environment bindings for portability across different environments:

    TypeScript
    export interface Env {
      /** Datadog API key for authentication */
      DD_API_KEY: string;
      /** Service name for tagging logs */
      SERVICE_NAME: string;
      /** Environment name for the tail worker (e.g., production, staging) */
      ENVIRONMENT?: string;
      /** Datadog site (e.g., datadoghq.com, datadoghq.eu, us3.datadoghq.com) */
      DD_SITE?: string;
    }

    Log Transformation

    Convert Cloudflare log events into Datadog’s expected schema:

    TypeScript
    function transformToDatadogLogs(events: TraceItem[], env: Env) {
      // Collect one Datadog entry per individual console message.
      const datadogLogs: Record<string, unknown>[] = [];
    
      events.forEach(event => {
        if (event.logs && Array.isArray(event.logs)) {
          event.logs.forEach(logEntry => {
            const logMessage = Array.isArray(logEntry.message)
              ? logEntry.message.join(' ')
              : String(logEntry.message || '');
    
            datadogLogs.push({
              timestamp: logEntry.timestamp,
              status: logEntry.level || 'info',
              message: logMessage,
              service: event.scriptName || env.SERVICE_NAME,
              ddsource: 'cloudflare-tail-worker',
              ddtags: [
                `service:${event.scriptName || env.SERVICE_NAME}`,
                `source:cloudflare-tail-worker`,
                env.ENVIRONMENT ? `env:${env.ENVIRONMENT}` : 'env:dev'
              ].join(',')
            });
          });
        }
      });
    
      return datadogLogs;
    }
    

    Tail Handler Implementation

    The tail handler receives batches of trace events and ships the transformed logs to Datadog. What follows is a minimal sketch that reuses transformToDatadogLogs and assumes Datadog’s v2 HTTP log intake endpoint; see the repository in the next section for the complete implementation:

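    TypeScript
    export default {
      async tail(events: TraceItem[], env: Env, ctx: ExecutionContext) {
        // Convert the raw trace events into Datadog-shaped log entries.
        const datadogLogs = transformToDatadogLogs(events, env);
        if (datadogLogs.length === 0) return;

        // DD_SITE selects the regional intake host (e.g., datadoghq.eu).
        const site = env.DD_SITE || 'datadoghq.com';

        // waitUntil keeps the Worker alive until the upload settles.
        ctx.waitUntil(
          fetch(`https://http-intake.logs.${site}/api/v2/logs`, {
            method: 'POST',
            headers: {
              'Content-Type': 'application/json',
              'DD-API-KEY': env.DD_API_KEY,
            },
            body: JSON.stringify(datadogLogs),
          })
        );
      },
    };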

    Ready-to-Deploy Solution

    For a complete, production-ready implementation, check out our open-source repository:

    GitHub: https://github.com/explorium-ai/cloudflare-tail-worker-datadog

    This repository provides a ready-to-deploy Tail Worker that converts Cloudflare traces into Datadog logs with tagging and enrichment built-in.

    Quick Deployment

    ShellScript
    git clone https://github.com/explorium-ai/mcp-tail-worker.git
    cd mcp-tail-worker
    npm i
    npx wrangler secret put DD_API_KEY
    npm run deploy

    Attach as tail consumer in your Worker’s wrangler.toml:

    TOML
    [[tail_consumers]]
    service = "mcp-tail-worker"
    

    Datadog Integration

    The Tail Worker outputs structured logs in the format Datadog expects. Each message in the original array becomes a separate, properly formatted log entry:

    JSON
    {
      "timestamp": 1712697600000,
      "status": "info",
      "message": "Application started",
      "service": "your-service-name",
      "ddsource": "cloudflare-tail-worker",
      "ddtags": "service:your-service-name,source:cloudflare-tail-worker,env:production"
    }
    

    This mapping ensures your logs appear in Datadog with useful context, enabling you to:

    • Filter logs by service and environment
    • Create custom dashboards and alerts
    • Search across structured log data efficiently
    • Correlate logs with other Datadog metrics
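
    For example, once these tags are flowing, a Log Explorer query such as service:your-service-name env:production status:error narrows the stream to just the production errors from this service.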

    Conclusion

    Cloudflare Tail Workers bridge the gap between Cloudflare’s raw logging capabilities and Datadog’s powerful observability platform. They enable:

    • Real-time ingestion without waiting for batch jobs
    • Enrichment with context like service names and environment tags
    • Lightweight, resilient forwarding at the edge
    • Custom filtering and transformation to meet your specific needs

    By implementing Tail Workers, you gain the speed and global reach of Cloudflare Workers while maintaining the visibility and monitoring capabilities essential for production applications.

    Whether you’re building a simple API or a complex distributed system, this approach provides the observability foundation you need to maintain reliable, performant applications at scale.

    Ready to get started? Check out the explorium-ai/mcp-tail-worker repository for a complete implementation, or follow the quick start guide above to build your own custom solution.