@nevermined-io/payments

    Class ObservabilityAPI

    The ObservabilityAPI class provides methods to wrap API calls with Helicone logging.

    Hierarchy

    • BasePaymentsAPI
      • ObservabilityAPI

    Properties

    accountAddress: string
    appId?: string
    environment: EnvironmentInfo
    environmentName: EnvironmentName
    heliconeApiKey: string
    heliconeAsyncLoggingUrl: string
    heliconeBaseLoggingUrl: string
    heliconeManualLoggingUrl: string
    isBrowserInstance: boolean = true
    nvmApiKey: string
    returnUrl: string
    version?: string

    Methods

    • Helper function to calculate usage for dummy song operations

      Returns {
          completion_tokens: number;
          completion_tokens_details?: {
              accepted_prediction_tokens: number;
              audio_tokens: number;
              reasoning_tokens: number;
              rejected_prediction_tokens: number;
          };
          prompt_tokens: number;
          prompt_tokens_details?: { audio_tokens: number; cached_tokens: number };
          total_tokens: number;
      }

    • Helper function to calculate usage for image operations based on pixels

      Parameters

      • pixels: number

      Returns {
          completion_tokens: number;
          completion_tokens_details?: {
              accepted_prediction_tokens: number;
              audio_tokens: number;
              reasoning_tokens: number;
              rejected_prediction_tokens: number;
          };
          prompt_tokens: number;
          prompt_tokens_details?: { audio_tokens: number; cached_tokens: number };
          total_tokens: number;
      }

    • Helper function to calculate usage for song operations based on tokens/quota

      Parameters

      • tokens: number

      Returns {
          completion_tokens: number;
          completion_tokens_details?: {
              accepted_prediction_tokens: number;
              audio_tokens: number;
              reasoning_tokens: number;
              rejected_prediction_tokens: number;
          };
          prompt_tokens: number;
          prompt_tokens_details?: { audio_tokens: number; cached_tokens: number };
          total_tokens: number;
      }

    • Helper function to calculate usage for video operations (typically 1 token)

      Returns {
          completion_tokens: number;
          completion_tokens_details?: {
              accepted_prediction_tokens: number;
              audio_tokens: number;
              reasoning_tokens: number;
              rejected_prediction_tokens: number;
          };
          prompt_tokens: number;
          prompt_tokens_details?: { audio_tokens: number; cached_tokens: number };
          total_tokens: number;
      }
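    The four helpers above all return the same OpenAI-style usage object. As an illustrative sketch of the pixel-based variant (the name `calculateImageUsage` and the 1-token-per-1000-pixels divisor are assumptions for illustration, not the SDK's actual values):

```typescript
// Hypothetical sketch: builds the OpenAI-style usage object the helpers above
// return. The divisor (1 token per 1000 pixels) is an assumed example value,
// not the SDK's actual conversion factor.
interface UsageMetrics {
  completion_tokens: number;
  prompt_tokens: number;
  total_tokens: number;
}

function calculateImageUsage(pixels: number): UsageMetrics {
  const completion_tokens = Math.ceil(pixels / 1000);
  return {
    completion_tokens,
    prompt_tokens: 0, // image generation consumes no prompt tokens in this sketch
    total_tokens: completion_tokens,
  };
}

const usage = calculateImageUsage(1024 * 1024);
console.log(usage.total_tokens); // 1049 (ceil(1048576 / 1000))
```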

    • Creates a standardized Helicone payload for API logging

      Parameters

      • config: HeliconePayloadConfig

      Returns {
          frequency_penalty: number;
          messages: { content: string; role: string }[];
          model: string;
          n: number;
          presence_penalty: number;
          stream: boolean;
          temperature: number;
          top_p: number;
      }

    • Creates a standardized Helicone response for API logging

      Parameters

      • config: HeliconeResponseConfig

      Returns {
          choices: {
              finish_reason: string;
              index: number;
              logprobs: null;
              message: {
                  annotations: never[];
                  content: string;
                  refusal: null;
                  role: string;
              };
          }[];
          created: number;
          id: string;
          model: string;
          object: string;
          service_tier: string;
          system_fingerprint: string;
          usage: {
              completion_tokens: number;
              completion_tokens_details: {
                  accepted_prediction_tokens: number;
                  audio_tokens: number;
                  reasoning_tokens: number;
                  rejected_prediction_tokens: number;
              };
              prompt_tokens: number;
              prompt_tokens_details: { audio_tokens: number; cached_tokens: number };
              total_tokens: number;
          };
      }
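    Taken together, the payload mirrors an OpenAI chat-completion request and the response mirrors a chat-completion response. A sketch of constructing a response object of the documented shape (the builder name and all field values are illustrative placeholders, not the SDK's output):

```typescript
// Illustrative construction of an OpenAI-chat-completion-shaped response like
// the one described above; values are example placeholders.
function buildHeliconeResponse(id: string, model: string, content: string) {
  return {
    id,
    object: 'chat.completion',
    created: Math.floor(Date.now() / 1000), // unix seconds
    model,
    service_tier: 'default',
    system_fingerprint: 'fp_example',
    choices: [
      {
        index: 0,
        finish_reason: 'stop',
        logprobs: null,
        message: { role: 'assistant', content, refusal: null, annotations: [] },
      },
    ],
    usage: {
      prompt_tokens: 0,
      completion_tokens: 1,
      total_tokens: 1,
      prompt_tokens_details: { audio_tokens: 0, cached_tokens: 0 },
      completion_tokens_details: {
        accepted_prediction_tokens: 0,
        audio_tokens: 0,
        reasoning_tokens: 0,
        rejected_prediction_tokens: 0,
      },
    },
  };
}

const resp = buildHeliconeResponse('song-123', 'gpt-4o-mini', 'done');
console.log(resp.choices[0].message.content); // done
```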

    • Returns the account address associated with the NVM API Key used to initialize the Payments Library instance.

      Returns string | undefined

      The account address extracted from the NVM API Key

    • Internal

      Returns the HTTP options required to query the backend.

      Parameters

      • method: string

        HTTP method.

      • Optional body: any

        Optional request body.

      Returns any

      HTTP options object.

    • Internal

      Get HTTP options for public backend requests (no authorization header). Converts body keys from snake_case to camelCase for consistency.

      Parameters

      • method: string

        HTTP method

      • Optional body: any

        Optional request body (keys will be converted to camelCase)

      Returns {
          body?: string;
          headers: { Accept: string; "Content-Type": string };
          method: string;
      }

      HTTP options object
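    The snake_case-to-camelCase key conversion described above can be sketched as follows (the helper name `snakeToCamelKeys` is an assumption for illustration, not the SDK's internal API):

```typescript
// Illustrative sketch of the key conversion the public-backend options method
// performs; the helper name is assumed, not the SDK's actual internal API.
function snakeToCamelKeys(obj: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    const camel = key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
    out[camel] = value;
  }
  return out;
}

const options = {
  method: 'POST',
  headers: { Accept: 'application/json', 'Content-Type': 'application/json' },
  body: JSON.stringify(snakeToCamelKeys({ agent_id: 'abc', session_id: '123' })),
};
console.log(options.body); // {"agentId":"abc","sessionId":"123"}
```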

    • Parses the NVM API Key to extract the account address.

      Returns { accountAddress: string; heliconeApiKey: string }

      Throws PaymentsError if the API key is invalid.

    • Creates an async logger with Nevermined logging enabled and automatic property injection

      This method wraps the OpenTelemetry SpanProcessor to automatically add all Helicone properties to every span, mimicking Python's Traceloop.set_association_properties(): no wrapping of individual LLM calls is needed.

      Parameters

      • providers: AsyncLoggerProviders

        AI SDK modules to instrument (OpenAI, Anthropic, etc.)

      • agentRequest: StartAgentRequest

        The agent request for logging purposes

      • Optional customProperties: CustomProperties

        Custom properties to add as Helicone headers

      Returns { init: () => void }

      The async logger instance with init() method

      import OpenAI from 'openai';

      const logger = observability.withAsyncLogger({ openAI: OpenAI }, agentRequest);
      logger.init();

      // Make LLM calls normally - properties are automatically added to all spans!
      const openai = new OpenAI({ apiKey });
      const result = await openai.chat.completions.create({ ... });

    • Wraps an async operation with Helicone logging

      Type Parameters

      • TInternal = any
      • TExtracted = any

      Parameters

      • agentName: string

        Name of the agent for logging purposes

      • payloadConfig: HeliconePayloadConfig

        Configuration for the Helicone payload

      • operation: () => Promise<TInternal>

        The async operation to execute (returns internal result with extra data)

      • resultExtractor: (internalResult: TInternal) => TExtracted

        Function to extract the user-facing result from internal result

      • usageCalculator: (
            internalResult: TInternal,
        ) => {
            completion_tokens: number;
            completion_tokens_details?: {
                accepted_prediction_tokens: number;
                audio_tokens: number;
                reasoning_tokens: number;
                rejected_prediction_tokens: number;
            };
            prompt_tokens: number;
            prompt_tokens_details?: { audio_tokens: number; cached_tokens: number };
            total_tokens: number;
        }

        Function to calculate usage metrics from the internal result

      • responseIdPrefix: string

        Prefix for the response ID

      • startAgentRequest: StartAgentRequest

        The agent request for logging purposes

      • customProperties: CustomProperties

        Custom properties to add as Helicone headers (should include agentid and sessionid)

      Returns Promise<TExtracted>

      Promise that resolves to the extracted user result
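    The control flow of this wrapper can be sketched with a minimal stand-in (the name `wrapOperation` and the stubbed-out logging call are assumptions for illustration; the real method also builds and ships the Helicone payload):

```typescript
// Minimal stand-in for the wrapper's control flow: run the operation, compute
// usage from the internal result, log it (stubbed here), and hand back only
// the extracted user-facing value. Names are illustrative, not the SDK's API.
async function wrapOperation<TInternal, TExtracted>(
  operation: () => Promise<TInternal>,
  resultExtractor: (internal: TInternal) => TExtracted,
  usageCalculator: (internal: TInternal) => { total_tokens: number },
): Promise<TExtracted> {
  const internal = await operation();
  const usage = usageCalculator(internal); // would be sent to Helicone
  console.log(`logged ${usage.total_tokens} tokens`);
  return resultExtractor(internal);
}

const song = await wrapOperation(
  async () => ({ url: 'https://example.com/song.mp3', tokens: 42 }),
  (r) => r.url,
  (r) => ({ total_tokens: r.tokens }),
);
console.log(song); // https://example.com/song.mp3
```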

    • Creates a ChatOpenAI configuration with logging enabled

      Usage: const llm = new ChatOpenAI(observability.withLangchain("gpt-4o-mini", apiKey, agentRequest, customProperties));

      Parameters

      • model: string

        The OpenAI model to use (e.g., "gpt-4o-mini", "gpt-4")

      • apiKey: string

        The OpenAI API key

      • startAgentRequest: StartAgentRequest
      • customProperties: CustomProperties

        Custom properties to add as Helicone headers (should include agentid and sessionid)

      Returns ChatOpenAIConfiguration

      Configuration object for ChatOpenAI constructor with logging enabled

    • Creates an OpenAI client configuration with logging enabled

      Usage: const openai = new OpenAI(observability.withOpenAI(apiKey, agentRequest, customProperties));

      Parameters

      • apiKey: string

        The OpenAI API key

      • agentRequest: StartAgentRequest

        The agent request for logging purposes

      • customProperties: CustomProperties

        Custom properties to add as Helicone headers (should include agentid and sessionid)

      Returns OpenAIConfiguration

      Configuration object for OpenAI constructor with logging enabled
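    Helicone integrations typically route OpenAI traffic through Helicone's proxy base URL and attach custom properties as `Helicone-Property-*` headers. A hedged sketch of what such a configuration might contain (the builder name and exact fields are assumptions; the SDK's actual return value may differ):

```typescript
// Hypothetical sketch of a Helicone-style OpenAI client configuration; the
// exact fields the SDK returns may differ.
function buildOpenAIConfiguration(
  apiKey: string,
  heliconeApiKey: string,
  customProperties: Record<string, string>,
) {
  // Each custom property becomes a Helicone-Property-* request header.
  const propertyHeaders = Object.fromEntries(
    Object.entries(customProperties).map(([k, v]) => [`Helicone-Property-${k}`, v]),
  );
  return {
    apiKey,
    baseURL: 'https://oai.helicone.ai/v1', // Helicone's OpenAI proxy endpoint
    defaultHeaders: {
      'Helicone-Auth': `Bearer ${heliconeApiKey}`,
      ...propertyHeaders,
    },
  };
}

const config = buildOpenAIConfiguration('sk-example', 'hl-example', {
  agentid: 'agent-1',
  sessionid: 'sess-9',
});
console.log(config.defaultHeaders['Helicone-Property-agentid']); // agent-1
```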