The ObservabilityAPI class provides methods to wrap API calls with Helicone logging.

Hierarchy

  • BasePaymentsAPI
    • ObservabilityAPI

Constructors

Properties

accountAddress?: string
appId?: string
environment: EnvironmentInfo
heliconeApiKey?: string
heliconeBaseLoggingUrl: string
heliconeManualLoggingUrl: string
isBrowserInstance: boolean = true
nvmApiKey: string
returnUrl: string
version?: string

Methods

  • Helper function to calculate usage for dummy song operations

    Returns {
        completion_tokens: number;
        completion_tokens_details?: {
            accepted_prediction_tokens: number;
            audio_tokens: number;
            reasoning_tokens: number;
            rejected_prediction_tokens: number;
        };
        prompt_tokens: number;
        prompt_tokens_details?: {
            audio_tokens: number;
            cached_tokens: number;
        };
        total_tokens: number;
    }

  • Helper function to calculate usage for image operations based on pixels

    Parameters

    • pixels: number

    Returns {
        completion_tokens: number;
        completion_tokens_details?: {
            accepted_prediction_tokens: number;
            audio_tokens: number;
            reasoning_tokens: number;
            rejected_prediction_tokens: number;
        };
        prompt_tokens: number;
        prompt_tokens_details?: {
            audio_tokens: number;
            cached_tokens: number;
        };
        total_tokens: number;
    }

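
    All four helpers return the same usage shape. The sketch below builds one for an image operation priced by pixel count; the interface mirrors the documented return type, but the pixels-per-token rate (1024) is a made-up assumption for illustration, not the library's actual conversion.

```typescript
// Sketch only: a usage object in the shape the helpers return.
interface UsageMetrics {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  prompt_tokens_details?: { audio_tokens: number; cached_tokens: number };
  completion_tokens_details?: {
    accepted_prediction_tokens: number;
    audio_tokens: number;
    reasoning_tokens: number;
    rejected_prediction_tokens: number;
  };
}

function imageUsageFromPixels(pixels: number): UsageMetrics {
  // Hypothetical rate: 1 token per 1024 pixels (assumption, not the library's).
  const completion_tokens = Math.ceil(pixels / 1024);
  return { prompt_tokens: 0, completion_tokens, total_tokens: completion_tokens };
}
```
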
  • Helper function to calculate usage for song operations based on tokens/quota

    Parameters

    • tokens: number

    Returns {
        completion_tokens: number;
        completion_tokens_details?: {
            accepted_prediction_tokens: number;
            audio_tokens: number;
            reasoning_tokens: number;
            rejected_prediction_tokens: number;
        };
        prompt_tokens: number;
        prompt_tokens_details?: {
            audio_tokens: number;
            cached_tokens: number;
        };
        total_tokens: number;
    }

  • Helper function to calculate usage for video operations (typically 1 token)

    Returns {
        completion_tokens: number;
        completion_tokens_details?: {
            accepted_prediction_tokens: number;
            audio_tokens: number;
            reasoning_tokens: number;
            rejected_prediction_tokens: number;
        };
        prompt_tokens: number;
        prompt_tokens_details?: {
            audio_tokens: number;
            cached_tokens: number;
        };
        total_tokens: number;
    }

  • Creates a standardized Helicone payload for API logging

    Parameters

    Returns {
        frequency_penalty: number;
        messages: {
            content: string;
            role: string;
        }[];
        model: string;
        n: number;
        presence_penalty: number;
        stream: boolean;
        temperature: number;
        top_p: number;
    }

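
    A payload matching the documented return shape looks like the following sketch; the model name and message content are placeholders, and the sampling values are illustrative defaults rather than the library's verified output.

```typescript
// Sketch only: an object in the documented Helicone payload shape.
const heliconePayload = {
  model: "gpt-4o-mini",                                   // placeholder model
  messages: [{ role: "user", content: "Generate a song" }], // placeholder prompt
  temperature: 1,
  top_p: 1,
  n: 1,
  stream: false,
  presence_penalty: 0,
  frequency_penalty: 0,
};
```
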
  • Creates a standardized Helicone response for API logging

    Parameters

    Returns {
        choices: {
            finish_reason: string;
            index: number;
            logprobs: null;
            message: {
                annotations: never[];
                content: string;
                refusal: null;
                role: string;
            };
        }[];
        created: number;
        id: string;
        model: string;
        object: string;
        service_tier: string;
        system_fingerprint: string;
        usage: {
            completion_tokens: number;
            completion_tokens_details: {
                accepted_prediction_tokens: number;
                audio_tokens: number;
                reasoning_tokens: number;
                rejected_prediction_tokens: number;
            };
            prompt_tokens: number;
            prompt_tokens_details: {
                audio_tokens: number;
                cached_tokens: number;
            };
            total_tokens: number;
        };
    }

  • Returns the account address associated with the NVM API Key used to initialize the Payments library instance.

    Returns undefined | string

    The account address extracted from the NVM API Key

  • Internal

    Returns the HTTP options required to query the backend.

    Parameters

    • method: string

      HTTP method.

    • Optional body: any

      Optional request body.

    Returns any

    HTTP options object.

  • Parses the NVM API Key to extract the account address.

    Returns void

    Throws

    PaymentsError if the API key is invalid.

  • Creates a ChatOpenAI configuration with Helicone logging enabled

    Usage: const llm = new ChatOpenAI(observability.withHeliconeLangchain("gpt-4o-mini", apiKey, customProperties));

    Parameters

    • model: string

      The OpenAI model to use (e.g., "gpt-4o-mini", "gpt-4")

    • apiKey: string

      The OpenAI API key

    • customProperties: CustomProperties

      Custom properties to add as Helicone headers (should include agentid and sessionid)

    Returns ChatOpenAIConfiguration

    Configuration object for ChatOpenAI constructor with Helicone enabled
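
    A stand-in sketch of the shape such a configuration typically takes: requests are routed through the Helicone proxy and each custom property is forwarded as a Helicone-Property-* header. The field names and proxy URL here are assumptions for illustration, not the library's verified output.

```typescript
// Sketch only: a local stand-in, not the library's implementation.
function heliconeLangchainConfigSketch(
  model: string,
  apiKey: string,
  customProperties: Record<string, string>,
) {
  return {
    model,
    apiKey,
    configuration: {
      baseURL: "https://oai.helicone.ai/v1", // Helicone's OpenAI proxy
      defaultHeaders: Object.fromEntries(
        Object.entries(customProperties).map(
          ([key, value]) => [`Helicone-Property-${key}`, value],
        ),
      ),
    },
  };
}
```
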

  • Wraps an async operation with Helicone logging

    Type Parameters

    • TInternal = any
    • TExtracted = any

    Parameters

    • agentName: string

      Name of the agent for logging purposes

    • payloadConfig: HeliconePayloadConfig

      Configuration for the Helicone payload

    • operation: (() => Promise<TInternal>)

      The async operation to execute (returns internal result with extra data)

    • resultExtractor: ((internalResult) => TExtracted)

      Function to extract the user-facing result from internal result

    • usageCalculator: ((internalResult) => {
          completion_tokens: number;
          completion_tokens_details?: {
              accepted_prediction_tokens: number;
              audio_tokens: number;
              reasoning_tokens: number;
              rejected_prediction_tokens: number;
          };
          prompt_tokens: number;
          prompt_tokens_details?: {
              audio_tokens: number;
              cached_tokens: number;
          };
          total_tokens: number;
      })

      Function to calculate usage metrics from the internal result

    • responseIdPrefix: string

      Prefix for the response ID

    • customProperties: CustomProperties

      Custom properties to add as Helicone headers (should include agentid and sessionid)

    Returns Promise<TExtracted>

    Promise that resolves to the extracted user result
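
    The wrapping pattern can be sketched with a minimal local stand-in (not the library's code): run the operation, report the calculated usage to a logger, and hand back only the extracted user-facing result.

```typescript
// Sketch only: illustrates the flow of operation/resultExtractor/usageCalculator.
type Usage = { prompt_tokens: number; completion_tokens: number; total_tokens: number };

async function wrapWithLogging<TInternal, TExtracted>(
  operation: () => Promise<TInternal>,
  resultExtractor: (internal: TInternal) => TExtracted,
  usageCalculator: (internal: TInternal) => Usage,
  logUsage: (usage: Usage) => void, // stand-in for the Helicone logging call
): Promise<TExtracted> {
  const internal = await operation();   // execute the wrapped async operation
  logUsage(usageCalculator(internal));  // report usage metrics to the logger
  return resultExtractor(internal);     // return only the user-facing result
}
```

    The internal result can carry extra bookkeeping data (token counts, request IDs) that the caller never sees; only the extracted value is returned.
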

  • Creates an OpenAI client configuration with Helicone logging enabled

    Usage: const openai = new OpenAI(observability.withHeliconeOpenAI(apiKey, customProperties));

    Parameters

    • apiKey: string

      The OpenAI API key

    • customProperties: CustomProperties

      Custom properties to add as Helicone headers (should include agentid and sessionid)

    Returns OpenAIConfiguration

    Configuration object for OpenAI constructor with Helicone enabled
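
    As a sketch under assumptions, a Helicone-enabled OpenAI configuration typically sets the proxy baseURL, a Helicone-Auth header (here passed in explicitly; in the class it would presumably come from the heliconeApiKey property), and one Helicone-Property-* header per custom property. The exact field names are assumptions, not the library's verified output.

```typescript
// Sketch only: a local stand-in for the returned configuration object.
function heliconeOpenAIConfigSketch(
  apiKey: string,
  heliconeApiKey: string, // assumed to mirror the instance's heliconeApiKey property
  customProperties: Record<string, string>,
) {
  return {
    apiKey,
    baseURL: "https://oai.helicone.ai/v1", // Helicone's OpenAI proxy
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${heliconeApiKey}`,
      ...Object.fromEntries(
        Object.entries(customProperties).map(
          ([key, value]) => [`Helicone-Property-${key}`, value],
        ),
      ),
    },
  };
}
```
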