Class GoogleBaseLLM<AuthOptions> (Abstract)

Integration with a Google large language model (LLM). This abstract base class is shared by the concrete, platform-specific implementations.
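Because the class is abstract, it is used through a concrete subclass. A minimal usage sketch, assuming the GoogleLLM subclass exported by @langchain/google-gauth and ambient Google application credentials:

    import { GoogleLLM } from "@langchain/google-gauth";

    // GoogleLLM is a concrete subclass of GoogleBaseLLM that wires in
    // Google authentication from the environment.
    const llm = new GoogleLLM({
      model: "gemini-pro",
      temperature: 0.7,
      maxOutputTokens: 1024,
    });

    // LLM subclasses accept a plain string prompt and resolve to a string.
    const completion = await llm.invoke("Say hello in one sentence.");
    console.log(completion);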

Type Parameters

  • AuthOptions

Properties

maxOutputTokens: number = 1024

Maximum number of tokens to generate in the completion.

model: string = "gemini-pro"

Model to use.

modelName: string = "gemini-pro"

Model to use. Alias for model.

responseMimeType: GoogleAIResponseMimeType = "text/plain"

Available for gemini-1.5-pro. The output format of the generated candidate text. Supported MIME types:

  • text/plain: Text output.
  • application/json: JSON response in the candidates.
"text/plain"
safetyHandler: GoogleAISafetyHandler

safetySettings: GoogleAISafetySetting[] = []

stopSequences: string[] = []
temperature: number = 0.7

Sampling temperature to use.

topK: number = 40

Top-k changes how the model selects tokens for output.

A top-k of 1 means the selected token is the most probable among all tokens in the model’s vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).
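Token selection happens server-side; the snippet below is only an illustrative sketch of the top-k idea, not the library's implementation:

    // Illustrative only: keep the k most probable tokens; sampling
    // (with temperature) then happens among the survivors.
    function topKFilter(probs: Map<string, number>, k: number): Map<string, number> {
      const kept = [...probs.entries()].sort((a, b) => b[1] - a[1]).slice(0, k);
      return new Map(kept);
    }

With k = 1 only the single most probable token survives the filter, which is exactly greedy decoding.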

topP: number = 0.8

Top-p changes how the model selects tokens for output.

Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value.

For example, if tokens A, B, and C have a probability of .3, .2, and .1 and the top-p value is .5, then the model will select either A or B as the next token (using temperature).
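As with top-k, this is applied server-side; an illustrative sketch of the cumulative cutoff (not the library's code):

    // Illustrative only: keep tokens from most to least probable until
    // their cumulative probability reaches p, then sample among them.
    function topPFilter(probs: Map<string, number>, p: number): Map<string, number> {
      const sorted = [...probs.entries()].sort((a, b) => b[1] - a[1]);
      const kept = new Map<string, number>();
      let cumulative = 0;
      for (const [token, prob] of sorted) {
        kept.set(token, prob);
        cumulative += prob;
        if (cumulative >= p) break;
      }
      return kept;
    }

    // topPFilter(new Map([["A", 0.3], ["B", 0.2], ["C", 0.1]]), 0.5)
    // keeps A and B, matching the example above.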

connection: GoogleLLMConnection<AuthOptions>

streamedConnection: GoogleLLMConnection<AuthOptions>

Methods

  • Parameters

    • messages: BaseMessage[]
    • Optional options: any
    • Optional _callbacks: any

    Returns Promise<BaseMessage>
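The method name is elided above, but the signature matches the message-based predictMessages entry point that LangChain language models inherit; assuming that (and the GoogleLLM subclass again), usage would look like:

    import { GoogleLLM } from "@langchain/google-gauth";
    import { HumanMessage } from "@langchain/core/messages";

    const llm = new GoogleLLM({ model: "gemini-pro" });

    // Assumption: this is the inherited predictMessages entry point,
    // taking BaseMessage[] and resolving to a single BaseMessage.
    const reply = await llm.predictMessages([
      new HumanMessage("What is the capital of France?"),
    ]);
    console.log(reply.content);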

""