Class GeminiUtil
java.lang.Object
com.google.adk.models.GeminiUtil
Field Summary

Fields
  CONTINUE_OUTPUT_MESSAGE
Method Summary

  static String getTextFromLlmResponse(LlmResponse llmResponse)
      Extracts text content from the first part of an LlmResponse, if available.

  static LlmRequest prepareGenenerateContentRequest(LlmRequest llmRequest, boolean sanitize)
      Prepares an LlmRequest for the GenerateContent API.

  static LlmRequest sanitizeRequestForGeminiApi(LlmRequest llmRequest)
      Sanitizes the request to ensure it is compatible with the Gemini API backend.

  static boolean shouldEmitAccumulatedText(LlmResponse currentLlmResponse)
      Determines if accumulated text should be emitted based on the current LlmResponse.

  static List<com.google.genai.types.Content> stripThoughts(List<com.google.genai.types.Content> originalContents)
      Removes any Part that contains only a thought from the content list.
Field Details

CONTINUE_OUTPUT_MESSAGE

Method Details
prepareGenenerateContentRequest

Prepares an LlmRequest for the GenerateContent API. This method can optionally sanitize the request, and it ensures that the last content is from the user in order to prompt a model response. It also strips out any parts marked as "thoughts".

Parameters:
  llmRequest - The original LlmRequest.
  sanitize - Whether to sanitize the request to be compatible with the Gemini API backend.
Returns:
  The prepared LlmRequest.
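As a rough illustration of the "ensure the last content is from the user" step described above, the following sketch uses a simplified Content record and a hypothetical continuation string; neither is the real ADK type, and the actual CONTINUE_OUTPUT_MESSAGE text is not shown in this page.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: Content is a stand-in for com.google.genai.types.Content,
// and the message text below is an assumed placeholder.
public class EnsureUserTurn {
  record Content(String role, String text) {}

  static final String CONTINUE_OUTPUT_MESSAGE = "Continue output.";

  // If the history is empty or ends with a non-user turn, append a synthetic
  // user message so the model is prompted to respond.
  static List<Content> ensureLastIsUser(List<Content> contents) {
    List<Content> result = new ArrayList<>(contents);
    if (result.isEmpty() || !"user".equals(result.get(result.size() - 1).role())) {
      result.add(new Content("user", CONTINUE_OUTPUT_MESSAGE));
    }
    return result;
  }

  public static void main(String[] args) {
    List<Content> history =
        List.of(new Content("user", "hi"), new Content("model", "partial..."));
    List<Content> prepared = ensureLastIsUser(history);
    System.out.println(prepared.get(prepared.size() - 1).role()); // prints "user"
  }
}
```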
sanitizeRequestForGeminiApi

Sanitizes the request to ensure it is compatible with the Gemini API backend. This is required because some parameters, if included in the request, raise a runtime error when sent to the wrong backend (e.g. image names only work on Vertex AI).

Parameters:
  llmRequest - The request to sanitize.
Returns:
  The sanitized request.
getTextFromLlmResponse

Extracts text content from the first part of an LlmResponse, if available.

Parameters:
  llmResponse - The LlmResponse to extract text from.
Returns:
  The text content, or an empty string if not found.
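The behavior above amounts to "first part's text, else empty string". A minimal sketch, using stand-in records rather than the real LlmResponse/Part types:

```java
import java.util.List;
import java.util.Optional;

// Sketch only: Part and Response are simplified stand-ins for the ADK types.
public class FirstPartText {
  record Part(Optional<String> text) {}
  record Response(List<Part> parts) {}

  // Return the text of the first part, or "" when there is no part or no text.
  static String getText(Response response) {
    if (response.parts().isEmpty()) {
      return "";
    }
    return response.parts().get(0).text().orElse("");
  }

  public static void main(String[] args) {
    System.out.println(getText(new Response(List.of(new Part(Optional.of("hello")))))); // prints "hello"
    System.out.println(getText(new Response(List.of())).isEmpty()); // prints "true"
  }
}
```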
shouldEmitAccumulatedText

Determines if accumulated text should be emitted based on the current LlmResponse. Accumulated text is flushed when the current response is not a text continuation: it has no content, no parts, or its first part is not inline_data (meaning it is something else, or just empty), warranting a flush of the preceding text.

Parameters:
  currentLlmResponse - The current LlmResponse being processed.
Returns:
  True if accumulated text should be emitted, false otherwise.
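Taking the description above literally, the flush rule can be sketched as a predicate over simplified stand-in types (these are not the real ADK classes, and the real implementation may differ):

```java
import java.util.List;
import java.util.Optional;

// Sketch only: Part and Response are stand-ins. Emit accumulated text when the
// current response has no parts, or its first part is not inline_data.
public class FlushRule {
  record Part(Optional<byte[]> inlineData) {}
  record Response(Optional<List<Part>> parts) {}

  static boolean shouldEmitAccumulatedText(Response r) {
    return r.parts().isEmpty()
        || r.parts().get().isEmpty()
        || r.parts().get().get(0).inlineData().isEmpty();
  }

  public static void main(String[] args) {
    // First part carries no inline_data: flush the accumulated text.
    Response other = new Response(Optional.of(List.of(new Part(Optional.empty()))));
    System.out.println(shouldEmitAccumulatedText(other)); // prints "true"
    // First part is inline_data: keep accumulating.
    Response blob = new Response(Optional.of(List.of(new Part(Optional.of(new byte[] {1})))));
    System.out.println(shouldEmitAccumulatedText(blob)); // prints "false"
  }
}
```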
stripThoughts

Removes any Part that contains only a thought from the content list.

Parameters:
  originalContents - The original list of Content.
Returns:
  A list of Content with thought-only parts removed.
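The filtering described above can be sketched with simplified types, where a boolean flag marks thought parts (the real com.google.genai.types.Part is richer than this stand-in):

```java
import java.util.List;

// Sketch only: Part and Content are stand-ins for the genai types.
// Drop every part flagged as a thought; keep all other parts.
public class StripThoughts {
  record Part(String text, boolean thought) {}
  record Content(String role, List<Part> parts) {}

  static List<Content> stripThoughts(List<Content> originalContents) {
    return originalContents.stream()
        .map(c -> new Content(
            c.role(),
            c.parts().stream().filter(p -> !p.thought()).toList()))
        .toList();
  }

  public static void main(String[] args) {
    List<Content> contents = List.of(new Content("model",
        List.of(new Part("thinking...", true), new Part("answer", false))));
    List<Content> stripped = stripThoughts(contents);
    System.out.println(stripped.get(0).parts().size()); // prints "1"
    System.out.println(stripped.get(0).parts().get(0).text()); // prints "answer"
  }
}
```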