Generative AI Inference

Client

oci.generative_ai_inference.GenerativeAiInferenceClient OCI Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases for text generation, summarization, and text embeddings.
oci.generative_ai_inference.GenerativeAiInferenceClientCompositeOperations This class provides a wrapper around GenerativeAiInferenceClient and offers convenience methods for operations that would otherwise need to be chained together.
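
A minimal sketch of constructing the client is shown below; it assumes credentials in the default ~/.oci/config profile, and the region-specific service endpoint is only an illustrative value, not a required one:

    import oci
    from oci.generative_ai_inference import GenerativeAiInferenceClient

    # Load credentials from the default OCI configuration file and profile.
    config = oci.config.from_file()

    # The service endpoint is region-specific; the Chicago endpoint below is an
    # illustrative assumption.
    client = GenerativeAiInferenceClient(
        config,
        service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    )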

Models

oci.generative_ai_inference.models.AssistantMessage Represents a single instance of an assistant message.
oci.generative_ai_inference.models.BaseChatRequest The base class to use for the chat inference request.
oci.generative_ai_inference.models.BaseChatResponse The base class for the chat response.
oci.generative_ai_inference.models.ChatChoice Represents a single instance of the chat response.
oci.generative_ai_inference.models.ChatContent The base class for the chat content.
oci.generative_ai_inference.models.ChatDetails The details of the chat conversation for the model to respond to; see the chat examples after this list.
oci.generative_ai_inference.models.ChatResult The response to the chat conversation.
oci.generative_ai_inference.models.Choice Represents a single instance of the generated text.
oci.generative_ai_inference.models.Citation A section of the generated response which cites the documents that were used for generating the response.
oci.generative_ai_inference.models.CohereChatBotMessage A message that represents a single chat dialog in the CHATBOT role.
oci.generative_ai_inference.models.CohereChatRequest Details for the chat request for Cohere models.
oci.generative_ai_inference.models.CohereChatResponse The response to the chat conversation.
oci.generative_ai_inference.models.CohereLlmInferenceRequest Details for the text generation request for Cohere models.
oci.generative_ai_inference.models.CohereLlmInferenceResponse The generated text result to return.
oci.generative_ai_inference.models.CohereMessage A message that represents a single chat dialog.
oci.generative_ai_inference.models.CohereParameterDefinition A definition of a tool parameter.
oci.generative_ai_inference.models.CohereSystemMessage A message that represents a single chat dialog in the SYSTEM role.
oci.generative_ai_inference.models.CohereTool A definition of a tool (function); see the tool-calling example after this list.
oci.generative_ai_inference.models.CohereToolCall A tool call generated by the model.
oci.generative_ai_inference.models.CohereToolMessage A message that represents a single chat dialog in the TOOL role.
oci.generative_ai_inference.models.CohereToolResult The result from invoking tools recommended by the model in the previous chat turn.
oci.generative_ai_inference.models.CohereUserMessage A message that represents a single chat dialog in the USER role.
oci.generative_ai_inference.models.DedicatedServingMode The model’s serving mode is dedicated serving and has an endpoint on a dedicated AI cluster.
oci.generative_ai_inference.models.EmbedTextDetails Details for the request to embed texts; see the embedding example after this list.
oci.generative_ai_inference.models.EmbedTextResult The generated embeddings result to return.
oci.generative_ai_inference.models.GenerateTextDetails Details for the request to generate text; see the text generation example after this list.
oci.generative_ai_inference.models.GenerateTextResult The generated text result to return.
oci.generative_ai_inference.models.GeneratedText The text generated during each run.
oci.generative_ai_inference.models.GenericChatRequest Details for the chat request.
oci.generative_ai_inference.models.GenericChatResponse The response for a chat conversation.
oci.generative_ai_inference.models.LlamaLlmInferenceRequest Details for the text generation request for Llama models.
oci.generative_ai_inference.models.LlamaLlmInferenceResponse The generated text result to return.
oci.generative_ai_inference.models.LlmInferenceRequest The base class for the inference requests.
oci.generative_ai_inference.models.LlmInferenceResponse The base class for inference responses.
oci.generative_ai_inference.models.Logprobs Includes the logarithmic probabilities for the most likely output tokens and the chosen tokens.
oci.generative_ai_inference.models.Message A message that represents a single chat dialog.
oci.generative_ai_inference.models.OnDemandServingMode The model’s serving mode is on-demand serving on a shared infrastructure.
oci.generative_ai_inference.models.SearchQuery The generated search query.
oci.generative_ai_inference.models.ServingMode The model’s serving mode, which is either on-demand serving or dedicated serving.
oci.generative_ai_inference.models.SummarizeTextDetails Details for the request to summarize text; see the summarization example after this list.
oci.generative_ai_inference.models.SummarizeTextResult The summarized text result to return to the caller.
oci.generative_ai_inference.models.SystemMessage Represents a single instance of a system message.
oci.generative_ai_inference.models.TextContent Represents a single instance of text in the chat content.
oci.generative_ai_inference.models.TokenLikelihood An object that contains the returned token and its corresponding likelihood.
oci.generative_ai_inference.models.UserMessage Represents a single instance of a user message.
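
Examples

The sketches below reuse the client constructed in the Client section and are illustrative only: the compartment OCID and model IDs are placeholder assumptions, not values taken from this reference.

A chat turn in the Cohere format combines ChatDetails, OnDemandServingMode, and CohereChatRequest:

    from oci.generative_ai_inference.models import (
        ChatDetails,
        CohereChatRequest,
        OnDemandServingMode,
    )

    chat_details = ChatDetails(
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",            # placeholder OCID
        serving_mode=OnDemandServingMode(model_id="cohere.command-r-plus"), # assumed model ID
        chat_request=CohereChatRequest(
            message="List three use cases for text embeddings.",
            max_tokens=300,
            temperature=0.3,
        ),
    )

    chat_response = client.chat(chat_details)
    # For Cohere models the payload is a CohereChatResponse, which carries the
    # generated reply in its text field.
    print(chat_response.data.chat_response.text)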
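
Models served through the generic format use the same chat operation, but the request is a GenericChatRequest whose messages are built from UserMessage and TextContent; the model ID below is again an assumption:

    from oci.generative_ai_inference.models import (
        ChatDetails,
        GenericChatRequest,
        OnDemandServingMode,
        TextContent,
        UserMessage,
    )

    generic_details = ChatDetails(
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",                  # placeholder OCID
        serving_mode=OnDemandServingMode(model_id="meta.llama-3.1-70b-instruct"), # assumed model ID
        chat_request=GenericChatRequest(
            messages=[UserMessage(content=[TextContent(text="What is a dedicated AI cluster?")])],
            max_tokens=300,
        ),
    )

    generic_response = client.chat(generic_details)
    # A GenericChatResponse returns a list of ChatChoice objects; each choice
    # holds a Message whose content is a list of chat content parts.
    first_choice = generic_response.data.chat_response.choices[0]
    print(first_choice.message.content[0].text)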
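
Tool definitions can be attached to a Cohere chat request. The sketch below declares a single hypothetical get_weather tool and inspects any tool calls the model proposes; the tool name, parameters, and IDs are all assumptions:

    from oci.generative_ai_inference.models import (
        ChatDetails,
        CohereChatRequest,
        CohereParameterDefinition,
        CohereTool,
        OnDemandServingMode,
    )

    weather_tool = CohereTool(
        name="get_weather",  # hypothetical tool name
        description="Returns the current weather for a city.",
        parameter_definitions={
            "city": CohereParameterDefinition(
                description="Name of the city to look up.",
                type="str",
                is_required=True,
            ),
        },
    )

    tool_details = ChatDetails(
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",            # placeholder OCID
        serving_mode=OnDemandServingMode(model_id="cohere.command-r-plus"), # assumed model ID
        chat_request=CohereChatRequest(
            message="What is the weather in Austin?",
            tools=[weather_tool],
        ),
    )

    tool_response = client.chat(tool_details)
    # If the model decides a tool is needed, tool_calls lists CohereToolCall
    # objects with the tool name and the generated parameters.
    for call in tool_response.data.chat_response.tool_calls or []:
        print(call.name, call.parameters)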
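
Embeddings go through a separate operation; a sketch using EmbedTextDetails, again with placeholder compartment and model values:

    from oci.generative_ai_inference.models import EmbedTextDetails, OnDemandServingMode

    embed_details = EmbedTextDetails(
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",                # placeholder OCID
        serving_mode=OnDemandServingMode(model_id="cohere.embed-english-v3.0"), # assumed model ID
        inputs=["What is a dedicated AI cluster?"],
    )

    embed_response = client.embed_text(embed_details)
    # EmbedTextResult.embeddings holds one vector of floats per input string.
    print(len(embed_response.data.embeddings[0]))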
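
Plain text generation uses GenerateTextDetails with a model-family-specific inference request, shown here with the Cohere variant; compartment and model values are placeholders:

    from oci.generative_ai_inference.models import (
        CohereLlmInferenceRequest,
        GenerateTextDetails,
        OnDemandServingMode,
    )

    generate_details = GenerateTextDetails(
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",     # placeholder OCID
        serving_mode=OnDemandServingMode(model_id="cohere.command"), # assumed model ID
        inference_request=CohereLlmInferenceRequest(
            prompt="Write a one-sentence description of a weather app.",
            max_tokens=60,
            temperature=0.7,
        ),
    )

    gen_response = client.generate_text(generate_details)
    # CohereLlmInferenceResponse.generated_texts holds one GeneratedText per generation.
    print(gen_response.data.inference_response.generated_texts[0].text)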
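
Summarization follows the same pattern with SummarizeTextDetails; the input string and model ID below are placeholders:

    from oci.generative_ai_inference.models import OnDemandServingMode, SummarizeTextDetails

    summarize_details = SummarizeTextDetails(
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",     # placeholder OCID
        serving_mode=OnDemandServingMode(model_id="cohere.command"), # assumed model ID
        input="Replace this with the long passage of text to condense.",
    )

    summary_response = client.summarize_text(summarize_details)
    # SummarizeTextResult.summary holds the condensed text.
    print(summary_response.data.summary)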