@Generated(value="OracleSDKGenerator", comments="API Version: 20231130") public interface GenerativeAiInferenceAsync extends AutoCloseable
OCI Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases for text generation, summarization, and text embeddings.
Use the Generative AI service inference API to access your custom model endpoints, or to try the out-of-the-box models through the Chat, GenerateText, SummarizeText, and EmbedText operations.

To use a Generative AI custom model for inference, you must first create an endpoint for that model. Use the Generative AI service management API to create a custom Model by fine-tuning an out-of-the-box model, or a previous version of a custom model, using your own data. Fine-tune the custom model on a fine-tuning DedicatedAiCluster. Then, create a hosting DedicatedAiCluster with an Endpoint to host your custom model. For resource management in the Generative AI service, use the Generative AI service management API.
To learn more about the service, see the [Generative AI documentation](https://docs.cloud.oracle.com/iaas/Content/generative-ai/home.htm).
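Before calling the operations summarized below, you need a client instance. The following is a minimal setup sketch, not a definitive pattern: it assumes the SDK's concrete GenerativeAiInferenceAsyncClient implementation class and config-file authentication, neither of which is defined by this interface, so adjust class names and the region to your environment.

```java
import com.oracle.bmc.ConfigFileReader;
import com.oracle.bmc.Region;
import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
import com.oracle.bmc.generativeaiinference.GenerativeAiInferenceAsyncClient; // assumed implementation class

public class GenerativeAiInferenceSetup {
    public static void main(String[] args) throws Exception {
        // Read credentials from the default OCI config file (~/.oci/config, DEFAULT profile).
        ConfigFileAuthenticationDetailsProvider provider =
                new ConfigFileAuthenticationDetailsProvider(ConfigFileReader.parseDefault());

        // Build the async client and target a region where the service is offered (illustrative choice).
        GenerativeAiInferenceAsyncClient client =
                GenerativeAiInferenceAsyncClient.builder().build(provider);
        client.setRegion(Region.US_CHICAGO_1);

        // ... call chat, embedText, generateText, or summarizeText here ...

        client.close(); // GenerativeAiInferenceAsync extends AutoCloseable
    }
}
```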
Modifier and Type | Method and Description |
---|---|
Future<ChatResponse> | chat(ChatRequest request, AsyncHandler<ChatRequest,ChatResponse> handler): Creates a response for the given conversation. |
Future<EmbedTextResponse> | embedText(EmbedTextRequest request, AsyncHandler<EmbedTextRequest,EmbedTextResponse> handler): Produces embeddings for the inputs. |
Future<GenerateTextResponse> | generateText(GenerateTextRequest request, AsyncHandler<GenerateTextRequest,GenerateTextResponse> handler): Generates a text response based on the user prompt. |
String | getEndpoint(): Gets the set endpoint for REST call (ex, https://www.example.com) |
void | refreshClient(): Rebuilds the client from scratch. |
void | setEndpoint(String endpoint): Sets the endpoint to call (ex, https://www.example.com). |
void | setRegion(Region region): Sets the region to call (ex, Region.US_PHOENIX_1). |
void | setRegion(String regionId): Sets the region to call (ex, 'us-phoenix-1'). |
Future<SummarizeTextResponse> | summarizeText(SummarizeTextRequest request, AsyncHandler<SummarizeTextRequest,SummarizeTextResponse> handler): Summarizes the input text. |
void | useRealmSpecificEndpointTemplate(boolean realmSpecificEndpointTemplateEnabled): Determines whether realm specific endpoint should be used or not. |
Methods inherited from interface java.lang.AutoCloseable: close
void refreshClient()
Rebuilds the client from scratch. Useful to refresh certificates.
void setEndpoint(String endpoint)
Sets the endpoint to call (ex, https://www.example.com).
endpoint
- The endpoint of the service.

String getEndpoint()
Gets the set endpoint for REST call (ex, https://www.example.com)
void setRegion(Region region)
Sets the region to call (ex, Region.US_PHOENIX_1).
Note, this will call setEndpoint after resolving the endpoint. If the service is not available in this region, however, an IllegalArgumentException will be raised.
region
- The region of the service.

void setRegion(String regionId)
Sets the region to call (ex, 'us-phoenix-1').
Note, this will first try to map the region ID to a known Region and call setRegion. If no known Region could be determined, it will create an endpoint based on the default endpoint format (Region.formatDefaultRegionEndpoint(Service, String)) and then call setEndpoint.
regionId
- The public region ID.

void useRealmSpecificEndpointTemplate(boolean realmSpecificEndpointTemplateEnabled)
Determines whether the realm-specific endpoint should be used. Set realmSpecificEndpointTemplateEnabled to "true" to enable use of the realm-specific endpoint template; otherwise set it to "false".
realmSpecificEndpointTemplateEnabled
- flag to enable the use of the realm-specific endpoint template
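Continuing the setup sketch above, the endpoint and region setters behave as described; the URL is the placeholder used in these docs rather than a real service address.

```java
client.setRegion(Region.US_PHOENIX_1);          // typed Region; raises IllegalArgumentException if the service is unavailable there
client.setRegion("us-phoenix-1");               // region ID string; unknown IDs fall back to the default endpoint format
client.setEndpoint("https://www.example.com");  // explicit endpoint override (placeholder URL)
client.useRealmSpecificEndpointTemplate(true);  // opt in to the realm-specific endpoint template
```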
Future<ChatResponse> chat(ChatRequest request, AsyncHandler<ChatRequest,ChatResponse> handler)
Creates a response for the given conversation.
request
- The request object containing the details to send
handler
- The request handler to invoke upon completion, may be null.
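A sketch of an asynchronous chat call using the AsyncHandler overload. Only the chat signature and AsyncHandler come from this interface; ChatDetails, OnDemandServingMode, and CohereChatRequest are assumed model classes from the SDK's generativeaiinference model package, compartmentOcid is a hypothetical variable, and the model name is illustrative.

```java
// Build the chat payload (model classes and field names are assumptions; verify against your SDK version).
ChatDetails chatDetails = ChatDetails.builder()
        .compartmentId(compartmentOcid)                      // hypothetical variable holding your compartment OCID
        .servingMode(OnDemandServingMode.builder()
                .modelId("cohere.command-r-08-2024")         // illustrative model name
                .build())
        .chatRequest(CohereChatRequest.builder()
                .message("Explain dedicated AI clusters in two sentences.")
                .build())
        .build();

Future<ChatResponse> chatFuture = client.chat(
        ChatRequest.builder().chatDetails(chatDetails).build(),
        new AsyncHandler<ChatRequest, ChatResponse>() {
            @Override
            public void onSuccess(ChatRequest request, ChatResponse response) {
                // getChatResult() is assumed to expose the generated reply.
                System.out.println(response.getChatResult());
            }

            @Override
            public void onError(ChatRequest request, Throwable error) {
                error.printStackTrace();
            }
        });
```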
Future<EmbedTextResponse> embedText(EmbedTextRequest request, AsyncHandler<EmbedTextRequest,EmbedTextResponse> handler)
Produces embeddings for the inputs.
An embedding is a numeric representation of a piece of text. This text can be a phrase, a sentence, or one or more paragraphs. The Generative AI embedding model transforms each phrase, sentence, or paragraph that you input into an array with 1024 numbers. You can use these embeddings to find similarity in your input text, such as finding phrases that are similar in context or category. Embeddings are mostly used for semantic searches, where the search function focuses on the meaning of the text that it's searching through rather than finding results based on keywords.
request
- The request object containing the details to send
handler
- The request handler to invoke upon completion, may be null.
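A sketch of an embedding call that simply blocks on the returned Future, which is allowed because the handler may be null. EmbedTextDetails and its builder fields are assumptions from the SDK model package; the model name is illustrative.

```java
// Embed two phrases; the service returns one 1024-number vector per input.
EmbedTextDetails embedDetails = EmbedTextDetails.builder()   // assumed model class and field names
        .compartmentId(compartmentOcid)                      // hypothetical compartment OCID variable
        .servingMode(OnDemandServingMode.builder()
                .modelId("cohere.embed-english-v3.0")        // illustrative model name
                .build())
        .inputs(java.util.Arrays.asList(
                "wireless noise-cancelling headphones",
                "bluetooth over-ear headset"))
        .build();

EmbedTextResponse embedResponse = client
        .embedText(EmbedTextRequest.builder().embedTextDetails(embedDetails).build(), null) // null handler: use only the Future
        .get();
```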
Future<GenerateTextResponse> generateText(GenerateTextRequest request, AsyncHandler<GenerateTextRequest,GenerateTextResponse> handler)
Generates a text response based on the user prompt.
request
- The request object containing the details to send
handler
- The request handler to invoke upon completion, may be null.
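generateText follows the same calling pattern; one small variation is to bound the wait on the Future, as in this sketch where generateTextRequest stands in for a request built like the others.

```java
// Block for at most 60 seconds on the asynchronous result (generateTextRequest is a hypothetical pre-built request).
GenerateTextResponse generateResponse = client
        .generateText(generateTextRequest, null)
        .get(60, java.util.concurrent.TimeUnit.SECONDS);
```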
Future<SummarizeTextResponse> summarizeText(SummarizeTextRequest request, AsyncHandler<SummarizeTextRequest,SummarizeTextResponse> handler)
Summarizes the input text.
request
- The request object containing the details to send
handler
- The request handler to invoke upon completion, may be null.
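A corresponding sketch for summarization; SummarizeTextDetails and its input field are assumptions from the SDK model package, and either the Future or the AsyncHandler pattern shown above applies.

```java
SummarizeTextDetails summarizeDetails = SummarizeTextDetails.builder()   // assumed model class and field names
        .compartmentId(compartmentOcid)                                  // hypothetical compartment OCID variable
        .servingMode(OnDemandServingMode.builder()
                .modelId("cohere.command")                               // illustrative model name
                .build())
        .input(articleText)                                              // hypothetical variable holding the text to summarize
        .build();

Future<SummarizeTextResponse> summaryFuture = client.summarizeText(
        SummarizeTextRequest.builder().summarizeTextDetails(summarizeDetails).build(), null);
```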