DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_VARCHAR2_TBL Type
Nested table type of varchar2(32767).
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_varchar2_tbl FORCE IS TABLE OF (varchar2(32767)) NOT PERSISTABLE;
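As a quick illustration, the collection can be populated through its constructor; the strings shown are placeholders:

```sql
DECLARE
  l_inputs dbms_cloud_oci_generative_ai_inference_varchar2_tbl;
BEGIN
  -- Nested table of VARCHAR2(32767); populate via the collection constructor
  l_inputs := dbms_cloud_oci_generative_ai_inference_varchar2_tbl(
                'first input string',
                'second input string');
  DBMS_OUTPUT.PUT_LINE('count = ' || l_inputs.COUNT);
END;
/
```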
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_SERVING_MODE_T Type
The model's serving mode, which could be on-demand serving or dedicated serving.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_serving_mode_t FORCE AUTHID CURRENT_USER IS OBJECT (
serving_type varchar2(32767),
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_serving_mode_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_serving_mode_t (
serving_type varchar2
) RETURN SELF AS RESULT
) NOT PERSISTABLE NOT FINAL;
Fields
Field
Description
serving_type
(required) The serving mode type, which could be on-demand serving or dedicated serving.
Allowed values are: 'ON_DEMAND', 'DEDICATED'
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_DEDICATED_SERVING_MODE_T Type
The model's serving mode is dedicated serving and has an endpoint on a dedicated AI cluster.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_dedicated_serving_mode_t FORCE AUTHID CURRENT_USER UNDER dbms_cloud_oci_generative_ai_inference_serving_mode_t (
endpoint_id varchar2(32767),
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_dedicated_serving_mode_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_dedicated_serving_mode_t (
serving_type varchar2,
endpoint_id varchar2
) RETURN SELF AS RESULT
);
dbms_cloud_oci_generative_ai_inference_dedicated_serving_mode_t is a subtype of the dbms_cloud_oci_generative_ai_inference_serving_mode_t type.
Fields
Field
Description
endpoint_id
(required) The OCID of the endpoint to use.
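A minimal sketch of constructing a dedicated serving mode; the endpoint OCID shown is a placeholder, not a real resource:

```sql
DECLARE
  l_mode dbms_cloud_oci_generative_ai_inference_dedicated_serving_mode_t;
BEGIN
  -- serving_type must be 'DEDICATED' for this subtype
  l_mode := dbms_cloud_oci_generative_ai_inference_dedicated_serving_mode_t(
    serving_type => 'DEDICATED',
    endpoint_id  => 'ocid1.generativeaiendpoint.oc1..example');
END;
/
```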
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_EMBED_TEXT_DETAILS_T Type
Details for the request to embed texts.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_embed_text_details_t FORCE AUTHID CURRENT_USER IS OBJECT (
inputs dbms_cloud_oci_generative_ai_inference_varchar2_tbl,
serving_mode dbms_cloud_oci_generative_ai_inference_serving_mode_t,
compartment_id varchar2(32767),
is_echo number,
truncate varchar2(32767),
input_type varchar2(32767),
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_embed_text_details_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_embed_text_details_t (
inputs dbms_cloud_oci_generative_ai_inference_varchar2_tbl,
serving_mode dbms_cloud_oci_generative_ai_inference_serving_mode_t,
compartment_id varchar2,
is_echo number,
truncate varchar2,
input_type varchar2
) RETURN SELF AS RESULT
) NOT PERSISTABLE;
Fields
Field
Description
inputs
(required) The list of strings for embeddings.
serving_mode
(required)
compartment_id
(required) The OCID of the compartment that the user is authorized to use to call into the Generative AI service.
is_echo
(optional) Whether or not to include the original inputs in the response. Results are index-based.
truncate
(optional) For an input that's longer than the maximum token length, specifies which part of the input text will be truncated.
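The fields above can be assembled with the full constructor. The sketch below assumes an on-demand serving mode; the model name and compartment OCID are placeholders:

```sql
DECLARE
  l_details dbms_cloud_oci_generative_ai_inference_embed_text_details_t;
BEGIN
  l_details := dbms_cloud_oci_generative_ai_inference_embed_text_details_t(
    inputs         => dbms_cloud_oci_generative_ai_inference_varchar2_tbl(
                        'The quick brown fox'),
    serving_mode   => dbms_cloud_oci_generative_ai_inference_on_demand_serving_mode_t(
                        serving_type => 'ON_DEMAND',
                        model_id     => 'example.embed-model'),  -- placeholder model
    compartment_id => 'ocid1.compartment.oc1..example',          -- placeholder OCID
    is_echo        => 0,
    truncate       => 'NONE',
    input_type     => NULL);
END;
/
```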
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_EMBED_TEXT_RESULT_T Type
The generated embedded result to return.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_embed_text_result_t FORCE AUTHID CURRENT_USER IS OBJECT (
id varchar2(32767),
inputs dbms_cloud_oci_generative_ai_inference_varchar2_tbl,
embeddings json_array_t,
model_id varchar2(32767),
model_version varchar2(32767),
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_embed_text_result_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_embed_text_result_t (
id varchar2,
inputs dbms_cloud_oci_generative_ai_inference_varchar2_tbl,
embeddings json_array_t,
model_id varchar2,
model_version varchar2
) RETURN SELF AS RESULT
) NOT PERSISTABLE;
Fields
Field
Description
id
(required) A unique identifier for the generated result.
inputs
(optional) The original inputs. Only present if "isEcho" is set to true.
embeddings
(required) The embeddings corresponding to inputs.
model_id
(optional) The OCID of the model used in this inference request.
model_version
(optional) The version of the model.
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_ERROR_T Type
Error Information.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_error_t FORCE AUTHID CURRENT_USER IS OBJECT (
code varchar2(32767),
message varchar2(32767),
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_error_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_error_t (
code varchar2,
message varchar2
) RETURN SELF AS RESULT
) NOT PERSISTABLE;
Fields
Field
Description
code
(required) A short error code that defines the error, meant for programmatic parsing.
message
(required) A human-readable error string.
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_GENERATE_TEXT_DETAILS_T Type
Details for the request to generate text.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_generate_text_details_t FORCE AUTHID CURRENT_USER IS OBJECT (
prompts dbms_cloud_oci_generative_ai_inference_varchar2_tbl,
serving_mode dbms_cloud_oci_generative_ai_inference_serving_mode_t,
compartment_id varchar2(32767),
is_stream number,
is_echo number,
num_generations number,
max_tokens number,
temperature number,
top_k number,
top_p number,
frequency_penalty number,
presence_penalty number,
stop_sequences dbms_cloud_oci_generative_ai_inference_varchar2_tbl,
return_likelihoods varchar2(32767),
truncate varchar2(32767),
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_generate_text_details_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_generate_text_details_t (
prompts dbms_cloud_oci_generative_ai_inference_varchar2_tbl,
serving_mode dbms_cloud_oci_generative_ai_inference_serving_mode_t,
compartment_id varchar2,
is_stream number,
is_echo number,
num_generations number,
max_tokens number,
temperature number,
top_k number,
top_p number,
frequency_penalty number,
presence_penalty number,
stop_sequences dbms_cloud_oci_generative_ai_inference_varchar2_tbl,
return_likelihoods varchar2,
truncate varchar2
) RETURN SELF AS RESULT
) NOT PERSISTABLE;
Fields
Field
Description
prompts
(required) Represents the prompt to be completed. Trailing whitespaces will be trimmed.
serving_mode
(required)
compartment_id
(required) The OCID of the compartment that the user is authorized to use to call into the Generative AI service.
is_stream
(optional) Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available.
is_echo
(optional) Whether to include the user prompt in the response. Applies only to non-stream results.
num_generations
(optional) The number of generated texts to return.
max_tokens
(optional) The maximum number of tokens to predict for each response. Includes input plus output tokens.
temperature
(optional) A number that sets the randomness of the generated output. A lower temperature means less random generations. Use lower numbers for tasks with a correct answer such as question answering or summarizing. High temperatures can generate hallucinations or factually incorrect information. Start with temperatures lower than 1.0 and increase the temperature for more creative outputs, as you regenerate the prompts to refine the outputs.
top_k
(optional) An integer that sets up the model to generate outputs that include only the top k most likely tokens. A higher k introduces more randomness into the output making the output text sound more natural. Default value is 0 which means that this method is disabled and all tokens are considered. To set a number for the likely tokens, choose an integer between 1 and 500. If also using top p, then the model considers only the top tokens whose probabilities add up to p percent and ignores the rest of the k tokens. For example, if k is 20, but the probabilities of the top 10 add up to .75, then only the top 10 tokens are chosen.
top_p
(optional) If set to a probability 0.0 < p < 1.0, it ensures that only the most likely tokens, with total probability mass of p, are considered for generation at each step. To eliminate tokens with low likelihood, assign p a minimum percentage for the next token's likelihood. For example, when p is set to 0.75, the model eliminates the bottom 25 percent for the next token. Set to 1.0 to consider all tokens and set to 0 to disable. If both k and p are enabled, p acts after k.
frequency_penalty
(optional) To reduce repetitiveness of generated tokens, this number penalizes new tokens based on their frequency in the generated text so far. Greater numbers encourage the model to use new tokens and lower numbers encourage the model to repeat the tokens. Set to 0 to disable.
presence_penalty
(optional) To reduce repetitiveness of generated tokens, this number penalizes new tokens based on whether they've appeared in the generated text so far. Greater numbers encourage the model to use new tokens, while lower numbers encourage the model to repeat the tokens. Similar to frequency penalty, a penalty is applied to previously present tokens, except that this penalty is applied equally to all tokens that have already appeared, regardless of how many times they've appeared. Set to 0 to disable.
stop_sequences
(optional) The generated text is cut at the end of the earliest occurrence of this stop sequence. The generated text will include this stop sequence.
return_likelihoods
(optional) Specifies how and if the token likelihoods are returned with the response.
Allowed values are: 'NONE', 'ALL', 'GENERATION'
truncate
(optional) For an input that's longer than the maximum token length, specifies which part of the input text will be truncated.
Allowed values are: 'NONE', 'START', 'END'
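Because the full constructor takes every attribute, a request object can instead be built from the no-argument constructor with only the fields of interest assigned. A minimal sketch, assuming an on-demand serving mode; the model name and compartment OCID are placeholders:

```sql
DECLARE
  l_details dbms_cloud_oci_generative_ai_inference_generate_text_details_t;
BEGIN
  -- Start from the no-argument constructor, then set individual attributes
  l_details := dbms_cloud_oci_generative_ai_inference_generate_text_details_t();
  l_details.prompts        := dbms_cloud_oci_generative_ai_inference_varchar2_tbl(
                                'Write a haiku about databases.');
  l_details.serving_mode   := dbms_cloud_oci_generative_ai_inference_on_demand_serving_mode_t(
                                serving_type => 'ON_DEMAND',
                                model_id     => 'example.text-model');  -- placeholder model
  l_details.compartment_id := 'ocid1.compartment.oc1..example';         -- placeholder OCID
  l_details.max_tokens     := 100;
  l_details.temperature    := 0.5;
END;
/
```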
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_GENERATE_TEXT_RESULT_T Type
The generated text to return.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_generate_text_result_t FORCE AUTHID CURRENT_USER IS OBJECT (
id varchar2(32767),
generated_texts json_array_t,
time_created timestamp with time zone,
prompts dbms_cloud_oci_generative_ai_inference_varchar2_tbl,
model_id varchar2(32767),
model_version varchar2(32767),
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_generate_text_result_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_generate_text_result_t (
id varchar2,
generated_texts json_array_t,
time_created timestamp with time zone,
prompts dbms_cloud_oci_generative_ai_inference_varchar2_tbl,
model_id varchar2,
model_version varchar2
) RETURN SELF AS RESULT
) NOT PERSISTABLE;
Fields
Field
Description
id
(required) A unique identifier for this GenerateTextResult.
generated_texts
(required) Each prompt in the input array has an array of GeneratedText, controlled by the numGenerations parameter in the request.
time_created
(required) The date and time that the model was created in an RFC3339 formatted datetime string.
prompts
(optional) The original prompt. Only applicable for non-stream response.
model_id
(optional) The OCID of the model used in this inference request.
model_version
(optional) The version of the model.
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_TOKEN_LIKELIHOOD_T Type
An object that contains the returned token and its corresponding likelihood.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_token_likelihood_t FORCE AUTHID CURRENT_USER IS OBJECT (
token varchar2(32767),
likelihood number,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_token_likelihood_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_token_likelihood_t (
token varchar2,
likelihood number
) RETURN SELF AS RESULT
) NOT PERSISTABLE;
Fields
Field
Description
token
(optional) A word, part of a word, or a punctuation mark. For example, apple is a token and friendship is made up of two tokens, friend and ship. When you run a model, you can set the maximum number of output tokens. Estimate three tokens per word.
likelihood
(optional) The likelihood of this token during generation.
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_TOKEN_LIKELIHOOD_TBL Type
Nested table type of dbms_cloud_oci_generative_ai_inference_token_likelihood_t.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_token_likelihood_tbl FORCE IS TABLE OF (dbms_cloud_oci_generative_ai_inference_token_likelihood_t) NOT PERSISTABLE;
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_GENERATED_TEXT_T Type
The text generated during each run.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_generated_text_t FORCE AUTHID CURRENT_USER IS OBJECT (
id varchar2(32767),
text varchar2(32767),
likelihood number,
finish_reason varchar2(32767),
token_likelihoods dbms_cloud_oci_generative_ai_inference_token_likelihood_tbl,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_generated_text_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_generated_text_t (
id varchar2,
text varchar2,
likelihood number,
finish_reason varchar2,
token_likelihoods dbms_cloud_oci_generative_ai_inference_token_likelihood_tbl
) RETURN SELF AS RESULT
) NOT PERSISTABLE;
Fields
Field
Description
id
(required) A unique identifier for this text generation.
text
(required) The generated text.
likelihood
(required) The overall likelihood of the generated text. When a large language model generates a new token for the output text, a likelihood is assigned to all tokens, where tokens with higher likelihoods are more likely to follow the current token. For example, it's more likely that the word favorite is followed by the word food or book rather than the word zebra. A lower likelihood means that it's less likely that the token follows the current token.
finish_reason
(optional) The reason why the model stopped generating tokens. A model stops generating tokens if the model hits a natural stop point or reaches a provided stop sequence.
token_likelihoods
(optional) A collection of generated tokens and their corresponding likelihoods.
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_ON_DEMAND_SERVING_MODE_T Type
The model's serving mode is on-demand serving on a shared infrastructure.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_on_demand_serving_mode_t FORCE AUTHID CURRENT_USER UNDER dbms_cloud_oci_generative_ai_inference_serving_mode_t (
model_id varchar2(32767),
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_on_demand_serving_mode_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_on_demand_serving_mode_t (
serving_type varchar2,
model_id varchar2
) RETURN SELF AS RESULT
);
dbms_cloud_oci_generative_ai_inference_on_demand_serving_mode_t is a subtype of the dbms_cloud_oci_generative_ai_inference_serving_mode_t type.
Fields
Field
Description
model_id
(required) The unique ID of the model to use. Use the ListModels API to list available models.
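A minimal sketch of constructing an on-demand serving mode; the model name is a placeholder:

```sql
DECLARE
  l_mode dbms_cloud_oci_generative_ai_inference_on_demand_serving_mode_t;
BEGIN
  -- serving_type must be 'ON_DEMAND' for this subtype
  l_mode := dbms_cloud_oci_generative_ai_inference_on_demand_serving_mode_t(
    serving_type => 'ON_DEMAND',
    model_id     => 'example.text-model');  -- placeholder model ID
END;
/
```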
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_SUMMARIZE_TEXT_DETAILS_T Type
Details for the request to summarize text.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_summarize_text_details_t FORCE AUTHID CURRENT_USER IS OBJECT (
input varchar2(32767),
serving_mode dbms_cloud_oci_generative_ai_inference_serving_mode_t,
compartment_id varchar2(32767),
is_echo number,
temperature number,
additional_command varchar2(32767),
length varchar2(32767),
format varchar2(32767),
extractiveness varchar2(32767),
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_summarize_text_details_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_summarize_text_details_t (
input varchar2,
serving_mode dbms_cloud_oci_generative_ai_inference_serving_mode_t,
compartment_id varchar2,
is_echo number,
temperature number,
additional_command varchar2,
length varchar2,
format varchar2,
extractiveness varchar2
) RETURN SELF AS RESULT
) NOT PERSISTABLE;
Fields
Field
Description
input
(required) The input string to be summarized.
serving_mode
(required)
compartment_id
(required) The OCID of the compartment that the user is authorized to use to call into the Generative AI service.
is_echo
(optional) Whether or not to include the original inputs in the response.
temperature
(optional) A number that sets the randomness of the generated output. Lower temperatures mean less random generations. Use lower numbers for tasks with a correct answer such as question answering or summarizing. High temperatures can generate hallucinations or factually incorrect information. Start with temperatures lower than 1.0, and increase the temperature for more creative outputs, as you regenerate the prompts to refine the outputs.
additional_command
(optional) A free-form instruction for modifying how the summaries get generated. Should complete the sentence "Generate a summary _". For example, "focusing on the next steps" or "written by Yoda".
length
(optional) Indicates the approximate length of the summary. If "AUTO" is selected, the best option will be picked based on the input text.
format
(optional) Indicates the style in which the summary will be delivered - in a free form paragraph or in bullet points. If "AUTO" is selected, the best option will be picked based on the input text.
extractiveness
(optional) Controls how close to the original text the summary is. High extractiveness summaries will lean towards reusing sentences verbatim, while low extractiveness summaries will tend to paraphrase more.
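Putting the fields together, a summarize request can be sketched as follows. The model name and compartment OCID are placeholders, and the instance assumes an on-demand serving mode:

```sql
DECLARE
  l_details dbms_cloud_oci_generative_ai_inference_summarize_text_details_t;
BEGIN
  l_details := dbms_cloud_oci_generative_ai_inference_summarize_text_details_t(
    input              => 'Long passage of text to summarize ...',
    serving_mode       => dbms_cloud_oci_generative_ai_inference_on_demand_serving_mode_t(
                            serving_type => 'ON_DEMAND',
                            model_id     => 'example.summarize-model'),  -- placeholder
    compartment_id     => 'ocid1.compartment.oc1..example',              -- placeholder
    is_echo            => 0,
    temperature        => 1,
    additional_command => 'focusing on the next steps',
    length             => 'AUTO',
    format             => 'AUTO',
    extractiveness     => 'AUTO');
END;
/
```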
DBMS_CLOUD_OCI_GENERATIVE_AI_INFERENCE_SUMMARIZE_TEXT_RESULT_T Type
Summarize text result to return to caller.
Syntax
CREATE OR REPLACE NONEDITIONABLE TYPE dbms_cloud_oci_generative_ai_inference_summarize_text_result_t FORCE AUTHID CURRENT_USER IS OBJECT (
id varchar2(32767),
input varchar2(32767),
summary varchar2(32767),
model_id varchar2(32767),
model_version varchar2(32767),
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_summarize_text_result_t
RETURN SELF AS RESULT,
CONSTRUCTOR FUNCTION dbms_cloud_oci_generative_ai_inference_summarize_text_result_t (
id varchar2,
input varchar2,
summary varchar2,
model_id varchar2,
model_version varchar2
) RETURN SELF AS RESULT
) NOT PERSISTABLE;
Fields
Field
Description
id
(required) A unique identifier for this SummarizeTextResult.
input
(optional) The original input. Only included if "isEcho" is set to true.
summary
(required) The summary result corresponding to the input.
model_id
(optional) The OCID of the model used in this inference request.