Class: OCI::GenerativeAiInference::Models::CohereChatRequest
Inherits: BaseChatRequest
- Object
- BaseChatRequest
- OCI::GenerativeAiInference::Models::CohereChatRequest
Defined in: lib/oci/generative_ai_inference/models/cohere_chat_request.rb
Overview
Details for the chat request for Cohere models.
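A minimal usage sketch follows. The request class itself is documented on this page; ChatDetails, OnDemandServingMode, and GenerativeAiInferenceClient are assumed from the surrounding OCI Ruby SDK, and the compartment OCID and model ID below are placeholders.

require 'oci'

# Build the Cohere-format chat request described on this page.
chat_request = OCI::GenerativeAiInference::Models::CohereChatRequest.new(
  message: 'Write a short note about object storage.', # required
  max_tokens: 200,
  temperature: 0.3, # the constructor default, spelled out for clarity
  is_stream: false
)

# Hedged wiring into a chat call; these surrounding types are assumed
# from the wider SDK and are not documented on this page.
chat_details = OCI::GenerativeAiInference::Models::ChatDetails.new(
  compartment_id: 'ocid1.compartment.oc1..exampleuniqueID', # placeholder
  serving_mode: OCI::GenerativeAiInference::Models::OnDemandServingMode.new(
    model_id: 'cohere.example-model' # placeholder model ID
  ),
  chat_request: chat_request
)

client = OCI::GenerativeAiInference::GenerativeAiInferenceClient.new
response = client.chat(chat_details)
puts response.data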
Constant Summary
Constants inherited from BaseChatRequest
BaseChatRequest::API_FORMAT_ENUM
Instance Attribute Summary
- #chat_history ⇒ Array<OCI::GenerativeAiInference::Models::CohereMessage>
  A list of previous messages between the user and the model, meant to give the model conversational context for responding to the user's message.
- #documents ⇒ Array<Object>
  A list of relevant documents that the model can cite to generate a more accurate reply.
- #frequency_penalty ⇒ Float
  To reduce repetitiveness of generated tokens, this number penalizes new tokens based on their frequency in the generated text so far.
- #is_search_queries_only ⇒ BOOLEAN
  When true, the response will only contain a list of generated search queries, but no search will take place, and no reply from the model to the user's message will be generated.
- #is_stream ⇒ BOOLEAN
  Whether to stream back partial progress.
- #max_tokens ⇒ Integer
  The maximum number of tokens to predict for each response.
- #message ⇒ String
  [Required] Text input for the model to respond to.
- #preamble_override ⇒ String
  When specified, the default Cohere preamble will be replaced with the provided one.
- #presence_penalty ⇒ Float
  To reduce repetitiveness of generated tokens, this number penalizes new tokens based on whether they've appeared in the generated text so far.
- #temperature ⇒ Float
  A number that sets the randomness of the generated output.
- #top_k ⇒ Integer
  An integer that tells the model to use only the top k most likely tokens in the generated output.
- #top_p ⇒ Float
  If set to a probability 0.0 < p < 1.0, it ensures that only the most likely tokens, with total probability mass of p, are considered for generation at each step.
Attributes inherited from BaseChatRequest
#api_format
Class Method Summary
- .attribute_map ⇒ Object
  Attribute mapping from ruby-style variable name to JSON key.
- .swagger_types ⇒ Object
  Attribute type mapping.
Instance Method Summary
- #==(other) ⇒ Object
  Checks equality by comparing each attribute.
- #build_from_hash(attributes) ⇒ Object
  Builds the object from a hash.
- #eql?(other) ⇒ Boolean
- #hash ⇒ Fixnum
  Calculates hash code according to all attributes.
- #initialize(attributes = {}) ⇒ CohereChatRequest (constructor)
  Initializes the object.
- #to_hash ⇒ Hash
  Returns the object in the form of a hash.
- #to_s ⇒ String
  Returns the string representation of the object.
Methods inherited from BaseChatRequest
Constructor Details
#initialize(attributes = {}) ⇒ CohereChatRequest
Initializes the object
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 136

def initialize(attributes = {})
  return unless attributes.is_a?(Hash)

  attributes['apiFormat'] = 'COHERE'

  super(attributes)

  # convert string to symbol for hash key
  attributes = attributes.each_with_object({}) { |(k, v), h| h[k.to_sym] = v }

  self.message = attributes[:'message'] if attributes[:'message']

  self.chat_history = attributes[:'chatHistory'] if attributes[:'chatHistory']

  raise 'You cannot provide both :chatHistory and :chat_history' if attributes.key?(:'chatHistory') && attributes.key?(:'chat_history')

  self.chat_history = attributes[:'chat_history'] if attributes[:'chat_history']

  self.documents = attributes[:'documents'] if attributes[:'documents']

  self.is_search_queries_only = attributes[:'isSearchQueriesOnly'] unless attributes[:'isSearchQueriesOnly'].nil?
  self.is_search_queries_only = false if is_search_queries_only.nil? && !attributes.key?(:'isSearchQueriesOnly') # rubocop:disable Style/StringLiterals

  raise 'You cannot provide both :isSearchQueriesOnly and :is_search_queries_only' if attributes.key?(:'isSearchQueriesOnly') && attributes.key?(:'is_search_queries_only')

  self.is_search_queries_only = attributes[:'is_search_queries_only'] unless attributes[:'is_search_queries_only'].nil?
  self.is_search_queries_only = false if is_search_queries_only.nil? && !attributes.key?(:'isSearchQueriesOnly') && !attributes.key?(:'is_search_queries_only') # rubocop:disable Style/StringLiterals

  self.preamble_override = attributes[:'preambleOverride'] if attributes[:'preambleOverride']

  raise 'You cannot provide both :preambleOverride and :preamble_override' if attributes.key?(:'preambleOverride') && attributes.key?(:'preamble_override')

  self.preamble_override = attributes[:'preamble_override'] if attributes[:'preamble_override']

  self.is_stream = attributes[:'isStream'] unless attributes[:'isStream'].nil?
  self.is_stream = false if is_stream.nil? && !attributes.key?(:'isStream') # rubocop:disable Style/StringLiterals

  raise 'You cannot provide both :isStream and :is_stream' if attributes.key?(:'isStream') && attributes.key?(:'is_stream')

  self.is_stream = attributes[:'is_stream'] unless attributes[:'is_stream'].nil?
  self.is_stream = false if is_stream.nil? && !attributes.key?(:'isStream') && !attributes.key?(:'is_stream') # rubocop:disable Style/StringLiterals

  self.max_tokens = attributes[:'maxTokens'] if attributes[:'maxTokens']

  raise 'You cannot provide both :maxTokens and :max_tokens' if attributes.key?(:'maxTokens') && attributes.key?(:'max_tokens')

  self.max_tokens = attributes[:'max_tokens'] if attributes[:'max_tokens']

  self.temperature = attributes[:'temperature'] if attributes[:'temperature']
  self.temperature = 0.3 if temperature.nil? && !attributes.key?(:'temperature') # rubocop:disable Style/StringLiterals

  self.top_k = attributes[:'topK'] if attributes[:'topK']

  raise 'You cannot provide both :topK and :top_k' if attributes.key?(:'topK') && attributes.key?(:'top_k')

  self.top_k = attributes[:'top_k'] if attributes[:'top_k']

  self.top_p = attributes[:'topP'] if attributes[:'topP']
  self.top_p = 0.75 if top_p.nil? && !attributes.key?(:'topP') # rubocop:disable Style/StringLiterals

  raise 'You cannot provide both :topP and :top_p' if attributes.key?(:'topP') && attributes.key?(:'top_p')

  self.top_p = attributes[:'top_p'] if attributes[:'top_p']
  self.top_p = 0.75 if top_p.nil? && !attributes.key?(:'topP') && !attributes.key?(:'top_p') # rubocop:disable Style/StringLiterals

  self.frequency_penalty = attributes[:'frequencyPenalty'] if attributes[:'frequencyPenalty']
  self.frequency_penalty = 0.0 if frequency_penalty.nil? && !attributes.key?(:'frequencyPenalty') # rubocop:disable Style/StringLiterals

  raise 'You cannot provide both :frequencyPenalty and :frequency_penalty' if attributes.key?(:'frequencyPenalty') && attributes.key?(:'frequency_penalty')

  self.frequency_penalty = attributes[:'frequency_penalty'] if attributes[:'frequency_penalty']
  self.frequency_penalty = 0.0 if frequency_penalty.nil? && !attributes.key?(:'frequencyPenalty') && !attributes.key?(:'frequency_penalty') # rubocop:disable Style/StringLiterals

  self.presence_penalty = attributes[:'presencePenalty'] if attributes[:'presencePenalty']
  self.presence_penalty = 0.0 if presence_penalty.nil? && !attributes.key?(:'presencePenalty') # rubocop:disable Style/StringLiterals

  raise 'You cannot provide both :presencePenalty and :presence_penalty' if attributes.key?(:'presencePenalty') && attributes.key?(:'presence_penalty')

  self.presence_penalty = attributes[:'presence_penalty'] if attributes[:'presence_penalty']
  self.presence_penalty = 0.0 if presence_penalty.nil? && !attributes.key?(:'presencePenalty') && !attributes.key?(:'presence_penalty') # rubocop:disable Style/StringLiterals
end
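A short sketch of the constructor's key handling; the results follow directly from the defaults and guards shown above.

req = OCI::GenerativeAiInference::Models::CohereChatRequest.new(message: 'Hello')
req.temperature # => 0.3 (default applied because :temperature was omitted)
req.top_p       # => 0.75
req.is_stream   # => false

# Supplying both spellings of the same attribute raises:
OCI::GenerativeAiInference::Models::CohereChatRequest.new(
  message: 'Hello', maxTokens: 10, max_tokens: 20
)
# => RuntimeError: You cannot provide both :maxTokens and :max_tokens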
Instance Attribute Details
#chat_history ⇒ Array<OCI::GenerativeAiInference::Models::CohereMessage>
A list of previous messages between the user and the model, meant to give the model conversational context for responding to the user's message.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 18

def chat_history
  @chat_history
end
#documents ⇒ Array<Object>
A list of relevant documents that the model can cite to generate a more accurate reply. Some suggested keys are "text", "author", and "date". For better generation quality, it is recommended to keep the total word count of the strings in the dictionary to under 300 words.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 26

def documents
  @documents
end
#frequency_penalty ⇒ Float
To reduce repetitiveness of generated tokens, this number penalizes new tokens based on their frequency in the generated text so far. Greater numbers encourage the model to use new tokens, while lower numbers encourage the model to repeat the tokens. Set to 0 to disable.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 67

def frequency_penalty
  @frequency_penalty
end
#is_search_queries_only ⇒ BOOLEAN
When true, the response will only contain a list of generated search queries, but no search will take place, and no reply from the model to the user's message will be generated.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 30

def is_search_queries_only
  @is_search_queries_only
end
#is_stream ⇒ BOOLEAN
Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 38

def is_stream
  @is_stream
end
#max_tokens ⇒ Integer
The maximum number of tokens to predict for each response. Includes input plus output tokens.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 42

def max_tokens
  @max_tokens
end
#message ⇒ String
[Required] Text input for the model to respond to.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 14

def message
  @message
end
#preamble_override ⇒ String
When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style. Default preambles vary for different models.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 34

def preamble_override
  @preamble_override
end
#presence_penalty ⇒ Float
To reduce repetitiveness of generated tokens, this number penalizes new tokens based on whether they've appeared in the generated text so far. Greater numbers encourage the model to use new tokens, while lower numbers encourage the model to repeat the tokens.
Similar to frequency penalty, a penalty is applied to previously present tokens, except that this penalty is applied equally to all tokens that have already appeared, regardless of how many times they've appeared. Set to 0 to disable.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 74

def presence_penalty
  @presence_penalty
end
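The difference between the two penalties can be made concrete with a toy logit adjustment. This is an illustration of the general technique only, not the service's exact scoring, and penalized_logit is a hypothetical helper.

# Frequency penalty scales with how often a token has already appeared;
# presence penalty is a flat cost applied once a token has appeared at all.
def penalized_logit(logit, count, frequency_penalty:, presence_penalty:)
  logit - frequency_penalty * count - presence_penalty * (count.positive? ? 1 : 0)
end

penalized_logit(2.0, 3, frequency_penalty: 0.5, presence_penalty: 0.0) # => 0.5
penalized_logit(2.0, 3, frequency_penalty: 0.0, presence_penalty: 0.5) # => 1.5
penalized_logit(2.0, 1, frequency_penalty: 0.0, presence_penalty: 0.5) # => 1.5 (same flat cost)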
#temperature ⇒ Float
A number that sets the randomness of the generated output. A lower temperature means less random generations. Use lower numbers for tasks with a correct answer such as question answering or summarizing. High temperatures can generate hallucinations or factually incorrect information. Start with temperatures lower than 1.0 and increase the temperature for more creative outputs, as you regenerate the prompts to refine the outputs.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 48

def temperature
  @temperature
end
#top_k ⇒ Integer
An integer that tells the model to use only the top k most likely tokens in the generated output. A higher k introduces more randomness into the output, making the output text sound more natural. The default value is 0, which disables this method and considers all tokens. To set a number for the likely tokens, choose an integer between 1 and 500.
If also using top p, then the model considers only the top tokens whose total probability mass adds up to p and ignores the rest of the k tokens. For example, if k is 20 but the probabilities of the top 10 add up to 0.75, then only the top 10 tokens are chosen.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 55

def top_k
  @top_k
end
#top_p ⇒ Float
If set to a probability 0.0 < p < 1.0, it ensures that only the most likely tokens, with total probability mass of p, are considered for generation at each step.
To eliminate tokens with low likelihood, assign p a minimum percentage for the next token's likelihood. For example, when p is set to 0.75, the model eliminates the bottom 25 percent for the next token. Set to 1.0 to consider all tokens and set to 0 to disable. If both k and p are enabled, p acts after k.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 62

def top_p
  @top_p
end
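A sketch of how top k and top p compose, mirroring the k-then-p example in the #top_k notes above. Sampling happens server-side; top_k_then_top_p is a hypothetical helper that only illustrates the filtering.

# Candidate next tokens with probabilities, sorted most likely first.
candidates = { 'the' => 0.30, 'a' => 0.25, 'an' => 0.20, 'this' => 0.15, 'that' => 0.10 }

def top_k_then_top_p(candidates, k:, p:)
  kept = candidates.first(k) # keep only the top k most likely tokens
  cumulative = 0.0
  kept.take_while do |_token, prob|
    keep = cumulative < p    # stop once probability mass p is covered
    cumulative += prob
    keep
  end.to_h
end

top_k_then_top_p(candidates, k: 4, p: 0.75)
# => {"the"=>0.3, "a"=>0.25, "an"=>0.2} (0.30 + 0.25 + 0.20 = 0.75)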
Class Method Details
.attribute_map ⇒ Object
Attribute mapping from ruby-style variable name to JSON key.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 77

def self.attribute_map
  {
    # rubocop:disable Style/SymbolLiteral
    'api_format': :'apiFormat',
    'message': :'message',
    'chat_history': :'chatHistory',
    'documents': :'documents',
    'is_search_queries_only': :'isSearchQueriesOnly',
    'preamble_override': :'preambleOverride',
    'is_stream': :'isStream',
    'max_tokens': :'maxTokens',
    'temperature': :'temperature',
    'top_k': :'topK',
    'top_p': :'topP',
    'frequency_penalty': :'frequencyPenalty',
    'presence_penalty': :'presencePenalty'
    # rubocop:enable Style/SymbolLiteral
  }
end
.swagger_types ⇒ Object
Attribute type mapping.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 98

def self.swagger_types
  {
    # rubocop:disable Style/SymbolLiteral
    'api_format': :'String',
    'message': :'String',
    'chat_history': :'Array<OCI::GenerativeAiInference::Models::CohereMessage>',
    'documents': :'Array<Object>',
    'is_search_queries_only': :'BOOLEAN',
    'preamble_override': :'String',
    'is_stream': :'BOOLEAN',
    'max_tokens': :'Integer',
    'temperature': :'Float',
    'top_k': :'Integer',
    'top_p': :'Float',
    'frequency_penalty': :'Float',
    'presence_penalty': :'Float'
    # rubocop:enable Style/SymbolLiteral
  }
end
Instance Method Details
#==(other) ⇒ Object
Checks equality by comparing each attribute.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 225

def ==(other)
  return true if equal?(other)

  self.class == other.class &&
    api_format == other.api_format &&
    message == other.message &&
    chat_history == other.chat_history &&
    documents == other.documents &&
    is_search_queries_only == other.is_search_queries_only &&
    preamble_override == other.preamble_override &&
    is_stream == other.is_stream &&
    max_tokens == other.max_tokens &&
    temperature == other.temperature &&
    top_k == other.top_k &&
    top_p == other.top_p &&
    frequency_penalty == other.frequency_penalty &&
    presence_penalty == other.presence_penalty
end
#build_from_hash(attributes) ⇒ Object
Builds the object from a hash.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 267

def build_from_hash(attributes)
  return nil unless attributes.is_a?(Hash)

  self.class.swagger_types.each_pair do |key, type|
    if type =~ /^Array<(.*)>/i
      # check to ensure the input is an array given that the attribute
      # is documented as an array but the input is not
      if attributes[self.class.attribute_map[key]].is_a?(Array)
        public_method("#{key}=").call(
          attributes[self.class.attribute_map[key]]
            .map { |v| OCI::Internal::Util.convert_to_type(Regexp.last_match(1), v) }
        )
      end
    elsif !attributes[self.class.attribute_map[key]].nil?
      public_method("#{key}=").call(
        OCI::Internal::Util.convert_to_type(type, attributes[self.class.attribute_map[key]])
      )
    end
    # or else data not found in attributes(hash), not an issue as the data can be optional
  end

  self
end
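A round-trip sketch: #to_hash emits JSON-style keys via .attribute_map, and #build_from_hash reads those same keys back using .swagger_types, so the two compose.

original = OCI::GenerativeAiInference::Models::CohereChatRequest.new(
  message: 'Hello', max_tokens: 50
)

wire = original.to_hash # JSON-style keys, e.g. :'maxTokens' => 50
copy = OCI::GenerativeAiInference::Models::CohereChatRequest.new
copy.build_from_hash(wire)

copy == original # => true (every attribute compares equal)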
#eql?(other) ⇒ Boolean
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 247

def eql?(other)
  self == other
end
#hash ⇒ Fixnum
Calculates hash code according to all attributes.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 256

def hash
  [api_format, message, chat_history, documents, is_search_queries_only, preamble_override, is_stream, max_tokens, temperature, top_k, top_p, frequency_penalty, presence_penalty].hash
end
#to_hash ⇒ Hash
Returns the object in the form of a hash.
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 300

def to_hash
  hash = {}
  self.class.attribute_map.each_pair do |attr, param|
    value = public_method(attr).call
    next if value.nil? && !instance_variable_defined?("@#{attr}")

    hash[param] = _to_hash(value)
  end
  hash
end
#to_s ⇒ String
Returns the string representation of the object
# File 'lib/oci/generative_ai_inference/models/cohere_chat_request.rb', line 294

def to_s
  to_hash.to_s
end