@Generated(value="OracleSDKGenerator", comments="API Version: 20221001") public final class TextClassificationModelMetrics extends com.oracle.bmc.http.client.internal.ExplicitlySetBmcModel
Model-level text classification metrics.
Note: Objects should always be created or deserialized using the TextClassificationModelMetrics.Builder. This model distinguishes fields that are null because they are unset from fields that are explicitly set to null. This is done in the setter methods of the TextClassificationModelMetrics.Builder, which maintain a set of all explicitly set fields called TextClassificationModelMetrics.Builder.__explicitlySet__. The hashCode() and equals(Object) methods are implemented to take the explicitly set fields into account. The constructor, on the other hand, does not take the explicitly set fields into account (since the constructor cannot distinguish an explicit null from an unset null).
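For orientation, a minimal construction sketch. It assumes the builder exposes fluent setters named after each field (for example accuracy(Float) and microF1(Float)), the usual OCI Java SDK convention, and that the class lives in the com.oracle.bmc.ailanguage.model package; the property-name strings passed to the inherited wasPropertyExplicitlySet are likewise assumptions.

```java
import com.oracle.bmc.ailanguage.model.TextClassificationModelMetrics;

public class MetricsBuilderSketch {
    public static void main(String[] args) {
        // Only the fields touched here end up in __explicitlySet__.
        TextClassificationModelMetrics metrics =
                TextClassificationModelMetrics.builder()
                        .accuracy(0.92f)
                        .microF1(0.90f)
                        .macroF1(null) // explicitly set to null: still recorded as "set"
                        .build();

        // Inherited from ExplicitlySetBmcModel: tells explicit null apart from unset.
        System.out.println(metrics.wasPropertyExplicitlySet("macroF1"));     // true
        System.out.println(metrics.wasPropertyExplicitlySet("microRecall")); // false
    }
}
```

Because equals(Object) and hashCode() take the explicitly set fields into account, two instances built this way compare equal only if they were built with the same set of explicit assignments.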
Modifier and Type | Class and Description |
---|---|
static class | TextClassificationModelMetrics.Builder |
Fields inherited from class com.oracle.bmc.http.client.internal.ExplicitlySetBmcModel: EXPLICITLY_SET_FILTER_NAME, EXPLICITLY_SET_PROPERTY_NAME
Constructor and Description |
---|
TextClassificationModelMetrics(Float accuracy, Float microF1, Float microPrecision, Float microRecall, Float macroF1, Float macroPrecision, Float macroRecall, Float weightedF1, Float weightedPrecision, Float weightedRecall) Deprecated. |
Modifier and Type | Method and Description |
---|---|
static TextClassificationModelMetrics.Builder | builder() Create a new builder. |
boolean | equals(Object o) |
Float | getAccuracy() The fraction of the labels that were correctly recognised. |
Float | getMacroF1() The F1-score is a measure of a model’s accuracy on a dataset. |
Float | getMacroPrecision() Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives). |
Float | getMacroRecall() Measures the model’s ability to predict actual positive classes. |
Float | getMicroF1() The F1-score is a measure of a model’s accuracy on a dataset. |
Float | getMicroPrecision() Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives). |
Float | getMicroRecall() Measures the model’s ability to predict actual positive classes. |
Float | getWeightedF1() The F1-score is a measure of a model’s accuracy on a dataset. |
Float | getWeightedPrecision() Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives). |
Float | getWeightedRecall() Measures the model’s ability to predict actual positive classes. |
int | hashCode() |
TextClassificationModelMetrics.Builder | toBuilder() |
String | toString() |
String | toString(boolean includeByteArrayContents) Return a string representation of the object. |
Methods inherited from class com.oracle.bmc.http.client.internal.ExplicitlySetBmcModel: markPropertyAsExplicitlySet, wasPropertyExplicitlySet
@Deprecated @ConstructorProperties(value={"accuracy","microF1","microPrecision","microRecall","macroF1","macroPrecision","macroRecall","weightedF1","weightedPrecision","weightedRecall"}) public TextClassificationModelMetrics(Float accuracy, Float microF1, Float microPrecision, Float microRecall, Float macroF1, Float macroPrecision, Float macroRecall, Float weightedF1, Float weightedPrecision, Float weightedRecall)
public static TextClassificationModelMetrics.Builder builder()
Create a new builder.
public TextClassificationModelMetrics.Builder toBuilder()
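A hedged sketch of the copy-and-modify pattern enabled by toBuilder(); the helper name withAccuracy and the fluent accuracy(Float) setter are illustrative assumptions, not part of the documented API surface.

```java
// Copy an existing metrics instance, overriding only the accuracy field;
// every other field carries over from the original.
static TextClassificationModelMetrics withAccuracy(
        TextClassificationModelMetrics original, Float newAccuracy) {
    return original.toBuilder()
            .accuracy(newAccuracy)
            .build();
}
```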
public Float getAccuracy()
The fraction of the labels that were correctly recognised.
public Float getMicroF1()
The F1-score is a measure of a model’s accuracy on a dataset.
public Float getMicroPrecision()
Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).
public Float getMicroRecall()
Measures the model’s ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged; the recall metric reveals how many of the actual positive classes were correctly identified.
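For intuition only, a small sketch of the arithmetic behind the precision and recall descriptions above, computed from hypothetical per-label counts; the F1 formula shown (harmonic mean of precision and recall) is the standard definition and is an assumption here, since this page does not spell it out.

```java
public class MetricArithmeticSketch {
    public static void main(String[] args) {
        // Hypothetical counts for a single label.
        int tp = 80; // true positives
        int fp = 20; // false positives
        int fn = 10; // false negatives

        // Precision: true positives / all positive predictions (tp + fp).
        float precision = tp / (float) (tp + fp);
        // Recall: true positives / all actually positive examples (tp + fn).
        float recall = tp / (float) (tp + fn);
        // F1: harmonic mean of precision and recall (assumed standard definition).
        float f1 = 2 * precision * recall / (precision + recall);

        System.out.printf("precision=%.2f recall=%.2f f1=%.2f%n", precision, recall, f1);
    }
}
```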
public Float getMacroF1()
The F1-score is a measure of a model’s accuracy on a dataset.
public Float getMacroPrecision()
Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).
public Float getMacroRecall()
Measures the model’s ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged; the recall metric reveals how many of the actual positive classes were correctly identified.
public Float getWeightedF1()
The F1-score is a measure of a model’s accuracy on a dataset.
public Float getWeightedPrecision()
Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).
public Float getWeightedRecall()
Measures the model’s ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged; the recall metric reveals how many of the actual positive classes were correctly identified.
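A short usage sketch for the getters above: reading whichever metrics the service populated off a returned instance. The printMetrics helper name is hypothetical; any metric that was never set is simply returned as null.

```java
// Print the reported metrics; fields never set by the service come back as null.
static void printMetrics(TextClassificationModelMetrics metrics) {
    System.out.println("accuracy           = " + metrics.getAccuracy());
    System.out.println("micro F1           = " + metrics.getMicroF1());
    System.out.println("macro F1           = " + metrics.getMacroF1());
    System.out.println("weighted F1        = " + metrics.getWeightedF1());
    System.out.println("micro precision    = " + metrics.getMicroPrecision());
    System.out.println("micro recall       = " + metrics.getMicroRecall());
    System.out.println("weighted precision = " + metrics.getWeightedPrecision());
    System.out.println("weighted recall    = " + metrics.getWeightedRecall());
}
```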
public String toString()
Overrides: toString in class com.oracle.bmc.http.client.internal.ExplicitlySetBmcModel
public String toString(boolean includeByteArrayContents)
Return a string representation of the object.
Parameters: includeByteArrayContents - true to include the full contents of byte arrays
public boolean equals(Object o)
Overrides: equals in class com.oracle.bmc.http.client.internal.ExplicitlySetBmcModel
public int hashCode()
Overrides: hashCode in class com.oracle.bmc.http.client.internal.ExplicitlySetBmcModel