Analyzing a Stored Video Using a Pretrained Model

Identify scene-based features and objects, and detect faces and label frames in a video by calling the video analysis pretrained model.

The maximum size and duration of each video is shown in the Limits section.

For more information about video analysis, see the section on Stored Video Analysis.

    1. Open the navigation menu and click Analytics & AI. Under AI Services, click Vision.
    2. On the Vision page, click Video Analysis.
    3. Select the compartment where you want to store the results.
    4. Select the location of the video:
      • Demo
      • Local file
      • Object storage
        1. If you selected Demo, click Analyze demo video to start the analysis.
        2. If you selected Local file:
          1. Select a bucket from the list. If the bucket is in a different compartment, click Change compartment.
          2. (Optional) Enter a prefix in the Add prefix text field.
          3. Drag the video file to the Select file area, or click select one... and browse to the video file.
          4. Click Upload and analyze. The Pre-Authenticated URL for video dialog box is displayed.
          5. (Optional) Copy the URL.
          6. Click Close.
        3. If you selected Object storage, enter the video URL and click Analyze.

      The analyzeVideo API is invoked, and the model immediately analyzes the video. The status of the job is displayed.

      The Results area has a tab for each of Label detection, Object detection, Text detection, and Face detection, each with confidence scores, along with the request and response JSON.

    5. (Optional) To stop a running job, click Cancel.
    6. (Optional) To change the output location, click Change output location.
    7. (Optional) To select what is analyzed, click Video analysis capabilities, and select as appropriate from:
      • Label detection
      • Object detection
      • Text detection
      • Face detection
    8. (Optional) To generate code for using the video SDK, click Code for video inferencing.
    9. (Optional) To analyze videos again, click Video job tracker, and select Recently uploaded videos from the menu.
      1. Click the video you want to analyze.
      2. Click Analyze.
    10. To see the status of a video analysis job, click Video job tracker, and select Get job status from the menu.
      1. Enter the job OCID.
      2. Click Get job status.
      3. (Optional) To stop a running job, click Cancel.
      4. (Optional) To get the status of another job, click Get another video job status.
      5. (Optional) To get the JSON response, click Fetch response data.
      6. (Optional) To remove a job status, click Remove.
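Behind the console steps above, a video job request is assembled from your choices: the selected features (step 7), the video's location in Object Storage (step 4), and the output location (step 6). The following is a minimal sketch of such a payload as a plain dictionary; the field names and feature-type values are illustrative assumptions, so check the Vision API reference for the authoritative schema.

```python
# Sketch of the request body the console steps assemble for a video job.
# Field and value names here are assumptions for illustration only.
import json

def build_video_job_request(compartment_id, namespace, bucket, object_name,
                            output_prefix, features):
    """Assemble a video-analysis job request as a plain dict."""
    return {
        "compartmentId": compartment_id,
        # Features chosen under "Video analysis capabilities" (step 7).
        "features": [{"featureType": f} for f in features],
        # Video location in Object Storage (step 4).
        "inputLocation": {
            "sourceType": "OBJECT_LIST_INLINE_INPUT_LOCATION",
            "objectLocations": [{
                "namespaceName": namespace,
                "bucketName": bucket,
                "objectName": object_name,
            }],
        },
        # Where the results JSON is written (step 6).
        "outputLocation": {
            "namespaceName": namespace,
            "bucketName": bucket,
            "prefix": output_prefix,
        },
    }

request = build_video_job_request(
    compartment_id="ocid1.compartment.oc1..example",  # hypothetical OCID
    namespace="mynamespace",  # hypothetical namespace and bucket
    bucket="videos",
    object_name="demo.mp4",
    output_prefix="results/",
    features=["LABEL_DETECTION", "OBJECT_DETECTION",
              "TEXT_DETECTION", "FACE_DETECTION"],
)
print(json.dumps(request, indent=2))
```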
  • Use the analyze-video command and required parameters to analyze the video:

    oci ai-vision analyze-video [OPTIONS]
    For a complete list of flags and variable options for CLI commands, see the CLI Command Reference.
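    For scripting, the CLI call can be driven from Python via subprocess. The sketch below only assembles the argument list; the flag names (--compartment-id, --features) are assumptions for illustration, so verify them against the CLI Command Reference before use.

    ```python
    # Sketch: assembling an `oci ai-vision analyze-video` invocation.
    # Flag names are assumptions; check the CLI Command Reference.
    import json
    import subprocess  # shown for context; the call is commented out below

    features = json.dumps([{"featureType": "LABEL_DETECTION"},
                           {"featureType": "OBJECT_DETECTION"}])
    cmd = [
        "oci", "ai-vision", "analyze-video",
        "--compartment-id", "ocid1.compartment.oc1..example",  # hypothetical OCID
        "--features", features,
    ]
    # To actually run it (requires a configured OCI CLI):
    # result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(" ".join(cmd[:3]))
    ```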
  • Run the CreateVideoJob operation to analyze a video.
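
    Whether the job is created through the console, the CLI, or the CreateVideoJob operation, it runs asynchronously, so callers typically poll its lifecycle state (as the console's Video job tracker does). The following is a minimal polling sketch; fetch_status is a stub standing in for a real job-status call, and the lifecycle-state names are assumptions for illustration.

    ```python
    # Sketch: polling an asynchronous video job until it reaches a terminal
    # state. `fetch_status` is a stub standing in for a real status call;
    # the lifecycle-state names are assumptions for illustration.
    import itertools
    import time

    def wait_for_job(fetch_status, poll_seconds=0.0, max_polls=100):
        """Poll until the job succeeds, fails, or is canceled."""
        terminal = {"SUCCEEDED", "FAILED", "CANCELED"}
        for _ in range(max_polls):
            state = fetch_status()
            if state in terminal:
                return state
            time.sleep(poll_seconds)
        raise TimeoutError("job did not finish within max_polls")

    # Simulated server responses: accepted, in progress twice, then done.
    states = itertools.chain(
        ["ACCEPTED", "IN_PROGRESS", "IN_PROGRESS", "SUCCEEDED"],
        itertools.repeat("SUCCEEDED"),
    )
    final = wait_for_job(lambda: next(states))
    print(final)  # SUCCEEDED
    ```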