Using the Code Editor

Learn about Data Flow and the Oracle Cloud Infrastructure Code Editor.

The Code Editor provides a rich editing environment in the Console, including syntax highlighting, intelligent completions, bracket matching, linting, code navigation (go to method definition, find all references), and refactoring. You can:
  • Create, build, edit, and deploy Applications in Java, Scala, and Python, without having to switch between the Console and the local development environment.
  • Get started with Data Flow templates that are included with the Code Editor.
  • Run and test your code locally with Cloud Shell before deploying to Data Flow.
  • Set Spark parameters.
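Local testing, mentioned above, typically means writing a small script and running it on the Spark instance bundled with Cloud Shell. The following is an illustrative sketch; the file name and its contents are examples, not part of the product.

```shell
# A minimal PySpark script to try locally before deploying.
cat > local_test.py <<'EOF'
from pyspark.sql import SparkSession

# Build a local session, create a tiny DataFrame, and print it.
spark = SparkSession.builder.appName("LocalTest").getOrCreate()
df = spark.createDataFrame([("a", 1), ("b", 2)], ["key", "value"])
df.show()
spark.stop()
EOF

# Run it on the local Spark instance bundled with Cloud Shell:
# spark-submit --master local[2] local_test.py
```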
Other benefits include:
  • Git integration that enables you to clone any Git-based repository, track changes made to files, and commit, pull, and push code directly from within the Code Editor, letting you contribute code and revert changes with ease. See the Developer Guide for information on using Git and GitHub.
  • Persistent state across sessions that auto-saves progress, so the Code Editor automatically opens the last edited page on start-up.

  • Direct access to Apache Spark and more than 30 tools, including sbt and Scala, pre-installed with Cloud Shell.
  • Over a dozen Data Flow examples covering different features, bundled as templates to help you get started.
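The Git integration above supports the standard clone/commit/push workflow from the Code Editor terminal. A minimal local sketch follows; the repository, identity, and file names are examples only.

```shell
# Create a throwaway local repository to demonstrate the workflow.
git init demo-repo
cd demo-repo
git config user.email "dev@example.com"   # identity for this example repo only
git config user.name "Example Dev"

# Make a change, stage it, and commit it.
echo 'print("hello from Data Flow")' > app.py
git add app.py
git commit -m "Add application script"
git log --oneline                         # shows the new commit

# With a remote configured, you would follow with: git push origin main
```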

For more information on using the Code Editor to create Applications, see Creating an Application with the Code Editor.

For more information about the Code Editor's features and functionality, see the Code Editor documentation.


Before You Begin

  • The Code Editor uses the same IAM policies as Cloud Shell. For more information, see Cloud Shell Required IAM Policy.

  • A user configuration file for authentication. You can use the Console to generate the config file. For instructions on creating the config file, see the Developer Guide.
    It contains the following information:
    • user, the OCID of the user for whom the key pair is being added.
    • fingerprint, the fingerprint of the key that was added.
    • tenancy, the tenancy's OCID.
    • region, the selected region in the Console.
    • security_token, the session token used for authentication.
  • Confirm that the languages and tools needed are installed in the Cloud Shell.
  • If you're using Data Catalog Metastore, then you need the appropriate policies set up.
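For illustration, the configuration file described above holds its entries as key=value pairs under a profile name. The values below are placeholders (generate the real file from the Console); note that in a generated file the session token is typically referenced through a file path key such as security_token_file.

```shell
# Write an example config to a scratch file; all values are placeholders.
cat > oci_config_example <<'EOF'
[DEFAULT]
user=ocid1.user.oc1..<user_unique_id>
fingerprint=<key_fingerprint>
tenancy=ocid1.tenancy.oc1..<tenancy_unique_id>
region=us-ashburn-1
security_token_file=<path_to_security_token>
EOF
cat oci_config_example
```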
The following tools and minimum versions must be installed on the Cloud Shell:
Required Tools and Supported Versions

Tool          Version  Description
Scala         2.12.15  Used to write Scala-based code in the Code Editor.
sbt           1.7.1    Used to interactively build Scala applications.
Python        3.8.14   The Python interpreter.
Git           2.27.0   Used to interactively run Git commands.
JDK           11.0.17  Used to develop, build, and test Data Flow Java Applications.
Apache Spark  3.2.1    A local instance of Apache Spark running on Cloud Shell, used to test the code.
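One quick way to confirm the tools in the table are present is to probe for each command from the Cloud Shell terminal. This loop is an illustrative check, not part of the product; it only reports whether each command is on the PATH, not its version.

```shell
# Report whether each required tool is available on the PATH.
for tool in scala sbt python3 git java spark-submit; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: MISSING"
  fi
done
```

Follow up with each tool's own version flag (for example, `scala -version` or `git --version`) to confirm the minimum versions listed above.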


Limitations

  • Data Flow can access only resources in the region selected in the Console's Region selection menu when Cloud Shell was started.
  • Only Java-based, Python-based, and Scala-based Data Flow Applications are supported.
  • The Code Editor doesn't support compilation and debugging. Do those in Cloud Shell instead.
  • The plug-in is supported only with Apache Spark version 3.2.1.
  • All the limitations of Cloud Shell apply.

Setting Up the Data Flow Spark Plug-In

Follow these steps to set up the Data Flow Spark Plug-in.

  1. From the command line, navigate to the HOME directory.
  2. Run /opt/dataflow/df_artifacts/

    The script automatically sets up Spark in the user directory, along with the other required artifacts.