Working with jobs in Amazon Glue
An Amazon Glue job encapsulates a script that connects to your source data, processes it, and then writes it out to your data target. Typically, a job runs extract, transform, and load (ETL) scripts, but jobs can also run general-purpose Python scripts (Python shell jobs). Amazon Glue triggers can start jobs on a schedule, in response to an event, or on demand. You can monitor job runs to understand runtime metrics such as completion status, duration, and start time.
You can use scripts that Amazon Glue generates, or you can provide your own. Given a source schema and a target location or schema, the Amazon Glue code generator can automatically create an Apache Spark API (PySpark) script. You can use this script as a starting point and edit it to meet your goals.
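As a minimal sketch, a script of this kind typically has the following shape; the database, table, column mappings, and S3 path shown here are hypothetical placeholders, not names from your account:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Resolve the job name argument that Amazon Glue passes to every job run.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read the source table from the Data Catalog; "sales_db" and "orders"
# are hypothetical names.
source = glueContext.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Rename and retype columns to match the target schema.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Write the result to the target location; the S3 path is a placeholder.
glueContext.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/processed/orders/"},
    format="parquet",
)

job.commit()
```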
Amazon Glue can write output files in several data formats, including JSON, CSV, ORC (Optimized Row Columnar), Apache Parquet, and Apache Avro. For some data formats, it can also write common compression formats.
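Continuing the sketch above, one way to request compressed output for a format that supports it is through format_options; the frame and S3 path remain the same hypothetical names:

```python
# Write Snappy-compressed Parquet; for formats that support it, the
# compression codec can be passed as a format option.
glueContext.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/processed/orders/"},
    format="parquet",
    format_options={"compression": "snappy"},
)
```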
Amazon Glue supports the following types of jobs:
- A Spark job is run in an Apache Spark environment managed by Amazon Glue. It processes data in batches.
- A streaming ETL job is similar to a Spark job, except that it performs ETL on data streams. It uses the Apache Spark Structured Streaming framework. Some Spark job features are not available to streaming ETL jobs.
- A Python shell job runs Python scripts as a shell and supports a Python version that depends on the Amazon Glue version you are using. You can use these jobs to schedule and run tasks that don't require an Apache Spark environment.
- Ray is an open-source distributed computation framework that you can use to scale up workloads, with a focus on Python. Amazon Glue Ray jobs and interactive sessions allow you to use Ray within Amazon Glue (see the sketch after this list).
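As a rough sketch of what a Ray job script can look like, the snippet below distributes a trivial workload with standard Ray APIs; the ray.init call and the workload are illustrative assumptions, not Amazon Glue requirements:

```python
import ray

# Attach to an already-provisioned Ray cluster (an assumption for this
# sketch). Outside such an environment, ray.init() with no address
# starts a local cluster instead.
ray.init("auto")

@ray.remote
def square(x: int) -> int:
    # A trivial task; a real job would process partitions of data here.
    return x * x

# Fan the tasks out across the cluster and gather the results.
futures = [square.remote(i) for i in range(100)]
print(sum(ray.get(futures)))
```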
The following sections provide information on ETL and Ray jobs in Amazon Glue.