Data format options for inputs and outputs in Amazon Glue for Spark
These pages provide information about feature support and configuration parameters for the data formats supported by Amazon Glue for Spark. See the following for a description of the usage and applicability of this information.
Feature support across data formats in Amazon Glue
Each data format may support a different set of Amazon Glue features. The following common features may or may not be supported, depending on your format type. Refer to the documentation for your data format to understand how to use these features to meet your requirements.
Read | Amazon Glue can recognize and interpret this data format without additional resources, such as connectors. |
Write | Amazon Glue can write data in this format without additional resources. You can include third-party libraries in your job and use standard Apache Spark functions to write data, as you would in other Spark environments. For more information about including libraries, see Using Python libraries with Amazon Glue. |
Streaming read | Amazon Glue can recognize and interpret this data format from an Apache Kafka, Amazon Managed Streaming for Apache Kafka, or Amazon Kinesis message stream. We expect streams to present data in a consistent format, so they are read in as DataFrames. |
Group small files | Amazon Glue can group files together to batch work sent to each node when performing Amazon Glue transforms. This can significantly improve performance for workloads involving large amounts of small files. For more information, see Reading input files in larger groups. |
Job bookmarks | Amazon Glue can track the progress of transforms performing the same work on the same dataset across job runs with job bookmarks. This can improve performance for workloads involving datasets where work only needs to be done on new data since the last job run. For more information, see Tracking processed data using job bookmarks. |
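As an illustration of the small-file grouping and job bookmark features, the following sketch shows where the relevant options typically appear in a Glue script. The bucket path, grouping values, and transformation context name are hypothetical placeholders, and the Glue call itself runs only inside a Glue job.

```python
# Hypothetical options for an S3 read that groups small files and
# participates in job bookmarks. The path and values are placeholders.
connection_options = {
    "paths": ["s3://amzn-s3-demo-bucket/input/"],  # placeholder bucket
    "groupFiles": "inPartition",  # group small files within each partition
    "groupSize": "1048576",       # target group size in bytes (1 MB)
}

# In a Glue job (requires a GlueContext), the transformation_ctx argument
# names this step so job bookmarks can track its progress across runs:
# dyf = glueContext.create_dynamic_frame.from_options(
#     connection_type="s3",
#     connection_options=connection_options,
#     format="json",
#     transformation_ctx="datasource0",
# )
```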
Parameters used to interact with data formats in Amazon Glue
Certain Amazon Glue connection types support multiple format
types, requiring you to specify
information about your data format with a format_options
object when using methods like
GlueContext.write_dynamic_frame.from_options
.
- s3 – For more information, see Connection types and options for ETL in Amazon Glue: S3 connection parameters. You can also view the documentation for the methods facilitating this connection type: create_dynamic_frame_from_options and write_dynamic_frame_from_options in Python and the corresponding Scala methods def getSourceWithFormat and def getSinkWithFormat.
- kinesis – For more information, see Connection types and options for ETL in Amazon Glue: Kinesis connection parameters. You can also view the documentation for the method facilitating this connection type: create_data_frame_from_options and the corresponding Scala method def createDataFrameFromOptions.
- kafka – For more information, see Connection types and options for ETL in Amazon Glue: Kafka connection parameters. You can also view the documentation for the method facilitating this connection type: create_data_frame_from_options and the corresponding Scala method def createDataFrameFromOptions.
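To make the role of format_options concrete, here is a sketch of a CSV read from S3 using create_dynamic_frame.from_options. The bucket path and option values are hypothetical placeholders.

```python
# Hypothetical format_options for reading headered CSV data from S3.
# The bucket path below is a placeholder, not a real resource.
connection_options = {
    "paths": ["s3://amzn-s3-demo-bucket/input/"],
    "recurse": True,  # also read files in subdirectories
}
format_options = {
    "withHeader": True,  # treat the first row of each file as column names
    "separator": ",",    # field delimiter
}

# Inside a Glue job (requires a GlueContext, available only in a Glue
# environment), the dicts are passed like so:
# dyf = glueContext.create_dynamic_frame.from_options(
#     connection_type="s3",
#     connection_options=connection_options,
#     format="csv",
#     format_options=format_options,
# )
```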
Some connection types do not require format_options. For example, in normal use, a JDBC connection to a relational database retrieves data in a consistent, tabular format. Therefore, reading from a JDBC connection does not require format_options.
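For contrast, a sketch of a JDBC read follows; it needs only connection_options. All connection details (host, database, credentials) are hypothetical placeholders.

```python
# Hypothetical JDBC connection_options. Note there is no format_options
# argument, because the source already returns tabular data.
connection_options = {
    "url": "jdbc:mysql://example-host:3306/exampledb",  # placeholder
    "dbtable": "orders",                                # placeholder table
    "user": "example-user",                             # placeholder
    "password": "example-password",                     # placeholder
}

# In a Glue job (requires a GlueContext):
# dyf = glueContext.create_dynamic_frame.from_options(
#     connection_type="mysql",
#     connection_options=connection_options,
# )
```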
Some methods for reading and writing data in Amazon Glue do not require format_options. For example, you can use GlueContext.create_dynamic_frame.from_catalog with Amazon Glue crawlers. Crawlers determine the shape of your data. When you use crawlers, an Amazon Glue classifier examines your data to make smart decisions about how to represent your data format. It then stores a representation of your data in the Amazon Glue Data Catalog, which an Amazon Glue ETL script can use to retrieve your data with the GlueContext.create_dynamic_frame.from_catalog method. Crawlers remove the need to manually specify information about your data format.
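A catalog-based read can be sketched as follows. The database and table names are hypothetical, and the call assumes a crawler has already populated the Data Catalog.

```python
# Hypothetical catalog read: the crawler already recorded the data's
# format in the Data Catalog, so no format_options are needed.
catalog_args = {
    "database": "example_database",         # placeholder database name
    "table_name": "example_crawled_table",  # placeholder table name
}

# In a Glue job (requires a GlueContext):
# dyf = glueContext.create_dynamic_frame.from_catalog(**catalog_args)
```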
For jobs that access Amazon Lake Formation governed tables, Amazon Glue supports reading and writing all formats supported by Lake Formation governed tables. For the current list of supported formats for Amazon Lake Formation governed tables, see Notes and Restrictions for Governed Tables in the Amazon Lake Formation Developer Guide.
Note
For writing Apache Parquet, Amazon Glue ETL only supports writing to a governed table by specifying an option for a custom Parquet writer type optimized for Dynamic Frames. When writing to a governed table with the parquet format, you should add the key useGlueParquetWriter with a value of true in the table parameters.
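For writes made directly with from_options rather than through a governed table's catalog entry, the same Glue-optimized Parquet writer can be requested through format_options. The following sketch uses a placeholder output path.

```python
# Hypothetical Parquet write requesting the Glue-optimized writer via
# format_options. The output path is a placeholder.
format_options = {"useGlueParquetWriter": True}

# In a Glue job (dyf is a DynamicFrame produced earlier in the script):
# glueContext.write_dynamic_frame.from_options(
#     frame=dyf,
#     connection_type="s3",
#     connection_options={"paths": ["s3://amzn-s3-demo-bucket/output/"]},
#     format="parquet",
#     format_options=format_options,
# )
```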
Topics
- Using the CSV format in Amazon Glue
- Using the Parquet format in Amazon Glue
- Using the XML format in Amazon Glue
- Using the Avro format in Amazon Glue
- Using the grokLog format in Amazon Glue
- Using the Ion format in Amazon Glue
- Using the JSON format in Amazon Glue
- Using the ORC format in Amazon Glue
- Using data lake frameworks with Amazon Glue ETL jobs
- Shared configuration reference
Shared configuration reference
You can use the following format_options values with any format type.
- attachFilename — A string in the appropriate format to be used as a column name. If you provide this option, the name of the source file for the record will be appended to the record. The parameter value will be used as the column name.
- attachTimestamp — A string in the appropriate format to be used as a column name. If you provide this option, the modification time of the source file for the record will be appended to the record. The parameter value will be used as the column name.
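As a sketch, these shared options can be combined with format-specific ones. The column names chosen below are arbitrary examples and the S3 path is a placeholder.

```python
# Hypothetical format_options mixing a CSV-specific option with the two
# shared options. Each record will carry two extra columns, the source
# file name and its modification time, under the names given here.
format_options = {
    "withHeader": True,                  # CSV-specific option
    "attachFilename": "source_file",     # example column name for the file
    "attachTimestamp": "file_modified",  # example column name for the time
}

# In a Glue job:
# dyf = glueContext.create_dynamic_frame.from_options(
#     connection_type="s3",
#     connection_options={"paths": ["s3://amzn-s3-demo-bucket/input/"]},
#     format="csv",
#     format_options=format_options,
# )
```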