Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

Apache Hudi

Apache Hudi is an open-source data management framework that simplifies incremental data processing. Record-level insert, update, upsert, and delete actions are processed at a much finer granularity, which reduces processing overhead.

To use Apache Hudi tables in Athena for Spark, configure the following Spark properties. These properties are configured for you by default in the Athena for Spark console when you choose Apache Hudi as the table format. For steps, see Editing session details or Creating your own notebook.

"spark.sql.catalog.spark_catalog": "org.apache.spark.sql.hudi.catalog.HoodieCatalog",
"spark.serializer": "org.apache.spark.serializer.KryoSerializer",
"spark.sql.extensions": "org.apache.spark.sql.hudi.HoodieSparkSessionExtension"

The following procedure shows you how to use an Apache Hudi table in an Athena for Spark notebook. Run each step in a new cell in the notebook.

To use an Apache Hudi table in Athena for Spark
  1. Define the constants to use in the notebook.
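
    Later steps reference the names DB_NAME, TABLE_NAME, and TABLE_S3_LOCATION, so define them in this cell. A minimal sketch follows; the values are placeholders, and you replace them with your own database name, table name, and S3 location.

    ```python
    # Placeholder values -- replace with your own database name, table name,
    # and S3 location before running the rest of the notebook.
    DB_NAME = "example_hudi_db"
    TABLE_NAME = "example_hudi_table"
    TABLE_S3_LOCATION = "s3://amzn-s3-demo-bucket/example-prefix/"
    ```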

  2. Create an Apache Spark DataFrame.

    columns = ["language", "users_count"]
    data = [("Golang", 3000)]
    df = spark.createDataFrame(data, columns)
  3. Create a database.

    spark.sql("CREATE DATABASE {} LOCATION '{}'".format(DB_NAME, TABLE_S3_LOCATION))
  4. Create an empty Apache Hudi table.

    spark.sql("""
    CREATE TABLE {}.{} (
        language string,
        users_count int
    )
    USING HUDI
    TBLPROPERTIES (
        primaryKey = 'language',
        type = 'mor'
    );
    """.format(DB_NAME, TABLE_NAME))
  5. Insert a row of data into the table.

    spark.sql("""INSERT INTO {}.{} VALUES ('Golang', 3000)""".format(DB_NAME, TABLE_NAME))
  6. Confirm that you can query the new table.

    spark.sql("SELECT * FROM {}.{}".format(DB_NAME, TABLE_NAME)).show()
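
As noted in the introduction, Apache Hudi supports record-level update and delete actions. The following sketch is not part of the documented procedure, but assuming the session properties and table created above, the HoodieSparkSessionExtension enables Spark SQL UPDATE and DELETE statements against the Hudi table:

```python
# Record-level update of the row inserted earlier, keyed on 'language'.
spark.sql("UPDATE {}.{} SET users_count = 4000 WHERE language = 'Golang'".format(DB_NAME, TABLE_NAME))

# Record-level delete of the same row.
spark.sql("DELETE FROM {}.{} WHERE language = 'Golang'".format(DB_NAME, TABLE_NAME))
```

Because the table was created with type = 'mor' (merge-on-read), these changes are written as deltas and merged when the table is read.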