
Examples of custom visual scripts

The following examples perform equivalent transformations. However, the second example (SparkSQL) is the cleanest and most efficient, followed by the pandas UDF, and finally the low-level mapping in the first example. The following is a complete example of a simple transformation that adds two columns:

from awsglue import DynamicFrame

# You can have other auxiliary variables, functions or classes on this file,
# it won't affect the runtime
def record_sum(rec, col1, col2, resultCol):
    rec[resultCol] = rec[col1] + rec[col2]
    return rec

# The number and name of arguments must match the definition on the json config file
# (except self, which is the current DynamicFrame to transform).
# If an argument is optional, you need to define a default value here
# (resultCol in this example is an optional argument).
def custom_add_columns(self, col1, col2, resultCol="result"):
    # The mapping will alter the columns order, which could be important
    fields = [field.name for field in self.schema()]
    if resultCol not in fields:
        # If it's a new column, put it at the end
        fields.append(resultCol)
    return self.map(lambda record: record_sum(record, col1, col2, resultCol)) \
               .select_fields(paths=fields)

# The name we assign on DynamicFrame must match the configured "functionName"
DynamicFrame.custom_add_columns = custom_add_columns
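The comments above refer to the transform's JSON configuration file. For context, a minimal sketch of what such a file could look like for this transform follows; the displayName, description, and parameter labels are illustrative assumptions, and the key point is that "functionName" must match the method name registered on DynamicFrame.

{
    "name": "custom_add_columns",
    "displayName": "Add Columns",
    "description": "Adds two numeric columns into a result column.",
    "functionName": "custom_add_columns",
    "parameters": [
        { "name": "col1", "displayName": "First column", "type": "str", "isOptional": false },
        { "name": "col2", "displayName": "Second column", "type": "str", "isOptional": false },
        { "name": "resultCol", "displayName": "Result column", "type": "str", "isOptional": true }
    ]
}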

The following example is an equivalent transformation that leverages the SparkSQL API.

from awsglue import DynamicFrame

# The number and name of arguments must match the definition on the json config file
# (except self, which is the current DynamicFrame to transform).
# If an argument is optional, you need to define a default value here
# (resultCol in this example is an optional argument).
def custom_add_columns(self, col1, col2, resultCol="result"):
    df = self.toDF()
    return DynamicFrame.fromDF(
        df.withColumn(resultCol, df[col1] + df[col2]),  # This is the conversion logic
        self.glue_ctx, self.name)

# The name we assign on DynamicFrame must match the configured "functionName"
DynamicFrame.custom_add_columns = custom_add_columns
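If you prefer writing the logic as a literal SQL expression instead of the DataFrame column API, a roughly equivalent function body (a sketch, not part of the original example) could use selectExpr. Note one behavioral difference: unlike withColumn, this appends a duplicate column if resultCol already exists in the data.

    # Sketch of an alternative body using a SQL expression string
    df = self.toDF()
    return DynamicFrame.fromDF(
        df.selectExpr("*", f"`{col1}` + `{col2}` AS `{resultCol}`"),
        self.glue_ctx, self.name)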

The following example uses the same transformation, but with a pandas UDF, which is more efficient than using a plain UDF. For more information about writing pandas UDFs, see the Apache Spark SQL documentation.

from awsglue import DynamicFrame
import pandas as pd
from pyspark.sql.functions import pandas_udf

# The number and name of arguments must match the definition on the json config file
# (except self, which is the current DynamicFrame to transform).
# If an argument is optional, you need to define a default value here
# (resultCol in this example is an optional argument).
def custom_add_columns(self, col1, col2, resultCol="result"):
    @pandas_udf("integer")  # We need to declare the type of the result column
    def add_columns(value1: pd.Series, value2: pd.Series) -> pd.Series:
        return value1 + value2

    df = self.toDF()
    return DynamicFrame.fromDF(
        df.withColumn(resultCol, add_columns(col1, col2)),  # This is the conversion logic
        self.glue_ctx, self.name)

# The name we assign on DynamicFrame must match the configured "functionName"
DynamicFrame.custom_add_columns = custom_add_columns
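After the transform script has been loaded and the registration line has run, the method is available on any DynamicFrame in a job script. The following usage sketch is illustrative only; the database, table, and column names (sales_db, orders, price, tax, total) are hypothetical placeholders.

from pyspark.context import SparkContext
from awsglue.context import GlueContext

# Hypothetical usage in a Glue job script; catalog and column names are illustrative
glue_ctx = GlueContext(SparkContext.getOrCreate())
dyf = glue_ctx.create_dynamic_frame.from_catalog(database="sales_db", table_name="orders")

# Adds a "total" column computed as price + tax
with_total = dyf.custom_add_columns("price", "tax", resultCol="total")
with_total.printSchema()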