Interface Channel.Builder
- All Superinterfaces:
Buildable, CopyableBuilder<Channel.Builder, Channel>, SdkBuilder<Channel.Builder, Channel>, SdkPojo
- Enclosing class:
Channel
-
Method Summary
Modifier and Type | Method | Description
Channel.Builder | channelName(String channelName) | The name of the channel.
Channel.Builder | compressionType(String compressionType) | If training data is compressed, the compression type.
Channel.Builder | compressionType(CompressionType compressionType) | If training data is compressed, the compression type.
Channel.Builder | contentType(String contentType) | The MIME type of the data.
default Channel.Builder | dataSource(Consumer<DataSource.Builder> dataSource) | The location of the channel data.
Channel.Builder | dataSource(DataSource dataSource) | The location of the channel data.
Channel.Builder | inputMode(String inputMode) | (Optional) The input mode to use for the data channel in a training job.
Channel.Builder | inputMode(TrainingInputMode inputMode) | (Optional) The input mode to use for the data channel in a training job.
Channel.Builder | recordWrapperType(String recordWrapperType) |
Channel.Builder | recordWrapperType(RecordWrapper recordWrapperType) |
default Channel.Builder | shuffleConfig(Consumer<ShuffleConfig.Builder> shuffleConfig) | A configuration for a shuffle option for input data in a channel.
Channel.Builder | shuffleConfig(ShuffleConfig shuffleConfig) | A configuration for a shuffle option for input data in a channel.
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder
copy
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder
applyMutation, build
Methods inherited from interface software.amazon.awssdk.core.SdkPojo
equalsBySdkFields, sdkFields
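Taken together, the setters above are chained fluently from Channel.builder(). The following sketch shows one way a Channel might be assembled; it assumes the SageMaker model classes from software.amazon.awssdk.services.sagemaker.model are on the classpath, and the channel name, content type, and S3 URI are illustrative values only:

```java
import software.amazon.awssdk.services.sagemaker.model.Channel;
import software.amazon.awssdk.services.sagemaker.model.CompressionType;
import software.amazon.awssdk.services.sagemaker.model.S3DataType;
import software.amazon.awssdk.services.sagemaker.model.TrainingInputMode;

public class ChannelExample {
    public static void main(String[] args) {
        // Illustrative values; replace the channel name and S3 URI with your own.
        Channel trainChannel = Channel.builder()
                .channelName("train")
                .contentType("text/csv")
                .compressionType(CompressionType.NONE)
                .inputMode(TrainingInputMode.FILE)
                // Consumer overload: the SDK builds the nested DataSource for us.
                .dataSource(ds -> ds.s3DataSource(s3 -> s3
                        .s3DataType(S3DataType.S3_PREFIX)
                        .s3Uri("s3://my-bucket/train/")))
                .build();
        System.out.println(trainChannel.channelName());
    }
}
```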
-
Method Details
-
channelName
The name of the channel.
- Parameters:
channelName
- The name of the channel.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
dataSource
The location of the channel data.
- Parameters:
dataSource
- The location of the channel data.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
dataSource
The location of the channel data.
This is a convenience method that creates an instance of the DataSource.Builder, avoiding the need to create one manually via DataSource.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to dataSource(DataSource).
- Parameters:
dataSource
- a consumer that will call methods on DataSource.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
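As a sketch (again assuming the SageMaker model classes, with an illustrative S3 URI), the two dataSource overloads are interchangeable; the Consumer form just saves the explicit nested builder calls:

```java
// Explicit form: build the DataSource manually, then pass it in.
DataSource explicit = DataSource.builder()
        .s3DataSource(S3DataSource.builder()
                .s3Uri("s3://my-bucket/train/")  // illustrative URI
                .build())
        .build();
Channel a = Channel.builder().channelName("train").dataSource(explicit).build();

// Consumer form: the SDK creates and builds the DataSource.Builder for you.
Channel b = Channel.builder()
        .channelName("train")
        .dataSource(ds -> ds.s3DataSource(s3 -> s3.s3Uri("s3://my-bucket/train/")))
        .build();
```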
-
contentType
The MIME type of the data.
- Parameters:
contentType
- The MIME type of the data.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
compressionType
If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
- Parameters:
compressionType
- If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
compressionType
If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
- Parameters:
compressionType
- If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
recordWrapperType
Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.
In File mode, leave this field unset or set it to None.
- Parameters:
recordWrapperType
- Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.
In File mode, leave this field unset or set it to None.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
recordWrapperType
Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.
In File mode, leave this field unset or set it to None.
- Parameters:
recordWrapperType
- Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.
In File mode, leave this field unset or set it to None.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
inputMode
(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode.
To use a model for incremental training, choose File input mode.
- Parameters:
inputMode
- (Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
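The per-channel override described above can be sketched as follows (assuming the SageMaker model classes; names and the URI are illustrative):

```java
// Sketch: the job's AlgorithmSpecification might use File mode overall,
// while this one channel streams its data via Pipe instead.
Channel streamed = Channel.builder()
        .channelName("augmentation")
        .inputMode(TrainingInputMode.PIPE) // overrides the job-level TrainingInputMode
        .dataSource(ds -> ds.s3DataSource(s3 -> s3.s3Uri("s3://my-bucket/aug/")))
        .build();
```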
-
inputMode
(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode.
To use a model for incremental training, choose File input mode.
- Parameters:
inputMode
- (Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
shuffleConfig
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, which helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with an S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
- Parameters:
shuffleConfig
- A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value. For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, which helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with an S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
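A minimal sketch of enabling shuffling (assuming the SageMaker model classes; the seed value and URI are illustrative, and ShuffleConfig.Builder#seed takes a Long):

```java
// Sketch: shuffle the channel's input order, with a fixed seed so the
// shuffling order is reproducible across runs.
Channel shuffled = Channel.builder()
        .channelName("train")
        .shuffleConfig(sc -> sc.seed(123L))
        .dataSource(ds -> ds.s3DataSource(s3 -> s3.s3Uri("s3://my-bucket/train/")))
        .build();
```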
-
shuffleConfig
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, which helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with an S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
This is a convenience method that creates an instance of the ShuffleConfig.Builder, avoiding the need to create one manually via ShuffleConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to shuffleConfig(ShuffleConfig).
- Parameters:
shuffleConfig
- a consumer that will call methods on ShuffleConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also: