You are viewing documentation for version 2 of the AWS SDK for Ruby. Separate documentation is available for version 3.
Class: Aws::Batch::Types::RegisterJobDefinitionRequest
- Inherits: Struct
  - Object
  - Struct
  - Aws::Batch::Types::RegisterJobDefinitionRequest
- Defined in: (unknown)
Overview
When passing RegisterJobDefinitionRequest as input to an Aws::Client method, you can use a vanilla Hash:
{
job_definition_name: "String", # required
type: "container", # required, accepts container, multinode
parameters: {
"String" => "String",
},
container_properties: {
image: "String",
vcpus: 1,
memory: 1,
command: ["String"],
job_role_arn: "String",
execution_role_arn: "String",
volumes: [
{
host: {
source_path: "String",
},
name: "String",
},
],
environment: [
{
name: "String",
value: "String",
},
],
mount_points: [
{
container_path: "String",
read_only: false,
source_volume: "String",
},
],
readonly_root_filesystem: false,
privileged: false,
ulimits: [
{
hard_limit: 1, # required
name: "String", # required
soft_limit: 1, # required
},
],
user: "String",
instance_type: "String",
resource_requirements: [
{
value: "String", # required
type: "GPU", # required, accepts GPU
},
],
linux_parameters: {
devices: [
{
host_path: "String", # required
container_path: "String",
permissions: ["READ"], # accepts READ, WRITE, MKNOD
},
],
init_process_enabled: false,
shared_memory_size: 1,
tmpfs: [
{
container_path: "String", # required
size: 1, # required
mount_options: ["String"],
},
],
max_swap: 1,
swappiness: 1,
},
log_configuration: {
log_driver: "json-file", # required, accepts json-file, syslog, journald, gelf, fluentd, awslogs, splunk
options: {
"String" => "String",
},
secret_options: [
{
name: "String", # required
value_from: "String", # required
},
],
},
secrets: [
{
name: "String", # required
value_from: "String", # required
},
],
},
node_properties: {
num_nodes: 1, # required
main_node: 1, # required
node_range_properties: [ # required
{
target_nodes: "String", # required
container: {
image: "String",
vcpus: 1,
memory: 1,
command: ["String"],
job_role_arn: "String",
execution_role_arn: "String",
volumes: [
{
host: {
source_path: "String",
},
name: "String",
},
],
environment: [
{
name: "String",
value: "String",
},
],
mount_points: [
{
container_path: "String",
read_only: false,
source_volume: "String",
},
],
readonly_root_filesystem: false,
privileged: false,
ulimits: [
{
hard_limit: 1, # required
name: "String", # required
soft_limit: 1, # required
},
],
user: "String",
instance_type: "String",
resource_requirements: [
{
value: "String", # required
type: "GPU", # required, accepts GPU
},
],
linux_parameters: {
devices: [
{
host_path: "String", # required
container_path: "String",
permissions: ["READ"], # accepts READ, WRITE, MKNOD
},
],
init_process_enabled: false,
shared_memory_size: 1,
tmpfs: [
{
container_path: "String", # required
size: 1, # required
mount_options: ["String"],
},
],
max_swap: 1,
swappiness: 1,
},
log_configuration: {
log_driver: "json-file", # required, accepts json-file, syslog, journald, gelf, fluentd, awslogs, splunk
options: {
"String" => "String",
},
secret_options: [
{
name: "String", # required
value_from: "String", # required
},
],
},
secrets: [
{
name: "String", # required
value_from: "String", # required
},
],
},
},
],
},
retry_strategy: {
attempts: 1,
evaluate_on_exit: [
{
on_status_reason: "String",
on_reason: "String",
on_exit_code: "String",
action: "RETRY", # required, accepts RETRY, EXIT
},
],
},
timeout: {
attempt_duration_seconds: 1,
},
tags: {
"TagKey" => "TagValue",
},
}
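The hash above maps one-to-one onto a `register_job_definition` call. As a minimal sketch (the image, names, and values below are illustrative assumptions, not part of this reference), a single-node container job definition can be built as a plain Hash; the commented-out lines show how it would be passed to an `Aws::Batch::Client`:

```ruby
# Minimal single-node container job definition (illustrative values only).
params = {
  job_definition_name: "sample-hello-world", # required
  type: "container",                         # required: "container" or "multinode"
  container_properties: {
    image: "busybox",
    vcpus: 1,
    memory: 128,
    command: ["echo", "hello"],
  },
}

# Local sanity check mirroring the "# required" markers above.
raise ArgumentError, "missing required keys" unless
  params.key?(:job_definition_name) && params.key?(:type)

# With the aws-sdk gem configured, the hash is passed straight through:
#   client = Aws::Batch::Client.new(region: "us-east-1")
#   resp   = client.register_job_definition(params)
```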
Instance Attribute Summary
- #container_properties ⇒ Types::ContainerProperties
  An object with various properties specific to single-node container-based jobs.
- #job_definition_name ⇒ String
  The name of the job definition to register.
- #node_properties ⇒ Types::NodeProperties
  An object with various properties specific to multi-node parallel jobs.
- #parameters ⇒ Hash<String,String>
  Default parameter substitution placeholders to set in the job definition.
- #retry_strategy ⇒ Types::RetryStrategy
  The retry strategy to use for failed jobs that are submitted with this job definition.
- #tags ⇒ Hash<String,String>
  The tags that you apply to the job definition to help you categorize and organize your resources.
- #timeout ⇒ Types::JobTimeout
  The timeout configuration for jobs that are submitted with this job definition, after which AWS Batch terminates your jobs if they have not finished.
- #type ⇒ String
  The type of job definition.
Instance Attribute Details
#container_properties ⇒ Types::ContainerProperties
An object with various properties specific to single-node container-based jobs. If the job definition's type parameter is container, then you must specify either containerProperties or nodeProperties.
#job_definition_name ⇒ String
The name of the job definition to register. Up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.
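That naming rule can be checked locally with a short Ruby sketch. The pattern below is inferred from the sentence above (letters, numbers, hyphens, underscores, up to 128 characters); it is not taken from the SDK itself:

```ruby
# Inferred from the documented rule: up to 128 letters (uppercase and
# lowercase), numbers, hyphens, and underscores.
NAME_PATTERN = /\A[A-Za-z0-9_-]{1,128}\z/

def valid_job_definition_name?(name)
  !!(name =~ NAME_PATTERN)
end

valid_job_definition_name?("nightly_etl-v2") # => true
valid_job_definition_name?("bad name!")      # => false
```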
#node_properties ⇒ Types::NodeProperties
An object with various properties specific to multi-node parallel jobs. If you specify node properties for a job, it becomes a multi-node parallel job. For more information, see Multi-node Parallel Jobs in the AWS Batch User Guide. If the job definition's type parameter is container, then you must specify either containerProperties or nodeProperties.
#parameters ⇒ Hash<String,String>
Default parameter substitution placeholders to set in the job definition. Parameters are specified as a key-value pair mapping. Parameters in a SubmitJob request override any corresponding parameter defaults from the job definition.
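The substitution mechanism works on `Ref::name` placeholders in the container command. The sketch below is an illustration of the documented behavior, not SDK code: defaults come from the job definition, SubmitJob values win, and any placeholder without a matching parameter is left as-is:

```ruby
# Illustration only: resolve Ref::name placeholders the way AWS Batch
# does when building the container command for a job.
def resolve_command(command, defaults, overrides = {})
  params = defaults.merge(overrides) # SubmitJob parameters take precedence
  command.map do |token|
    token.start_with?("Ref::") ? params.fetch(token.sub("Ref::", ""), token) : token
  end
end

defaults = { "greeting" => "hello" }     # set via RegisterJobDefinition
command  = ["echo", "Ref::greeting"]

resolve_command(command, defaults)                        # => ["echo", "hello"]
resolve_command(command, defaults, "greeting" => "hola")  # => ["echo", "hola"]
```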
#retry_strategy ⇒ Types::RetryStrategy
The retry strategy to use for failed jobs that are submitted with this job definition. Any retry strategy that is specified during a SubmitJob operation overrides the retry strategy defined here. If a job is terminated due to a timeout, it is not retried.
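The evaluate_on_exit conditions shown in the request hash are matched in order against a finished attempt, and the first condition whose patterns all match decides whether the attempt is retried or the job exits. The sketch below paraphrases that behavior (the semantics are summarized from the AWS Batch documentation; the code and its glob-matching via File.fnmatch are an illustration, not SDK code):

```ruby
# Illustration of evaluate_on_exit matching: conditions are checked in
# order, and the first one whose patterns all match decides the action.
# Patterns may use "*" as a wildcard, modeled here with File.fnmatch.
def retry_action(conditions, exit_code:, reason: "")
  conditions.each do |c|
    next if c[:on_exit_code] && !File.fnmatch(c[:on_exit_code], exit_code.to_s)
    next if c[:on_reason] && !File.fnmatch(c[:on_reason], reason)
    return c[:action]
  end
  "RETRY" # no condition matched; the job falls back to the attempts count
end

conditions = [
  { on_exit_code: "137", action: "RETRY" }, # e.g. container killed (OOM)
  { on_exit_code: "*",   action: "EXIT"  },
]

retry_action(conditions, exit_code: 137) # => "RETRY"
retry_action(conditions, exit_code: 1)   # => "EXIT"
```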
#tags ⇒ Hash<String,String>
The tags that you apply to the job definition to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging AWS Resources in AWS General Reference.
#timeout ⇒ Types::JobTimeout
The timeout configuration for jobs that are submitted with this job definition, after which AWS Batch terminates your jobs if they have not finished. If a job is terminated due to a timeout, it is not retried. The minimum value for the timeout is 60 seconds. Any timeout configuration that is specified during a SubmitJob operation overrides the timeout configuration defined here. For more information, see Job Timeouts in the Amazon Elastic Container Service Developer Guide.
#type ⇒ String
The type of job definition.
Possible values:
- container
- multinode