
Class: Aws::MediaConvert::Types::H265Settings

Inherits: Struct

Overview

Note:

When passing H265Settings as input to an Aws::Client method, you can use a vanilla Hash:

{
  adaptive_quantization: "OFF", # accepts OFF, LOW, MEDIUM, HIGH, HIGHER, MAX
  alternate_transfer_function_sei: "DISABLED", # accepts DISABLED, ENABLED
  bitrate: 1,
  codec_level: "AUTO", # accepts AUTO, LEVEL_1, LEVEL_2, LEVEL_2_1, LEVEL_3, LEVEL_3_1, LEVEL_4, LEVEL_4_1, LEVEL_5, LEVEL_5_1, LEVEL_5_2, LEVEL_6, LEVEL_6_1, LEVEL_6_2
  codec_profile: "MAIN_MAIN", # accepts MAIN_MAIN, MAIN_HIGH, MAIN10_MAIN, MAIN10_HIGH, MAIN_422_8BIT_MAIN, MAIN_422_8BIT_HIGH, MAIN_422_10BIT_MAIN, MAIN_422_10BIT_HIGH
  dynamic_sub_gop: "ADAPTIVE", # accepts ADAPTIVE, STATIC
  flicker_adaptive_quantization: "DISABLED", # accepts DISABLED, ENABLED
  framerate_control: "INITIALIZE_FROM_SOURCE", # accepts INITIALIZE_FROM_SOURCE, SPECIFIED
  framerate_conversion_algorithm: "DUPLICATE_DROP", # accepts DUPLICATE_DROP, INTERPOLATE, FRAMEFORMER
  framerate_denominator: 1,
  framerate_numerator: 1,
  gop_b_reference: "DISABLED", # accepts DISABLED, ENABLED
  gop_closed_cadence: 1,
  gop_size: 1.0,
  gop_size_units: "FRAMES", # accepts FRAMES, SECONDS
  hrd_buffer_initial_fill_percentage: 1,
  hrd_buffer_size: 1,
  interlace_mode: "PROGRESSIVE", # accepts PROGRESSIVE, TOP_FIELD, BOTTOM_FIELD, FOLLOW_TOP_FIELD, FOLLOW_BOTTOM_FIELD
  max_bitrate: 1,
  min_i_interval: 1,
  number_b_frames_between_reference_frames: 1,
  number_reference_frames: 1,
  par_control: "INITIALIZE_FROM_SOURCE", # accepts INITIALIZE_FROM_SOURCE, SPECIFIED
  par_denominator: 1,
  par_numerator: 1,
  quality_tuning_level: "SINGLE_PASS", # accepts SINGLE_PASS, SINGLE_PASS_HQ, MULTI_PASS_HQ
  qvbr_settings: {
    max_average_bitrate: 1,
    qvbr_quality_level: 1,
    qvbr_quality_level_fine_tune: 1.0,
  },
  rate_control_mode: "VBR", # accepts VBR, CBR, QVBR
  sample_adaptive_offset_filter_mode: "DEFAULT", # accepts DEFAULT, ADAPTIVE, OFF
  scene_change_detect: "DISABLED", # accepts DISABLED, ENABLED, TRANSITION_DETECTION
  slices: 1,
  slow_pal: "DISABLED", # accepts DISABLED, ENABLED
  spatial_adaptive_quantization: "DISABLED", # accepts DISABLED, ENABLED
  telecine: "NONE", # accepts NONE, SOFT, HARD
  temporal_adaptive_quantization: "DISABLED", # accepts DISABLED, ENABLED
  temporal_ids: "DISABLED", # accepts DISABLED, ENABLED
  tiles: "DISABLED", # accepts DISABLED, ENABLED
  unregistered_sei_timecode: "DISABLED", # accepts DISABLED, ENABLED
  write_mp_4_packaging_type: "HVC1", # accepts HVC1, HEV1
}

Settings for H265 codec
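
For example, a minimal sketch of passing these settings inside a CreateJob request. The role ARN, S3 locations, and region below are placeholders, and the H.265 values shown are arbitrary example values; only the nesting of h265_settings under video_description.codec_settings follows the MediaConvert API shape.

require "aws-sdk-mediaconvert" # use require "aws-sdk" with the all-in-one v2 gem

# MediaConvert uses an account-specific endpoint; discover it first.
bootstrap = Aws::MediaConvert::Client.new(region: "us-east-1")
endpoint  = bootstrap.describe_endpoints.endpoints.first.url
client    = Aws::MediaConvert::Client.new(region: "us-east-1", endpoint: endpoint)

client.create_job(
  role: "arn:aws:iam::123456789012:role/MediaConvertRole", # placeholder role
  settings: {
    inputs: [{ file_input: "s3://my-bucket/input.mov" }],   # placeholder input
    output_groups: [{
      output_group_settings: {
        type: "FILE_GROUP_SETTINGS",
        file_group_settings: { destination: "s3://my-bucket/outputs/" }
      },
      outputs: [{
        container_settings: { container: "MP4" },
        video_description: {
          codec_settings: {
            codec: "H_265",
            h265_settings: {
              rate_control_mode: "QVBR",
              max_bitrate: 5_000_000,
              qvbr_settings: { qvbr_quality_level: 7 },
              gop_size: 90.0,
              gop_size_units: "FRAMES"
            }
          }
        }
      }]
    }]
  }
)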

Instance Attribute Summary

Instance Attribute Details

#adaptive_quantization ⇒ String

Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Flicker adaptive quantization (flickerAdaptiveQuantization), Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).

Possible values:

  • OFF
  • LOW
  • MEDIUM
  • HIGH
  • HIGHER
  • MAX

Returns:

  • (String)

    Specify the strength of any adaptive quantization filters that you enable.

#alternate_transfer_function_sei ⇒ String

Enables Alternate Transfer Function SEI message for outputs using Hybrid Log Gamma (HLG) Electro-Optical Transfer Function (EOTF).

Possible values:

  • DISABLED
  • ENABLED

Returns:

  • (String)

    Enables Alternate Transfer Function SEI message for outputs using Hybrid Log Gamma (HLG) Electro-Optical Transfer Function (EOTF).

#bitrate ⇒ Integer

Specify the average bitrate in bits per second. Required for VBR and CBR. For MS Smooth outputs, bitrates must be unique when rounded down to the nearest multiple of 1000.

Returns:

  • (Integer)

    Specify the average bitrate in bits per second.

#codec_level ⇒ String

H.265 Level.

Possible values:

  • AUTO
  • LEVEL_1
  • LEVEL_2
  • LEVEL_2_1
  • LEVEL_3
  • LEVEL_3_1
  • LEVEL_4
  • LEVEL_4_1
  • LEVEL_5
  • LEVEL_5_1
  • LEVEL_5_2
  • LEVEL_6
  • LEVEL_6_1
  • LEVEL_6_2

Returns:

  • (String)

    H.265 Level.

#codec_profile ⇒ String

Represents the Profile and Tier, per the HEVC (H.265) specification. Selections are grouped as [Profile] / [Tier], so "Main/High" represents Main Profile with High Tier. 4:2:2 profiles are only available with the HEVC 4:2:2 License.

Possible values:

  • MAIN_MAIN
  • MAIN_HIGH
  • MAIN10_MAIN
  • MAIN10_HIGH
  • MAIN_422_8BIT_MAIN
  • MAIN_422_8BIT_HIGH
  • MAIN_422_10BIT_MAIN
  • MAIN_422_10BIT_HIGH

Returns:

  • (String)

    Represents the Profile and Tier, per the HEVC (H.265) specification.

#dynamic_sub_gop ⇒ String

Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

Possible values:

  • ADAPTIVE
  • STATIC

Returns:

  • (String)

    Choose Adaptive to improve subjective video quality for high-motion content.

#flicker_adaptive_quantization ⇒ String

Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set adaptiveQuantization to a value other than Off (OFF).

Possible values:

  • DISABLED
  • ENABLED

Returns:

  • (String)

    Enable this setting to have the encoder reduce I-frame pop.

#framerate_control ⇒ String

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

Possible values:

  • INITIALIZE_FROM_SOURCE
  • SPECIFIED

Returns:

  • (String)

    If you are using the console, use the Framerate setting to specify the frame rate for this output.

#framerate_conversion_algorithm ⇒ String

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

Possible values:

  • DUPLICATE_DROP
  • INTERPOLATE
  • FRAMEFORMER

Returns:

  • (String)

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate.

#framerate_denominator ⇒ Integer

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

Returns:

  • (Integer)

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction.

#framerate_numerator ⇒ Integer

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

Returns:

  • (Integer)

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction.
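
For example, a small sketch of the fraction form that FramerateNumerator and FramerateDenominator describe, using the 23.976 fps example from the text:

# 23.976 fps expressed as a fraction, as the API expects it.
h265_framerate = {
  framerate_control: "SPECIFIED",
  framerate_numerator: 24000,
  framerate_denominator: 1001,
}

puts 24000.0 / 1001 # => 23.976..., the decimal you would enter in the console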

#gop_b_reference ⇒ String

If enabled, use reference B-frames for GOP structures that have B-frames > 1.

Possible values:

  • DISABLED
  • ENABLED

Returns:

  • (String)

    If enabled, use reference B-frames for GOP structures that have B-frames > 1.

#gop_closed_cadence ⇒ Integer

Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

Returns:

  • (Integer)

    Frequency of closed GOPs.

#gop_size ⇒ Float

GOP Length (keyframe interval) in frames or seconds. Must be greater than zero.

Returns:

  • (Float)

    GOP Length (keyframe interval) in frames or seconds.

#gop_size_units ⇒ String

Indicates if the GOP size in H265 is specified in frames or seconds. If seconds, the system will convert the GOP size into a frame count at run time.

Possible values:

  • FRAMES
  • SECONDS

Returns:

  • (String)

    Indicates if the GOP Size in H265 is specified in frames or seconds.
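
A rough illustration of that run-time conversion (approximate; exact rounding is up to the service):

# A GOP size of 2.0 seconds at 29.97 fps is roughly 60 frames.
gop_size_seconds = 2.0
frame_rate = 30000.0 / 1001
puts (gop_size_seconds * frame_rate).round # => 60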

#hrd_buffer_initial_fill_percentage ⇒ Integer

Percentage of the buffer that should initially be filled (HRD buffer model).

Returns:

  • (Integer)

    Percentage of the buffer that should initially be filled (HRD buffer model).

#hrd_buffer_size ⇒ Integer

Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

Returns:

  • (Integer)

    Size of buffer (HRD buffer model) in bits.

#interlace_mode ⇒ String

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first, depending on which of the Follow options you choose.

Possible values:

  • PROGRESSIVE
  • TOP_FIELD
  • BOTTOM_FIELD
  • FOLLOW_TOP_FIELD
  • FOLLOW_BOTTOM_FIELD

Returns:

  • (String)

    Choose the scan line type for the output.

#max_bitrate ⇒ Integer

Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.

Returns:

  • (Integer)

    Maximum bitrate in bits/second.

#min_i_interval ⇒ Integer

Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection. If a scene change I-frame is within I-interval frames of a cadence I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. GOP stretch requires enabling lookahead as well as setting I-interval. The normal cadence resumes for the next GOP. This setting is only used when Scene Change Detect is enabled. Note: Maximum GOP stretch = GOP size + Min-I-interval - 1

Returns:

  • (Integer)

    Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection.

#number_b_frames_between_reference_frames ⇒ Integer

Number of B-frames between reference frames.

Returns:

  • (Integer)

    Number of B-frames between reference frames.

#number_reference_frames ⇒ Integer

Number of reference frames to use. The encoder may use more than requested if using B-frames and/or interlaced encoding.

Returns:

  • (Integer)

    Number of reference frames to use.

#par_control ⇒ String

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

Possible values:

  • INITIALIZE_FROM_SOURCE
  • SPECIFIED

Returns:

  • (String)

    Optional.

#par_denominator ⇒ Integer

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

Returns:

  • (Integer)

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED.

#par_numerator ⇒ Integer

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

Returns:

  • (Integer)

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED.
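
Taking the D1/DV NTSC widescreen example from the text, the 40:33 ratio maps onto the settings like this (sketch):

h265_par = {
  par_control: "SPECIFIED",
  par_numerator: 40,    # the "40" in 40:33
  par_denominator: 33,  # the "33" in 40:33
}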

#quality_tuning_level ⇒ String

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

Possible values:

  • SINGLE_PASS
  • SINGLE_PASS_HQ
  • MULTI_PASS_HQ

Returns:

  • (String)

    Optional.

#qvbr_settings ⇒ Types::H265QvbrSettings

Settings for quality-defined variable bitrate encoding with the H.265 codec. Required when you set Rate control mode to QVBR. Not valid when you set Rate control mode to a value other than QVBR, or when you don't define Rate control mode.

Returns:

  • (Types::H265QvbrSettings)

    Settings for quality-defined variable bitrate encoding with the H.265 codec.

#rate_control_mode ⇒ String

Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).

Possible values:

  • VBR
  • CBR
  • QVBR

Returns:

  • (String)

    Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).
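
For example, a sketch of a QVBR configuration. Per the descriptions above, max_bitrate and qvbr_settings are required when the rate control mode is QVBR, while bitrate applies only to VBR and CBR; the quality level shown is an arbitrary example value.

h265_qvbr = {
  rate_control_mode: "QVBR",
  max_bitrate: 5_000_000,   # bitrate ceiling in bits/second
  qvbr_settings: {
    qvbr_quality_level: 7,  # example value
  },
}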

#sample_adaptive_offset_filter_mode ⇒ String

Specify Sample Adaptive Offset (SAO) filter strength. Adaptive mode dynamically selects the best strength based on content.

Possible values:

  • DEFAULT
  • ADAPTIVE
  • OFF

Returns:

  • (String)

    Specify Sample Adaptive Offset (SAO) filter strength.

#scene_change_detect ⇒ String

Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.

Possible values:

  • DISABLED
  • ENABLED
  • TRANSITION_DETECTION

Returns:

  • (String)

    Enable this setting to insert I-frames at scene changes that the service automatically detects.

#slices ⇒ Integer

Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

Returns:

  • (Integer)

    Number of slices per picture.

#slow_pal ⇒ String

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

Possible values:

  • DISABLED
  • ENABLED

Returns:

  • (String)

    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps).
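
A sketch of the required combination for slow PAL output, assuming a 23.976 or 24 fps input as described above:

h265_slow_pal = {
  slow_pal: "ENABLED",
  framerate_control: "SPECIFIED",
  framerate_numerator: 25,
  framerate_denominator: 1,
}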

#spatial_adaptive_quantization ⇒ String

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

Possible values:

  • DISABLED
  • ENABLED

Returns:

  • (String)

    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity.

#telecine ⇒ String

This field applies only if the Streams > Advanced > Framerate (framerate) field is set to 29.970. This field works with the Streams > Advanced > Preprocessors > Deinterlacer field (deinterlace_mode) and the Streams > Advanced > Interlaced Mode field (interlace_mode) to identify the scan type for the output: Progressive, Interlaced, Hard Telecine or Soft Telecine. - Hard: produces 29.97i output from 23.976 input. - Soft: produces 23.976; the player converts this output to 29.97i.

Possible values:

  • NONE
  • SOFT
  • HARD

Returns:

  • (String)

    This field applies only if the Streams > Advanced > Framerate (framerate) field is set to 29.970.

#temporal_adaptive_quantization ⇒ String

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

Possible values:

  • DISABLED
  • ENABLED

Returns:

  • (String)

    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity.

#temporal_ids ⇒ String

Enables temporal layer identifiers in the encoded bitstream. Up to 3 layers are supported depending on GOP structure: I- and P-frames form one layer, reference B-frames can form a second layer and non-reference b-frames can form a third layer. Decoders can optionally decode only the lower temporal layers to generate a lower frame rate output. For example, given a bitstream with temporal IDs and with b-frames = 1 (i.e. IbPbPb display order), a decoder could decode all the frames for full frame rate output or only the I and P frames (lowest temporal layer) for a half frame rate output.

Possible values:

  • DISABLED
  • ENABLED

Returns:

  • (String)

    Enables temporal layer identifiers in the encoded bitstream.

#tiles ⇒ String

Enable use of tiles, allowing horizontal as well as vertical subdivision of the encoded pictures.

Possible values:

  • DISABLED
  • ENABLED

Returns:

  • (String)

    Enable use of tiles, allowing horizontal as well as vertical subdivision of the encoded pictures.

#unregistered_sei_timecode ⇒ String

Inserts timecode for each frame as 4 bytes of an unregistered SEI message.

Possible values:

  • DISABLED
  • ENABLED

Returns:

  • (String)

    Inserts timecode for each frame as 4 bytes of an unregistered SEI message.

#write_mp_4_packaging_type ⇒ String

If the location of parameter set NAL units doesn't matter in your workflow, ignore this setting. Use this setting only with CMAF or DASH outputs, or with standalone file outputs in an MPEG-4 container (MP4 outputs). Choose HVC1 to mark your output as HVC1. This makes your output compliant with the following specification: ISO IECJTC1 SC29 N13798 Text ISO/IEC FDIS 14496-15 3rd Edition. For these outputs, the service stores parameter set NAL units in the sample headers but not in the samples directly. For MP4 outputs, when you choose HVC1, your output video might not work properly with some downstream systems and video players. The service defaults to marking your output as HEV1. For these outputs, the service writes parameter set NAL units directly into the samples.

Possible values:

  • HVC1
  • HEV1

Returns:

  • (String)

    If the location of parameter set NAL units doesn\'t matter in your workflow, ignore this setting.