
Step 2: Examine the code

In this section of the Android Producer Library procedure, you examine the example code.

The Android test application (AmazonKinesisVideoDemoApp) shows the following coding pattern:

  • Create an instance of KinesisVideoClient.

  • Create an instance of MediaSource.

  • Start streaming: start the MediaSource so that it begins sending data to the client.

The following sections provide details.
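Before looking at the real classes, the three-step pattern can be sketched with a self-contained mock. The MockClient and MediaSource types below are stand-ins invented for this illustration; the actual SDK classes are KinesisVideoAndroidClientFactory, KinesisVideoClient, and AndroidCameraMediaSource, shown in the sections that follow.

```java
import java.util.ArrayList;
import java.util.List;

public class ProducerPatternSketch {
    // Stand-in for the MediaSource interface: something that produces data
    // once started. (The real interface has a richer lifecycle.)
    interface MediaSource {
        void start();
    }

    // Stand-in for KinesisVideoClient: it creates media sources and
    // receives the data they produce.
    static class MockClient {
        final List<String> received = new ArrayList<>();

        MediaSource createMediaSource(String streamName) {
            // The real client wires the source to a network-backed sink;
            // here the "sink" just records a description of each frame.
            return () -> received.add("frame for " + streamName);
        }
    }

    public static void main(String[] args) {
        MockClient client = new MockClient();                          // 1. create the client
        MediaSource source = client.createMediaSource("demo-stream");  // 2. create the media source
        source.start();                                                // 3. start streaming
        System.out.println(client.received.get(0));                    // prints "frame for demo-stream"
    }
}
```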

Creating an instance of KinesisVideoClient

You create the KinesisVideoClient object by calling the createKinesisVideoClient method on KinesisVideoAndroidClientFactory.

mKinesisVideoClient = KinesisVideoAndroidClientFactory.createKinesisVideoClient(
        getActivity(),
        KinesisVideoDemoApp.KINESIS_VIDEO_REGION,
        KinesisVideoDemoApp.getCredentialsProvider());

For KinesisVideoClient to make network calls, it needs credentials to authenticate. You pass in an instance of AWSCredentialsProvider, which reads your Amazon Cognito credentials from the awsconfiguration.json file that you modified in the previous section.
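For reference, an awsconfiguration.json that supplies Amazon Cognito credentials typically has a shape like the following. The values here are placeholders; your file from the previous section contains your own identity pool ID and Region.

```json
{
  "Version": "1.0",
  "CredentialsProvider": {
    "CognitoIdentity": {
      "Default": {
        "PoolId": "REPLACE_WITH_IDENTITY_POOL_ID",
        "Region": "REPLACE_WITH_REGION"
      }
    }
  }
}
```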

Creating an instance of MediaSource

To send bytes to your Kinesis video stream, you must produce the data. Amazon Kinesis Video Streams provides the MediaSource interface, which represents the data source.

For example, the Kinesis Video Streams Android library provides the AndroidCameraMediaSource implementation of the MediaSource interface. This class reads data from one of the device's cameras.

The following code example (from the fragment/ file) creates the configuration for the media source:

private AndroidCameraMediaSourceConfiguration getCurrentConfiguration() {
    return new AndroidCameraMediaSourceConfiguration(
            AndroidCameraMediaSourceConfiguration.builder()
                    .withCameraId(mCamerasDropdown.getSelectedItem().getCameraId())
                    .withEncodingMimeType(mMimeTypeDropdown.getSelectedItem().getMimeType())
                    .withHorizontalResolution(mResolutionDropdown.getSelectedItem().getWidth())
                    .withVerticalResolution(mResolutionDropdown.getSelectedItem().getHeight())
                    .withCameraFacing(mCamerasDropdown.getSelectedItem().getCameraFacing())
                    .withIsEncoderHardwareAccelerated(
                            mCamerasDropdown.getSelectedItem().isEndcoderHardwareAccelerated())
                    .withFrameRate(FRAMERATE_20)
                    .withRetentionPeriodInHours(RETENTION_PERIOD_48_HOURS)
                    .withEncodingBitRate(BITRATE_384_KBPS)
                    .withCameraOrientation(-mCamerasDropdown.getSelectedItem().getCameraOrientation())
                    .withNalAdaptationFlags(StreamInfo.NalAdaptationFlags.NAL_ADAPTATION_ANNEXB_CPD_AND_FRAME_NALS)
                    .withIsAbsoluteTimecode(false));
}

The following code example (from the fragment/ file) creates the media source:

mCameraMediaSource = (AndroidCameraMediaSource) mKinesisVideoClient
        .createMediaSource(mStreamName, mConfiguration);

Starting the media source

Start the media source so that it can begin generating data and sending it to the client. The following code example is from the fragment/ file:


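The example code for this step is not reproduced above. As a rough sketch, starting the camera media source amounts to calling start() on it and handling the checked exception; the method and exception names here reflect the SDK's MediaSource interface, but treat this snippet as an approximation rather than the file's verbatim contents:

```java
try {
    // Begin capturing from the camera and sending frames to the client.
    mCameraMediaSource.start();
} catch (KinesisVideoException e) {
    Log.e(TAG, "Unable to start the media source", e);
}
```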
Next step

Step 3: Run and verify the code