Service Connect concepts - Amazon Elastic Container Service

Service Connect concepts

The Service Connect feature creates a virtual network of related services. The same service configuration can be used across multiple namespaces to run independent yet identical sets of applications. Service Connect defines the proxy container in the Amazon ECS service, not in the task definition. This way, the same task definition can be used to run identical applications in different namespaces with different Service Connect configurations. Each task that the Amazon ECS service launches runs a proxy container.

Service Connect is suitable for connections between Amazon ECS services within the same namespace. For the following applications, you need to use an additional interconnection method to connect to an Amazon ECS service that is configured with Service Connect:

  • Amazon ECS tasks that are configured in other namespaces

  • Amazon ECS tasks that aren’t configured for Service Connect

  • other applications outside of Amazon ECS

These applications can connect through the Service Connect proxy but can’t resolve Service Connect endpoint names.

For these applications to resolve the IP addresses of ECS tasks, you need to use another interconnection method. For a list of interconnection methods, see Choosing an interconnection method.

Service Connect terminology

The following terms are used with Service Connect.

port name

The Amazon ECS task definition configuration that assigns a name to a particular port mapping. This configuration is only used by Amazon ECS Service Connect.

client alias

The Amazon ECS service configuration that assigns the port number that is used in the endpoint. Additionally, the client alias can assign the DNS name of the endpoint, overriding the discovery name. If a discovery name isn't provided in the Amazon ECS service, the client alias name overrides the port name as the endpoint name. For endpoint examples, see the definition of endpoint. Multiple client aliases can be assigned to an Amazon ECS service. This configuration is only used by Amazon ECS Service Connect.

discovery name

The optional, intermediate name that you can create for a specified port from the task definition. This name is used to create an Amazon Cloud Map service. If this name isn't provided, the port name from the task definition is used. Multiple discovery names can be assigned to a specific port in an Amazon ECS service. This configuration is only used by Amazon ECS Service Connect.

Amazon Cloud Map service names must be unique within a namespace. Because of this limitation, you can have only one Service Connect configuration without a discovery name for a particular task definition in each namespace.

endpoint

The URL to connect to an API or website. The URL contains the protocol, a DNS name, and the port. For more information about endpoints in general, see endpoint in the Amazon glossary in the Amazon Web Services General Reference.

Service Connect creates endpoints that connect to Amazon ECS services and configures the tasks in Amazon ECS services to connect to the endpoints. The URL contains the protocol, a DNS name, and the port. You select the protocol and port name in the task definition, as the port must match the application that is inside the container image. In the service, you select each port by name and can assign the DNS name. If you don't specify a DNS name in the Amazon ECS service configuration, the port name from the task definition is used by default. For example, a Service Connect endpoint could be http://blog:80, grpc://checkout:8080, or http://_db.production.internal:99.
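As a sketch (not part of the ECS API), an endpoint URL is simply the composition of the selected protocol, the DNS name from the client alias or port name, and the port:

```python
def endpoint_url(protocol: str, dns_name: str, port: int) -> str:
    """Compose a Service Connect endpoint URL from its three parts."""
    return f"{protocol}://{dns_name}:{port}"

# The examples from the text:
assert endpoint_url("http", "blog", 80) == "http://blog:80"
assert endpoint_url("grpc", "checkout", 8080) == "grpc://checkout:8080"
```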

Service Connect service

The configuration of a single endpoint in an Amazon ECS service. This is a part of the Service Connect configuration, consisting of a single row in the Service Connect and discovery name configuration in the console, or one object in the services list in the JSON configuration of an Amazon ECS service. This configuration is only used by Amazon ECS Service Connect.

For more information, see ServiceConnectService in the Amazon Elastic Container Service API Reference.

namespace

The short name or full Amazon Resource Name (ARN) of the Amazon Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace in Amazon Cloud Map doesn't affect Service Connect.

Service Connect uses the Amazon Cloud Map namespace as a logical grouping of Amazon ECS tasks that talk to one another. Each Amazon ECS service can belong to only one namespace. The services within a namespace can be spread across different Amazon ECS clusters within the same Amazon Web Services Region in the same Amazon Web Services account. Because each cluster can run tasks of any operating system and CPU architecture, in any VPC, and on the EC2, Fargate, and External launch types, you can freely organize your services by any criteria that you choose.

client service

An Amazon ECS service that runs a network client application. This service must have a namespace configured. Each task in the service can discover and connect to all of the endpoints in the namespace through a Service Connect proxy container.

If any of your containers in the task need to connect to an endpoint from a service in a namespace, choose a client service. If a frontend, reverse proxy, or load balancer application receives external traffic through other methods such as from Elastic Load Balancing, it could use this type of Service Connect configuration.

client-server service

An Amazon ECS service that runs a network or web service application. This service must have a namespace and at least one endpoint configured. Each task in the service is reachable by using the endpoints. The Service Connect proxy container listens on the endpoint name and port to direct traffic to the app containers in the task.

If any of the containers expose and listen on a port for network traffic, choose a client-server service. These applications don't need to connect to other client-server services in the same namespace, but they still receive the client configuration. A backend, middleware, business tier, or most microservices would use this type of Service Connect configuration. If you want a frontend, reverse proxy, or load balancer application to receive traffic from other services configured with Service Connect in the same namespace, these services should use this type of Service Connect configuration.

Cluster configuration

You can set a default namespace for Service Connect when you create the cluster or by updating the cluster. If you specify a namespace name that doesn't exist in the same Amazon Web Services Region and account, a new HTTP namespace is created.

If you create a cluster and specify a default Service Connect namespace, the cluster waits in the PROVISIONING status while Amazon ECS creates the namespace. An attachment in the cluster shows the status of the namespace. Attachments aren't displayed by default in the Amazon CLI; you must add --include ATTACHMENTS to see them.
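For example, a describe-clusters request must ask for attachments explicitly. The following is a minimal sketch of the request parameters ("my-cluster" is a placeholder name), as they could be passed to a boto3 ECS client or the equivalent Amazon CLI command:

```python
# Request parameters for describing a cluster, including the attachments
# that show the Service Connect namespace status. "my-cluster" is a
# placeholder cluster name.
params = {
    "clusters": ["my-cluster"],
    "include": ["ATTACHMENTS"],  # attachments are omitted unless requested
}
# With boto3: ecs_client.describe_clusters(**params)
# With the CLI: aws ecs describe-clusters --clusters my-cluster --include ATTACHMENTS
assert "ATTACHMENTS" in params["include"]
```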

Service Connect service configuration

Service Connect is designed to require minimal configuration. In the task definition, you set a name for each port mapping that you want to use with Service Connect. In the service, you turn on Service Connect and select a namespace to make a client service. To make a client-server service, you add a single Service Connect service configuration that matches the name of one of the port mappings. Amazon ECS reuses the port number and port name from the task definition to define the Service Connect service and endpoint. To override those values, use the Discovery, DNS, and Port parameters in the console, or the discoveryName and clientAliases parameters in the Amazon ECS API.

The following example shows each kind of Service Connect configuration being used together in the same Amazon ECS service. Shell-style comments are included for explanation; note, however, that the JSON configuration used for Amazon ECS services doesn't support comments.

{
    ...
    serviceConnectConfiguration: {
        enabled: true,
        namespace: "internal",
        # Configuration for client services can end here.
        # Only these two parameters are required.
        services: [
            {
                portName: "http"
                # Minimal client-server service configuration can end here.
                # portName must match the "name" parameter of a port mapping
                # in the task definition.
            },
            {
                discoveryName: "http-second",
                # Set the discoveryName to avoid a task definition port name
                # collision with the minimal configuration in the same
                # Cloud Map namespace.
                portName: "http"
            },
            {
                clientAliases: [{
                    dnsName: "db",
                    port: 81
                }],
                # Use when the port in the task definition is not the port
                # that client applications use. Client applications can use
                # http://db:81 to connect.
                discoveryName: "http-three",
                portName: "http"
            },
            {
                clientAliases: [{
                    dnsName: "db.app",
                    port: 81
                }],
                # Duplicate ports are fine as long as the discoveryName is
                # different.
                discoveryName: "http-four",
                portName: "http",
                ingressPortOverride: 99
                # Use if the application should also accept traffic directly
                # on the task definition port.
            }
        ]
    }
}

Deployment order

When you use Amazon ECS Service Connect, you configure each Amazon ECS service either to run a server application that receives network requests (client-server service) or to run a client application that makes the requests (client service).

When you prepare to start using Service Connect, start with a client-server service. You can add a Service Connect configuration to a new service or an existing service. After you edit and update an Amazon ECS service to add a Service Connect configuration, Amazon ECS creates a Service Connect endpoint in the namespace. Additionally, Amazon ECS creates a new deployment in the service to replace the tasks that are currently running.

Existing tasks and other applications can continue to connect to existing endpoints and to external applications. If a client-server service adds tasks by scaling out, new connections from clients are immediately balanced across all of the tasks. If a client-server service is updated, new connections from clients are immediately balanced across the tasks of the new version.

Existing tasks can't resolve and connect to the new endpoint. Only new Amazon ECS tasks that have a Service Connect configuration in the same namespace and that start running after this deployment can resolve and connect to this endpoint. For example, an Amazon ECS service that runs a client application must be redeployed to connect to a new database server endpoint. Start the client deployment after the deployment completes for the server.

This means that the operator of the client application determines when the configuration of their app changes, even though the operator of the server application can change their configuration at any time. The list of endpoints in the namespace can change every time that any Amazon ECS service in the namespace is deployed, but existing tasks and replacement tasks continue to behave the same as they did after the most recent deployment.

Consider the following examples.

First, assume that you are creating an application that is available to the public internet in a single Amazon CloudFormation template and single Amazon CloudFormation stack. The public discovery and reachability, including the frontend client service, should be created last by Amazon CloudFormation. The services need to be created in this order to prevent a time period when the frontend client service is running and available to the public but a backend isn't, which would return error messages to the public. In Amazon CloudFormation, use the DependsOn attribute to indicate that multiple Amazon ECS services can't be created in parallel. Add DependsOn to the frontend client service for each backend client-server service that the client tasks connect to.

Second, assume that a frontend service exists without a Service Connect configuration, and its tasks connect to an existing backend service. Add a client-server Service Connect configuration to the backend service first, using the same name in the DNS or clientAlias that the frontend uses. This creates a new deployment, so you can use deployment rollback detection, the Amazon Web Services Management Console, the Amazon CLI, the Amazon SDKs, or other methods to roll back and revert the backend service to the previous deployment and configuration. If you are satisfied with the performance and behavior of the backend service, add a client or client-server Service Connect configuration to the frontend service. Only the tasks in the new deployment use the Service Connect proxy that is added to those new tasks. If you have issues with this configuration, you can roll back and revert to your previous configuration by using the same methods. If you use another service discovery system that is based on DNS instead of Service Connect, frontend or client applications begin using new endpoints and changed endpoint configuration only after the local DNS cache expires, which commonly takes multiple hours.

Networking

In the default configuration, the Service Connect proxy listens on the containerPort from the port mapping in the task definition. You need rules in your security group to allow ingress to this port from the VPC CIDRs, or specifically from subnets where clients will run.

Even if you set a port number in the Service Connect service configuration, this doesn't change the port for the client-server service that the Service Connect proxy listens on. When you set this port number, Amazon ECS changes the port of the endpoint that the client services connect to, on the Service Connect proxy inside those tasks. The proxy in the client service connects to the proxy in the client-server service using the containerPort.

If you want to change the port that the Service Connect proxy listens on, change the ingressPortOverride in the Service Connect configuration of the client-server service. If you change this port number, you must allow inbound traffic on this port in the Amazon VPC security group that is used by traffic to this service.
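As a sketch (the security group ID and CIDR range below are assumptions for illustration), the ingress rule needed for an ingressPortOverride of 99 could be expressed as the following boto3-style permission:

```python
# Ingress permission matching an ingressPortOverride of 99. The CIDR below
# is a placeholder for your VPC or client subnet range.
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 99,   # must equal the ingressPortOverride value
    "ToPort": 99,
    "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "VPC CIDR"}],
}
# Applied with: ec2_client.authorize_security_group_ingress(
#     GroupId="sg-0123example", IpPermissions=[ingress_rule])
assert ingress_rule["FromPort"] == ingress_rule["ToPort"] == 99
```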

Traffic that your applications send to Amazon ECS services configured for Service Connect requires that the Amazon VPC and subnets have route table rules and network ACL rules that allow the containerPort and ingressPortOverride port numbers that you are using.

You can send traffic between VPCs with Service Connect. You must consider the same requirements for the route table rules, network ACLs, and security groups as they apply to both VPCs.

For example, two clusters create tasks in different VPCs. A service in each cluster is configured to use the same namespace. The applications in these two services can resolve every endpoint in the namespace without any VPC DNS configuration. However, the proxies can't connect unless the VPC peering, VPC or subnet route tables, and VPC network ACLs allow the traffic on the containerPort and ingressPortOverride port numbers that you are using.

Service Connect proxy

If you create or update an Amazon ECS service with a Service Connect configuration, Amazon ECS adds a new container to each new task as it is started. This pattern of using a separate container is called a sidecar. This container isn't present in the task definition, and you can't configure it; Amazon ECS manages its configuration in the Amazon ECS service. Because of this, you can reuse the same task definition across multiple Amazon ECS services and namespaces, and you can also run tasks without Service Connect.

Proxy resources
  • The task CPU and memory limits are the only parameters that you need to configure for this container in the task definition. The only parameter you need to configure for this container in the Amazon ECS service is the log configuration, which you'll find in the Service Connect configuration. For more information about Service Connect configuration, see Service Connect service configuration.

  • To use Service Connect, the task definition must set the task memory limit. The task-level CPU and memory that you don't allocate to containers through container limits are shared by the Service Connect proxy container and any other containers that don't set container limits.

  • We recommend adding 256 CPU units and at least 64 MiB of memory to your task CPU and memory for the Service Connect proxy container. On Amazon Fargate, the lowest amount of task memory that you can set is 512 MiB. On Amazon EC2, task memory is optional, but it is required for Service Connect.

  • If you expect tasks in this service to receive more than 500 requests per second at their peak load, we recommend adding 512 CPU units to your task CPU in this task definition for the Service Connect proxy container.

  • If you expect to create more than 100 Service Connect services in the namespace or 2000 tasks in total across all Amazon ECS services within the namespace, we recommend adding 128 MiB of memory to your task memory for the Service Connect proxy container. You should do this in every task definition that is used by all of the Amazon ECS services in the namespace.
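The sizing guidance above can be summarized as a rule of thumb. This is a sketch of the recommendations in this list, not an official formula:

```python
def proxy_overhead(peak_rps: int, services_in_namespace: int,
                   tasks_in_namespace: int) -> tuple:
    """Return (CPU units, MiB) to add to the task size for the Service
    Connect proxy, following the recommendations above."""
    cpu = 512 if peak_rps > 500 else 256   # CPU units for the proxy
    memory = 64                            # baseline MiB for the proxy
    # Large namespaces increase the proxy's endpoint table size.
    if services_in_namespace > 100 or tasks_in_namespace > 2000:
        memory += 128
    return cpu, memory

assert proxy_overhead(100, 10, 50) == (256, 64)
assert proxy_overhead(600, 150, 50) == (512, 192)
```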

Proxy configuration

Your applications connect to the proxy in the sidecar container that runs in the same task. Amazon ECS configures the task and containers so that applications connect through the proxy only when they connect to the endpoint names in the same namespace. All other traffic doesn't use the proxy, including traffic to IP addresses in the same VPC, Amazon service endpoints, and external traffic.

Load balancing

Service Connect configures the proxy to use the round-robin strategy for load balancing between the tasks behind a Service Connect endpoint. The local proxy in the task where the connection originates picks one of the tasks in the client-server service that provides the endpoint.

For example, consider a task that runs WordPress in an Amazon ECS service that is configured as a client service in a namespace called local. There is another service with 2 tasks that run the MySQL database. This service is configured to provide an endpoint called mysql through Service Connect in the same namespace. In the WordPress task, the WordPress application connects to the database using the endpoint name. Because of the Service Connect configuration, connections to this name go to the proxy that runs in a sidecar container in the same task. Then, the proxy can connect to either of the MySQL tasks using the round-robin strategy.

Load balancing strategies: round-robin
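A minimal sketch of the round-robin behavior in the MySQL example above: the proxy in the client task rotates across the tasks backing the endpoint (the IP addresses are made up for illustration).

```python
from itertools import cycle

# Two hypothetical MySQL task IPs behind the "mysql" endpoint.
mysql_tasks = cycle(["10.0.1.11", "10.0.2.12"])

# Successive connections alternate between the two tasks.
picks = [next(mysql_tasks) for _ in range(4)]
assert picks == ["10.0.1.11", "10.0.2.12", "10.0.1.11", "10.0.2.12"]
```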

Outlier detection

This feature uses data that the proxy has about prior failed connections to avoid sending new connections to the hosts that had the failed connections. Service Connect configures the outlier detection feature of the proxy to provide passive health checks.

For example, consider a task that runs WordPress in an Amazon ECS service that is configured as a client service in a namespace called local. There is another service with 2 tasks that run the MySQL database. This service is configured to provide an endpoint called mysql through Service Connect in the same namespace. In the WordPress task, the WordPress application connects to the proxy that runs in a sidecar container in the same task. The proxy can connect to either of the MySQL tasks. If the proxy made multiple connections to a specific MySQL task, and 5 or more of the connections failed in the last 30 seconds, then the proxy avoids that MySQL task for 30 to 300 seconds.
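The passive health check described above can be sketched as follows. The 5-failure and 30-second figures come from the text; the class itself is illustrative, not the proxy's actual implementation:

```python
import time

class OutlierDetector:
    """Sketch of passive health checks: a host with 5 or more failed
    connections in the last 30 seconds is avoided."""

    def __init__(self, window: float = 30.0, threshold: int = 5):
        self.window, self.threshold = window, threshold
        self.failures = {}  # host -> list of failure timestamps

    def record_failure(self, host: str, now: float = None) -> None:
        now = time.monotonic() if now is None else now
        self.failures.setdefault(host, []).append(now)

    def is_ejected(self, host: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # Keep only failures inside the sliding window.
        recent = [t for t in self.failures.get(host, []) if now - t <= self.window]
        self.failures[host] = recent
        return len(recent) >= self.threshold

d = OutlierDetector()
for t in range(5):                         # 5 failures within 30 seconds
    d.record_failure("10.0.1.11", now=float(t))
assert d.is_ejected("10.0.1.11", now=5.0)  # this host is avoided
assert not d.is_ejected("10.0.2.12", now=5.0)
```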

Retries

Service Connect configures the proxy to retry connections that pass through the proxy and fail; the second attempt avoids the host used by the previous connection. This ensures that a connection through Service Connect doesn't fail for one-off reasons.

Number of retries: 2
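The retry behavior can be sketched as follows. The function and host list are illustrative; only the avoid-the-previous-host rule comes from the text:

```python
import random

def connect_with_retry(hosts, attempt_connect, retries=2):
    """Sketch: on failure, the next attempt avoids the host used by the
    previous connection."""
    last_host = None
    for _ in range(retries):
        candidates = [h for h in hosts if h != last_host] or hosts
        host = random.choice(candidates)
        if attempt_connect(host):
            return host
        last_host = host
    return None

# One host always fails, so a retry must land on the healthy host.
healthy = "10.0.2.12"
result = connect_with_retry(["10.0.1.11", healthy], lambda h: h == healthy)
assert result == healthy
```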

Timeout

Service Connect configures the proxy to wait a maximum time for your client-server applications to respond. The default timeout value is 15 seconds, but it can be updated.

Optional parameters:

idleTimeout ‐ The amount of time in seconds a connection will stay active while idle. A value of 0 disables idleTimeout.

The idleTimeout default for HTTP/HTTP2/GRPC is 5 minutes.

The idleTimeout default for TCP is 1 hour.

perRequestTimeout ‐ The amount of time to wait for the upstream to respond with a complete response per request. A value of 0 disables perRequestTimeout. This can only be set when the appProtocol for the application container is HTTP/HTTP2/GRPC. The default is 15 seconds.

Note

If idleTimeout is set to a time that is less than perRequestTimeout, the connection will close when the idleTimeout is reached and not the perRequestTimeout.
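The timeout parameters above map to a per-service timeout object in the Service Connect configuration. The following is a sketch; the field names follow the ECS API's TimeoutConfiguration, and the values are examples, not requirements:

```python
# A Service Connect service entry with timeout overrides.
service = {
    "portName": "http",
    "timeout": {
        "idleTimeoutSeconds": 300,       # 5 minutes, the HTTP/HTTP2/GRPC default
        "perRequestTimeoutSeconds": 15,  # the default request timeout
    },
}
# Keeping idleTimeout >= perRequestTimeout avoids the connection closing
# at idleTimeout before perRequestTimeout can take effect (see the note above).
assert service["timeout"]["idleTimeoutSeconds"] >= service["timeout"]["perRequestTimeoutSeconds"]
```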