
TIBCO BusinessWorks™ Container Edition Tutorial: Deploy highly-available and scalable microservices on AWS EC2 Container Service (part 1)

Last updated:
11:17pm Aug 26, 2019

About this tutorial

Content and duration

Amazon EC2 Container Service ("ECS" hereafter) is a scalable container management service that allows you to manage Docker containers on a cluster of Amazon EC2 instances. Some of the latest improvements - including auto scaling at the container level, and load balancing to multiple containers running the same image with dynamic port mapping on a given instance - make it a viable and probably simpler alternative to Kubernetes and Docker Swarm.

TIBCO BusinessWorks Container Edition provides a simple, visual, and low-code way to create microservices, especially when it comes to orchestration workloads.

This tutorial, which should take approximately 20 to 30 minutes to complete (depending on how quickly you type and how savvy you are with keyboard shortcuts), will guide you through the steps required to make your first deployment of a microservice built with TIBCO BusinessWorks Container Edition on ECS. A sequel walks you through the additional steps of defining auto scaling policies at both the container and instance level.


Before you begin: prerequisites

Some of the terms we will use are closely related to ECS. A quick read about how ECS works and a few of its basics should give you a better grasp of what you can achieve in terms of container management, and will enhance your experience.

We will use the Amazon Web Services CLI tools to perform a few operations related to the Docker repository. You will need a valid Amazon Web Services account, and have the AWS CLI set up and configured, before working through this guide. Ensure that your account has the appropriate administrative privileges. Amazon provides extensive documentation on how to set up the AWS CLI should you need it.
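As a quick sanity check before going further, you can confirm the CLI is configured; a minimal sketch, assuming the AWS CLI is installed (the region shown is the one used throughout this tutorial - substitute your own):

```shell
# Sanity-check the AWS CLI before starting; these commands only read
# your local configuration and identity, they change nothing.
REGION="ap-southeast-2"   # the region used throughout this tutorial
# aws configure get region                                  # prints your default region
# aws sts get-caller-identity --query Account --output text # prints your 12-digit account ID
echo "Tutorial region: ${REGION}"
```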

Our base microservice will be the BookStore sample shipped with BusinessWorks Container Edition. Its documentation is available online. Make sure you have reviewed its contents and tested it in your local development environment, both in TIBCO Business Studio Container Edition and in a local Docker engine. As a result of that experimentation, you should have a ready-to-use bookstore application packaged as a BWCE-based Docker image named bwce-rest-bookstore-app.

Finally, we assume you have already set up the matching PostgreSQL database. In this tutorial, the database engine will need to be accessible from Amazon Web Services, so a good option is hosting it on AWS Relational Database Service (RDS).

Step 1: Create a Docker repository on ECS

AWS ECS can pull images from external repositories, but it requires the target repository to be secure, which makes external repositories a bit impractical for tutorial use unless your organisation has already set up a full-fledged, secure one. Otherwise, creating a repository on AWS (through the EC2 Container Registry, integrated into the ECS console) is the quickest way to fulfill the requirement.

Let us go through the (very few) steps required to do so.

Log on to AWS and navigate to the EC2 Container Service dashboard:

Navigating to EC2 Container Service

Now, kick start the creation of a new repository:

Kick start the creation of a repository

We shall name the repository bwce-bookstore as illustrated below:

Configure the repository

Make a note of the URI of the repository that has been provisioned for you as we will need it when setting our ECS cluster up.

The following screen will also give you instructions on how to interact with that repository and push images to it. Note that the build step is probably not necessary, as you already have a pre-built Docker image for the BookStore application. You will, however, need to adapt the fourth step to reflect the exact name of the local image you want to tag; by default that should be bwce-rest-bookstore-app.

Instructions on how to interact with the repository

Please perform all the listed steps, bearing in mind that you need to use the values from the instructions generated for you; the screenshot above pertains to my own AWS setup.
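The console-generated steps boil down to the following; a hedged sketch in which the account ID is a placeholder (use the repository URI you noted earlier), and the login command is the AWS CLI v2 form:

```shell
# Tag and push the locally built image to the new repository (sketch).
# Substitute your own 12-digit account ID and region.
ACCOUNT_ID="123456789012"   # placeholder
REGION="ap-southeast-2"
REPO_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/bwce-bookstore"

# Authenticate Docker against the registry, then tag and push
# (requires Docker and configured AWS credentials):
#   aws ecr get-login-password --region "${REGION}" | \
#       docker login --username AWS --password-stdin "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
#   docker tag bwce-rest-bookstore-app:latest "${REPO_URI}:latest"
#   docker push "${REPO_URI}:latest"
echo "${REPO_URI}:latest"
```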

Congratulations, you have performed the first step of this tutorial!

Step 2: Create a cluster on ECS

Let us start with a cluster of three instances spanning all the availability zones we have in our region - here Asia Pacific (Sydney). We will address the elasticity of the cluster in a further tutorial, so the initial cluster configuration is just a baseline. All the sizing figures can also be adjusted manually once the cluster has been created.

Within the ECS dashboard, pick the Clusters section on the left-hand side of the window, and click on the “Create Cluster” button as follows:

Create an ECS Cluster

We will use the Community keyword in the name of every object created in this tutorial, to make sure there is no naming conflict with your current EC2 setup, as there is sometimes a requirement that items be uniquely named.

Let us hence begin with Community-Cluster, with three m3.medium instances, each with 22 GiB of storage - the minimum allowed by Amazon. We do not plan on remotely accessing the instances as part of the tutorial, so there is no need to specify a key pair to SSH in. These settings are illustrated below:

Details of the cluster creation

The next section of the form focuses on Network and Security settings. We will create a new VPC and use the default CIDR block, but will add a third Subnet so our VPC can span the three availability zones in Sydney; you may adjust this if your own region has fewer availability zones. Let us also create a new Security Group and, for now, let all traffic in on port 8080 - remember that this is the default port the BookStore application uses. We will need to adjust the Security Group settings later on, and may consider closing direct traffic from the Internet to the individual instances of the cluster, as we will dispatch traffic through an upfront load balancer.

Networking settings for the cluster

The last section of the form is about picking the right IAM role:

Cluster container instance IAM role settings

Let us now finish the creation of that cluster by clicking on the Create button at the bottom right:

Create cluster

The upcoming screen will let you know about the progress of that creation. Many AWS EC2 and VPC resources are generated as a result, so expect to wait for a minute for this to complete - or even longer if it is a busy time on AWS. A CloudFormation stack will also be generated to automate the creation and tear-down of the cluster. You may use that as a template to automate the provisioning of clusters in the future.
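If you later want to script this, note that the bare cluster object can also be created from the CLI; a sketch only, since unlike the console wizard, aws ecs create-cluster creates just the cluster itself, while the instances, VPC, and Security Groups come from the generated CloudFormation stack:

```shell
# Create the cluster object from the CLI (sketch; the console wizard does more).
CLUSTER_NAME="Community-Cluster"
#   aws ecs create-cluster --cluster-name "${CLUSTER_NAME}"
# The console wizard additionally provisions instances, a VPC, and Security
# Groups through a CloudFormation stack; list completed stacks with:
#   aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE
echo "${CLUSTER_NAME}"
```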

As designed, the VPC spans three Subnets distributed over the availability zones in our region, here ap-southeast-2a, ap-southeast-2b, and ap-southeast-2c. An auto-scaling group has already been set up for our cluster, but we will address auto scaling in a second tutorial. Make a note of your VPC identifier as we will need it in the next steps.

The outcome is illustrated below:

Cluster launch status panel

At this stage, we are done with the cluster settings: congratulations! A dashboard accessible from the ECS landing page will enable you to further control your cluster. You can get there now by clicking the View Cluster button (this time at the top of the page):

View Cluster

This dashboard is illustrated below:

Cluster summary dashboard

The most interesting tab at this stage is ECS Instances. Three instances have been created, spanning the availability zones:

ECS instances of a Cluster

We can now focus on defining the Task that will run on our cluster - in ECS lingo, this is the workload we will run as containers on the instances.

Step 3: Define a Task

Click on the Task Definitions section on the left-hand side of your ECS landing page:

Navigating to ECS Task Definitions

We will actually create a brand new Task Definition here, as illustrated below:

Creating a new Task Definition

Let us fill in the first section of the form, which sets the basics. Our Task will be named Community-Task, and the Docker Network Mode will be Bridge, as we intend to bridge the ports of our image with those of the host in order to fulfill API calls. We won’t discuss placement constraints in this tutorial, so leave that empty, as illustrated below.

Task Definition basic settings

We will also need an IAM Task Role here, so follow the pop-out link to the IAM Console and create a new IAM role as follows:

Creating a new IAM role for the Task Definition

You will then be prompted to select the role type. You will need to scroll down a fair bit to find the role we need, Amazon EC2 Container Service Task Role:

Selecting the Amazon EC2 Container Service Task Role

Move on to Step 3 and add the AmazonEC2ContainerServiceAutoscaleRole policy in preparation for the upcoming tutorial on auto scaling, as illustrated here:

Attaching the EC2ContainerServiceAutoscaleRole

Finally, let us name this role Community-TaskRole (even though you may ultimately want to choose a different name, as the Role will be reusable across Task definitions), and proceed with the creation:

Setting the role name

Now that the role is successfully created, close the pop-out tab and go back to the one pertaining to our Task Definition. There, refresh the list of roles and pick our latest creation:

Set the newly created role in the Task definition

The next section of the Task Definition form is all about the Docker image(s) to be used. In our case, we shall have a single one, as the required PostgreSQL instance can be hosted on Amazon Web Services RDS, as suggested in the introduction. Several images could however be deployed for more complex use cases - explore this on your own once you have successfully completed this tutorial.

Let us add the one container:

Adding a container to the Task definition

The first part of the Add container pop-in form is about the details of the container, which we will name Bookstore-Community-Container. Use the image details that you wrote down in Step 1 of this tutorial - the settings illustrated below pertain to my own repository. Considering the low footprint of the BookStore sample, the suggested memory limit for the container (1024 MB) is plenty, but fine-tuning resource usage is not our main focus here.

Finally, the port mappings are simple, as we expose port 8080 in the image. By leaving the host port empty, you let ECS do dynamic port binding, so that multiple containers can be deployed on the same host without port mapping conflicts. The application load balancer we will set up later on will distribute service calls to all containers from a single endpoint and automatically point to the appropriate destination ports as further containers are created.

Details of the container to add to the Task definition

The following section is about setting the environment required by our container. Let us ignore allocated CPU units here (as it is relevant mostly for multi-container tasks), and keep this container as essential to the task. Setting up entry points, commands, and a working directory is not required for the BookStore application. The Env Variables passed to Docker are much more interesting and should reflect the requirements set by the BookStore sample. We set BW.HOSTNAME so that all adapters are bound, and use the standard 8080 port. DB_URL, DB_USERNAME, and DB_PASSWORD should reflect the setup of your PostgreSQL database.

Setting environment for the container in Task definition
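The container settings above translate into a task definition fragment along the following lines; a hedged sketch in which the image URI and database values are placeholders for your own, and "hostPort": 0 requests the dynamic port binding discussed earlier:

```shell
# Sketch of the container definition produced by the form above.
# The image URI and DB_* values are placeholders; "hostPort": 0 asks
# ECS to bind a dynamic host port.
cat > /tmp/community-task.json <<'EOF'
{
  "family": "Community-Task",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "Bookstore-Community-Container",
      "image": "123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/bwce-bookstore:latest",
      "memory": 1024,
      "essential": true,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ],
      "environment": [
        { "name": "DB_URL",      "value": "<your-db-url>" },
        { "name": "DB_USERNAME", "value": "<your-db-user>" },
        { "name": "DB_PASSWORD", "value": "<your-db-password>" }
      ]
    }
  ]
}
EOF
# Register it once the placeholders are filled in:
#   aws ecs register-task-definition --cli-input-json file:///tmp/community-task.json
echo "wrote /tmp/community-task.json"
```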

We will disregard all the other sections in this tutorial, but a wealth of options is available on AWS ECS.

Let us now confirm this container addition at the very bottom:

Confirm container addition

Back in the Task Definition form, we will ignore the Volumes section (additional storage, potentially shared across containers) and go straight to the Task Definition creation:

Create Task definition

This will lead to the Task Definition summary form. Note that the Task Definition name has been suffixed with :1 to denote its first revision. From there, we will be able to proceed with the next step of our tutorial: creating a Service based on this Task definition.

Step 4: Create a Service

From the Task definition summary, select Create Service from the drop-down next to the Task definition name:

Creating the Service

The Service creation form enables you to pick the Cluster to run the Service on (so that you can create different Services running on different Clusters with different settings, reusing a Task Definition). Do choose Community-Cluster, and let us name this Service Bookstore-Community-Service. We will start with three tasks deployed (i.e. one per EC2 instance) and set the minimum healthy percentage to 66% (i.e. roughly two) so that more tasks are deployed if required:

Create Service form
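The settings above have a CLI equivalent, sketched below (run only after the Task Definition and Load Balancer exist); the arithmetic illustrates what the 66% figure roughly means for three tasks:

```shell
# CLI sketch of the Service settings above (do not run as-is; the
# task definition and load balancer must exist first):
#   aws ecs create-service \
#       --cluster Community-Cluster \
#       --service-name Bookstore-Community-Service \
#       --task-definition Community-Task:1 \
#       --desired-count 3 \
#       --deployment-configuration "minimumHealthyPercent=66,maximumPercent=200"
DESIRED=3
MIN_HEALTHY=66
# Illustration: 66% of 3 tasks, rounded up, is roughly 2 tasks that
# should stay healthy while others are replaced.
MIN_RUNNING=$(( (DESIRED * MIN_HEALTHY + 99) / 100 ))
echo "${MIN_RUNNING} tasks should stay healthy"
```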

The next section of the form is about the placement of Tasks (in our case it is a single Container) on the instances of the Cluster. The default (balancing across availability zones) is fine:

Task Placement for a Service

The last section is about load balancing and auto scaling. We will discuss auto scaling in an upcoming tutorial, so let us handle the load balancing now:

Load balancing for an ECS Service

In order to do that, you will need a pre-existing EC2 Elastic Load Balancer, so let us create one straight away by opening a new browser tab, moving to your EC2 Dashboard, and selecting the Load Balancers section on the left-hand side.

Navigating to Load Balancers

Then click on the Create Load Balancer button:

Create a new Load Balancer

We will need to use an Application Load Balancer:

Selecting an Application Load Balancer

Then set it up as follows, naming it Bookstore-Community-ELB and giving it a listener on port 8080. Choose the right VPC (as per your notes) and select all availability zones:

Configuring the Load Balancer

Ignore the security warning in the second step and proceed to configuring the Security Groups, as follows - be mindful to pick the non-default Security Group, as your Load Balancer must be in the same Security Group as your Cluster instances to work properly:

Configuring Load Balancer Security Groups

Note that you will need to change the inbound rules of your Security Group to allow internal traffic on TCP ports 32768-61000. These ports will be dynamically mapped by ECS to our containers’ port 8080, and the Load Balancer will need to place traffic on these ports within the Security Group. Consider doing this straight away in a new browser tab:

Changing Security Group inbound rules to allow Load Balancer traffic
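The same inbound rule can be added from the CLI; a sketch in which the Security Group identifier is a placeholder for your own, and the rule allows traffic from the group to itself across the dynamic port range:

```shell
# Allow the Load Balancer to reach dynamically mapped container ports (sketch).
GROUP_ID="sg-0123456789abcdef0"   # placeholder: your cluster's Security Group
PORT_FROM=32768
PORT_TO=61000
# Self-referencing rule: members of the group may reach each other
# on the dynamic range used by ECS port mapping:
#   aws ec2 authorize-security-group-ingress \
#       --group-id "${GROUP_ID}" \
#       --protocol tcp \
#       --port "${PORT_FROM}-${PORT_TO}" \
#       --source-group "${GROUP_ID}"
echo "$(( PORT_TO - PORT_FROM + 1 )) ports opened within the group"
```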

Then move to the fourth step of your Load Balancer settings and configure the routing Target Group, named Bookstore-Community-Target, as follows, using port 8080 and the /books HTTP path to perform health checks on. This path is chosen for lack of a better one, as the operation may return a large payload in the BookStore application:

Configuring Routing for the Load Balancer

Before moving to the Register Targets step, ensure you adjust the Advanced health check settings as well. BWCE Docker containers may need some time to start, and the standard settings are likely to fail (especially if several containers are started at the same time on an instance, or if you choose more modest instances). Feel free to adjust these values if you experience unhealthy and draining instances right after service creation, but the settings shown below did work for me:

Advanced health check settings
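For reference, the Target Group and relaxed health checks could also be created from the CLI; a sketch only - the VPC identifier is a placeholder, and the interval/timeout/threshold values are examples to adapt to whatever worked for you in the console:

```shell
# CLI sketch of the Target Group with relaxed health checks.
# VPC_ID is a placeholder (use the identifier from your notes); the
# health check values are examples, not the screenshot's exact settings.
VPC_ID="vpc-0123456789abcdef0"   # placeholder
#   aws elbv2 create-target-group \
#       --name Bookstore-Community-Target \
#       --protocol HTTP --port 8080 \
#       --vpc-id "${VPC_ID}" \
#       --health-check-path /books \
#       --health-check-interval-seconds 60 \
#       --health-check-timeout-seconds 30 \
#       --healthy-threshold-count 2 \
#       --unhealthy-threshold-count 5
echo "Target group sketch prepared for ${VPC_ID}"
```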

On the fifth step, Register Targets, you do not have to register any targets, as the Cluster will eventually perform the registration for you (including for instances spun up as a result of auto scaling):

Registering Load Balancer Targets

Finally, review settings and proceed with the creation of the Load Balancer:

Load Balancer setting review

Once the Load Balancer creation is confirmed, close the page:

Load Balancer Creation Status

Now let us move back to the browser tab pertaining to the Load Balancer settings for the Service. Make sure you select the Application Load Balancer type, keep the default IAM role, pick your recently created Bookstore-Community-ELB, and add the default container (Bookstore-Community-Container) to the ELB:

ELB Settings for the Service

In the section that then appears, select the only available Listener port as well as the already created Target Group (Bookstore-Community-Target). This will grey out further options:

Container to load balance

Finally, save your settings:

Saving Service ELB settings

As we will explore auto scaling in an upcoming tutorial, let us confirm the creation of the Service as it is:

Confirm Service creation

A page updating you on the creation is then displayed. A few tens of seconds may be needed for the operation to complete:

Service launch wait screen

The Service summary page then shows us that three Tasks (and in this case three Containers, as we defined the Task as having a single bwce-bookstore container) are running with Elastic Load Balancing.

Service summary page

Et voilà! Your Service is now ready to respond to requests sent to the endpoint of your Elastic Load Balancer.

Step 5: Invoke REST operations on the Service

You are now able to invoke REST operations against the Service and have the load spread across containers and instances. In an upcoming tutorial, we will explore how to add auto scaling to the mix, so as to adapt resource usage to the level of traffic on the Service.

curl -X GET -H "Content-Type: application/json" -H "Accept: application/json" "http://<your-ELB-DNS-name>:8080/books"

Here <your-ELB-DNS-name> stands for the DNS name of your Elastic Load Balancer, which you can find on its description page in the EC2 Dashboard.


Relax and enjoy. Or, continue on to part two of this tutorial.


Contributors: Emmanuel Schweitzer and Ian James.
