
Deployment to Amazon Beanstalk

pymacaron-aws allows you to deploy your PyMacaron docker image to an Amazon Elastic Beanstalk environment using single-container instances and a best-practice auto-scaling setup.

Deployment pipeline

When deploying to Beanstalk, ‘pymdeploy’ goes through the following deployment steps:

  1. Start an Elastic Beanstalk environment running single-container docker instances, and load the pymacaron docker image in it.

  2. Run the acceptance tests again, this time against the Beanstalk environment. Stop if tests fail.

  3. Swap the new Beanstalk environment with the current live one with a blue/green deployment. Your app is now live!
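The blue/green flow above can be sketched as follows. This is a minimal illustration with the AWS operations passed in as callables, not pymdeploy’s actual implementation:

```python
# Sketch of the blue/green deploy flow described above. The three callables
# stand in for the real Beanstalk operations that pymdeploy performs.

def blue_green_deploy(create_environment, run_acceptance_tests, swap_cnames):
    """Create a new environment, test it, then swap it into live position."""
    new_env = create_environment()          # step 1: start single-container env
    if not run_acceptance_tests(new_env):   # step 2: re-run acceptance tests
        raise RuntimeError("Acceptance tests failed; aborting deploy")
    swap_cnames(new_env)                    # step 3: blue/green CNAME swap
    return new_env
```

Because the old environment keeps running after the swap, a failed rollout can be reverted by swapping the CNAMEs back (see the warning at the end of this page).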


You will need an Amazon AWS account, the ‘aws’ and ‘eb’ command-line tools installed, and docker registry credentials from having run ‘docker login’.

Here are the steps to follow before you can deploy a pymacaron microservice to AWS.

Create a S3 bucket for docker config

Beanstalk’s way of receiving the docker configuration for an image relies on an S3 bucket to pass the configuration.

In the Amazon AWS console, create an S3 bucket with a name of your choice, ‘DOCKER_CFG_BUCKET’. In this bucket, create an empty directory called ‘docker’.
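If you prefer the command line over the console, the bucket can also be created with the AWS CLI (the bucket name is the same placeholder as above; note that S3 has no true directories, so the ‘docker/’ prefix will simply appear once the first object is uploaded under it):

```shell
aws s3 mb s3://<DOCKER_CFG_BUCKET>
```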

Docker registry credentials in S3

We assume that you have executed ‘docker login’ as part of setting up docker packaging for PyMacaron, and therefore have created the file ‘~/.docker/config.json’.

Find the ‘<auth-token>’ and ‘<email>’ values in ‘~/.docker/config.json’, and upload to S3 ‘/docker/dockercfg’ the following file:

  {
    "<registry-url>": {
      "auth": "<auth-token>",
      "email": "<email>"
    }
  }

With that, Amazon will be able to fetch your microservice image from the docker registry.
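Extracting those fields by hand is error-prone, so here is a small convenience sketch that builds the dockercfg content from docker’s config file. The field names follow docker’s ‘config.json’ format, where credentials live per registry under the ‘auths’ key; note that recent docker versions may omit the ‘email’ field:

```python
import json

def make_dockercfg(docker_config_path):
    """Build the dockercfg mapping Beanstalk expects from docker's config.json."""
    with open(docker_config_path) as f:
        config = json.load(f)
    # docker stores credentials per registry under the 'auths' key;
    # the dockercfg format wants the registry URL as the top-level key
    return {
        registry: {"auth": entry.get("auth", ""), "email": entry.get("email", "")}
        for registry, entry in config.get("auths", {}).items()
    }
```

Dump the result with ‘json.dumps()’ to a file named ‘dockercfg’ and upload it to the S3 bucket created above.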

Configure AWS credentials

In the Amazon AWS console, set up an IAM user with a name of your choice, ‘IAM_USER_NAME’, with a custom policy of the following shape attached:

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": [ ... ],
        "Resource": [ ... ],
        "Effect": "Allow"
      }
    ]
  }

Still in the IAM console, attach the custom policy to ‘aws-elasticbeanstalk-ec2-role’ and ‘aws-elasticbeanstalk-service-role’.

Then, under the user’s ‘Security Credentials’ tab in the IAM console, create an access key for the user ‘IAM_USER_NAME’ and note its ‘Access key ID’ and ‘Secret access key’.

In a terminal on the host from which you will deploy the microservice, configure the aws profile of the ‘IAM_USER_NAME’:

aws configure --profile <IAM_USER_NAME>
# Enter the 'Access key ID' and 'Secret access key' for <IAM_USER_NAME>
# Choose the default region that suits you (ex: eu-west-1)

Create an Elastic Beanstalk application for your microservice

In the Amazon AWS console, create an Elastic Beanstalk application.

From the root directory of your microservice:

eb init --region eu-west-1 --profile <IAM_USER_NAME>

eb list
eb use <ENV_NAME>

Calling ‘eb use’ marks this Beanstalk environment as the current live instance of the microservice, which will be swapped with the new instance upon every deploy.

Setup in pym-config.yaml

To be able to deploy against Beanstalk, the following key-values must be present in your project’s pym-config.yaml:

deploy_target: aws-beanstalk
docker_bucket: <S3_DOCKER_CONFIG_BUCKET>    # Name of the S3 bucket containing the docker config for aws deploys
aws_user: <IAM_USER_NAME>                   # Name of the aws IAM user to deploy as
aws_zone_id: <ZONE_ID>                      # The Route53 zone ID of the zone containing the record for the live_host

See the PyMacaron configuration documentation for details on ‘pym-config.yaml’.
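Before deploying, you can sanity-check that all required keys are present. The helper below is hypothetical (not part of pymacaron) and does a naive line-based parse so it needs no YAML library:

```python
# Keys that must be present in pym-config.yaml for a Beanstalk deploy,
# as listed above.
REQUIRED_KEYS = {"deploy_target", "docker_bucket", "aws_user", "aws_zone_id"}

def missing_pym_config_keys(pym_config_text):
    """Return the required Beanstalk-deploy keys missing from the config text."""
    present = set()
    for line in pym_config_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            present.add(line.split(":", 1)[0].strip())
    return REQUIRED_KEYS - present
```

An empty return value means all four keys are set.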

Manual removal of outdated Beanstalk environments

WARNING! The deploy pipeline does not remove the old application environment after swapping it out in favor of a new one. The old environment is kept so that you can swap it back into live position should the new environment fail despite having passed all acceptance tests.

Each environment consumes resources that you pay for, so you should regularly delete old environments from your Beanstalk application manually.
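When deciding what to delete, a reasonable rule is to keep the live environment plus the most recent standby, and terminate everything else. The sketch below assumes environment descriptions shaped like the entries returned by ‘aws elasticbeanstalk describe-environments’ (‘EnvironmentName’, ‘CNAME’, ‘DateUpdated’ fields); it is an illustration, not part of pymdeploy:

```python
def stale_environments(environments, live_cname, keep_standby=1):
    """Return names of environments safe to terminate: everything except the
    live environment and the most recently updated standby environment(s)."""
    # standbys are all environments whose CNAME is not the live one
    standbys = [e for e in environments if e["CNAME"] != live_cname]
    # newest first, so the slice keeps the most recent standby(s)
    standbys.sort(key=lambda e: e["DateUpdated"], reverse=True)
    return [e["EnvironmentName"] for e in standbys[keep_standby:]]
```

The returned names can then be terminated one by one, for example with ‘eb terminate’.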