A few ways to deploy your application

Writing the code is only half the battle; the other half is deploying it to the cloud. It doesn’t matter whether it’s a single $5 server on DigitalOcean or a massive cluster of AWS EC2 instances — either way, you’ll probably want to automate the deployment somehow.

Some time ago I asked myself the same question, so in this article I’ll clarify the differences between deployment-automation approaches using Bash, Fabric, Ansible and Docker. I’m not going to give you detailed instructions on how to use these tools — otherwise this article would turn into a book of a hundred pages — just a few examples.

Please note: you’ll see some code examples below. Treat them as pseudo-code rather than real-life examples; I haven’t tried to run this code, so it may contain mistakes. If you notice any, feel free to mail me.


Bash

I don’t like Bash. It’s ugly and strange, but at the same time I love it. Why? Because it’s already installed on every Linux system, so I don’t need to install any additional dependencies. Thus, despite its strangeness, it’s quite a powerful and convenient tool for short deployment scripts, and I often use it when a deployment process is straightforward.

What is straightforward? Well, for most of my life I’ve been either a Go or a Python developer, so the deployment process looks like this:

  1. Login to the server over SSH
  2. Update the code from git or upload a binary
  3. Update application dependencies
  4. Run database migrations
  5. Restart the application server

It’s also perfect for JS applications where we just need to upload a few static files to the server; we can do that easily with scp or rsync. Can Bash help us here? Definitely! Usually, I create a file with something like the following and call it deploy.sh:

#!/usr/bin/env bash

ssh appuser@appserver << EOF
cd ~/app
git pull -f
~/venv/bin/pip install -r requirements.txt
~/venv/bin/python manage.py migrate
sudo supervisorctl restart app_server
EOF

That’s it. Every time I want to update the application version on the server, I run ./deploy.sh in the console. Of course, in most cases deployment processes are not that simple, so we might wish for a more expressive tool to describe logic and abstractions.


Fabric

Fabric is a powerful tool that can execute SSH commands on remote servers and do a lot of additional things like transferring files, templating or editing them. Like Bash, it’s a great choice for applications deployed on one or two servers, provided you don’t mind configuring those servers manually at the beginning.

All I need to do in this case is create a single file, fabfile.py, with the following content:

from fabric.api import env, task, run, prefix

env.user = 'appuser'
env.hosts = ['appserver1']

@task
def deploy():
    with prefix('cd ~/app'):
        run('git pull -f')
        with prefix('source ~/venv/bin/activate'):
            run('python manage.py collectstatic --no-input')
            run('python manage.py migrate')

    run('sudo supervisorctl restart yourapp_gunicorn')

It doesn’t look much harder than the Bash example, but it’s much more flexible, and another significant advantage is that it’s not Bash anymore. Now we can build different abstractions — for example, the config templating I usually have in my fabfile.py:

import tempfile
from fabric.api import put, task

def put_template(source, dest, data=None):
    if data is None:
        data = {}
    with tempfile.NamedTemporaryFile() as template:
        with open(source, 'r') as fp:
            # render the template with the given data and upload the result
            template.write(fp.read().format(**data).encode())
        template.flush()
        put(template.name, dest)

@task
def update_nginx_config():
    put_template('nginx_site.conf', '/etc/nginx/sites-available/api.conf', {
        'server_name': 'api.myapp.com',
    })

However, it’s still not enough when we have more servers and more environments like development, staging and production. Every developer might even want their own dev environment to test features. Fabric does its job well, but if we want real automation that can configure an empty server, it’s going to be challenging to build such scripts with Bash or Fabric. That’s why we might need something like Ansible.


Ansible

Everything we talked about above was mostly about updating the code; there was nothing about installing system packages or creating databases and users. It was OK to do that manually while we had only one or two servers.

But sometimes we need to deploy applications on many servers simultaneously to provide high availability and the ability to survive under pressure from thousands of users. Besides, we might want to add servers to our cluster as fast as possible.

It can be quite costly to configure those servers manually: it takes too much time, and since we can’t keep everything in our heads, it will certainly lead to inconsistent configuration (e.g., in package versions) and hard-to-debug issues. If we have more than a couple of servers, we need a tool that configures them all from a single place.

Yes, we can still do such automation with Fabric. I’ve even had that experience, and it still works in a couple of my former projects. The weak point is that I wrote a lot of code. And I mean a LOT: thousands of lines that set up the whole cluster with databases, load balancers, a few application servers and so on. Many bugs can be concealed there.

Ansible is an abstraction over SSH. We don’t use raw shell commands anymore; instead, we use Ansible’s DSL to describe the desired state of the servers we need to configure: which packages need to be installed, which users should be created and which configuration files should be uploaded.

For me, it was quite difficult to understand the concepts of Ansible after using Fabric for many years, but afterward, it became my primary tool to deploy different applications.

So, what does it look like? Imagine that we have two types of servers: one database server and two application servers. First, we describe the database server’s state:

- hosts: dbserver
  remote_user: root
  become: true

  tasks:
    - name: install postgresql
      apt: name=postgresql state=present

    - name: create database
      postgresql_db:
        name: app_db
        encoding: UTF-8

    - name: create database user
      postgresql_user:
        name: app_user
        db: app_db
        password: notsosecure
        priv: ALL

And now we describe the state for application servers:

- hosts: [appserver1, appserver2]
  remote_user: root
  become: true

  handlers:
    - name: restart app
      supervisorctl:
        name: app_server
        state: restarted

  tasks:
    - name: install system packages
      apt: name={{ item }} state=present
      with_items:
        - python3
        - python3-dev
        - build-essential

    - name: create application user
      user:
        name: app_user
        shell: /bin/bash

    - name: update code from repo
      become_user: app_user
      git:
        repo: ssh://git@github.com/mycompany/app.git
        dest: ~/app
        clone: true
        update: true
        force: true
      notify: restart app

    - name: update app dependencies
      become_user: app_user
      pip:
        requirements: ~/app/requirements.txt
        virtualenv: ~/venv/
        virtualenv_python: python3

Now we can create as many application environments as we want just by changing the hosts parameter or the user/database names. And if we want to add more servers to our configuration, we just add another appserver to the hosts array and rerun the deployment script.
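The host names in those plays usually come from an inventory file. A hypothetical inventory.ini for the cluster described here (the group and host names are assumptions) might look like:

```ini
; inventory.ini: map hosts into groups that playbooks can target
[dbservers]
dbserver

[appservers]
appserver1
appserver2
```

With that in place, the whole thing runs with a single command like ansible-playbook -i inventory.ini site.yml, and adding a server really is one extra line in one file.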

Of course, I haven’t described the whole process because, again, this is not a manual, but you can see that these scripts (Ansible calls them playbooks) look simple while doing a lot under the hood. Ansible doesn’t just run commands: it checks the previous state and does precisely what is needed to reach the described state. For example, look at the “create database user” task. Before creating a user, Ansible checks whether that user already exists and whether it has the same password. The same applies to everything else: we just describe what we want in the playbook and don’t care which shell commands or SQL queries need to run.


Docker

Strictly speaking, Docker is not a way of deployment; we still need some automation, like Ansible, to set up the servers. But I consider it necessary to mention Docker here because it can simplify the deployment process when your project contains more than one application. That’s not so uncommon, actually: even in the simplest projects we often have a REST API server, a frontend server for React’s server-side rendering and a background worker to send emails or generate image previews.

Docker is like static compilation on steroids. We deploy a container which includes everything required to run our application: the operating system, system dependencies, a Python interpreter of whatever version, and so on. Containers are fully isolated from the host operating system and from other containers, so we can do whatever we want inside and not care about the rest of the environment.

That leads us to the idea that we no longer have to pay much attention to server configuration. We just need to set up the essential environment with Ansible, install Docker, and then deploy containers there in the proper way.
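As a sketch of what such a container might look like, here is a minimal, hypothetical Dockerfile for the Python application from the earlier examples (the paths and the gunicorn entry point are assumptions):

```dockerfile
# The base image pins the interpreter version for every environment
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker caches this layer
# between code-only changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The container runs the app server itself; no supervisor needed
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app.wsgi"]
```

Build it once, and the same image runs identically on a $5 DigitalOcean droplet or across a whole EC2 cluster.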