Running a Python Flask application in a Docker container

I’ve played with Docker containers but haven’t really done anything, useful or otherwise, with them. I decided to create a Docker image that includes a web-based chatbot. You can find the Git repository for this (including the finished Dockerfile) at https://github.com/cherdt/docker-nltk-chatbot

I’ve worked with this particular chatbot, which is based on the nltk.chat.eliza module, before. I turned it into a web application by wrapping it in a Flask app, and because Flask warns against running its built-in development server in production, I call the Flask app via uWSGI.
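The actual chatbot.py and chat.html are in the repository linked above. As a rough sketch of what the Flask wrapper looks like (the /chat-api route and the text parameter are taken from the test later in this post; the rest is assumed), it boils down to something like this:

from flask import Flask, request, render_template
from flask_cors import CORS
from nltk.chat.eliza import eliza_chatbot

# Templates are looked up in the working directory, where chat.html gets copied
app = Flask(__name__, template_folder=".")
CORS(app)

@app.route("/")
def chat_page():
    # Serve the HTML chat interface (assumed to live in chat.html)
    return render_template("chat.html")

@app.route("/chat-api")
def chat_api():
    # Hand the user's text to the ELIZA chatbot and return its reply
    return eliza_chatbot.respond(request.args.get("text", ""))

The uWSGI command later in the Dockerfile expects exactly this layout: a module named chatbot exposing a Flask object named app.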

I started by creating a Dockerfile:

# mkdir chatbot
# cd chatbot
# vi Dockerfile

I needed to start with a base image, so I picked CentOS 7:

FROM centos:centos7

I knew I would need several packages installed to satisfy the dependencies for Flask and uWSGI (although it took me a couple of tries to determine that gcc and python-devel are required, since pip builds uWSGI from source):

RUN /usr/bin/yum --assumeyes install epel-release gcc
RUN /usr/bin/yum --assumeyes install python python-devel python-pip

Then, to install Flask and its dependencies, uWSGI, and the Python NLTK (Natural Language Toolkit):

RUN /usr/bin/pip install Flask flask-cors nltk requests uwsgi

The Python file for the Flask app and the HTML template file needed to be copied into the image:

COPY chatbot.py ./
COPY chat.html ./

Finally, the command that will run uWSGI and point it at the Flask app (the --mount option maps the web root, /, to the app object in the chatbot module):

CMD ["/usr/bin/uwsgi", "--http", ":9500", "--manage-script-name", "--mount", "/=chatbot:app"]

Note that the exec (JSON array) form of CMD does no word splitting: each argument needs to be a separate array element.

Now to build the image:

# docker build --tag chatbot .
...[some other output]...
Successfully built ab3a32938f0e
Successfully tagged chatbot:latest

# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
chatbot             latest              ab3a32938f0e        5 seconds ago        486MB

To run a container based on this image, I wanted to do several things:

  • Run the process in the background (detached mode), using -d
  • Restart the Docker container on errors, using --restart on-failure
  • Map port 80 on the host to 9500 (the listening port of uWSGI in the container) using -p 80:9500

# docker run -d --restart on-failure -p 80:9500 chatbot
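As an aside, the same run command can be scripted from Python with the Docker SDK for Python (the docker package on PyPI). This is just an alternative sketch, not what I actually used:

import docker

client = docker.from_env()
container = client.containers.run(
    "chatbot",                              # the image built above
    detach=True,                            # same as -d
    restart_policy={"Name": "on-failure"},  # same as --restart on-failure
    ports={"9500/tcp": 80},                 # same as -p 80:9500
)
print(container.short_id)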

Now, to test the application:

# curl localhost/chat-api?text=Does+this+work%3F
Please consider whether you can answer your own question.

Success!!!
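The same check can also be scripted with the requests library (which happens to be one of the image's dependencies), assuming the container is published on port 80 of localhost as above:

import requests

# Ask the chatbot a question and print its reply
response = requests.get("http://localhost/chat-api", params={"text": "Does this work?"})
print(response.text)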

Deploying to Production (or at least somewhere else)

Now that I had a working container, I wanted to deploy it somewhere other than my development environment. I logged into the Docker Hub website and created a repository at cherdt/nltk-chatbot.

To store the image, first I needed to log in:

# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: cherdt
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Then I needed to tag my image with the repository name:

# docker tag ab3a32938f0e cherdt/nltk-chatbot

Then I was able to push the image to the repository:

# docker push cherdt/nltk-chatbot

I don’t have a production server where I want to run this, but as a proof-of-concept for myself I wanted to deploy it somewhere. I created another CentOS 7 virtual machine and installed Docker there (see Get Docker CE for CentOS).

I pulled the image onto the new host. This did not require logging in, since it is a public repository:

# docker pull cherdt/nltk-chatbot

I ran a container the same way I had before, but updated the image name to match the repository:

# docker run -d --restart on-failure -p 80:9500 cherdt/nltk-chatbot

And the test?

# curl localhost/chat-api?text=Does+this+work%3F
Please consider whether you can answer your own question.

Considerations

The deployment host needed to have Docker installed and the Docker daemon running, but none of the other dependencies needed to be installed there: gcc, python-devel, pip, Flask, uWSGI, etc. are all self-contained in the Docker image.

On the other hand, the Docker image is just shy of 500 MB:

# docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
cherdt/nltk-chatbot   latest              ab3a32938f0e        28 minutes ago      486MB

That’s pretty heavy considering the chatbot.py file is 369 bytes! For a trivial proof-of-concept application that seems like a lot of overhead, but if this were a critical production application, or even something I planned to deploy several times, the time saved setting up and configuring new hosts would be worth it. It also means that the application’s behavior in my development environment should match its behavior in production.
