What is Docker?
Docker is a platform that allows one to package an application with all of its dependencies and surrounding environment in an isolated container. This is useful because the next time you want to deploy your application somewhere else, for development or production, you can simply pull down that container rather than setting up your environment again. Gone are the days of repeatedly downloading and installing dependencies!
What are Docker containers?
Containers are composed of an application and anything it may need to run. You can run the application within the container, as well as share the container’s image so that the exact same environment can be replicated somewhere else.
DockerHub is a repository for storing images. Once you have an account on DockerHub, you can upload your own images and download them later on different machines. You can also access popular images the community has uploaded.
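For example, a community image can be pulled down explicitly with the docker pull command (python:2.7-slim is used here only as an illustration; it is also the base image this tutorial uses later):

$ docker pull python:2.7-slim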
Creating a Docker Image of a simple Python app
Test Docker Installation
Let’s test that Docker is installed properly by running the sample command:
$ docker run hello-world
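If Docker is installed correctly, the command should print a greeting similar to the following (Docker will pull the hello-world image automatically if it is not already present):

$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.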
The Dockerfile defines the environment for your application. Inside a container, network access and disk access are virtualized. With the Dockerfile, we can specify which files we want to include in our environment, what ports we want exposed to the outside world, what commands to run to set up our dependencies, and more. The commands in a Dockerfile are run sequentially.
Let’s create a directory to work in. Inside it, create a file named Dockerfile. Here is the Dockerfile we are going to use:
# Use an official Python runtime as a parent image
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Run app.py when the container launches
CMD ["python", "app.py"]
Important: Take the time to read over the comments and understand what each line is doing. The commands given to RUN instructions are executed when the image is built. The command given to the CMD instruction is executed when docker run is invoked from the command line. This is how you launch your Dockerized app.
Notice the reference to “parent image” in the first line: this is just the original, “base” Docker image that your image will be built on. The python:2.7-slim image contains the official Python 2.7 runtime, which our image now has access to. A Dockerfile must start with a FROM instruction.
Let’s examine the RUN instruction in our Dockerfile.
You can have multiple RUN instructions in a Dockerfile, and their commands will be sequentially executed when the image is being built. Here is where we place the commands to install whatever dependencies we need. Our command will use pip (inherited from the python:2.7-slim base image) to install the Python modules we need. pip is a simple tool used to install and manage Python modules. The requirements.txt file tells pip which modules to install. In this case, we will be using just the Flask module, so create the file requirements.txt containing the following:
Flask
Flask is a Python web framework that will allow us to get a web app created and deployed quickly and easily.
Create app.py as follows (make sure the indentation is consistent):
from flask import Flask
import os
import socket

app = Flask(__name__)

@app.route("/")
def hello():
    return "<body>Simple Flask Application</body>"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
Containerize our Python webapp
Now we can use the docker build command to construct the image using the Dockerfile:
docker build -t docker-test .
We can locate the image we just built with the following command:
docker image ls
Let’s finally run our containerized application! We map port 4000 on our machine to the container’s port 80, where our Flask webapp is listening (recall the app.run(host='0.0.0.0', port=80) line).
$ docker run -p 4000:80 docker-test
Visit http://localhost:4000 to see your site!
Once we are finished, we can terminate our application from another terminal window using the command docker container stop along with the container ID, which we can find using the command docker container ls.
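The stop workflow looks roughly like this (the container ID below is a placeholder; use the ID that docker container ls prints for your running container):

$ docker container ls
$ docker container stop <container-id>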
Sharing our new Docker Image
A major part of what makes Docker images so useful is their portability. We will demonstrate this by uploading our recently created image and running it somewhere else. In this case, we will upload our image to DockerHub, the public Docker registry.
Log in with the docker login command, using the account created in the pre-lab.
We tag our images in the format username/repository:tag to associate a local image with a repository. Try to use meaningful names for the repository and tag, and use your DockerHub username.
The syntax for the command is docker tag [image] [username/repository:tag]
$ docker tag docker-test srivardhan/cse110:lab7
We can then upload our image to the repository with the command docker push username/repository:tag, replacing username, repository, and tag with the names you chose in the previous step.
$ docker push [username/repository:tag]
Now you can run your image from any machine with the command below, replacing the placeholders with the names you chose in the previous step.
docker run -p 4000:80 [username/repository:tag]
Docker will first check for the image locally and download it from the public repository if it isn’t present. Just remember to docker login first.
Confirm that your newly created repository and image appear on hub.docker.com.