Dockerizing Django
PDX Portland
October 27th, 2016
Who are we?
Michael Dougherty
@maackle
Senior Front-end Engineer
CrowdStreet, Inc.
Hannes Hapke
@hanneshapke
Software Engineer
Talentpair, Inc.
Our pre-Docker World
• Single instance world (e.g. celery ran on the web server)
• Outdated Amazon machine image
• No documentation about the setup, consultancy work
• Live data monkey patching
• Scaling/Recovery time > 8 hours
• Clunky QA setup → bottleneck
Our post-Docker World
• One service per container, redundancy of instances
• One common base image shared across all instances
• Explicit, declarative server setup
• Immutable infrastructure (mostly)
• Scaling/Recovery time ~ 20 min
• As many QA instances as we want
What is Docker?
Docker → Compose → Machine → Swarm
Docker containers…
"…wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries - anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment." *
Basically, a virtualenv for your operating system.
* from https://www.docker.com/what-docker
Docker vs. Vagrant
Where is the difference?
Images from https://www.docker.com/what-docker
VM: includes a guest OS | Container: no guest OS needed
How does Docker work?
• Create a Dockerfile
• Build the Docker image and push it to the Docker registry
Plain Docker
FROM ubuntu:16.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y python python-pip   # pip is not in the base image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt        # Django and friends come from requirements.txt
$ docker build -t your_project/your-whale .
$ docker images
$ docker tag {image_hash} your_project/your-whale:latest
$ docker push your_project/your-whale:latest
Plain Docker
• Run a shell in a Docker container
  -i keeps stdin open (interactive container)
  -t allocates a pseudo-TTY for stdin and stdout
• Run the Django server in a container
  -d runs the container in detached mode
  -P publishes all exposed ports to the host machine
$ docker run -i -t ubuntu /bin/bash
$ docker run -d -P my-container python manage.py runserver
What if we need
multiple services?
Docker → Compose → Machine → Swarm
Docker Compose
Compose is a tool for orchestrating the building,
running, and intercommunication of
multi-container Docker applications.
How does it work?
1) Define a Dockerfile for every service
2) Define a Docker Compose description of the environment
3) Use docker-compose build/up to start all services
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: redis
How can I easily
provision a server with
the containers?
Docker → Compose → Machine → Swarm
Docker Machine
…is a great tool which creates Docker hosts anywhere.
Yes, anywhere.
Locally, AWS EC2, DigitalOcean, MS Azure, you name it.
No Ansible, Puppet, Chef, Fabric, etc. required.
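For example, a single command creates a local Docker host (the VirtualBox driver and the machine name "dev" are just illustrative choices):
$ docker-machine create --driver virtualbox dev
$ docker-machine ls                     # list all machines and their state
$ eval "$(docker-machine env dev)"      # point your local docker CLI at the new host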
What if I need multiple
instances with multiple
services?
Docker → Compose → Machine → Swarm
Docker Swarm
Dockerize for real
If you start from scratch…
• Docker documentation includes a great Django setup
• Too much work? The Django Cookie Cutter template includes a great Docker setup
Other projects:
• django-docker on GitHub
If you convert an existing project like we did…
Reorganize your folder
structure
Normalize folders
• Create folders for every service
• docker-compose-{env}.yml files go into the project root
• Dockerfiles go into every service folder
• startup.sh scripts go into the service folders (a minimal sketch follows below)
• Keep your local folder structure similar to the folder structure within the container(s) - for sanity
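What such a startup script contains depends on the service; here is a minimal, hypothetical sketch for the django container (the paths and settings module follow the compose examples later in the deck - this is not the talk's actual script):
#!/bin/bash
set -e
# Apply migrations and collect static assets, then hand off to the app server.
python /crowdstreet-src/manage.py migrate --noinput
python /crowdstreet-src/manage.py collectstatic --noinput
exec python /crowdstreet-src/manage.py runserver 0.0.0.0:8000 --settings=settings.dev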
Reorganized folders
Before:
Project Root
|-apps
|-settings
|-static
|-templates
|-manage.py
|-fabfile.py
|-urls.py
|-requirements.txt

After:
Project Root
|-django
| |-apps
| |-…
| |-Dockerfile
| |-startup-django.sh
| |-manage.py
|-nginx
|-webpack
|-docker-compose.yml
|-urls.py
|-requirements.txt
Build a base image
Base Image
• Create one (or more) base Dockerfile(s) with all common packages
• Service containers can use this base image - this will increase build speed
• If you store the base image(s) in a separate git repo, the Docker registry will build them automatically for you
Base Image
FROM ubuntu:16.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y vim # Install some useful editor
RUN apt-get install -y build-essential git software-properties-common
RUN apt-get install -y python python-dev \
    python-setuptools build-essential
RUN apt-get install -y nodejs npm
RUN npm install -g n # upgrading the npm version
RUN n stable
...
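A service image can then start FROM that base image so only the service-specific layers get rebuilt. A hypothetical sketch (the image name and paths are made up, and it assumes pip was added to the base image):
FROM your_project/base-image:latest
# Only service-specific layers below; everything common is inherited from the base.
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
COPY . /app
WORKDIR /app
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]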
Base Image
(screenshot: the base image's automated build in the Docker registry)
Set up Docker compose
for the different
environments
Docker Compose
• For every environment (local, QA, staging, production), define a docker-compose-{env}.yml file
• The files describe the environment stack
• Each service within the docker-compose file can have its own Dockerfile
Docker Compose
version: '2'
volumes:
  postgres_data_dev: {}
  redisdata: {}
  webpack_data: {}
services:
  postgres:
    image: postgres:9.5
    volumes:
      - postgres_data_dev:/var/lib/postgresql/data
    restart: always
    environment:
      - POSTGRES_USER=postgres_user
      - POSTGRES_DB=my_fancy_db
      - POSTGRES_PASSWORD=
  webpack:
    image: crowdstreet/crowdstreet-whale:latest
    command: npm run watch
    environment:
      - NODE_PATH=/node_modules
    volumes:
      - ./webpack/frontend-src:/frontend-src
      - ./django:/crowdstreet-src
      - webpack_data:/webpack_data/
    ports:
      - "3000:3000"
    restart: always
Docker Compose (continued)
  # continuation of the services: section from the previous slide
  django:
    build:
      context: .
      dockerfile: ./django/Dockerfile-dev
    command: python /crowdstreet-src/manage.py runserver 0.0.0.0:8000 --settings=settings.dev
    depends_on:
      - postgres
    environment:
      - ENV=dev
      - DJANGO_SETTINGS_MODULE=settings
    volumes:
      - ./django:/crowdstreet-src
      - ./webpack/frontend-src:/frontend-src
      - webpack_data:/webpack_data/
    ports:
      - "8000:8000"
      - "80:8000"
    links:
      - postgres
      - redis
      - webpack
      - memcached
  redis:
    image: redis:latest
    volumes:
      - redisdata:/data
    restart: always
Docker Compose
• Build your service stack with
  $ docker-compose -f docker-compose-{env}.yml build
• Start the container stack with
  $ docker-compose -f docker-compose-{env}.yml up
• Access a single container with
  $ docker-compose -f docker-compose-{env}.yml run django bash
• Or, more generally:
  $ docker-compose -f docker-compose-{env}.yml \
      run {container_name} {command}
Set up Docker machine
and deploy to the world
Docker machine is
awesome!
Docker Machine
• With the command below, docker-machine provisions an AWS instance for you
• Activate the instance by eval'ing the output of docker-machine env
• Afterwards, any docker-compose command will be executed on the active machine
• Easy to start/stop/terminate machines
$ docker-machine create --driver amazonec2 \
    --amazonec2-region [e.g. us-west-2] \
    --amazonec2-vpc-id [YOUR_VPC_ID vpc-xxxxxx] \
    --amazonec2-instance-type [e.g. t2.small] \
    [INSTANCE_NAME]
$ eval "$(docker-machine env [INSTANCE_NAME])"
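Starting, stopping, and terminating are equally short (the instance name is a placeholder):
$ docker-machine ls                     # list machines and see which one is active
$ docker-machine stop [INSTANCE_NAME]   # stop the underlying instance
$ docker-machine start [INSTANCE_NAME]  # bring it back up
$ docker-machine rm [INSTANCE_NAME]     # terminate and remove it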
Lessons Learned
Or... how to cowboy code with Docker
• Sometimes you just need to manually change something
• Docker provides ways to get a shell inside a running instance and copy files back and forth
• Your changes will of course be lost next time you spin up a new container
The Disciplined Way:
$ docker-compose run django bash
The Cowboy Way:
$ docker exec -it {container_id} bash
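Copying files in and out of a running container works with docker cp; the container path below simply reuses the /crowdstreet-src mount from the earlier compose file, purely for illustration:
$ docker cp {container_id}:/crowdstreet-src/settings/dev.py ./dev.py   # copy a file out
$ docker cp ./dev.py {container_id}:/crowdstreet-src/settings/dev.py   # ...and push it back in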
How does QA work with Docker?
• No QA bottleneck anymore
• No database gridlock anymore
• Each feature branch gets its own instance
• Once a feature is tested, its instance gets terminated
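In practice this can be one docker-machine host per branch; a hypothetical sequence (the instance name and QA compose file name are assumptions, and the amazonec2 flags from earlier are elided):
$ docker-machine create --driver amazonec2 ... qa-feature-123
$ eval "$(docker-machine env qa-feature-123)"
$ docker-compose -f docker-compose-qa.yml build
$ docker-compose -f docker-compose-qa.yml up -d
# ...QA signs off, then tear the instance down:
$ docker-machine rm qa-feature-123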
How can I access the manage.py shell/migrate?
• Access the bash of the django container with
  $ docker-compose -f docker-compose-{env}.yml run django bash
• Continue as usual with
  # ./manage.py shell
  Same for migrate, makemigrations, etc.
• Or run it from outside of the container stack with
  $ docker-compose -f docker-compose-{env}.yml run django python manage.py migrate
Help, ipdb doesn't work anymore…
• Start the Django container with the service ports enabled
• If no command is specified, Docker will default to the command in the docker-compose.yml file
$ docker-compose -f dev.yml run --service-ports django
How to run tests?
• Start the Django container with your test command
$ docker-compose -f docker-compose-{env}.yml run django python manage.py test
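The usual manage.py test arguments pass straight through; for instance, scoping the run to a single app (the app label here is purely illustrative):
$ docker-compose -f docker-compose-{env}.yml run django python manage.py test my_app --verbosity=2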
CI Testing is convenient
• Setup for Circle CI
machine:
  pre:
    - curl -sSL https://s3.amazonaws.com/circle-downloads/install-circleci-docker.sh | bash -s -- 1.10.0
  services:
    - docker
dependencies:
  override:
    - sudo pip install docker-compose
    - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
    - docker-compose -f docker-compose-circle.yml build
    - npm install -g jshint
test:
  pre:
    - sudo killall postgres # not sure why, but port 5432 is already taken up sometimes!
    - docker-compose -f docker-compose-circle.yml up -d postgres
  override:
    - jshint ~/your_project/django/static/js/your_project*
    - docker-compose -f docker-compose-circle.yml run django /your_project/manage.py test --verbosity=2
WTF, the files I copied into my container are missing??
• If a volume is mounted at the same directory where you copied other files, the mount hides (effectively overwrites) those files
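For example, an image layer copied to /crowdstreet-src is shadowed at runtime by the bind mount from the dev compose file shown earlier (paths reused purely for illustration):
# In the Dockerfile the code is baked into the image:
#   COPY ./django /crowdstreet-src
# At runtime the volume mounts over the same path and hides those copied files:
services:
  django:
    volumes:
      - ./django:/crowdstreet-src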
Sharing Docker Machine credentials
• Docker Machine is great, but there is no concept of sharing credentials
• All credentials are simple text files, no magic
• The npm tool `machine-share` solved the problem
• It lets you export and import machine credentials
General Troubleshooting
• Confirm that the correct docker-machine environment is active
• Rebuild your container stack
• Rebuild with the --pull and/or --no-cache options
• Restart the Docker daemon
• Restart your docker machine with docker-machine restart [INSTANCE_NAME]
• Restart your docker machine VirtualBox VM
• Remove and recreate your docker machine (essentially recreates your dev environment from scratch)
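Roughly, the corresponding commands for the first few steps (the compose file name and instance name are placeholders):
$ docker-machine active                                             # which machine is the CLI pointing at?
$ docker-compose -f docker-compose-{env}.yml build --pull --no-cache
$ docker-machine restart [INSTANCE_NAME]
$ docker-machine rm [INSTANCE_NAME]                                 # then create it again from scratch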
So, what does our setup
look like now?
• You can use the same image as in your production builds
• All services run at once, all output piped to a single log stream (which we saw earlier)
• You can still have live reloading via Docker Volumes (but be careful!)
How does the deployment work now?
• Create an AWS instance with docker-machine
• Activate the docker machine
• Use docker-compose to build the stack
• Use docker-compose up -d
• Switch the load balancer
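Condensed, the first four steps look roughly like this (the production compose file and instance name are assumptions, and the amazonec2 flags from earlier are elided):
$ docker-machine create --driver amazonec2 ... prod-new
$ eval "$(docker-machine env prod-new)"
$ docker-compose -f docker-compose-production.yml build
$ docker-compose -f docker-compose-production.yml up -d
# then point the load balancer at the new instance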
Summary of technologies
• Learned about Docker
• How to use Docker to define images and containers
• Learned about Docker Compose to define relationships between containers
• Learned about Docker Machine to seamlessly work with containers on local/remote machines
Summary of benefits
• Explicit, declarative server setup
• Zero-downtime deployments
• All dev services in one "window", started with one command
• Easy provisioning of multiple QA instances
• Quick onboarding for new devs
Thank you!
Q&A