cd django-auth-react-tutorial
virtualenv --python=/usr/bin/python3.8 venv
source venv/bin/activate
psql
and let's start writing some SQL commands.

CREATE DATABASE coredb;
CREATE USER core WITH PASSWORD '12345678';
GRANT ALL PRIVILEGES ON DATABASE coredb TO core;
psycopg2 is a popular PostgreSQL database adapter for Python. Install it with pip:

pip install psycopg2
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
Replace that configuration with the PostgreSQL one:

DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'coredb',
'USER': 'core',
'PASSWORD': '12345678',
'HOST': 'localhost',
'PORT': 5432,
}
}
ENGINE: We changed the database engine to use postgresql_psycopg2 instead of sqlite3.
NAME: the name of the database we created for our project.
USER: the database user we created during the database creation.
PASSWORD: the password to the database we created.

Next, run the migrate command, which is responsible for executing the SQL commands specified in the migration files:

python manage.py migrate
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
...
Applying core_user.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying admin.0003_logentry_add_action_flag_choices... OK
Applying sessions.0001_initial... OK
python manage.py runserver
Then open http://127.0.0.1:8000 in your browser.

To keep configuration such as the database credentials out of the code, we'll use django-environ:

pip install django-environ
# CoreRoot/settings.py
import environ
# Initialise environment variables
env = environ.Env()
environ.Env.read_env()
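To make the behavior of env() concrete, here is a rough, illustrative sketch of what a lookup with a default does, written with plain os.environ (this is not django-environ's implementation, just the idea):

```python
import os

# Illustrative only: env("KEY", default=...) behaves roughly like a lookup
# in os.environ that falls back to the given default.
def env(key, default=None):
    value = os.environ.get(key, default)
    if value is None:
        raise KeyError(f"Set the {key} environment variable")
    return value

os.environ["DB_NAME"] = "coredb"
print(env("DB_NAME"))                  # read from the environment
print(env("DB_PORT_EXAMPLE", "5432"))  # falls back to the default
```

On top of this, django-environ offers typed helpers such as env.bool() and env.int() that cast the raw string for you.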
Next, create a .env file, which will contain all the environment variables that django-environ will read, and an env.example file, which will contain the same content as .env. The .env file is ignored by git; the env.example file represents a skeleton we can use to create the .env file on another machine.

# ./.env
SECRET_KEY=django-insecure-97s)x3c8w8h_qv3t3s7%)#k@dpk2edr0ed_(rq9y(rbb&_!ai%
DEBUG=0
DJANGO_ALLOWED_HOSTS="localhost 127.0.0.1 [::1]"
DB_ENGINE=django.db.backends.postgresql_psycopg2
DB_NAME=coredb
DB_USER=core
DB_PASSWORD=12345678
DB_HOST=localhost
DB_PORT=5432
CORS_ALLOWED_ORIGINS="http://localhost:3000 http://127.0.0.1:3000"
Put the same content in env.example, but make sure to delete the values:

# ./env.example
SECRET_KEY=
DEBUG=
DJANGO_ALLOWED_HOSTS=
DB_ENGINE=
DB_NAME=
DB_USER=
DB_PASSWORD=
DB_HOST=
DB_PORT=
CORS_ALLOWED_ORIGINS=
# ./CoreRoot/settings.py
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env('SECRET_KEY', default='qkl+xdr8aimpf-&x(mi7)dwt^-q77aji#j*d#02-5usa32r9!y')
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = int(env("DEBUG", default=1))
ALLOWED_HOSTS = env("DJANGO_ALLOWED_HOSTS").split(" ")
DATABASES = {
'default': {
'ENGINE': env('DB_ENGINE', default='django.db.backends.postgresql_psycopg2'),
'NAME': env('DB_NAME', default='coredb'),
'USER': env('DB_USER', default='core'),
'PASSWORD': env('DB_PASSWORD', default='12345678'),
'HOST': env('DB_HOST', default='localhost'),
'PORT': env('DB_PORT', default='5432'),
}
}
CORS_ALLOWED_ORIGINS = env("CORS_ALLOWED_ORIGINS").split(" ")
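A quick aside on why DEBUG is wrapped in int() and why the host lists are split: values read from the environment are always strings, and any non-empty string is truthy in Python. A small sketch:

```python
# Any non-empty string is truthy, so bool("0") would enable DEBUG by accident.
assert bool("0") is True
# Casting to int first gives the intended on/off switch.
assert bool(int("0")) is False

# DJANGO_ALLOWED_HOSTS is stored as one space-separated string and split
# into the list Django expects.
hosts = "localhost 127.0.0.1 [::1]".split(" ")
print(hosts)  # ['localhost', '127.0.0.1', '[::1]']
```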
Now let's add some tests for the authentication and the UserViewSet. First, create a test_runner.py file in the CoreRoot directory. It extends Django's DiscoverRunner to load our custom fixtures into the test database.

# ./CoreRoot/test_runner.py
from importlib import import_module
from django.conf import settings
from django.db import connections
from django.test.runner import DiscoverRunner
class CoreTestRunner(DiscoverRunner):
def setup_test_environment(self, **kwargs):
"""We set the TESTING setting to True. By default, it's on False."""
super().setup_test_environment(**kwargs)
settings.TESTING = True
def setup_databases(self, **kwargs):
"""We set the database"""
r = super().setup_databases(**kwargs)
self.load_fixtures()
return r
@classmethod
def load_fixtures(cls):
try:
module = import_module("core.fixtures")
getattr(module, "run_fixtures")()
except ImportError:
return
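The load_fixtures hook above relies on importlib to look up a core.fixtures module at runtime and call its run_fixtures() function. A standalone sketch of that pattern (with a hypothetical module name) shows how a missing module is silently skipped:

```python
from importlib import import_module

def load_fixtures(module_name):
    """Import module_name and call its run_fixtures() hook.

    Returns True when the fixtures ran, False when the module is missing,
    mirroring the ImportError handling in the test runner above.
    """
    try:
        module = import_module(module_name)
    except ImportError:
        return False
    getattr(module, "run_fixtures")()
    return True

# A module that doesn't exist is skipped without failing the test run.
print(load_fixtures("nonexistent_fixtures_module"))  # False
```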
And add these lines to the settings.py file:

# CoreRoot/settings.py
...
TESTING = False
TEST_RUNNER = "CoreRoot.test_runner.CoreTestRunner"
Now let's write the authentication tests.

# core/auth/tests.py
from django.urls import reverse
from rest_framework.test import APITestCase
from rest_framework import status
class AuthenticationTest(APITestCase):
base_url_login = reverse("core:auth-login-list")
base_url_refresh = reverse("core:auth-refresh-list")
data_register = {"username": "test", "password": "pass", "email": "[email protected]"}
data_login = {
"email": "[email protected]",
"password": "12345678",
}
# core/auth/tests.py
...
def test_login(self):
response = self.client.post(f"{self.base_url_login}", data=self.data_login)
self.assertEqual(response.status_code, status.HTTP_200_OK)
python manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.
----------------------------------------------------------------------
Ran 1 test in 0.287s
OK
Destroying test database for alias 'default'...
# core/auth/tests.py
...
def test_refresh(self):
# Login
response = self.client.post(f"{self.base_url_login}", data=self.data_login)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = response.json()
access_token = response_data.get('access')
refresh_token = response_data.get('refresh')
# Refreshing the token
response = self.client.post(f"{self.base_url_refresh}", data={
"refresh": refresh_token
})
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = response.json()
self.assertNotEqual(access_token, response_data.get('access'))
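The last assertion holds because each access token issued by simplejwt is a JWT whose payload carries claims such as an expiry and a unique token id, so two tokens minted separately differ. Purely for illustration (none of this is in the tutorial's code), here's how the unverified payload segment of a JWT can be decoded:

```python
import base64
import json

def jwt_payload(token):
    """Decode the middle (payload) segment of a JWT without verifying it."""
    payload_b64 = token.split(".")[1]
    # base64url strips padding; restore it before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake token just to demonstrate the decoding step.
claims = {"token_type": "access", "exp": 1700000000}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
fake_token = f"header.{body}.signature"
print(jwt_payload(fake_token))
```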
Let's add a Dockerfile at the root of the project. A Dockerfile is a text document containing all the commands one could run on the command line to create an image.

# pull official base image
FROM python:3.9-alpine
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# install python dependencies
COPY requirements.txt /app/requirements.txt
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
# copy project
COPY . .
Here we set two environment variables:

PYTHONDONTWRITEBYTECODE: prevents Python from writing .pyc files to disk
PYTHONUNBUFFERED: prevents Python from buffering stdout and stderr
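If you want to confirm the first flag is in effect inside the container, sys.dont_write_bytecode reflects PYTHONDONTWRITEBYTECODE at interpreter startup. A small check (illustrative, not part of the project):

```python
import subprocess
import sys

# Launch a fresh interpreter with the variable set and inspect the flag.
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.dont_write_bytecode)"],
    env={"PYTHONDONTWRITEBYTECODE": "1"},
    capture_output=True,
    text=True,
)
print(out.stdout.strip())  # True
```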
We then copy the requirements.txt file to our app path, upgrade pip, and install the Python packages needed to run the application.

Next, add a .dockerignore file:

env
venv
.dockerignore
Dockerfile

Docker Compose lets us define and run multi-container applications; with the docker-compose command, we can create and start all those services. The docker-compose.dev.yml file will contain the three services that make up our app: nginx, web, and db.

version: '3.7'
services:
nginx:
container_name: core_web
restart: on-failure
image: nginx:stable
volumes:
- ./nginx/nginx.dev.conf:/etc/nginx/conf.d/default.conf
- static_volume:/app/static
ports:
- "80:80"
depends_on:
- web
web:
container_name: core_app
build: .
restart: always
env_file: .env
ports:
- "5000:5000"
command: >
sh -c " python manage.py migrate &&
gunicorn CoreRoot.wsgi:application --bind 0.0.0.0:5000"
volumes:
- .:/app
- static_volume:/app/static
depends_on:
- db
db:
container_name: core_db
image: postgres:12.0-alpine
env_file: .env
volumes:
- postgres_data:/var/lib/postgresql/data/
volumes:
static_volume:
postgres_data:
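One caveat with depends_on: it only waits for the db container to start, not for PostgreSQL to accept connections, so the migrate step can occasionally race the database. A small wait-for-port helper (hypothetical, not part of the tutorial) illustrates the usual fix:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0):
    """Return True once a TCP connection to host:port succeeds, else False."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False

# Nothing is listening on this port, so the call gives up after the timeout.
print(wait_for_port("127.0.0.1", 9, timeout=1.0))  # False
```

A check like this could run before manage.py migrate in the web service's command.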
nginx: NGINX is open-source software for web serving, reverse proxying, caching, load balancing, media streaming, and more.
web: runs and serves the endpoints of the Django application through Gunicorn.
db: as you guessed, this service is related to our PostgreSQL database.

Next, create an nginx directory and add a nginx.dev.conf file inside it:

upstream webapp {
server core_app:5000;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://webapp;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
}
location /static/ {
alias /app/static/;
}
}
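The proxy_set_header X-Forwarded-For line matters because every request now reaches Django from the proxy, not from the client. A hedged sketch of reading the original client address from a Django-style META dict (this helper is illustrative, not defined anywhere in the tutorial):

```python
def client_ip(meta):
    """Pick the client IP from a Django-style META dict.

    nginx puts the client address in X-Forwarded-For (which Django exposes
    as HTTP_X_FORWARDED_FOR); the first entry is the original client.
    """
    forwarded = meta.get("HTTP_X_FORWARDED_FOR")
    if forwarded:
        return forwarded.split(",")[0].strip()
    return meta.get("REMOTE_ADDR")

print(client_ip({"HTTP_X_FORWARDED_FOR": "203.0.113.7, 10.0.0.2"}))  # 203.0.113.7
print(client_ip({"REMOTE_ADDR": "10.0.0.2"}))  # 10.0.0.2
```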
We also need gunicorn and some configuration before building our image.

pip install gunicorn

Then add it to requirements.txt. Here's what the requirements.txt file looks like:

Django==3.2.4
djangorestframework==3.12.4
djangorestframework-simplejwt==4.7.1
django-cors-headers==3.7.0
psycopg2==2.9.1
django-environ==0.4.5
gunicorn==20.1.0
Don't forget to add STATIC_ROOT to the settings.py file, then build and start the services:

docker-compose -f docker-compose.dev.yml up -d --build
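A minimal STATIC_ROOT addition could look like the following sketch; the exact path is an assumption, chosen so collectstatic writes to /app/static, the directory nginx serves through static_volume:

```python
# CoreRoot/settings.py (sketch; the path below is an assumption)
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

STATIC_URL = '/static/'
# With WORKDIR /app in the image, this resolves to /app/static,
# matching the static_volume mount and the nginx alias.
STATIC_ROOT = BASE_DIR / 'static'
```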
Hit localhost/api/auth/login/ to see if your application is working.

Next, let's configure GitHub Actions to run some Django tests on every push or pull request targeting the main branch. At the root of the project, create a directory named .github. Inside that directory, create another directory named workflows, and create the django.yml file:

name: Django CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
strategy:
max-parallel: 4
matrix:
python-version: [3.9]
services:
postgres:
image: postgres:12
env:
POSTGRES_USER: core
POSTGRES_PASSWORD: 12345678
POSTGRES_DB: coredb
ports:
- 5432:5432
options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: psycopg2 prerequisites
run: sudo apt-get install -y python3-dev libpq-dev
- name: Install Dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Run Tests
run: |
python manage.py test
We use ubuntu-latest as the OS and specify the Python version on which this workflow will run.

Here's the deployment script that will run on the server:

#!/usr/bin/env bash
TARGET='main'
cd ~/app || exit
ACTION='\033[1;90m'
NOCOLOR='\033[0m'
# Checking if we are on the main branch
echo -e ${ACTION}Checking Git repo
BRANCH=$(git rev-parse --abbrev-ref HEAD)
if [ "$BRANCH" != ${TARGET} ]
then
exit 0
fi
# Checking if the repository is up to date.
git fetch
HEADHASH=$(git rev-parse HEAD)
UPSTREAMHASH=$(git rev-parse ${TARGET}@{upstream})
if [ "$HEADHASH" == "$UPSTREAMHASH" ]
then
echo -e "${ACTION}"Current branch is up to date with origin/${TARGET}."${NOCOLOR}"
exit 0
fi
# If that's not the case, we pull the latest changes and we build a new image
git pull origin main;
# Docker
docker-compose up -d --build
exit 0;
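The guard logic in the script boils down to two checks: are we on the target branch, and has upstream moved? Distilled into Python for clarity (a sketch, not part of the deployment):

```python
def should_deploy(branch, head_hash, upstream_hash, target="main"):
    """Deploy only from the target branch, and only when upstream moved."""
    if branch != target:
        return False
    return head_hash != upstream_hash

print(should_deploy("main", "abc123", "def456"))     # True: new commits to pull
print(should_deploy("main", "abc123", "abc123"))     # False: already up to date
print(should_deploy("feature", "abc123", "def456"))  # False: wrong branch
```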
mkdir app .scripts
cd .scripts
vim docker-deploy.sh
cd ~/app
git clone <your_repository> .

Notice the "." at the end of the clone command: with it, git clones the content of the repository directly into the current directory.

We now need the docker-compose.yml file that will run on this server, along with a new nginx.conf file. Let's start with the docker-compose.yml file:

version: '3.7'
services:
nginx:
container_name: core_web
restart: on-failure
image: jonasal/nginx-certbot:latest
env_file:
- .env.nginx
volumes:
- nginx_secrets:/etc/letsencrypt
- ./nginx/user_conf.d:/etc/nginx/user_conf.d
ports:
- "80:80"
- "443:443"
depends_on:
- web
web:
container_name: core_app
build: .
restart: always
env_file: .env
ports:
- "5000:5000"
command: >
sh -c " python manage.py migrate &&
gunicorn CoreRoot.wsgi:application --bind 0.0.0.0:5000"
volumes:
- .:/app
- static_volume:/app/static
depends_on:
- db
db:
container_name: core_db
image: postgres:12.0-alpine
env_file: .env
volumes:
- postgres_data:/var/lib/postgresql/data/
volumes:
static_volume:
postgres_data:
nginx_secrets:
Notice the changes in the nginx service: we are now using the docker-nginx-certbot image. It automatically creates and renews SSL certificates using the Let's Encrypt free CA (Certificate Authority) and its client, certbot.

Create a user_conf.d directory inside the nginx directory and add a new file, nginx.conf:

upstream webapp {
server core_app:5000;
}
server {
listen 443 default_server reuseport;
listen [::]:443 ssl default_server reuseport;
server_name dockerawsdjango.koladev.xyz;
server_tokens off;
client_max_body_size 20M;
ssl_certificate /etc/letsencrypt/live/dockerawsdjango.koladev.xyz/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/dockerawsdjango.koladev.xyz/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/dockerawsdjango.koladev.xyz/chain.pem;
ssl_dhparam /etc/letsencrypt/dhparams/dhparam.pem;
location / {
proxy_pass http://webapp;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
}
location /static/ {
alias /app/static/;
}
}
Make sure to replace dockerawsdjango.koladev.xyz with your own domain name. Let's take a closer look at the server block:

server {
listen 443 default_server reuseport;
listen [::]:443 ssl default_server reuseport;
server_name dockerawsdjango.koladev.xyz;
server_tokens off;
client_max_body_size 20M;
The server now listens on port 443 for HTTPS. We set the server_name, which is the domain name, and turn server_tokens off so the server version isn't shown on error pages.

Finally, add a deploy job to the GitHub Actions workflow:

...
deploy:
name: Deploying
needs: [test]
runs-on: ubuntu-latest
steps:
- name: Deploying Application
uses: appleboy/ssh-action@master
with:
host: ${{ secrets.SSH_AWS_SERVER_IP }}
username: ${{ secrets.SSH_SERVER_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
passphrase: ${{ secrets.SSH_PASSPHRASE }}
script: |
cd ~/.scripts
./docker-deploy.sh
Notice the needs: [test] line. It makes sure the test job is successful before deploying the new version of the app.

On the server, fill in the .env file:

cd app/
vim .env # or nano or whatever

Also add a .env.nginx file. This will contain the required configuration to create an SSL certificate:

# Required
CERTBOT_EMAIL=
# Optional (Defaults)
STAGING=1
DHPARAM_SIZE=2048
RSA_KEY_SIZE=2048
ELLIPTIC_CURVE=secp256r1
USE_ECDSA=0
RENEWAL_INTERVAL=8d
Note that STAGING is set to 1. We will test the configuration first against the Let's Encrypt staging environment! It is important not to set STAGING=0 before you are 100% sure that your configuration is correct.

Once you've confirmed the certificate is issued correctly, bring the services down with docker-compose down, edit the .env.nginx file to set STAGING=0, and rebuild:

sudo docker-compose up -d --build