Docker For Local Web Development, Part 3-A Three-Tier Architecture With Frameworks
In this post

- Foreword
- A three-tier architecture?
- The backend application
  - The new backend service
  - Non-root user
  - Creating a new Laravel project
  - OPcache
- The frontend application
- Conclusion
Foreword
In all honesty, what we've covered so far is pretty standard. Articles
about LEMP stacks on Docker are legion, and while I hope to add
some value through a beginner-friendly approach and a certain
level of detail, there was hardly anything new (after all, I was
already writing about this back in 2015).
In that sense, today's article is probably where the rubber meets the
road for some of you. That is not to say the previous ones are
negligible – they constitute a necessary introduction contributing to
making this series comprehensive – but this is where the theory
meets the practical complexity of modern web applications.
If you prefer, you can also directly check out the part-3 branch,
which is the final result of today's article.
Again, this is by no means the one and only approach, just one that
has been successful for me and the companies I set it up for.
A three-tier architecture?
After setting up a LEMP stack on Docker and shrinking down the
size of the images, we are about to complement our MySQL
database with a frontend application based on Vue.js and a
backend application based on Laravel, in order to form what we call
a three-tier architecture.
Let's also get rid of the previous PHP-related files, to make room for
the new backend application. Delete the .docker/php folder, the
.docker/nginx/conf.d/php.conf file and the src/index.php
file. Your file and directory structure should now look similar to this:
```
docker-tutorial/
├── .docker/
│   ├── mysql/
│   │   └── my.cnf
│   └── nginx/
│       └── conf.d/
│           └── phpmyadmin.conf
├── src/
├── .env
├── .env.example
├── .gitignore
└── docker-compose.yml
```
Then, update docker-compose.yml as follows:

```yaml
version: '3.8'

# Services
services:

  # Nginx Service
  nginx:
    image: nginx:1.21-alpine
    ports:
      - 80:80
    volumes:
      - ./src/backend:/var/www/backend
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d
      - phpmyadmindata:/var/www/phpmyadmin
    depends_on:
      - backend
      - phpmyadmin

  # Backend Service
  backend:
    build: ./src/backend
    working_dir: /var/www/backend
    volumes:
      - ./src/backend:/var/www/backend
    depends_on:
      mysql:
        condition: service_healthy

  # MySQL Service
  mysql:
    image: mysql/mysql-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_HOST: "%"
      MYSQL_DATABASE: demo
    volumes:
      - ./.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
      - mysqldata:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u root --password=$$MYSQL_ROOT_PASSWORD
      interval: 5s
      retries: 10

  # PhpMyAdmin Service
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:5-fpm-alpine
    environment:
      PMA_HOST: mysql
    volumes:
      - phpmyadmindata:/var/www/html
    depends_on:
      mysql:
        condition: service_healthy

# Volumes
volumes:
  mysqldata:
  phpmyadmindata:
```
The main update is the removal of the PHP service in favour of the
backend service, although they are quite similar. The build key
now points to a Dockerfile located in the backend application's
directory ( src/backend ), which is also mounted as a volume on
the container.
Next, create a backend.conf file under .docker/nginx/conf.d to serve the backend application:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name backend.demo.test;
    root /var/www/backend/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass backend:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
```
You also need to update your local hosts file with the new domain
names (have a quick look here if you've forgotten how to do that), and
to update the server_name directive in .docker/nginx/conf.d/phpmyadmin.conf
to match the new domain:

```nginx
server_name phpmyadmin.demo.test;
```
Then, create a Dockerfile in the src/backend directory, containing this single line for now:

```dockerfile
FROM php:8.1-fpm-alpine
```
```
docker-tutorial/
├── .docker/
│   ├── mysql/
│   │   └── my.cnf
│   └── nginx/
│       └── conf.d/
│           ├── backend.conf
│           └── phpmyadmin.conf
├── src/
│   └── backend/
│       └── Dockerfile
├── .env
├── .env.example
├── .gitignore
└── docker-compose.yml
```
Now if you remember, in the first part of this series we used exec
to run Bash on a container, whereas this time we are using run to
execute the command we need. What's the difference? In short, exec
runs a command inside a container that is already up, while run spins
up a new container for the sole purpose of running that command (the
--rm flag making sure the container is removed once it's done).
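As a quick illustration (assuming the services defined earlier), both of the following display the container's PHP version; the first spins up a throwaway container that is removed afterwards thanks to --rm, while the second requires the backend service to already be up:

```shell
$ docker compose run --rm backend php -v
$ docker compose exec backend php -v
```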
You can list the PHP extensions the image ships with by running php -m
on the backend's container (docker compose run --rm backend php -m):

```
[PHP Modules]
Core
ctype
curl
date
dom
fileinfo
filter
ftp
hash
iconv
json
libxml
mbstring
mysqlnd
openssl
pcre
PDO
pdo_sqlite
Phar
posix
readline
Reflection
session
SimpleXML
sodium
SPL
sqlite3
standard
tokenizer
xml
xmlreader
xmlwriter
zlib

[Zend Modules]
```
Let's update the Dockerfile to install the missing extensions, as well as Composer:

```dockerfile
FROM php:8.1-fpm-alpine

# Install extensions
RUN docker-php-ext-install pdo_mysql bcmath

# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
```
Non-root user
There's one last thing we need to cover before proceeding further.
Update the backend service in docker-compose.yml so it passes the host machine's user ID as a build argument:

```yaml
  # Backend Service
  backend:
    build:
      context: ./src/backend
      args:
        HOST_UID: $HOST_UID
    working_dir: /var/www/backend
    volumes:
      - ./src/backend:/var/www/backend
    depends_on:
      mysql:
        condition: service_healthy
```
And update the Dockerfile to create a user based on that ID, then switch to it:

```dockerfile
FROM php:8.1-fpm-alpine

# Install extensions
RUN docker-php-ext-install pdo_mysql bcmath

# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer

# Create user based on provided user ID
ARG HOST_UID
RUN adduser --disabled-password --gecos "" --uid $HOST_UID demo

# Switch to that user
USER demo
```
Why do we need to do this?
This happens because the user and group IDs used by the
container to create the file don't necessarily match that of the host
machine, in which case the operation is not permitted (I don't want
to linger on this for too long, but I invite you to read this great post
to better understand what's going on).
Linux and WSL 2 users, on the other hand, are usually affected,
and as the latter is gaining traction, there are suddenly a lot
more people facing file permission issues.
There's one last thing we need to do, and that is to pass the user ID
from the host machine to the docker-compose.yml file.
First, let's find out what this value is by running the following
command in a terminal on the host machine:
$ id -u
Now open the .env file at the root of the project and add the
following line to it:
HOST_UID=501
Change the value for the one obtained with the previous command
if different. We're done!
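Alternatively, you can append the value in one go. This is just a small convenience, assuming the command is run from the project's root, where the .env file lives:

```shell
# Write the current user's ID into the project's .env file
echo "HOST_UID=$(id -u)" >> .env

# Check the result
grep HOST_UID .env
```

Note that >> appends to the file, so make sure the variable isn't already defined in there.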
With this setup, we now have the guarantee that files shared
between the host machine and the containers always belong to the
same user ID, no matter which side they were created from. This
solution also has the huge advantage of working across operating
systems.
I know I went through this quite quickly, but I don't want to drown
you in details in the middle of a post which is already quite dense.
My advice to you is to bookmark the few URLs I provided in this
section and to come back to them once you've completed this
tutorial.
That being said, it seems that having the same user ID on both
sides is enough to avoid those file permission issues anyway,
which led me to the conclusion that creating a group with the
same ID as well was unnecessary. If you think this is a mistake
though, please let me know why in the comments.
By the way, that doesn't mean that the user created in the
container doesn't belong to any group – if none is specified,
adduser will create one with the same ID as the user's by
default, and assign the user to it.
```shell
$ docker compose run --rm backend sh -c "mv -n tmp/.* ./ && mv tmp/* ./ && rm -Rf tmp"
```
This will run the content between the double quotes on the
container, sh -c basically being a trick allowing us to run more
than a single command at once (if we ran docker compose run
--rm backend mv -n tmp/.* ./ && mv tmp/* ./ && rm -Rf tmp
instead, only the first mv instruction would be executed on the
container, and the rest would be run on the local machine).
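You can observe this behaviour with any shell, no Docker required. A minimal illustration:

```shell
# Both echo commands run inside the same child shell,
# because they are passed to sh -c as a single quoted string.
# Prints "first" then "second"
sh -c 'echo first && echo second'
```

The && split happens inside the child shell because the quoted string reaches it intact, instead of being interpreted by the calling shell first.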
By default, Laravel has created a .env file for you, but let's
replace its content with this one (you will find this file under
src/backend ):
```ini
APP_NAME=demo
APP_ENV=local
APP_KEY=base64:BcvoJ6dNU/I32Hg8M8IUc4M5UhGiqPKoZQFR804cEq8=
APP_DEBUG=true
APP_URL=http://backend.demo.test

LOG_CHANNEL=single

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=demo
DB_USERNAME=root
DB_PASSWORD=root

BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=file
```
Not much to see here, apart from the database configuration (mind
the value of DB_HOST ) and some standard application settings.
You can now start the containers again:

$ docker compose up -d
There are other things you could do to try and speed things up
on macOS (e.g. Mutagen) but they feel a bit hacky and I'm
personally not a big fan of them. If PHP is your language of
choice though, make sure to check out the OPcache section
below.
We are done for this section but, if you wish to experiment further,
while the backend's container is up you can run Artisan and
Composer commands like this:
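For instance, to list the available Artisan commands or to update the project's Composer dependencies (these are just examples, any Artisan or Composer command works the same way):

```shell
$ docker compose exec backend php artisan list
$ docker compose exec backend composer update
```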
Using Xdebug?
I don't use it myself, but at this point you might want to add
Xdebug to your setup. I won't cover it in detail because this is
too PHP-specific, but this tutorial will show you how to make it
work with Docker and Docker Compose.
OPcache
You can skip this section if PHP is not the language you intend to
use in the backend. If it is though, I strongly recommend you follow
these steps because OPcache is a game changer when it comes to
local performance, especially on macOS (but it will also improve
your experience on other operating systems).
I won't explain it in detail here and will simply quote the official PHP
documentation: "OPcache improves PHP performance by storing precompiled
script bytecode in shared memory, thereby removing the need for PHP to
load and parse scripts on each request."

Create a php.ini file in a new .docker folder under src/backend, with the following content:

```ini
[opcache]
opcache.enable=1
opcache.revalidate_freq=0
opcache.validate_timestamps=1
opcache.max_accelerated_files=10000
opcache.memory_consumption=192
opcache.max_wasted_percentage=10
opcache.interned_strings_buffer=16
opcache.fast_shutdown=1
```
We place this file here and not at the very root of the project
because this configuration is specific to the backend application,
and we need to reference it from its Dockerfile.
Here is the updated Dockerfile (note that opcache was also added to the list of extensions to install):

```dockerfile
FROM php:8.1-fpm-alpine

# Install extensions
RUN docker-php-ext-install pdo_mysql bcmath opcache

# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer

# Configure PHP
COPY .docker/php.ini $PHP_INI_DIR/conf.d/opcache.ini

# Use the default development configuration
RUN mv $PHP_INI_DIR/php.ini-development $PHP_INI_DIR/php.ini

# Create user based on provided user ID
ARG HOST_UID
RUN adduser --disabled-password --gecos "" --uid $HOST_UID demo

# Switch to that user
USER demo
```
Note that we've added an instruction to copy the php.ini file over
to the directory where custom configurations are expected to go in
the container, whose location is given by the $PHP_INI_DIR
environment variable. And while we were at it, we also used the
default development settings provided by the image's maintainers,
which set up error reporting parameters, among other things
(that's what the RUN instruction moving php.ini-development is for).
And that's it! Build the image again and restart the containers – you
should notice some improvement around the backend's
responsiveness:
$ docker compose build backend
$ docker compose up -d
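As an optional sanity check, you can list the container's PHP modules again; OPcache should now show up in the output, under Zend Modules:

```shell
$ docker compose exec backend php -m
```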
The frontend application

Let's now turn to the frontend application, starting with a new frontend service in docker-compose.yml:

```yaml
  # Frontend Service
  frontend:
    build: ./src/frontend
    working_dir: /var/www/frontend
    volumes:
      - ./src/frontend:/var/www/frontend
    depends_on:
      - backend
```
Also add frontend to the Nginx service's depends_on list:

```yaml
  # Nginx Service
  nginx:
    image: nginx:1.21-alpine
    ports:
      - 80:80
    volumes:
      - ./src/backend:/var/www/backend
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d
      - phpmyadmindata:/var/www/phpmyadmin
    depends_on:
      - backend
      - phpmyadmin
      - frontend
```
Then, create a new frontend folder under src and add the
following Dockerfile to it:
```dockerfile
FROM node:17-alpine
```
We simply pull the Alpine version of Node.js' official image for now,
which ships with both Yarn and npm (which are package managers
like Composer, but for JavaScript). I will be using Yarn, as I am told
this is what the cool kids use nowadays.
Once the image is ready, create a fresh Vue.js project with the
following command:
$ docker compose run --rm frontend yarn create vite tmp --template vue
We're using Vite to create a new project in the tmp directory. This
directory is located under /var/www/frontend , which is the
container's working directory as per docker-compose.yml .
Just like the backend, let's move the files out of tmp and back to
the parent directory:
```shell
$ docker compose run --rm frontend sh -c "mv -n tmp/.* ./ && mv tmp/* ./ && rm -Rf tmp"
```
If all went well, you will find the application's files under
src/frontend on your local machine.
Nginx needs to know about the frontend application as well, so create a frontend.conf file under .docker/nginx/conf.d, with the following content:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name frontend.demo.test;

    location / {
        proxy_pass http://frontend:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Host $host;
    }
}
```
Let's complete our Dockerfile by adding the command that will start
the development server:
```dockerfile
FROM node:17-alpine

# Start application
CMD ["yarn", "dev"]
```
And start the project so Docker picks up the image changes (for
some reason, the restart command won't do that):
$ docker compose up -d
The application should now be available at frontend.demo.test. If you
don't see anything, take a look at the container's logs to find out
what's going on:

$ docker compose logs frontend

Once everything is up and running, replace the content of
src/frontend/src/App.vue with the following:
```vue
<template>
  <div id="app">
    <HelloThere :msg="msg"/>
  </div>
</template>

<script>
import axios from 'axios'
import HelloThere from './components/HelloThere.vue'

export default {
  name: 'App',
  components: {
    HelloThere
  },
  data () {
    return {
      msg: null
    }
  },
  mounted () {
    axios
      .get('http://backend.demo.test/api/hello-there')
      .then(response => (this.msg = response.data))
  }
}
</script>
```
Then, replace the content of src/frontend/src/components/HelloThere.vue with this:

```vue
<template>
  <div>
    <img src="https://tech.osteel.me/images/2020/03/04/hello.gif" class="center">
    <p>{{ msg }}</p>
  </div>
</template>

<script>
export default {
  name: 'HelloThere',
  props: {
    msg: String
  }
}
</script>

<style>
p {
  font-family: "Arial", sans-serif;
  font-size: 90px;
  text-align: center;
  font-weight: bold;
}

.center {
  display: block;
  margin-left: auto;
  margin-right: auto;
  width: 50%;
}
</style>
```
The component contains a little bit of HTML and CSS code, and
displays the value of msg in a <p> tag.
Save the file and go back to your browser: the content of our API
endpoint's response should now display at the bottom of the page.
If you want to experiment further, while the frontend's container is
up you can run Yarn commands like this:
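For instance, checking Yarn's version or adding a dependency (axios, which the components above rely on, is a good candidate if it isn't installed yet):

```shell
$ docker compose exec frontend yarn --version
$ docker compose exec frontend yarn add axios
```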
Conclusion
That was another long one, well done if you made it this far!
This article once again underscores the fact that, when it comes to
building such an environment, a lot is left to the maintainer's
discretion. There is seldom any clear way of doing things with
Docker, which is both a strength and a weakness – a somewhat
overwhelming flexibility. These little detours contribute to making
these articles dense, but I think it is important for you to know that
you are allowed to question the way things are done.
On the same note, you might also start to wonder about the
practicality of such an environment, with the numerous commands
and syntaxes one needs to remember to navigate it properly. And
you would be right. That is why the next article will be about using
Bash to abstract away some of that complexity, to introduce a nicer,
more user-friendly interface in its place.
You can subscribe to email alerts below to make sure you don't
miss it, or you can also follow me on Twitter where I will share my
posts as soon as they are published.