osteel's blog :: Web development resources

Docker for local web development, part 3: a three-tier architecture with frameworks
Last updated: 2022-02-19 :: Published: 2020-03-30 :: [ history ]

In this series

Introduction: why should you care?
Part 1: a basic LEMP stack
Part 2: put your images on a diet
Part 3: a three-tier architecture with frameworks (you are here)
Part 4: smoothing things out with Bash
Part 5: HTTPS all the things
Part 6: expose a local container to the Internet
Part 7: using a multi-stage build to introduce a worker
Part 8: scheduled tasks
Conclusion: where to go from here

Subscribe to email alerts at the end of this article or follow me on Twitter to be informed of new publications.

In this post
Foreword
A three-tier architecture?
The backend application
The new backend service
Non-root user
Creating a new Laravel project
OPcache
The frontend application
Conclusion

Foreword
In all honesty, what we've covered so far is pretty standard. Articles
about LEMP stacks on Docker are legion, and while I hope to add
some value through a beginner-friendly approach and a certain
level of detail, there was hardly anything new (after all, I was
already writing about this back in 2015).

I believe this is about to change with today's article. There are many ways to manage a multitiered project with Docker, and while
the approach I am about to describe certainly isn't the only one, I
also think this is a subject that doesn't get much coverage at all.

In that sense, today's article is probably where the rubber meets the
road for some of you. That is not to say the previous ones are
negligible – they constitute a necessary introduction contributing to
making this series comprehensive – but this is where the theory
meets the practical complexity of modern web applications.

The assumed starting point of this tutorial is where we left things at the end of the previous part, corresponding to the repository's
part-2 branch.

If you prefer, you can also directly checkout the part-3 branch,
which is the final result of today's article.

Again, this is by no means the one and only approach, just one that
has been successful for me and the companies I set it up for.

A three-tier architecture?
After setting up a LEMP stack on Docker and shrinking down the
size of the images, we are about to complement our MySQL
database with a frontend application based on Vue.js and a
backend application based on Laravel, in order to form what we call
a three-tier architecture.

Behind this somewhat intimidating term is a popular way to structure an application, consisting in separating the presentation
layer (a.k.a. the frontend), the application layer (a.k.a. the backend)
and the persistence layer (a.k.a. the database), ensuring each part
is independently maintainable, deployable, scalable, and easily
replaceable if need be. Each of these layers represents one tier of
the three-tier architecture. And that's it!

In such a setup, it is common for the backend and frontend applications to each have their own repository. For simplicity's sake,
however, we'll stick to a single repository in this tutorial.

The backend application


Before anything else, let's get rid of the previous containers and
volumes (not the images, as we still need them) by running the
following command from the project's root directory:

$ docker compose down -v

Remember that down destroys the containers, and -v deletes the associated volumes.

Let's also get rid of the previous PHP-related files, to make room for
the new backend application. Delete the .docker/php folder, the
.docker/nginx/conf.d/php.conf file and the src/index.php
file. Your file and directory structure should now look similar to this:
docker-tutorial/
├── .docker/
│   ├── mysql/
│   │   └── my.cnf
│   └── nginx/
│       └── conf.d/
│           └── phpmyadmin.conf
├── src/
├── .env
├── .env.example
├── .gitignore
└── docker-compose.yml

The new backend service


Replace the content of docker-compose.yml with this one:

version: '3.8'

# Services
services:

  # Nginx Service
  nginx:
    image: nginx:1.21-alpine
    ports:
      - 80:80
    volumes:
      - ./src/backend:/var/www/backend
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d
      - phpmyadmindata:/var/www/phpmyadmin
    depends_on:
      - backend
      - phpmyadmin

  # Backend Service
  backend:
    build: ./src/backend
    working_dir: /var/www/backend
    volumes:
      - ./src/backend:/var/www/backend
    depends_on:
      mysql:
        condition: service_healthy

  # MySQL Service
  mysql:
    image: mysql/mysql-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_HOST: "%"
      MYSQL_DATABASE: demo
    volumes:
      - ./.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
      - mysqldata:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u root --password=$$MYSQL_ROOT_PASSWORD
      interval: 5s
      retries: 10

  # PhpMyAdmin Service
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:5-fpm-alpine
    environment:
      PMA_HOST: mysql
    volumes:
      - phpmyadmindata:/var/www/html
    depends_on:
      mysql:
        condition: service_healthy

# Volumes
volumes:

  mysqldata:

  phpmyadmindata:

The main update is the removal of the PHP service in favour of the
backend service, although they are quite similar. The build key
now points to a Dockerfile located in the backend application's
directory ( src/backend ), which is also mounted as a volume on
the container.

As the backend application will be built with Laravel, let's create an Nginx server configuration based on the one provided in the official documentation. Create a new backend.conf file in .docker/nginx/conf.d:

server {
    listen 80;
    listen [::]:80;
    server_name backend.demo.test;
    root /var/www/backend/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass backend:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

Mind the values for server_name and fastcgi_pass (now pointing to port 9000 of the backend container).

Why not use Laravel Sail?

If you're already a Laravel developer, you might have come across Sail, Laravel's official Docker-based development
environment. While their approach is similar to this series' in
many ways, the main difference is that Sail is meant to cater
for Laravel monoliths only. In other words, Sail is meant for
applications that are fully encompassed within Laravel, as
opposed to applications where the Laravel bit is only one
component among others, as is the case in this series.

If you're curious about Laravel Sail, I've published a comprehensive guide that will teach you everything you need
to know.

You also need to update your local hosts file with the new domain
names (have a quick look here if you've forgotten how to do that):

127.0.0.1 backend.demo.test frontend.demo.test phpmyadmin.demo.test

Note that we also added the frontend's domain to save us a trip later, and that we changed phpMyAdmin's from phpmyadmin.test
to phpmyadmin.demo.test for consistency.

You should open the phpmyadmin.conf file in the .docker/nginx/conf.d folder now and update the following line accordingly:

server_name phpmyadmin.demo.test;

Now create the src/backend directory and add a Dockerfile with this content:

FROM php:8.1-fpm-alpine

Your file structure should now look like this:

docker-tutorial/
├── .docker/
│   ├── mysql/
│   │   └── my.cnf
│   └── nginx/
│       └── conf.d/
│           ├── backend.conf
│           └── phpmyadmin.conf
├── src/
│   └── backend/
│       └── Dockerfile
├── .env
├── .env.example
├── .gitignore
└── docker-compose.yml

Laravel requires a few PHP extensions to function properly, so we need to ensure those are installed. The Alpine version of the PHP
image comes with a number of pre-installed extensions which we
can list by running the following command (from the project's root,
as usual):

$ docker compose run --rm backend php -m

Now if you remember, in the first part of this series we used exec
to run Bash on a container, whereas this time we are using run to
execute the command we need. What's the difference?

exec simply allows us to execute a command on an already running container, whereas run does so on a new container which
is immediately stopped after the command is over. It does not
delete the container by default, however; we need to specify --rm
after run for it to happen.
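
As a quick illustration (a sketch; php -v is just a stand-in command here):

$ docker compose exec backend php -v
$ docker compose run --rm backend php -v

The first command requires the backend container to be up already; the second spins up a throwaway container and removes it once the command is done.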

The command essentially runs php -m on the backend container, and gives the following result:

[PHP Modules]
Core
ctype
curl
date
dom
fileinfo
filter
ftp
hash
iconv
json
libxml
mbstring
mysqlnd
openssl
pcre
PDO
pdo_sqlite
Phar
posix
readline
Reflection
session
SimpleXML
sodium
SPL
sqlite3
standard
tokenizer
xml
xmlreader
xmlwriter
zlib

[Zend Modules]

That is quite a lot of extensions, which might come as a surprise after reading the previous part praising Alpine images for featuring
the bare minimum by default. The reason is also hinted at in the
previous article: since it is not always simple to install things on an
Alpine distribution, the PHP image's maintainers chose to make
their users' lives easier by preinstalling a bunch of extensions.

The final result is around 80 MB, which is still very small.

Now that we know which extensions are missing, we can complete the Dockerfile to install them, along with Composer which is needed
for Laravel:

FROM php:8.1-fpm-alpine

# Install extensions
RUN docker-php-ext-install pdo_mysql bcmath

# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer

Here we're leveraging multi-stage builds (more specifically, using an external image as a stage) to get Composer's latest version from
the Composer Docker image directly (we'll talk more about multi-
stage builds in part 7).

Non-root user
There's one last thing we need to cover before proceeding further.

First, modify the backend service's build section in docker-compose.yml:

  # Backend Service
  backend:
    build:
      context: ./src/backend
      args:
        HOST_UID: $HOST_UID
    working_dir: /var/www/backend
    volumes:
      - ./src/backend:/var/www/backend
    depends_on:
      mysql:
        condition: service_healthy

The section now comprises a context key pointing to the backend's Dockerfile like before, but also an args section that
we'll cover in a minute.

Next, update the backend's Dockerfile again so it looks like this:

FROM php:8.1-fpm-alpine

# Install extensions
RUN docker-php-ext-install pdo_mysql bcmath

# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer

# Create user based on provided user ID
ARG HOST_UID
RUN adduser --disabled-password --gecos "" --uid $HOST_UID demo

# Switch to that user
USER demo

Why do we need to do this?

An important thing to understand is that, unless stated otherwise, containers will use the root user by default.

While this is arguably fine for a local setup, it sometimes leads to permission issues, like when trying to access or update a file on the
host machine that was created by the container (which is often the
case for the laravel.log file, for instance).

This happens because the user and group IDs used by the
container to create the file don't necessarily match those of the host
machine, in which case the operation is not permitted (I don't want
to linger on this for too long, but I invite you to read this great post
to better understand what's going on).
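
A quick way to see the mismatch for yourself, once the backend application exists and has written to its log (a rough sketch; the exact path may differ on your setup):

$ ls -ln src/backend/storage/logs/laravel.log
$ id -u

If the numeric owner of the file and your own user ID differ, editing the file from the host is likely to be denied.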

Users of Docker Desktop for Mac typically don't need to worry about this because it's doing some magic under the hood to avoid the problem altogether (unfortunately I couldn't find a proper description of the corresponding mechanism, other than this GitHub issue from 2018 referring to a section of the Docker documentation that doesn't exist anymore).

On the other hand, Linux and WSL 2 users are usually affected, and as the latter is getting more traction, there are suddenly a lot more people facing file permission issues.

So what do the extra Dockerfile instructions do? The first one indicates that there will be a HOST_UID build-time variable
available while building the image (once again, I don't want to
spend too much time on this, but this excellent article will tell you
everything you need to know about Dockerfile variables). The
corresponding value will come from the args section we also
added to the docker-compose.yml file earlier, which will contain
the host machine's current user ID (more on this in a minute).
We then create a user with the same ID in the container, using
Alpine's adduser command (please refer to the documentation for
details). We name that user demo .

Finally, we tell Docker to switch to that user using the USER instruction, meaning that from then on, demo will be the container's default user.
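
For reference, this is roughly what Docker Compose does for us under the hood; with plain Docker you would pass the argument by hand (illustrative only, and the demo-backend tag is made up):

$ docker build --build-arg HOST_UID=$(id -u) -t demo-backend ./src/backend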

There's one last thing we need to do, and that is to pass the user ID
from the host machine to the docker-compose.yml file.

First, let's find out what this value is by running the following
command in a terminal on the host machine:

$ id -u

On macOS for instance, that value might be 501 .

Now open the .env file at the root of the project and add the
following line to it:

HOST_UID=501

Change the value for the one obtained with the previous command
if different. We're done!
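
If you'd rather not edit the file by hand, the following one-liner (run from the project's root) appends the value for you:

$ echo "HOST_UID=$(id -u)" >> .env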

With this setup, we now have the guarantee that files shared
between the host machine and the containers always belong to the
same user ID, no matter which side they were created from. This
solution also has the huge advantage of working across operating
systems.

I know I went through this quite quickly, but I don't want to drown
you in details in the middle of a post which is already quite dense.
My advice to you is to bookmark the few URLs I provided in this
section and to come back to them once you've completed this
tutorial.

How about the group ID?

While researching this issue, most of the resources I found also featured the creation of a group with the same ID as the
user's group on the host machine. I initially intended to do the
same, but it was causing all sorts of conflicts with no easy way
to work around them.

That being said, it seems that having the same user ID on both
sides is enough to avoid those file permission issues anyway,
which led me to the conclusion that creating a group with the
same ID as well was unnecessary. If you think this is a mistake
though, please let me know why in the comments.

By the way, that doesn't mean that the user created in the
container doesn't belong to any group – if none is specified,
adduser will create one with the same ID as the user's by
default, and assign the user to it.
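
Once the image is rebuilt (next step), you can check the result with the id command; the output below assumes a host user ID of 501:

$ docker compose run --rm backend id
uid=501(demo) gid=501(demo) groups=501(demo)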

We now need to rebuild the image to apply the changes:

$ docker compose build backend
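
If you want to double-check that the mapping works, a throwaway file does the trick (a quick sketch; delete it afterwards):

$ docker compose run --rm backend touch test.txt
$ ls -ln src/backend/test.txt
$ rm src/backend/test.txt

The numeric owner of test.txt should now match the output of id -u.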

Creating a new Laravel project


Once the new version of the image is ready, run the following
command:

$ docker compose run --rm backend composer create-project --prefer-dist laravel/laravel tmp


This will use the version of Composer installed on the backend
container (no need to install Composer locally!) to create a new
Laravel 9 project in the container's /var/www/backend/tmp folder.

As per docker-compose.yml, the container's working directory is /var/www/backend, onto which the local folder src/backend was
mounted – if you look into that directory now on your local machine,
you will find a new tmp folder containing the files of a fresh
Laravel application. But why did we not create the project in
backend directly?

Behind the scenes, composer create-project performs a git clone, and that won't work if the target directory is not empty. In
our case, backend already contains the Dockerfile, which is
necessary to run the command in the first place. We essentially
created the tmp folder as a temporary home for our project, and
we now need to move back the files to their final location:

$ docker compose run --rm backend sh -c "mv -n tmp/.* ./ && mv tmp/* ./ && rm -Rf tmp"

This will run the content between the double quotes on the
container, sh -c basically being a trick allowing us to run more
than a single command at once (if we ran docker compose run
--rm backend mv -n tmp/.* ./ && mv tmp/* ./ && rm -Rf tmp
instead, only the first mv instruction would be executed on the
container, and the rest would be run on the local machine).

Shouldn't Composer be in its own container?

A common way to deal with package managers is to isolate them into their own containers, the main reason being that they
are external tools mostly used during development and that
they've got no business shipping with the application itself
(which is all correct).

I actually used this approach for a while, but it comes with downsides that are often overlooked: as Composer allows
developers to specify a package's requirements (necessary
PHP extensions, PHP's minimum version, etc.), by default it
will check the system on which it installs the application's
dependencies to make sure it meets those criteria. In practice,
this means the configuration of the container hosting
Composer must be as close as possible to the application's,
which often means doubling the work for the maintainer. As a
result, some people choose to run Composer with the
--ignore-platform-reqs flag instead, ensuring
dependencies will always install regardless of the system's
configuration.

This is a dangerous thing to do: while most of the time dependency-related errors will be spotted during development, in some instances the problem could go unnoticed until someone stumbles upon it, either on staging or even in production (this is especially true if your application doesn't have full test coverage). Moreover, staged builds are an effective way to separate the package manager from the application in a single Dockerfile, but that's a topic I will broach later in this series. Bear with!

By default, Laravel has created a .env file for you, but let's
replace its content with this one (you will find this file under
src/backend ):

APP_NAME=demo
APP_ENV=local
APP_KEY=base64:BcvoJ6dNU/I32Hg8M8IUc4M5UhGiqPKoZQFR804cEq8=
APP_DEBUG=true
APP_URL=http://backend.demo.test
LOG_CHANNEL=single

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=demo
DB_USERNAME=root
DB_PASSWORD=root

BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=file

Not much to see here, apart from the database configuration (mind
the value of DB_HOST ) and some standard application settings.

Let's try out our new setup:

$ docker compose up -d

Once it is up, visit backend.demo.test; if all went well, you should see Laravel's home page:

Pump up that RAM!

If the backend's response time feels slow, a quick and easy trick for Docker Desktop is to increase the amount of RAM it is
allowed to use. Open the preferences and adjust the Memory
slider under Resources:

The default value is 2 GB; I doubled that and it made a notable difference (if you've got 8 GB or more, it will most likely not
impact your machine's performance in any visible way).

There are other things you could do to try and speed things up
on macOS (e.g. Mutagen) but they feel a bit hacky and I'm
personally not a big fan of them. If PHP is your language of
choice though, make sure to check out the OPcache section
below.

And if you're a Windows user and haven't given WSL 2 a try yet, you should probably look into it.

Let's also verify our database settings by running Laravel's default migrations:

$ docker compose exec backend php artisan migrate


You can log into phpmyadmin.demo.test (with the root / root
credentials) to confirm the presence of the demo database and its
newly created tables ( failed_jobs , migrations ,
password_resets and users ).
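
If you prefer a terminal over phpMyAdmin, the same check can be done from the MySQL container directly (a sketch; the mysql client ships with the mysql-server image):

$ docker compose exec mysql mysql -u root -proot demo -e "SHOW TABLES"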

There is one last thing we need to do before moving on to the frontend application: since our aim is to have it interact with the
backend, let's add the following endpoint to routes/api.php :

<?php // ignore this line, it's for syntax highlighting only

Route::get('/hello-there', function () {
    return 'General Kenobi';
});

Try it out by accessing backend.demo.test/api/hello-there, which should display "General Kenobi".

Our API is ready!

We are done for this section but, if you wish to experiment further,
while the backend's container is up you can run Artisan and
Composer commands like this:

$ docker compose exec backend php artisan


$ docker compose exec backend composer

And if it's not running:

$ docker compose run --rm backend php artisan


$ docker compose run --rm backend composer

Using Xdebug?

I don't use it myself, but at this point you might want to add
Xdebug to your setup. I won't cover it in detail because this is
too PHP-specific, but this tutorial will show you how to make it
work with Docker and Docker Compose.

OPcache
You can skip this section if PHP is not the language you intend to
use in the backend. If it is though, I strongly recommend you follow
these steps because OPcache is a game changer when it comes to
local performance, especially on macOS (but it will also improve
your experience on other operating systems).

I won't explain it in detail here and will simply quote the official PHP
documentation:

OPcache improves PHP performance by storing precompiled script bytecode in shared memory,
thereby removing the need for PHP to load and
parse scripts on each request.

It's not enabled by default but we can easily do so by following a few steps borrowed from this article (which I invite you to read for
more details on the various parameters).

First, we need to introduce a custom configuration for PHP. Create a new .docker folder in src/backend, and add a php.ini file
to it with the following content:

[opcache]
opcache.enable=1
opcache.revalidate_freq=0
opcache.validate_timestamps=1
opcache.max_accelerated_files=10000
opcache.memory_consumption=192
opcache.max_wasted_percentage=10
opcache.interned_strings_buffer=16
opcache.fast_shutdown=1

We place this file here and not at the very root of the project
because this configuration is specific to the backend application,
and we need to reference it from its Dockerfile.

The Dockerfile is indeed where we can import this configuration from, and also where we will install the opcache extension.
Replace its content with the following:

FROM php:8.1-fpm-alpine

# Install extensions
RUN docker-php-ext-install pdo_mysql bcmath opcache

# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer

# Configure PHP
COPY .docker/php.ini $PHP_INI_DIR/conf.d/opcache.ini

# Use the default development configuration
RUN mv $PHP_INI_DIR/php.ini-development $PHP_INI_DIR/php.ini

# Create user based on provided user ID
ARG HOST_UID
RUN adduser --disabled-password --gecos "" --uid $HOST_UID demo

# Switch to that user
USER demo

Note that we've added an instruction to copy the php.ini file over
to the directory where custom configurations are expected to go in
the container, and whose location is given by the $PHP_INI_DIR
environment variable. And while we were at it, we also used the
default development settings provided by the image's maintainers,
which sets up error reporting parameters, among other things
(that's what the following RUN instruction is for).

And that's it! Build the image again and restart the containers – you
should notice some improvement around the backend's
responsiveness:
$ docker compose build backend
$ docker compose up -d
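
If you want to confirm the extension was picked up, here is a quick sanity check (a sketch):

$ docker compose exec backend php -m | grep -i opcache
Zend OPcache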

The frontend application


The third tier of our architecture is the frontend application, for
which we will use the ever more popular Vue.js.

The steps to set it up are actually quite similar to the backend's; first, let's add the corresponding service to docker-compose.yml,
right after the backend one:

  # Frontend Service
  frontend:
    build: ./src/frontend
    working_dir: /var/www/frontend
    volumes:
      - ./src/frontend:/var/www/frontend
    depends_on:
      - backend

Nothing that we haven't seen before here. We also need to specify in the Nginx service that the frontend service should be started first:

  # Nginx Service
  nginx:
    image: nginx:1.21-alpine
    ports:
      - 80:80
    volumes:
      - ./src/backend:/var/www/backend
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d
      - phpmyadmindata:/var/www/phpmyadmin
    depends_on:
      - backend
      - phpmyadmin
      - frontend

Then, create a new frontend folder under src and add the
following Dockerfile to it:
FROM node:17-alpine

We simply pull the Alpine version of Node.js' official image for now,
which ships with both Yarn and npm (which are package managers
like Composer, but for JavaScript). I will be using Yarn, as I am told
this is what the cool kids use nowadays.

Let's build the image:

$ docker compose build frontend

Once the image is ready, create a fresh Vue.js project with the
following command:

$ docker compose run --rm frontend yarn create vite tmp --template vue

We're using Vite to create a new project in the tmp directory. This
directory is located under /var/www/frontend , which is the
container's working directory as per docker-compose.yml .

Why Vite?

Vite is a set of frontend tools that was brought to my attention by Brian Parvin in the comments. It brings significant
performance gains over Vue CLI around compilation and hot-
reloading.

Just like the backend, let's move the files out of tmp and back to
the parent directory:

$ docker compose run --rm frontend sh -c "mv -n tmp/.* ./ && mv tmp/* ./ && rm -Rf tmp"

If all went well, you will find the application's files under
src/frontend on your local machine.

Finally, let's install the project's dependencies:

$ docker compose run --rm frontend yarn

Issues around node_modules ?

A few readers have reported symlink issues related to the node_modules folder on Windows. To be honest I am not entirely sure what is going on here, but it seems that creating a .dockerignore file at the root of the project containing the line node_modules fixes it (you might need to delete that folder first). Again, not sure why that is, but you might find answers here.

.dockerignore files are very similar to .gitignore files in that they allow us to specify files and folders that should be
ignored when copying or adding content to a container. They
are not really necessary in the context of this series and I only
touch upon them in the conclusion, but you can read more
about them here.
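
For reference, creating that file from the project's root is a one-liner (only needed if you actually run into the issue):

$ echo "node_modules" > .dockerignore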

We've already added frontend.demo.test to the hosts file earlier, so let's move on to creating the Nginx server configuration.
Add a new frontend.conf file to .docker/nginx/conf.d , with
the following content (most of the location block comes from this
article):

server {
    listen 80;
    listen [::]:80;
    server_name frontend.demo.test;

    location / {
        proxy_pass http://frontend:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Host $host;
    }
}

This simple configuration redirects the traffic from the domain name's port 80 to the container's port 8080, which is the port the
development server will use.

Let's also update the vite.config.js file in src/frontend :

import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

export default defineConfig({
  plugins: [vue()],
  server: {
    host: true,
    hmr: {port: 80},
    port: 8080,
    watch: {
      usePolling: true
    }
  }
})

This will ensure the hot-reload feature is functional. I won't go into details about each option here, as the point is not so much about
configuring Vite as seeing how to articulate a frontend and a
backend application with Docker, regardless of the chosen
frameworks. You are welcome to browse the documentation
though!

Let's complete our Dockerfile by adding the command that will start
the development server:

FROM node:17-alpine

# Start application
CMD ["yarn", "dev"]

Rebuild the image:

$ docker compose build frontend

And start the project so Docker picks up the image changes (for
some reason, the restart command won't do that):

$ docker compose up -d

Once everything is up and running, access frontend.demo.test and you should see this welcome page:

If you don't, take a look at the container's logs to see what's going
on:

$ docker compose logs -f frontend


Open src/frontend/src/components/HelloWorld.vue and
update some content (the <h1> tag, for example). Go back to your
browser and you should see the change happen in real time – this
is hot-reload doing its magic!

To make sure our setup is complete, all we've got left to do is to query the API endpoint we defined earlier in the backend, with the
help of Axios. Let's install the package with the following command:

$ docker compose exec frontend yarn add axios

Once Yarn is done, replace the content of src/frontend/src/App.vue with this one:

<template>
  <div id="app">
    <HelloThere :msg="msg"/>
  </div>
</template>

<script>
import axios from 'axios'
import HelloThere from './components/HelloThere.vue'

export default {
  name: 'App',
  components: {
    HelloThere
  },
  data () {
    return {
      msg: null
    }
  },
  mounted () {
    axios
      .get('http://backend.demo.test/api/hello-there')
      .then(response => (this.msg = response.data))
  }
}
</script>

All we are doing here is hitting the hello-there endpoint we created earlier and assigning its response to the msg property,
which is passed on to the HelloThere component. Once again I
won't linger too much on this, as this is not a Vue.js tutorial – I
merely use it as an example.

Delete src/frontend/src/components/HelloWorld.vue, and create a new HelloThere.vue file in its place:

<template>
  <div>
    <img src="https://tech.osteel.me/images/2020/03/04/hello.gif" class="center">
    <p>{{ msg }}</p>
  </div>
</template>

<script>
export default {
  name: 'HelloThere',
  props: {
    msg: String
  }
}
</script>

<style>
p {
  font-family: "Arial", sans-serif;
  font-size: 90px;
  text-align: center;
  font-weight: bold;
}

.center {
  display: block;
  margin-left: auto;
  margin-right: auto;
  width: 50%;
}
</style>

The component contains a little bit of HTML and CSS code, and
displays the value of msg in a <p> tag.

Save the file and go back to your browser: the content of our API
endpoint's response should now display at the bottom of the page.
If you want to experiment further, while the frontend's container is
up you can run Yarn commands like this:

$ docker compose exec frontend yarn

And if it's not running:

$ docker compose run --rm frontend yarn

Using separate repositories

As I mentioned at the beginning of this article, in a real-life situation the frontend and backend applications are likely to be
in their own repositories, and the Docker environment in a third
one. How to articulate the three of them?

The way I do it is by adding the src folder to the .gitignore file at the root, and I checkout both the frontend
and backend applications in it, in separate directories. And
that's about it! Since src is git-ignored, you can safely
checkout other codebases in it, without conflicts. Theoretically,
you could also use Git submodules to achieve this, but in
practice it adds little to no value, especially when applying
what we'll cover in the next part.
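
In practice that could look something like this (a sketch; the repository URLs are made-up placeholders):

$ echo "src/" >> .gitignore
$ git clone [email protected]:your-org/backend-app.git src/backend
$ git clone [email protected]:your-org/frontend-app.git src/frontend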

Conclusion
That was another long one, well done if you made it this far!

This article once again underscores the fact that, when it comes to
building such an environment, a lot is left to the maintainer's
discretion. There is seldom any clear way of doing things with
Docker, which is both a strength and a weakness – a somewhat
overwhelming flexibility. These little detours contribute to making
these articles dense, but I think it is important for you to know that
you are allowed to question the way things are done.

On the same note, you might also start to wonder about the
practicality of such an environment, with the numerous commands
and syntaxes one needs to remember to navigate it properly. And
you would be right. That is why the next article will be about using
Bash to abstract away some of that complexity, to introduce a nicer,
more user-friendly interface in its place.

You can subscribe to email alerts below to make sure you don't
miss it, or you can also follow me on Twitter where I will share my
posts as soon as they are published.
