php-docker-good-defaults's Introduction

PHP (Laravel) + Docker Hello World, for Showing Good Defaults for Using A PHP Stack in Docker

This tries to be a "good defaults" example of using PHP, Nginx, PHP-FPM, Laravel, and Composer in Docker for local development and shipping to production with all the bells, whistles, and best practices. Issues/PRs welcome.

NOTE: This is not a full PHP sample project. It's a collection of the Docker and Nginx related things you'll need to have this sort of setup fully in Docker. I'm not a PHP/Laravel developer, but rather an ops guy working with many smart PHP devs. I continue to refine this repo as I work with teams that dev, test, and deploy PHP in production containers.

Also note: I have courses on Docker, Swarm, and an upcoming Docker for Node.js course here.

Official Laravel Docker Environment

As of version 8 of Laravel, there is an officially supported Docker development environment called Sail. If you are specifically doing Laravel development, you may want to check it out on the Laravel website.
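If you want to try Sail with an existing Laravel 8+ app, the usual flow looks roughly like the commands below. These are Laravel's own commands, not something this repo provides or configures:

    composer require laravel/sail --dev
    php artisan sail:install
    ./vendor/bin/sail up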

Contributions

This project was made possible in part with support and development from PrinterLogic.

Local Development Features

  • Dev as close to prod as you can. docker-compose builds a local development image that is just like the production image except for the dev-only features listed below. The goal is to keep the dev environment as close to test and prod as possible while still giving you all the nice tools to be a happy dev.
  • Prevent needing node/npm on host. Installs node_modules outside the app root in the container so local development won't run into problems bind-mounting over it with local source code. This means npm install runs once on container build, and you don't need to run npm on the host or on each docker run. It re-runs on build if you change package.json.
  • One line startup. Uses docker-compose up for single-line build and run of the local development server.
  • Edit locally while code runs in container. docker-compose uses proper bind-mounts of host source code into the container, so you can edit locally while the code runs in a Linux container.
  • Enable debug from host to container. Opens the legacy debug port 5858 and the new inspect port 9229 for host-based debugging like Chrome tools or VS Code. Nodemon enables --inspect by default in docker-compose, but you can change it to --debug for Node < 6.3 debugging.
  • Quick re-builds. COPY in package.json and run npm install && npm cache clean before you COPY in your source code. This saves big on build time and keeps the container lean. The same applies to Composer and Bower (see the Dockerfile sketch after this list).
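To illustrate the dependency-caching and node_modules-outside-app-root ideas above, a minimal Dockerfile sketch might look like this. Paths and file names are illustrative, not copied from this repo, and it assumes composer, node, and npm are already installed earlier in the Dockerfile:

    # node layer: node_modules lives one level above the app root, so a
    # bind-mount of ./ onto /var/www/app won't hide it
    WORKDIR /var/www
    COPY package.json package-lock.json ./
    RUN npm install && npm cache clean --force
    ENV PATH=/var/www/node_modules/.bin:$PATH

    # composer layer: copy only the manifests first so this layer stays cached
    # unless composer.json/composer.lock change
    WORKDIR /var/www/app
    COPY composer.json composer.lock ./
    RUN composer install --prefer-dist --no-scripts --no-autoloader --no-interaction

    # application code last; editing it doesn't invalidate the layers above
    COPY . .
    RUN composer dump-autoload --optimize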

Production-minded Features

  • Use Docker built-in healthchecks. Uses the Dockerfile HEALTHCHECK instruction with the php-fpm /ping route to help Docker know whether your container is running properly.
  • Nginx + PHP-FPM in one container. Supervisor is used to combine the two services, Nginx and PHP-FPM, in a single container. Those two services have to work together to give you a web server and a PHP processor. Unlike Apache + mod_php, where PHP runs under the Apache process and only one process needs to start on container startup, Nginx and PHP-FPM have to be started separately. Docker is designed to run a single process with CMD in the Dockerfile, so the simple Supervisor program is used to manage them with a simple config file (see the sketch after this list). Having them both in one container makes the app easier to manage in my real-world experience. Docker has a docs page on various ways to start multi-service containers, showing a Supervisor example. So far, the Nginx + PHP-FPM combo is the only scenario where I recommend multi-service containers. It's a rather unique problem that doesn't always fit well in the model of "one container, one service". You could use two separate containers, one with nginx and one with php:fpm, but I've tried that in production and there are lots of downsides: a copy of the PHP code has to be in each container, they have to communicate over TCP, which is much slower than the Unix sockets used in a single container, and since you usually have a 1-to-1 relationship between them, the argument for individual service control is rather moot.
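For reference, the pieces involved might look roughly like this — a sketch under the assumption that nginx proxies a /ping location to php-fpm's ping.path, not the exact files shipped in this repo. First, a minimal supervisord config that runs both processes in the foreground:

    [supervisord]
    nodaemon=true

    [program:php-fpm]
    command=php-fpm -F
    autorestart=true

    [program:nginx]
    command=nginx -g "daemon off;"
    autorestart=true

In the Dockerfile, Supervisor then becomes the single CMD, while HEALTHCHECK polls the /ping route (curl has to be installed in the image for this to work):

    CMD ["supervisord", "-c", "/etc/supervisor/supervisord.conf"]
    HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
      CMD curl --fail http://localhost/ping || exit 1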

Assumptions

  • You have Docker and Docker-Compose installed (Docker for Mac, Docker for Windows, get.docker.com and manual Compose installed for Linux).
  • You want to use Docker for local development (i.e. never need to install php or npm on host) and have dev and prod Docker images be as close as possible.
  • You don't want to lose fidelity in your dev workflow. You want an easy environment setup, using local editors, debug/inspect, and a local code repo, while the web server runs in a container.
  • You use docker-compose for local development only (docker-compose was never intended to be a production deployment tool anyway).
  • The docker-compose.yml is not meant for docker stack deploy in Docker Swarm, it's meant for happy local development.

Getting Started

If this were your app, to start local development you would:

  • Run docker-compose up — that's all you need. It will:
  • Build the custom local image enabled for development.
  • Start a container from that image with ports 80, 443, 9000, and 9001 open (on localhost or your docker-machine IP).
  • Mount the pwd to the app dir in the container.
  • If you need other services like databases, just add them to the compose file and they'll be added to the custom Docker network for this app on up.
  • If you need to add packages to Composer, npm, bower, etc., stop docker-compose and run docker-compose up --build to ensure the image is updated.
  • Be sure to use docker-compose down to clean up after you're done dev'ing. A sketch of what such a compose file might look like follows this list.
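For illustration only — the real docker-compose.yml in the repo is the source of truth, and the service name, image details, and app path here are assumptions — a dev compose file along these lines would cover the points above:

    # docker-compose.yml (sketch; version key and paths are illustrative)
    version: '3'

    services:
      php:
        build:
          context: .
        ports:
          - "80:80"
          - "443:443"
          - "9000:9000"
          - "9001:9001"
        volumes:
          # bind-mount the host source into the container's app dir
          - .:/var/www/app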

MIT License

Copyright (c) Bret Fisher

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

php-docker-good-defaults's People

Contributors

bretfisher, fhsinchy, guillaumemaka, kgsearle, mehradsadeghi, ohnotnow, patgmiller, vasymus


php-docker-good-defaults's Issues

Best way to run Laravel queue worker & cron jobs in Swarm?

Hi Bret,

First of all, I'd like to thank you for a very good Udemy course, "Docker Swarm Mastery"; I'm looking forward to watching new lessons.

Some time ago (in the early days of Docker) I created a slightly different setup for our Laravel app. Since then I've just been tweaking it, but now I'm trying to solve some fundamental issues with it. There are a couple of similarities: for example, I use a single container to run nginx and php-fpm with supervisord. But there are some differences too - developers don't need to build the app image, they just download it from Docker Hub, and the whole build process is part of a GitLab CI/CD pipeline.

My production setup still uses Ansible & docker-compose, and I'd like to start using Docker Swarm.
The main problem with my current approach is that I'm not only running nginx & php-fpm in my container, but also cron and a Laravel queue worker.
Everything is managed by a Supervisord process.

This works on a single VPS with docker-compose, but it won't scale in Docker Swarm.

I'd like to start deploying into Swarm, and to do that I'd need to solve a couple of problems:

  1. The Laravel queue worker (actually Horizon - a more robust version of it) uses Redis. I'd need to shut it down gracefully during deployment using php artisan horizon:terminate, and I don't want to fight Supervisord trying to restart it, so it would be nice to have it in a separate container.
  2. If I split the queue worker from the app (php-cli vs php-fpm + nginx), I'd need to support two images - and how should I share the same code base between those two images inside Swarm? Probably I'd need to build two images with the same code base as part of my GitLab pipelines. Right now I don't see any other alternative.
  3. Next I could start scaling web_workers & queue_workers separately and change them one by one during deployment, but I'm not sure how I could send a signal (docker exec would be nice, but it won't work in Swarm) for the queue_workers to shut down gracefully before deployment. I need to make sure that I won't interrupt any jobs.
  4. The next problem is cron: if I leave it as is, cron runs will be duplicated by the number of web_workers, so I'd probably need another image just to run cron jobs. I don't see any other solution right now.
  5. The last problem is gathering the logs for all those apps. Right now I'm using a script to run tail on a FIFO. I'd probably need to leave this as is.

Everything is starting to look like a really big app, but I still want to put it on a single server by default and scale when needed.

I'd like to ask for your thoughts on these problems. Have you encountered them? How are people solving these kinds of issues? Any feedback would be very helpful to me.
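One pattern that addresses points 2-4 — sketched here as an assumption, not something this repo ships — is to build a single application image and run it as several Swarm services with different commands, keeping the cron-style service at one replica:

    # docker-stack.yml excerpt (sketch; image name and commands are illustrative)
    version: '3.7'

    services:
      web:
        image: registry.example.com/myapp:latest
        deploy:
          replicas: 3
        # default CMD: supervisord running nginx + php-fpm

      horizon:
        image: registry.example.com/myapp:latest
        command: php artisan horizon
        # give Horizon time to finish in-flight jobs after SIGTERM
        stop_grace_period: 60s
        deploy:
          replicas: 1

      scheduler:
        image: registry.example.com/myapp:latest
        command: sh -c "while true; do php artisan schedule:run --no-interaction; sleep 60; done"
        deploy:
          replicas: 1

Swarm sends SIGTERM on service updates, and Horizon is designed to terminate gracefully on SIGTERM, which should remove the need for a separate docker exec step during deployment.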

Development composer dependencies and php extensions (xdebug)

Hello!

First off I want to thank you for the awesome work you've been doing with these "good-defaults" repos. They've been immensely insightful.

I have a few questions about creating a production image.

  1. Would it make sense to have development dependencies omitted from the production build? Currently I add --prefer-dist and --no-dev to composer install to exclude the dev libraries and download .zip files instead of cloning repos.

  2. What would be a sane way to install the xdebug extension only in development?

I'll try to update this issue as I find out more information. Any insight you may be able to provide as well is greatly appreciated!
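One way to get both — sketched below under the assumption of a multi-stage build, which is not how this repo is currently structured, with an illustrative PHP version — is to keep a shared base stage and add xdebug plus dev dependencies only in a dev target:

    # Dockerfile (sketch)
    FROM php:8.2-fpm AS base
    # ... common extensions, nginx, supervisor, composer, app code ...

    FROM base AS prod
    RUN composer install --prefer-dist --no-dev --no-interaction --optimize-autoloader

    FROM base AS dev
    RUN pecl install xdebug && docker-php-ext-enable xdebug
    RUN composer install --prefer-dist --no-interaction

docker-compose can then select the dev stage via build.target, while CI builds with --target prod for the production image.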

Provide both container-based and pod-based designs

When this repo was first created, Kubernetes pods weren't as big a standard as they are now for production. People tend to have two use cases:

  • Need to run PHP and Web Server in same container (simpler for Docker/Swarm and non-Pod setups)
  • Need to run PHP and Web Server in different containers of the same Pod (easier to use stock images. Better isolation)

Right now this repo is just a single example of making one huge image, but the Pod abstraction makes for a better design.

This request is to add a Pod-based design that hopefully can use official images rather than a fully custom one. Maybe all that will be needed is a pod spec YAML.
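For reference, the Pod-based variant could look roughly like this. The image names and fastcgi wiring are assumptions: the nginx config mounted from the ConfigMap would need a fastcgi_pass to 127.0.0.1:9000, and the app code lives in the php-fpm image.

    # pod.yml (sketch)
    apiVersion: v1
    kind: Pod
    metadata:
      name: php-app
    spec:
      containers:
        - name: php-fpm
          image: registry.example.com/myapp-fpm:latest   # app code + php-fpm
          ports:
            - containerPort: 9000
        - name: nginx
          image: nginx:alpine                             # stock web server image
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d
      volumes:
        - name: nginx-conf
          configMap:
            name: php-app-nginx-conf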

Dockerfile ssh-keyscan fails

Image build fails for me on this step:

RUN ssh-keyscan -t rsa bitbucket.org >> /root/.ssh/known_hosts \
    && ssh-keyscan -t rsa github.com >> /root/.ssh/known_hosts

with the following error:

/bin/sh: 1: cannot create /root/.ssh/known_hosts: Directory nonexistent

Let's change it to:

RUN mkdir /root/.ssh && chmod 0700 /root/.ssh \
    && ssh-keyscan -t rsa bitbucket.org >> /root/.ssh/known_hosts \
    && ssh-keyscan -t rsa github.com >> /root/.ssh/known_hosts

What do you think?

NGINX_GPGKEY error when trying to build base image

Hello,
I am trying to build the base image from the base-php-nginx.Dockerfile with the following command:

docker build -f base-php-nginx.Dockerfile -t base-php-nginx:0.1 .

I am running into an error with the GPG key for Nginx, and I'm hoping there is a simple solution. I have searched for an updated GPG key, but I keep finding the same one. I am guessing the key has been changed and I just need an updated key to continue?

Here is the error output:

Step 7/16 : RUN NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; found=''; for server in ha.pool.sks-keyservers.net hkp://keyserver.ubuntu.com:80 hkp://p80.pool.sks-keyservers.net:80 pgp.mit.edu ; do echo "Fetching GPG key $NGINX_GPGKEY from $server"; apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; done; test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; echo "deb http://nginx.org/packages/debian/ stretch nginx" >> /etc/apt/sources.list.d/nginx.list && apt-get update && apt-get install --no-install-recommends --no-install-suggests -y nginx=${NGINX_VERSION} nginx-module-xslt=${NGINX_VERSION} nginx-module-geoip=${NGINX_VERSION} nginx-module-image-filter=${NGINX_VERSION} nginx-module-njs=${NJS_VERSION} gettext-base && rm -rf /var/lib/apt/lists/*
 ---> Running in 6d6fe2799f28
Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from ha.pool.sks-keyservers.net
Warning: apt-key output should not be parsed (stdout is not a terminal)
Executing: /tmp/apt-key-gpghome.o598SJLDOH/gpg.1.sh --keyserver ha.pool.sks-keyservers.net --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
gpg: keyserver receive failed: Cannot assign requested address
Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from hkp://keyserver.ubuntu.com:80
Warning: apt-key output should not be parsed (stdout is not a terminal)
Executing: /tmp/apt-key-gpghome.LMNYWg28ca/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
gpg: cannot open '/dev/tty': No such device or address
Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from hkp://p80.pool.sks-keyservers.net:80
Warning: apt-key output should not be parsed (stdout is not a terminal)
Executing: /tmp/apt-key-gpghome.AoKzfhaAsr/gpg.1.sh --keyserver hkp://p80.pool.sks-keyservers.net:80 --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
gpg: cannot open '/dev/tty': No such device or address
Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from pgp.mit.edu
Warning: apt-key output should not be parsed (stdout is not a terminal)
Executing: /tmp/apt-key-gpghome.JHLtjLfkCd/gpg.1.sh --keyserver pgp.mit.edu --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
gpg: cannot open '/dev/tty': No such device or address
error: failed to fetch GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
The command '/bin/sh -c NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; found=''; for server in ha.pool.sks-keyservers.net hkp://keyserver.ubuntu.com:80 hkp://p80.pool.sks-keyservers.net:80 pgp.mit.edu ; do echo "Fetching GPG key $NGINX_GPGKEY from $server"; apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; done; test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; echo "deb http://nginx.org/packages/debian/ stretch nginx" >> /etc/apt/sources.list.d/nginx.list && apt-get update && apt-get install --no-install-recommends --no-install-suggests -y nginx=${NGINX_VERSION} nginx-module-xslt=${NGINX_VERSION} nginx-module-geoip=${NGINX_VERSION} nginx-module-image-filter=${NGINX_VERSION} nginx-module-njs=${NJS_VERSION} gettext-base && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 1
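The sks-keyservers pool used in that RUN step has since been shut down, so the key fetch will keep failing regardless of which key you use. One workaround — an assumption about the fix, not something currently in this repo's Dockerfile — is to fetch the nginx signing key over HTTPS instead of going through apt-key and a keyserver:

    # replacement for the apt-key/keyserver loop (sketch)
    RUN apt-get update && apt-get install -y --no-install-recommends curl gnupg2 ca-certificates \
     && curl -fsSL https://nginx.org/keys/nginx_signing.key \
        | gpg --dearmor -o /usr/share/keyrings/nginx-archive-keyring.gpg \
     && echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/debian/ stretch nginx" \
        > /etc/apt/sources.list.d/nginx.list \
     && apt-get update \
     && apt-get install -y --no-install-recommends nginx \
     && rm -rf /var/lib/apt/lists/*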

docker-php-entrypoint-dev cache and build folders

Hey,

I'm just wondering why the /var/www/app/build and /var/www/app/cache directories are referenced in the entrypoint script. These aren't folders that normally exist in a standard Laravel installation, so maybe I'm missing something, but it would be great to know why they're included.

Thanks

Why not simply use php-apache?

Any reason why we should prefer a custom php image with nginx and supervisor, rather than just using the official php-apache image?
I know there are performance variations between the different php/apache/nginx configurations, but I'm not sure they're critical.
Also, we can still scale the backend using replicas if needed, right?
And maybe just put php-apache behind a caching proxy.

Handling production secrets/env/config for multiple services

This is a bit of a general question, but I keep missing your youtube live chats (timezones ftw!) and I would have asked there - but here we are... ;-)

Edit: Short version - we seem to have a bit of a mix of using env variables, secrets, config files - some in swarm, some in CI - is there a nicer 'accepted' way of storing these and getting them into running containers? Things like consul/vault seem to be adding even more complexity... :-/

In this repo for instance the Laravel app likes to have either environment variables set, or a local .env file to read from. The MySQL server likes environment variables or files with the username and password set in them.

So I could set environment variables during a CI build for the app, or bind a secret into Swarm and link it to the expected .env file. But for the MySQL service, if I want to share the same CI env variables, I'd have to build a custom image during CI or mount different secret files into the container carrying the same info that was already in the CI build, since they're not in the same format as .env. Then if there are SMTP credentials in yet another format, some storage/API thing like Minio, etc. - and dozens of apps all with secrets - it all seems to get a bit messy, with either custom images being built or info duplicated between CI and Swarm.

So I'm just wondering a) am I thinking about this in entirely the wrong way (I often am!), and b) is there a good or commonly accepted way of managing all this?
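As one data point — a hedged sketch of a common Swarm-native approach, not necessarily the "accepted" answer — both MySQL and the app can consume the same Swarm secrets as files: the official mysql image reads *_FILE environment variables natively, while the app side needs its entrypoint (or config) taught to read the mounted file, since DB_PASSWORD_FILE is not a built-in Laravel convention:

    # stack file excerpt (sketch)
    services:
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
          MYSQL_PASSWORD_FILE: /run/secrets/db_password
        secrets:
          - db_root_password
          - db_password

      php:
        image: registry.example.com/myapp:latest
        environment:
          # the entrypoint or config/database.php must read this file at boot
          DB_PASSWORD_FILE: /run/secrets/db_password
        secrets:
          - db_password

    secrets:
      db_root_password:
        external: true
      db_password:
        external: true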

Error when docker-compose up

Hi,

I just git cloned the repo, cd'd into the folder, and then ran docker-compose up. I instantly get the following error:

ERROR: The Compose file './docker-compose.yml' is invalid because:
services.php.environment.APP_DEBUG contains true, which is an invalid type, it should be a string, number, or a null

I guess I can change the value to 1, but I'm not sure. This is why I'm opening this ticket :)

Thanks.

Regards,

Thibault
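For anyone hitting this: Compose requires environment values to be strings, numbers, or null, so the usual fix (an illustrative snippet, not the repo's exact file) is to quote the boolean:

    # docker-compose.yml excerpt (sketch)
    services:
      php:
        environment:
          APP_DEBUG: 'true'   # quoted so YAML parses it as a string, not a boolean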

configure: error: freetype-config not found.

Hi, when building the base image I got this error

#7 4.429 If configure fails try --with-xpm-dir=<DIR>
#7 4.430 configure: error: freetype-config not found.
------
executor failed running [/bin/sh -c docker-php-ext-configure gd     --enable-gd-native-ttf     --with-freetype-dir=/usr/include/freetype2     --with-jpeg-dir=/usr/include/]: exit code: 1

Quick fix

Replace:

FROM php:7.2-fpm

to:

FROM php:7.2-fpm-stretch

Building base-php-nginx.Dockerfile

Hi,

I tried to build an image using base-php-nginx.Dockerfile with the following command, but it exited without building:
docker build -f base-php-nginx.Dockerfile -t t2lc/base-php-nginx .

If I'm not mistaken, this command should make an image called t2lc/base-php-nginx from the file base-php-nginx.Dockerfile, but it doesn't.

I captured the output in the attached file (output.txt), and at the end the terminal gave the following:

The command '/bin/sh -c NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; found=''; for server in ha.pool.sks-keyservers.net hkp://keyserver.ubuntu.com:80 hkp://p80.pool.sks-keyservers.net:80 pgp.mit.edu ; do echo "Fetching GPG key $NGINX_GPGKEY from $server"; apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; done; test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; echo "deb http://nginx.org/packages/debian/ jessie nginx" >> /etc/apt/sources.list && apt-get update && apt-get install --no-install-recommends --no-install-suggests -y nginx=${NGINX_VERSION} nginx-module-xslt=${NGINX_VERSION} nginx-module-geoip=${NGINX_VERSION} nginx-module-image-filter=${NGINX_VERSION} nginx-module-njs=${NJS_VERSION} gettext-base && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 1

Thank you for your help.

Regards,

Thibault

Node things in Dockerfile

Hi Bret,

I am very confused... the title and README of this repo suggest it's a good starting place for a PHP website... but why does it also have npm and node all over the Dockerfile and README? It seems to contradict the "good defaults" notion...
Finally, docker build is failing; the error is quite impossible to read, too - the "command" that fails is basically a whole script in itself:

$ docker build . -f base-php-nginx.Dockerfile

Step 10/16 : RUN curl -s -f -L -o /tmp/installer.php https://raw.githubusercontent.com/composer/getcomposer.org/da290238de6d63faace0343efbdd5aa9354332c5/web/installer && php -r " $signature = '669656bab3166a7aff8a7506b8cb2d1c292f042046c5a994c43155c0be6190fa0355160742ab2e1c88d40d5be660b410'; $hash = hash('SHA384', file_get_contents('/tmp/installer.php')); if (!hash_equals($signature, $hash)) { unlink('/tmp/installer.php'); echo 'Integrity check failed, installer is either corrupt or worse.' . PHP_EOL; exit(1); }" && php /tmp/installer.php --no-ansi --install-dir=/usr/bin --filename=composer --version=${COMPOSER_VERSION} && rm /tmp/installer.php && composer --ansi --version --no-interaction
---> Running in cc7074642068
All settings correct for using Composer
SHA384 is not supported by your openssl extension
The command '/bin/sh -c curl -s -f -L -o /tmp/installer.php https://raw.githubusercontent.com/composer/getcomposer.org/da290238de6d63faace0343efbdd5aa9354332c5/web/installer && php -r " $signature = '669656bab3166a7aff8a7506b8cb2d1c292f042046c5a994c43155c0be6190fa0355160742ab2e1c88d40d5be660b410'; $hash = hash('SHA384', file_get_contents('/tmp/installer.php')); if (!hash_equals($signature, $hash)) { unlink('/tmp/installer.php'); echo 'Integrity check failed, installer is either corrupt or worse.' . PHP_EOL; exit(1); }" && php /tmp/installer.php --no-ansi --install-dir=/usr/bin --filename=composer --version=${COMPOSER_VERSION} && rm /tmp/installer.php && composer --ansi --version --no-interaction' returned a non-zero code: 1
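A simpler route that sidesteps the installer signature check entirely — a suggestion, not what the repo's Dockerfile currently does — is to copy the composer binary from the official Composer image:

    # instead of downloading and verifying the installer script (sketch)
    COPY --from=composer:2 /usr/bin/composer /usr/bin/composer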
