
wordpress-laravel-docker-local-cloud's Issues

.htaccess and .gitignore configuration files

Bedrock comes with both .htaccess and .gitignore files.

Laravel likewise comes with both .htaccess and .gitignore files.

Double-check and make sure that, for both Bedrock and Laravel, we ship the out-of-the-box .htaccess and .gitignore files they provide (so that we don't reinvent the wheel).

Refactor and modularize by mirroring the main components

The main components we use are Apache, WordPress, Bedrock, Sage, Laravel and Adminer ==> the folder structure, the script separation and the script names should reflect the components we use.

Please re-organize the scripts and folders along these lines (a possible layout is sketched after the references below).


You can get inspiration from how these projects organize their Docker images:

https://github.com/laradock/laradock has a separate folder / Dockerfile for each component
https://github.com/iron-io/dockers does the same
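
A hypothetical layout in that spirit - one folder (and Dockerfile) per component, mirroring the components listed above; the names below are only illustrative, not the project's current structure:

    # Sketch only - one folder and Dockerfile per component, laradock-style
    mkdir -p docker/{apache,wordpress-bedrock,sage,laravel,adminer}
    touch docker/{apache,wordpress-bedrock,sage,laravel,adminer}/Dockerfile
    # a docker-compose.yml at the repository root would then reference these folders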

Step 2 - Use Laradock / Docker to set up a WordPress LAMP stack

A. Get the standard Laradock Adminer and AWS images (laradock/aws/ is required to get the AWS Elastic Beanstalk tooling, which adds support for deploying to AWS automatically from the command line). Instead of laradock/aws/elasticbeanstalk you can also try the AWS command line tools or the AWS ECS command line tools, if more appropriate.

For the initial AWS deployment use one of (a short EB CLI sketch follows below):

the AWS Elastic Beanstalk Command Line Interface (EB CLI) - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3.html, or
the AWS command line tools, or
the AWS ECS command line tools - whichever is more appropriate.
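
For reference, the basic EB CLI flow looks roughly like this (the application and environment names are placeholders, and the exact init options depend on the chosen platform):

    # Install the Elastic Beanstalk CLI and deploy from the project root
    pip install awsebcli
    eb init my-app --region us-east-1 --platform docker
    eb create my-app-env
    eb deploy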
B. Starting from the standard WordPress Docker image, alter that image and remove the lines which download the wordpress.tar.gz file (the ENV WORDPRESS_VERSION 4.9.2 line and the curl command that follows it).

On the modified "php-wordpress-ready" image, install the PHP Composer tool (for dependency management in PHP); a sketch of the install commands follows below.

Here is the official WordPress Docker image - https://hub.docker.com/_/wordpress/
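
Installing Composer into the modified image usually comes down to a few shell commands (in the Dockerfile these would be RUN instructions; the paths below are the conventional ones, not something this project already defines):

    # Download the Composer installer and install it globally
    curl -sS https://getcomposer.org/installer -o /tmp/composer-setup.php
    php /tmp/composer-setup.php --install-dir=/usr/local/bin --filename=composer
    rm /tmp/composer-setup.php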

How do I know when it has started? Where do I see possible errors?

Please update HowTo.md with information about:

  • How do I know when everything has started?
  • How long (approximately) will the installation take the first time?
  • How long (approximately) will subsequent executions / restarts take?
  • Where do I see possible errors?

docker compose up started executing and for a few minutes the prompt kept progressing - then the prompt apparently stopped (but I was not sure whether it was done, still working, or had hit an error), then after a few minutes one more new line appeared, then again nothing happened for a few minutes, then one more new line, and so on.

  • How do I know when everything has started?
  • Once done, how can I access WordPress / Laravel?
  • My "work" folder is still empty - where are the Bedrock / Laravel apps installed?
  • Where do I see possible errors?

Here is how my prompt looks now.

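Regarding "where do I see possible errors": the usual way to watch progress is the compose status and log commands - a sketch, reusing the -p project name mentioned elsewhere in these issues:

    # List the containers of this compose project and their state (Up / Exited ...)
    docker-compose -p $unique_name ps

    # Follow the combined logs of all services to see startup progress and errors
    docker-compose -p $unique_name logs -f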

Is wp-cli needed?

I see Bedrock does not need wp-cli to install WordPress - why would we fetch it?

Bedrock downloads WordPress like any other PHP dependency, and then Bedrock handles the installation of WordPress.

What was the need for us to use wp-cli, given that Bedrock already handles the installation for us?

Queries

  1. Which Linux folders are shared between the containers and the EC2 linux host?

  2. Which Linux folders are shared between the containers and the Windows host?

  3. Which folders from inside the containers are publicly accessible through Apache? For Laravel and Bedrock, what folder path does each framework publish through Apache?

Which folders could a malicious visitor browse (view) by manipulating the browser's URL?

  1. Which port mappings are configured between containers and the host?

  2. Which ports are exposed to the Web from the host? For which reason?

  3. Which users and groups does the EC2 host have configured? What permissions does each user and group have? On which folders does each user / group have permissions, and what are those permissions?

  4. Which users and groups are configured on each of the containers? What permissions does each user / group have? On which folders does each user / group have permissions, and what are those permissions?

  5. Which network protocols (ports) are exposed from the EC2 host to the web? For what reason?

  6. Can you enable SSH access to the EC2 host?

  7. Is it difficult to configure https://help.ubuntu.com/lts/serverguide/ftp-server.html#vsftpd-ftp-server-installation on the EC2 host? Do you need to create a separate user for FTP access? Please allow the FTP user to browse all folders starting from the root /

Note* - The intention is to allow browsing, in production, of the Bedrock / Laravel PHP app source code / logs (using FTP clients), and I assumed the FTP server should be installed on the EC2 host (or should the containers run the FTP server?)

  1. What is the command to stop all running containers from my Windows machine?

  2. Instructions to stop all containers on the AWS host?

  3. Instructions to stop / clean everything in the AWS environment (so that the $$$ charges will be 0 after the environment has been stopped / cleared).

  4. Instructions / commands / steps to re-provision everything on AWS "from scratch" - assuming I stopped everything as per the previous Step 13 and I need to re-provision the same app again. Repeated re-provisions should work reliably (for instance, on the local Windows machine, running the setup multiple times was failing and you showed me to use -p $unique_name as a kind of Docker namespace - if something similar is required when re-provisioning on AWS, please document everything needed to safely provision, clear everything and re-provision again, multiple times, reliably).

  5. Let's say that in a few months an existing project (using these scripts) accumulates PHP code (inside the Bedrock app or inside Laravel) and that code is hosted on GitHub. At some point I will git clone into an empty Windows folder and will want to run "provision.sh" to set up a new AWS cloud deployment (with the PHP code available at that time). What are the steps to do this? Is there anything special I will need to do to re-provision an existing app to AWS? This kind of repeated setup / provisioning should work reliably on both Windows and AWS (i.e. tricks like -p $unique_name should be documented).

  6. Support for Amazon RDS

With the current code, what steps are required to connect to an Amazon RDS database instead of using the current Docker MySQL image?

Ideally:

  • in development (setup) the current Docker MySQL image should be the default, but it should be possible to configure a connection to an Amazon RDS instance instead (if the developer configures Amazon RDS instead of Docker MySQL - for example to debug / test a scenario on real production data)
  • in production (provisioning) it should be the other way around - Amazon RDS should be the default, but it should be possible to configure a connection to the current Docker MySQL image if that is what the developer wants
  1. What happens if I forget and run with APP_INIT=yes repeatedly? Could I break / lose my existing "work" source code? (this should not happen, even if I forget and run again with APP_INIT=yes by mistake)

  2. Related to the previous query #18: can we totally eliminate these 2 variables from our setup?

WP_INIT=yes
PROJECT_PATH=C:\Projects\wordpress-laravel-docker-local-cloud-aws2\work

Instead of "work" we will provide an empty folder called "app" (you can put an empty text file inside - sometimes git has difficulties with empty folders) - This folder app will be a sibling of the folder setup (next to setup - on the same level)

Put a simple logic inside the setup script to check and see if there exists a folder named ../app/bedrock ==> If so it means the setup was already executed ==> Do Nothing and Otherwise if you cannot find any ../app/bedrock ==> It means it is the first time we execute setup ==> continue and run the full setup

This way we will not need WP_INIT or PROJECT_PATH
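
A minimal sketch of that check (assuming the setup script runs from the setup folder and the real creation steps replace the placeholder):

    #!/usr/bin/env bash
    # Sketch only - folder names come from this issue (../app/bedrock)
    APP_DIR="$(dirname "$0")/../app"

    if [ -d "$APP_DIR/bedrock" ]; then
        echo "bedrock already present - skipping initialization, just starting containers"
    else
        echo "first run - performing the full setup"
        # ... create the Bedrock / Laravel apps here (placeholder) ...
    fi

    docker-compose up -d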

Project Deliverables

  • Required project source code pushed to this GitHub repository
  • Document with detailed and clear steps on how to achieve everything described in the section below called How Testing Will Be Done

How Testing Will Be Done

  • Delete everything related to the wordpress-laravel-docker-local-cloud project from the local computer (Windows)
  • git clone the wordpress-laravel-docker-local-cloud project repository from GitHub

Without any further / manual configuration, WordPress, Laravel (/lara-admin) and Adminer should work fine when started both locally and in the AWS cloud (WordPress and Adminer should not require any additional installation - everything should be correctly pre-installed and configured inside the Docker containers). Adminer should be pre-configured to connect to the WordPress MySQL database.
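
One common way to pre-point Adminer at the WordPress database is the ADMINER_DEFAULT_SERVER environment variable supported by the official adminer image - a sketch, where the service name mysql, the network name and the host port are assumptions:

    # In docker-compose this would be an "environment:" entry on the adminer service
    docker run -d --name adminer \
        --network wordpress_default \
        -e ADMINER_DEFAULT_SERVER=mysql \
        -p 8081:8080 adminer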

Testing the Local (Windows) development environment (through Docker)

Laravel - Change the default "hello world" lara-admin entry point, refresh the locally corresponding https://apps.company.com/lara-admin/ URL in the browser, and make sure the changed Laravel starter hello world version is served.

WordPress - Modify Sage 9's index.blade.php https://github.com/roots/sage/blob/master/resources/views/index.blade.php, refresh the locally corresponding https://apps.company.com URL in the browser, and make sure the updated version is served.

Testing Cloud (AWS) Production

Use laradock/aws/ (see below) and the AWS Elastic Beanstalk command line to deploy all the needed Laradock Docker container images to AWS (don't do any manual steps such as FTP or any similar manual upload - the entire initial AWS deployment should work automatically through the AWS Elastic Beanstalk command line).

Once deployed to AWS ...

Laravel - change the default "hello world" Laravel starter app (/lara-admin) entry point, do a git push, then refresh the AWS production corresponding https://apps.domain.com/lara-admin/ URL in the browser and make sure the changed Laravel hello world version is served (dokku / heroku-buildpack should deploy / refresh the new PHP code automatically, without having to manually upload any PHP file).

WordPress - change Sage 9's index.blade.php https://github.com/roots/sage/blob/master/resources/views/index.blade.php, do a git push, then refresh the AWS production corresponding https://apps.domain.com URL in the browser and make sure the updated page is served.


variables.tf

  • please re-arrange variables.tf with the required variables coming at the top, followed by the optional ones in descending order of importance (however you define "important")
  • and for each optional variable please provide a "smart" default value.

P.S. - your point about how to organize them is also good, so please try to find a good mix between the two approaches (if possible):

"though I don't prefer organizing variables that way. Now they are organized by section - AWS variables, dokku variables, WordPress variables, ... - and if you read the description of each variable you'll understand whether it's optional or not"

Why did we specify the MySQL version ourselves?

As I understand Docker (I might be wrong), if we took the decision to start our image FROM wordpress:4.9.4-php7.2-apache, then we'll just use whatever MySQL version goes with wordpress:4.9.4-php7.2-apache, which means we won't have to concern ourselves with "what version of MySQL we're going to run" - and if we don't concern ourselves with the MySQL version, the risk of running a wrong version of MySQL will be 0 (because Automattic probably knows what version of WordPress is compatible with what version of MySQL).


Where is Laravel?


Update 1: The Laravel folder finally appeared after 12 minutes, but the issue remains: how can I easily check the status of the containers and know what they are doing (still working, crashed, or whatever)?

Update 2: And finally, after ~20 minutes, I was able to access WordPress using http://localhost:8080 (this URL was failing during those 20 minutes). I understand that project creation takes time the first time, but I need feedback during creation / startup / initialization, because if I see all containers UP and then, 15 minutes after the containers are up, the Laravel folder is still not there and the WP URL is not accessible, this is confusing.

Understanding the purpose of Sage and Laravel docker services

I need to double-check something. You said:

2- sage: this service watch css/js/ts files and build (compilation, transpile,..) them.
3- laravel: this service watch css/js/ts files and build (compilation, transpile,..) them.

By creating the Sage service, are we trying to implement "watch css/js/ts files and build (compilation, transpile, ..)" ourselves? We should not do that, because:

Sage 9 has Browsersync and Webpack features built in - this is from their main page https://github.com/roots/sage:

  • Webpack for compiling assets, optimizing images, and concatenating and minifying files
  • Browsersync for synchronized browser testing

which means we shouldn't do anything special ourselves to support this feature - it should just come out of the box (from the fact that we are using Sage 9, which has this built in).

The same is true for Laravel: we should not do anything ourselves for "watch css/js/ts files and build (compilation, transpile, ..)", because I'm sure Laravel has this feature built in (I have not worked with Laravel yet, but I know that all modern frameworks have this feature built in, and Laravel is a modern framework). A sketch of the relevant commands is below.
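
For reference, a sketch of the commands those built-in watchers typically expose (assuming a Sage 9 theme and a stock Laravel package.json; exact script names can differ per version):

    # Sage 9 theme (run inside the theme folder): webpack build + Browsersync watch
    yarn build        # one-off compile of the css/js assets
    yarn start        # watch files and live-reload through Browsersync

    # Laravel (run inside the Laravel app folder): Laravel Mix wraps webpack
    npm run dev       # one-off development build
    npm run watch     # recompile automatically when css/js files change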

Eliminate WP_INIT and PROJECT_PATH from our setup

For PROJECT_PATH you can assume it is ../app

For WP_INIT you can test for the existence of the folder ../app/bedrock and:

  • if it is not found, it means this is the first time we run, so you execute the full process, including the creation of everything required
  • if the ../app/bedrock folder is found, you just start the containers (without recreating anything)

Step 7 - Configure whatever is required to get the required permalink structure working fine

https://apps.domain.com (WordPress should be available here - Sage 9 theme)
https://apps.domain.com/wp-admin (the WordPress admin should be available here)
https://apps.domain.com/lara-admin (the Laravel "hello world" starter app should be available here)

Remember: WordPress, Laravel and Adminer should be pre-installed / pre-configured. No manual steps should be required once the Docker containers are started. Adminer should be pre-configured to connect to the WordPress database.
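
For WordPress permalinks under Apache, the usual prerequisites are mod_rewrite plus .htaccess overrides; a hedged sketch of what would typically be baked into the web image (paths assume the Debian-based official images):

    # Enable the rewrite module so Bedrock's .htaccess rules can take effect
    a2enmod rewrite

    # Allow .htaccess files to override settings in the document root
    # (the exact config file / vhost name depends on the image used)
    sed -i 's/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf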

EVERYTHING IS UP AND RUNNING

In the console, please show the following information when everything is up and running (keep the *** and the capitalization of EVERYTHING IS UP AND RUNNING so it will stand out easily in the console):
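
A minimal sketch of such a readiness banner (assuming WordPress answers on http://localhost:8080 locally, as mentioned in the issues above):

    # Wait until WordPress responds, then print a banner that stands out in the console
    until curl -sf http://localhost:8080 > /dev/null; do
        sleep 5
    done
    echo "*** EVERYTHING IS UP AND RUNNING ***"
    echo "WordPress: http://localhost:8080"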


Support for Amazon RDS

With the current code, what steps are required to connect to an Amazon RDS database instead of using the current Docker MySQL image?

Ideally (see also the sketch after this list):

  • in development (local) the current Docker MySQL image should be the default, but it should be possible, if needed, to configure a connection to an Amazon RDS instance instead
  • in production (provisioning) it should be the other way around - Amazon RDS should be the default but, if needed, it should be possible to configure a connection to the current Docker MySQL image
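
One hedged way to make the database host switchable is a single environment variable that both Bedrock's and Laravel's .env files read - a sketch, where the names below (DB_HOST, the RDS endpoint) are illustrative rather than the project's actual configuration:

    # Development default: point at the MySQL service from docker-compose
    export DB_HOST=mysql
    export DB_PORT=3306

    # Production override: point the same variable at an RDS endpoint instead
    # (the endpoint below is a made-up example)
    export DB_HOST=myproject.abc123.us-east-1.rds.amazonaws.com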

Estimated Time required to start everything locally?

When running locally (from the setup folder), what is a "normal" estimated time:

  1. For the first execution of docker compose up?
  2. For subsequent executions (once the images are built)?

I have already started the containers a few times and I have the impression it takes too long for everything to start (because the images are being rebuilt). Maybe I'm not running the correct command and I'm rebuilding the containers unnecessarily? Yesterday you taught me the command:

docker-compose -p $unique_name up -d --build

What should be the command to run the first time (when the images should be built), and what would be the command for subsequent executions (when the images should not be built again)?
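
A hedged sketch of the usual split (keeping the -p project name from the command above; --build is typically only needed again when a Dockerfile changes):

    # First run (or after changing a Dockerfile): build the images, then start
    docker-compose -p $unique_name up -d --build

    # Subsequent runs: reuse the already-built images
    docker-compose -p $unique_name up -d

    # If the containers already exist and only need to be started again
    docker-compose -p $unique_name start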

Why do we need gosu?

I don't see gosu used in any of the components we install (Bedrock, Sage, dokku, etc.).

The main theme is to stick as closely as possible to the standard installation procedures. If gosu is not mentioned in any installation guide, what is the purpose of it here?
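
For context, gosu is normally used only inside a container's entrypoint to drop from root to an unprivileged user before launching the main process - a sketch of the common pattern (the user name and final command are illustrative):

    #!/usr/bin/env bash
    # Typical docker-entrypoint.sh pattern: fix ownership as root, then
    # hand the main process over to the unprivileged user via gosu
    chown -R www-data:www-data /var/www/html
    exec gosu www-data apache2-foreground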
