Comments (17)
Some more discussion from IRC (copied instead of linked because BotBot is currently offline):
19:44:20 andyshinn:> i wonder if you could use ONBUILD for the PHP issue as well, like keeping the PHP source around, allowing the FROM image to specify an ENV that had a list of modules to compile, then just compile those individual modules via a script
19:44:30 andyshinn:> the script just iterates the names over something like: $ cd /usr/src/php/extname; phpize; ./configure; make; make install
19:44:57 +tianon:> heh, that's an interesting idea
19:45:11 andyshinn:> i guess there are OS level deps for those modules too though...
19:45:23 +tianon:> aren't the PHP images coming from buildpack-deps?
19:45:30 +tianon:> the OS level deps are probably already installed
19:45:48 andyshinn:> oh hmmm, i haven't even looked at the FROM image yet
19:45:50 +tianon:> we have so many language stack images that it's hard to remember the details of some of them sometimes :)
19:46:01 +tianon:> ah, it is coming from buildpack-deps
19:46:05 +tianon:> so we're probably fine on OS level deps
19:46:27 +tianon:> I guess the real question is how the size of the compiled modules compares with the size of the PHP source code
19:47:05 +tianon:> if the size of all the modules possible being compiled is not too far out there, then it might be worth just keeping it for the convenience of users not having to specify which modules they need and then wait for them to build, too
19:48:09 andyshinn:> yep, agreed
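The loop andyshinn sketches above could look something like this as a small shell helper (a rough sketch: the /usr/src/php/ext path and the print-the-plan behaviour are assumptions for illustration, not the image's actual layout):

```shell
#!/bin/sh
# Rough sketch of the per-extension build loop described in the IRC log.
# Assumes the PHP source tree is kept at /usr/src/php (an assumption,
# not necessarily the image's real layout).

ext_build_steps() {
  # Print, one command per line, the standard dance for extension "$1".
  printf '%s\n' \
    "cd /usr/src/php/ext/$1" \
    "phpize" \
    "./configure" \
    "make" \
    "make install"
}

# Print the plan for every requested extension; piping the output to
# `sh -e` would actually execute it.
for ext in "$@"; do
  ext_build_steps "$ext"
done
```

Printing the plan instead of running it keeps the sketch checkable without a PHP source tree present; a real helper would execute the steps directly and fail fast on the first error.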
from php.
I'd probably lean more towards the "throw everything* into the one build" camp at the moment, after dwelling on this topic for a while. The point being, I'd like to cater for the majority of folks as simply as possible, without them having to write their own Dockerfiles.
While I do think that would give us the quickest route to useful and usable images, the idea of having a more "programmable" build is intriguing and should definitely be something we look into. A simple way of saying "I want a PHP image, with extensions x, y, z please" would be the ultimate goal.
* By "everything", I mean "95% of extensions that people will generally be looking for".
+1 on throwing everything in, makes creating dockerfiles that inherit from it way easier.
It somehow feels like we're re-inventing the wheel here. Would it be an option to use PHP Build? It's a proven solution (for example, Travis uses it to build their CI VMs).
Additionally, PHP-Build has a huge list of recipes for all versions of PHP: https://github.com/CHH/php-build/tree/master/share/php-build/definitions
They have a "default" list of modules defined, but this can easily be overridden in the install script.
That includes ancient versions (which is still useful sometimes for testing).
IMO, combining the effort of the Docker community with that of PHP Build would make optimum use of knowledge and manpower.
Since PHP Build is not officially supported by upstream PHP, I would be hesitant to add it in. I would not be against looking at which modules they and other packagers (e.g. Debian) enable, to see which ones are generally needed.
Since the PHP Build is not officially supported by upstream PHP
Ok, that may be an issue if this will be communicated as the official PHP image on Docker. On the other hand (and as far as I can tell), PHP Build does a standard compile from source, albeit with some nice scripts and non-standard paths.
The primary reason for my previous comment is that I'd hate to see valuable time lost on things that other people have already spent years figuring out.
Which modules?
Regarding which modules to include: I'm not 100% sure that compiling all modules is the best thing to do. Not only would PHP become very memory hungry, it's also possible some extensions/modules are really not desired by all users. (Think, for example, of xdebug, which is most probably only used in development.)
Shared Extensions
Would it be possible to compile the extensions as "Shared Extensions" so that they can be enabled / disabled on a per-container basis? This would allow all extensions to be included, but not enabled by default.
Enabling extensions/modules could be done either by:
- setting environment variables during docker run or inside the Dockerfile. This is probably the most "docker-like" approach
- modifying the global php.ini / php-cli.ini
- using separate ini-files for modules / extensions (similar to the approach Ubuntu and Debian use)
- possibly the phpenmod / phpdismod utilities as used by Debian/Ubuntu
(The bullets above are not mutually exclusive; combinations of the above should be possible.)
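A minimal sketch of the ini-file-per-extension idea from these bullets (the conf.d scan directory matches the one shown later in this thread, /usr/local/etc/php/conf.d; the helper names php_enmod/php_dismod are made up here in the spirit of Debian's phpenmod/phpdismod):

```shell
#!/bin/sh
# Sketch of per-container enable/disable helpers for shared extensions,
# in the spirit of Debian/Ubuntu's phpenmod/phpdismod. Assumes the
# extension was built as a shared .so and that PHP scans
# $PHP_INI_DIR/conf.d for extra ini files.

PHP_INI_DIR="${PHP_INI_DIR:-/usr/local/etc/php}"

php_enmod() {
  # Enable extension "$1" by dropping a one-line ini file into conf.d.
  mkdir -p "$PHP_INI_DIR/conf.d"
  echo "extension=$1.so" > "$PHP_INI_DIR/conf.d/$1.ini"
}

php_dismod() {
  # Disable extension "$1" by removing its ini file again.
  rm -f "$PHP_INI_DIR/conf.d/$1.ini"
}
```

Because the ini files live on the filesystem, they could be toggled per container via a volume or an entrypoint script, which is what makes the shared-extension route attractive.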
This ended up way more verbose than intended, so I'll preface:
tl;dr -- I think this image could legitimately punt the question of configure flags, shared or static extensions (and external dependencies thereof) downstream to the consumer by loading the information for what to do from a few files for which it can provide defaults, and access them with ONBUILD ADD ...
I think it would be best to talk about the types of extensions as PHP describes them here which I'll shorten below:
Core Extensions
These are not actual extensions. They are part of the PHP core and cannot be left out of a PHP binary with compilation options.
e.g. Phar (which can be disabled with a flag at compile time)
Bundled Extensions
These extensions are bundled with PHP.
e.g. PCNTL, JSON, GD, PDO
External Extensions
These extensions are bundled with PHP, but in order to compile them, external libraries are needed.
e.g. Mcrypt, Mysql, Mysqli, Mysqlnd
PECL Extensions
These extensions are available from PECL.
e.g. ImageMagick, Libevent
The questions I would ask are:
1. Should this image support downstream modification of the configure flags?
This is the only way to disable something like phar or tokenizer. I'm not well-informed as to the actual use case for doing this except for the ubiquitous, vague 'security' example. We could do this by loading the flags from an ini file provided by something like ONBUILD ADD configure.ini.
2. Should this image support downstream management of Bundled and/or External Extensions?
This can be accomplished either by compiling them as Shared Extensions as @thaJeztah suggests, or by allowing downstream modification of the configure flags. Of course, now you need some way of sourcing the dependencies of the External Extensions, which is a pain with something as pedestrian as ldap; see the last comment here. Especially in this case, I have to wonder about the merits of compiling from source at all, because this exact workaround is packaged as a patch in the Debian package. Furthermore, an important value add of a Dockerfile relative to a heroku buildpack is that you can just use the package manager and be fine. At the same time, I absolutely understand wanting to compile a version not available in apt, but you would still need to duplicate or reuse some of the workarounds in the debian directory of the deb source. Maybe we should use the same ONBUILD ADD trick for a list of patches in the deb source to use or blacklist? How much re-engineering of dpkg is actually useful here?
3. Should this image provide tools for managing PECL extensions?
The most important examples here AFAIK are xdebug and redis. I feel like there are several strongly advocated strategies here, including actually using PECL, using source code and phpize directly, and the fledgling pickle project. I think this is something where a downstream image should use whatever is convenient.
4. Should this image provide tools for managing ini configuration in downstream images?
The value-add proposition as I see it is that you would ideally not be required to repeat information between the file that says you want the extension, the ini file for configuring it, and the ADD instruction for sticking your ini file into the image. We could just iterate over the ini conf.d and 'try' to add all the extensions for which an ini file exists, whatever 'try' means depending on the above. I think the chief issue here is that for whatever Shared Extensions this image will manage, it has to provide an ini configuration for them, or the running php process that it ships just won't work as expected. Invoking make doesn't provide the ini files, so there's a problem if we use Shared Extensions: you can't punt on choosing a non-mainstream solution and avoid duplicating work without leaving php in a less functional state. The choice becomes whether to duplicate work, adopt or crib from a known solution, or refuse to make any decision and only compile Bundled and External Extensions as static extensions, so that consumers of the image could take on this work by overriding the decision of what extensions to compile and how (static or shared, and how to get any external dependencies).
5. Should this image know anything about composer?
The ruby image uses bundler, so I think it's more than fair to invoke composer. This is a bit off topic since composer doesn't (yet) manage extensions, but the composer.lock definitely does record satisfied platform dependencies, including ext-* dependencies for php extensions and the version of php (or hhvm) used to satisfy the php (or hhvm) constraint. If you want to run something like ONBUILD composer install, then you have to guarantee that the versions in the lock are satisfiable if you want to preserve the known working state of the application.
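The ONBUILD ADD idea raised in the questions above could look roughly like this in a base image (a sketch: the php-extensions.txt filename is an illustrative assumption, and docker-php-ext-install stands in for whatever build step the image settles on):

```dockerfile
# Hypothetical base image: downstream images only provide a
# php-extensions.txt listing the extensions they want; these triggers
# fire when a child image is built FROM this one.
FROM php:5.6
ONBUILD ADD php-extensions.txt /usr/src/php-extensions.txt
ONBUILD RUN xargs -r docker-php-ext-install < /usr/src/php-extensions.txt
```

A downstream Dockerfile would then be little more than a FROM line plus the text file, which is close to the "I want a PHP image with extensions x, y, z" goal stated earlier.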
@winmillwill as an answer to point 5: it would not necessarily have to know anything about composer per se, but nearly all (modern) PHP libraries and frameworks rely on it. The current images do not support installing it, as openssl support is missing. I have opened a separate issue for this: #17
I'll just add my $0.02 that I ran up against this issue today, in the form of missing support for gzencode for MediaWiki. This would be solved by using --with-zlib, or if it were possible to load extension=zlib.so (using shared extensions as mentioned by @thaJeztah).
I think you should make the docker-php-ext-install script more obvious, and add some documentation to the top-level README.md.
I forked this, and after 30 minutes of writing my own script I found the magic script...
OK, maybe it's just me.
+1, it looks like this library does a lot more than the readme advertises; I'm still not sure of the correct way to enable modules or extensions. Thanks!
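For reference, the usage being asked about boils down to a Dockerfile along these lines (a sketch; the gd example and its libpng-dev system dependency are assumptions added for illustration):

```dockerfile
FROM php:5.6
# Bundled extensions compile straight from the PHP source shipped in
# the image:
RUN docker-php-ext-install pdo_mysql mbstring
# External extensions additionally need their system libraries first,
# e.g. gd wants libpng headers (package name assumed here):
RUN apt-get update \
 && apt-get install -y libpng-dev \
 && docker-php-ext-install gd \
 && rm -rf /var/lib/apt/lists/*
```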
@awc737 You can have a look at my repo, I think it's the right way to install extensions.
Nice, @AaronJan I think the documentation needs some examples like that.
@thaJeztah THX : )
This repo definitely needs some examples; I saw a lot of issues like this one.
I made a pull request.
Made a new pull request: https://github.com/docker-library/docs/pull/130
People should know about this.
Docs have been added for docker-php-ext-install, closing.
The script seems broken afaict. Using:
FROM php:5.6
RUN docker-php-ext-install pdo_mysql mbstring
RUN echo 'date.timezone = Asia/Bangkok' > /usr/local/etc/php/conf.d/date.ini
VOLUME /app
WORKDIR /app
EXPOSE 8080
ENTRYPOINT ["php"]
CMD ["-S", "0.0.0.0:8080", "-t", "web"]
pierre@jessie:~$ docker exec poolleage_web_1 ls /usr/local/etc/php/conf.d/
date.ini
Nothing about the new extensions' ini settings.
Thinking out loud:
From the make output:
Installing shared extensions: /usr/local/lib/php/extensions/no-debug-non-zts-20131226/
Installing header files: /usr/local/include/php/
but in the final image:
pierre@jessie:~$ docker exec poolleage_web_1 ls /usr/local/lib/php/extensions/no-debug-non-zts-20131226/
opcache.a
opcache.so
I am not sure why nothing gets copied or changed. The build seems to be correct.
However, I am not sure I understand how it is done, or why. Bundled extensions should always be built, and default core extensions should always be available (loaded). There are some build cases where one should really compile extensions at the same time as the core, not using phpize.
Then a simple script to enable or disable extensions at will could be provided, similar to Apache's enable/disable module scripts.
docker-php-*-ext can be kept for non-core extensions.
At some point (working on it :), it could use Pickle to install extensions, using either pickle directly or composer.
I could spend some time on a PR, but I would like to discuss it first, just to avoid doing double work or something you would not like :)