Apache httpd + suEXEC + chroot + FastCGI + PHP

Piqued your interest ? Excellent. For the moment, I’ll assume that you read the title of this post and immediately asked yourself (or rather, me) : “What are you talking about ? Why would anybody do that ? How exactly does one get the above to work ? Be specific !”.

Well, I will answer each in turn. Even if you did not ask those questions. Even if you really wanted to read about the carrot-needs of bunnies, instead. No way out, honest !

What ?

Apache httpd
is the most popular web server today. The page you are reading has been served by it; chances are, so have most of the other pages you have read recently. It is available for most operating systems, free, and well documented. People often refer to it simply as “Apache”, since it’s arguably the most prominent Apache Software Foundation project. Indeed, I will be referring to it like that in this post.
suEXEC
is Apache’s solution for privilege separation of CGI programs and similarly invoked programs external to Apache. A popular example of a CGI script would be a hit counter.
chroot
In *ix (and POSIX) parlance, chroot is a system call which changes the calling process’ root directory to a different one; this can be used, for instance, to constrain a program’s access to a specific part of the filesystem (a short command-line illustration follows these definitions).
FastCGI
is an evolution of CGI. In the regular CGI model, a process dies after a request has been handled; in FastCGI, persistent processes are possible. Since the startup and teardown costs of programs can be substantial, this can provide a sizable boost in performance and allows for things like object caching and the like.
PHP
is a popular scripting/programming-language used for web-pages (and other things). It is also a popular example of a program that may be run as a CGI process.
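
As a quick aside — not part of the setup we will build below, and assuming some directory that already contains a /bin/sh plus the libraries it needs — the chroot(8) command-line front-end to the system call illustrates the effect :

chroot /some/directory /bin/sh

Inside that shell, / refers to /some/directory, and nothing above it is reachable through normal path lookups (a root process can still break out of a plain chroot, which is something we will come back to at the end of this post).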

Why ?

Apache usually runs as a non-privileged user as part of its security model in order to reduce the impact of possible holes. If your installation of Apache is meant to serve only a single website (or a single host / virtual host), this works very well, even when you want to serve dynamically created content via CGI — such programs will simply run under the Apache user-id (which I shall henceforth assume is www-data) :

[Diagram]

However, if you want to serve several vhosts from the same Apache instance (often the case when you use the same server to host several different websites for potentially different users), it is useful to isolate one user from another. You would not want user eve to be able to read or change user alice’s files from within a CGI script called by Apache on behalf of eve’s vhost, for instance — but for alice’s CGI processes to read and/or modify alice’s files, those files have to be accessible to the www-data user. And if www-data has access in this setup, eve’s CGI processes have access as well.

[Diagram]

suEXEC

This is where suEXEC comes in : before Apache runs something for any vhost, it first invokes the suEXEC wrapper. This is a tiny program which runs as the superuser; its only function is to check the permissions of the CGI-invoked program (and various other parameters, so as to avoid a security breach), drop its superuser privileges in favour of the intended user’s privileges, and execute the intended program. Now you do have user isolation even for programs called by Apache on behalf of users — they get executed as that particular user.

[Diagram]

chroot

Using chroot to change the root directory before dropping those superuser privileges can provide an additional layer of defense in the setup; CGI programs executed within a chroot cannot, for instance, read or list any files outside the new root directory.

[Diagram]

This is useful for curtailing data gathering/spying by users or insecure scripts, and it provides an additional barrier in case a CGI-invoked program is compromised. However, it is not supported by the official suEXEC release bundled with Apache httpd — but we (and I’ll switch to that now, since you and I are working on it) will get it to work approximately like this :

[Diagram]

FastCGI

There are many ways to generate dynamic content to be served to users; just looking at PHP, you have the choice between mod_php, CGI, FastCGI, and more esoteric options. mod_php links the PHP interpreter into the httpd process. Since it is always loaded, the startup time for individual PHP scripts is minimal; however, PHP has a whopper of a runtime, and Apache really likes to spawn many processes and/or threads. Furthermore, if you want to run a multi-threaded Apache (or, more generally, MPMs other than mpm_prefork), you can run into trouble with libraries external to PHP which are not thread-safe. All your scripts will be run under the same user-id; mod_php attempts to provide vhost isolation even in this case with various hacks and kludges inside the PHP interpreter, such as safe_mode, but this is basically reinventing the wheel, poorly.

[Diagram]

PHP run as a CGI avoids both the constant memory cost and the security concerns of running everything under the Apache user-id (using suPHP, CGIWrap, or specially patched suEXEC wrappers); however, it carries a huge startup penalty — every time you access a CGI program run this way, the PHP interpreter has to be loaded and initialized, the code compiled, and so on.

[Diagram]

PHP run as FastCGI workers can provide a solution to these problems. It combines the externalized nature of a CGI program (which can be run as a different user and the like) with the speed of a persistent process (the PHP process keeps running in between requests). Resources can be distributed more wisely : Apache could be serving hundreds of concurrent requests while still only requiring very few running PHP interpreters.

[Diagram]

The FastCGI setup works the same as the regular CGI setup as far as suEXEC is concerned; the difference is that the PHP dispatcher and workers are persistent processes — they do not get killed after a request is served. The wrapper process is a very light-weight process which opens a pipe to the dispatcher for communication (optionally, it can also start the dispatcher, which in turn starts the workers). Requests are fanned out to the workers by the dispatcher and served back to httpd through the wrapper. The wrapper is usually long-lived as well, managed by some lightweight code inside Apache.

Putting it all together

Combining all of the above provides a solid separation of users on a host and added security-barriers in case of breaches. It is also very flexible since the Apache-instance can focus on what it does best, serving HTTP.

[Diagram]

There are other ways to achieve similar results. One could use mod_proxy on a frontend Apache, proxying requests to several other instances of Apache running as different users; this makes it possible to use several differently-tuned Apache configurations on the same host (or even vhost) — for instance, one could use mod_php on one vhost while using mod_perl or mod_svn on another. I may describe such a setup in a future post. You could also have Squid or other daemons more suited to proxying handle the frontend, or use different http-daemons as backends.

The solution described here is not specific to PHP — it works just as well for any other CGI-program or -language (such as Perl, Ruby, etc.). In fact, the requisite patch to plain suEXEC is provided.

How ?

I’m using Debian lenny as a base to work on, but any reasonable *ix should work similarly. From a fresh install, the following will get the toolchain and libraries ready for what we are about to do :

e@lilith:~/$ sudo apt-get build-dep apache2 
(...)
e@lilith:~/$ sudo apt-get build-dep php5
(...)

The setup we are about to implement will allow for name-based virtual hosting of different domains under different *ix-userids; the examples will assume alice.example.com and bob.example.com as the virtual hosts, alice and bob as the user-ids the respective scripts should be run as, www-data as the user-id httpd will run as, and /home/alice and /home/bob as the respective home-directories we will be chrooting into.
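
Since the doubled-up paths can look confusing later on, here is a small cheat-sheet of how paths will map between the host view and the view from inside alice’s chroot — this is merely the convention used in this post :

# outside the chroot (as httpd sees it)        inside the chroot (root = /home/alice)
# /home/alice/home/alice/public_html    <-->   /home/alice/public_html
# /home/alice/usr/local/wrappers        <-->   /usr/local/wrappers
# /home/alice/usr/local/logs            <-->   /usr/local/logs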

Apache httpd 2.2.11

First, let’s get Apache httpd and verify its integrity (you should probably use one of the many fine mirrors instead) :

e@lilith:~/src$ wget -nv http://archive.apache.org/dist/httpd/httpd-2.2.11.tar.bz2
2009-04-21 07:21:08 URL:http://archive.apache.org/dist/httpd/httpd-2.2.11.tar.bz2 [5230130/5230130] -> "httpd-2.2.11.tar.bz2" [1]
e@lilith:~/src$ md5sum httpd-2.2.11.tar.bz2
3e98bcb14a7122c274d62419566431bb  httpd-2.2.11.tar.bz2
e@lilith:~/src$ wget -q -O - http://www.apache.org/dist/httpd/httpd-2.2.11.tar.bz2.md5
3e98bcb14a7122c274d62419566431bb  httpd-2.2.11.tar.bz2
e@lilith:~/src$ tar xjf httpd-2.2.11.tar.bz2
e@lilith:~/src$ cd httpd-2.2.11
e@lilith:~/src/httpd-2.2.11$
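
As a small aside, the two sums can also be compared automatically by feeding the published checksum file to md5sum -c (run from the download directory), which should report the tarball as “OK” :

e@lilith:~/src$ wget -q -O - http://www.apache.org/dist/httpd/httpd-2.2.11.tar.bz2.md5 | md5sum -c -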

Next, we will patch suEXEC to support chroot and supply some PHP sugar using suexec-phpfcgi.diff :

--- suexec.c 2008-11-30 15:47:31.000000000 +0000
+++ suexec-phpfcgi.c    2006-04-07 17:05:04.000000000 +0000
@@ -259,6 +259,7 @@
     char *cmd;              /* command to be executed    */
     char cwd[AP_MAXPATH];   /* current working directory */
     char dwd[AP_MAXPATH];   /* docroot working directory */
+    char nwd[AP_MAXPATH];   /* after-chroot working dir  */
     struct passwd *pw;      /* password entry holder     */
     struct group *gr;       /* group entry holder        */
     struct stat dir_info;   /* directory info holder     */
@@ -456,7 +455,6 @@
         log_err("cannot run as forbidden uid (%d/%s)\n", uid, cmd);
         exit(107);
     }
-
     /*
      * Error out if attempt is made to execute as root group
      * or as a GID less than AP_GID_MIN.  Tsk tsk.
@@ -465,6 +463,39 @@
         log_err("cannot run as forbidden gid (%d/%s)\n", gid, cmd);
         exit(108);
     }
+
+
+    int striplen = strlen (target_homedir);
+
+    char* tlen = strchr(target_homedir, '/');
+    char* hlen = strchr(tlen+1, '/');
+    char* ulen = strchr(hlen+1, '/');
+    char* chroot_dir = strndup(target_homedir, ulen-target_homedir);
+    char* pt = getenv("PATH_TRANSLATED");
+    if (pt != 0) {
+      setenv("PATH_TRANSLATED", pt + (ulen - target_homedir), 1);
+    }
+
+    setenv("DOCUMENT_ROOT", "/", 1);
+
+    if (getcwd(nwd, AP_MAXPATH) == NULL) {
+        log_err("cannot get current working directory (prechroot)\n");
+        exit(111);
+    }
+
+    char* trunc_nwd = nwd+(ulen-target_homedir);
+
+    if (chdir(chroot_dir)) {
+        log_err("crit: can't chdir to chroot dir (%s)",chroot_dir);
+        exit(121);
+    }
+
+    if (chroot(chroot_dir) != 0) {
+      log_err("emerg: failed to chroot (%s, %s)\n", chroot_dir, cmd);
+      exit(122);
+    }
+
+    chdir (trunc_nwd);
 
     /*
      * Change UID/GID here so that the following tests work over NFS.
@@ -498,22 +529,11 @@
         exit(111);
     }
 
-    if (userdir) {
-        if (((chdir(target_homedir)) != 0) ||
-            ((chdir(AP_USERDIR_SUFFIX)) != 0) ||
-            ((getcwd(dwd, AP_MAXPATH)) == NULL) ||
-            ((chdir(cwd)) != 0)) {
-            log_err("cannot get docroot information (%s)\n", target_homedir);
-            exit(112);
-        }
-    }
-    else {
-        if (((chdir(AP_DOC_ROOT)) != 0) ||
-            ((getcwd(dwd, AP_MAXPATH)) == NULL) ||
-            ((chdir(cwd)) != 0)) {
-            log_err("cannot get docroot information (%s)\n", AP_DOC_ROOT);
-            exit(113);
-        }
+    if (((chdir(getenv("DOCUMENT_ROOT"))) != 0) ||
+        ((getcwd(dwd, AP_MAXPATH)) == NULL) ||
+        ((chdir(cwd)) != 0)) {
+        log_err("cannot get docroot information (%s)\n", AP_DOC_ROOT);
+        exit(113);
     }
 
     if ((strncmp(cwd, dwd, strlen(dwd))) != 0) {
@@ -565,7 +585,7 @@
      * Error out if the target name/group is different from
      * the name/group of the cwd or the program.
      */
-    if ((uid != dir_info.st_uid) ||
+/*    if ((uid != dir_info.st_uid) ||
         (gid != dir_info.st_gid) ||
         (uid != prg_info.st_uid) ||
         (gid != prg_info.st_gid)) {
@@ -575,7 +595,7 @@
                 dir_info.st_uid, dir_info.st_gid,
                 prg_info.st_uid, prg_info.st_gid);
         exit(120);
-    }
+    }*/
     /*
      * Error out if the program is not executable for the user.
      * Otherwise, she won't find any error in the logs except for

Notice that this patch is rather specific to running PHP; I disable the uid/gid checks on the called program (which will actually be the PHP wrapper inside the chroot, a file not owned by the user whose script will actually be executed), rewrite PATH_TRANSLATED, and introduce a chroot call into the mix.
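
To make the string arithmetic in the patch a little more concrete, here is what it does for alice’s vhost, using the paths assumed in this post (the chroot boundary is everything up to the third slash of target_homedir) :

# target_homedir = /home/alice/home/alice
# chroot_dir     = /home/alice
#
# PATH_TRANSLATED before : /home/alice/home/alice/public_html/test.php
# PATH_TRANSLATED after  : /home/alice/public_html/test.php
#
# i.e. the leading /home/alice is stripped, so the path is valid again
# once the process has chrooted into /home/alice.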

e@lilith:~/src/httpd-2.2.11/$ cd support
e@lilith:~/src/httpd-2.2.11/support$ patch -p0 -o suexec-phpfcgi.c < suexec-phpfcgi.diff
patching file suexec.c

To also support more traditional CGI programs running in a way similar to the regular suEXEC way, we apply another patch, this time to suexec.c itself :

--- suexec.c 2008-11-30 15:47:31.000000000 +0000
+++ suexec.c.chroot    2006-04-09 10:43:21.000000000 +0000
@@ -259,6 +259,7 @@
     char *cmd;              /* command to be executed    */
     char cwd[AP_MAXPATH];   /* current working directory */
     char dwd[AP_MAXPATH];   /* docroot working directory */
+    char nwd[AP_MAXPATH];   /* after-chroot working dir  */
     struct passwd *pw;      /* password entry holder     */
     struct group *gr;       /* group entry holder        */
     struct stat dir_info;   /* directory info holder     */
@@ -466,6 +465,38 @@
         exit(108);
     }
 
+    int striplen = strlen (target_homedir);
+
+    char* tlen = strchr(target_homedir, '/');
+    char* hlen = strchr(tlen+1, '/');
+    char* ulen = strchr(hlen+1, '/');
+    char* chroot_dir = strndup(target_homedir, ulen-target_homedir);
+    char* pt = getenv("PATH_TRANSLATED");
+    if (pt != 0) {
+      setenv("PATH_TRANSLATED", pt + (ulen - target_homedir), 1);
+    }
+
+    setenv("DOCUMENT_ROOT", "/", 1);
+
+    if (getcwd(nwd, AP_MAXPATH) == NULL) {
+        log_err("cannot get current working directory (prechroot)\n");
+        exit(111);
+    }
+
+    char* trunc_nwd = nwd+(ulen-target_homedir);
+
+    if (chdir(chroot_dir)) {
+        log_err("crit: can't chdir to chroot dir (%s)",chroot_dir);
+        exit(121);
+    }
+
+    if (chroot(chroot_dir) != 0) {
+      log_err("emerg: failed to chroot (%s, %s)\n", chroot_dir, cmd);
+      exit(122);
+    }
+
+    chdir (trunc_nwd);
+
     /*
      * Change UID/GID here so that the following tests work over NFS.
      *

This is essentially the same as the patch for the PHP-specific binary, except that it keeps the uid/gid checks in place.

e@lilith:~/src/httpd-2.2.11/support$ patch -p0 < suexec-chroot.diff
patching file suexec.c

Last, but not least, let’s add suexec-phpfcgi to the Makefile :

--- Makefile.in.old     2009-04-21 09:59:33.000000000 -0400
+++ Makefile.in 2009-04-21 10:04:11.000000000 -0400
@@ -59,9 +59,13 @@
        $(LINK) $(checkgid_LTFLAGS) $(checkgid_OBJECTS) $(PROGRAM_LDADD)
 
 suexec_OBJECTS = suexec.lo
-suexec: $(suexec_OBJECTS)
+suexec: $(suexec_OBJECTS) suexec-phpfcgi
        $(LINK) $(suexec_OBJECTS)
 
+suexec-phpfcgi_OBJECTS = suexec-phpfcgi.lo
+suexec-phpfcgi: $(suexec-phpfcgi_OBJECTS)
+       $(LINK) $(suexec-phpfcgi_OBJECTS)
+
 htcacheclean_OBJECTS = htcacheclean.lo
 htcacheclean: $(htcacheclean_OBJECTS)
        $(LINK) $(htcacheclean_LTFLAGS) $(htcacheclean_OBJECTS) $(PROGRAM_LDADD)
e@lilith:~/src/httpd-2.2.11/support$ patch -p0 < makefile.diff
patching file Makefile.in

With these patches done, let’s configure, compile, and install httpd :

e@lilith:~/src/httpd-2.2.11/support$ cd ..
e@lilith:~/src/httpd-2.2.11$ ./configure --prefix=/opt/apache --enable-suexec \
--enable-mods-shared=most --enable-so --with-mpm=worker --with-included-apr
(...)
config.status: creating include/ap_config_auto.h
config.status: executing default commands
e@lilith:~/src/httpd-2.2.11$

The important parts to note here are --enable-suexec and --with-mpm=worker. The former enables the compilation of suEXEC along with httpd, and the latter makes Apache use the worker MPM, which allows multiple processes, each holding multiple threads, to serve requests (you could use other MPMs as well, but this one is tested and performs very well).

Before we proceed, we need to change some settings for suEXEC. In support/suexec.h, we change AP_HTTPD_USER from “www” to “www-data” :

/*
 * HTTPD_USER -- Define as the username under which Apache normally
 *               runs.  This is the only user allowed to execute
 *               this program.
 */
/*#ifndef AP_HTTPD_USER*/
#define AP_HTTPD_USER "www-data"
/*#endif*/

We also need to change the allowed hierarchy for suEXEC to operate on :

/*
 * DOC_ROOT -- Define as the DocumentRoot set for Apache.  This
 *             will be the only hierarchy (aside from UserDirs)
 *             that can be used for suEXEC behavior.
 */
#ifndef AP_DOC_ROOT
/*# define AP_DOC_ROOT DEFAULT_EXP_HTDOCSDIR*/
#endif
#define AP_DOC_ROOT "/"

Now we are ready to start the compilation and install Apache and its helper binaries :

e@lilith:~/src/httpd-2.2.11$ make
Making all in srclib
(...)
make[2]: Leaving directory `/home/e/src/httpd-2.2.11/support'
make[1]: Leaving directory `/home/e/src/httpd-2.2.11'
e@lilith:~/src/httpd-2.2.11$ sudo make install
[sudo] password for e:
Making install in srclib
(...)
make[1]: Leaving directory `/home/e/src/httpd-2.2.11'
e@lilith:~/src/httpd-2.2.11$ cd support
e@lilith:~/src/httpd-2.2.11/support$ sudo cp ./suexec-phpfcgi /opt/apache/bin/
e@lilith:~/src/httpd-2.2.11/support$ sudo chmod u+s /opt/apache/bin/suexec-phpfcgi
e@lilith:~/src/httpd-2.2.11/support$
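
If you want to reassure yourself before going any further, httpd can report which MPM it was built with, and the suEXEC binary can report its compiled-in settings when invoked with -V as root — treat this as an optional sanity check :

e@lilith:~$ /opt/apache/bin/httpd -V | grep -i mpm
e@lilith:~$ sudo /opt/apache/bin/suexec-phpfcgi -V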

mod_fastcgi 2.4.6

Apache httpd itself does not come with native support for FastCGI — however, there are modules we can use to add it. In this case, we’ll use mod_fastcgi :

e@lilith:~/src$ wget -nv http://www.fastcgi.com/dist/mod_fastcgi-2.4.6.tar.gz
2009-04-21 09:55:30 URL:http://www.fastcgi.com/dist/mod_fastcgi-2.4.6.tar.gz [100230/100230] -> "mod_fastcgi-2.4.6.tar.gz" [1]
e@lilith:~/src$ tar xzf mod_fastcgi-2.4.6.tar.gz
e@lilith:~/src$ cd mod_fastcgi-2.4.6/
e@lilith:~/src/mod_fastcgi-2.4.6$ /opt/apache/bin/apxs -o mod_fastcgi.so -c *.c
(...)
e@lilith:~/src/mod_fastcgi-2.4.6$ sudo /opt/apache/bin/apxs -i -a -n fastcgi .libs/mod_fastcgi.so
(...)
[activating module `fastcgi' in /opt/apache/conf/httpd.conf]
e@lilith:~/src/mod_fastcgi-2.4.6$
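
apxs -i -a should have added the corresponding LoadModule line for us; a quick grep confirms it before we move on :

e@lilith:~$ grep -n fastcgi /opt/apache/conf/httpd.conf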

PHP 5.2.9

We’ll use PHP 5.2.9, but 4.x or any other version should work just as well — if you tune the configure-parameters a little.

e@lilith:~/src$ wget -nv http://de.php.net/get/php-5.2.9.tar.bz2/from/this/mirror
2009-04-21 10:44:53 URL:http://de.php.net/distributions/php-5.2.9.tar.bz2 [10203122/10203122] -> "php-5.2.9.tar.bz2" [1]
e@lilith:~/src$ md5sum php-5.2.9.tar.bz2
280d6cda7f72a4fc6de42fda21ac2db7  php-5.2.9.tar.bz2
e@lilith:~/src$ tar -xjf php-5.2.9.tar.bz2
e@lilith:~/src$ cd php-5.2.9/

PHP can take many configuration-arguments, but the minimum for our purposes is

e@lilith:~/src/php-5.2.9$ ./configure \
--enable-fastcgi --prefix=/usr/local/php5 --enable-cgi \
--with-config-file-path=/usr/local/wrappers/etc/php5/
(...)
Thank you for using PHP.
 
e@lilith:~/src/php-5.2.9$ make
(...)
Build complete.
Don't forget to run 'make test'.
 
e@lilith:~/src/php-5.2.9$ sudo make install
(...)
e@lilith:~/src/php-5.2.9$

The --prefix and --with-config-file-path can be changed, but that’s what I am assuming will be used henceforth; if you choose something else, make sure to pay attention to that fact while setting up the chroot, as well.
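
To confirm that the freshly installed binary really has FastCGI support compiled in, you can ask it for its version — the banner should mention “cgi-fcgi” if --enable-fastcgi took effect :

e@lilith:~$ /usr/local/php5/bin/php-cgi -v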

VirtualHost configuration

First, let’s change a few things in /opt/apache/conf/httpd.conf (namely the uid/gid and some includes) :

(...)
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
#User daemon
#Group daemon
User www-data
Group www-data
(...)
# Server-pool management (MPM specific)
Include conf/extra/httpd-mpm.conf
(...)
# Virtual hosts
Include conf/extra/httpd-vhosts.conf

Then we get to add some meat to /opt/apache/conf/extra/httpd-vhosts.conf. This would be the entire file once done :

NameVirtualHost *:80
 
FastCgiWrapper /opt/apache/bin/suexec-phpfcgi
FastCgiConfig -maxClassProcesses 5 -maxProcesses 100 -minProcesses 0 -pass-header Authorization
 
<Location /cgi-wrapper/php5-fcgi>
 SetHandler fastcgi-script
 Options +ExecCGI
</Location>
 
<VirtualHost *:80>
    DocumentRoot /home/alice/home/alice/public_html
    ServerName alice.example.com
    ServerAlias *.alice.example.com
 
    ErrorLog /home/alice/usr/local/logs/alice.example.com.error_log
    CustomLog /home/alice/usr/local/logs/alice.example.com.access_log combined
 
    SuexecUserGroup alice alice
 
    ScriptAlias /cgi-wrapper/ /home/alice/usr/local/wrappers/
 
    Action php-fastcgi /cgi-wrapper/php5-fcgi
    AddType application/x-httpd-php .php
    AddHandler php-fastcgi .php
 
    <Directory "/home/alice/usr/local/wrappers/" >
      AllowOverride None
      Options +ExecCGI -MultiViews -Indexes
      Order allow,deny
      Allow from all
    </Directory>
 
    <Directory "/home/alice/home/alice/public_html" >
      Options ExecCGI Indexes Includes SymLinksIfOwnerMatch
      Order allow,deny
      Allow from all
    </Directory>
</VirtualHost>
 
<VirtualHost *:80>
    DocumentRoot /home/bob/home/bob/public_html
    ServerName bob.example.com
    ServerAlias *.bob.example.com
 
    ErrorLog /home/bob/usr/local/logs/bob.example.com.error_log
    CustomLog /home/bob/usr/local/logs/bob.example.com.access_log combined
 
    SuexecUserGroup bob bob
 
    ScriptAlias /cgi-wrapper/ /home/bob/usr/local/wrappers/
 
    Action php-fastcgi /cgi-wrapper/php5-fcgi
    AddType application/x-httpd-php .php
    AddHandler php-fastcgi .php
 
    <Directory "/home/bob/usr/local/wrappers/" >
      AllowOverride None
      Options +ExecCGI -MultiViews -Indexes
      Order allow,deny
      Allow from all
    </Directory>
 
    <Directory "/home/bob/home/bob/public_html" >
      Options ExecCGI Indexes Includes SymLinksIfOwnerMatch
      Order allow,deny
      Allow from all
    </Directory>
</VirtualHost>

Documentation for the mod_fastcgi options is available on the mod_fastcgi site. Setting suexec-phpfcgi as the wrapper does most of our magic; the Location block, in tandem with the Action and ScriptAlias directives within the virtual-host definitions, makes sure that any file ending in “.php” is treated as a script to be executed by PHP in FastCGI mode.
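
It does not hurt to syntax-check the configuration at this point; apachectl -t only parses the files, so at worst expect warnings about directories that do not exist yet :

lilith:~# /opt/apache/bin/apachectl -t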

We still need to actually provide the php5-fcgi-wrapper, which happens in the next section.

Setting up environments

Let’s first set up the general directory structure for alice and bob before moving on to populating it with configuration data and binaries :

lilith:/# for a in alice bob ; do \
  useradd -d /home/$a/home/$a $a ;
  mkdir -p /home/$a/home/$a/public_html ;
  mkdir /home/$a/home/$a/home ;
  ln -s /home/$a /home/$a/home/$a/home ;
  chown root: /home/$a -R ;
  chown $a: /home/$a/home/$a -R ;
  mkdir -p /home/$a/usr/local/logs ;
  chown www-data: /home/$a/usr/local/logs ;
  mkdir -p /home/$a/usr/local/wrappers/etc/php5 ;
  ln -s /usr/local/logs /home/$a/home/$a/logs ; 
  mkdir /home/$a/lib ;
  mkdir -p /home/$a/usr/lib ;
  cd /home/$a ;
  ln -s lib lib64 ;
  cd ;
  mkdir /home/$a/dev ;
  mkdir /home/$a/tmp ;
  chmod a+rwx /home/$a/tmp ; 
  chmod o+t /home/$a/tmp ;
  mknod /home/$a/dev/zero c 1 5 ;
  mknod /home/$a/dev/urandom c 1 9 ;
  mknod /home/$a/dev/null c 1 3 ;
done
lilith:/#

(I am creating a somewhat minimal environment here; some features of PHP and other languages will require external programs such as ImageMagick, netpbm, etc. — if you need to provide these as well, take care when adding them to the chroot. Alternatively, you could even install an entire Linux distribution in there; just take care that /usr/local/wrappers, /usr/local/php5, and /home/username/ are available within that chroot environment.)

PHP Wrapper

Next, we add the PHP FastCGI wrapper to the chroot environments; it is a file called php5-fcgi containing :

#!/bin/sh
PHPRC="/usr/local/wrappers/etc/php5"
export PHPRC
PHP_FCGI_CHILDREN=5
export PHP_FCGI_CHILDREN
PHP_FCGI_MAX_REQUESTS=5000
export PHP_FCGI_MAX_REQUESTS
exec /usr/local/php5/bin/php-cgi

Note that you can tweak PHP_FCGI_CHILDREN and PHP_FCGI_MAX_REQUESTS to change the number of PHP worker processes and after how many requests each worker gets restarted automatically — and remember that mod_fastcgi can also spawn additional dispatchers.

This file is placed into the usr/local/wrappers subdirectory of each home directory, as is the php.ini configuration file :

lilith:/# for a in alice bob ; do \
  cp /home/e/php5-fcgi /home/$a/usr/local/wrappers/ ;
  chown root: /home/$a/usr/local/wrappers/php5-fcgi ;
  chmod a+x /home/$a/usr/local/wrappers/php5-fcgi ;
  cp /home/e/src/php-5.2.9/php.ini-dist /home/$a/usr/local/wrappers/etc/php5/php.ini ;
done

Binaries & Libraries

In this example, we’ll attempt to hardlink most files instead of creating physical copies in order to save both disk space and memory (since the program images will be the same across chroot environments). rsync is an excellent tool to get this done :

lilith:~# for a in alice bob ; do \
  rsync -rlopgDtH --delete /usr/local/php5/ /home/$a/usr/local/php5/ ;
done

Of course, PHP does require several libraries to be available; a call to ldd can tell us which ones :

lilith:~# ldd /usr/local/php5/bin/php-cgi
        linux-vdso.so.1 =>  (0x00007fffc61fe000)
        libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fa0bdd58000)
        librt.so.1 => /lib/librt.so.1 (0x00007fa0bdb4f000)
        libresolv.so.2 => /lib/libresolv.so.2 (0x00007fa0bd93b000)
        libm.so.6 => /lib/libm.so.6 (0x00007fa0bd6b8000)
        libdl.so.2 => /lib/libdl.so.2 (0x00007fa0bd4b4000)
        libnsl.so.1 => /lib/libnsl.so.1 (0x00007fa0bd29c000)
        libxml2.so.2 => /usr/lib/libxml2.so.2 (0x00007fa0bcf40000)
        libc.so.6 => /lib/libc.so.6 (0x00007fa0bcbed000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x00007fa0bc9d1000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fa0bdf90000)
        libz.so.1 => /usr/lib/libz.so.1 (0x00007fa0bc7ba000)

All of these and their dependencies need to be available inside the chroot environment. While there are tools that make this job easier and more precise (makejail is an example), for the moment I’ll just swing a big hammer and make all of /lib and some of /usr/lib available within the chroot; you can tune this on your own (or use the aforementioned tools). We also need a working sh inside the chroot so the wrapper script can be run :

lilith:~# for a in alice bob ; do \
    rsync -rlopgDtH --delete /lib/ /home/$a/lib/ ;
    rsync -rlopgDtH --delete /usr/lib/libz* /usr/lib/libxml* /home/$a/usr/lib/ ;
    rsync -rlopgDtH --delete /bin/sh /bin/bash /home/$a/bin/ ;
done
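
At this point, a quick smoke test of the jail is possible : running php-cgi through chroot(8) directly (as root) should print the PHP version banner, while complaints about missing shared objects mean a library still needs to be copied in :

lilith:~# chroot /home/alice /usr/local/php5/bin/php-cgi -v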

Before everything is ready, make sure that the /opt/apache/logs/fastcgi/-directory is writable by the user httpd is running as :

lilith:~# chown www-data: /opt/apache/logs/fastcgi/ -R

Testing

Let’s fire ‘er up :-)

lilith:~# /opt/apache/bin/apachectl start

To test whether PHP is working … we need some PHP !

lilith:~# for a in alice bob ; do \
  echo '<? phpinfo(); ?>' > /home/$a/home/$a/public_html/test.php ;
done

Let’s see what happens (you can use a browser for this too, but for illustrative reasons, I use telnet here) :

e@lilith:~$ telnet localhost 80
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET /test.php HTTP/1.1
Host: alice.example.com
 
HTTP/1.1 200 OK
Date: Tue, 21 Apr 2009 19:25:36 GMT
Server: Apache/2.2.11 (Unix) DAV/2 mod_fastcgi/2.4.6
X-Powered-By: PHP/5.2.9
Transfer-Encoding: chunked
Content-Type: text/html
(...)

Success for Alice !

e@lilith:~$ telnet localhost 80
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET /test.php HTTP/1.1
Host: bob.example.com
 
HTTP/1.1 200 OK
Date: Tue, 21 Apr 2009 19:25:36 GMT
Server: Apache/2.2.11 (Unix) DAV/2 mod_fastcgi/2.4.6
X-Powered-By: PHP/5.2.9
Transfer-Encoding: chunked
Content-Type: text/html
(...)

Success for Bob !
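
If you have curl installed, the same two checks fit on one line each by setting the Host header explicitly :

e@lilith:~$ curl -s -H 'Host: alice.example.com' http://localhost/test.php | head -n 3
e@lilith:~$ curl -s -H 'Host: bob.example.com' http://localhost/test.php | head -n 3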

And what does our process tree look like ?

lilith:/# pstree -up 12508
httpd(12508)-+-httpd(12509,www-data)
             |-httpd(12510,www-data)-+-php-cgi(12568,alice)-+-php-cgi(12569)
             |                       |                      |-php-cgi(12570)
             |                       |                      |-php-cgi(12571)
             |                       |                      |-php-cgi(12572)
             |                       |                      `-php-cgi(12573)
             |                       `-php-cgi(12575,bob)-+-php-cgi(12576)
             |                                            |-php-cgi(12577)
             |                                            |-php-cgi(12578)
             |                                            |-php-cgi(12579)
             |                                            `-php-cgi(12580)
             |-httpd(12511,www-data)-+-{httpd}(12513)
(...)

Excellent !

Further considerations

If you are doing the work of chrooting your CGI scripts, you should also install and configure the GRSecurity kernel patch. It has several enhancements hardening the chroot barrier (including preventing the use of fchdir on an open file handle to escape it).

I have only touched briefly on performance considerations and fine-tuning; you need to find a balance between how many FastCGI workers are kept available, how many users you do this for, how much memory you want to spend per user, etc. — the knobs should be fairly obvious, however.
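
A rough starting point for the memory side of that balance is to look at the resident size of the running workers and multiply by the worker counts you intend to allow — a crude estimate, since shared pages get counted more than once :

lilith:~# ps -o user,pid,rss,cmd -C php-cgi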

28 Comments

  1. heikki says:

    # patch -p0 -o suexec-phpfcgi.c < suexec-phpfcgi.diff
    patching file suexec-phpfcgi.c
    Hunk #1 FAILED at 259.
    Hunk #2 FAILED at 455.
    Hunk #3 FAILED at 463.
    Hunk #4 FAILED at 529.
    Hunk #5 FAILED at 585.
    Hunk #6 FAILED at 595.
    6 out of 6 hunks FAILED

    • eike says:

      # patch -p0 -o suexec-phpfcgi.c < suexec-phpfcgi.diff
      patching file suexec-phpfcgi.c
      Hunk #1 FAILED at 259.
      (…)

      At that point, the file suexec-phpfcgi.c should not yet exist, so patch should not be patching it but rather suexec.c and outputting the new file there; is it possible that you already performed this step before ?

      If not and this is an idiosyncrasy of your version of patch, try a cp suexec.c suexec-phpfcgi.c before running it. If that still does not fix it, are you sure you are using Apache 2.2.11’s suexec.c and not a previous, potentially different version thereof ?

  2. heikki says:

    sorry, i forgot to cd to the correct directory where the suexec.c is. the guide didn’t tell us to cd there, but it was obvious when i looked at the text before the command :)

  3. heikki says:

    is there a guide for making the ssh chrooted users work with this setup?
    and how about non-chrooted users to work with suexec and this setup?

    • eike says:

      There are quite a few different use-cases to consider for these things; I purposefully constrained the post to suexec and PHP to avoid having too many branches à la “you could also do …”.

      So while there are probably guides for making chrooted ssh happen and you might have seen some snippets like this before, I do not know of any off-hand that use these exact conventions.

      If you put this into /bin/chroot-shell :


      #!/bin/bash

      if [ "$1" = "-c" ]; then
        i=0;
        PARAMS="";
        for param in $*; do
          if [ $i -gt 0 ]; then
            PARAMS="$PARAMS $param";
          fi
          let i++;
        done;
        sudo /usr/bin/chrootuidsh /home/$USER $USER /bin/bash -l -c "$PARAMS"
      else
        sudo /usr/bin/chrootuidsh /home/$USER $USER /bin/bash -
      fi;

      (wordpress replaced the quotation marks here, make sure to rectify that)

      and this into /usr/bin/chrootuidsh :


      #!/bin/bash
      PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
      LOGNAME=$2
      HOME=$1
      unset SUDO_COMMAND
      unset SUDO_USER
      unset SUDO_UID
      unset SUDO_GID
      SHELL=/bin/bash
      SHLVL=0
      /usr/bin/chrootuid "$@"

      and add a line akin to


      username ALL= NOPASSWD: /usr/bin/chrootuidsh /home/username username /bin/bash *

      to /etc/sudoers, you probably have a decent starting point. Just make sure you have a shell-capable chroot-environment.

      As for non-chrooted users working together with the setup described in the post : there should not be much of an issue, other than that their home directory path looks a bit weird (/home/user/home/user/) and the environment inside the chroot does not match the one outside (so depending on the proficiency of your users, you may have to explain this to them in detail).

      By the way, the next step you may be asking about is chrooted procmail. :P

  4. Phineas says:

    First off, I’d like to thank you for this tutorial. It’s the only one thus far I’ve found that adequately addresses php security when there exist multiple untrusted virtual hosts.

    Following this tutorial, I end up with a recursive file structure: /home/alice/home/alice/home/alice/…. I believe I understand the importance (at least the necessity to have /home/alice/home/alice/public_html). However, is it possible to create a simpler structure where we have the document root instead at /home/alice/www? Any insight would be much appreciated.

    - Phineas

    • eike says:

      Originally, I wanted the inside of the chroot to “feel” mostly as if you were on the outside to the user; as such, ~/public_html seemed the natural choice. The problem with this setup is that any service that operates outside the chroot initially (mail, www, etc.) will have to take care to translate the path, chopping off the first /home/alice part if ever they want to execute anything inside the chroot with parameters that refer to files anywhere in the user’s home directory.

      Moving ~alice/public_html to /home/alice/www, though, may not get rid of the problem; you would still want /home/alice/home/alice/www to point to /www in that case — unless you take care to always use /www when referring to paths that will be passed into the chroot. This is a royal PITA to accomplish (for instance, you would have to teach Apache not to worry so much about whether a particular file that is to be executed actually exists in /www — but at the same time, you want static content to be served from /home/alice/www). I won’t say it’s impossible to accomplish, but you’ll have to do more work on paths and parameters before/after every chroot()-call. The alternative is to run services inside the chroot environment. This will consume more resources, naturally.

  5. It’s cool and all works fine,
    just a problem if i login as alice or bob
    ping localhost works
    but php can’t connect to mysql using name “localhost”
    “127.0.0.1” works…

    my first try was to copy
    cp /etc/hosts /home/alice/etc/
    cp /etc/resolv.conf /home/alice/etc/

    but the problem was not solved; apache could not resolve the name “localhost”. Can anyone help me?

  6. I can’t thank you enough for this article. Been trying for days to get Suexecd FastCGI PHP working on my Debian. And thanks to you, I finally got it working.

    One question:

    How do I configure it to use static FastCGI server instead of a dynamic one as you have done?

    • eike says:

      I have not needed to do that with this setup yet, though there should be nothing preventing you from using FastCGIExternalServer (see http://www.fastcgi.com/drupal/node/25#FastCgiExternalServer for notes on that directive) or FastCGIServer (http://www.fastcgi.com/drupal/node/25#FastCgiServer) in the httpd configuration.

      If you intend to use FastCGIExternalServer, be sure to look at FastCGIIPCDir as well when using chroot, since a manually-started process inside the chroot jail will put its IPC socket relative to the new root (i.e. not the place default settings would expect it at).

      If all you want is for httpd to leave the process spawning to PHP, simply have FastCGIConfig reflect your wish to only start one process; in that case the PHP-FastCGI-dispatcher will be utilized instead (instead of in addition to) the mod_fastcgi one. Since you seem to have a working setup without FastCGI(External)Server already, this might be the quickest way to get a workable result :)

      If you run into problems, feel free to reply. While I am currently short on time, I’ll try to look at what might have gone wrong ;)

  7. @eike

    Thank you for the reply.

    The only issue with using dynamic fastcgi servers is that I don’t see any PHP processes running when I do “ps -Al”. All I see is a number of Apache processes. So I am not sure if fastcgi is in effect or not.

    But my PHP works fine and so does suexec.

    One more thing : if I understand the mod_fastcgi documentation correctly, then one cannot use FastCGIServer for virtual hosting, because we can’t place FastCGIServer in a VirtualHost directive. Or am I completely out of my depth here?

    • eike says:

      Hrrm, ps -Al should really display PHP processes (that is once they have been triggered, i.e. once you have visited a site that required mod_fastcgi to spawn a server); they do display for me — the pstree-example in the post shows what you should be seeing once you visit http://yoursite/test.php or some such. If this does not happen, you may have suexeced PHP, but not suexeced FastCGI-PHP ;)

      You can use FastCGIServer for virtual hosting, but you cannot restrict a particular FastCGIServer to one virtual host — that is, while you should be able to define FastCGIServer inside a VirtualHost block (http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiServer says so under “Context”), other virtual hosts can use the same definition as well. This is usually not a problem since you are using different processes for different hosts, anyway — /home/alice/usr/local/wrappers/ is different from /home/bob/usr/local/wrappers, after all.

  8. Julien says:

    Hi, very good tutorial, thanks !!

    It almost works for me… Any .php file requested returns “No input file specified”. Do you have any lead about where it comes from ?

    I know it means that somehow the requested file is not at the right place, or that there is none given to php-cgi… But I don’t know how to get more debug information about that.

    Thanks
    Julien

    • eike says:

      Unfortunately, I do not know why that particular error appears in your particular setup. You could try to rewrite the wrapper script to log all arguments and environment variables it gets to a file and maybe see what went wrong that way.

  9. Roman says:

    Hello Eike!

    Just installed your setup on my debian – marvelous! Just what I was looking for. That is the best in combination with chrooted ssh/sftp, which is now built into openssh by default – none of the users can access each other’s dirs, neither via ssh nor via apache! Great!

    The only difference in my setup is that instead of rsync I did:
    mount --bind /lib /home/$a/lib
    mount --bind /usr/lib /home/$a/usr/lib
    mount --bind /bin /home/$a/bin

    It is better for me because hardlinking doesn’t work between separate partitions, and if new files appear or disappear on update, then the chroots must be relinked.

    And could you please explain the purpose of this line (from the environment setup script):
    ln -s /home/$a /home/$a/home/$a/home
    What actual magic does this line do? Is there any chance to get rid of it? Because my sftp chroot goes straight to /home/$a/home/$a (which is the home dir) and that symlink looks a bit confusing.

    • eike says:

      Bind-mounts work as well; I just dislike that you end up with dozens of them if you have more than just a few users in this setup. Hardlinks have their own set of problems, granted. The VServer-folks expanded a bit on that idea and provide copy-on-write hardlinks to solve some of them, though — but that’s beyond the scope of this post :-)

      The ln -s /home/$a /home/$a/home/$a/home makes it easier to use paths outside the chroot like ones inside; i.e. for alice, /home/alice/home/alice/ points to the same directory outside the chroot as it does inside (that is, if you chroot into /home/alice, the path /home/alice/home/alice means exactly the same as it did outside the chroot).
      If you get rid of this link, httpd (or rather the chrooting suexec) needs to be taught more magic w.r.t. rewriting paths for CGI scripts and such. There are also some side-effects if you decide to use paths as httpd sees them in mod_rewrite rules and the like, since httpd only sees the filesystem structure external to the chroot (which is also why it pays to be diligent in which apache directives you use and allow overrides on) :-)

      It’s possible to go without, though I have come to just use this link to avoid having to translate all paths, all the time, everywhere.

  10. sergis says:

    Thanks for the great tutorial — I created an ebuild and patches for gentoo for this, and everything is working with no problems. You can bind-mount the directory containing the mysql socket into the chroot directory, and then connecting to mysql via localhost works.

  11. Ro says:

    I’ve spent a few days trying to get it all to work, but now I’ve encountered a problem that I can’t seem to resolve.

    I’m getting the following unexpected error when requesting index.php on a vhost:

    Forbidden

    You don’t have permission to access /cgi/php/index.php on this server.

    My wrapper script:

    #!/bin/sh
    export PHPRC=/etc
    export PHP_FCGI_CHILDREN=5
    export PHP_FCGI_MAX_REQUESTS=5000
    exec /bin/php-cgi

    My vhost:

    DocumentRoot /users/testserver1/sites/www.testserver1.nl/htdocs
    ServerName www.testserver1.nl
    ServerAlias *.testserver1.nl

    ErrorLog /users/logs/testserver1/www.testserver1.nl_error_log
    CustomLog /users/logs/testserver1/www.testserver1.nl_access_log combined

    SuexecUserGroup testserver1 testserver1

    ScriptAlias /cgi/ /users/testserver1/users/testserver1/bin/

    SetHandler fastcgi-script

    AddHandler php-fastcgi .php
    Action php-fastcgi /cgi/php
    AddType application/x-httpd-php .php

    AllowOverride None
    Options +ExecCGI -MultiViews -Indexes
    Order allow,deny
    Allow from all

    Options ExecCGI Indexes Includes SymLinksIfOwnerMatch
    Order allow,deny
    Allow from all

    Here testserver1.nl is a non-existent domain that I’ve added to my hosts file. I’m stumped and I feel I’ve exhausted my possibilities! Any insights would be greatly appreciated.

    Bye, Ro

  12. Francesco says:

    Thanks a lot for this information. I had some problems, but they were due to some missing libs in the jail.

    I would like to point out that rsync is not doing hard-linking. So probably the best way to reference the libs is to use “cp -al” instead.

  13. Hien says:

    Has anyone tried this successfully with apache httpd 2.2.17 ?

  14. [...] of this section is based on the excellent post at metaclarity. Download the Apache HTTPd sources from the official download page. This guide is [...]

  15. Alerun says:

    The server works, but from log file it shows :

    Browser :
    http://DOMAIN/index.php

    Apache error log :

    [Thu May 12 14:18:30 2011] [warn] FastCGI: (dynamic) server "/home/DOMAIN/usr/local/wrappers/php5-fcgi" (uid 1000, gid 1000) restarted (pid 1528)
    suexec failure: could not open log file
    fopen: No such file or directory
    [Thu May 12 14:18:30 2011] [warn] FastCGI: (dynamic) server "/home/DOMAIN/usr/local/wrappers/php5-fcgi" (pid 1528) terminated by calling exit with status '1'
    [Thu May 12 16:03:34 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting

    ….

    [Thu May 12 19:21:25 2011] [warn] FastCGI: (dynamic) server "/home/DOMAIN/usr/local/wrappers/php5-fcgi" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    Suexec error log :

    [2011-05-12 19:15:44]: uid: (DOMAIN/DOMAIN) gid: (1002/DOMAIN) cmd: php5-fcgi

    What does this mean?
    :( ((

  16. Hi,

    I followed the instruction of this howto in Ubuntu 10.04, apache 2.2.14 and php 5.3.2 (original sources and patches from this howto).

    I ran into the following problem:

    [Tue Jul 19 14:49:12 2011] [warn] FastCGI: (dynamic) server "/home/jpalic/usr/local/wrappers/php5-fcgi" (pid 29259) terminated by calling exit with status '1'
    [Tue Jul 19 14:49:17 2011] [warn] FastCGI: (dynamic) server "/home/jpalic/usr/local/wrappers/php5-fcgi" (uid 1001, gid 1001) restarted (pid 29260)
    suexec failure: could not open log file
    fopen: No such file or directory

    Does anyone have a solution for that?

    Thanks in advance and regards.

    Jan

    • Hi again,

      I applied the following patch:

      @@ -86,7 +86,7 @@
      * debugging purposes.
      */
      #ifndef AP_LOG_EXEC
      -#define AP_LOG_EXEC DEFAULT_EXP_LOGFILEDIR "/suexec_log" /* Need me? */
      +#define AP_LOG_EXEC "/var/log/apache2/suexec_log" /* Need me? */
      #endif

      /*

      and this error went away.

      Regards.

      Jan

  17. [...] of writing this on my own, I would like to point you to this article, which does an excellent job of presenting both the big picture as well as the gory details. I [...]
