The Complete High Availability WordPress Setup
Intro
In this text we will describe how to set up a highly available WordPress on your own, using CentOS, the GlusterFS distributed filesystem and Percona XtraDB Cluster. Commands that you need to run are displayed in blue, their output in white, and the parts of commands colored red need to be adjusted to your environment. If the prompt is [ALL]# you need to run that command on every server; otherwise the name of the server and the current directory are specified inside the [] brackets. Pay attention to here-doc cat commands spanning multiple lines and ending with the keyword END.
Setting up mCloud mServer SSD1 servers
We will start with three mCloud SSD servers, each with 1 GB RAM and 2 CPUs, using the CentOS 7 template.
We will name them mns-wp-N.ha.rs and note the assigned IP addresses.
On every machine we will install some common packages. Not all of them are necessary, but they can come in handy:
[ALL]# yum -y install bash bc bind-utils elinks ftp htop html2text iotop iptraf joe jwhois logwatch lsof lynx mailx man man-pages mytop net-snmp net-snmp-utils nfs-utils ntp openssh-clients rsync screen strace sysstat tcpdump tcptraceroute vim vim-enhanced wget wireshark yum-plugin-security yum-utils pciutils dmidecode telnet attr net-tools https://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
To start, we will allow all communication between the machines. We do this on every machine.
[ALL]# iptables -I INPUT -s 87.237.205.1/32 -j ACCEPT
[ALL]# iptables -I INPUT -s 87.237.205.2/32 -j ACCEPT
[ALL]# iptables -I INPUT -s 87.237.205.3/32 -j ACCEPT
[ALL]# iptables-save > /etc/sysconfig/iptables
[ALL]# echo 'iptables-restore < /etc/sysconfig/iptables' >> /etc/rc.local
[ALL]# chmod +x /etc/rc.local
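You can list the INPUT chain to confirm the rules are in place (a quick optional check, not part of the original steps):
[ALL]# iptables -L INPUT -n --line-numbers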
Installing and configuring GlusterFS
On every machine we will configure the hosts file with the names and addresses of all the machines:
[ALL]# cat >> /etc/hosts <<"END"
87.237.205.1 mns-wp-1.ha.rs mns-wp-1
87.237.205.2 mns-wp-2.ha.rs mns-wp-2
87.237.205.3 mns-wp-3.ha.rs mns-wp-3
END
Install and start glusterfs:
[ALL]# wget -P /etc/yum.repos.d https://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
[ALL]# yum -y install glusterfs glusterfs-fuse glusterfs-server
[ALL]# systemctl start glusterd
Enable it and check status:
[ALL]# systemctl enable glusterd
[ALL]# systemctl status glusterd
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Fri 2015-04-03 15:46:31 CEST; 3min 47s ago
  Process: 11199 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 11200 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─11200 /usr/sbin/glusterd -p /var/run/glusterd.pid
Connect all servers into the gluster pool. It’s enough to run this on one of the servers.
[mns-wp-1 ~]# gluster peer probe mns-wp-2
peer probe: success.
[mns-wp-1 ~]# gluster peer probe mns-wp-3
peer probe: success.
[mns-wp-1 ~]# gluster peer probe mns-wp-1
peer probe: success. Probe on localhost not needed
Now we can check the pool status:
[ALL]# gluster pool list
UUID                                    Hostname        State
a32d208c-7966-4af2-80d0-a42ec3568bbe    mns-wp-2        Connected
6ed8f0c7-01f2-4951-a663-5e73dba405b1    mns-wp-3        Connected
c96c8c8e-5b30-448f-8177-a7ef0d9a325e    localhost       Connected
On one of the machines we will create a glusterfs volume with 3 replicas. We use the parameter force because our chosen directory /srv/vol0 is not on a separate filesystem:
[mns-wp-1 ~]# gluster volume create vol0 rep 3 transport tcp mns-wp-1:/srv/vol0 mns-wp-2:/srv/vol0 mns-wp-3:/srv/vol0 force
volume create: vol0 success
[mns-wp-1 ~]# gluster volume start vol0
volume start: vol0: success
Then we can check the status of the new volume. All bricks and Self-heal daemons should be online.
[mns-wp-1 ~]# gluster vol status vol0
Status of volume: vol0
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick mns-wp-1:/srv/vol0                        49152   Y       1316
Brick mns-wp-2:/srv/vol0                        49152   Y       1899
Brick mns-wp-3:/srv/vol0                        49152   Y       859
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     Y       2593
NFS Server on mns-wp-2                          N/A     N       N/A
Self-heal Daemon on mns-wp-2                    N/A     Y       1997
NFS Server on mns-wp-3                          N/A     N       N/A
Self-heal Daemon on mns-wp-3                    N/A     Y       871

Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks
Since the glusterfs servers are at the same time also clients, we can use localhost on every machine to mount the glusterfs volume:
[ALL]# mkdir -p /var/www/html/
[ALL]# mount -t glusterfs localhost:/vol0 /var/www/html
[ALL]# echo "localhost:/vol0 /var/www/html glusterfs defaults,_netdev 0 0" >> /etc/fstab
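A quick way to confirm that replication works is to create a file on one node and look for it on another; it should appear immediately on all of them (an optional sanity check, the file name is arbitrary):
[mns-wp-1 ~]# touch /var/www/html/gluster-test
[mns-wp-2 ~]# ls /var/www/html/gluster-test
[mns-wp-1 ~]# rm /var/www/html/gluster-test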
Recommended volume options for better performance:
[mns-wp-1 ~]# gluster vol set vol0 performance.quick-read on
volume set: success
[mns-wp-1 ~]# gluster vol set vol0 performance.read-ahead on
volume set: success
[mns-wp-1 ~]# gluster vol set vol0 performance.io-cache on
volume set: success
[mns-wp-1 ~]# gluster vol set vol0 performance.cache-size 256MB
volume set: success
[mns-wp-1 ~]# gluster vol set vol0 performance.stat-prefetch on
volume set: success
[mns-wp-1 ~]# gluster vol set vol0 performance.write-behind-window-size 4MB
volume set: success
[mns-wp-1 ~]# gluster vol set vol0 performance.flush-behind on
volume set: success
You can check these options with volume information:
[ALL]# gluster vol info vol0
Volume Name: vol0
Type: Replicate
Volume ID: c8cf45c4-bff5-42fb-b065-516de82ebb03
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: mns-wp-1:/srv/vol0
Brick2: mns-wp-2:/srv/vol0
Brick3: mns-wp-3:/srv/vol0
Options Reconfigured:
performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.stat-prefetch: on
performance.io-cache: on
performance.read-ahead: on
performance.quick-read: on
performance.cache-size: 256MB
Installing and configuring Percona XtraDB Cluster
We will install Percona XtraDB Cluster and set up a basic my.cnf file on every server:
[ALL]# yum -y install https://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
[ALL]# yum -y install Percona-XtraDB-Cluster-56
[ALL]# cat > /etc/my.cnf <<END
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://87.237.205.1,87.237.205.2,87.237.205.3
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_wordpress_cluster
# My name
wsrep_node_name=$(hostname -s)
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cr3t"
# Try to replicate even MyISAM
wsrep_replicate_myisam=1
END
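Because the here-doc above is not quoted, $(hostname -s) is expanded while the file is written, so every node gets its own wsrep_node_name. You can confirm that with a quick check (optional, not part of the original steps):
[ALL]# grep wsrep_node_name /etc/my.cnf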
On one of the servers we will start the bootstrap procedure of the XtraDB cluster:
[mns-wp-1 ~]# systemctl start mysql@bootstrap.service
For state transfer (SST) of data between cluster members we will use a separate mysql user. You should choose a complicated password here.
[mns-wp-1 ~]# mysql -e 'CREATE USER "sstuser"@"localhost" IDENTIFIED BY "s3cr3t";'
[mns-wp-1 ~]# mysql -e 'GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO "sstuser"@"localhost";'
On the other servers we just start the mysql service after installing Percona XtraDB Cluster and configuring my.cnf:
[mns-wp-2 ~]# systemctl start mysql
[mns-wp-3 ~]# systemctl start mysql
On every machine we will enable the mysql service to start automatically:
[ALL]# systemctl enable mysql
WordPress should have its own mysql user; choose a complicated password here:
[mns-wp-1 ~]# mysql -e 'CREATE DATABASE wordpress;'
[mns-wp-1 ~]# mysql -e 'CREATE USER "wpuser"@"%" IDENTIFIED BY "mojavrloduGcka_iKOmplik0v4na_sifra";'
[mns-wp-1 ~]# mysql -e 'GRANT ALL PRIVILEGES ON wordpress.* TO "wpuser"@"%";'
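Since CREATE USER and GRANT statements are replicated through Galera, the new wpuser should work against any node. A quick connectivity test from another server (an optional check, using the password chosen above):
[mns-wp-2 ~]# mysql -h 87.237.205.1 -u wpuser -p'mojavrloduGcka_iKOmplik0v4na_sifra' wordpress -e 'SELECT 1;'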
For health checks of the cluster nodes, we can use the clustercheck command, which uses the username “clustercheckuser” and password “clustercheckpassword!” by default. It’s recommended to change that password.
[mns-wp-1 ~]# mysql -e 'GRANT PROCESS ON *.* TO `clustercheckuser`@`localhost` IDENTIFIED BY "clustercheckpassword!";'
[ALL]# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Percona XtraDB Cluster Node is synced.
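If you do change the default password, the clustercheck script as shipped by Percona accepts the username and password as its first two arguments, so an adjusted check could look roughly like this (a hedged sketch; "MyCheckPass" is a placeholder, adapt it to your own password):
[mns-wp-1 ~]# mysql -e 'GRANT PROCESS ON *.* TO `clustercheckuser`@`localhost` IDENTIFIED BY "MyCheckPass";'
[ALL]# clustercheck clustercheckuser MyCheckPass
When the check is later run through xinetd, the same credentials can be passed with a server_args line in the mysqlchk service definition.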
Parameters and status of the cluster can be seen in the status variables whose names start with wsrep:
[ALL]# mysql -e 'show status like "wsrep_cluster_size";'
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
[ALL]# mysql -e 'show status like "wsrep_cluster_status";'
+----------------------+---------+
| Variable_name        | Value   |
+----------------------+---------+
| wsrep_cluster_status | Primary |
+----------------------+---------+
[ALL]# mysql -e 'show status like "wsrep_local_state_comment";'
+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+
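All of the relevant wsrep status variables can also be pulled in one go (just a convenience one-liner, not part of the original steps):
[ALL]# mysql -e "SHOW STATUS LIKE 'wsrep%';" | grep -E 'wsrep_cluster_size|wsrep_cluster_status|wsrep_local_state_comment'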
Also, it’s recommended to secure your mysql installation by disabling anonymous access and by setting the password for the mysql root user:
[ALL]# mysql_secure_installation
Installing WordPress with nginx, PHP-FPM and WP-CLI
We will install the basic PHP packages. We have already added the EPEL repository needed for some of those packages.
[ALL]# yum -y install \
php-common \
php-fpm \
php-gd \
php-cli \
php-pdo \
php-mysqlnd \
php-bcmath \
php-mcrypt \
php-xml \
php-mbstring \
php-xmlrpc \
php-pecl-memcache \
php-pecl-memcached \
php-pecl-apc \
php-devel \
php-dba \
php-odbc \
php-pecl-amqp \
php-pgsql \
php-pspell \
php-recode \
php-redis \
php-snmp \
php-soap \
php-tidy \
php-pecl-mongo \
php-pecl-solr \
php-pecl-sphinx
We will add a system user and group named wpuser. The WordPress files will belong to this user.
[ALL]# useradd -d /var/www/html/wordpress -m wpuser
[mns-wp-1 ~]# chmod go+rx /var/www/html/wordpress
For installing WordPress we’ll use WP-CLI run as wpuser:
[mns-wp-1 ~]# cd /var/www/html/
[mns-wp-1 /var/www/html]# wget https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
[ALL]# sudo -u wpuser php /var/www/html/wp-cli.phar --info
PHP binary:             /usr/bin/php
PHP version:            5.4.16
php.ini used:           /etc/php.ini
WP-CLI root dir:        phar://wp-cli.phar
WP-CLI global config:
WP-CLI project config:
WP-CLI version:         0.18.0
[ALL]# echo alias wp='"sudo -u wpuser php /var/www/html/wp-cli.phar"' >> ~/.bashrc
[ALL]# source ~/.bashrc
Install and configure WordPress:
[mns-wp-1 ~]# cd /var/www/html/wordpress
[mns-wp-1 /var/www/html/wordpress]# wp core download
Success: WordPress downloaded.
[mns-wp-1 /var/www/html/wordpress]# wp core config --dbname=wordpress --dbuser=wpuser --dbpass=mojavrloduGcka_iKOmplik0v4na_sifra
Success: Generated wp-config.php file.
[mns-wp-1 /var/www/html/wordpress]# wp core install --url=wp.ha.rs --title=MyHAWP --admin_user=wpadmin --admin_password=drugaduGack4_iCooomplikv4na_fra --admin_email=marko@ha.rs
Success: WordPress installed successfully.
[mns-wp-1 /var/www/html/wordpress]# wp option update home https://wp.ha.rs
Success: Updated 'home' option.
[mns-wp-1 /var/www/html/wordpress]# wp option update siteurl https://wp.ha.rs
Success: Updated 'siteurl' option.
Although WordPress does not use PHP sessions, many plugins do, so we will use memcache for central session storage:
[ALL]# yum -y install memcached
[ALL]# systemctl enable memcached
[ALL]# systemctl start memcached
[ALL]# cat >> /etc/php.d/memcache.ini <<END
session.save_handler=memcache
session.save_path='tcp://87.237.205.1:11211, tcp://87.237.205.2:11211, tcp://87.237.205.3:11211'
END
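Since the files in /etc/php.d are read by the CLI as well as by php-fpm, you can check that the session settings were picked up (an optional check; php-fpm itself will only see them once it is started in the next step):
[ALL]# php -i | grep -E 'session\.save_handler|session\.save_path'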
For security, we will not execute PHP-FPM as user wpuser, but as user apache with group wpuser. That way the user wpuser can grant group write permissions where needed. We will also configure the timezone, enable the php-fpm service and start it:
[ALL]# sed -i 's/group = apache/group = wpuser/' /etc/php-fpm.d/www.conf
[ALL]# sed -i 's/;date.timezone.*/date.timezone = Europe\/Belgrade/g' /etc/php.ini
[ALL]# systemctl enable php-fpm
ln -s '/usr/lib/systemd/system/php-fpm.service' '/etc/systemd/system/multi-user.target.wants/php-fpm.service'
[ALL]# systemctl start php-fpm
After that, we should see the php-fpm processes in the ps output:
[ALL]# ps -Leo pid,user,group,args | grep php-fpm
17464 root root php-fpm: master process (/etc/php-fpm.conf)
17465 apache wpuser php-fpm: pool www
17466 apache wpuser php-fpm: pool www
17467 apache wpuser php-fpm: pool www
17468 apache wpuser php-fpm: pool www
17469 apache wpuser php-fpm: pool www
18576 root root grep --color=auto php-fpm
We will install the newest nginx:
[ALL]# yum -y install https://nginx.org/packages/rhel/7/noarch/RPMS/nginx-release-rhel-7-0.el7.ngx.noarch.rpm
[ALL]# yum -y install nginx
And configure it following the standard recommendations for WordPress (e.g. https://github.com/Romke-vd-Meulen/nginx-config ):
[ALL]# echo 'fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;' >> /etc/nginx/fastcgi_params
[ALL]# echo 'fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;' >> /etc/nginx/fastcgi_params
[ALL]# cat > /etc/nginx/nginx.conf <<"END"
# Generic startup file.
user nginx wpuser;
worker_processes 2;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    sendfile on;
    # tcp_nopush on;
    keepalive_timeout 3;
    # tcp_nodelay on;
    gzip on;
    # php max upload limit cannot be larger than this
    client_max_body_size 32m;
    index index.php index.html index.htm;
    # Upstream to abstract backend connection(s) for PHP.
    upstream php {
        # this should match value of "listen" directive in php-fpm pool
        # server unix:/tmp/php-fpm.sock;
        server 127.0.0.1:9000;
    }
    include /etc/nginx/conf.d/*.conf;
}
END
The default virtual host will include other configuration files:
[ALL]# cat > /etc/nginx/conf.d/default.conf <<"END"
server {
    server_name _;
    listen 0.0.0.0:8080;
    rewrite ^ $scheme://wp.ha.rs$request_uri redirect;
}
server {
    server_name wp.ha.rs;
    listen 0.0.0.0:8080;
    root /var/www/html/wordpress;
    index index.php;
    include global/restrictions.conf;
    # Additional rules go here.
    # Only include one of the files below.
    include global/wordpress.conf;
    # include global/wordpress-ms-subdir.conf;
    # include global/wordpress-ms-subdomain.conf;
}
END
File restrictions.conf:
[ALL]# mkdir -p /etc/nginx/global
[ALL]# cat > /etc/nginx/global/restrictions.conf <<"END"
# Global restrictions configuration file.
# Designed to be included in any server {} block.
location = /favicon.ico {
    log_not_found off;
    access_log off;
}
location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
}
# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~ /\. {
    deny all;
}
# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
}
END
File wordpress.conf:
[ALL]# cat > /etc/nginx/global/wordpress.conf <<"END"
# WordPress single blog rules.
# Designed to be included in any server {} block.
# This order might seem weird - this is attempted to match last if rules below fail.
# https://wiki.nginx.org/HttpCoreModule
location / {
    try_files $uri $uri/ /index.php?$args;
}
# Add trailing slash to */wp-admin requests.
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
# Directives to send expires headers and turn off 404 error logging.
location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
    access_log off;
    log_not_found off;
    expires max;
}
# Uncomment one of the lines below for the appropriate caching plugin (if used).
#include global/wordpress-wp-super-cache.conf;
#include global/wordpress-w3-total-cache.conf;
# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    if (!-f $document_root$fastcgi_script_name) {
        return 404;
    }
    # This is a robust solution for path info security issue and works with "cgi.fix_pathinfo = 1" in /etc/php.ini (default)
    include fastcgi_params;
    fastcgi_index index.php;
    # fastcgi_intercept_errors on;
    fastcgi_pass php;
}
END
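Before enabling and starting the service, it doesn't hurt to let nginx validate the whole configuration, including the files under /etc/nginx/global (an optional check, not part of the original steps):
[ALL]# nginx -t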
After that we can enable and start nginx:
[ALL]# systemctl enable nginx
ln -s '/usr/lib/systemd/system/nginx.service' '/etc/systemd/system/multi-user.target.wants/nginx.service'
[ALL]# systemctl start nginx
Install and configure HAProxy
We will use HAProxy as a proxy for both nginx and mysql.
[ALL]# yum -y install haproxy
[ALL]# cat > /etc/haproxy/haproxy.cfg <<"END"
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen stats :9090
    mode http
    stats enable
    stats realm Haproxy
    stats uri /
    stats auth wpha:wphahapr0

frontend main *:80
    default_backend app

backend app
    balance roundrobin
    option forwardfor
    option httpchk GET /hacheck.php HTTP/1.1\r\nHost:wp.ha.rs
    http-check expect string OK
    server mns-wp-1 87.237.205.1:8080 check
    server mns-wp-2 87.237.205.2:8080 check
    server mns-wp-3 87.237.205.3:8080 check

listen mysql-cluster 0.0.0.0:3307
    mode tcp
    balance roundrobin
    option httpchk
    server mns-wp-1 87.237.205.1:3306 check port 9200 inter 12000 rise 3 fall 3
    server mns-wp-2 87.237.205.2:3306 check port 9200 inter 12000 rise 3 fall 3
    server mns-wp-3 87.237.205.3:3306 check port 9200 inter 12000 rise 3 fall 3
END
Set up the haproxy check for nginx and php-fpm:
[mns-wp-1 ~]# echo -n '<?php echo "OK";?>' > /var/www/html/wordpress/hacheck.php
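You can simulate the HAProxy health check by hand against the local nginx backend; it should print just OK (an optional check, using the same Host header haproxy will send):
[ALL]# curl -s -H 'Host: wp.ha.rs' http://127.0.0.1:8080/hacheck.php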
Set up the haproxy check for the database:
[ALL]# echo 'mysqlchk 9200/tcp # mysqlchk' >> /etc/services
[ALL]# yum -y install xinetd
[ALL]# cat > /etc/xinetd.d/mysqlchk <<"END"
# default: on
# description: mysqlchk
service mysqlchk
{
    disable        = no
    flags          = REUSE
    socket_type    = stream
    port           = 9200
    wait           = no
    user           = nobody
    server         = /usr/bin/clustercheck
    log_on_failure += USERID
    only_from      = 87.237.205.1 87.237.205.2 87.237.205.3
    per_source     = UNLIMITED
}
END
Enable and start xinetd:
[ALL]# systemctl enable xinetd
[ALL]# systemctl start xinetd
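To verify that xinetd serves the cluster check, you can query port 9200 from one of the other servers (querying from localhost would be rejected by the only_from list above; this is an optional check). It should return the same "Percona XtraDB Cluster Node is synced." response shown earlier:
[mns-wp-1 ~]# curl -s http://87.237.205.2:9200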
Set up WordPress to connect to HAProxy:
[mns-wp-1 ~]# sed -i "s/'DB_HOST', 'localhost'/'DB_HOST', '127.0.0.1:3307'/" /var/www/html/wordpress/wp-config.php
Enable and start haproxy:
[ALL]# systemctl enable haproxy
[ALL]# systemctl start haproxy
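At this point you can test the mysql side of the proxy with the wpuser credentials created earlier (an optional check); running it a few times should return a different node name each time, showing the round-robin balancing:
[ALL]# mysql -h 127.0.0.1 -P 3307 -u wpuser -p'mojavrloduGcka_iKOmplik0v4na_sifra' -e 'SELECT @@hostname;'
The HAProxy statistics page is also available on port 9090, protected by the stats auth credentials from haproxy.cfg.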
Install and activate W3 Total Cache
[mns-wp-1 /var/www/html/wordpress]# wp plugin install https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
Downloading install package from https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip...
Unpacking the package...
Installing the plugin...
Plugin installed successfully.
[mns-wp-1 /var/www/html/wordpress]# wp plugin list
+----------------+----------+-----------+---------+
| name           | status   | update    | version |
+----------------+----------+-----------+---------+
| akismet        | inactive | available | 3.0.4   |
| hello          | inactive | none      | 1.6     |
| w3-total-cache | inactive | none      | 0.9.4.1 |
+----------------+----------+-----------+---------+
[mns-wp-1 /var/www/html/wordpress]# wp plugin activate w3-total-cache
Success: Plugin 'w3-total-cache' activated.
Set up W3 Total Cache in the standard way, using all three memcached servers. Also configure all three memcached servers for the Object cache.
Finished! Happy HAWPing! 🙂