Contents
- 1 Background of the project
- 2 Thinking about database synchronization solution
- 3 Deploy Syncthing to synchronize wordpress folders
- 4 Deploy and initialize mariadb
- 5 MariaDB database synchronization
- 6 Deploy WordPress on other nodes
- 7 Use inotify to monitor folder changes and execute scripts
- 8 Verify the actual effect of the solution
- 9 Summary
Background of the project
I have been thinking about how to "use the free tunnel of Cloudflare's free plan to achieve multi-node WordPress local access and disaster recovery solutions" these days. However, there is a big prerequisite for implementing this solution: the content of the WordPress sites on these nodes is the same. In other words, the data of the multi-node WordPress must be synchronized.
Looking back, my current multi-node wordpress data synchronization still relies on the WPvivid backup plugin on the main wordpress site periodically generating a complete backup of the whole wordpress installation (program files + database), which syncthing then syncs in real time into the WPvivid backup plugin directory of the other wordpress sites; finally, when needed, I restore it manually with "one-click restore"... pretty low-tech, right? And I never actually scheduled any restores (mainly out of laziness, and it depended on my mood at the time).
All in all, this simple synchronization solution no longer suits my current setup and is in urgent need of an upgrade.
Thinking about database synchronization solution
With Syncthing as one of my underlying supporting technologies, the synchronization of WordPress program data itself is no longer a problem. The key lies in what solution should be used for the synchronization of database content corresponding to multi-node WordPress. Generally speaking, there are three options:
1. Directly use the same mariadb database
If the purpose of multiple sites is just redundancy: by default, blog access requests only access the primary site, and only when the primary site fails will the backup site be switched, and only when the backup site fails will the disaster recovery site be switched. Then there is no problem with using one database, because there is only one WordPress site reading and writing at the same time. However, in this case, if there is a problem with the only database, the whole thing will collapse, and if my home data center loses network or power, the database will also be lost. A single disaster recovery site with only one WordPress is useless without a database, so I just pass it.
2. Use database master-slave synchronization
Considering that both the main WordPress site and the backup site are in the home data center, you can consider connecting both the main and backup WordPress sites to the same mariadb database in the home data center. The disaster recovery site runs a separate mariadb database. The mariadb in the home data center is the master and the mariadb in the disaster recovery site is the slave. Configure master-slave synchronization. There will definitely be no problem (tailscale, as one of the underlying supporting technologies, has made an indelible contribution).
However, the problem with this approach is that the database inside the home data center still has a single point of failure; and the master-slave synchronization across the wide area network may cause master-slave replication failure if the delay is high; in addition, I only have data access once every few days or longer, and this kind of synchronization feels like using a cannon to kill a mosquito (and it is synchronized over a wide area network~), so I passed it.
3. Each WordPress site runs a mariadb database on the local machine
This method is actually the current method, which forms database redundancy inside the home data center: the main site and the backup site are deployed on macmini and intermini respectively, and each node has wordpress and mariadb databases, and the wordpress of each node reads and writes the local mariadb database. wordpress realizes primary and backup redundancy through load balancing, and the two mariadb databases are independent (of course, mariadb database can also be load balanced, but I am too lazy to bother with it, so it doesn't make much sense). At the same time, wordpress and mariadb databases are also deployed at the disaster recovery site, so that when problems occur in the home data center, the normal operation of wordpress at the disaster recovery site will not be affected (the disaster recovery site accesses the local database, which has nothing to do with the database in the home data center). The only problem is how to ensure the instant update of all node databases (I didn't think of a good method before, so I could only rely on the WPvivid backup plug-in).
After working out chevereto's multi-node scheduled synchronization solution (see the article: Tutorial on how to synchronize multiple nodes of Chevereto image bed in home data center series), the last gap was filled, and wordpress synchronization can follow the same idea: on the main wordpress site, use syncthing to sync both the wordpress program data and an exported wordpress.sql file to the other nodes, and then each of those nodes imports wordpress.sql into its own mariadb database (in theory, the same method works no matter how many nodes there are).
The problem of timely updates is actually very easy to solve. I’ll keep you in suspense and talk about it later. But before that, I need to re-plan the wordpress folder to be synchronized in the syncthing solution.
Deploy Syncthing to synchronize wordpress folders
Previously, the only wordpress folder synchronized by Syncthing was the WPvivid backup plugin directory (mainly to sync the backup files that the main site's WPvivid plugin saved locally into the WPvivid plugin directory of the other nodes, for one-click recovery when needed). Now, we change it to synchronize the entire "html" directory that the main wordpress site mounts into docker with the -v parameter:
For the configuration details of syncthing, please refer to the previous article: Docker series: A detailed tutorial on how to synchronize multiple folders using Docker based on syncthing; I will not repeat it here. For the subsequent experiments, I added an aliyun node to Syncthing this time. The docker run command is as follows:
docker run --name syncthing-aliyun -d --restart=always --net=public-net \
  --hostname=syncthing-aliyun \
  -e TZ=Asia/Shanghai \
  -p 8384:8384 \
  -p 22000:22000/tcp \
  -p 22000:22000/udp \
  -p 21027:21027/udp \
  -v /docker/syncthing/config:/config \
  -v /docker/wordpress:/data1 \
  lscr.io/linuxserver/syncthing:latest
Use the tailscale address to add the node on the synchronization source syncthing-macmini, and complete the synchronization settings for the wordpress folder on all nodes.
1. Effect on syncthing-macmini:
2. Effect on syncthing-intermini:
3. Effect on syncthing-tc:
4. Effect on syncthing-aliyun:
As long as this state is reached, the subsequent synchronization of syncthing will basically not have any problems, and the most troublesome step is finally completed.
Four points to note:
1. For the host directory mapped into the container, such as /docker/wordpress in the command above, be sure to change the permissions of the wordpress folder to 777 on each node's host machine, otherwise synchronization will fail.
2. If this error occurs:
You can fix it by manually creating a folder named ".stfolder" inside the wordpress folder:
3. If the wordpress folder status on the other nodes is not "Up to Date":
but a red "Out of Sync", first look for the cause of the failure:
The tricks that ultimately work are: changing permissions; removing the folder on the out-of-sync syncthing node and deleting its contents; restarting the syncthing container; then re-creating the ".stfolder" folder and re-sharing from the main host, going back and forth like this until every status is normal.
4. For all nodes other than the main wordpress node, it is recommended to empty the wordpress folder first (if it already has files), create the ".stfolder" folder, and only then add the shared folder from the main node and start synchronization.
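The four notes above can be condensed into a short preparation snippet to run on each non-main node before sharing the folder from the main site. This is only a sketch: it uses a demo path ./wordpress here, whereas on a real node it would be the host directory mapped into the syncthing container, e.g. /docker/wordpress.

```shell
#!/bin/sh
# Demo path; on a real node this would be the host directory mapped
# into the syncthing container, e.g. /docker/wordpress
WP_DIR=./wordpress

mkdir -p "$WP_DIR"
# Note 4: non-main nodes should start from an empty folder
rm -rf "$WP_DIR"/*
# Note 2: create the marker folder syncthing expects
mkdir -p "$WP_DIR/.stfolder"
# Note 1: open up permissions so synchronization does not fail
chmod -R 777 "$WP_DIR"
echo "prepared $WP_DIR"
```

Running this once per non-main node before adding the shared folder avoids the permission and ".stfolder" problems in one pass.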
Note: syncthing may throw other errors too. You can search online; there are many articles about it, and I certainly cannot cover every problem here.
Deploy and initialize mariadb
Next comes the database. Since the "wp-config.php" configuration files on all wordpress nodes are identical, for example:
Therefore, we need to ensure that WordPress on each node accesses its mariadb database the same way (in this article, all nodes access it as mariadb01). The usual approach is to place the mariadb container and the wordpress container in the same non-default bridge network when creating them (the same --net=public-net shown in the syncthing command for the new Alibaba Cloud node above), and to make sure the mariadb container name on every node matches the one set in "wp-config.php"; in my case it is mariadb01 everywhere:
The creation command of mariadb01 is as follows:
docker run --name=mariadb01 -d --restart=always --net=public-net \
  -p 3306:3306 \
  -v /docker/mariadb/db:/var/lib/mysql \
  -e MARIADB_ROOT_PASSWORD=yourpassword \
  mariadb:10.11
After creating the mariadb database on each node, you still need to initialize it: create a wordpress library plus the account and password for that library, and grant the account access rights to the wordpress library (for the initialization steps, see the article: Tips and tricks: Create a new empty database and grant permissions to corresponding users; of course, you can also initialize with whatever method you are familiar with, such as the command line or phpmyadmin).
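As a concrete sketch of that initialization, the statements below create the wordpress library, a user, and the grant. The database name, user name, and passwords are placeholders (match them to your wp-config.php); the SQL is written to a local file, which would then be fed into the container, e.g. with `docker exec -i mariadb01 mysql -uroot -pyourpassword < init_wordpress.sql`.

```shell
#!/bin/sh
# Placeholder names/passwords; match them to your wp-config.php
cat > init_wordpress.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS wordpress DEFAULT CHARACTER SET utf8mb4;
CREATE USER IF NOT EXISTS 'wpuser'@'%' IDENTIFIED BY 'wp_password';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'%';
FLUSH PRIVILEGES;
SQL
echo "wrote init_wordpress.sql"
```

Repeat this once on every node so each local mariadb starts from the same empty, correctly-granted wordpress library.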
MariaDB database synchronization
Export the main site WordPress database
After the experience with Chevereto database synchronization, I use the same method: export the wordpress library from the mariadb database on the main wordpress site into the newly created db directory inside the wordpress synchronization folder configured in syncthing:
Create a script "export_wordpress_database.sh" to perform the export; its contents are as follows:
#!/bin/sh
# Remote ssh login does not load environment variables by default; load them
# explicitly so this script can be run over ssh (not needed for local runs)
source ~/.bash_profile
# Export the wordpress library into the wordpress/db folder that is already
# configured for syncthing synchronization
docker exec -u root mariadb01 mysqldump -uroot -pyourpassword --databases wordpress > /Volumes/data/docker/wordpress/db/wordpress.sql
# Done
echo 'end'
Don't forget to give execute permissions to the script:
chmod +x export_wordpress_database.sh # needs to enter the correct path to execute
Then place the script anywhere you like. I created a "startup" directory under my macos user home directory; I keep all scripts that need to be run remotely over ssh there for easier management, as shown below:
Now you only need to run this script to generate a full backup file, wordpress.sql, of the wordpress library in the wordpress/db directory on the main wordpress site. As long as the syncthing synchronization directory configured earlier works normally, the wordpress.sql file is quickly synchronized to the db directory of every other wordpress site:
Main wordpress site (macmini):
Standby wordpress site (intermini):
Disaster recovery WordPress site (Tencent Cloud Server):
Note: Syncthing has a default rescan interval (3600 seconds). The time to complete data synchronization varies with the size of the files and the connection speed between Syncthing nodes. My wordpress database is a few tens of megabytes, and it takes at most 1-2 minutes for all nodes to finish syncing. However, that is the time after a scan discovers the changes; when the scan actually starts is down to luck, and the only guarantee is that it happens within 3600 seconds.
Import the wordpress library in mariadb of other nodes
This again relies on a shell script. On every wordpress node except the main site, create a script "wordpress_import.sh" inside mariadb's mounted folder, with the following content:
#!/bin/sh
mysql -uroot -pp@ssw0rd < /var/lib/mysql/wordpress.sql
# Done
echo 'end'
This script needs to run inside the mariadb container to import the wordpress.sql file (synchronized from the main wordpress site into /var/lib/mysql/) into the local mariadb.
Note 1: Why is there no database name to the left of the < redirection? Because the wordpress.sql file exported earlier with mysqldump's --databases wordpress parameter already contains statements such as CREATE DATABASE wordpress, as follows:
So there is no need to specify one. Also, don't forget to give the script execute permissions:
chmod +x wordpress_import.sh # needs to enter the correct path to execute
Note 2: Why create the script "wordpress_import.sh" inside mariadb's mounted folder? Because a command running inside the mariadb container cannot directly read files on the host filesystem, so a small workaround is needed.
Then create a shell script "import_wordpress_database.sh" with the following content:
#!/bin/sh
# Remote ssh login does not load environment variables by default; load them
# explicitly so this script can be run over ssh (not needed for local runs).
# Note the filename differs per system: macos uses .bash_profile, debian uses
# .bashrc; adjust to your actual environment
source ~/.bashrc
# Drop the wordpress library outright. Deleting tables one by one is too much
# trouble, and a version change might add new tables. (It may not actually be
# necessary to drop it first.)
docker exec -u root mariadb01 mysql -uroot -pyourpassword -e "drop database if exists wordpress;"
# Copy the synchronized wordpress.sql into mariadb's mounted directory; this is
# the workaround mentioned above
cp /docker/wordpress/db/wordpress.sql /docker/mariadb/db/01/wordpress.sql
# Import the wordpress.sql exported from the main wordpress site into the local mariadb
docker exec -u root mariadb01 bash /var/lib/mysql/wordpress_import.sh
# Done
echo 'end'
Put the script in the path you are used to. I put it directly in the /root/script directory.
After completing the above operations, simply running the script /root/import_wordpress_database.sh accomplishes the following:
Step 1: drop the wordpress library in this node's mariadb database (it seems this step could be omitted, but it is cleaner this way);
Step 2: copy the latest wordpress.sql from /docker/wordpress/db into the mariadb container's mount directory;
Step 3: import wordpress.sql inside the mariadb container.
Deploy WordPress on other nodes
After the previous steps, the wordpress program files and databases on all nodes other than the main wordpress node are now synchronized. If those nodes already have WordPress deployed, they can be accessed directly. Because my Alibaba Cloud node is newly added, I still need to deploy the WordPress container there, using the following command:
# Mind the mounted wordpress folder path; do not get it wrong
docker run --name=wordpress -d --restart=always --net=public-net \
  -p 80:80 \
  -v /docker/wordpress/html:/var/www/html \
  wordpress:latest
Then you can access it directly.
Note: By default, WordPress restricts the access domain name. If you want to remove this restriction, see my other article: A simple tutorial on how to set up WordPress to support multiple domain name access.
Use inotify to monitor folder changes and execute scripts
Process Logic
Actually, all the steps so far are manual operations; at best they count as a wordpress multi-node synchronization solution, which has nothing to do with the "nearly" real-time synchronization in my title. True enough, so here comes the key part: inotify.
inotify is a Linux tool for monitoring folder changes: it can watch for changes to a folder's contents and then execute a specified script, which ties right in with the operations above. When I run the export script "export_wordpress_database.sh" on the main wordpress site, a new wordpress.sql is generated in the main site's "wordpress/db" directory; syncthing detects the new wordpress.sql and syncs it into the db directory on all the other wordpress nodes; inotify running on those nodes watches the local "wordpress/db" folder and, on detecting a change, executes the local "import_wordpress_database.sh" script to import the wordpress library. Then the wordpress on every node is in sync. Perfect!
Install and deploy inotify
apt update
apt install inotify-tools
After installation, two commands are available: inotifywait and inotifywatch. inotifywait is the one we want, for monitoring changes in the wordpress/db folder; inotifywatch counts file system accesses and is not used here.
We need to write a script "monitor.sh" with the following content:
#!/bin/bash
watchdir="/docker/wordpress/db"              # directory to monitor; here the db directory
script="/root/import_wordpress_database.sh"  # script to execute on changes
while
  inotifywait -e modify $watchdir  # -e modify watches for files being modified under $watchdir;
                                   # adding -o modifylog would also write the detected events to a log file
do
  timestamp=$(date)
  sleep 1m  # wait 1 minute, because syncthing needs some time to finish syncing the file;
            # without the wait, file changes would keep being detected mid-transfer
  bash $script  # run the script that imports the wordpress.sql file
done
Grant execute permissions:
chmod +x monitor.sh
Then add monitor.sh to the startup items (see my other article: 3 common ways to set up commands or scripts for Debian series to start at boot).
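As one possible shape for that startup entry (a sketch only: the unit name wp-monitor is my own invention, and on the real node the file would go to /etc/systemd/system/, followed by `systemctl daemon-reload && systemctl enable --now wp-monitor`), a minimal systemd unit could look like this, written here to a local demo path:

```shell
#!/bin/sh
# Written to a demo path; install to /etc/systemd/system/ on the real node
cat > wp-monitor.service <<'EOF'
[Unit]
Description=Watch /docker/wordpress/db and import wordpress.sql on changes
After=network-online.target docker.service

[Service]
ExecStart=/bin/bash /root/monitor.sh
Restart=always

[Install]
WantedBy=multi-user.target
EOF
echo "wrote wp-monitor.service"
```

Restart=always also restarts the watcher if inotifywait ever dies, which a plain rc.local entry would not do.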
Finally, make sure monitor.sh is running properly on all the other wordpress nodes:
Verify the actual effect of the solution
1. Run the script "export_wordpress_database.sh" on the main wordpress node
Go to the export directory to confirm:
2. Verify whether syncthing is synchronized successfully
Since syncthing's default rescan interval is 1 hour and I didn't want to wait that long, I clicked rescan manually (note the time I clicked: 08:06):
What did I do between the export at 08:01 and 08:06? My little bear egg cooker beeped to tell me the eggs were done, so I went to have breakfast first.
In fact, if syncthing is running normally at this point, synchronization completes almost instantly (mainly because the wordpress.sql file is not large, only 38 MB). In the terminal window running inotifywait, you can see the folder modify event information. Take the Alibaba Cloud node as an example:
Then we go to other nodes in turn to confirm, first confirming whether the syncthing synchronization is successful.
Alternate site:
Tencent disaster recovery site:
Alibaba disaster recovery site:
It can be seen that the synchronization of syncthing was successful.
Finally, let's confirm whether inotifywait successfully detected the change to the wordpress.sql file under the /docker/wordpress/db folder and executed the script "import_wordpress_database.sh". Going by what the script does:
We then check, on each node in turn, the modification time of the file /docker/mariadb/db/01/wordpress.sql (strictly speaking, you should verify the last command, docker exec -u root mariadb01 bash /var/lib/mysql/wordpress_import.sh, but I can't think of a good way to prove that result, and it doesn't seem authoritative enough, so I'll look at the generation time of wordpress.sql from the previous step instead).
Standby Node:
Tencent Node:
Alibaba Node:
It happened roughly within one minute after I clicked the "rescan" button for the syncthing wordpress folder (at 08:06), which proves that inotifywait really can monitor changes to the /docker/wordpress/db folder and execute the specified script, import_wordpress_database.sh, adding the last piece of the puzzle to my WordPress multi-node "semi-automatic" and "nearly" real-time synchronization solution.
Note 1: "semi-automatic" means that after I finish updating content on the main wordpress site, I have to manually run the database export script "export_wordpress_database.sh"; everything after that is fully automatic. To be fair, "semi-automatic" is a bit modest; it is more like "0.9 automatic", though still short of "fully automatic". Full automation is not impossible, but it would require digging deeper into how wordpress works, mainly how to tell when wordpress content (posts, notes, pages) has been updated, since plugins and wordpress itself can auto-update and therefore cannot serve as the trigger condition (one could start by detecting new content in the database; just thinking about it is exciting).
That is exactly where I started to waver, and I really didn't want to dig any deeper, so I simply trigger the subsequent flow by manually running the script that outputs the wordpress.sql file to the designated folder. This actually violates my principle of "charge at difficulties, and if there are none, create some and charge anyway", but it is genuinely exhausting and hurts my brain cells; I'll pick up the research again when I'm in the mood.
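For what it's worth, the "detect database changes" idea mused about above could be approximated without studying wordpress internals at all: export on a schedule, but only publish the dump into the synced folder when its checksum differs from the previous export. The sketch below demonstrates the comparison logic on a stand-in file (new.sql and last.md5 are hypothetical names; on the real node new.sql would be a freshly exported wordpress.sql):

```shell
#!/bin/sh
# Stand-in for a freshly exported dump
echo "INSERT INTO wp_posts VALUES (1);" > new.sql

# Checksum from the previous run (empty on the first run)
prev=$(cat last.md5 2>/dev/null || true)
curr=$(md5sum new.sql | awk '{print $1}')

if [ "$curr" != "$prev" ]; then
  echo "$curr" > last.md5
  # On the real node: copy new.sql into the syncthing-synced wordpress/db folder here
  echo "changed"
else
  echo "unchanged"
fi
```

Run from cron, this would keep the "modify" events seen by the other nodes down to the runs where the database content actually changed.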
One more note: after some time in actual use, it turned out that syncing wordpress's entire html directory does not work well after all. Mainly because of main-node version upgrades or plugin updates, syncing the whole html directory to other nodes often triggers wordpress "fatal errors". The best approach is to sync only the directory containing wordpress.sql; then everything reduces to a plain database export, file sync, and import, which causes no problems.
Note 2: it is "nearly" real-time because syncthing's default full-scan interval is 1 hour (somewhat down to luck: sometimes the scan picks changes up quickly, sometimes nothing happens for ages, which is why I clicked the "rescan" button manually during the experiment). One hour is entirely acceptable for my needs; this is just a personal blog, not some enterprise-grade application, so I haven't bothered to change it. If you really need a tighter standard, just reduce syncthing's "Full Rescan Interval":
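The interval lives as the rescanIntervalS attribute on each folder element in syncthing's config.xml (mapped to /docker/syncthing/config in the docker command earlier); it can also be changed per folder in the web UI. Below is a rough sed-based sketch, demonstrated on a sample fragment rather than a live config (on a real node, stop the container and keep a backup before editing the file):

```shell
#!/bin/sh
# Sample fragment standing in for /docker/syncthing/config/config.xml
CONFIG=config-sample.xml
cat > "$CONFIG" <<'XML'
<folder id="data1" path="/data1" rescanIntervalS="3600"></folder>
XML

# Lower the full rescan interval from 3600s to 60s
sed -i 's/rescanIntervalS="3600"/rescanIntervalS="60"/' "$CONFIG"
grep rescanIntervalS "$CONFIG"
```

Using the web UI (Folder, Edit, Advanced, Full Rescan Interval) is the safer route, since syncthing rewrites config.xml itself while running.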
Summary
Honestly, if all you want is to sync published articles between multiple wordpress sites, there are simpler methods: plugins, for instance, and more than one of them. They do require some setup and have their flaws in use, but compared with my method they really are much simpler. And my old method of "WPvivid scheduled backups plus syncthing sync plus manual restore when needed" (such a long qualifier still feels rather low-tech) wasn't unusable either.
But for me, constantly trying, constantly learning new technologies (this article, for example, forced me to study shell scripting), and constantly optimizing solutions, even if each round brings only a little improvement and a little new knowledge, may add up day by day until one day I become a master of the art! (With the Sunflower Manual in hand, the world is mine, hahahaha; sounds familiar, right?)
Of course, as stated at the beginning, this article only counts as research into the prerequisite technology. The ultimate goal is still: "using the free tunnel of Cloudflare's free plan to achieve multi-node wordpress nearest-node access and disaster recovery". I don't know yet whether it will work out, but let's research it and see.