Preface
I have deployed the main and backup WordPress sites in my home data center, plus a mirror site on a cloud server, and implemented real-time backup and fast recovery of the WordPress configuration across all of them through scheduled backups with the WPvivid backup plug-in and the scheduled synchronization feature of Syncthing (see the articles: Docker series: A detailed tutorial on how to synchronize multiple folders using Docker based on syncthing and Docker series uses Docker to set up a blog slave site based on WordPress and implement regular backup of master-slave site configuration). However, my home broadband had problems a few days ago (not a natural failure; that fight is still going on, and I will talk about it once there is a result), and after I switched WordPress over to the mirror site, I found that the image host was still at home and therefore unreachable while the network was down. So achieving redundancy for WordPress alone is not enough: the image host needs redundancy too. I therefore spent some time building redundancy for the Chevereto image host on the mirror site, after which the blog was fully restored.
Therefore, I want to bring Chevereto into the multi-point synchronization plan and give it the same topology as the WordPress site: a main image host, a backup image host, and a mirror-site image host, where any data added on the main image host is synchronized to the backup and mirror hosts as well (honestly, it all comes down to laziness: I want to do it once and be done with it). Hence this article.
Chevereto consists of the program itself plus its corresponding database (a "schema") in the database server: the images are stored in the data/images directory of the program, and the rest of the state lives in the database that Chevereto uses. So to synchronize the data, both the Chevereto program data and the database must be synchronized, which splits the work into two steps: synchronizing Chevereto's own data (mainly the images) and synchronizing its corresponding database.
Synchronization of Chevereto's own data
Note: This step uses Syncthing to synchronize multiple Chevereto-related folders. For the detailed configuration of Syncthing multi-point synchronization, please refer to my other article: Docker series: A detailed tutorial on how to synchronize multiple folders using Docker based on syncthing; this article will not go into that configuration process in depth, mainly because there is simply too much of it.
This step varies depending on how Chevereto is deployed (from source or with Docker), but the core idea is the same: add the Chevereto directory as a synchronized folder in Syncthing, then let Syncthing synchronize it between the Chevereto nodes. Because I deployed Chevereto with Docker, this article uses the files of that setup as the example.
Since the planning of my main, backup, and mirror image-host nodes overlaps with the WordPress planning, Syncthing uses the MacMini node as the main image-host node (the synchronization source) and synchronizes to the InterMini node (backup site) and the Tencent Cloud server (mirror site). Follow the steps below:
- Redeploy the syncthing-macmini container and use the -v parameter to map the folder that holds Chevereto on the macmini to the corresponding path inside Syncthing. Here I map it to the /data2 directory:
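What follows is only a minimal sketch of such a redeployment, assuming the linuxserver/syncthing image (whose convention is /data1-, /data2-style data mounts) and a hypothetical config path; substitute the image, ports, and paths from your own original deployment:

# hypothetical redeployment; the second -v maps the Chevereto folder to /data2
docker run -d \
  --name=syncthing-macmini \
  -e PUID=1000 -e PGID=1000 \
  -p 8384:8384 -p 22000:22000/tcp -p 22000:22000/udp -p 21027:21027/udp \
  -v /Volumes/data/docker/syncthing/config:/config \
  -v /Volumes/data/docker/chevereto:/data2 \
  --restart unless-stopped \
  lscr.io/linuxserver/syncthing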
Do the same for the syncthing-intermini and syncthing-tc nodes.
Note: At this point the Docker version of Chevereto has not yet been deployed on these two nodes. You can simply create the folder the container will later mount (make sure the folder is writable) so that the synchronization of Chevereto's data is solved first; Docker can be deployed afterwards. See the sketch after this note.
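On the two target nodes this can be as simple as the following, assuming the /docker/chevereto path used later in this article (the wide-open permissions are just the lazy option; tighten them to your taste):

# create the mount folder ahead of the container and make it writable
mkdir -p /docker/chevereto
chmod -R 777 /docker/chevereto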
- Add data2 as a synchronized folder on syncthing-macmini and share it with syncthing-intermini and syncthing-tc. Then accept the sharing request from syncthing-macmini on syncthing-intermini and syncthing-tc respectively and complete the addition. The final sharing state looks like this:
At this point, the data that the Docker version of Chevereto needs to mount into the container is successfully synchronized from the syncthing-macmini node to the other two nodes.
Note: Regarding the folder type, on syncthing-macmini select "Send Only", and on the remaining two nodes select "Receive Only", so that data only ever flows outward from the main node.
Scheduled export of the database corresponding to Chevereto
In my MariaDB, the database corresponding to Chevereto is simply named "chevereto":
There are three key questions here: where to export, how to export, and how often to export.
1. Where to export
This one is easy to solve: just use the directory that the Chevereto container mounts from the host. Thanks to the earlier Syncthing configuration, files in that directory are synchronized in real time, so the cleanest approach is to create a db subdirectory inside it and export the chevereto database there as a .sql file:
The final path is:
/Volumes/data/docker/chevereto/db  # /Volumes/data is the macOS path convention since Catalina; I won't go into details here. Adjust it to your actual environment.
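If the db subdirectory does not exist yet, create it first; a one-liner using the path above (adjust it for your environment):

mkdir -p /Volumes/data/docker/chevereto/db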
2. How to export
Exporting the chevereto database from MariaDB is very simple with the mysqldump command. Combined with the path from the previous section, the command looks like this:
docker exec -u root <your database container name> mysqldump -uroot -p<your database password> --databases <name of the chevereto database> > /Volumes/data/docker/chevereto/db/chevereto.sql
# The export path can be changed to suit your situation, as long as it stays inside the Chevereto synchronization directory configured earlier.
# Note that the username and password follow -u and -p directly, with no space in between.
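Filled in with the container name and root password that appear in my import script later in this article, the concrete command would be:

docker exec -u root mariadb01 mysqldump -uroot -pp@ssw0rd --databases chevereto > /Volumes/data/docker/chevereto/db/chevereto.sql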
3. How often should you export?
This depends on everyone's needs. In my case it just has to match the synchronization cycle of the WordPress automatic backups, so let's make it once a week. But how do you automate a weekly export? That again depends on your environment: macOS, for instance, has built-in ways to schedule tasks (launchctl and crontab; in theory launchctl is the better choice), but I didn't feel like fiddling with that. So, to be lazy, I simply put the command from the previous section into an executable script, "export_cheve_database.sh", in my macOS user directory, and then used the Pagoda panel's scheduled-task feature to log in over SSH periodically and run it:
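The scheduled task itself then only needs a single ssh line along these lines (user and address are placeholders for my setup):

ssh <macmini user>@<macmini address> '~/export_cheve_database.sh'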
Note: To run a script on a remote device via SSH like this, you must first configure public-key login from the device where the Pagoda panel is located to the macmini (please refer to my other article: Debian series configuration ssh public key login). Some details differ between systems; for example, macOS needs one extra step:
ssh-add ~/.ssh/id_rsa
The content of the "export_cheve_database.sh" script on the macmini is as follows:
#!/bin/sh
# Manually load the environment variables. This is very important: they are not loaded
# by default, and the environment-variable file name differs between systems, so take note.
source ~/.bash_profile
# Export the chevereto database. The export path is inside the configured Chevereto
# synchronization directory, so once exported, the dump is synchronized in real time
# to the corresponding directories on the Chevereto backup node and mirror node.
docker exec -u root <your database container name or id> mysqldump -uroot -p<yourpassword> --databases chevereto > /Volumes/data/docker/chevereto/db/chevereto.sql
# Signal completion
echo 'end'
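Don't forget to make the script executable, for example:

chmod +x ~/export_cheve_database.sh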
Scheduled import of the database on the synchronized Chevereto nodes
Database import on the Chevereto nodes
After the previous steps, the program data (the images) and the exported database file of Chevereto's main, backup, and mirror sites are kept in sync. The last step is to automatically import the synchronized database file on the synchronized Chevereto nodes (the backup site and the mirror site) on a schedule.
This step is actually the simplest: just deploy the import task on the backup site and the mirror site. It sounds simple, but there are a few tricks in the implementation. The logical flow is as follows:
1. Copy the synchronized chevereto.sql from the mount directory of the Chevereto container (/docker/chevereto/db) to the mount directory of the mariadb container (assuming it is /docker/mariadb/db); a one-line cp, shown just below, is enough.
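Using the example paths above, that is simply:

cp /docker/chevereto/db/chevereto.sql /docker/mariadb/db/chevereto.sql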
2. Create a script file "chevereto_import.sh" in the mariadb container's mount directory (/docker/mariadb/db) with the following content:
#!/bin/sh
mysql -uroot -pyourpassword chevereto < /var/lib/mysql/chevereto.sql
Note: /var/lib/mysql/ is the mount directory inside the mariadb container, corresponding to the /docker/mariadb/db directory on the host.
3. Give "chevereto_import.sh" execute permission:
chmod +x /var/lib/mysql/chevereto_import.sh
4. Run the following command on the host machine:
docker exec -u root <your database container name> /var/lib/mysql/chevereto_import.sh
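To check that the import actually worked, you can list the tables in the chevereto database (same placeholders as above):

docker exec -u root <your database container name> mysql -uroot -p<yourpassword> -e "SHOW TABLES;" chevereto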
Implementing automatic scheduled import
To run the above process automatically on a schedule, you only need to create a script on the host machine (don't forget to grant execute permission) with the following content:
#!/bin/sh
# Clear all existing tables in the chevereto database (otherwise the import will fail).
docker exec -u root mariadb01 mysql -u root -pp@ssw0rd -e "DROP TABLE IF EXISTS chv_albums, chv_confirmations, chv_deletions, chv_follows, chv_images, chv_importing, chv_imports, chv_ip_bans, chv_likes, chv_locks, chv_logins, chv_notifications, chv_pages, chv_queues, chv_redirects, chv_requests, chv_settings, chv_stats, chv_storage_apis, chv_storages, chv_users, chv_categories;" chevereto
# Copy the freshly synchronized dump into the mariadb container's mount directory.
cp /docker/chevereto/db/chevereto.sql /docker/mariadb/db/01/chevereto.sql
# Run the import script inside the container.
docker exec -u root mariadb01 /var/lib/mysql/chevereto_import.sh
Then run the script regularly.
Note: The first command clears the existing tables in the chevereto database. This is very important, otherwise the subsequent import will fail. Of course, you could also just drop the chevereto database entirely.
I won't go into details about scheduling scripts, since different systems do it differently. But I can stay lazy here too, because I have the Pagoda panel installed on both the backup site and the mirror site, so I again use its scheduled-task function:
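If you don't use a panel, a plain crontab entry on the backup or mirror host achieves the same; for example, a weekly run every Monday at 04:00 (the script path here is hypothetical):

0 4 * * 1 /root/chevereto_import_auto.sh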
That's it! Now you just need to create the Chevereto container on the backup site and the mirror site with the same command as on the main site (the key being the -v parameter mounting the correct synchronized folder), with identical database parameters on these nodes (that is, the same address or name, port, root password, and so on), and Chevereto will immediately work normally (for the detailed process of building the Chevereto image host, please refer to another article: Docker series uses Docker to build your own image bed based on Chevereto).
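Purely as an illustration of what such a command might look like, and not my actual main-site command, here is a sketch using the community nmtan/chevereto image; the image name, port, and the container-side images path are assumptions (adjust them to whatever image you used for the main site), and what matters is that the -v line points at the synchronized directory and the database parameters match the main site:

# hypothetical sketch; reuse the exact command from your main-site deployment
docker run -d \
  --name chevereto \
  -p 8080:80 \
  -e CHEVERETO_DB_HOST=<database address or container name> \
  -e CHEVERETO_DB_USERNAME=root \
  -e CHEVERETO_DB_PASSWORD=p@ssw0rd \
  -e CHEVERETO_DB_NAME=chevereto \
  -v /docker/chevereto/images:/var/www/html/images \
  nmtan/chevereto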
After all this fiddling, the image host now also has the "main site + backup site + mirror site" multi-level redundancy structure. Together with the identical multi-level WordPress redundancy I had already implemented, I no longer fear the two most painful problems of a home data center: the main site going down (the macmini giving up the ghost) and a power or network outage at home, because the cloud server holds mirror sites for both WordPress and the image host, and the image host's data is guaranteed to be at most 7 days old (the images themselves are synchronized automatically; the 7 days refers to the database import frequency, which can be adjusted as needed). Speaking professionally, I have now implemented hot standby of the applications inside the home data center plus a same-city disaster-recovery site. Measured against the "two locations, three centers" standard, I am still missing a "remote disaster recovery" site, which I plan to build with hexo + GitHub + Cloudflare Pages. That, however, only covers the text part of the blog, which, once hosted on Cloudflare Pages, is always online; remote disaster recovery for the image host is trickier. Let me think about it.
Afterword
To be honest, this article was harder than I expected. I thought it would be very simple, but I kept running into problems during implementation (too many prerequisite knowledge points). In the final analysis, my foundations are too weak and my knowledge too narrow: I had only a superficial understanding of many of the topics, and only when I actually needed them did I discover how much I was missing, so I had to calm down and study the details carefully.
Thinking about it this way, my earlier plan for WordPress multi-point data synchronization and quick recovery, which relies on the WPvivid backup plug-in and requires manual intervention, is rather clumsy. After this practice with Chevereto multi-point automatic synchronization, WordPress should be able to achieve the same effect. Once I finish the article on Google Ads, I will look into fully automatic multi-point synchronization for WordPress.
I am also researching similar things: using Cloudflare Pages to back up and restore the WP blog articles, using two S3-compatible buckets (R2 and B2) to back up the images (10 GB of free space is plenty if it only stores blog images), plus an "unlimited capacity" photo album + alist as a backup image server.
Yes, pretty much. I plan to use a hexo blog on Cloudflare Pages to back up the articles, ideally with the images redundantly hosted in R2. That would be the ultimate disaster recovery: at least the static blog can stay permanently online. It is the last line of defense.