Contents
- 1 Introduction
- 2 Implementation based on "cross-domain writing"
- 3 Implementation based on "reverse proxy write-back"
- 4 Implementation based on "Worker transfer"
- 5 Use the "worker transfer" method to achieve comment consistency on multiple active WordPress nodes
- 6 Add comment notification (optional)
- 7 Afterword
1 Introduction
When deploying a WordPress multi-active node solution, a crucial technical point is read-write separation: all operations that involve writing to the database (such as publishing articles, modifying content, changing plugin configuration, or submitting comments) are uniformly routed to the primary write node, while the other nodes serve as read-only replicas providing content display, cache services, and so on. The benefits of this architecture are obvious:
- It avoids data conflicts, synchronization delays, and consistency issues that may be caused by multi-point writing;
- At the same time, it also reduces the load pressure on the main database, allowing each node to perform its own functions and improving the scalability and stability of the overall system.
The core of read-write separation is accurately and reliably steering every front-end request that may trigger a write operation to the primary write node. Some write operations, such as administrators publishing articles, modifying content, or configuring plugins, happen in the admin environment and can simply be forced onto the primary write node, so they are relatively controllable. Other write behaviors are hard to intervene in manually, such as visitors commenting directly under an article: these requests originate on ordinary pages, and you cannot ask users to jump to a dedicated node to complete the operation, so they are uncontrollable.
Based on the above, in a WordPress multi-active node architecture, how to handle the write path of comment requests is one of the most challenging issues in implementing read-write separation, and it is also the key to the overall architecture design. The most direct early approach was to force all user comment requests to be written to the master node's database, ensuring consistency and centralized management of the data source. Common ways to achieve this include:
- Cross-domain writing solution: configure CORS headers in the master node's WordPress so that other read-only nodes can submit comments directly to the master node (e.g. comment.tangwudi.com) through JavaScript AJAX requests. This method is relatively simple to implement, but it runs into browser cross-origin restrictions: response headers such as Access-Control-Allow-Origin must be configured correctly, and cookie scope issues must be handled.
- Reverse proxy write-back solution: in the Web Server of each read-only node, reverse-proxy the user's POST requests to admin-ajax.php or wp-comments-post.php to the master node. This method is transparent to the user, requires no front-end code changes, and has no cross-origin issues, but in practice details such as session persistence and cache penetration need attention, otherwise problems such as lost authentication or CSRF verification failures can easily occur.
- Worker transfer solution: use an edge computing platform such as Cloudflare Workers to intercept comment requests and forward them to the master node. This method can flexibly centralize control of the backend write path without modifying client code, and it also shields cross-origin concerns and provides security isolation.
2 Implementation based on "cross-domain writing"
The cross-domain solution is not complicated to implement. Taking my blog (which uses the Argon theme) as an example, the original comment request is sent to "https://blog.tangwudi.com/admin-ajax.php"; with a slight modification at the PHP layer to change the request address to "https://comment.tangwudi.com/admin-ajax.php", comment traffic can be directed to the designated node for centralized writing. However, this solution comes with several potential pitfalls that deserve special attention:
- Browser same-origin policy restrictions
By default, browsers block web pages from making credentialed (cookie-carrying) requests to servers of a different origin (i.e. a different domain name, protocol, or port). This means that if a front-end page tries to send a comment request to the primary write node on another subdomain, it may be rejected by WordPress for lack of credentials, especially when the user is logged in or CSRF verification is required.
- CORS configuration is difficult to unify
To make cross-origin requests work, the target node (i.e. the primary write node) must set CORS headers correctly, such as Access-Control-Allow-Origin and Access-Control-Allow-Credentials. Although these response headers can technically be added through PHP or server configuration, in a multi-node environment, keeping the code uniform means every node has to carry the same configuration, even read-only nodes that never process write requests (see the sketch after this list for what both sides would have to agree on).
- Security introduces new risks
When cross-origin requests are allowed, improper settings easily create security risks such as API exposure, permission bypass, or CSRF attacks. For example, combining Access-Control-Allow-Origin: * with credentialed requests risks leaking session credentials (browsers refuse this combination precisely because it is so dangerous).
- Debugging and monitoring become complicated
Once a cross-domain request fails, the browser will often just throw a vague CORS error. The specific details are hidden in the network layer, making it difficult to locate and troubleshoot the problem. At the same time, log records will also be distributed across multiple nodes, increasing the difficulty of operation and maintenance.
- Compatibility and edge cases are unpredictable
Some cache layers (such as Cloudflare APO), front-end frameworks, or plug-ins may assume by default that all requests are from the same origin. Cross-domain solutions can easily trigger abnormal behaviors, such as CSRF token expiration, user status loss, function errors, etc.
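For concreteness, here is a minimal front-end sketch of what the cross-domain variant would involve. I did not deploy this: the endpoint follows the standard WordPress AJAX convention, the function name is mine, and the comment block lists the response headers the master node would have to return for the call to succeed:

```js
// Hedged sketch of a cross-domain comment submission (not what my blog runs).
// For this to succeed, comment.tangwudi.com would have to answer with:
//   Access-Control-Allow-Origin: https://blog.tangwudi.com   (exact origin, not *)
//   Access-Control-Allow-Credentials: true
async function submitCommentCrossDomain(form) {
  const body = new FormData(form); // action, comment, author, email, comment_post_ID, ...
  const res = await fetch("https://comment.tangwudi.com/wp-admin/admin-ajax.php", {
    method: "POST",
    body,
    credentials: "include" // send cookies so login state / CSRF checks can survive
  });
  if (!res.ok) throw new Error(`comment failed: ${res.status}`);
  return res.text();
}
```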
In summary, although the cross-domain solution is technically feasible, it is complex in terms of authentication, caching, security, and code uniformity. In a multi-node deployment scenario, unless you can strictly control cross-origin access behavior, this solution should be chosen with caution. In any case, I will not choose it: I can already tell it would mean a long round of tinkering.
Note: For those unfamiliar with the concept of "cross-domain", you can refer to the relevant description in my previous article "Home Data Center Series: A review of the core concepts of current Web development starting with the Ajax cross-domain issue (from a non-programmer's perspective)".
3 Implementation based on "reverse proxy write-back"
Another common approach is to implement write-back of comment requests through a "reverse proxy" at the Web Server layer. Specifically, you can set rules in the Web Server of each read-only node (such as Nginx, or the Nginx Proxy Manager built on it) to reverse-proxy specific POST requests for admin-ajax.php to the primary write node, for example https://comment.tangwudi.com/admin-ajax.php. From the browser's perspective, the request is still sent to the original domain name (such as https://blog.tangwudi.com/admin-ajax.php), but the Web Server actually takes over and completes the rerouting of the request.
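As a concrete illustration, here is a minimal Nginx sketch of such a rule. This is hypothetical: the host names follow this article's examples, directive details will vary with your TLS and header setup, and Nginx Proxy Manager users would express the same thing in a custom location block:

```nginx
# On each read-only node: hand the comment endpoint over to the primary write node.
location = /wp-admin/admin-ajax.php {
    proxy_pass https://comment.tangwudi.com/wp-admin/admin-ajax.php;
    proxy_set_header Host comment.tangwudi.com;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_ssl_server_name on;   # send SNI on the upstream TLS handshake
    proxy_cache off;            # never serve write requests from cache
}
```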
The advantage of this method is its simple structure and flexible deployment. It does not require the browser to support cross-domain access, nor does it require the introduction of additional middleware tools. As long as the reverse proxy is properly configured, the entire process is completely transparent to the front-end user and the WordPress plugin layer, and can even coexist with APO and CDN cache without involving complex Worker routing logic.
But it also has some detailed challenges, mainly concentrated in the following aspects:
- Cookies and session persistence: whether the proxied request still carries the user's login state depends on whether the Cookie domain is configured correctly and whether the primary write node recognizes those cookies;
- Cache interference: if the front end or CDN caches requests for admin-ajax.php, the proxied request may never reach the primary write node, so Cache-Control and related policies need to be set carefully;
- Troubleshooting difficulty: once a request is intercepted or rewritten at the proxy layer, debugging may be less intuitive than with Workers, especially when the UI layer shows no obvious error;
- HTTP-layer forwarding only: it cannot perform complex request rewriting, response reassembly, or identity pass-through, leaving less room for maneuver than a Worker.
In general, this method is closer to the idea of "traditional reverse proxy" and is suitable for users who have experience in Nginx management and want to complete the transformation at the existing Web Server layer. It is also a technical solution of "processing write requests at the edge of the node", with simple deployment and no intrusion on the main structure of WordPress. It is a reliable means to achieve read-write separation.
4 Implementation based on "Worker transfer"
This method completely sidesteps cross-origin issues, because from the browser's perspective the comment request is still made to the original domain name, such as "https://blog.tangwudi.com/admin-ajax.php". The Worker intercepts such requests at the edge and forwards them to the node actually responsible for the write operation, such as "https://comment.tangwudi.com/admin-ajax.php", thereby centralizing comment writes.
The biggest advantage of this method is that it is completely transparent to the front-end, without modifying the browser-side logic, nor does it require the client to support CORS or configure complex cross-domain headers. It also avoids exposing the address of the real write node and improves security.
Of course, there are some caveats to this approach, such as:
- Cloudflare Workers have upper limits on CPU execution time and response size. Although comment requests usually do not trigger limits, it is still important to keep an eye on them.
- The Worker proxy is HTTP-layer request forwarding and cannot handle advanced scenarios that require long-lived connections or state synchronization with the origin server (such as real-time WebSocket traffic or specific cookie Domain-attribute issues);
- If you have enabled APO (Automatic Platform Optimization), the Worker's processing may conflict with its cache strategy. You need to set the Worker's route matching rules specifically to avoid interfering with the page cache maintained by APO.
In general, Worker is a very elegant way to "solve cross-domain write problems outside the node". It is particularly suitable for WordPress multi-active deployment scenarios, which can orderly guide "user-generated data" to the only primary write node.
5 Use the "worker transfer" method to achieve comment consistency on multiple active WordPress nodes
5.1 Use workers to write comments only to the master node
This solution is the simplest and most traditional in logic, following the idea of "complete separation of read and write". You only need to ensure that comments are written uniformly to the WordPress master node on the Mac mini in the home data center (assuming it corresponds to the domain name "comment.tangwudi.com"); the rest of the synchronization work is completed automatically by the database change detection mechanism on the master node.
The specific process is as follows: a scheduled detection script running on the master node periodically checks whether the WordPress database in MariaDB has changed; once an update is detected, the current database is automatically exported as a wordpress.sql file. The exported SQL file is then synchronized to a designated directory on the Chicago WordPress slave node through whatever channel is convenient (scp, Syncthing, etc.). A detection mechanism running on the Chicago slave node (usually based on file monitoring such as inotify) notices the arrival of the new wordpress.sql file in that directory, automatically triggers the database import action, and imports its contents into the local MariaDB, completing WordPress data synchronization between master and slave.
I have written about the specific steps of creating a worker in many previous articles, so I will not repeat them here. Except for the correct configuration of the worker route, there is nothing else to set. Taking my blog as an example, the entry route of the worker is:
blog.tangwudi.com/wp-admin/admin-ajax.php*

Different WordPress themes use different technical methods to implement the comment function. Take the Argon theme as an example. It uses an asynchronous submission method based on AJAX, so the comment submission request will eventually be sent to the "/wp-admin/admin-ajax.php" interface of WordPress and processed by the background wp_ajax_nopriv_* hook. The advantage of this method is that users do not need to refresh the page after submitting comments, and the experience is smoother.
However, there are also many themes (especially some native and lightweight themes) that use the default comment processing mechanism of WordPress, which is to directly POST comments to wp-comments-post.php. Although this method is more traditional, it usually triggers a full page refresh after submission, and the user experience is slightly inferior. In addition, since these requests do not go through the AJAX pipeline, it is not convenient to capture the submission results or unify the error handling logic on the front end.
This difference requires special attention in actual architecture design, especially when deploying custom Workers, reverse proxies, or Webhook processing logic. If only one submission path is intercepted or rewritten while the other is ignored, comments on some themes may fail to be written. In multi-node deployment scenarios, it is therefore recommended to adapt to the comment submission entry points of the themes and comment plugins you actually use, so that every path is routed and processed correctly. Out of laziness, I will only handle the "admin-ajax.php" case here; after all, I have no way to verify "wp-comments-post.php" (though see the sketch below for what supporting both might look like).
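If your theme does post to wp-comments-post.php, a broader path check might look like the following minimal sketch. To be clear, the wp-comments-post.php branch is an untested assumption on my part:

```js
// Hedged sketch: route BOTH known comment endpoints to the write node.
// Only the admin-ajax.php path is verified on my Argon setup.
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    const writePaths = ["/wp-admin/admin-ajax.php", "/wp-comments-post.php"];
    if (!writePaths.some((p) => url.pathname.endsWith(p))) {
      return new Response("Not found", { status: 404 });
    }
    // Reuse the incoming path so each endpoint lands on its counterpart.
    const target = "https://comment.tangwudi.com" + url.pathname + url.search;
    return fetch(new Request(target, request));
  }
};
```

Note that with this variant the Worker would also need a second route covering blog.tangwudi.com/wp-comments-post.php*, since the route quoted above only matches the AJAX path.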
The specific worker code is as follows:
```js
export default {
  async fetch(request, env, ctx) {
    // Parse the requested URL
    const url = new URL(request.url);

    // If the request path does not end with /admin-ajax.php, return 404
    // and mark that the Worker was not hit
    if (!url.pathname.endsWith("/admin-ajax.php")) {
      return new Response("Not found", {
        status: 404,
        headers: {
          "content-type": "text/plain",
          "X-Worker-Hit": "no" // Custom header to help debug whether the Worker was hit
        }
      });
    }

    // Target proxy address, i.e. the node that actually handles the comment request
    const targetUrl = "https://comment.tangwudi.com/wp-admin/admin-ajax.php";

    // Clone the request object to avoid errors from the body being read more than once
    const reqClone = request.clone();

    // Clone the request headers and remove the Referer to avoid exposing the
    // original domain name or being blocked by a WAF check
    const newHeaders = new Headers(reqClone.headers);
    newHeaders.delete("referer");

    // Construct a new request object pointing at the proxy target address
    const proxiedRequest = new Request(targetUrl, {
      method: reqClone.method,
      headers: newHeaders,
      // GET and HEAD requests have no body; other methods forward the original body
      body: reqClone.method === "GET" || reqClone.method === "HEAD"
        ? null
        : await reqClone.arrayBuffer(),
      redirect: "manual" // Do not follow redirects; let the browser handle them
    });

    // Send the proxied request
    const response = await fetch(proxiedRequest);

    // Clone the response headers and add a custom marker
    const respHeaders = new Headers(response.headers);
    respHeaders.set("X-Worker-Hit", "yes"); // Confirms the Worker took effect

    // Return the proxied response, keeping the original body and status code
    return new Response(response.body, {
      status: response.status,
      statusText: response.statusText,
      headers: respHeaders
    });
  }
};
```
In the above Worker code, I added two small features to enhance the controllability and observability of proxied requests. They not only improve debugging efficiency, but also make the entire comment-request proxy chain more transparent and verifiable:
- Clear the original Referer: to avoid exposing the source page information (i.e. blog.tangwudi.com) to the target server, I explicitly delete the Referer field from the request headers in the Worker. This adds a degree of privacy protection and also prevents some server-side logic from rejecting requests due to a mismatched Referer (actually moot in my scenario, since I have removed the default domain-name access restrictions on all WordPress nodes), which matters especially when WordPress security plugins, a WAF, or certain CDN policies are in play.
- Add a custom response header X-Worker-Hit: To confirm whether the request actually passed through the Worker, I added the header "X-Worker-Hit: yes" to the response header. With this header, you can quickly determine whether the Worker is effective in the browser developer tools or debugging tools such as curl to avoid misjudgments caused by routing misses, cache bypasses, etc. When the path is not matched, the Worker will also return 404 with X-Worker-Hit: no, which makes it easier to determine whether the processing path is correct.
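A quick way to verify that header is to issue a request from the browser devtools console on the blog itself. A minimal sketch, assuming the Worker route is active and that a plain GET is acceptable for the check (WordPress typically answers an action-less GET to admin-ajax.php with a 400/"0", which is fine here since we only care about the header):

```js
// Run in the devtools console on blog.tangwudi.com:
const res = await fetch("/wp-admin/admin-ajax.php", { method: "GET" });
console.log(res.status, res.headers.get("X-Worker-Hit")); // "yes" if the Worker matched
```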
5.2 Implementation of writing comments on multiple nodes simultaneously (Advanced version)
5.2.1 Change "uncontrollable writing" to "controllable writing"
The previous section described the mechanism of "write comments to the master node + detect database changes + automatically export wordpress.sql and synchronize to the slave node". It is stable overall and easy to deploy. However, because it relies on timed detection and file transfer, it has an inherent synchronization delay in scenarios where comments demand high real-time visibility: a user may submit a comment successfully yet not see the update on the page. This is usually caused by differing back-to-origin nodes. If the request happens to be routed by Cloudflare Tunnel to the master node, the latest comment is visible; but in most cases it is routed to the Chicago slave node, which is geographically closer but not yet synchronized, creating the illusion that the comment succeeded but is invisible, hurting the user experience.
On top of the already-implemented "write comments to the master node", I went a step further and used a Cloudflare Worker to implement multi-point synchronous writing: at the comment submission stage, requests are sent to both the master node and the slave node, achieving write redundancy. This approach not only improves visibility but also fundamentally avoids relying on indirect means such as "master node change detection": by writing to multiple nodes simultaneously, the "uncontrollable" comments are forced to become "controllable".
5.2.2 Specific implementation of worker code
Assume that the master node corresponds to "comment1.tangwudi.com" and the Chicago slave node corresponds to "comment2.tangwudi.com". The specific worker code is as follows:
```js
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);

    // If the path is not /admin-ajax.php, return 404 (to prevent abuse)
    if (!url.pathname.endsWith("/admin-ajax.php")) {
      return new Response("Not found", {
        status: 404,
        headers: {
          "content-type": "text/plain",
          "X-Worker-Hit": "no" // Custom response header marking a miss
        }
      });
    }

    // Buffer the request body (to avoid multiple-read failures)
    const reqBody = request.method === "GET" || request.method === "HEAD"
      ? null
      : await request.arrayBuffer();

    // Copy the request headers and remove the Referer (some services verify it)
    const commonHeaders = new Headers(request.headers);
    commonHeaders.delete("referer");

    // Build the master node request (comment1: Mac mini master write node)
    const primaryRequest = new Request("https://comment1.tangwudi.com/wp-admin/admin-ajax.php", {
      method: request.method,
      headers: commonHeaders,
      body: reqBody,
      redirect: "manual"
    });

    // Build the slave node request (comment2: Chicago slave read node)
    const secondaryRequest = new Request("https://comment2.tangwudi.com/wp-admin/admin-ajax.php", {
      method: request.method,
      headers: commonHeaders,
      body: reqBody,
      redirect: "manual"
    });

    // Master node request: executed synchronously and returned as the final response
    const primaryResponse = await fetch(primaryRequest);

    // Slave node request: executed asynchronously; failure does not affect the main flow
    ctx.waitUntil(
      fetch(secondaryRequest).catch((err) => {
        console.log("Secondary write failed:", err);
      })
    );

    // Add a custom response header marking that the Worker was hit
    const respHeaders = new Headers(primaryResponse.headers);
    respHeaders.set("X-Worker-Hit", "yes");

    // Return the master node's response
    return new Response(primaryResponse.body, {
      status: primaryResponse.status,
      statusText: primaryResponse.statusText,
      headers: respHeaders
    });
  }
};
```
| Target | Implementation |
| --- | --- |
| The master node returns the response after a successful write | await fetch(primaryRequest) |
| The slave node writes asynchronously; failure does not affect the main flow | ctx.waitUntil(fetch(secondaryRequest)) |
| Keep the Worker responsive | All non-main-flow operations go through ctx.waitUntil() |
| Support double-writing of comments and simplify data synchronization | One submission syncs both nodes, with no follow-up detection mechanism needed |
To write comments to multiple WordPress nodes, the Worker script does not simply forward the comment request to a single node after receiving it: it constructs multiple concurrent HTTP requests and sends the same comment data to both the master and slave nodes. These requests are triggered independently and do not affect each other. Combined with the high availability of edge computing, this "multi-point concurrent writing" approach balances performance, reliability, and consistency, and greatly reduces the impact of master-node single-point failures or synchronization delays on the user experience.
Note: This method can support adding more nodes in the WordPress multi-active architecture, such as comment3.tangwudi.com, comment4.tangwudi.com, etc., with simple code modifications.
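A minimal sketch of that generalization (comment3/comment4 are placeholders that do not exist yet, and the helper name is my own): the first node in the list stays authoritative and answers the user, while the remaining writes fan out through ctx.waitUntil().

```js
// Hypothetical fan-out to N write targets; comment3/comment4 are placeholders.
const WRITE_NODES = [
  "https://comment1.tangwudi.com", // master: its response is returned to the user
  "https://comment2.tangwudi.com",
  "https://comment3.tangwudi.com",
  "https://comment4.tangwudi.com"
];

async function fanOutComment(request, ctx, body, headers) {
  const path = "/wp-admin/admin-ajax.php";
  const make = (base) => new Request(base + path, {
    method: request.method,
    headers,
    body, // a buffered ArrayBuffer is safe to reuse across Request objects
    redirect: "manual"
  });

  // Master first, synchronously; replicas are best-effort in the background.
  const primaryResponse = await fetch(make(WRITE_NODES[0]));
  for (const base of WRITE_NODES.slice(1)) {
    ctx.waitUntil(
      fetch(make(base)).catch((err) => console.log("write to", base, "failed:", err))
    );
  }
  return primaryResponse;
}
```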
Although writing WordPress comments to multiple nodes simultaneously through a Worker greatly improves the availability and consistency of the comment system under the multi-active architecture, it still cannot get around an underlying limitation of WordPress: a comment's review status and comment ID are only valid in the local database, and the master and slave nodes are not aware of each other.
This raises more than just the problem of "a first-time comment must be approved on the master node and then synchronized through the database". More importantly: if a visitor later tries to reply to a comment whose ID differs between nodes, errors such as "Cannot reply to unapproved comments" or "Reference ID does not exist" may appear. (Since which node serves APO cache fills or back-to-origin requests is uncontrollable, the comment IDs shown on a page may come from either the master or the slave. This pitfall troubled me for a long time, because at the time I did not fully understand the "auto-increment" logic of relational databases.)
The reason is simple: WordPress's comment-reply logic relies on the unique ID of an existing comment, and this ID is generated independently in each node's database. If master and slave are out of sync, ID allocation on the nodes can drift apart, with the result that a reply succeeds on one node but fails with an error on another.
To solve this problem, I adopted a relatively safe strategy: all comment review and replies are carried out uniformly on the master node, and once the operation is complete, a database synchronization to the slave node is triggered manually. This ensures that comment data and status remain consistent across all nodes, avoids interaction errors, and delivers a "fast and stable" comment-writing experience under the multi-active architecture.
Although exporting wordpress.sql every time you approve or reply to a comment feels a bit like "moving the entire database to synchronize one comment", given the currently modest comment volume this "lightweight manual synchronization" is still cost-effective: clearly more pragmatic and transparent than rebuilding the entire comment system or introducing an external platform.
Of course, this is only a temporary solution. After all, the number of comments is not large now. If we really grow bigger and stronger in the future, and there are a lot of daily comments, we will have to consider restoring the previous "database change awareness + automatic synchronization" mechanism - when that time comes, it will be considered as a growing pain. :)
Note: In fact, you could also synchronize only the wp_comments table instead of the entire database, but I am currently too lazy to maintain two different scripts, so this will do for now.
6 Add comment notification (optional)
6.1 Thoughts
I have always been dissatisfied with the new comment email notifications that come with WordPress. My phone is on silent most of the time, and all kinds of notifications are almost turned off. I only check my email when I am in a good mood, so it is often several hours or even half a day before I find that someone has left a comment on my blog. This sense of delay is quite bad for someone who wants to run a blog seriously and keep in touch with readers.
Taking the opportunity of moving comment requests onto Cloudflare Worker forwarding, I thought: why not upgrade the notification mechanism as well? When a new comment arrives, push it directly through Bark (friends unfamiliar with Bark can refer to the article "Docker series: build a message push server based on bark-server") to my phone and the Mac mini at home, so I know within seconds when someone leaves a message.
However, there is a little trouble here: compared with server-side shell scripts, scheduled tasks or webhooks, the Worker itself runs on the edge node, is stateless and event-driven, and the lighter the processing logic, the better. It is not suitable for directly making Bark requests containing identity keys: this is both for security considerations and to keep the Worker as simple as possible.
So my plan is to have the Worker, after capturing a comment request, send a notification signal to a lightweight Webhook interface pre-deployed on a separate 1-core/1GB VPS in San Jose. The responsibility of this interface is very simple: it does not call Bark directly, but acts as a relay that hands the event to a shell script running locally. The script is pre-configured with Bark's API key and push template, and is what actually pushes the notification to my phone and Mac mini.
On the one hand, this design keeps sensitive keys in a trusted local environment and does not expose them to the Worker, thus avoiding potential security risks. On the other hand, it also allows me to flexibly control notification behavior, such as keyword filtering based on comment content, merging duplicate notifications, and even adding notification throttling logic to avoid being bombarded with comments in a short period of time.
At the same time, the combination of Webhook + local script is also very easy to expand: in the future, if I want to change to a Telegram Bot, WeChat for Business, or a pop-up window reminder on the desktop, I only need to make a slight modification at the script level without changing a single line of Worker code. Overall, this is a solution that balances real-time, security, and maintainability.
6.2 Python Flask implements a lightweight local webhook
6.2.1 Introduction to Flask
When I used the Baota panel before, I noticed that there was a ready-made Webhook software in Baota's software store that could be installed directly, which was very convenient and hassle-free to use. But now I have switched to the 1panel panel. After some searching, it seems that there is no similar ready-made plug-in or application available (many friends on the Internet have asked 1panel about this requirement for several years~), so I can only write one myself. Considering the requirements of being as lightweight as possible, easy to maintain and flexible to expand, I finally chose the Python-based Flask framework to build this local Webhook service.
Flask is a very popular lightweight web framework with only a few thousand lines of core code. It is simple and flexible in design and suitable for rapid development of various web services and interfaces: it does not have too many complex dependencies and mandatory project structures, and developers can freely organize code and functions according to actual needs. In addition, Flask has rich extension library support and can easily integrate database, authentication, logging and other functions. If there is a need in the future, it can also be easily expanded. It has an active community, complete documentation, and is very easy to get started, making it very suitable for building small services such as local webhooks.
6.2.2 Flask installation
```bash
apt update
apt install -y python3-pip
pip3 install flask
```
6.2.3 Creating a Flask-based Webhook Service Script
1. Create a new directory
```bash
mkdir -p ~/script
```
2. Create a service script
```bash
cd ~/script
vim webhook_server.py
```
Paste the following code in and save:
```python
# filename: webhook_server.py
from flask import Flask, request
import subprocess

app = Flask(__name__)

# Default Webhook token
EXPECTED_TOKEN = "your-token"

@app.route("/webhook/comment", methods=["POST"])
def comment_webhook():
    # Verify the token
    received_token = request.headers.get("X-Webhook-Token", "")
    if received_token != EXPECTED_TOKEN:
        return "Invalid token", 403

    # Call the Bark notification script
    try:
        subprocess.run(["/usr/local/bin/bark_notify.sh"], check=True)
        return "Bark notified", 200
    except subprocess.CalledProcessError as e:
        return f"Notification failed: {str(e)}", 500

if __name__ == "__main__":
    # Listen on the loopback IP and the specified port
    app.run(host="127.0.0.1", port=9527)
```
The above is an example of a Python Webhook service written in the Flask framework. It listens for POST requests on the /webhook/comment path on port 9527 of "127.0.0.1" and triggers a local script (/usr/local/bin/bark_notify.sh) to execute the bark notification logic.
As for the bark_notify.sh script, create it according to the following process:
```bash
vim /usr/local/bin/bark_notify.sh
```
Copy and paste the following content and save it (please modify it according to your actual environment):
```bash
#!/bin/bash
# Bark notification addresses: replace with your own domain name, token,
# and notification format
bark_success_macmini="https://xxxx.tangwudi.com/your-token/wordpress/wordpress-new-comment"
bark_success_iphone="https://xxxx.tangwudi.com/your-token/wordpress/wordpress-new-comment"

# Send the Bark success notifications
curl --max-time 5 -s -A "your-useragent" "$bark_success_macmini" > /dev/null
curl --max-time 5 -s -A "your-useragent" "$bark_success_iphone" > /dev/null
```
Grant execution permissions:
```bash
chmod +x /usr/local/bin/bark_notify.sh
```
The reason why bark_notify.sh is created in the path /usr/local/bin is for permission and accessibility considerations. Although the script is essentially just a simple notification trigger, in actual deployment, the Flask application does not necessarily always run as the root user, especially in a production environment, where Flask is more likely to be started as www-data, nobody, or an ordinary user specified by systemd.
If you put the script in the "/root" directory, the Flask process may not be able to access or execute the script in this path due to permission restrictions, resulting in failure to trigger notifications, and it is not easy to find the problem directly in the log (this problem troubled me for several hours. I put it directly in the "/root/script" path for convenience, but it always returned an internal 500 error). In contrast, "/usr/local/bin" is a location reserved for user-defined executable files in the Linux system. It has appropriate execution permissions by default, and most users and services can directly find commands or scripts in this path in the PATH environment variable, so it is more secure and universal.
In short: /usr/local/bin is a reasonable location that balances permissions, security, and system maintainability, and it is well suited as a unified entry point for scripts invoked by system-level services.
6.2.4 Notification part of the Worker code
To send the Webhook notification, you only need to append to the ctx.waitUntil section of the earlier Worker code. An example follows:
```js
// Concurrent slave node request, without affecting the main flow
ctx.waitUntil(
  fetch(secondaryRequest).catch((err) => {
    console.log("Secondary write failed:", err);
  })
);

// Send the Webhook notification on success (non-blocking)
if (primaryResponse.ok) {
  ctx.waitUntil(
    fetch("https://notification.tangwudi.com/webhook/comment", {
      method: "POST",
      headers: { "X-Webhook-Token": "your-token" }
    }).catch((err) => {
      console.log("Webhook failed:", err);
    })
  );
}
```
You need to assign a domain name in advance to the device where Flask is deployed so it can receive requests from the Worker; in this case it is "notification.tangwudi.com". In addition, the value of "your-token" in webhook_server.py must match the "X-Webhook-Token" value in the Worker.
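Before wiring the Worker in, you can sanity-check the chain by hitting the webhook endpoint directly. A minimal sketch, assuming Node 18+ (global fetch) and the file saved as test_webhook.mjs, both my own choices:

```js
// Hypothetical end-to-end check of the webhook relay (run: node test_webhook.mjs).
// A correct token should return 200 "Bark notified" and trigger the Bark push;
// a wrong token should return 403 "Invalid token".
const res = await fetch("https://notification.tangwudi.com/webhook/comment", {
  method: "POST",
  headers: { "X-Webhook-Token": "your-token" }
});
console.log(res.status, await res.text());
```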
7 Afterword
In addition to the above solutions, some people choose to use third-party comment systems (such as Disqus, Twikoo, Valine, etc.) to bypass WordPress's own comment mechanism, thereby avoiding the complexity of read-write separation involved in the multi-active architecture. This type of solution is indeed more common in static blogs or websites that do not require strong background interaction. It is simple to deploy, decouples the front and back ends, and even has a certain degree of anti-attack capability (because the pressure on the server is transferred to the third-party platform).
However, for websites built around WordPress, the comment function is often more than a "message board": it is part of user interaction and an organic part of SEO and the content ecosystem. Using a third-party system may simplify deployment on the surface, but it essentially sacrifices data control, future portability, and system consistency, which are precisely the points a multi-active architecture should emphasize most. Moreover, many third-party comment systems are not exactly "easy to use": Disqus is increasingly commercialized and ad-heavy; systems such as Twikoo and Valine, although open source and free, depend on platforms like LeanCloud and CloudBase and carry their own operational stability issues; and in terms of final experience, loading speed, data latency, and security compliance are often hard to match against a self-hosted solution.
Therefore, in a multi-active architecture, bypassing the native comment mechanism may look like a "shortcut", but it is really a compromise that abandons structural optimization. By contrast, using Cloudflare Worker to achieve lightweight, controllable multi-point writing preserves the integrity of the native WordPress system while greatly enhancing the flexibility and sustainability of the architecture, a value that third-party services find difficult to provide.