Docker series | Traefik file-based dynamic configuration in practice: efficient local network load balancing

Preface

Previously, the applications (including this blog) deployed in my home data center had a simple four-tier network structure: Cloudflare Tunnel -> Changting Leichi WAF Community Edition -> ZEVENET Load Balancer Community Edition -> blog main site (blog redundant site). The role of the ZEVENET load balancer was to monitor the health of the blog main site: if the main site went down, it automatically forwarded requests to the redundant site in the home data center, giving the blog high availability.

However, the ZEVENET community edition seemed to have gone without updates for a long time, which bothered me, so I kept putting off the idea of writing a ZEVENET tutorial. In addition, I tightened the security policy on the WAF a while ago, and the stability of the main blog site had withstood the test, so I simply removed ZEVENET, which felt pretty good.

Still, this single-point structure always made me uneasy. After much thought, I decided it was safer to add load balancing back. But I really didn't want to return to a ZEVENET community edition that hadn't been upgraded in ages, and ZEVENET never felt lightweight enough anyway. I wanted a lighter solution, so the search began.


ZEVENET is actually the load-balancing solution that best matches my requirements: its web interface and basic configuration concepts resemble professional appliances such as F5 and A10 (unfortunately, I can't get a long-term license for F5's BIG-IP VE or A10's vThunder, otherwise I wouldn't have to spend so much effort looking for alternatives~~). Although the community edition offers too few features and too little monitoring, it was barely enough. The real annoyance is that the community edition will seemingly never be upgraded (the vendor probably doesn't care, with the focus on the paid version). Its image on Docker Hub has not been updated in 6 years:

[Screenshot: the ZEVENET image on Docker Hub, last updated 6 years ago]

That is not something an upgrade fanatic like me, who wants to update everything at every turn, can accept, so in the end I reluctantly gave it up.


But which one to choose? Before dedicated load-balancing appliances became popular, the most famous load-balancing software on Linux was LVS (still used in many companies' production environments), whose features are completely sufficient for my purposes. Nginx can also do layer-7 load balancing (and layer-4 with the stream module); it too is used in many production environments and performs very well. But by default these two most common solutions share one big problem: no convenient web UI (I am used to F5, and I really can't stand going without one~).

LVS does provide a web interface by default, but unfortunately it only shows some basic parameters. There used to be many open-source web UI projects for LVS, but most have since disappeared (probably because few people use LVS anymore). A few remain, such as roxy-wi, but its official site does not seem to offer a Docker deployment method (GitHub homepage: https://github.com/roxy-wi/roxy-wi), and I really don't want to deploy from source just for a UI. There are some unofficial images on Docker Hub, but they haven't been updated in over a year, so I had to give up on that too.

Nginx, on the other hand, has plenty of web UI projects, but they all target Nginx as a whole rather than being optimized for the load-balancing feature, and the load-balancing feature is all I need. It always feels like firing a cannon at a mosquito and still missing. Of course, Nginx's biggest advantage for layer-7 load balancing is raw performance, but my blog traffic is basically just back-to-origin requests from Cloudflare (normal visits hit content cached on Cloudflare, and only a few actually go back to origin), so I have no performance requirements at all. In the end I gave up on it as well.

At this time, a new option was discovered: Traefik.

What is Traefik?

Introduction to Traefik

Traefik is a modern reverse proxy and load balancer designed for microservices and containerized environments. It dynamically manages backend service routing and is ideal for automated, cloud-native environments.

The main features of Traefik are as follows:

1. Dynamic configuration

• Traefik supports multiple configuration methods, including files, APIs, Docker labels, and Kubernetes CRDs, and responds to changes in service topology in real time.

2. Automatic service discovery

• Traefik automatically discovers services registered on platforms such as Docker and Kubernetes and updates routing rules without manual intervention.

3. Built-in HTTPS support

• Supports automatic issuance and management of Let's Encrypt SSL/TLS certificates for secure HTTPS communication.

4. Multi-protocol support

• In addition to HTTP and HTTPS, Traefik supports protocols such as TCP, UDP, and WebSocket.

5. Middleware support

• Provides flexible middleware that can perform operations such as authentication, rewriting, and rate limiting on requests.

6. Visual interface

• Provides an intuitive dashboard for monitoring service and routing status.

7. High performance

• Traefik is written in Go, performs well, and supports highly concurrent request processing.

Competitive Analysis of Traefik, Nginx, and HAProxy

Comparison of load balancing functions among Traefik, Nginx and HAProxy:

| Characteristic | Traefik | Nginx | HAProxy |
| --- | --- | --- | --- |
| **Architecture and deployment** | | | |
| Positioning | Cloud-native reverse proxy and load balancer | General-purpose reverse proxy server with load-balancing support | High-performance load balancer with extremely high stability |
| Dynamic configuration | Native support (updates via API or file) | Requires a reload of the configuration file | Partially supported via the Runtime API |
| Integration | Built-in service discovery for Kubernetes, Docker, etc. | Some integration via plugins or manual configuration | No native support; requires manual configuration or other tools |
| **Performance** | | | |
| Throughput | Slightly lower than Nginx and HAProxy; suited to small and medium traffic | High performance; suited to medium loads | Best performance; suited to high concurrency and low latency |
| Concurrency | Medium; depends on the number and complexity of backend services | Handles large numbers of concurrent requests efficiently | Extremely high; designed for high concurrency |
| Resource usage | Higher (Go runtime overhead) | Moderate; lower still after tuning | Very low; well suited to resource-constrained environments |
| **Features** | | | |
| Load-balancing algorithms | Multiple algorithms (round robin, random, etc.); flexible | Multiple algorithms such as round robin and least connections | More advanced strategies and algorithm support |
| SSL support | Automatic issuance and renewal (Let's Encrypt integration) | Manual certificate configuration; strong support | Manual certificate configuration; strong flexibility |
| Health checks | Built in; simple to configure | Manual setup; extended via plugins | Powerful health checks; suited to complex scenarios |
| Rate limiting and security | Basic rate limiting | Rate limiting and complex security configuration | Fine-grained rate limiting and security control |
| **Web UI** | | | |
| Availability | Native, with an intuitive dashboard | None; requires third-party tools or plugins | None; requires external tools |
| Web UI features | Real-time view of routing and load-balancing status | No native support; building one is costly | No native support; building one is costly |

From the comparison above, Traefik has the richest feature set and native web UI support. Its throughput is slightly lower than Nginx and HAProxy, but I don't care about throughput at all, so after weighing everything, Traefik is indeed the better fit for my load balancing.

Applicable scenarios for Traefik

Traefik's applicable scenarios cover both traditional and modern deployment needs. Whether for a containerized microservice architecture or a variety of enterprise application scenarios, Traefik is an efficient, flexible, and feature-rich solution:

| Application scenario | Description | Features | Examples |
| --- | --- | --- | --- |
| Containerized environment | Environments using container orchestration tools such as Docker Compose, Docker Swarm, and Kubernetes | Automatic service discovery (routing from Docker labels or Kubernetes CRDs); dynamic load balancing as containers start, stop, or scale; built-in HTTPS (automatic TLS/SSL certificate management) | Dynamically route and balance service traffic in Docker Swarm; act as an Ingress Controller in Kubernetes |
| Non-containerized environment | Applications hosted on traditional virtual machines or physical servers | Routing via static configuration files or the REST API; load balancing, health checks, and automatic retries; multi-protocol support (HTTP, HTTPS, TCP, UDP) | Load balancing across multiple independently running application instances; dynamic routing for non-containerized enterprise applications |
| Container orchestration (Kubernetes) | Microservice applications running on Kubernetes | Acts as a Kubernetes Ingress Controller with dynamic configuration and automatic service discovery; traffic splitting, session persistence, and weight allocation | Dynamically distribute service traffic in a Kubernetes cluster; canary releases and blue-green deployments for stability |
| Edge computing and IoT | Traffic distribution for IoT gateways or edge nodes | Multi-protocol support (TCP, UDP, HTTP/HTTPS) for traffic from different devices and nodes; middleware for rate limiting and authentication | Real-time data routing and load sharing on edge platforms; traffic optimization between IoT devices |
| Hybrid and multi-cloud | Services deployed on multiple cloud platforms or across cloud environments | Multiple entry points to integrate traffic from different clouds; unified traffic management and cross-region load balancing | Coordinating AWS and Azure service traffic in a hybrid cloud; traffic distribution and access control between regions |
| Enterprise intranet | Unified management of internal applications | Reverse proxy and load balancing with path- or subdomain-based routing; middleware for authentication, rate limiting, etc. | A single entry point for multiple internal applications (such as CRM and ERP systems); high availability and secure access to intranet services |
| Cross-region distributed services | Distributed applications serving global users | Weight-based traffic distribution and health checks; can be combined with GeoDNS for location-aware routing | Distributing traffic across multiple data centers; optimizing latency for global users |
| Development and testing | Developers who need to deploy and debug services quickly | Quick local test environments with HTTPS support; dynamic routing to simulate multi-version release strategies (such as blue-green deployment) | Fast debugging environments for development teams; testing load-balancing strategies and routing rules |

As the table shows, load balancing for the applications deployed in my home data center fits the second scenario: a non-containerized environment.

Note: My blog's main site and redundant site sit on two separate physical machines (eggs should not all go in one basket, otherwise redundancy is meaningless). If both were deployed as Docker containers on the same physical machine, the containerized load-balancing scenario would apply instead.

Traefik in action

Common scenarios

1. Docker scenario

Applicable to services running in Docker containers, dynamically managing services and routing rules through Docker labels.

• Advantages: High degree of automation and seamless integration with container orchestration (such as Docker Compose and Swarm).

• Scenario: microservice architecture, containerized deployment.

2. Non-Docker scenario

Suitable for intranet physical machines or virtual machines. Entry points and service load balancing are configured manually, and the dynamic configuration can come from a variety of providers, such as files, APIs, or service-discovery tools like Consul.

• Advantages: Supports multiple service discovery methods and is suitable for traditional architecture.

• Scenario: Intranet application load balancing, virtual machine cluster traffic distribution.

3. File dynamic configuration scenario (a specific implementation of non-Docker scenario)

Static and dynamic configuration are separated entirely through files: the dynamic configuration file defines routing rules and service load balancing, with no other external services required.

• Advantages: Simple configuration, suitable for resource-limited or test environments.

• Scenario: Small intranet load balancing, development and testing environment.

4. Kubernetes scenario

Dynamically obtain routing and service configuration directly from Ingress resources through Kubernetes provider integration.

• Advantages: Best practices for cloud-native scenarios, supporting automatic scaling and service discovery.

• Scenario: Container orchestration platform based on Kubernetes.

| Scenario | Configuration source | Applicable environment | Dynamism |
| --- | --- | --- | --- |
| Docker environment | Docker labels | Containerized services | High; services discovered and updated in real time |
| Non-Docker environment | Various sources (files, etc.) | Intranet, physical machines, VM clusters | Depends on the configuration mode |
| File dynamic configuration | Configuration files | Small intranets, development and testing | Changes take effect on file reload (automatic with watch) |
| Kubernetes environment | Kubernetes Ingress | Cloud-native, container orchestration platforms | High; discovered and updated dynamically |

"Dynamic file configuration" is essentially just a form of implementation of "non-Docker environment", and there is no need to divide it into a separate scenario. The reason why it is singled out is that in actual use,File dynamic configuration There are some unique usage scenarios and features that make it "special":

1. Independence of dynamic file configuration

File dynamic configuration does not rely on any external services (such as Docker, Kubernetes, or Consul), and separates static configuration and dynamic configuration completely through files.

Other forms of the non-Docker environment may require additional services, such as Consul or etcd, to provide dynamic configuration;

file dynamic configuration only requires manually managed configuration files, which suits simple or resource-constrained environments.

2. Typical application scenarios of dynamic file configuration

File dynamic configuration is often used for:

Development or test environment: Quickly build a simple load balancing or routing system without any additional dependencies.

Small production environment: There is no complex service discovery requirement, only a few backend services need to be statically defined.

These scenarios differ somewhat from non-Docker environments that genuinely require service discovery (such as dynamically scaling virtual machine clusters).

3. Simplified deployment

The implementation of dynamic file configuration is more user-friendly:

• Only two files are needed (traefik.yml and dynamic configuration file), which can be modified and reloaded without complex configuration;

• Easy to understand and deploy for novice users and resource-limited environments.

For the above three reasons, I treat "file dynamic configuration" as its own category, even though it still belongs to the "non-Docker scenario". The deployment later in this article uses the file dynamic configuration method.
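Concretely, everything in the deployment below boils down to a few files on the host (the paths match the later docker run example):

```
/docker/traefik/
├── traefik.yml   # static configuration: entry points, providers, logging, API
├── dynamic.yml   # dynamic configuration: routers, services, middlewares (watched)
└── acme.json     # certificate storage (only needed when HTTPS/ACME is enabled)
```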


Static configuration file: traefik.yml

Traefik is officially recommended to be deployed via Docker, whether with "docker run" or docker-compose. Either way, a static configuration file named "traefik.yml" is required.

"traefik.yml" is mainly used to define Traefik's global settings and ingress configuration. Static configuration is loaded when Traefik starts, which is different from dynamic configuration (used to define specific routes and services). Dynamic configuration is usually loaded through files, Docker tags, Kubernetes, etc.

The following is a classification and detailed explanation of the contents of the traefik.yml file based on different scenarios:

1. Docker environment

In a Docker environment, the traefik.yml file mainly configures Traefik's global behavior and integration with Docker. The sample file is as follows:

```yaml
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

api:
  dashboard: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false

log:
  level: "INFO"

accessLog: {}

certificatesResolvers:
  myresolver:
    acme:
      email: "[email protected]"
      storage: "acme.json"
      httpChallenge:
        entryPoint: "web"
```

Explanation of the above content:

1. entryPoints

Defines Traefik's entry points (ports and protocols).

• web handles HTTP traffic and listens on port 80.

• websecure handles HTTPS traffic and listens on port 443.

2. api

Enables Traefik's web management panel (dashboard), accessed via /dashboard.

3. providers.docker

Configures Traefik to discover services from Docker containers.

• endpoint: the Docker API address, usually unix:///var/run/docker.sock.

• exposedByDefault: set to false so that only explicitly labeled containers are exposed through Traefik (see the label sketch after this list).

4. log and accessLog

• log: defines the log level (INFO, DEBUG, ERROR).

• accessLog: enables the access log, which records client requests.

5. certificatesResolvers

Configures automatic acquisition of HTTPS certificates (Let's Encrypt).

• Uses httpChallenge (HTTP-01) validation, handled through the web entry point.

• Certificate data is stored in the acme.json file.
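With exposedByDefault: false, each container opts in through labels. A minimal sketch of what that looks like in a docker-compose service (the whoami image, host name, and port are illustrative assumptions):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"                                         # Opt this container in
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)" # Routing rule
      - "traefik.http.routers.whoami.entrypoints=web"                 # Bind to the web entry point
      - "traefik.http.services.whoami.loadbalancer.server.port=80"    # Container port to forward to
```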

2. Non-Docker environment (such as intranet application load balancing)

In this scenario, Traefik's static configuration is mainly used to define the load balancing logic and entry points of the backend services. The sample file is as follows:

```yaml
entryPoints:
  web:
    address: ":80"

providers:
  file:
    filename: "dynamic.yml"

log:
  level: "DEBUG"

accessLog: {}

api:
  dashboard: true
```

The corresponding dynamic configuration dynamic.yml file may be as follows:

```yaml
http:
  routers:
    my-router:
      rule: "Host(`example.com`)"
      service: my-service
      entryPoints:
        - web
  services:
    my-service:
      loadBalancer:
        servers:
          - url: "http://192.168.1.101:8080"
          - url: "http://192.168.1.102:8080"
```

Content Explanation

1. entryPoints

Defines the listening entry point; here :80 handles HTTP requests.

2. providers.file

• Specifies the file that holds the dynamic configuration, e.g. dynamic.yml.

• The dynamic configuration defines the concrete routing rules and load-balancing backends.

3. http.routers (dynamic configuration)

Defines routing rules:

• rule: the matching rule for requests (such as host matching).

• entryPoints: binds the router to entry points from the static configuration.

4. http.services (dynamic configuration)

Defines services and their load-balancing strategy:

• loadBalancer: lists the backend server addresses; Traefik distributes traffic among them automatically.

3. Dynamic file configuration (a specific implementation of non-Docker environment)

The configuration is very similar to the non-Docker environment above (it is one, after all). In particular, when no other providers are involved, static and dynamic configuration can be separated entirely through files. For example:

```yaml
entryPoints:
  web:
    address: ":8080"

providers:
  file:
    directory: "/etc/traefik/dynamic/"
```

Dynamic configuration file example:

```yaml
http:
  routers:
    static-router:
      rule: "Path(`/static`)"
      service: static-service
  services:
    static-service:
      loadBalancer:
        servers:
          - url: "http://localhost:3000"
```
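Since this provider points at a directory rather than a single file, the dynamic configuration can also be split across several files, which Traefik merges (the file names here are illustrative assumptions):

```
/etc/traefik/dynamic/
├── routers.yml    # routers only
└── services.yml   # services only
```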

Traefik configures two routers by default:

1. api router: handles requests to the Traefik API, through which users can monitor and manage Traefik. Its address is usually of the form http://<host>:8080/api.

2. dashboard router: handles requests to the Traefik web dashboard, which provides a graphical interface for monitoring and managing Traefik's status and configuration. By default the dashboard address is usually http://<host>:8080/dashboard, but this can be changed in the configuration.

These two routers are usually enabled by default when Traefik starts and can be adjusted through configuration files or command-line options. By default, both the API and the dashboard are bound to port 8080; you can modify their ports, access methods (such as enabling TLS), and so on as needed.

So, assuming that you configure 1 router in dynamic.yml (as above), the total number of routers is 3.
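Incidentally, if you prefer to expose the dashboard through a normal router instead of the insecure port, the file provider can route to Traefik's built-in api@internal service. A minimal sketch (the host name is an assumption):

```yaml
http:
  routers:
    dashboard:
      rule: "Host(`traefik.example.com`)"   # Assumed internal host name
      entryPoints:
        - web
      service: api@internal                 # Traefik's built-in API/dashboard service
```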


4. Kubernetes Ingress Controller

When used as an Ingress controller for Kubernetes, the traefik.yml file needs to be configured to obtain dynamic configuration from the Kubernetes API:

```yaml
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

providers:
  kubernetesIngress: {}

certificatesResolvers:
  myresolver:
    acme:
      email: "[email protected]"
      storage: "acme.json"
      httpChallenge:
        entryPoint: "web"
```

Content Explanation

1. providers.kubernetesIngress

Tells Traefik to extract routing rules and service configuration from Kubernetes Ingress resources.

2. certificatesResolvers

Same as in the Docker environment: enables automatic HTTPS certificate acquisition.

Traefik deployment

Docker run method

Preparation

Suppose I need to load balance port 80 of three different intranet hosts, with IP addresses 192.168.0.1, 192.168.0.2, and 192.168.0.3. The steps are as follows:

1. Create a new working directory:

mkdir -p /docker/traefik

2. Create a new static configuration file:

touch /docker/traefik/traefik.yml

Paste the following content in and save:

```yaml
entryPoints:
  web:
    address: ":80"        # Entry point for HTTP
  websecure:
    address: ":443"       # Entry point for HTTPS
  traefik:
    address: ":8080"      # Explicitly bind the dashboard to port 8080

providers:
  file:
    filename: "/etc/traefik/dynamic.yml"  # Path of the dynamic configuration file "inside" the Traefik container
    watch: true                           # Watch the file and apply changes immediately

log:
  level: "INFO"

accessLog: {}

api:
  dashboard: true
  insecure: true  # Insecure mode, for test environments only; very important, otherwise plain-HTTP management on the default port 8080 is unavailable

certificatesResolvers:
  myresolver:
    acme:                               # Enable ACME for automatic SSL certificate renewal
      email: "[email protected]"        # Replace with your mailbox
      storage: "/etc/traefik/acme.json" # ACME storage file inside the container
      httpChallenge:
        entryPoint: web                 # Use HTTP-01 validation
```

3. Create a new dynamic configuration file:

touch /docker/traefik/dynamic.yml

Then paste the following content in and save it:

```yaml
http:
  routers:
    blog-router:
      rule: "Host(`your-domain.com`)"
      entryPoints:
        - web
      service: blog-service
      tls:
        certResolver: myresolver
  services:
    blog-service:
      loadBalancer:
        servers:
          - url: "http://192.168.0.1"
          - url: "http://192.168.0.2"
          - url: "http://192.168.0.3"
        healthCheck:
          path: "/healthcheck"  # Health check against this path (assumed here to be "/healthcheck")
          interval: "10s"       # Check interval
          timeout: "5s"         # Check timeout
        passHostHeader: true
```

4. Create a new acme configuration file:

```bash
touch /docker/traefik/acme.json
chmod 600 /docker/traefik/acme.json
```

When the acme.json file is used for the first time, a blank file is enough: Traefik automatically stores ACME (Let's Encrypt) certificates and related data in it at runtime, which is why the file needs the appropriate (restrictive) permissions.


If you do not need an HTTPS entry point, simply remove the related content from traefik.yml and dynamic.yml, for example:
traefik.yml

```yaml
entryPoints:
  web:
    address: ":80"        # Entry point for HTTP
  traefik:
    address: ":8080"      # Explicitly bind the dashboard to port 8080

providers:
  file:
    filename: "/etc/traefik/dynamic.yml"  # Path of the dynamic configuration file "inside" the Traefik container
    watch: true                           # Watch the dynamic configuration file; apply changes immediately

log:
  level: "INFO"

accessLog: {}

api:
  dashboard: true
  insecure: true  # Insecure mode, generally for test environments; very important, otherwise plain-HTTP management on the default port 8080 is unavailable
```

dynamic.yml

```yaml
http:
  routers:
    blog-router:
      rule: "Host(`your-domain.com`)"
      entryPoints:
        - web
      service: blog-service
  services:
    blog-service:
      loadBalancer:
        servers:
          - url: "http://192.168.0.1"
          - url: "http://192.168.0.2"
          - url: "http://192.168.0.3"
        healthCheck:
          path: "/healthcheck"  # Health check against this path (assumed here to be "/healthcheck")
          interval: "10s"       # Check interval
          timeout: "5s"         # Check timeout
        passHostHeader: true
```

Deployment

The docker run command format is as follows:

```bash
docker run -d --name traefik --restart=always --net=public-net \
  -p 80:80 \
  -p 443:443 \
  -p 8080:8080 \
  -v /docker/traefik/traefik.yml:/etc/traefik/traefik.yml \
  -v /docker/traefik/dynamic.yml:/etc/traefik/dynamic.yml \
  -v /docker/traefik/acme.json:/etc/traefik/acme.json \
  traefik:v2.11
```

Note 1: If you do not need the HTTPS port, just delete the two lines "-p 443:443 \" and "-v /docker/traefik/acme.json:/etc/traefik/acme.json \".

Note 2: --net=public-net is not necessary in a "non-Docker scenario", but in a "Docker scenario" (when you want to load balance containers on the same host), Traefik and those containers must be attached to the same bridge network.

Deployment using docker-compose

In essence, this just converts the docker run command into a docker-compose.yml file:

```yaml
version: '3.8'

services:
  traefik:
    image: traefik:v2.11
    container_name: traefik
    restart: always
    networks:
      - public-net
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /docker/traefik/traefik.yml:/etc/traefik/traefik.yml
      - /docker/traefik/dynamic.yml:/etc/traefik/dynamic.yml
      - /docker/traefik/acme.json:/etc/traefik/acme.json

networks:
  public-net:
    external: true
```

Explanation:

  1. services: defines the service traefik.

  2. networks: specifies the external network public-net. Make sure it has been created with docker network create public-net (in a pure non-Docker scenario the shared network is not strictly required; see Note 2 above).

  3. volumes: Mount the host's files into the container.

  4. ports: Map container ports to the host.

After saving this content as a docker-compose.yml file, go to the same directory and start it with the following command:

docker-compose up -d
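Once it is up, a quick sanity check (these commands assume the container name and port mapping above):

```bash
# Confirm the container is running and watch the startup log
docker logs -f traefik

# In insecure mode the dashboard should answer on port 8080
curl -I http://localhost:8080/dashboard/
```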

Advanced configuration: Priority-based configuration ideas

Multi-priority configuration demonstration

The previous section used Traefik for only the most basic load-balancing setup, but Traefik can also implement more complex logic, such as hot standby based on service "priority". For example, for access to any domain name: the service at 192.168.0.1:80 has the highest priority, 192.168.0.2:80 medium priority, and 192.168.0.3:80 the lowest. As long as 192.168.0.1:80 is healthy, all requests go to it; if it goes down, requests go to 192.168.0.2:80; if that is also down, requests go to 192.168.0.3:80. To implement this, dynamic.yml can be set up as follows:

```yaml
http:
  routers:
    blog-router:
      rule: "HostRegexp(`{host:.+}`)"   # Match all domain names
      entryPoints:
        - web
      service: high-priority-service    # Use the high-priority service by default
      middlewares:
        - strip-prefix                  # Middleware example (adjust as needed)

  services:
    high-priority-service:
      loadBalancer:
        servers:
          - url: "http://192.168.0.1"   # High-priority service address
        healthCheck:
          path: "/healthcheck"          # Health check path
          interval: "10s"               # Health check interval
          timeout: "5s"                 # Health check timeout
        passHostHeader: true            # Pass the original Host header
    medium-priority-service:
      loadBalancer:
        servers:
          - url: "http://192.168.0.2"   # Medium-priority service address
        healthCheck:
          path: "/healthcheck"
          interval: "10s"
          timeout: "5s"
        passHostHeader: true
    low-priority-service:
      loadBalancer:
        servers:
          - url: "http://192.168.0.3"   # Lowest-priority service address
        healthCheck:
          path: "/healthcheck"
          interval: "10s"
          timeout: "5s"
        passHostHeader: true

  middlewares:
    strip-prefix:
      stripPrefix:
        prefixes:
          - "/api"   # Example: strip the "/api" prefix from matching requests
```

Configuration Interpretation

1. Service definitions (services)

• high-priority-service: the highest priority; only 192.168.0.1:80.

• medium-priority-service: second priority; used only when the high-priority service is unavailable.

• low-priority-service: the lowest priority; used only when the first two are unavailable.

2. Health checks (healthCheck)

• Each backend must serve the health-check path /healthcheck.

• If a health check fails, Traefik marks that server as unavailable.

3. Routing (routers)

• blog-router points to high-priority-service by default; failing over to the medium- and low-priority services relies on the health checks above.

4. Middleware (middlewares)

• strip-prefix is included purely as a middleware example (it removes the /api prefix) and plays no part in the failover logic; adjust or drop it as needed.
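For what it's worth, newer Traefik releases (v2.6+, if memory serves) also offer a dedicated failover service type in the file provider that expresses this primary/backup relationship more directly; chaining two of them gives three levels. A hedged sketch reusing the services above:

```yaml
http:
  services:
    blog-failover:
      failover:
        service: high-priority-service   # Primary; must have a healthCheck defined
        fallback: medium-low-failover    # Used only while the primary is unhealthy
    medium-low-failover:
      failover:
        service: medium-priority-service
        fallback: low-priority-service
    # high-/medium-/low-priority-service defined as in the example above
```

Pointing blog-router at blog-failover would then replace the manual priority juggling.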

Multiple services correspond to different priorities

Traefik's high/medium/low service configuration corresponds to different priorities in order to achieve more precise control over load balancing and traffic scheduling: in particular, when a high-priority service fails, access requests can be redirected immediately to a lower-priority service, achieving a "hot standby" effect.

1. Clear traffic priority control

The necessity of priority control:

• In a multi-service environment, there may be a primary service (high priority service) that is responsible for handling the most important or most requests; at the same time, there may be some backup services (medium and low priority services) that serve as redundancy when the primary service is unavailable.

• In this case, simply using load-balancing weights (for example, setting a weight ratio) does not effectively implement priority control, because weights merely distribute traffic proportionally without considering the relative importance and availability of services.

• For example, with weights, traffic is split according to the ratio, which may leave low-priority services carrying part of the traffic even while the high-priority service is healthy, affecting performance or resource allocation.

High, medium, and low priority usage:

• The high priority service is set as the default service, meaning it should carry most of the traffic under normal circumstances.

• Medium and low priority services are backup or failover services, and traffic is transferred to these services only when there is a problem with the high priority service.

• This approach ensures system stability and high availability in different situations through simple priority control and traffic routing rules (such as health checks).

2. Health Check and Fault Recovery Requirements

Health check mechanism: Traefik's built-in health checks monitor service availability in real time. If a service fails its health check, Traefik automatically shifts traffic to healthy services, and this is where the priority configuration matters most.

High priority service: Prioritize normal requests and only downgrade to medium or low priority service if it does not work properly.

Medium- and low-priority services: these are enabled only when the higher-priority services fail or become unavailable, so they receive traffic only when the health checks of the services above them fail.

Through this priority strategy, the system can automatically switch to the backup service when a service failure occurs, avoiding reliance on weights to distribute traffic, thereby reducing problems caused by inaccurate weights or inappropriate proportional distribution.

3. Simplify configuration and logic

Reduce configuration complexity: It is easier to configure high, medium, and low priority services than to allocate traffic by weight. Through clear priority division, administrators can clearly know which services are critical services and which are backup services. There is no need to adjust the weight configuration, and only the health status of the service can be relied on to determine the forwarding of traffic.

Fault recovery is more intuitive: In priority configuration, traffic forwarding decisions do not rely on weights, but are determined by the health status and priority of the service. This allows for more direct traffic scheduling and simplifies configuration and decision-making for operations personnel.

4. Flexible traffic management

Through clear priority configuration, Traefik can dynamically adjust traffic forwarding based on the health and importance of the service. Unlike weight, priority configuration ensures that traffic switching between services is based on actual availability and business needs, rather than just a static traffic allocation ratio.


Traefik does support weights; however, weights mainly control how much traffic each service receives, which is a different concept from priority:

Weight cannot specify priority: In Traefik, weight is mainly used to control the proportion of traffic distribution (for example: 80% traffic to service A, 20% traffic to service B). However, it does not set a clear priority for each service, but is just a simple traffic distribution ratio.

Avoid misdirected traffic: in some scenarios, weights may send traffic to services that should not be carrying it, especially during failures or sharp load increases. High/medium/low priority configuration ensures traffic switches to a lower-priority service only under specific conditions.

Flexibility in control and monitoring: with services at different priorities, the system is easier to control and observe. When traffic switches from a high-priority to a low-priority service, that is easy to spot in logs or monitoring and to correct in time. Weight-based configuration can obscure this, making it hard to tell whether traffic is being allocated according to the intended priorities.
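For reference, this is roughly what weight-based splitting looks like with the file provider (Traefik's weighted round robin); a sketch reusing the service names above:

```yaml
http:
  services:
    blog-weighted:
      weighted:
        services:
          - name: high-priority-service
            weight: 8   # ~80% of traffic
          - name: medium-priority-service
            weight: 2   # ~20% of traffic
```

As the discussion above notes, this splits traffic unconditionally by ratio; it does not express "use B only when A is down".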


Actually, the root issue is that Traefik does not directly support the "priority" mechanism of traditional load-balancing devices, where each service gets a priority number, load balancing happens only among services sharing the same number, and otherwise requests go only to the service with the higher priority. If that were supported, none of this fuss would be needed.

Introduction to serversTransports

serversTransports is a key section of the Traefik configuration. It defines the communication parameters used when talking to upstream servers, particularly the transport settings for HTTP and HTTPS. It lets you set specific network transport options for Traefik's load balancer, such as TLS configuration, header settings, and maximum connection counts, so you can tune the communication between Traefik and the backends for performance, stability, and security. A demonstration configuration follows:

```yaml
http:
  serversTransports:
    my-transport:
      lifeCycle:
        requestAcceptGraceTimeout: 30s              # Graceful-close timeout for request acceptance
      forwardProxy:
        address: "http://proxy.example.com:3128"    # Upstream proxy address
      responses:
        headers:
          customHeader: "X-Custom-Header-Value"     # Custom header
      tls:
        minVersion: VersionTLS13                    # Minimum TLS version
        cipherSuites:
          - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256   # Supported cipher suites
      maxIdleConnsPerHost: 10                       # Maximum idle connections per host
      idleTimeout: 5m                               # Idle-connection timeout
      disableHTTPKeepAlive: true                    # Disable HTTP Keep-Alive
      maxConns: 50                                  # Maximum number of connections to backend services
```

The parameters used in the demonstration configuration are as follows:

1. tls

Sets the TLS configuration, specifying the minimum TLS version and cipher suites used when communicating with backend servers. For example, you can enable strong encryption algorithms here to secure data in transit.

• minVersion: the minimum TLS version (such as VersionTLS13).

• cipherSuites: the list of supported cipher suites.

2. lifeCycle.requestAcceptGraceTimeout

Sets the graceful-shutdown timeout for request acceptance. During this period, Traefik waits for requests on existing connections to complete before closing them, ensuring clients' in-flight requests are handled.

3. forwardProxy

If the backend services must be reached through an upstream proxy, this parameter specifies the proxy server's address.

4. responses.headers

Sets custom headers appended when Traefik sends requests to upstream servers.

5. maxIdleConnsPerHost

Sets the maximum number of idle connections per host. Idle connections are established but unused; capping them prevents idle connections from hogging resources. The default value is 2.

6. maxConns

Sets the maximum number of concurrent connections between Traefik and backend services. A value of 50 means the total number of connections to all backends will not exceed 50, which helps cap load and avoid opening too many connections during traffic peaks.

7. idleTimeout

Sets the timeout for idle connections: a connection unused for the specified time is closed. This releases connections that are no longer needed and prevents idle connections from piling up in the pool. The default value is 0, meaning no timeout.

8. disableHTTPKeepAlive

If set to true, HTTP Keep-Alive is disabled. Keep-Alive lets the client hold a persistent connection to the server instead of re-establishing one per request; disabling it forces a new connection per request but avoids long-held pool entries. The default value is false, meaning Keep-Alive is enabled.

For example, to enable connection reuse (connection multiplexing), only the following settings are needed; note that Keep-Alive must stay enabled for connections to be reused:

```yaml
maxIdleConnsPerHost: 10      # Maximum idle connections per host; tune for your environment
idleTimeout: 5m              # Idle-connection timeout; tune for your environment
disableHTTPKeepAlive: false  # Keep HTTP Keep-Alive enabled (the default) so connections can be reused
maxConns: 50                 # Maximum connections to backend services; tune for your environment
```
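A serversTransport only takes effect once a service references it; a minimal sketch reusing the blog-service from earlier:

```yaml
http:
  services:
    blog-service:
      loadBalancer:
        serversTransport: my-transport   # Reference the transport defined above
        servers:
          - url: "http://192.168.0.1"
```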

Traefik Web Dashboard

The dashboard provided by Traefik (accessed through port 8080) shows some fairly simple information, as the following screenshots illustrate:

[Screenshots: pages of the Traefik web dashboard]

Overall, the default web dashboard Traefik provides is too simple (a bit disappointing): you can't see concrete access data such as traffic, request counts, or response times. To view that data, the official recommendation is Prometheus + Grafana. Here is how the two tools fit together:

Traefik Metrics (Prometheus)

Traefik integrates with Prometheus, which collects and displays detailed load-balancing data, including request counts, traffic, and response times.

Enable Prometheus: You need to enable the Prometheus exporter in your Traefik configuration, usually via a configuration file or command line parameter.

Add the following to traefik.yml or when starting from the command line:

```yaml
metrics:
  prometheus:
    entryPoint: "metrics"  # Expose metrics on a dedicated entry point; it must also be
                           # declared under entryPoints in traefik.yml (e.g. address ":9100")
    buckets:
      - 0.1
      - 0.3
      - 0.5
      - 1
      - 2.5
      - 5
      - 10
```

Accessing the Prometheus data: Traefik exposes metrics in Prometheus format, typically reachable at http://<host>:9100/metrics (matching the entry point above). Prometheus scrapes this endpoint, and tools such as Grafana visualize the data.
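On the Prometheus side, a minimal scrape job might look like this (the target host is an assumption; the metrics path defaults to /metrics):

```yaml
scrape_configs:
  - job_name: "traefik"
    static_configs:
      - targets: ["192.168.0.10:9100"]   # Assumed Traefik host and metrics entry point
```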

Grafana Dashboard

Grafana makes it very convenient to display Traefik's real-time monitoring data: you set up a Grafana dashboard to view the Traefik metrics collected by Prometheus. The data items include:

• Requests

• Response Time

• Traffic

• Health status of backend services, etc.

A ready-made panel is available among the official Traefik Grafana dashboards (usually imported via a Grafana dashboard ID). If you are interested, you can explore it yourself; I won't go into detail here. The final effect looks like this:

[Screenshot: Traefik metrics visualized in a Grafana dashboard]

To be honest, it's pretty cool.

Summary

I'm still not used to Traefik for load balancing: it is designed for microservices and containerized environments, so its configuration concepts (static configuration file, dynamic service discovery, and so on) don't map neatly onto the basic elements of traditional load balancing (VIP, service group, server, and so on). And I'm using the uncommon "file dynamic configuration" method to implement the uncommon "hot standby" requirement, which left me dizzy at first, though it's fine once you get used to it. As an Ingress Controller for k8s, though, it is genuinely convenient: you just set the k8s API address in the configuration. Doing that with traditional load balancing used to be a real chore.

However, Traefik still has shortcomings compared with traditional load-balancing solutions, such as its load-balancing algorithms and custom health-check options (implementing a priority scheme takes ages, while on a traditional load balancer it is a few mouse clicks). Although I'm using it now, I'm still debating whether to give HAProxy another try; after all, HAProxy is the solution that best meets my needs.

Note 1: traefik.yml generally does not need to change after the initial configuration. dynamic.yml, however, demands care: parameter mistakes and formatting problems (spaces, indentation) can make a custom service silently disappear from Traefik (a drawback of the file dynamic configuration method). I have suffered from this more than once (one more reason I'm eyeing HAProxy). If it happens to you, don't panic: calm down and check the file step by step.

Note 2: This article does not cover the scenarios where Traefik really shines (microservices, container environments, k8s), which is a pity. I will write a separate article on that when I get the chance; in the meantime there are many tutorials online if you need them.

The content of this blog is original; please credit the source when reposting! More posts are listed in the Sitemap. The blog's RSS address is https://blog.tangwudi.com/feed; subscriptions are welcome. If needed, you can also join the Telegram group to discuss issues together.