Load Balancing

SpeedyCloud Load Balancing is a traffic distribution service that distributes incoming requests across multiple back-end cloud servers according to configurable forwarding policies. By spreading traffic, it extends the serving capacity of an application system; through failover, it improves application availability and resource utilization.

Product Advantages

High availability
Supports high concurrency with a fully redundant design and no single point of failure, providing availability of up to 99.99%.
Low cost
Compared with the high up-front investment of traditional hardware load balancers, cost is reduced by 60%, with no need to purchase expensive load balancing equipment outright.
High security
Combined with SpeedyCloud's cloud security, provides protection against attacks, including CC attacks, SYN floods, and other DDoS attacks.
Flexibility and efficiency
Expand resources online and adjust back-end configuration seamlessly; changes take effect in real time.

Core Functions and Services

Protocol support
Provides Layer 4 (TCP and UDP) and Layer 7 (HTTP and HTTPS) load balancing.
Session persistence
Provides session persistence, forwarding requests from the same client to the same back-end cloud server for the lifetime of the session.
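As a minimal sketch of one common way to realize session persistence, the example below maps a client's source IP to a fixed backend by hashing; the backend addresses and hashing scheme are assumptions for illustration, not SpeedyCloud's implementation.

```python
import hashlib

# Illustrative backend pool; a real pool comes from the load balancer configuration.
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def pick_backend(client_ip: str) -> str:
    """Hash the client IP so the same client always maps to the same backend."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# Repeated requests from the same client land on the same backend.
print(pick_backend("203.0.113.7"))
print(pick_backend("203.0.113.7"))  # same result
```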
Health checking
Supports health checks on back-end cloud servers, automatically removing a server from rotation when it enters an abnormal state and automatically restoring it once it resumes normal operation.
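The sketch below illustrates the block/unblock behavior with a simple TCP connect probe: a backend that fails the probe is taken out of rotation and returns automatically once it passes again. The addresses, port, and probe method are assumptions for the example, not the service's actual health check.

```python
import socket

# Illustrative backend pool mapping (host, port) to its current health status.
BACKENDS = {("10.0.0.11", 80): True, ("10.0.0.12", 80): True}

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Consider a backend healthy if a TCP connection can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_health_checks() -> None:
    """Block backends that fail the probe and unblock them once they recover."""
    for host, port in BACKENDS:
        BACKENDS[(host, port)] = tcp_probe(host, port)

def backends_in_rotation() -> list[tuple[str, int]]:
    return [addr for addr, healthy in BACKENDS.items() if healthy]
```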
Certificate management
For HTTPS, a centralized certificate management service is provided. Certificates do not need to be uploaded to the back-end cloud servers; decryption is performed at the load balancing layer, reducing CPU overhead on the back-end cloud servers.
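To illustrate the idea of terminating TLS at the load balancing layer, the minimal sketch below decrypts an HTTPS request on the front end and forwards it to the backend over plain HTTP, so the backend spends no CPU on TLS. The certificate files, addresses, and single-request handling are simplifications for the example only.

```python
import socket
import ssl

# Hypothetical certificate files, managed centrally on the load balancer.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="lb_cert.pem", keyfile="lb_key.pem")

backend_addr = ("10.0.0.11", 80)                     # back-end server speaks plain HTTP
listener = socket.create_server(("0.0.0.0", 8443))   # HTTPS listener (443 in production)

with context.wrap_socket(listener, server_side=True) as tls_listener:
    conn, _ = tls_listener.accept()        # TLS handshake and decryption happen here
    request = conn.recv(65536)             # decrypted HTTP request bytes
    with socket.create_connection(backend_addr) as backend:
        backend.sendall(request)           # forwarded in plaintext: no TLS cost on the backend
        conn.sendall(backend.recv(65536))  # relay the backend's response to the client
    conn.close()
```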
Scheduling algorithm
Supports three scheduling algorithms: round robin (RR), weighted round robin (WRR), and weighted least connections (WLC).
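As an illustration of weighted round robin, the sketch below uses the "smooth" WRR selection common in software load balancers; the weights and addresses are invented for the example and do not reflect SpeedyCloud's internal implementation.

```python
# Illustrative backend weights: a higher weight receives proportionally more requests.
SERVERS = {"10.0.0.11": 5, "10.0.0.12": 3, "10.0.0.13": 1}
current = {name: 0 for name in SERVERS}

def next_server() -> str:
    """Smooth weighted round robin: each call returns the backend for the next request."""
    total = sum(SERVERS.values())
    for name, weight in SERVERS.items():
        current[name] += weight
    chosen = max(current, key=current.get)
    current[chosen] -= total
    return chosen

# Over 9 requests, the 5:3:1 weights yield 5, 3, and 1 selections respectively.
print([next_server() for _ in range(9)])
```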
Bandwidth control
Supports setting the peak bandwidth of each service based on monitoring data.
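A token bucket is one common way to express a peak-bandwidth cap; the sketch below is purely illustrative and says nothing about how SpeedyCloud enforces the configured limit.

```python
import time

class TokenBucket:
    """Allows roughly `rate` bytes per second, with bursts up to `capacity` bytes."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Cap a service at roughly 10 MB/s peak with 1 MB of burst headroom (illustrative numbers).
bucket = TokenBucket(rate=10_000_000, capacity=1_000_000)
```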
Status monitoring
Provides rich monitoring data to track the real-time running status of the load balancer.
Management approaches
Provides a variety of management approaches, including the console, API, and SDK.

Application Scenarios

Mass concurrent access
Popular websites and applications carry heavy daily traffic, and hot topics or viral events can cause sudden surges. Load balancing distributes user traffic evenly across multiple back-end cloud servers, absorbing the impact of massive access and keeping the business fast and responsive.
Cross-region disaster recovery
Deploy load balancing instances in different regions and attach the cloud servers of each region to its local instance. At the upper layer, cloud DNS provides intelligent resolution, resolving the domain name to the load balancing instances in the different regions to achieve global load balancing. When the load balancer in a region becomes unavailable, resolution to that region is suspended without affecting user access.
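The sketch below illustrates this resolution logic conceptually: regions that fail an availability check are simply left out of the DNS answer, so clients are directed to the remaining healthy regions. The endpoints and the TCP-connect check are assumptions made for the example, not the actual behavior of SpeedyCloud's cloud resolution.

```python
import socket

# Illustrative regional load balancer endpoints (hypothetical host names).
REGIONAL_LBS = {
    "region-a": ("lb-a.example.com", 443),
    "region-b": ("lb-b.example.com", 443),
}

def region_available(host: str, port: int, timeout: float = 2.0) -> bool:
    """Treat a region as available if its load balancer accepts TCP connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def dns_answers() -> list[str]:
    """Return only the regions that pass the check; an unavailable region is
    omitted from the answer, so user traffic flows to the remaining regions."""
    return [host for host, port in REGIONAL_LBS.values() if region_available(host, port)]
```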