Project Overview
1. Use HAProxy to load-balance user requests across the back-end web servers, with health checks on the backends.
2. Use keepalived to make the HAProxy layer highly available and eliminate it as a single point of failure.
3. Run one VIP on each of HK-01 and HK-02, giving an active/active (dual-master) setup.
4. Use DNS round-robin over the two VIPs to balance client traffic into the cluster (not demonstrated here).
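Step 4 comes down to publishing both VIPs as A records for one service name, so the DNS server rotates the answer order between queries. A hypothetical BIND zone fragment (the name www.example.com is assumed for illustration, not part of this lab):

```
; Two A records for the same name: BIND rotates the answer order,
; so successive clients alternate between the two VIPs
www     IN      A       172.16.4.1
www     IN      A       172.16.4.2
```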
Lab Topology
Environment:
| Host   | IP Address   | Role                                                                           |
|--------|--------------|--------------------------------------------------------------------------------|
| HK-01  | 172.16.4.100 | Dispatches user requests to the back-end web servers; mutual backup with HK-02 |
| HK-02  | 172.16.4.101 | Dispatches user requests to the back-end web servers; mutual backup with HK-01 |
| WEB-01 | 172.16.4.102 | Serves web content                                                             |
| WEB-02 | 172.16.4.103 | Serves web content                                                             |
| DNS    | 172.16.4.10  | Round-robin DNS resolution of the service domain                               |
| VIP1   | 172.16.4.1   | Cluster entry address; may sit on HK-01 or HK-02                               |
| VIP2   | 172.16.4.2   | Cluster entry address; may sit on HK-01 or HK-02                               |
Configuration
Back-end Web Server Configuration
The web server setup is trivial: put a test page in place and start the web service.
Web-01:
[root@web-01 ~]# echo "web-01" > /var/www/html/index.html
[root@web-01 ~]# service httpd start

Web-02:
[root@web-02 ~]# echo "web-02" > /var/www/html/index.html
[root@web-02 ~]# service httpd start
From an HK node, curl each back-end server to verify it is serving:
[root@hk-01 ~]# curl 172.16.4.102
web-01
[root@hk-01 ~]# curl 172.16.4.103
web-02
If the pages set above come back, the web services are working.
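A small loop (a sketch; run it from one of the HK nodes) can confirm both backends respond before wiring them into HAProxy:

```shell
# Probe each backend; curl's exit status tells us whether it answered
for ip in 172.16.4.102 172.16.4.103; do
  if curl -s --max-time 2 "http://$ip/" >/dev/null; then
    echo "$ip up"
  else
    echo "$ip down"
  fi
done
```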
HAProxy + keepalived Configuration
Install haproxy and keepalived on both HK nodes:
[root@hk-01 ~]# yum -y install haproxy
[root@hk-01 ~]# yum -y install keepalived
[root@hk-02 ~]# yum -y install haproxy
[root@hk-02 ~]# yum -y install keepalived
Adjust the kernel so haproxy can start whether or not the VIP is currently on the node (binding to a non-local address is otherwise refused):
[root@hk-01 ~]# echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
[root@hk-01 ~]# sysctl -p
[root@hk-02 ~]# echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
[root@hk-02 ~]# sysctl -p
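One caveat: appending blindly with `>>` adds a duplicate line every time it is re-run. An idempotent variant, sketched here against a temp file rather than the real /etc/sysctl.conf:

```shell
# Append the setting only if it is not already present
conf=$(mktemp)                      # stand-in for /etc/sysctl.conf in this demo
grep -q '^net\.ipv4\.ip_nonlocal_bind' "$conf" || \
  echo 'net.ipv4.ip_nonlocal_bind = 1' >> "$conf"
grep -q '^net\.ipv4\.ip_nonlocal_bind' "$conf" || \
  echo 'net.ipv4.ip_nonlocal_bind = 1' >> "$conf"  # re-run: adds nothing
grep -c '^net\.ipv4\.ip_nonlocal_bind' "$conf"     # prints 1, not 2
rm -f "$conf"
```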
HAProxy Setup
The configuration file is identical on both nodes, so only one copy is shown:
[root@hk-01 ~]# vim /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
    stats enable
    stats uri /admin?stats
    stats auth proxy:proxy

listen www1
    bind 172.16.4.1:80
    mode http
    option forwardfor
    server www01 172.16.4.102:80 check
    server www02 172.16.4.103:80 check

listen www2
    bind 172.16.4.2:80
    mode tcp
    server www01 172.16.4.102:80 check
    server www02 172.16.4.103:80 check
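The `check` keyword on each `server` line is what drives HAProxy's health checking: by default, a TCP connect probe against the server's port at regular intervals. The probe interval and up/down thresholds can be tuned per server; a hypothetical example (not part of the lab config above):

```
# probe every 2000 ms; 3 consecutive failures mark the server down,
# 2 consecutive successes bring it back into rotation
server www01 172.16.4.102:80 check inter 2000 fall 3 rise 2
```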
keepalived Setup
HK-01:
[root@hk-01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}

vrrp_script chk_mt_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -5
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass asdfgh
    }
    virtual_ipaddress {
        172.16.4.1/32 brd 172.16.4.1 dev eth0 label eth0:0
    }
    track_script {
        chk_mt_down
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass qwerty
    }
    virtual_ipaddress {
        172.16.4.2
    }
    track_script {
        chk_mt_down
    }
}
HK-02:
[root@hk-02 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}

vrrp_script chk_mt_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -5
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass asdfgh
    }
    virtual_ipaddress {
        172.16.4.1/32 brd 172.16.4.1 dev eth0 label eth0:0
    }
    track_script {
        chk_mt_down
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass qwerty
    }
    virtual_ipaddress {
        172.16.4.2
    }
    track_script {
        chk_mt_down
    }
}
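The `chk_mt_down` script gives an operator a manual kill switch: touching /etc/keepalived/down makes the check exit non-zero, keepalived subtracts the `weight` (5) from the node's priority, and the peer node wins the VRRP election. The check logic can be exercised standalone (sketched with a temp path and POSIX `[` in place of the real file and bash `[[`):

```shell
# Same test keepalived runs every second; a non-zero exit fails the check
down=$(mktemp -u)                        # a path that does not exist yet
[ -f "$down" ] && echo "check fails" || echo "check passes"   # no file: passes
touch "$down"
[ -f "$down" ] && echo "check fails" || echo "check passes"   # file present: fails
rm -f "$down"
```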
Once both nodes start haproxy and keepalived, the cluster is complete:
[root@hk-01 ~]# service haproxy start
[root@hk-01 ~]# service keepalived start
[root@hk-02 ~]# service haproxy start
[root@hk-02 ~]# service keepalived start
Verification
The HAProxy stats page shows both the www1 and www2 instances as healthy.
The VIP addresses also come up correctly:
[root@hk-01 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:22:c5:c2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.4.100/16 brd 172.16.255.255 scope global eth0
    inet 172.16.4.1/32 brd 172.16.4.1 scope global eth0:0
    inet6 fe80::20c:29ff:fe22:c5c2/64 scope link
       valid_lft forever preferred_lft forever
[root@hk-02 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f1:dd:b2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.4.101/16 brd 172.16.255.255 scope global eth0
    inet 172.16.4.2/32 scope global eth0
    inet6 fe80::20c:29ff:fef1:ddb2/64 scope link
       valid_lft forever preferred_lft forever
Load-Balancing Test
Requests to either VIP are spread across both backends:
[root@client ~]# curl 172.16.4.1
web-02
[root@client ~]# curl 172.16.4.1
web-01
[root@client ~]# curl 172.16.4.2
web-01
[root@client ~]# curl 172.16.4.2
web-02
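With more requests, the distribution is easier to see by piping the responses through `sort | uniq -c` to tally how often each backend answered (shown here on the four sample responses above):

```shell
# Count how many times each backend answered
printf 'web-02\nweb-01\nweb-01\nweb-02\n' | sort | uniq -c   # two hits each
```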
High-Availability Test
Manually take HK-02 out of service and verify that its VIP automatically fails over to HK-01:
[root@hk-02 ~]# touch /etc/keepalived/down
Both VIP addresses have moved to HK-01 as expected:
[root@hk-01 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:22:c5:c2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.4.100/16 brd 172.16.255.255 scope global eth0
    inet 172.16.4.1/32 brd 172.16.4.1 scope global eth0:0
    inet 172.16.4.2/32 scope global eth0
    inet6 fe80::20c:29ff:fe22:c5c2/64 scope link
       valid_lft forever preferred_lft forever
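A quick filter over the `ip addr` output shows which /32 VIPs a node currently holds (demonstrated on the sample output above; on a live node, pipe `ip addr show dev eth0` in directly):

```shell
# Secondary /32 addresses are the keepalived VIPs; print just those
sample='    inet 172.16.4.100/16 brd 172.16.255.255 scope global eth0
    inet 172.16.4.1/32 brd 172.16.4.1 scope global eth0:0
    inet 172.16.4.2/32 scope global eth0'
echo "$sample" | awk '/inet .*\/32/ {print $2}'
```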
Health-Check Test
Manually stop web-02 and verify it is taken out of rotation automatically:
[root@web-02 ~]# service httpd stop
The stats page now shows WEB-02 as down.
Access check: requests are no longer dispatched to web-02:
[root@client ~]# curl 172.16.4.1
web-01
[root@client ~]# curl 172.16.4.1
web-01
[root@client ~]# curl 172.16.4.2
web-01
[root@client ~]# curl 172.16.4.2
web-01
Date: 2024-12-24 01:32:00