#
# IP Virtual Server configuration
#
menuconfig IP_VS
	tristate "IP virtual server support"
	depends on NET && INET && NETFILTER
	depends on (NF_CONNTRACK || NF_CONNTRACK=n)
	---help---
	  IP Virtual Server support will let you build a high-performance
	  virtual server based on a cluster of two or more real servers. This
	  option must be enabled for at least one of the clustered computers
	  that will take care of intercepting incoming connections to a
	  single IP address and scheduling them to real servers.

	  Three request dispatching techniques are implemented: virtual
	  server via NAT, virtual server via tunneling and virtual server
	  via direct routing. Several scheduling algorithms can be used to
	  choose which server a connection is directed to, thus achieving
	  load balancing among the servers. For more information and its
	  administration program, please visit the following URL:
	  <http://www.linuxvirtualserver.org/>.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.
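
# A minimal userspace sketch (assuming the ipvsadm administration tool from
# the URL above; all addresses are illustrative). One line per dispatching
# technique when adding real servers to a virtual service:
#   ipvsadm -A -t 192.0.2.1:80 -s rr              # add a virtual TCP service (round-robin)
#   ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.2:80 -m  # real server via NAT (masquerading)
#   ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.3:80 -g  # real server via direct routing
#   ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.4:80 -i  # real server via tunneling (ipip)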

if IP_VS

config IP_VS_IPV6
	bool "IPv6 support for IPVS"
	depends on IPV6 = y || IP_VS = IPV6
	select IP6_NF_IPTABLES
	---help---
	  Add IPv6 support to IPVS.

	  Say Y if unsure.

config IP_VS_DEBUG
	bool "IP virtual server debugging"
	---help---
	  Say Y here if you want to get additional messages useful in
	  debugging the IP virtual server code. You can change the debug
	  level in /proc/sys/net/ipv4/vs/debug_level.
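
# Runtime example using the sysctl path named in the help text above
# (higher values are noisier; 0 disables debug output):
#   echo 5 > /proc/sys/net/ipv4/vs/debug_level
#   echo 0 > /proc/sys/net/ipv4/vs/debug_level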

config IP_VS_TAB_BITS
	int "IPVS connection table size (the Nth power of 2)"
	range 8 20
	default 12
	---help---
	  The IPVS connection hash table uses the chaining scheme to handle
	  hash collisions. Using a big IPVS connection hash table will greatly
	  reduce conflicts when there are hundreds of thousands of connections
	  in the hash table.

	  Note that the table size must be a power of 2. The table size will
	  be 2 raised to the power of the number you enter. That number may
	  range from 8 to 20; the default is 12, which means a table size of
	  4096. Don't choose a number that is too small, otherwise you will
	  lose performance. You can adapt the table size to your virtual
	  server application. It is good to set the table size not far below
	  the number of connections per second multiplied by the average
	  time a connection stays in the table. For example, if your virtual
	  server gets 200 connections per second and a connection lasts 200
	  seconds on average, the table size should be not far below
	  200x200; 32768 (2**15) is a good choice.

	  Note also that each connection effectively occupies 128 bytes and
	  each hash entry uses 8 bytes, so you can estimate how much memory
	  your box needs.

	  You can override this value by setting the conn_tab_bits module
	  parameter, or by appending ip_vs.conn_tab_bits=? to the kernel
	  command line if IP_VS was compiled built-in.
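
# Override examples matching the sizing discussion above (15 follows the
# 200 conn/s x 200 s example, since 2**15 = 32768 is not far below 40000):
#   modprobe ip_vs conn_tab_bits=15      # when IP_VS is built as a module
# or, when built-in, append to the kernel command line:
#   ip_vs.conn_tab_bits=15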

comment "IPVS transport protocol load balancing support"

config IP_VS_PROTO_TCP
	bool "TCP load balancing support"
	---help---
	  This option enables support for load balancing the TCP transport
	  protocol. Say Y if unsure.

config IP_VS_PROTO_UDP
	bool "UDP load balancing support"
	---help---
	  This option enables support for load balancing the UDP transport
	  protocol. Say Y if unsure.

config IP_VS_PROTO_AH_ESP
	def_bool IP_VS_PROTO_ESP || IP_VS_PROTO_AH

config IP_VS_PROTO_ESP
	bool "ESP load balancing support"
	---help---
	  This option enables support for load balancing the ESP
	  (Encapsulation Security Payload) transport protocol. Say Y if
	  unsure.

config IP_VS_PROTO_AH
	bool "AH load balancing support"
	---help---
	  This option enables support for load balancing the AH
	  (Authentication Header) transport protocol. Say Y if unsure.

config IP_VS_PROTO_SCTP
	bool "SCTP load balancing support"
	select LIBCRC32C
	---help---
	  This option enables support for load balancing the SCTP transport
	  protocol. Say Y if unsure.

comment "IPVS scheduler"

config IP_VS_RR
	tristate "round-robin scheduling"
	---help---
	  The round-robin scheduling algorithm simply directs network
	  connections to different real servers in a round-robin manner.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_WRR
	tristate "weighted round-robin scheduling"
	---help---
	  The weighted round-robin scheduling algorithm directs network
	  connections to different real servers based on server weights
	  in a round-robin manner. Servers with higher weights receive
	  new connections before those with lower weights, and they get
	  more connections than servers with lower weights; servers with
	  equal weights get an equal share of new connections.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.
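
# Weight example (hypothetical addresses; ipvsadm is the userspace tool):
# with weights 3 and 1, the first real server receives roughly three of
# every four new connections.
#   ipvsadm -A -t 192.0.2.1:80 -s wrr
#   ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.2:80 -m -w 3
#   ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.3:80 -m -w 1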

config IP_VS_LC
	tristate "least-connection scheduling"
	---help---
	  The least-connection scheduling algorithm directs network
	  connections to the server with the least number of active
	  connections.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_WLC
	tristate "weighted least-connection scheduling"
	---help---
	  The weighted least-connection scheduling algorithm directs network
	  connections to the server with the least number of active
	  connections normalized by the server weight.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.
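
# Worked illustration of the normalization (numbers are invented): with
# active connections C = (3, 6) and weights W = (1, 3), the normalized
# loads are 3/1 = 3.0 and 6/3 = 2.0, so WLC picks the second server even
# though it holds more raw connections.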

config IP_VS_FO
	tristate "weighted failover scheduling"
	---help---
	  The weighted failover scheduling algorithm directs network
	  connections to the server with the highest weight that is
	  currently available.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_OVF
	tristate "weighted overflow scheduling"
	---help---
	  The weighted overflow scheduling algorithm directs network
	  connections to the server with the highest weight that is
	  currently available, and overflows to the next one when the
	  number of active connections exceeds the node's weight.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.
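
# Worked illustration (numbers are invented): with weights (100, 50), new
# connections go to the first server until it holds 100 active connections;
# further connections then overflow to the second server until it in turn
# reaches 50.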

config IP_VS_LBLC
	tristate "locality-based least-connection scheduling"
	---help---
	  The locality-based least-connection scheduling algorithm is for
	  destination IP load balancing. It is usually used in cache
	  clusters. This algorithm usually directs packets destined for an
	  IP address to its server if the server is alive and under load.
	  If the server is overloaded (its number of active connections is
	  larger than its weight) and there is a server at half of its
	  load, then the weighted least-connection server is allocated to
	  this IP address.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_LBLCR
	tristate "locality-based least-connection with replication scheduling"
	---help---
	  The locality-based least-connection with replication scheduling
	  algorithm is also for destination IP load balancing. It is
	  usually used in cache clusters. It differs from LBLC scheduling
	  as follows: the load balancer maintains mappings from a target
	  to a set of server nodes that can serve the target. Requests for
	  a target are assigned to the least-connection node in the target's
	  server set. If all the nodes in the server set are overloaded,
	  it picks up a least-connection node in the cluster and adds it
	  to the server set for the target. If the server set has not been
	  modified for the specified time, the most loaded node is removed
	  from the server set, in order to avoid a high degree of
	  replication.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.
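
# The "specified time" above is tunable at runtime. Assumption: the sysctl
# name below follows Documentation/networking/ipvs-sysctl.txt; verify on
# your kernel before relying on it.
#   echo 86400 > /proc/sys/net/ipv4/vs/lblcr_expiration   # seconds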

config IP_VS_DH
	tristate "destination hashing scheduling"
	---help---
	  The destination hashing scheduling algorithm assigns network
	  connections to the servers by looking up a statically assigned
	  hash table keyed by their destination IP addresses.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_SH
	tristate "source hashing scheduling"
	---help---
	  The source hashing scheduling algorithm assigns network
	  connections to the servers by looking up a statically assigned
	  hash table keyed by their source IP addresses.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_SED
	tristate "shortest expected delay scheduling"
	---help---
	  The shortest expected delay scheduling algorithm assigns network
	  connections to the server with the shortest expected delay. The
	  expected delay that a job will experience is (Ci + 1) / Ui if
	  sent to the ith server, in which Ci is the number of connections
	  on the ith server and Ui is the fixed service rate (weight) of
	  the ith server.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.
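
# Worked example of (Ci + 1) / Ui (numbers are invented): with connection
# counts C = (6, 2, 0) and weights U = (4, 2, 1), the expected delays are
# (6+1)/4 = 1.75, (2+1)/2 = 1.5 and (0+1)/1 = 1.0, so SED sends the new
# connection to the third server.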

config IP_VS_NQ
	tristate "never queue scheduling"
	---help---
	  The never queue scheduling algorithm adopts a two-speed model.
	  When there is an idle server available, the job will be sent to
	  the idle server instead of waiting for a fast one. When there is
	  no idle server available, the job will be sent to the server that
	  will minimize its expected delay (the Shortest Expected Delay
	  scheduling algorithm).

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.
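
# Contrast with SED (numbers are invented): with connection counts
# C = (2, 0) and weights U = (4, 1), SED computes (2+1)/4 = 0.75 versus
# (0+1)/1 = 1.0 and would pick the first server, but NQ sends the job to
# the idle second server outright; SED only applies once no server is idle.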

comment "IPVS SH scheduler"

config IP_VS_SH_TAB_BITS
	int "IPVS source hashing table size (the Nth power of 2)"
	range 4 20
	default 8
	---help---
	  The source hashing scheduler maps source IPs to destinations
	  stored in a hash table. This table is tiled by each destination
	  until all slots in the table are filled. When using weights to
	  allow destinations to receive more connections, the table is
	  tiled an amount proportional to the weights specified. The table
	  needs to be large enough to effectively fit all the destinations
	  multiplied by their respective weights.
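
# Sizing illustration (numbers are invented): 20 destinations whose weights
# sum to 300 need at least 300 slots; the default of 8 gives 2**8 = 256
# slots, which is too small, so 9 (2**9 = 512 slots) is the minimum here.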

comment "IPVS application helper"

config IP_VS_FTP
	tristate "FTP protocol helper"
	depends on IP_VS_PROTO_TCP && NF_CONNTRACK && NF_NAT && \
		NF_CONNTRACK_FTP
	select IP_VS_NFCT
	---help---
	  FTP is a protocol that transfers IP addresses and/or port numbers
	  in its payload. In the virtual server via Network Address
	  Translation, the IP address and port number of real servers cannot
	  be sent to clients in FTP connections directly, so the FTP
	  protocol helper is required for tracking the connection and
	  mangling it back to that of the virtual service.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.
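
# Usage sketch when built as a module (assumption: like other netfilter
# helpers, the module accepts a "ports" parameter for the FTP control
# port; verify with modinfo ip_vs_ftp on your kernel):
#   modprobe ip_vs_ftp ports=21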

config IP_VS_NFCT
	bool "Netfilter connection tracking"
	depends on NF_CONNTRACK
	---help---
	  The Netfilter connection tracking support allows the IPVS
	  connection state to be exported to the Netfilter framework
	  for filtering purposes.

config IP_VS_PE_SIP
	tristate "SIP persistence engine"
	depends on IP_VS_PROTO_UDP
	depends on NF_CONNTRACK_SIP
	---help---
	  Allow persistence based on the SIP Call-ID.
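
# Example of attaching the persistence engine to a UDP virtual service
# (assuming an ipvsadm build that supports --pe; address and timeout are
# illustrative):
#   ipvsadm -A -u 192.0.2.1:5060 -s rr -p 300 --pe sip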

endif # IP_VS