Quick links
Quick News
Recent News
Description
Design Choices
Supported Platforms
Performance
Reliability
Security
Download
Documentation
Live demo
They use it!
Commercial Support
Products using HAProxy
Add-on features
Other Solutions
Contacts
External links
Mailing list archives
10GbE load-balancing (updated)
Contributions
Coding style
Known bugs
Web Based User Interface
HATop: Ncurses Interface
Quick News
Nov 22nd, 2012 : Development 1.5-dev13 with Compression!
This is the largest development version ever issued, 295 patches in 2 months!
We managed to keep the Exceliance team busy all the time, which means that
the code is becoming more modular with fewer cross-dependencies. I really like this !
First, we got an amazing amount of feedback from early adopters of dev12. It seems SSL had been awaited for too long
a time. We really want to thank all those who contributed patches, feedback, configs, cores (yes, there were some) and even live gdb
access; you know who you are and you deserve a big thanks for this!
The git log says there were 55 bugs fixed since dev12 (a few of them may have been introduced in between). Still, this
means that dev12 should be avoided as much as possible, which is why I redirected many of you to more recent snapshots.
These bugs aside, I'm proud to say that the whole team did a really great job, which can be summarized like this :
SSL
many more features : client and server certificates are supported on both
sides, with CA and CRL checks. Most of the information available in SSL
can be used in ACLs for access control. Some information such as the protocol
and ciphers can be reported in the logs. This information is still not
added to HTTP logs though; a lot of config work is still needed.
The SSL cache lifetime and the maximum number of concurrent SSL connections
can be set. Unfortunately, OpenSSL happily dereferences NULL malloc() returns
and causes the process to die if memory becomes scarce, so the only way to
limit the memory it uses is to limit its maximum number of connections.
TLS NPN was implemented with help from Simone Bordet from Jetty,
and can be used to offload SSL/TLS for SPDY and to direct traffic to a
different server depending on the protocol chosen by the client.
Ivan Ristic from ssllabs and
Andy Humphreys from Robinson-way provided very valuable help in diagnosing
and fixing some nasty issues with aborts on failed handshakes, and in improving
from an E-grade to an A-grade.
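As an illustration, a certificate-checking setup could be sketched roughly as below. The file names are examples and the exact keyword set should be verified against the 1.5 documentation; this is not a reference configuration :

    frontend https-in
        # terminate SSL/TLS; request client certificates and check them against a CA, with CRL checking
        bind :443 ssl crt /etc/haproxy/site.pem ca-file /etc/haproxy/clients-ca.pem crl-file /etc/haproxy/clients-crl.pem verify required
        default_backend app

    backend app
        # re-encrypt towards the server using the new server-side SSL support
        server web1 192.168.0.10:443 ssl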
HTTP Compression
HTTP payload compression was implemented at Exceliance to reduce
bandwidth usage and page load time on congested or
small links. Compression is extremely CPU and memory intensive, so we
spent most of the time developing dynamic adaptations. It is possible
to limit the maximum RAM dedicated to compression, the CPU usage
threshold and bandwidth thresholds above which compression is disabled.
It is even possible to adjust some of these settings from the stats
socket and to monitor bandwidth savings in real time. Proceeding like
this ensures a high reliability at low cost and with little added
latency. I've put it on the haproxy web site with nice bandwidth savings
(72% on compressible objects, 50% overall, considering that most
downloads are already-compressed source archives). I must say I'm very happy with this new
feature, which will reduce bandwidth costs in hosted infrastructures ! And
it goes back to the origins of haproxy in zprox 14 years ago :-)
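For illustration only, a compression setup along those lines might look like the sketch below; the global limit keywords in particular are assumptions to double-check against the documentation :

    global
        # assumed names for the global limits : stop compressing above 80% CPU usage
        maxcompcpuusage 80
        # and above an overall compression rate of 10000 kB/s
        maxcomprate 10000

    frontend web
        bind :80
        # compress common text content with gzip
        compression algo gzip
        compression type text/html text/plain text/css application/javascript
        default_backend app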
Health checks
SSL is now usable with health checks. By default it is enabled if the
server has the "ssl" keyword and no "port" or "addr" setting. It
can be forced using "check-ssl" otherwise. So running an HTTPS
health check now simply consists of using "option httpchk" with "ssl" on
the server.
send-proxy is also usable with health checks, with the same rules as
above, and the "check-send-proxy" directive to force it. The checks
also respect the updated PROXY protocol spec, which suggests sending real
addresses with health checks instead of unknown addresses. This makes
it compatible with some products such as Postfix 2.10, for example.
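A short sketch of how those keywords fit together, with example addresses :

    backend app
        option httpchk GET /status
        # "ssl" on the server line enables SSL for the health check as well
        server web1 192.168.0.10:443 ssl check
        # a server reached through the PROXY protocol; "check-send-proxy" forces the header on checks too
        server web2 192.168.0.11:80 send-proxy check check-send-proxy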
Polling
Speculative polling was generalized to all pollers, and sepoll
disappeared as it was superseded by epoll. The main reason for this
important change is the way OpenSSL works and the fact that it can
easily get stuck with some data in buffers and no I/O event to
unblock them. So we had to engage in this difficult change. I'd
have preferred to delay it to 1.6 had I been offered the choice ! But
in the end this is good because it's done, and it improves both
performance and reliability. Even select() and poll() are now fast.
The maxaccept setting was too low on some platforms to achieve the
highest possible performance, so it was doubled to 64 and is now per-listener,
so that it automatically adjusts to the number of processes
the listener is bound to. This ensures both the best performance in
single-process mode and quite good fairness in multi-process mode.
Platform improvements
Linux 3.6 TCP Fast Open is supported on listeners (the "tfo" bind keyword).
This is used to allow compatible clients to re-establish a TCP connection
in a single packet and save one round trip. The kernel code for this is
still young, so I'd be interested in any feedback.
The use of accept4() on Linux >= 2.6.28 saves one system call per accepted connection.
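Enabling TCP Fast Open is just a matter of adding the keyword on the listener, roughly as sketched here :

    frontend web
        # enable TCP Fast Open on the listening socket (requires Linux >= 3.6)
        bind :80 tfo
        default_backend app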
Process management
The stats socket can now be bound to specific processes, which is useful
to monitor a single process.
"bind-process" now supports ranges instead of silently ignoring them.
"cpu-map" establishes a mapping between process numbers and CPU cores.
This is important when running SSL offloaders on dedicated processes
because you don't want them to pollute the low-latency L7 core.
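Put together, a multi-process layout could be sketched as below; the "process" parameter on the stats socket in particular is an assumption to verify against the documentation :

    global
        daemon
        nbproc 4
        # pin each process to its own CPU core, keeping process 1 for low-latency L7 work
        cpu-map 1 0
        cpu-map 2 1
        cpu-map 3 2
        cpu-map 4 3
        # one stats socket bound to a single process (assumed "process" parameter)
        stats socket /var/run/haproxy.sock level admin process 1

    frontend ssl-offload
        # dedicate processes 2-4 to SSL offloading, away from the L7 core
        bind-process 2-4
        bind :443 ssl crt /etc/haproxy/site.pem
        default_backend app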
Misc
"redirect scheme" makes it easier to redirect between HTTP and HTTPS. Config error reporting was improved for "bind" and "server" lines by dynamically enumerating the list of supported options.
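The typical use of "redirect scheme" is sending clear-text visitors to HTTPS, roughly like this sketch (the certificate path is an example) :

    frontend web
        bind :80
        bind :443 ssl crt /etc/haproxy/site.pem
        acl is_ssl ssl_fc
        # send clear-text visitors to the HTTPS side
        redirect scheme https if !is_ssl
        default_backend app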
I must say I'm much more confident in dev13 than I was in dev12, and I have already upgraded the main web site, which
had been running recent snapshots updated every few days. I've built and run it on Linux i586/x86_64/armv5/v7,
OpenBSD/amd64 and Solaris/sparc without any issues.
To all those running SSL tests on dev12, please drop it for dev13. I don't think we introduced regressions (but that's still
possible), but I know for sure that we fixed a lot! The usual changelog
and source are available at the usual place.
Recent news...
Latest versions
Branch      | Description   | Last version | Released   | Links           | Notes
Development | Development   | 1.5-dev13    | 2012/11/22 | git / web / dir | may be broken
1.4         | 1.4-stable    | 1.4.22       | 2012/08/14 | git / web / dir | Stable version
1.3         | 1.3-stable    | 1.3.26       | 2011/08/05 | git / web / dir | Critical fixes only
1.3.15      | 1.3.15-maint  | 1.3.15.13    | 2011/08/05 | git / web / dir | Critical fixes only
1.3.14      | 1.3.14-maint  | 1.3.14.14    | 2009/07/27 | git / web / dir | Unmaintained
1.2         | 1.2-stable    | 1.2.18       | 2008/05/25 | git / web / dir | Unmaintained
1.1         | 1.1-stable    | 1.1.34       | 2006/01/29 | git / web / dir | Unmaintained
1.0         | 1.0-old       | 1.0.2        | 2001/12/30 | git / web / dir | Unmaintained
Description
HAProxy is a free, very fast and reliable solution offering
high availability,
load balancing, and
proxying for TCP and HTTP-based applications. It is particularly suited for web
sites crawling under very high loads while needing persistence or Layer7
processing. Supporting tens of thousands of connections is clearly
realistic with today's hardware. Its mode of operation makes its integration
into existing architectures very easy and riskless, while still offering the
possibility not to expose fragile web servers to the Net.
Currently, two major versions are supported :
- version 1.4 - more flexibility
This version has brought its share of new features over 1.2, most of which were long awaited :
  - client-side keep-alive to reduce the time to load heavy pages for clients over the net,
  - TCP speedups to help the TCP stack save a few packets per connection,
  - response buffering for an even lower number of concurrent connections on the servers,
  - RDP protocol support with server stickiness and user filtering,
  - source-based stickiness to attach a source address to a server,
  - a much better stats interface reporting tons of useful information,
  - more verbose health checks reporting precise statuses and responses in stats and logs,
  - traffic-based health to fast-fail a server above a certain error threshold,
  - support for HTTP authentication for any request including stats, with support for password encryption,
  - server management from the CLI to enable/disable and change a server's weight without restarting haproxy,
  - ACL-based persistence to maintain or disable persistence based on ACLs, regardless of the server's state,
  - a log analyzer to generate fast reports from logs parsed at 1 Gbyte/s.
- version 1.3 - content switching and extreme loads
This version has brought a lot of new features and improvements over 1.2, among which :
  - content switching to select a server pool based on any request criteria, and ACLs to write the content switching rules (a small example follows this list),
  - a wider choice of load-balancing algorithms for better integration,
  - content inspection, allowing unexpected protocols to be blocked,
  - transparent proxying under Linux, which allows connecting directly to the server using the client's IP address,
  - kernel TCP splicing to forward data between the two sides without a copy, in order to reach multi-gigabit data rates,
  - a layered design separating sockets, TCP and HTTP processing, for more robust and faster processing and easier evolutions,
  - a fast and fair scheduler allowing better QoS by assigning priorities to some tasks,
  - session rate limiting for colocated environments, etc.
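As a small illustration of content switching with ACLs, here is a sketch using example backend names and paths :

    frontend www
        bind :80
        # route static content to a dedicated pool, everything else to the application servers
        acl is_static path_beg /static /images /css
        use_backend static-pool if is_static
        default_backend app-pool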
Version 1.2 has been in production use since 2006 and provided an improved performance level
on top of 1.1. It is not maintained anymore, as most of its users have switched to 1.3 a long
time ago. Version 1.1, which has been maintaining critical sites online since 2002, is not
maintained anymore either. Users should upgrade to 1.4.
For a long time, HAProxy was used only by a few hundred people around the world running
very big sites serving several million hits and from several tens of gigabytes to
several terabytes per day to hundreds of thousands of clients, who needed 24x7
availability and who had the internal skills to take the risk of maintaining a free software solution. Over
the years, things have changed a bit: HAProxy has become the de-facto standard load balancer,
and it's often installed by default in cloud environments. Since it does not advertise itself,
we only know it's used when the admins report it :-)
Design Choices and history
HAProxy implements an event-driven, single-process model which
enables support for a very high number of simultaneous connections at very high speeds. Multi-process
or multi-threaded models can rarely cope with thousands of connections because
of memory limits, system scheduler limits, and lock contention everywhere.
Event-driven models do not have these problems because implementing all the
tasks in user-space allows finer resource and time management. The downside
is that these programs generally don't scale well on multi-processor systems.
That's the reason why they must be optimized to get the most work done from
every CPU cycle.
It began in 1996 when I wrote Webroute, a very simple HTTP proxy able
to set up a modem access. But its multi-process model clobbered its
performance for uses other than home access. Two years later, in 1998, I
wrote the event-driven
ZProx, used to compress TCP
traffic to accelerate modem lines. That was when I first understood the
difficulty of event-driven models. In 2000, while benchmarking a buggy
application, I heavily modified ZProx to introduce very dirty support for
HTTP header rewriting. HAProxy's ancestor was born. The first versions did
not perform the load balancing themselves, but it quickly proved necessary.
Now in 2009, the core engine is reliable and very robust. Event-driven
programs are robust and fragile at the same time : their code
needs very careful changes, but the resulting executable handles high loads
and supports attacks without ever failing. This is the reason why HAProxy only
supports a carefully chosen set of features. HAProxy has never ever crashed in a
production environment. This is something people are not used to nowadays,
because the most common thing new users tell me is that they're amazed it
has never crashed ;-)
People often ask for
SSL and
Keep-Alive support. Both features would complicate the code and render it
fragile for several releases. Moreover, both features have a negative
impact on performance :
- Having SSL in the load balancer itself means that it becomes the
bottleneck. When the load balancer's CPU is saturated, the overall response
times will increase and the only solution will be to multiply the load
balancers and put yet another load balancer in front of them. The only scalable
solution is to have an SSL/cache layer between the clients and the load
balancer. Still, for small sites it makes sense to embed SSL, and
this is currently being studied. There has been some work on the
CyaSSL library to ease integration
with HAProxy, as it appears to be the only one out there to let you manage
your memory yourself. Update [2012/09/11] : native SSL support was
implemented in 1.5-dev12. The points above about CPU usage are still valid
though.
- Keep-alive was invented to reduce CPU usage on servers when CPUs were 100
times slower. But what is not said is that persistent connections consume a
lot of memory while not being usable by anybody except the client who
opened them.
Today in 2009, CPUs are very cheap and memory is still limited to a few gigabytes
by the architecture or the price. If a site needs keep-alive, there
is a real problem. Highly loaded sites often disable keep-alive to support
the maximum number of simultaneous clients. The real downside of not having
keep-alive is a slightly increased latency to fetch objects. Browsers double
the number of concurrent connections on non-keepalive sites to compensate for
this.
With version 1.4, keep-alive with the client was introduced. It resulted in
lower access times to load pages composed of many objects, without the cost
of maintaining an idle connection to the server. It is a good trade-off. 1.5
will bring keep-alive to the server, but it will probably make sense only with
static servers.
However, I'm planning on implementing both features in future versions, because
it appears that some users mostly need availability above performance; for them,
having both features will not noticeably impact their performance, and it will
reduce the number of components.
Supported platforms
HAProxy is known to reliably run on the following OS/Platforms :
- Linux 2.4 on x86, x86_64, Alpha, SPARC, MIPS, PARISC
- Linux 2.6 on x86, x86_64, ARM (ixp425), PPC64
- Solaris 8/9 on UltraSPARC 2 and 3
- Solaris 10 on Opteron and UltraSPARC
- FreeBSD 4.10 - 8 on x86
- OpenBSD 3.1 to -current on i386, amd64, macppc, alpha, sparc64 and VAX (check the ports)
Highest performance should be achieved with haproxy versions newer than 1.2.5
running on Linux 2.6, or
epoll-patched
Linux kernel 2.4. This is only due to a very OS-specific optimization : the
default polling system for version 1.1 is select(), which is common
among most OSes but can become slow when dealing with thousands of
file descriptors. Versions 1.2 and 1.3 use poll() by default instead
of select(), but on some systems it may even be slower. However, it is
recommended on Solaris, where its implementation is rather good. Haproxy 1.3 will
automatically use epoll on Linux 2.6 and patched Linux 2.4, and
kqueue on FreeBSD and OpenBSD. Both mechanisms achieve constant
performance at any load and are thus preferred over poll().
On very recent Linux 2.6 (>= 2.6.27.19), HAProxy can use the new splice() syscall
to forward data between interfaces without any copy. Performance above 10 Gbps may
only be achieved that way.
Based on those facts, people looking for a very fast load balancer should
consider the following options on x86 or x86_64 hardware, in this order :
- HAProxy 1.4 on Linux 2.6.32+
- HAProxy 1.4 on Linux 2.4 +
epoll patch
- HAProxy 1.4 on FreeBSD
- HAProxy 1.4 on Solaris 10
Current typical 1U servers equipped with a dual-core Opteron or Xeon generally
achieve between 15000 and 40000 hits/s and have no trouble saturating 2 Gbps
under Linux.
Performance
Well, since a user's testimony is better than a long demonstration, please take a look at
Chris Knight's experience
with haproxy saturating a gigabit fiber on a video download site. Another big data provider
I know constantly pushes between 3 and 4 Gbps of traffic 24 hours a day. Also,
my experiments with Myricom's 10-Gig NICs might be of interest.
HAProxy involves several techniques commonly found in operating system
architectures to achieve the absolute maximum performance :
- a single-process,
event-driven model considerably reduces the cost of
context switch
and the memory usage. Processing several hundred tasks in a millisecond is
possible, and the memory usage is in the order of a few kilobytes per session
while memory consumed in Apache-like
models is more in the order of megabytes per process.
- O(1) event checker on systems that allow it (Linux and FreeBSD)
allowing instantaneous detection of any event on any connection among tens of
thousands.
- Single-buffering without any data copy between reads and writes whenever
possible. This saves a lot of CPU cycles and useful memory bandwidth. Often,
the bottleneck will be the I/O busses between the CPU and the network
interfaces. At 10 Gbps, the memory bandwidth can become a bottleneck too.
- Zero-copy forwarding is possible using the splice() system
call under Linux, and results in real zero-copy starting with Linux 3.5. This
allows a small sub-3 Watt device such as a Seagate Dockstar to forward HTTP
traffic at one gigabit/s.
- MRU
memory allocator using fixed size memory pools for immediate memory
allocation favoring hot cache regions over cold cache ones. This dramatically
reduces the time needed to create a new session.
- work factoring, such as multiple accept() at once, and
the ability to limit the number of accept() per iteration when
running in multi-process mode, so that the load is evenly distributed among
processes.
- tree-based storage, making heavy use of the Elastic Binary tree I have
been developing for several years. This is used to keep timers ordered, to keep
the runqueue ordered, to manage round-robin and least-conn queues, with only
an O(log(N)) cost.
- optimized HTTP header analysis : headers are parsed and interpreted on
the fly, and the parsing is optimized to avoid re-reading any previously
read memory area. Checkpointing is used when an end of buffer is reached with
an incomplete header, so that the parsing does not start again from the
beginning when more data is read. Parsing an average HTTP request typically
takes 2 microseconds on a Pentium-M 1.7 GHz.
- careful reduction of the number of expensive system calls. Most of the
work is done in user-space by default, such as time reading, buffer aggregation,
file-descriptor enabling/disabling.
All these micro-optimizations result in very low CPU usage even on moderate
loads. And even at very high loads, when the CPU is saturated, it is quite common
to note figures like 5% user and 95% system, which means that the
HAProxy process consumes about 20 times less than its system counterpart. This
explains why the tuning of the Operating System is very important.
I personally build my own patched Linux 2.4 kernels, and finely tune a lot of
network sysctls to get the most out of a reasonable machine.
This also explains why Layer 7 processing has little impact on
performance : even if user-space work is doubled, the load distribution
will look more like 10% user and 90% system, which means an effective loss of
only about 5% of processing power. This is why on high-end systems, HAProxy's
Layer 7 performance can easily surpass that of hardware load balancers,
in which complex processing that cannot be performed by ASICs has to be handled by
slow CPUs. Here is the result of a quick benchmark performed on haproxy 1.3.9
at EXOSEC on a single core Pentium 4 with
PCI-Express interfaces:
In short, a hit rate above 10000/s is sustained for objects
smaller than 6 kB, and the Gigabit/s is sustained for
objects larger than 40 kB.
In production, HAProxy has been installed several times as an emergency solution
when very expensive, high-end hardware load balancers suddenly failed on Layer 7
processing. Hardware load balancers process requests at the
packet level and have great difficulty supporting
requests spread across multiple packets, as well as high response
times, because they do no buffering at all. On the
other hand, software load balancers use TCP buffering
and are insensitive to long requests and high response times. A
nice side effect of HTTP buffering is that it
increases the server's connection acceptance rate by reducing
session duration, which leaves room for new requests. New
benchmarks will be executed soon, and results will be
published. Depending on the hardware, expected rates are in the order of a few
tens of thousands of new connections/s with tens of thousands of simultaneous
connections.
There are 3 important factors used to measure a load balancer's performance :
- The session rate
This factor is very important, because it directly determines when the load
balancer will not be able to distribute all the requests it receives. It is
mostly dependent on the CPU.
Sometimes, you will hear about requests/s or hits/s, which are the same as
sessions/s in HTTP/1.0, or in HTTP/1.1 with
keep-alive disabled. Requests/s with keep-alive enabled does not mean
much and is generally useless, because keep-alive very often has
to be disabled to offload the servers under very high loads. This factor is
measured with varying object sizes, the fastest results generally coming from
empty objects (eg: HTTP 302, 304 or 404 response codes).
Session rates above 20000 sessions/s can be achieved on
Dual Opteron systems such as HP-DL145 running a carefully
patched Linux-2.4 kernel. Even Sun's cheapest X2100-M2 achieves 25000 sessions/s in a dual-core 1.8 GHz configuration.
- The session concurrency
This factor is tied to the previous one. Generally, the session rate
will drop when the number of concurrent sessions increases (except with the
epoll polling mechanism). The slower the servers, the higher
the number of concurrent sessions for a same session rate. If a load balancer
receives 10000 sessions per second and the servers respond in 100 ms, then the
load balancer will have 1000 concurrent sessions. This number is limited by the
amount of memory and the amount of file-descriptors the system can
handle. With 8 kB buffers, HAProxy will need about 16 kB per session, which
results in around 60000 sessions per GB of RAM. In practice, socket
buffers in the system also need some memory and 20000 sessions per GB of RAM is
more reasonable. Layer 4 load balancers generally announce millions of
simultaneous sessions because they don't process any data so they don't need
any buffer. Moreover, they are sometimes designed to be used in Direct Server
Return mode, in which the load balancer only sees forward traffic, and which
forces it to keep the sessions for a long time after their end to avoid cutting
sessions before they are closed.
- The data rate
This factor generally varies in the opposite direction to the session rate. It is measured
in Megabytes/s (MB/s), or sometimes in Megabits/s (Mbps). Highest data rates
are achieved with large objects to minimise the overhead caused by session
setup and teardown. Large objects generally increase session concurrency, and
high session concurrency with high data rate requires large amounts of memory
to support large windows. High data rates burn a lot of CPU and bus cycles on
software load balancers because the data has to be copied from the input
interface to memory and then back to the output device. Hardware load balancers
tend to directly switch packets from input port to output port for higher data
rate, but cannot process them and sometimes fail to touch a header or a cookie.
For reference, the Dual Opteron systems described above can saturate 2
Gigabit Ethernet links on large objects, and I know people who constantly
run between 3 and 4 Gbps of real traffic on 10-Gig NICs plugged into quad-core
servers.
A load balancer's performance related to these factors is generally announced for
the best case (eg: empty objects for session rate, large objects for data rate).
This is not because of a lack of honesty from the vendors, but because it is not
possible to tell exactly how a load balancer will behave in every combination. So when those 3
limits are known, the customer should be aware that they will generally run below
all of them. A good rule of thumb on software load balancers is to consider an
average practical performance of half of maximal session and data rates for
average sized objects.
You might be interested in checking the 10-Gigabit/s page.
Reliability - keeping high-traffic sites online since 2002
Being obsessed with reliability, I tried to do my best to ensure a total
continuity of service by design. It's more difficult to design something
reliable from the ground up in the short term, but in the long term it proves
easier to maintain than broken code which tries to hide its own bugs behind
respawning processes and similar tricks.
In single-process programs, you have no right to fail : the smallest bug
will either crash your program, make it spin like mad or freeze. There has not
been any such bug found in the code nor in production for the last 10 years.
HAProxy has been installed on Linux 2.4 systems serving millions of pages
every day,
which have known only one reboot in 3 years, for a complete OS upgrade.
Obviously, they were not directly exposed to the Internet because they did not receive
any patch at all. The kernel was a heavily patched 2.4 with Robert Love's
jiffies64 patches to support time wrap-around at 497 days (which
happened twice). On such systems, the software cannot fail without being
immediately noticed !
Right now, it's being used in several Fortune 500 companies around the world to
reliably serve millions of pages per day or relay huge amounts of money. Some
people even trust it so much that they use it as the default solution to solve
simple problems (and I often tell them that they do it the dirty way). Such
people sometimes still use versions 1.1 or 1.2, which see very limited evolution
and which target mission-critical usages. HAProxy is really suited for such environments
because the indicators it returns provide a lot of valuable information about the application's
health, behaviour and defects, which are used to make it even more reliable.
Version 1.3 has now received far more testing than 1.1 and 1.2 combined, so
users are strongly encouraged to migrate to a stable 1.3 for mission-critical
usages.
As previously explained, most of the work is executed by the Operating System.
For this reason, a large part of the reliability involves the OS itself. Recent
versions of Linux 2.4 offer the highest level of stability. However, it requires
a bunch of patches to achieve a high level of performance. Linux 2.6
includes the features needed to achieve this level of performance, but is not
yet as stable for such usages. The kernel needs at least one upgrade every
month to fix a bug or vulnerability. Some people prefer to run it on Solaris (or
do not have the choice). Solaris 8 and 9 are known to be really stable right now,
offering a level of performance comparable to Linux 2.4. Solaris 10 might show
performances closer to Linux 2.6, but with the same code stability problem. I
have too few reports from FreeBSD users, but it should be close to Linux 2.4 in
terms of performance and reliability. OpenBSD sometimes shows socket allocation
failures due to sockets staying in the FIN_WAIT2 state when the client suddenly
disappears. Also, I've noticed that hot reconfiguration does not work under
OpenBSD.
The reliability can significantly decrease when the system is pushed to its
limits. This is why finely tuning the sysctls is important. There is no
general rule, every system and every application will be specific. However, it is
important to ensure that the system will never run out of memory and
that it will never swap. A correctly tuned system must be able to run for
years at full load without slowing down or crashing.
Security - Not even one vulnerability in 10 years
Security is an important concern when deploying a software load balancer. It is
possible to harden the OS, to limit the number of open ports and accessible
services, but the load balancer itself stays exposed. For this reason, I have been
very careful about programming style. The only vulnerability found so far dates back to early
2002 and only lasted for one week. It was introduced when logs were reworked. It
could be used to cause BUS ERRORS to crash the process, but it did not
seem possible to execute code : the overflow concerned only 3 bytes, too short to
store a pointer (and another variable was located right next to it).
Anyway, much care is taken when writing code to manipulate headers. Impossible
state combinations are checked and returned, and errors are processed from the
creation to the death of a session. A few people around the world have reviewed
the code and suggested cleanups for better clarity to ease auditing. By the way,
I routinely refuse patches that introduce suspect processing or in which not
enough care is taken to handle abnormal conditions.
I generally suggest starting HAProxy as root because it
can then jail itself in a chroot and drop all of its privileges
before starting the instances. This is not possible if it is not started as
root because only root can execute chroot().
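A minimal sketch of the corresponding global section; the user/group names and the chroot directory are examples :

    global
        # started as root, then jailed and de-privileged before serving traffic
        chroot /var/empty
        user haproxy
        group haproxy
        daemon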
Logs provide a lot of information to help maintain a satisfactory security
level. They can only be sent over UDP because, once chrooted, the
/dev/log UNIX socket is unreachable, and it must not be possible to
write to a file. The following information is particularly useful (a configuration sketch follows the list) :
- source IP and port of requestor make it possible to find their origin
in firewall logs ;
- the session setup date generally matches firewall logs, while the tear-down
date often matches the proxies' dates ;
- proper request encoding ensures the requestor cannot hide
non-printable characters, nor fool a terminal.
- arbitrary request and response header and cookie capture help to
detect scan attacks, proxies and infected hosts.
- timers help to differentiate hand-typed requests from browsers'.
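A hedged sketch of a logging setup that collects this kind of information; the syslog server address and cookie name are examples :

    global
        # send logs over UDP to a remote syslog server, since /dev/log is out of reach from the chroot
        log 192.168.0.1 local0

    frontend web
        bind :80
        log global
        option httplog
        # capture a request header and the persistence cookie so they show up in the logs
        capture request header Host len 64
        capture cookie SRVID= len 32
        default_backend app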
HAProxy also provides regex-based header control. Parts of the request, as
well as request and response headers can be denied, allowed, removed, rewritten, or
added. This is commonly used to block dangerous requests or encodings (eg: the
Apache Chunk exploit),
and to prevent accidental information leaks from the server to the client.
Other features such as Cache-control checking ensure that no sensitive
information gets accidentally cached by an upstream proxy as a consequence of a bug in
the application server, for example.
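For example, the regex-based rules can delete or add headers on the fly; the patterns below are purely illustrative :

    frontend web
        bind :80
        # hide the application server's identity before the response leaves
        rspidel ^Server:.*
        # force responses through this frontend not to be cached by an upstream proxy
        rspadd Cache-Control:\ no-cache
        default_backend app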
Download
The source code is covered by GPL v2. Source code and pre-compiled binaries for
Linux/x86 and Solaris/Sparc can be downloaded right here :
- Development version :
- Documentation
- Browse directory for docs, sources and binaries
- Daily snapshots are built once a day when the GIT repository changes
- Latest version (1.4) :
- Documentation
- Release Notes for version 1.4.22
- haproxy-1.4.22.tar.gz (MD5) : Source code under GPL
- haproxy-1.4.22-linux-i586.gz (MD5) : Linux/i586 executable linked with Glibc 2.2
- haproxy-1.4.22-pcre-solaris-sparc.notstripped.gz (MD5) : Solaris8/Sparc executable
- NSLU2 binaries are regularly built by Jeff Buchbinder
- Browse directory for other files or versions
- Latest version (1.3) :
- Documentation
- Release Notes for version 1.3.26
- haproxy-1.3.26.tar.gz (MD5) : Source code under GPL
- haproxy-1.3.26-linux-i586.gz (MD5) : Linux/i586 executable linked with Glibc 2.2
- haproxy-1.3.26-pcre-solaris-sparc.notstripped.gz (MD5) : Solaris8/Sparc executable
- NSLU2 binaries are regularly built by Jeff Buchbinder
- Browse directory for other files or versions
- Previous branch (1.2) :
- Documentation
- Release Notes for version 1.2.18
- haproxy-1.2.18.tar.gz (MD5) : Source code under GPL