SSL-Offloader
Preface
I want to thank Igor Sysoev for this nice piece of software. For me, this is the only way to contribute something to this great project. I've tried to document the whole picture of building what we simply call our "SSL-Proxy" (aka. SSL-Offloader, SSL-Accelerator).
In our company we use Nginx as a reverse proxy, serving HTTPS to the client while getting the content via HTTP from the multiple backends. We have two virtual machines "connected" with VRRP into one cluster, acting as a frontend for about 80 Tomcat servers (and some IIS, Apache/PHP ...), each with one or more applications. There were many reasons for this structure: one CI (ITIL: configuration item) for SSL certificates, customer demands (one URL with multiple services), a simple but effective failover mechanism for changes, and so on.
Update 2012, success story: This Nginx setup as a reverse SSL proxy with our "super URLs" has worked perfectly for over 5 years (in this time we changed the Ubuntu version several times - from Hardy to Lucid). Today only two applications are left that couldn't be included in this scheme. We make about 10 changes to the setup (about 7000 lines in total) per week, and we can do all of them on the fly (using reload). The security audits of these applications never flagged topics like "unpatched SSL versions" again, the web-services team can concentrate on deploying applications with no need to ever touch an Apache configuration again, and we (the networking team) can solve every access issue ourselves.
Technical Data
The server works very efficiently:
- Hypervisor: VMware ESXi
- VM: 4x CPU (load: 0.05), 768 MByte RAM (used: 10%), 4 GByte HD
- Base system: Ubuntu 10.04.4/lucid
- Workhorse: Nginx 1.2.0-1ppa1~lucid (ppa nginx:stable)
- SSL-Library: LibSSL 0.9.8k-7ubuntu8.6
- Entropy-Daemon: haveged 1.1-2 (from Debian)
- VRRP daemon: Keepalived 1.2.2-1~lucid~ppa (ipv6 support)
- Software watchdog: Monit 1:5.0.3-3
- NTP alternative: Chrony 1.23-7 (one socket)
Considerations
Configuration
Some kind of a policy ...
- To avoid problems when updating, the layout of the configuration files and directories should only add to the existing ones of the installation, never replace them.
- Each setting has to be defined in exactly one place. On the other hand, this leads to many references to small "include" files and a complex layout.
- "if" statements cannot be nested; therefore avoid using "include" statements inside them.
- When including whole directories, use file extensions as a switch (see the sketch after this list).
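For illustration, a minimal sketch using the two include lines from the "nginx.conf" shown further below (the ".off" suffix is just an arbitrary non-matching name, not a convention of this setup):

include /etc/nginx/conf.d/*.conf;     # only files ending in ".conf" are loaded, so renaming
                                      # "logging.conf" to "logging.conf.off" switches it off
include /etc/nginx/sites-enabled/*;   # loads every file in the directory, regardless of name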
Used directories in "/etc/nginx":
- conf.d
- mapping/<segment>/<application>
- access/<channel>/<user group> (currently unused)
- sslcert/<domain of the channel>/[<app-group>|wildcard].[crt|key]
- sites-enabled/<ip-slot>_<app-group>.<customer>.any.<segment>
- scripts
Naming Convention
We had one internal (ADS) domain, some official ones and a complex situation:
- entry point/channel: LAN users (lan), Internet (ext), some kind of a VPN/WAN (vn2) and an old VPN without DNS (vn1)
- customers: our company, several workgroups, labels, subsidiaries and external customers
- life cycle/segment: development (dev), integration-1 and -2 (ig1/ig2), approval (apv), demonstration (dem) and production (prd)
- environments: real ("local") and lab ("labor")
Our solution was to start two completely new DNS trees and move the application name from DNS into the path:
- http(s)://<app-group>.<customer>.<channel>.<segment>.[local|labor]/<application>/
In the case of production systems, these were mapped to official domains.
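To make the scheme concrete, a purely hypothetical example (all names invented): an application "shop" in the app-group "portal" of the customer "acme", reached through the LAN channel on the production segment, would be addressed as

https://portal.acme.lan.prd.local/shop/

For the production system this internal name would then additionally be mapped to an official domain.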
Tips:
- Use "pdnsd" as a DNS dispatcher on your proxy.
- Maybe you shouldn't use the "*.local" domain as we did, if you want to use services like avahi ...
Application Server
Most important ...
- You can reduce your work if you have one "intelligent" application with different themes for different customers.
- For Java applications we use Tomcat's "RemoteIpValve" so that Tomcat sees the original client address and protocol (similar to what Apache with AJP does); see the sketch after this list.
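A minimal sketch of such a valve entry in Tomcat's server.xml, assuming the X-Forwarded-For and X-Forwarded-Proto headers that the proxy configuration further below sets (attribute values may need adjusting for your Tomcat version):

<!-- inside the <Host> element of server.xml -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto"
       protocolHeaderHttpsValue="https" />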
Preparation
Network Settings
I put the network settings into the file "/etc/sysctl.d/10-network-security.conf". Some of the settings were already in the original configuration.
### http://www.cyberciti.biz/tips/linux-unix-bsd-nginx-webserver-security.html

# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1
# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1
# Turn on syncookies for SYN flood attack protection
net.ipv4.tcp_syncookies = 1
# Turn on and log spoofed, source routed, and redirect packets
#net.ipv4.conf.all.log_martians = 1
#net.ipv4.conf.default.log_martians = 1
# No source routed packets here
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
# Make sure no one can alter the routing tables
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
# Don't act as a router
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Turn on exec-shield
kernel.exec-shield = 1
kernel.randomize_va_space = 1

# Tune IPv6
net.ipv6.conf.default.router_solicitations = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 0
net.ipv6.conf.default.accept_ra_pinfo = 0
net.ipv6.conf.default.accept_ra_defrtr = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.default.max_addresses = 1

# Optimization for port use on load balancers
# Increase system file descriptor limit
fs.file-max = 65535
# Allow for more PIDs (to reduce rollover problems)
# !!! may break some programs that expect 32768
#kernel.pid_max = 65536
# Increase system IP port limits
net.ipv4.ip_local_port_range = 2000 65000

# Increase TCP max buffer size settable using setsockopt()
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608
# Increase Linux auto tuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
# TCP windows etc.
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1
Virtual Addresses
This is the VRRP configuration "/etc/keepalived/keepalived.conf" of one system. Two instances are configured. In case of a failure, both VRRP addresses belong to the remaining system. For the second system, change the values of "state" and "priority".
Note: You have to restart "keepalived" after you put an interface into promiscuous mode (e.g. if you use tcpdump for debugging).
vrrp_instance ONE {
    state MASTER
    priority 120
    interface eth0
    virtual_router_id <id-1>
    advert_int 1
    authentication {
        auth_type pass
        auth_pass <pass-1>
    }
    virtual_ipaddress_excluded {
        <vrrp-ipv4-1>
        <vrrp-ipv6-1>
    }
}

vrrp_instance TWO {
    state BACKUP
    priority 80
    interface eth0
    virtual_router_id <id-2>
    advert_int 1
    authentication {
        auth_type pass
        auth_pass <pass-2>
    }
    virtual_ipaddress_excluded {
        <vrrp-ipv4-2>
        <vrrp-ipv6-2>
    }
}
HTTPS Addresses
One possible solution is to use direct routing instead of NAT (network address translation). In this case you need local IP addresses that match the server entries of the nginx configuration. In the file "/etc/network/interfaces" you can add a "post-up" command for the loopback interface like this.
auto lo
iface lo inet loopback
    post-up /etc/nginx/conf.d/ip-addr.sh

auto eth0
iface eth0 inet static
    ...
    post-up /etc/nginx/conf.d/ip-mtu.sh

iface eth0 inet6 static
    ...
The referenced script should correct some ARP issues, add blackhole routes to avoid ping-pong packets for the routed networks and, of course, add the service addresses.
#!/bin/bash

echo 0 > /proc/sys/net/ipv4/ip_no_pmtu_disc
echo 1 > /proc/sys/net/ipv4/tcp_mtu_probing
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/default/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/default/arp_ignore

ip route add blackhole <network-1>
ip route add blackhole <network-2>
...
ip route add blackhole <network-n>

ip route add <host-/network-1> via <default gateway> mtu <mtu> advmss <mtu-40>
ip route add <host-/network-2> via <default gateway> mtu <mtu> advmss <mtu-40>
...
ip route add <host-/network-m> via <default gateway> mtu <mtu> advmss <mtu-40>

ip addr add <address-1>/32 dev lo label <label-1>
ip addr add <address-2>/32 dev lo label <label-2>
...
ip addr add <address-x>/32 dev lo label <label-x>
Maybe you need a second file for all settings that require a working network interface, e.g. if you have to fix some MTU/MSS values you have to route to real IPs on a real interface.
#!/bin/bash

# VPN networks with broken PMTU
# (ADVMSS = MTU - 40)
ip route add 10.1.0.0/16 via <default.gateway> mtu 1379 advmss 1339
ip route add 10.2.0.0/16 via <default.gateway> mtu 1455 advmss 1415
...
ip route add 10.x.0.0/16 via <default.gateway> mtu 1472 advmss 1432
Core Configuration
nginx.conf
I decided not to change too much in the default config file "/etc/nginx/nginx.conf". The VM has four cores, each core gets one pinned worker, and I wanted nginx to get a better priority than other processes. All other settings are pulled in via includes.
worker_processes     4;
worker_priority      -1;
worker_rlimit_nofile 8192;
worker_cpu_affinity  0001 0010 0100 1000;

user      www-data;
pid       /var/run/nginx.pid;
error_log /var/log/nginx/error.log;

events {
    multi_accept       on;
    worker_connections 4096;
}

http {
    map_hash_bucket_size 128;
    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
sslproxy.conf
The file "/etc/nginx/conf.d/sslproxy.conf" holds all important global settings, especially:
- error_page 404 =410 /40x.html;
  The 404 error page will be cloaked as 410 to avoid Internet Explorer's built-in error page.
- proxy_intercept_errors on;
  All errors from the application servers will be hidden behind the corresponding local error pages.
- proxy_redirect http:// $scheme://;
  Every HTTP redirect from an application server will be rewritten to HTTPS.
- proxy_set_header Accept-Encoding "";
  The proxy connection to the backend should not compress the data (LAN connection).
### global ###
server_tokens           off;
server_name_in_redirect off;
ignore_invalid_headers  on;
if_modified_since       before;
root                    /etc/nginx/content/;
ssi                     on;
ssi_silent_errors       on;    # testing=off
add_header              X-Frame-Options SAMEORIGIN;
add_header              Strict-Transport-Security max-age=3456000;

### tcp ###
tcp_nodelay        off;
tcp_nopush         on;
sendfile           on;
keepalive_requests 100;

### timeouts ###
resolver_timeout      6;
client_header_timeout 30;
client_body_timeout   60;
send_timeout          60;
keepalive_timeout     65 20;

### buffers ###
client_header_buffer_size   1k;
client_body_buffer_size     128k;
large_client_header_buffers 4 4k;
client_max_body_size        10m;
client_body_temp_path       /var/spool/nginx/client/;
output_buffers              1 32k;
postpone_output             1460;

### errors ###
recursive_error_pages off;
error_page            400 402 403 405 406 410 411 413 416 /40x.html;
error_page            500 501 502 503 504 /50x.html;
error_page            404 =410 /40x.html;
error_page            443 =200 /test.png;
open_log_file_cache   max=1024 inactive=30s min_uses=3 valid=5m;

### acl ###
allow 10.0.0.0/8;
allow 172.16.0.0/12;
allow 192.168.0.0/16;
deny  all;

### ssl ###
ssl                       on;
ssl_prefer_server_ciphers on;
ssl_protocols             TLSv1 SSLv3;
ssl_ciphers               RC4:HIGH:!MD5:!ADH:+DH;  #HIGH:!ADH:!MD5:@STRENGTH;
ssl_session_cache         shared:TLSSL:16m;
ssl_session_timeout       10m;
ssl_certificate           sslcert/de/<company>/wildcard.crt;
ssl_certificate_key       sslcert/de/<company>/wildcard.key;

### compression ###
gzip            on;
gzip_disable    "msie6";
gzip_vary       on;
gzip_min_length 512;
gzip_buffers    256 8k;
gzip_comp_level 6;
gzip_proxied    any;
gzip_types      text/plain text/html text/xml text/css image/x-icon image/bmp
                application/atom+xml text/javascript application/x-javascript
                application/pdf application/postscript application/rtf
                application/vnd.ms-powerpoint application/msword
                application/vnd.ms-excel application/vnd.wap.xhtml+xml;

### proxy-global ###
proxy_intercept_errors    on;    # testing=off
proxy_ignore_client_abort off;
proxy_redirect            http:// $scheme://;

### proxy-header ###
proxy_set_header Accept-Encoding   "";
proxy_set_header Host              $http_host;
proxy_set_header X-Forwarded-By    $server_addr:$server_port;
proxy_set_header X-Forwarded-For   $remote_addr;
proxy_set_header X-Forwarded-Class $classification;    # custom
proxy_set_header X-Forwarded-Proto $scheme;
map $scheme $msiis {
    http  off;
    https on;
}
proxy_set_header Front-End-Https   $msiis;

### proxy-timeouts ###
proxy_connect_timeout 6;
proxy_send_timeout    60;
proxy_read_timeout    60;

### proxy-buffers ###
proxy_buffering            on;
proxy_buffer_size          8k;
proxy_buffers              256 8k;
proxy_busy_buffers_size    64k;
proxy_temp_file_write_size 64k;
proxy_temp_path            /var/spool/nginx/temp/;
logging.conf
This configuration file "/etc/nginx/conf.d/logging.conf" lets us switch on additional logging when we need more information. We defined the SSL proxy as a network device, and therefore the application itself is responsible for logging user access.
log_format apache   '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '"$http_cookie"';

log_format full     '$remote_addr $remote_user [$time_local] '
                    '"$host"->$proxy_host->$upstream_addr '
                    '"$request" $status($upstream_status) '
                    '$bytes_sent/$gzip_ratio($sent_http_content_type) '
                    '$request_time($upstream_response_time)';

log_format perf     '$request_time($upstream_response_time) '
                    '$bytes_sent/$gzip_ratio($sent_http_content_type) '
                    '$status "$upstream_addr$uri"';

log_format gzip     '$bytes_sent/$gzip_ratio($sent_http_content_type) '
                    '[$http_accept_encoding]"$http_user_agent"';

log_format redirect '$time_local $redir_match $redir_action $redir_url';

#access_log off;
access_log  /var/log/nginx/access.log       apache;
#access_log /var/log/nginx/access-full.log  full;
access_log  /var/log/nginx/access-perf.log  perf;
#access_log /var/log/nginx/access-gzip.log  gzip;
backend.conf
We use this file to map an ID to a pair of backend servers (one primary, one backup).
upstream <backend-id-1> {
    server <server-ip-1.1>:<internal-port>;
    server <server-ip-1.2>:<internal-port> backup;
}
upstream <backend-id-2> {
    server <server-ip-2.1>:<internal-port>;
    server <server-ip-2.2>:<internal-port> backup;
}
...
upstream <backend-id-3> {
    server <server-ip-3.1>:<internal-port>;
    server <server-ip-3.2>:<internal-port> backup;
}
Server and Applications
Simple Applications
This is an example of a default application in "/etc/nginx/mapping/<segment>/<application>", which should fit most cases.
location /<app-path>/ {
    proxy_pass http://<backend-id>;
}
Root Applications
Some web applications don't allow moving their root path into a subdirectory. You can of course combine one of them with subdirectory applications in the same server configuration, but you have to add an "if" statement if you want to keep using "proxy_intercept_errors".
location / {
    if (-f $request_filename) {
        break;
    }
    proxy_pass http://<backend-id>;
}
Uploader App
Some applications, like the editorial pages of CMS systems, often have additional needs, for example uploading movies or bigger PDFs. Then you have to adjust the maximum upload size.
location /<app-path>/ {
    client_max_body_size 100m;
    proxy_pass           http://<backend-id>;
}
Long Running App
If the response of an application takes a long time (e.g. to generate a report) and there is no keepalive mechanism available (in our case it was the "BIRT" framework), override the default timeout settings - on both the client and the server side.
location /<app-path>/ {
    send_timeout       3600;
    proxy_read_timeout 3600;
    proxy_pass         http://<backend-id>;
}
SOAP Web-Service
SOAP needs errors of type 500 to pass through unchanged, because this is the default way to exchange fault information with the client.
location /<app-path>/ {
    proxy_intercept_errors off;
    proxy_pass             http://<backend-id>;
}
Simple Server
This example shows an internet (allow all) server with one simple application. The rewrite rule does the initial redirect into the application directory. The index page that follows is the responsibility of the application.
server {
    ssl_certificate     sslcert/<dns-domain>/<subdomain>.crt;
    ssl_certificate_key sslcert/<dns-domain>/<subdomain>.key;
    listen              <ip>:443;
    allow               all;
    server_name         <ip> <dns>;
    set $classification "<customer>.<channel>.<segment>";
    rewrite ^/+$ /<app-path>/ redirect;
    include mapping/<segment>/<app-path>;
}
The Redirector
(in progress)
Motivation / Goal:
- redirect, refresh or send an error-page with the new link
- works for DNS names or DNS plus context (= first part of the path)
- implicit matching of a "www." prefix
- works for URLs and request arguments
- can parse %-codes in the argument
redir-map.conf
map $redir_match $redir_target {
    hostnames;

    # [<context>.]<hostname>   (static|refresh|redirect)@<scheme>://<target>/<context>/;
    my-app-1.old-url.com       redirect@https://new-url.com/my-app-1/;
    .old-url.com               redirect@https://new-url.com/default-app/;
}
redir.action
if ($redir_target ~* ^(.*)@(.*)) {
    set $redir_action $1;
    set $redir_url    $2;
}
if ($redir_action = "static") {
    rewrite ^ /301-static.html last;
}
if ($redir_action = "refresh") {
    rewrite ^ /301-refresh.html last;
}
if ($redir_action = "redirect") {
    rewrite ^ $redir_url permanent;
    break;
    access_log /var/log/nginx/redirector.log redirect;
}
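The rewrites above point to local pages "/301-static.html" and "/301-refresh.html", which are not shown in the original text. A hypothetical sketch of the refresh variant (my assumption, not the original file), relying on the globally enabled SSI to embed the target URL from $redir_url:

<!-- /etc/nginx/content/301-refresh.html (hypothetical) -->
<html>
  <head>
    <meta http-equiv="refresh" content="3; url=<!--#echo var="redir_url" -->">
  </head>
  <body>
    This page has moved to
    <a href="<!--#echo var="redir_url" -->"><!--#echo var="redir_url" --></a>.
  </body>
</html>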
Redirector Server
server {
    allow       all;
    listen      80 default;
    ssl         off;
    listen      443 default ssl;
    server_name <dns-name>;
    include     mapping/security.ext;

    location / {
        # deliver local files
        if (-f $request_filename) {
            break;
        }

        # redirector
        set $redir_host $http_host;
        if ($http_host ~* ^www\.(.*)) {
            set $redir_host $1;
        }
        if ($uri ~* ^/([^/]+)) {
            set $redir_context $1.;
        }
        set $redir_match $redir_context$redir_host;
        include mapping/redir.action;

        # global https enforcement
        if ($scheme = "http") {
            rewrite ^ https://$http_host$request_uri permanent;
        }
    }

    location /status {
        stub_status on;
        allow <monitoring system>;
        deny  all;
    }
}
Redirector App
Active-Sync Gateway
This is only a simple gateway (no certificates!) for several different Exchange servers. It validates some kind of a "fingerprint" of the device against a DNS entry. The code can be "plugged" into the context files above as a service.
location /Microsoft-Server-ActiveSync {
    access_log /var/log/nginx/activesync.log;
    resolver   your.dns.server.ip;

    # deny anonymous; deny other http methods
    if ( $remote_user = "" ) {
        return 444;
        break;
    }
    if ( $request_method !~* ^(POST|OPTIONS)$ ) {
        return 444;
        break;
    }

    # extract domain and user-id
    if ( $remote_user ~* ^(.+)\x5C(.+)$ ) {
        set $domain $1;
        set $userid $2;
    }
    if ( $remote_user !~* ^(.+)\x5C(.+)$ ) {
        return 444;
        break;
    }

    # replace underscores in username
    if ( $userid ~* ^(.+)_(.+)$ ) {
        set $userdn $1x$2;
    }
    if ( $userid !~* ^(.+)_(.+)$ ) {
        set $userdn $userid;
    }

    # extract device-type and version
    if ( $http_user_agent ~* ^MSFT-(.+)/(.+)\.(.+)\.(.+)$ ) {
        set $device MSFT$1;
        set $versio $2x$3x$4;
    }
    if ( $http_user_agent ~* ^Apple-iPhone(.*)/(.+)\.(.+)$ ) {
        set $device iPhone;
        set $versio $1x$2x$3;
    }
    if ( $http_user_agent ~* ^Apple-iPad(.+)/(.+)\.(.+)$ ) {
        set $device iPad;
        set $versio $1x$2x$3;
    }
    if ( $http_user_agent ~* ^Apple-iPod(.+)/(.+)\.(.+)$ ) {
        set $device iPod;
        set $versio $1x$2x$3;
    }
    if ( $http_user_agent ~* ^Android-(.+)/(.+)\.(.+)$ ) {
        set $device Android;
        set $versio $1x$2x$3;
    }

    # always allow initial requests without arguments
    set $initia $request_method:$args;
    if ( $initia ~* ^OPTIONS:$ ) {
        set $target $domain-exchange;
        set $versio ok;
    }
    if ( $versio = "" ) {
        return 444;
        break;
    }

    # set target, if usernames match
    if ( $userid = $arg_User ) {
        set $target $domain-$userdn-$arg_DeviceId-$device-$versio;
    }

    # forward request
    proxy_pass http://$target.your.internal.sync.domain;
}
Failed requests show up in the "error.log" as resolver errors. The "error.log" is monitored by "rsyslog" and transferred to a syslog server. The syslog server checks for the internal domain and sends an email to the support team.
...
# Mail-Trap: ActiveSync
$ActionExecOnlyOnceEveryInterval 300
$ActionMailTo recipient-1@your.company
$ActionMailTo recipient-2@your.company
:msg,contains,"your.internal.sync.domain" :ommail:;mailBody
...
Remote Logging
The Problem
In a few words: Nginx doesn't support syslog.
Therefore you have a few possibilities if you want syslog support:
- Compile Nginx with the syslog patch:
  I prefer to use the original packages ...
- Use a syslog implementation with file support (e.g. rsyslog with "imfile"):
  That's okay for the "error.log", but it is a bad idea for the space-consuming "access.log", because you don't want to store this data a second time locally.
Simple Solution
1. Create a file /etc/rsyslog.d/remote.conf for all messages that are already in syslog:
# export via udp
*.notice;local0,local1,local2,local3,local4,local5,local6,local7.*;mark,cron.none   @<syslog-server>
2. Create a file /etc/rsyslog.d/nginx.conf for file monitoring. Repeat the part in the middle for every file you want to see in the syslog. The last line is important, otherwise you will log these messages three times (nginx log, udp syslog and local syslog):
# import-module: file
$ModLoad imfile

# nginx/error.log
$InputFileName      /var/log/nginx/error.log
$InputFileTag       nginx:
$InputFileStateFile nginx_error.log
$InputFileSeverity  warning
$InputFileFacility  local7
$InputRunFileMonitor

# send and drop
:syslogtag,isequal,"nginx:" @<syslog-server>
& ~
3. Create a script /etc/cron.daily/logfile-actions, which will be executed every day, and place the cleanup commands there (e.g. 1 day for the access logs and 6 months for everything else). Don't forget the "chmod +x". This at least handles all the files you don't want to store locally a second time. But as I said before, this only works if the amount of access log data per day isn't too large ... and it's not very smart.
#!/bin/sh
find /var/log/       -name '*.gz'       -mtime +180 -delete
find /var/log/nginx/ -name 'access*.gz' -mtime +2   -delete
Appendix: Scripts
sync-config.sh
(to do) ... it's our most important script for daily use, but I have to remove all company-related data.
dump-config.sh
Creates a "normalized" configuration file. Basically it's a recursive script that evaluates include statements and removes spaces and comments. It is the basis for most of my scripts doing backup/restore, SSL validation and so on. Maybe it's neither beautiful nor perfect, but it works for me.
#!/bin/sh

START=`pwd`
cd /etc/nginx

if [ -x $0 ]
then
    CMD=$0
else
    CMD=$START/$0
fi

if [ "$1" ]
then
    FILE=$1
else
    FILE="nginx.conf"
fi

echo "# start: $FILE"
cat $FILE | awk '{
    gsub("#.*","",$0);
    gsub(";",";\n",$0);
    gsub("{","\n{\n",$0);
    gsub("}","\n}\n",$0);
    print;
}' | awk -v HK="'" -v CMD=$CMD '{
    gsub("[ \t]+"," ",$0);
    gsub("^[ \t]","",$0);
    gsub("[ \t]$","",$0);
    gsub(HK,"%%",$0);
    if ($1=="include") {
        sub(";$","",$2);
        print CMD" "HK$2HK;
    } else {
        print "echo "HK$0HK;
    }
}' | sh | awk -v HK="'" '{
    gsub("%%",HK,$0);
    if ($0=="") { pass; } else { print; }
}' | cat
echo "# stop: $FILE"

cd $START
#exit 0
clean-restart.sh
This script restarts some services, removes logfiles and reactivates the loopback addresses in case of bigger changes. I use it mainly on the development systems. On the production machines I apply the changes and then reboot instead, to ensure everything starts up correctly in case of an unexpected reboot.
#!/bin/bash

/etc/init.d/monit stop
/etc/init.d/keepalived stop
/etc/init.d/nginx stop

ifconfig -a | grep "lo:" | awk '{print "ifconfig "$1" down"}' | sh

chmod +x /etc/nginx/conf.d/ip-addr.sh
chmod -R +x /etc/nginx/scripts/*
chmod -R 600 /etc/nginx/sslcert/*

rm /var/log/monit
rm /var/log/nginx/*

# other commands, like "apt-get -y upgrade"

/etc/nginx/conf.d/ip-addr.sh

/etc/init.d/nginx start
/etc/init.d/keepalived start
/etc/init.d/monit start

exit 0
Known Bugs / Wishlist
- socket listener:
  I didn't find any 0.7.x Debian package which can use unix sockets as listeners.
- inline includes:
  With a statement like "include @<identifier>" and a block like "include { include_name <identifier>; ... }" a separate file for every include could be avoided.
- global rewrite rules and log option for rewrite:
  A reverse proxy would be the perfect place to enforce a bunch of rewrite rules globally. Because this is a security feature, each firing of a rule should be logged in a (separate?) log.