http://www.brianstorti.com/the-role-of-a-reverse-proxy-to-protect-your-application-against-slow-clients/
This buffering reverse proxy can handle an “unlimited” number of requests and is not affected by slow clients.
Nginx, for instance, uses a non-blocking, evented I/O model (rather than the blocking, forking I/O model that Unicorn uses), which means that when it receives a new request, it performs a read call (an I/O operation) and is not blocked waiting for a response, so it is immediately available to handle new requests. When the read operation finishes, the operating system sends an event notification, and the appropriate event handler can be called (passing the request to the application server, for instance).
The scenario above, with a buffering reverse proxy, would go something like this: the slow client makes a large request. The buffering reverse proxy waits until it has received the entire request, then passes it to the application server, which simply processes it, delivers the response back to the reverse proxy, and is free to receive new requests. The reverse proxy then sends this response back to the slow client.
It doesn’t really matter much that it takes a long time to receive requests from and deliver responses to these clients, as the reverse proxy is not blocked by these I/O operations (thanks to the nature of its concurrency model).
The conclusion here is that, if you are using an application server that is blocked by I/O operations, it’s a pretty good idea to put in front of it a reverse proxy that can handle this kind of situation (and, possibly, do a lot more).
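To make this concrete, here is a minimal sketch of such a buffering setup in nginx configuration. The directives are standard nginx (and both buffering directives actually default to on); the upstream address is a made-up placeholder:

```nginx
# Placeholder upstream: an application server on 127.0.0.1:8080.
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;

        # Read the entire request body from the client before contacting
        # the upstream, so a slow uploader never occupies an app worker.
        proxy_request_buffering on;

        # Buffer the upstream response, so the app server is released as
        # soon as it finishes, even if the client downloads slowly.
        proxy_buffering on;
    }
}
```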
https://www.jianshu.com/p/5eab0f83e3b4
Because of the firewall, we cannot reach Google directly, but we can get there through a VPN; that is a simple example of a forward proxy. Notice that a forward proxy “proxies” the client: the client knows the target, while the target does not know the client is coming through a VPN.
When we access Baidu from the outside, the request is actually forwarded and proxied into an internal network; this is the so-called reverse proxy. A reverse proxy “proxies” the server side, and the whole process is transparent to the client.
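A minimal sketch of that transparency in nginx terms (standard directives; the backend address is a placeholder): the client talks only to the proxy, and the hidden backend learns the real client address only because the proxy forwards it in headers.

```nginx
server {
    listen 80;

    location / {
        # The client only ever sees this server; the internal backend
        # address below is a placeholder and stays hidden from it.
        proxy_pass http://10.0.0.5:8080;

        # Without these headers the backend would only see the proxy's
        # own IP, never the real client's.
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```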
Starting Nginx really just starts a socket service listening on port 80. Nginx involves a Master process and Worker processes.
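One way to see this for yourself on a Linux box running Nginx (standard tools; exact output varies by system):

```sh
# The master process plus its workers should all show up here.
ps -ef | grep '[n]ginx'

# Confirm the socket listening on port 80 (root needed to see the owner).
ss -lntp | grep ':80'
```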
What is the Master process for?
It reads and validates the configuration file nginx.conf, and it manages the worker processes.
What are the Worker processes for?
Each Worker process maintains a single thread (avoiding thread-switching overhead) that handles connections and requests. Note that the number of Worker processes is set in the configuration file and is usually matched to the number of CPUs (which plays well with CPU scheduling); you get exactly as many Worker processes as you configure.
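The relevant nginx.conf directives (standard nginx; `auto` matches the worker count to the CPU cores and pins each worker to one):

```nginx
# One worker per CPU core; 'auto' detects the core count.
worker_processes auto;

# Pin each worker to a core to avoid cache-unfriendly migrations.
worker_cpu_affinity auto;

events {
    # Upper bound on concurrent connections handled per worker.
    worker_connections 1024;
}
```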
So-called hot deployment means that after the configuration file nginx.conf is modified, the new configuration takes effect without stopping Nginx and without interrupting requests! (nginx -s reload to reload, nginx -t to check the configuration, nginx -s stop to stop.)
From the above we already know that the worker processes handle the actual requests, so to achieve a hot-deployment effect we can imagine two plans:
Plan one:
After nginx.conf is modified, the master process pushes the updated configuration to the worker processes, and each worker updates its internal thread state once it receives the message. (This has a bit of a volatile flavor to it.)
Plan two:
After nginx.conf is modified, spawn brand-new worker processes, which naturally handle requests under the new configuration, and route all new requests to them; the old worker processes are killed off once they finish the requests they were already handling.
Nginx uses plan two to achieve hot deployment, as sketched below.
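In practice, plan two is what the standard reload commands trigger. A sketch of the workflow (the pid-file path varies by build and is shown only as an example):

```sh
# Validate the edited configuration before touching the running server.
nginx -t

# Ask the master to reload: it re-reads nginx.conf, starts new workers
# under the new configuration, and lets the old workers finish their
# in-flight requests before exiting gracefully.
nginx -s reload

# Equivalent to signalling the master process directly:
# kill -HUP "$(cat /var/run/nginx.pid)"
```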
As mentioned above, binding the number of Nginx worker processes to the CPUs, with each worker running a single thread in an efficient event loop to handle requests, certainly helps efficiency, but it is not enough on its own.
As professional programmers, we can let our imaginations run a bit: BIO/NIO/AIO, asynchronous vs. synchronous, blocking vs. non-blocking...
To handle that many requests concurrently, keep in mind that some requests involve I/O that may take a long time; if a worker sat waiting for it, its processing speed would suffer.
Nginx uses Linux's epoll model. epoll is event-driven: it monitors many descriptors at once for readiness, and ready events are queued asynchronously; the worker then only needs to loop over the epoll ready queue and handle each event.
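A stripped-down sketch in C of the kind of event loop a worker runs. This is an illustration of the epoll mechanism, not Nginx's actual source; the port number is an arbitrary choice:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

/* Register a descriptor with epoll for readability notifications. */
static void add_fd(int epfd, int fd) {
    struct epoll_event ev;
    memset(&ev, 0, sizeof ev);
    ev.events = EPOLLIN;
    ev.data.fd = fd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
        perror("epoll_ctl");
        exit(1);
    }
}

int main(void) {
    /* Listening socket; port 8080 is an arbitrary choice for the sketch. */
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0) { perror("socket"); return 1; }
    int on = 1;
    setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    if (bind(listen_fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(listen_fd, SOMAXCONN) < 0) {
        perror("bind/listen");
        return 1;
    }

    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); return 1; }
    add_fd(epfd, listen_fd);

    struct epoll_event ready[MAX_EVENTS];
    char buf[4096];
    for (;;) {
        /* The worker blocks only here, asking "is anything ready?",
           never on one slow connection. */
        int n = epoll_wait(epfd, ready, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = ready[i].data.fd;
            if (fd == listen_fd) {
                /* New connection: register it and keep looping. */
                int conn = accept(listen_fd, NULL, NULL);
                if (conn >= 0) add_fd(epfd, conn);
            } else {
                /* The kernel says this fd is readable, so the read
                   returns immediately with whatever has arrived. */
                ssize_t len = read(fd, buf, sizeof buf);
                if (len <= 0) { close(fd); continue; }
                /* A real worker would parse and dispatch the request here. */
            }
        }
    }
}
```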
Since Nginx serves as the entry gateway, it matters a great deal, and a single point of failure there is clearly unacceptable.
The answer: Keepalived + Nginx for high availability.
Keepalived is a high-availability solution mainly used to guard against single points of failure on servers; paired with Nginx, it makes the web service highly available. (In fact, Keepalived can be paired not only with Nginx but with many other services as well.)
The idea behind Keepalived + Nginx high availability:
First: requests should not hit Nginx directly; they should arrive through Keepalived first (this is the so-called virtual IP, or VIP).
Second: Keepalived must be able to monitor Nginx's health (via a user-defined script that periodically checks the state of the Nginx process and adjusts the VRRP priority accordingly), thereby achieving Nginx failover.
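A hedged sketch of what the keepalived.conf side could look like. The directive names are standard Keepalived syntax; the interface name, VIP, and check-script path are placeholders to adapt:

```conf
# Health check: exit code 0 means Nginx is alive; a non-zero exit
# triggers the weight adjustment below, demoting this node.
vrrp_script chk_nginx {
    # Placeholder path; the script might simply run `pidof nginx`.
    script "/usr/local/bin/check_nginx.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER             # the standby node would use BACKUP
    interface eth0           # placeholder NIC name
    virtual_router_id 51
    priority 100             # the highest healthy priority holds the VIP
    advert_int 1

    virtual_ipaddress {
        192.168.1.100        # the VIP that clients actually connect to
    }

    track_script {
        chk_nginx
    }
}
```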