Plain Programmer Description: The system builds an array of the servers being load balanced and uses a random number generator to determine which one gets the next connection. It is far from an elegant solution, and it is most often found in large software packages that have thrown load balancing in as a feature.
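A minimal sketch of the idea in Python (the pool contents and server names are made up for illustration; this is not any vendor's actual code):

import random

# Hypothetical pool of servers being load balanced.
servers = ["app1", "app2", "app3"]

def pick_random(pool):
    # Every new connection goes to a randomly chosen server, with no
    # regard for current load, weight, or response time.
    return random.choice(pool)

print(pick_random(servers))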
Plain Programmer Description: The system builds a standard circular queue and walks through it, sending one request to each machine before getting back to the start of the queue and doing it again. While I've never seen the code (or actual load balancer code for any of these, for that matter), we've all written this kind of queue with the modulus function before, in school if nowhere else.
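Here is roughly what that school-exercise version looks like in Python, assuming the same made-up pool as above:

# Hypothetical pool of servers being load balanced.
servers = ["app1", "app2", "app3"]
counter = 0  # total connections handed out so far

def pick_round_robin():
    # Walk the circular queue with the modulus trick: connection N goes
    # to server N mod pool-size, then start over at the front.
    global counter
    server = servers[counter % len(servers)]
    counter += 1
    return server

for _ in range(5):
    print(pick_round_robin())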
Plain Programmer Description: The simplest way to explain this one is that the system makes multiple entries in the Round Robin circular queue for servers with larger ratios. So if you set ratios of 3:2:1:1 for your four servers, that's what the queue would look like: three entries for the first server, two for the second, and one each for the third and fourth. In this version, the weights are set when load balancing is configured for your application and never change, so the system just keeps looping through that circular queue. Different vendors use different weighting systems (whole numbers, decimals that must total 1.0 (100%), etc.), but this is an implementation detail; they all end up in a circular-queue-style layout with more entries for the servers with larger ratings.
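A sketch of building that circular queue in Python, using the 3:2:1:1 example above (whole-number weights assumed here; as noted, real products may use other weighting schemes):

from itertools import cycle

# Hypothetical pool with static ratios of 3:2:1:1.
ratios = {"app1": 3, "app2": 2, "app3": 1, "app4": 1}

# Expand the ratios into a circular queue: three entries for app1,
# two for app2, and one each for app3 and app4.
queue = [name for name, weight in ratios.items() for _ in range(weight)]
next_server = cycle(queue)  # keep looping through the queue forever

for _ in range(7):
    print(next(next_server))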
Plain Programmer Description: If you think of Weighted Round Robin where the circular queue is rebuilt with new (dynamic) weights whenever it has been fully traversed, you’ll be dead-on.
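A sketch of that dynamic version in Python; the weight source is a stand-in function, since how a real balancer computes its dynamic weights is product-specific:

def dynamic_round_robin(current_weights):
    # Yield servers one at a time, rebuilding the circular queue from
    # fresh weights every time it has been fully traversed.
    while True:
        weights = current_weights()
        queue = [name for name, w in weights.items() for _ in range(w)]
        for server in queue:
            yield server

# Stand-in for whatever monitoring actually feeds the dynamic weights.
picker = dynamic_round_robin(lambda: {"app1": 3, "app2": 1, "app3": 2})
print([next(picker) for _ in range(6)])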
Plain Programmer Description: The load balancer looks at the response time of each attached server and chooses the one with the best response time. This is pretty straightforward, but it can lead to congestion because the response time right now won't necessarily be the response time in one or two seconds. Since connections generally go through the load balancer, this algorithm is a lot easier to implement than you might think, as long as the numbers are kept up to date whenever a response comes through.
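A sketch of that bookkeeping in Python, with made-up response times; the point is just that selection is a lookup of the best current number, updated as responses come back through the balancer:

# Most recently observed response time (in seconds) per server.
response_times = {"app1": 0.120, "app2": 0.045, "app3": 0.300}

def record_response(server, seconds):
    # Called whenever a response passes back through the load balancer,
    # keeping the numbers up to date.
    response_times[server] = seconds

def pick_fastest():
    # Choose the server with the best (lowest) current response time.
    return min(response_times, key=response_times.get)

print(pick_fastest())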
For these next three I use the BIG-IP names. They are variants of a generalized algorithm sometimes called Long Term Resource Monitoring.
Plain Programmer Description: This algorithm just keeps track of the number of connections attached to each server and selects the one with the smallest number to receive the next connection. Like Fastest, this can cause congestion when the connections are all of different durations, for example if one is loading a plain HTML page and another is running a JSP with a ton of database lookups. Connection counting just doesn't account for that scenario very well.
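A sketch of the counting in Python, with made-up connection counts:

# Current number of open connections per server; incremented when a
# connection is assigned and decremented when it closes.
open_connections = {"app1": 12, "app2": 7, "app3": 9}

def pick_least_connections():
    # Choose the server currently holding the fewest connections and
    # count the new connection against it.
    server = min(open_connections, key=open_connections.get)
    open_connections[server] += 1
    return server

def connection_closed(server):
    open_connections[server] -= 1

print(pick_least_connections())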
Plain Programmer Description: This algorithm tries to merge Fastest and Least Connections, which does make it more appealing than either of the two above alone. In this case, an array is built with the information indicated (how the weighting is done will vary, and I don't know the details even for F5, let alone our competitors), and the element with the highest value is chosen to receive the connection. This somewhat counters the weaknesses of both of the original algorithms, but it does not account for a server that is about to be overloaded, such as when three requests to that query-heavy JSP have just been submitted but have not yet hit the heavy work.
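A sketch of that combined ranking in Python. Since the real weighting is not public (as noted above), the 50/50 blend of normalized metrics here is purely an illustrative guess:

open_connections = {"app1": 12, "app2": 7, "app3": 9}
response_times = {"app1": 0.120, "app2": 0.045, "app3": 0.300}

def combined_scores():
    # Higher score = more desirable: fewer connections and faster responses.
    max_conn = max(open_connections.values())
    max_rt = max(response_times.values())
    return {
        server: 0.5 * (1 - open_connections[server] / max_conn)
              + 0.5 * (1 - response_times[server] / max_rt)
        for server in open_connections
    }

def pick_by_score():
    scores = combined_scores()
    return max(scores, key=scores.get)

print(pick_by_score())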
Plain Programmer Description: This method attempts to fix the one problem with Observed by watching what is happening with the server over time. If its performance has started to decline, it becomes less likely to receive the next connection. Again, I have no idea what the weightings are, but an array is built and the most desirable server is chosen.
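A sketch of the trend-watching idea in Python, building on the scores above; the history length and the way the trend nudges the score are illustrative guesses, not a vendor formula:

# A short history of each server's ranking score (higher = more desirable).
score_history = {
    "app1": [0.40, 0.45, 0.50],   # improving
    "app2": [0.70, 0.60, 0.55],   # declining
    "app3": [0.52, 0.52, 0.53],   # roughly flat
}

def pick_trend_aware():
    def desirability(server):
        history = score_history[server]
        trend = history[-1] - history[0]   # positive means getting better
        return history[-1] + trend         # current score nudged by its trend
    return max(score_history, key=desirability)

print(pick_trend_aware())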
You can see that with some of these algorithms, persistent connections can cause problems. Take Round Robin: if connections persist to a server for as long as the user session is active, some servers will build up a backlog of persistent connections that slows their response time. The Long Term Resource Monitoring algorithms are the best choice if you have a significant number of persistent connections. Fastest also works reasonably well in this scenario if you don't have access to any of the dynamic solutions.
1. Install the required support libraries:
yum -y install gcc gcc-c++ autoconf
yum -y install openssl openssl-devel
pcre: needed for rewrite support; zlib: needed for gzip compression; the ngx_pagespeed module: a plugin that speeds up front-end page loads.
(1) Installing pcre:
wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.36.tar.gz
tar -zxvf pcre-8.36.tar.gz
cd pcre-8.36
./configure
make && make install
cd ../
ln -s /usr/local/lib/libpcre.so.1 /lib64/
1. What is the "Dunning-Kruger effect"?
The Dunning-Kruger effect is a cognitive bias: people with low ability tend to hold an illusory sense of superiority, mistakenly believing that they are more capable than they really are. Lacking accurate reflection on their own abilities, they fail to recognize their shortcomings and routinely overestimate their level of competence.
In 1999, Kruger and David Dunning of Cornell University ran an experiment to test whether people who lack a particular skill can correctly recognize that they do not have it. In a series of studies, they asked students to rate their own logical reasoning ability, knowledge of grammar, and sense of humor, then compared those self-ratings with the assessment panel's results. They found that the great majority of the weakest students overestimated their own ranking.
The students with the strongest actual abilities, by contrast, somewhat underestimated themselves: they assumed everyone else was just as capable, and so ranked themselves lower. It seems neither group could assess itself accurately.
A follow-up to this experiment found that once the weaker students received training, they were able to evaluate themselves relatively objectively instead of being as blindly confident as before. The research eventually earned the authors the 2000 Ig Nobel Prize in Psychology (yes, the Ig Nobel, but it is thought-provoking all the same), revealing how commonly people lack an objective view of themselves.