Tomcat 6 Clustering
Required software:
Apache 2.2: http://archive.apache.org/dist/httpd/binaries/win32/
tomcat-connectors-1.2.37: http://apache.etoak.com/tomcat/tomcat-connectors/jk/source/jk-1.2.28/tomcat-connectors-1.2.28-src.zip
JK2 1.2: http://labs.xiaonei.com/apache-mirror/tomcat/tomcat-connectors/jk/binaries/
Port assignments:
Apache:  80
Tomcat1: 8005 (shutdown), 8080 (HTTP), 8443 (redirect), 8009 (AJP)
Tomcat2: 8006 (shutdown), 8081 (HTTP), 8444 (redirect), 8010 (AJP)
1. Unpack the JK download and copy mod_jk.so into the Apache2.2\modules directory.
2. Configure Apache
Open httpd.conf and add in the corresponding places:
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkMountFile conf/urimap.properties
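Optionally, mod_jk's logging directives help when troubleshooting the setup; the file name and level below are only suggestions, not part of the original configuration:
# Optional mod_jk logging (example values)
JkLogFile conf/mod_jk.log
JkLogLevel info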
Add the configuration file workers.properties
In Apache's conf directory, create a file named workers.properties with the following content:
worker.list=router  # the request-handling worker; more than one can be listed here
Reference: http://wenku.baidu.com/view/5a135db91a37f111f1855b7a.html
# Define the LB worker
worker.router.type=lb
worker.router.balance_workers=tomcat1,tomcat2  # comma-separated list of workers that share the requests
# Session stickiness for the balanced workers: with 0, Tomcat reports errors
# during session replication (replication still works); with 1 there are no errors.
worker.router.sticky_session=1
# Define one real Tomcat worker using ajp13
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8009
worker.tomcat1.lbfactor=1
# Define another Tomcat worker using ajp13
worker.tomcat2.type=ajp13
worker.tomcat2.host=localhost
worker.tomcat2.port=8010
worker.tomcat2.lbfactor=1
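The lbfactor values set each member's relative weight: with both at 1 the load splits evenly. As a sketch, to send tomcat1 roughly twice as many requests as tomcat2 you would raise its factor:
# Example weighting only, not part of this setup
worker.tomcat1.lbfactor=2
worker.tomcat2.lbfactor=1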
Add the configuration file urimap.properties
In Apache's conf directory, create a file named urimap.properties with the following content:
/*.jsp=router  # all .jsp requests are handled by the router worker
/*/servlet/*=router
/cluster/*=router
!/*.gif=router  # requests ending in .gif are NOT handled by the router worker
The "!" here works like Java's "!": it means "not".
These rules map the web application's request URIs onto the JK worker entry named router.
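As an alternative to a separate JkMountFile, the same mappings can be written directly in httpd.conf with JkMount/JkUnMount directives; a sketch of the equivalent rules:
JkMount /*.jsp router
JkMount /cluster/* router
JkUnMount /*.gif router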
3. Tomcat1 configuration
1) Edit server.xml:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
The jvmRoute value must match the worker name tomcat1 (worker.tomcat1) defined in workers.properties.
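Tomcat1 keeps the default ports listed in the table at the top (8005/8080/8443/8009). For reference, its server.xml entries would look like the following sketch, mirroring the Tomcat2 snippet in the next step:
<Server port="8005" shutdown="SHUTDOWN">
<Connector port="8080" maxHttpHeaderSize="8192"
    maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" redirectPort="8443" acceptCount="100"
    connectionTimeout="20000" disableUploadTimeout="true"/>
<Connector port="8009"
    enableLookups="false" redirectPort="8443" protocol="AJP/1.3"/>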
4. Tomcat2 configuration
1) Edit server.xml:
<Server port="8006" shutdown="SHUTDOWN">
<Connector port="8081" maxHttpHeaderSize="8192"
    maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" redirectPort="8444" acceptCount="100"
    connectionTimeout="20000" disableUploadTimeout="true"/>
<Connector port="8010"
    enableLookups="false" redirectPort="8444" protocol="AJP/1.3"/>
The port values above (8006, 8081, 8444, 8010) must not repeat Tomcat1's; every port has to be unique among instances sharing one IP.
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
The jvmRoute value must match the worker name tomcat2 (worker.tomcat2) defined in workers.properties.
5. Session replication configuration
Still in each Tomcat's server.xml,
add the following under the Engine element:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="6">
    <Manager className="org.apache.catalina.ha.session.BackupManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"
             mapSendOptions="6"/>
    <!-- Some say DeltaManager is more stable than BackupManager; unverified.
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>
    -->
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.4" port="45564" frequency="500" dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto" port="5001" selectorTimeout="100" maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
              tempDir="/tmp/war-temp/" deployDir="/tmp/war-deploy/"
              watchDir="/tmp/war-listen/" watchEnabled="false"/>
    <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
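One caveat for this two-instances-on-one-machine setup: both copies of the snippet declare Receiver port 5001. Tomcat 6's NioReceiver has an autoBind attribute (default 100) that lets the second instance fall through to the next free port, so the shared value usually still works; to be explicit you could pin a distinct port per instance, e.g. for Tomcat2:
<!-- Sketch only: a fixed, distinct Receiver port for Tomcat2 -->
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
          address="auto" port="5002" selectorTimeout="100" maxThreads="6"/>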
Most important of all:
Add <distributable/> to the web application's web.xml; without it, sessions are not replicated.
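A minimal sketch of where the element goes (a Servlet 2.5 descriptor is assumed here; note also that any object you store in a replicated session must implement java.io.Serializable):
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">
    <!-- Marks the application as cluster-safe so Tomcat replicates its sessions -->
    <distributable/>
</web-app>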
Final test:
Create a web project whose index.jsp contains:
<body>
    <center>
        <h1>
        <%
            session.setAttribute("username", "xiaomei");
            out.println("<hr/>");
            out.println("session attribute: " + session.getAttribute("username"));
            out.println("SessionId: " + session.getId());
            out.println("<hr/>");
        %>
        </h1>
        <a href="other.jsp">jump</a>
    </center>
</body>
In other.jsp, read the username value stored in the session and check that it is displayed.
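The article never shows other.jsp itself; a minimal sketch that reads the value back (the attribute name username is taken from index.jsp above) could be:
<body>
    <center>
        <h1>
        <%
            // Should still print "xiaomei" after failing over to the other node
            out.println("session attribute: " + session.getAttribute("username"));
            out.println("SessionId: " + session.getId());
        %>
        </h1>
    </center>
</body>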
If everything works, it displays:
session attribute: xiaomei
SessionId: 8C0113C471EF5ED1989CAFCC4CC81807.tomcat1
Then shut down the current Tomcat, click the jump link, and check whether it shows:
session attribute: xiaomei
SessionId: 8C0113C471EF5ED1989CAFCC4CC81807.tomcat2
If it does, the configuration succeeded!
Appendix: the sample tomcat-connectors-1.2.37-src\conf\workers.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Note that the distributed version of this file requires modification
# before it is usable.
#
# Reference documentation: http://tomcat.apache.org/connectors-doc/reference/workers.html
#
# As a general note, the characters $( and ) are used internally to define
# macros. Do not use them in your own configuration!!!
#
# Whenever you see a set of lines such as:
# x=value
# y=$(x)\something
#
# the final value for y will be value\something

# Define two status workers:
# - jk-status for read-only use
# - jk-manager for read/write use
worker.list=jk-status
worker.jk-status.type=status
worker.jk-status.read_only=true

worker.list=jk-manager
worker.jk-manager.type=status

# We define a load balancer worker
# with name "balancer"
worker.list=balancer
worker.balancer.type=lb

# error_escalation_time: seconds, default = recover_time/2 (=30)
# Determines, how fast a detected error should switch from
# local error state to global error state
# Since: 1.2.28
worker.balancer.error_escalation_time=0

# - max_reply_timeouts: number, default=0
# If there are too many reply timeouts, a worker
# is put into the error state, i.e. it will become
# unavailable for all sessions residing on the respective
# Tomcat. The number of tolerated reply timeouts is
# configured with max_reply_timeouts. The number of
# timeouts occurring is divided by 2 once a minute and the
# resulting counter is compared against max_reply_timeouts.
# If you set max_reply_timeouts to N and the errors are
# occurring equally distributed over time, you will
# tolerate N/2 errors per minute. If they occur in a burst
# you will tolerate N errors.
# Since: 1.2.24
worker.balancer.max_reply_timeouts=10

# Now we add members to the load balancer
# First member is "node1", most
# attributes are inherited from the
# template "worker.template".
worker.balancer.balance_workers=node1
worker.node1.reference=worker.template
worker.node1.host=localhost
worker.node1.port=8109
# Activation allows to configure
# whether this node should actually be used
# A: active (use node fully)
# D: disabled (only use, if sticky session needs this node)
# S: stopped (do not use)
# Since: 1.2.19
worker.node1.activation=A

# Second member is "node2", most
# attributes are inherited from the
# template "worker.template".
worker.balancer.balance_workers=node2
worker.node2.reference=worker.template
worker.node2.host=localhost
worker.node2.port=8209
# Activation allows to configure
# whether this node should actually be used
# A: active (use node fully)
# D: disabled (only use, if sticky session needs this node)
# S: stopped (do not use)
# Since: 1.2.19
worker.node2.activation=A

# Finally we put the parameters
# which should apply to all our ajp13
# workers into the referenced template
# - Type is ajp13
worker.template.type=ajp13

# - socket_connect_timeout: milliseconds, default=0
# Since: 1.2.27
worker.template.socket_connect_timeout=5000

# - socket_keepalive: boolean, default=false
# Should we send TCP keepalive packets
# when connection is idle (socket option)?
worker.template.socket_keepalive=true

# - ping_mode: Character, default=none
# When should we use cping/cpong connection probing?
# C = directly after establishing a new connection
# P = directly before sending each request
# I = in regular intervals for idle connections
#     using the watchdog thread
# A = all of the above
# Since: 1.2.27
worker.template.ping_mode=A

# - ping_timeout: milliseconds, default=10000
# Wait timeout for cpong after cping
# Can be overwritten for modes C and P
# Using connect_timeout and prepost_timeout.
# Since: 1.2.27
worker.template.ping_timeout=10000

# - connection_pool_minsize: number, default=connection_pool_size
# Lower pool size when shrinking pool due
# to idle connections
# We want all connections to be closed when
# idle for a long time in order to prevent
# firewall problems.
# Since: 1.2.16
worker.template.connection_pool_minsize=0

# - connection_pool_timeout: seconds, default=0
# Idle time, before a connection is eligible
# for being closed (pool shrinking).
# This should be the same value as connectionTimeout
# in the Tomcat AJP connector, but there it is
# milliseconds, here seconds.
worker.template.connection_pool_timeout=600

# - reply_timeout: milliseconds, default=0
# Any pause longer than this timeout during waiting
# for a part of the reply will abort handling the request
# in mod_jk. The request will proceed running in
# Tomcat, but the web server resources will be freed
# and an error is sent to the client.
# For individual requests, the timeout can be overwritten
# by the Apache environment variable JK_REPLY_TIMEOUT.
# JK_REPLY_TIMEOUT since: 1.2.27
worker.template.reply_timeout=300000

# - recovery_options: number, default=0
# Bit mask to configure, if a request, which was sent
# to a backend successfully, should be retried on another backend
# in case there's a problem with the response.
# Value "3" disables retries, whenever a part of the request was
# successfully sent to the backend.
worker.template.recovery_options=3