JBoss Data Grid 7.2 Quick Start

First, go to http://access.redhat.com and download the media, mainly jboss-datagrid-7.2.0-server.zip and jboss-datagrid-7.2.0-tomcat8-session-client.zip.

The former is used to run the JBoss Data Grid server; the latter is for the Tomcat client side, which connects to and operates on the grid in client-server mode.

1. Installation

Installation is simply unzipping the archive. Note, however, that if you want several servers to form a cluster, you need to create a separate directory and unzip a copy into each one. I tried running two instances from a single installation with different configurations only, and it failed, because other files need to be written by both processes once they start. So the best approach is one directory per instance.
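For example, a layout like the following (the paths are only an illustration, not from the original setup):

D:\jdg\server1\jboss-datagrid-7.2.0-server\    <- first instance, started with clustered1.xml
D:\jdg\server2\jboss-datagrid-7.2.0-server\    <- second instance, started with clustered2.xml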

Edit the configuration file clustered.xml (copied here as clustered1.xml and clustered2.xml, one per instance). If you need your own cache definition, you can add a section like the following:

<subsystem xmlns="urn:infinispan:server:endpoint:6.0">
    <hotrod-connector socket-binding="hotrod" cache-container="clusteredcache">
        <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/>
    </hotrod-connector>
    .........
</subsystem>

<subsystem xmlns="urn:infinispan:server:core:6.0" default-cache-container="clusteredcache">
    <cache-container name="clusteredcache" default-cache="default" statistics="true">
        <transport executor="infinispan-transport" lock-timeout="60000"/>
        ......
        <distributed-cache name="directory-dist-cache" mode="SYNC" owners="2" remote-timeout="30000" start="EAGER">
            <locking isolation="READ_COMMITTED" acquire-timeout="30000" striping="false"/>
            <eviction strategy="LRU" max-entries="20"/>
            <transaction mode="NONE"/>
        </distributed-cache>
        ..............
    </cache-container>
</subsystem>

If you do not need a custom definition, you can use the default configuration, i.e. the cache named default, configured as distributed:

<distributed-cache name="default"/>

For server2, change the ports. The key setting is the port-offset attribute on the socket-binding-group shown below:

<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:100}">
        <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
        <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
        <socket-binding name="hotrod" port="11222"/>
        <socket-binding name="hotrod-internal" port="11223"/>
        <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45700"/>
        <socket-binding name="jgroups-tcp" port="7600"/>
        <socket-binding name="jgroups-tcp-fd" port="57600"/>
        <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45688"/>
        <socket-binding name="jgroups-udp-fd" port="54200"/>
        <socket-binding name="memcached" port="11211"/>
        <socket-binding name="rest" port="8080"/>
        <socket-binding name="rest-multi-tenancy" port="8081"/>
        <socket-binding name="rest-ssl" port="8443"/>
        <socket-binding name="txn-recovery-environment" port="4712"/>
        <socket-binding name="txn-status-manager" port="4713"/>
        <outbound-socket-binding name="remote-store-hotrod-server">
            <remote-destination host="remote-host" port="11222"/>
        </outbound-socket-binding>
        <outbound-socket-binding name="remote-store-rest-server">
            <remote-destination host="remote-host" port="8080"/>
        </outbound-socket-binding>
    </socket-binding-group>
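Note that the offset applies to every binding in the group: with port-offset=100, server2's Hot Rod endpoint moves from 11222 to 11322 and its management port from 9990 to 10090. That is why the client code in section 4 connects to port 11322.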

2. Startup

standalone.bat -c=clustered1.xml -Djboss.node.name=server1

standalone.bat -c=clustered2.xml -Djboss.node.name=server2
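Here clustered1.xml and clustered2.xml carry different port-offset defaults. If the two files are otherwise identical, the offset can instead be passed on the command line via the jboss.socket.binding.port-offset property that the socket-binding-group already references:

standalone.bat -c=clustered2.xml -Djboss.node.name=server2 -Djboss.socket.binding.port-offset=100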

In the logs you can see server2 joining the cluster and the data being rebalanced.

3. Monitoring

I was surprised to learn that JBoss ON is approaching end of life; going forward, monitoring will mostly move to Prometheus or to containerized approaches on OpenShift. So for now, here is the most basic JMX monitoring.

Start jconsole and connect over JMX, either to the local process or to the remote management port (9990). In the MBeans tab, find jboss.datagrid-infinispan. The same attributes can also be read programmatically, as sketched after the list below.

  • View cluster attributes: CacheManager -> clustered

  • View cache entries
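A minimal programmatic sketch, assuming the WildFly/JDG client libraries (e.g. bin\client\jboss-cli-client.jar from the server distribution) are on the classpath. The exact ObjectName should be verified against what jconsole shows, and remote connections typically require a management user created with add-user.bat:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ClusterSizeCheck {
    public static void main(String[] args) throws Exception {
        // WildFly-style JMX over the management port (9990 for server1)
        JMXServiceURL url = new JMXServiceURL("service:jmx:remote+http://127.0.0.1:9990");
        try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // ObjectName as displayed by jconsole; adjust if your server exposes a different one
            ObjectName cm = new ObjectName(
                "jboss.datagrid-infinispan:type=CacheManager,name=\"clustered\",component=CacheManager");
            System.out.println("clusterSize    = " + mbsc.getAttribute(cm, "clusterSize"));
            System.out.println("clusterMembers = " + mbsc.getAttribute(cm, "clusterMembers"));
        }
    }
}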

4. Client Access

Under Tomcat's webapps directory, create a project jdg, create WEB-INF inside it, and copy the jars from the session client zip into WEB-INF/lib.

Then write a bit of client access code. The JSP below stores a User object, checks a couple of keys, and reads the object back.
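The page imports com.redhat.lab.jdg.*, so it assumes a User bean compiled into WEB-INF/classes (or packaged into a jar in WEB-INF/lib). The class is not shown here; a minimal sketch, assuming the default Java Hot Rod marshaller, which handles Serializable types:

package com.redhat.lab.jdg;

import java.io.Serializable;

// Minimal value object stored in the remote cache. Serializable is needed
// because the default Java Hot Rod marshaller works on serializable types.
public class User implements Serializable {

    private static final long serialVersionUID = 1L;

    private String firstName;
    private String lastName;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}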

<%@ page language="java" import="java.util.*" pageEncoding="gbk"%>
<%@ page import="org.infinispan.client.hotrod.RemoteCache,org.infinispan.client.hotrod.RemoteCacheManager,org.infinispan.client.hotrod.configuration.ConfigurationBuilder,com.redhat.lab.jdg.*" %>
<html>
  <head>
    <title>My JSP starting page</title>
  </head>

  <body>
    <h1>
    <%
      try {
          // Point the Hot Rod client at server2 (11222 + port-offset 100 = 11322)
          ConfigurationBuilder builder = new ConfigurationBuilder();
          builder.addServer().host("127.0.0.1").port(11322);
          RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
          RemoteCache<String, User> cache = cacheManager.getCache();

          // Store an entry
          User user = new User();
          user.setFirstName("John");
          user.setLastName("Doe");
          cache.put("jdoe", user);
          System.out.println("John Doe has been put into the cache");
          out.println("John Doe has been put into the cache");

          // Check a key that was stored...
          if (cache.containsKey("jdoe")) {
              System.out.println("jdoe key is indeed in the cache");
              out.println("jdoe key is indeed in the cache");
          }

          // ...and one that was not
          if (cache.containsKey("jane")) {
              System.out.println("jane key is indeed in the cache");
              out.println("jane key is indeed in the cache");
          }

          // Read the entry back
          user = cache.get("jdoe");
          System.out.println("jdoe's firstname is " + user.getFirstName());
          out.println("jdoe's firstname is " + user.getFirstName());

          // Release client resources (don't leak one manager per request)
          cacheManager.stop();
      } catch (Exception e) {
          e.printStackTrace();
      }
    %>
    </h1>
  </body>
</html>

Then verify in the usual ways: the page output, the server logs, and the JMX attributes above.

Next, I will look into deploying on OpenShift.

Reposted from www.cnblogs.com/ericnie/p/10776297.html