Atguigu SpringCloud 2020, Day 17 (p136-141)

Today is 2020-12-21.
I. Integrating Sentinel with Feign
1. Add the openfeign starter to pom.xml
2. application.yml:

 feign:
  sentinel:
   enabled: true

3. Main application class: add @EnableFeignClients
4. On the PaymentService interface, add @FeignClient(value = "nacos-payment-provider", fallback = PaymentFallback.class)
5. Create a PaymentFallback class that implements PaymentService, and annotate it with @Component
6. Create a controller, inject PaymentService, and call the interface methods to complete the remote call
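Steps 4-6 above form the standard fallback pattern: Spring wires it automatically through @FeignClient(fallback = ...). The plain-Java sketch below (no Spring; all class and method names are hypothetical stand-ins) shows the idea: when the remote call throws, the caller degrades to the fallback implementation.

```java
// Sketch of the Feign fallback pattern from steps 4-6. Plain Java only;
// PaymentClient stands in for the proxy that Feign/Sentinel generate.
interface PaymentService {
    String getPayment(long id); // in the real project, the @FeignClient interface method
}

// Step 5: the fallback implementation used when the remote provider is unavailable
class PaymentFallback implements PaymentService {
    @Override
    public String getPayment(long id) {
        return "fallback: payment service unavailable, id=" + id;
    }
}

// Stand-in for the Feign proxy: tries the remote call, degrades to the fallback on failure
class PaymentClient implements PaymentService {
    private final PaymentService remote;
    private final PaymentService fallback = new PaymentFallback();

    PaymentClient(PaymentService remote) { this.remote = remote; }

    @Override
    public String getPayment(long id) {
        try {
            return remote.getPayment(id);   // normally an HTTP call to nacos-payment-provider
        } catch (RuntimeException e) {
            return fallback.getPayment(id); // Sentinel/Feign route failures here
        }
    }
}

public class FallbackDemo {
    public static void main(String[] args) {
        // simulate a provider that is down
        PaymentService failing = id -> { throw new RuntimeException("provider down"); };
        PaymentService client = new PaymentClient(failing);
        System.out.println(client.getPayment(31L)); // prints the fallback message
    }
}
```

In the real project the controller injects PaymentService and never sees the try/catch; the degradation is transparent to the caller.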
II. Persisting Sentinel rules
Persist the configured rules to Nacos:
1. Add the sentinel-datasource-nacos dependency to pom.xml
2. application.yml:

server:
  port: 8401
spring:
  application:
    name: cloudalibaba-sentinel-service
  cloud:
    nacos:
      discovery:
        # Nacos address
        server-addr: xxx:8080
    sentinel:
      transport:
        # Sentinel dashboard address
        dashboard: xxx:8858
        # The application starts a local HttpServer on this port to talk to the Sentinel dashboard; default is 8719, and if it is taken the port is incremented until a free one is found
        port: 8719
        # If Sentinel runs in Docker, set this to the local machine's IP so the dashboard can show real-time monitoring of the local service
        clientIp: 127.0.0.1
      datasource:
        ds1:
          nacos:
            server-addr: xxx:8080
            dataId: ${spring.application.name}
            groupId: DEFAULT_GROUP
            data-type: json
            rule-type: flow
management:
  endpoints:
    web:
      exposure:
        include: "*"

3. In Nacos, create a new configuration named cloudalibaba-sentinel-service, format JSON:
[
  {
    "resource": "hello",
    "limitApp": "default",
    "grade": 1,
    "count": 1,
    "strategy": 0,
    "controlBehavior": 0,
    "clusterMode": false
  }
]
Click Publish, then restart the project. Access /hello and you will see the configured flow rule, and it takes effect (grade 1 = limit by QPS, count 1 = at most 1 request per second, strategy 0 = direct, controlBehavior 0 = fast fail).
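What the rule above enforces can be illustrated with a toy fixed-window counter. This is a simplification for intuition only: Sentinel itself uses a sliding-window implementation, and the class below is not part of any real API.

```java
// Toy QPS limiter illustrating grade=1, count=1: at most `count` requests pass
// per one-second window; extra calls are rejected (fast fail, controlBehavior=0).
// Simplified fixed window; Sentinel's real implementation is a sliding window.
class QpsRule {
    private final int count;          // the "count" field of the flow rule
    private long windowStartMs = -1;  // start of the current 1-second window
    private int passed = 0;           // requests already allowed in this window

    QpsRule(int count) { this.count = count; }

    synchronized boolean tryPass(long nowMs) {
        if (windowStartMs < 0 || nowMs - windowStartMs >= 1000) {
            windowStartMs = nowMs;    // a new one-second window begins
            passed = 0;
        }
        if (passed < count) {
            passed++;
            return true;              // request allowed
        }
        return false;                 // blocked: the "flow limiting" response
    }
}

public class QpsDemo {
    public static void main(String[] args) {
        QpsRule rule = new QpsRule(1);
        System.out.println(rule.tryPass(0));    // true  (first request in the window)
        System.out.println(rule.tryPass(500));  // false (second request in the same second)
        System.out.println(rule.tryPass(1200)); // true  (new window has started)
    }
}
```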
4. Note: maybe someone else runs Sentinel in Docker rather than locally, like I do. My service shows up in the Sentinel console, but the console cannot monitor the local service's interface calls, and rules configured there do not take effect. After setting up persistence in Nacos this time, the rule still did not show up in the Sentinel console, yet when I accessed the interface the rule actually took effect, which left me speechless. Because monitoring never worked, for many earlier sections I could only watch the instructor without practicing myself. Config-file-based rules are cumbersome to set up, so going back to redo the earlier exercises is not worth it; still, this at least let me practice the simplest QPS flow control, so I caught the last train after all. Everything is wonderfully strange.
III. Distributed transactions
In distributed microservices, each service connects to its own database. A single request may span multiple service calls and thus multiple data sources, so the database operations of all the services involved must be coordinated, i.e. they must all belong to one transaction: if one service's database operation fails, the other services must roll back even if their own operations succeeded. That is a distributed transaction.
IV. Installing Seata 1.2 with Docker
1.docker pull seataio/seata-server:1.2.0
2.docker run -d --name seata-server -e SEATA_IP=xxx -e SEATA_PORT=8091 -p 8091:8091 seataio/seata-server:1.2.0
3.cd /usr/local
4.mkdir -p /usr/local/seata
5. docker cp <seata-container-id>:/seata-server/resources/file.conf /usr/local/seata/file.conf  # copy file.conf out of the container into /usr/local/seata
6. docker cp <seata-container-id>:/seata-server/resources/registry.conf /usr/local/seata/registry.conf
7. vim file.conf: change the store mode to db and update the database connection info:

store {
  ## store mode: file, db
  mode = "db"

  ## file store property
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # globe session size , if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size , if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }

  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.cj.jdbc.Driver"   # note: this is the MySQL 8 driver class
    url = "jdbc:mysql://192.168.100.132:3306/seata?serverTimezone=UTC"
    user = "root"
    password = "123456"
    minConn = 5
    maxConn = 30
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }
}

8. vim registry.conf: register Seata into Nacos:

registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "xxx"
    namespace = ""
    cluster = "default"
    username = ""
    password = ""
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"

  nacos {
    serverAddr = "localhost"
    namespace = ""
    group = "SEATA_GROUP"
    username = ""
    password = ""
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    appId = "seata-server"
    apolloMeta = "http://192.168.1.204:8801"
    namespace = "application"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}

9. docker cp file.conf <seata-container-id>:/seata-server/resources/file.conf
10. docker cp registry.conf <seata-container-id>:/seata-server/resources/registry.conf
11. On the database server specified in file.conf, create a database named seata and run the SQL script the Seata project provides for it, otherwise Seata will fail to start
12. docker restart <seata-container-id>
13. docker logs <seata-container-id> to make sure it started successfully
14. Log in to Nacos and check the service list; seeing seata-server means the setup succeeded
V. Seata
Seata is Alibaba's open-source distributed transaction solution.
Key concepts:
1. XID (Transaction ID): a globally unique transaction ID
2. TM (Transaction Manager): defines the scope of a global transaction; responsible for beginning, committing, or rolling back the global transaction
3. TC (Transaction Coordinator): maintains the state of global and branch transactions and drives the global commit or rollback
4. RM (Resource Manager): manages the resources that branch transactions operate on, talks to the TC to register branch transactions and report their status, and drives branch commit or rollback
VI. Seata's processing flow
1. The TM asks the TC to open a global transaction, and a globally unique XID is generated
2. The XID is propagated along the microservice call chain
3. Each RM registers its branch transaction with the TC, placing it under the global transaction identified by the XID
4. The TM asks the TC to commit or roll back the global transaction identified by the XID
5. The TC drives all branch transactions under that XID to complete the commit or rollback
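The steps above can be sketched as a toy coordinator. This is a simplification for intuition only, not Seata's real protocol or API (in practice the TM side is usually just a @GlobalTransactional method); all class names below are illustrative stand-ins.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the TM/TC/RM flow: the TM obtains an XID from the TC, RMs
// register branches under it, and the TC drives all branches to one outcome.
class ToyTc {
    private long nextXid = 1;
    private final List<String> branches = new ArrayList<>(); // branches of the current global tx

    long begin() { return nextXid++; }                       // step 1: TM asks TC to open a global tx, XID generated

    void registerBranch(long xid, String resource) {         // step 3: an RM registers its branch under the XID
        branches.add("xid=" + xid + ":" + resource);
    }

    // steps 4-5: TM requests commit or rollback; TC drives every branch to the same outcome
    List<String> end(long xid, boolean commit) {
        List<String> outcomes = new ArrayList<>();
        for (String b : branches) {
            outcomes.add(b + (commit ? " committed" : " rolled back"));
        }
        return outcomes;
    }
}

public class SeataFlowDemo {
    public static void main(String[] args) {
        ToyTc tc = new ToyTc();
        long xid = tc.begin();                  // TM opens the global transaction
        tc.registerBranch(xid, "order-db");     // step 2: the XID travels with each call;
        tc.registerBranch(xid, "storage-db");   //         each RM registers its branch
        // suppose one branch failed, so the TM asks for a global rollback:
        System.out.println(tc.end(xid, false)); // every branch rolls back, even the successful ones
    }
}
```

The key property the sketch captures is atomicity across services: the TC never lets one branch commit while another rolls back.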

Reposted from blog.csdn.net/qq_44727091/article/details/111476120