Installing InfluxDB (1.8) with Docker and integrating it with SpringBoot

1. Installing InfluxDB (1.8) with Docker

1. Pull the image

docker search influxdb      # search for the image
docker pull influxdb:1.8    # pull the image; without a tag the latest version is pulled
docker images               # list the local images

2. Initialize the container

# -d run in the background, -p expose port 8086, -v persist data under /data/docker/influxdb, --restart=always restart with the Docker daemon
docker run -d -p 8086:8086 --name influxdb1.8 -v /data/docker/influxdb:/var/lib/influxdb --restart=always influxdb:1.8

View container running status

docker ps      # list running containers
docker ps -a   # list all containers, including stopped ones

3. Enter the influxdb container to modify the configuration

docker exec -it influxdb1.8 /bin/bash

Locate the configuration file and edit it

cd /etc/influxdb/
apt-get update         # refresh the package index
apt-get install vim    # install vim
vim influxdb.conf      # open the configuration file

Modify configuration content

[data]
1. max-series-per-database = 1000000
   The maximum number of series allowed per database; the default is one million. A series is the set of data that shares the same measurement, tag set and retention policy. Set this to 0 to allow an unlimited number of series per database. Writes that exceed the limit are rejected with a 500 error and the message {"error": "max series per database exceeded: "}.
2. max-values-per-tag = 100000
   The maximum number of values allowed per tag key; the default is 100,000. Set this to 0 to remove the limit. Writes that exceed it return an error.
[http]
3. auth-enabled = true
   Enables HTTP authentication, so requests must supply a valid username and password.
The relevant parts of the configuration file after the changes

[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  engine = "tsm1"
  wal-dir = "/var/lib/influxdb/wal"
  max-series-per-database=1000000
  max-values-per-tag=100000

[http]
  auth-enabled=true

4. Add users

# inside the container, log in to the database from the command line
influx -host localhost -port 8086 -database mydb
# list users
show users
# create a user with a password
create user "root" with password 'root' with all privileges
# check that the user was created
show users

Restart the container, then verify that the username and password work.
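The restart itself is run from the host, using the container name chosen earlier; restarting is also what makes the auth-enabled=true change in influxdb.conf take effect. A minimal sketch:

exit                                    # leave the container shell
docker restart influxdb1.8              # restart InfluxDB so the modified configuration is reloaded
docker exec -it influxdb1.8 /bin/bash   # re-enter the container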

# log in to the database with the username and password
influx -host localhost -port 8086 -database mydb -username 'root' -password 'root'
# list users (if they are displayed, the login succeeded)
show users
# exit the database shell, then the container
exit

Other useful command-line options

# choose the output format of query results with -format
influx -host localhost -port 8086 -database mydb -username 'root' -password 'root' -format json
# pretty-print JSON output with -pretty
influx -host localhost -port 8086 -database mydb -username 'root' -password 'root' -execute 'select * from cpu_load_short' -format json -pretty
# set the displayed timestamp precision with -precision
influx -host localhost -port 8086 -database mydb -username 'root' -password 'root' -execute 'select * from cpu_load_short' -format column -precision ms

5. Using InfluxDB

Retention policies

View the retention policies of the mydb database

show retention policies on mydb

Create a retention policy on the mydb database (policy name: rp-one-year)

create retention policy "rp-one-year" on "mydb" duration 365d replication 1

Change the retention policy of the mydb database

alter retention policy "rp-one-year" on "mydb" duration 365d replication 1 default
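To confirm that data actually lands in the new policy, a point can be written into it explicitly from the influx shell and read back with the fully qualified <retention policy>.<measurement> name; queries that omit the policy only search the database's default policy. A minimal sketch, with devops-idc as an illustrative measurement:

# write a point into the rp-one-year policy
insert into "rp-one-year" devops-idc,host=server01 cpu=23.1
# read it back by qualifying the measurement with the policy name
select * from "rp-one-year"."devops-idc"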

Delete the retention policy

drop retention policy "rp-one-year" on "mydb"

Measurements (tables)

Create a measurement (table)

> use mydb;
Using database mydb
> show measurements;
{
    "results": [
        {}
    ]
}
> insert devops-idc,host=server01 cpu=23.1,mem=0.63
> show measurements;
{
    "results": [
        {
            "series": [
                {
                    "name": "measurements",
                    "columns": [
                        "name"
                    ],
                    "values": [
                        [
                            "devops-idc"
                        ]
                    ]
                }
            ]
        }
    ]
}

View measurements (table)

> show measurements;
{
    "results": [
        {
            "series": [
                {
                    "name": "measurements",
                    "columns": [
                        "name"
                    ],
                    "values": [
                        [
                            "devops-idc"
                        ]
                    ]
                }
            ]
        }
    ]
}

Delete a measurement (table)

> drop measurement "devops-idc"

Insert data

Using the INSERT statement and the line protocol, insert three time-series records for a DevOps environment into the measurement devops-idc-sz. All three records carry the same timestamp, 2019/8/30 17:44:53.

> insert devops-idc-sz,host=server01 cpu=16.1,mem=0.43 1567158293000000000
> insert devops-idc-sz,host=server02 cpu=23.8,mem=0.63 1567158293000000000
> insert devops-idc-sz,host=server03 cpu=56.3,mem=0.78 1567158293000000000

Query data

> select * from "devops-idc-sz"
name: devops-idc-sz
time                cpu  host     mem
----                ---  ----     ---
1567158293000000000 16.1 server01 0.43
1567158293000000000 56.3 server03 0.78
1567158293000000000 23.8 server02 0.63
> select * from "devops-idc-sz" where host='server01' and time = 1567158293000000000
name: devops-idc-sz
time                cpu  host     mem
----                ---  ----     ---
1567158293000000000 16.1 server01 0.43

Update data

Because time-series workloads are write-heavy and read-light, InfluxDB does not support an UPDATE operation, and the author does not recommend updating time-series records at all. If some special scenario does require changing a recorded value, you can rely on the fact that two points with the same timestamp and the same series (measurement plus tag set) are treated as the same record: the newly inserted point overwrites the original one, which effectively updates its field values.

> insert devops-idc-sz,host=server01 cpu=76.1,mem=0.83 1567158293000000000
> select * from "devops-idc-sz";
name: devops-idc-sz
time                cpu  host     mem
----                ---  ----     ---
1567158293000000000 76.1 server01 0.83
1567158293000000000 56.3 server03 0.78
1567158293000000000 23.8 server02 0.63
> 

Delete data

Similarly, because time-series data is written often, read rarely, never updated, and usually deleted in bulk, InfluxDB does not support deleting a single data point. Besides expiring old data automatically through retention policies, InfluxDB supports bulk deletion of points with a WHERE clause, as well as dropping a series, a measurement, a database, or a shard.

(1) Delete records from a measurement with a WHERE clause: from devops-idc-sz, delete the records whose tag host has the value server01 at the time 2019/8/30 17:44:53.

> delete from  "devops-idc-sz" where "host"='server01' and  time=1567158293s

(2) Delete records by dropping a series: remove all records belonging to the series identified by the tag pair "host"='server01'.

> drop series from "devops-idc-sz" where "host"='server01'

(3) Delete records by dropping a measurement: remove all records in the measurement devops-idc-sz.

> drop measurement "devops-idc-sz"

(4) Delete records by dropping a database: remove all records in the database mydb.

> drop database "mydb"

(5) Delete records by dropping a shard: remove all records in shard 3.

> show shards
name: _internal
id database  retention_policy shard_group start_time           end_time             expiry_time          owners
-- --------  ---------------- ----------- ----------           --------             -----------          ------
1  _internal monitor          1           2023-03-12T00:00:00Z 2023-03-13T00:00:00Z 2023-03-20T00:00:00Z 

name: mydb
id database retention_policy shard_group start_time           end_time             expiry_time          owners
-- -------- ---------------- ----------- ----------           --------             -----------          ------
3  mydb     autogen          3           2019-08-26T00:00:00Z 2019-09-02T00:00:00Z 2019-09-02T00:00:00Z 
> drop shard 3

2. SpringBoot integration with InfluxDB: a usage example

1. Add the dependencies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>2.7.4</version>
</dependency>

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
    <version>1.18.24</version>
</dependency>

<!-- influxdb -->
<dependency>
    <groupId>org.influxdb</groupId>
    <artifactId>influxdb-java</artifactId>
    <version>2.14</version>
</dependency>

2. Modify the configuration file (application.yml)

spring:
  influx:
    # database access URL
    url: http://192.168.2.172:8086
    # username
    user: root
    # password
    password: root
    # database name
    database: mydb

3. Read the configuration file

import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

/**
 * InfluxDB configuration properties
 * @author AmazeCode
 * @version 1.0
 * @date 2023/3/12 16:04
 */
@Data
@Configuration
@ConfigurationProperties(prefix = "spring.influx")
public class InfluxDBConfig {

    /**
     * Connection URL
     */
    public String url;

    /**
     * Username
     */
    public String user;

    /**
     * Password
     */
    public String password;

    /**
     * Database name
     */
    public String database;
}

4. Database operation class

import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.BatchPoints;
import org.influxdb.dto.Point;
import org.influxdb.dto.Query;
import org.influxdb.dto.QueryResult;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import javax.annotation.PostConstruct;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;

/**
 * InfluxDB operation service
 * @author AmazeCode
 * @version 1.0
 * @date 2023/3/12 16:10
 */
@Service
public class InfluxdbService {

    @Autowired
    private InfluxDBConfig influxDBConfig;

    // retention policy (falls back to autogen when not set)
    private String retentionPolicy;
    private InfluxDB influxDB;

    @PostConstruct
    public void initInfluxDb() {
        this.retentionPolicy = retentionPolicy == null || "".equals(retentionPolicy) ? "autogen" : retentionPolicy;
        this.influxDB = influxDbBuild();
    }

    /**
     * Create the data retention policy: "defalut" is the policy name, the database name comes from the
     * configuration, 30d keeps data for 30 days, 1 is the replication factor, and the trailing DEFAULT
     * makes it the database's default policy.
     */
    public void createRetentionPolicy() {
        String command = String.format("CREATE RETENTION POLICY \"%s\" ON \"%s\" DURATION %s REPLICATION %s DEFAULT", "defalut", influxDBConfig.database, "30d", 1);
        this.query(command);
    }

    /**
     * Connect to the time-series database and obtain the InfluxDB client.
     **/
    private InfluxDB influxDbBuild() {
        if (influxDB == null) {
            influxDB = InfluxDBFactory.connect(influxDBConfig.url, influxDBConfig.user, influxDBConfig.password);
            influxDB.setDatabase(influxDBConfig.database);
        }
        return influxDB;
    }

    /**
     * Insert a point, timestamped with the current time.
     */
    public void insert(String measurement, Map<String, String> tags, Map<String, Object> fields) {
        influxDbBuild();
        Point.Builder builder = Point.measurement(measurement);
        builder.time(System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        builder.tag(tags);
        builder.fields(fields);
        influxDB.write(influxDBConfig.database, "", builder.build());
    }

    /**
     * Insert a point with an explicit timestamp (milliseconds).
     */
    public void insert(String measurement, long time, Map<String, String> tags, Map<String, Object> fields) {
        influxDbBuild();
        Point.Builder builder = Point.measurement(measurement);
        builder.time(time, TimeUnit.MILLISECONDS);
        builder.tag(tags);
        builder.fields(fields);
        influxDB.write(influxDBConfig.database, "", builder.build());
    }

    /**
     * Write a point over UDP. InfluxDB's UDP listener must be enabled (default port 8089,
     * default database udp); the UDP write API does not accept a database name.
     */
    public void insertUDP(String measurement, long time, Map<String, String> tags, Map<String, Object> fields) {
        influxDbBuild();
        Point.Builder builder = Point.measurement(measurement);
        builder.time(time, TimeUnit.MILLISECONDS);
        builder.tag(tags);
        builder.fields(fields);
        int udpPort = 8089;
        influxDB.write(udpPort, builder.build());
    }

    /**
     * Run a query.
     * @param command the InfluxQL statement
     */
    public QueryResult query(String command) {
        influxDbBuild();
        return influxDB.query(new Query(command, influxDBConfig.database));
    }

    /**
     * Convert a QueryResult into a list of column-name/value maps.
     */
    public List<Map<String, Object>> queryResultProcess(QueryResult queryResult) {
        List<Map<String, Object>> mapList = new ArrayList<>();
        List<QueryResult.Result> resultList = queryResult.getResults();
        // turn every row of every series into a map keyed by column name
        for (QueryResult.Result query : resultList) {
            List<QueryResult.Series> seriesList = query.getSeries();
            if (seriesList != null && seriesList.size() != 0) {
                for (QueryResult.Series series : seriesList) {
                    List<String> columns = series.getColumns();
                    String[] keys = columns.toArray(new String[columns.size()]);
                    List<List<Object>> values = series.getValues();
                    if (values != null && values.size() != 0) {
                        for (List<Object> value : values) {
                            Map<String, Object> map = new HashMap<>(keys.length);
                            for (int i = 0; i < keys.length; i++) {
                                map.put(keys[i], value.get(i));
                            }
                            mapList.add(map);
                        }
                    }
                }
            }
        }
        return mapList;
    }

    /**
     * Extract the total number of rows from a COUNT() query result.
     */
    public long countResultProcess(QueryResult queryResult) {
        long count = 0;
        List<Map<String, Object>> list = queryResultProcess(queryResult);
        if (list != null && list.size() != 0) {
            Map<String, Object> map = list.get(0);
            double num = (Double) map.get("count");
            count = (long) num;
        }
        return count;
    }

    public void createDB(String dbName) {
        influxDbBuild();
        influxDB.createDatabase(dbName);
    }

    /**
     * Batch-write points.
     */
    public void batchInsert(BatchPoints batchPoints) {
        influxDbBuild();
        influxDB.write(batchPoints);
    }

    /**
     * Batch-write line-protocol records.
     * @param database        database name
     * @param retentionPolicy retention policy
     * @param consistency     consistency level
     * @param records         the records to save (BatchPoints.lineProtocol() yields one record)
     */
    public void batchInsert(final String database, final String retentionPolicy, final InfluxDB.ConsistencyLevel consistency, final List<String> records) {
        influxDbBuild();
        influxDB.write(database, retentionPolicy, consistency, records);
    }

    /**
     * Batch-write line-protocol records into the configured database.
     */
    public void batchInsert(final InfluxDB.ConsistencyLevel consistency, final List<String> records) {
        influxDbBuild();
        influxDB.write(influxDBConfig.database, "", consistency, records);
    }
}

5. Test insert and query

import org.influxdb.dto.QueryResult;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.annotation.Resource;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * @author AmazeCode
 * @version 1.0
 * @date 2023/3/12 16:16
 */
@RestController
@RequestMapping("influxdb")
public class InfluxdbController {

    @Resource
    InfluxdbService influxdbService;

    @GetMapping("")
    public Object list() {
        // query all points from the measurement and return them as a list of maps
        String command = "select * from host_cpu_usage_total";
        QueryResult query = influxdbService.query(command);
        List<Map<String, Object>> maps = influxdbService.queryResultProcess(query);
        return maps;
    }

    @PostMapping("")
    public Object add() {
        // write one point with two tags and two fields
        String measurement = "host_cpu_usage_total";
        Map<String, String> tags = new HashMap<>();
        tags.put("host_name", "host2");
        tags.put("cpu_core", "core0");
        Map<String, Object> fields = new HashMap<>();
        fields.put("cpu_usage", 0.22);
        fields.put("cpu_idle", 0.56);
        influxdbService.insert(measurement, tags, fields);
        return "OK";
    }
}
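Once the application is running, the two endpoints can be exercised from the command line. An illustrative sketch that assumes the SpringBoot default port 8080 and no context path:

# write one point via the POST endpoint
curl -X POST http://localhost:8080/influxdb
# read the measurement back via the GET endpoint
curl http://localhost:8080/influxdb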

Query result: (screenshot of the query output omitted)
