Parsing JSON returned by an API directly with awk on Linux

curl -g 'http://10.111.11.111:11111/api/v1/query?query=cpu'|jq -c '.data.result[]'|awk -F '"' '{print "cpu\t" $(NF-9) "\ttime\t" $(NF-2) $(NF-1)}' | awk -F: '{print $1,$2}'|awk -F, '{print $1,$2}'|awk -F '[\t.]' '{print "type:" $1 ",region:shk1,name:" $2 "," $3 ":" strftime("%Y-%m-%d",$4) "," $5 ":" $6 "\t"}'  >>collect.txt
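For context, each line that jq -c '.data.result[]' emits from the Prometheus query API has the shape below. The label names and the instance value are invented here for illustration; the real label set of your metric determines which $(NF-…) offsets point at the fields you want:

{"metric":{"__name__":"cpu","instance":"10.111.11.112:9100","job":"node"},"value":[1571500000.123,"0.85"]}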

With " as the field separator, take the 1st, 2nd, and 9th fields counting from the end:

awk -F '"' '{print  "cpu\t" $(NF-9) "\ttime\t" $(NF-2) $(NF-1)}'

Next, use : as the separator and take the first and second fields:

 awk -F: '{print $1,$2}'
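As a minimal illustration of that step (the input string here is made up), splitting on : and printing only the first two fields drops everything after the second colon-separated field:

echo 'host:9100:extra' | awk -F: '{print $1,$2}'

which outputs: host 9100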

Then split on \t and . and combine the pieces with the labels we need to build key-value pairs. When there is more than one separator, wrap them in square brackets, e.g. [\t.]:

awk -F: '{print $1,$2}'|awk -F, '{print $1,$2}'|awk -F '[\t.]' '{print "type:" $1 ",region:shk1,name:" $2 "," $3 ":" strftime("%Y-%m-%d",$4) "," $5 ":" $6 "\t"}'
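The bracket form makes awk treat the field separator as a regular expression, so any character inside the brackets splits the line. A small made-up example:

printf 'cpu\tnode01\ttime\t1571500000.123\n' | awk -F '[\t.]' '{print $1, $2, $3, $4, $5}'

which outputs: cpu node01 time 1571500000 123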

Convert the Unix timestamp (for example: awk 'BEGIN{print strftime("%Y-%m-%d",systime())}'):

strftime("%Y-%m-%d",$4)
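Note that strftime is a gawk extension, so this requires gawk (or another awk that provides it). Formatting a fixed timestamp, for example:

awk 'BEGIN{print strftime("%Y-%m-%d %H:%M:%S", 1571500000)}'

which prints the timestamp in the local timezone, e.g. 2019-10-19 15:46:40 when TZ is UTC.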

">>"collect.txt输入拼接在文件后面,若无此文件会创建。>则会覆盖

 >>collect.txt
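A quick illustration of the difference, using a throwaway file:

echo first  >  collect.txt    # > truncates (or creates) the file
echo second >> collect.txt    # >> appends, creating the file if needed
cat collect.txt               # prints "first" then "second"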

The complete command is:

curl -g 'http://10.202.150.14:32023/api/v1/query?query=node_load1' \
  | ./jq -c '.data.result[]' \
  | awk -F '"' '{print "cpu\t" $(NF-9) "\ttime\t" $(NF-2) "\t" $(NF-1)}' \
  | awk -F: '{print $1,$2}' \
  | awk -F'[' '{print $1,$2}' \
  | awk -F, '{print $1,$2}' \
  | awk -F '\t' '{print "region:shk1,name:" $2 "," $3 ":" strftime("%Y-%m-%d %H:%M:%S",$4) "," $1 ":" $5}' \
  >> ${directory}/shk1cpu${datetime}.txt
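${directory} and ${datetime} are shell variables that must be defined before the command runs; the post does not show how they are set, but a plausible setup would be:

directory=/data/metrics          # assumed output directory, not from the original post
datetime=$(date +%Y%m%d%H%M%S)   # timestamp used in the output file name
mkdir -p "${directory}"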

Reposted from blog.csdn.net/Carl_wang3333333/article/details/102703171