Python script | Automatically importing host information into JumpServer after hosts are added on the cloud platform

I have used JumpServer for several years; earlier versions did not have a web interface. Recently I rebuilt JumpServer at my new company, but importing hosts by hand was tedious, so I studied the corresponding JumpServer API and wrote scripts to import the host information.

Because new hosts keep being added, the script pulls the latest list of cloud hosts every day and automatically imports it into the host list. Since nodes are created from system names, a host belonging to a new system may have no corresponding node yet, so the script checks whether the node exists and creates it automatically if it does not.

The scripts in this article may be incomplete or not fit your scenario exactly. The full scripts can be obtained by private message in the background.

Step 1: Authentication

The first step in calling the JumpServer API is to complete authentication.

The official documentation provides some reference material.

The process is roughly three steps: get a token, build the header information, then call the interface.

Here is an example that views node information.

# Replace the IP and account information in the code with your own.

import csv
import json
import logging
import time

import pandas as pd
import requests


class Jumpserver():
    def get_token(self):
        # Authenticate and return a bearer token
        url = 'http://ip/api/v1/authentication/auth/'
        query_args = {
            "username": "admin",
            "password": "admin",
        }
        requests.packages.urllib3.disable_warnings()
        response = requests.post(url, data=query_args, verify=False)
        return json.loads(response.text)['token']

    def header_info(self):
        # Build the Authorization header used by all subsequent requests
        token = self.get_token()
        header_info = {"Authorization": 'Bearer ' + token}
        return header_info

    def get_node_list(self):
        # Fetch all nodes and return them as a DataFrame
        req = requests.get('http://192.168.200.6/api/v1/assets/nodes/',
                           headers=self.header_info(), verify=False)
        node_list_json = json.loads(req.content.decode())
        node_list_df = pd.DataFrame.from_records(
            node_list_json, columns=["id", "name", "full_value", "key"])
        # Strip spaces so node paths can be compared reliably
        node_list_df["full_value"] = node_list_df["full_value"].str.replace(" ", "")
        return node_list_df


if __name__ == '__main__':
    a = Jumpserver()
    b = a.get_node_list()
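One thing to note: header_info() requests a fresh token on every call. If that becomes a bottleneck, the token can be cached on the instance. A minimal sketch (the _token attribute is my own addition; tokens expire, so a production version would also re-authenticate on a 401 response):

    def header_info(self):
        # Cache the token on the instance so we authenticate only once
        # (tokens expire, so re-authenticate if a request returns 401)
        if not hasattr(self, '_token'):
            self._token = self.get_token()
        return {"Authorization": 'Bearer ' + self._token}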

Step 2: Obtain the node ID and create new nodes

Our end goal is to create new host entries and insert them under existing nodes rather than the default one, so I checked the corresponding parameter requirements.

You can check the specific APIs at http://ip/api/docs

The API for adding new host entries is the assets API.

Looking at the documentation, the required parameters are: hostname, ip, and platform.

The node field mentioned earlier requires the node's specific UUID; you cannot just splice the node path together yourself.

Therefore, before inserting a new entry, we need to look up the node of the corresponding system; if it does not exist, we create it, and in either case we return the node ID. The function looks like this:

    def get_nodeid_by_fullpath(self, fullpath):
        # Look up a node's id (uuid) by its full path (the full_value field).
        # This relies on the get_node_list() method shown above.
        node_list = self.get_node_list()
        node_id = node_list["full_value"] == fullpath
        if node_id.any():
            return node_list[node_id]["id"].str.cat()
        else:
            print('Node does not exist yet:')
            print(fullpath)
            # Parse the environment and node name out of the path
            nodeid = fullpath.split('/')[3]
            env = fullpath.split('/')[2]
            print(nodeid, env)
            # For a missing node, call the node-creation function (see below)
            self.create_inode(nodeid, env)
            # Re-fetch the node list so it includes the new node
            node_list = self.get_node_list()
            node_id = node_list["full_value"] == fullpath
            return node_list[node_id]["id"].str.cat()
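For example (the path '/Default/prod/sys1' here is made up for illustration; it must be a full_value that exists, or one the function can create):

    jp = Jumpserver()
    node_id = jp.get_nodeid_by_fullpath('/Default/prod/sys1')
    print(node_id)  # the node's uuid; the node is created first if missing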

The function that creates a new node:

    def create_inode(self, nodename, env):
        # Add a node under the default node.
        # The uuid in the url below is the default node's uuid,
        # because my new nodes are created under it.
        url = 'http://ip/api/v1/assets/nodes/4c713b57-372e-4cac-994d-25ee413a46e3/children/'
        nodesData = {
            "value": nodename,
        }
        print(url)
        nodesreq = requests.post(url, headers=self.header_info(), data=nodesData, verify=False)
        nodesreq = json.loads(nodesreq.text)
        print(nodesreq)
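Rather than hardcoding the default node's uuid, it could also be looked up with the get_node_list() helper from step 1. A minimal sketch, assuming the root node's full_value is '/Default':

    def get_default_node_id(self):
        # Look up the uuid of the Default node instead of hardcoding it
        # (assumes its full_value is '/Default')
        node_list = self.get_node_list()
        match = node_list["full_value"] == '/Default'
        return node_list[match]["id"].str.cat()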

Step 3: Create new host information

After getting the node ID, you can assemble the payload according to the fields the interface requires and insert the data.

Because the SSH port on my side is not the default 22, I also need to specify the protocol and port.

In addition, the privileged user is fixed, so it is specified as well; its value can be obtained from the exported data.
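If you would rather look the privileged user up by name than read it from an export, something like the following may work. The /api/v1/assets/admin-users/ endpoint and response shape are assumptions from my JumpServer version, so confirm them under http://ip/api/docs:

    def get_admin_user_id(self, name):
        # Look up a privileged (admin) user's uuid by name
        # (endpoint path is an assumption -- verify it in /api/docs)
        req = requests.get('http://ip/api/v1/assets/admin-users/',
                           headers=self.header_info(), verify=False)
        for user in json.loads(req.text):
            if user.get('name') == name:
                return user.get('id')
        return None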

The following function imports the data. It takes one row read from the csv file as a list; because the csv reuses JumpServer's own import template, there are many fields.

    def create_asset(self, n):
        # n is one row of the csv; n[27] is the node column, e.g. "['/Default/sys']"
        node_id = n[27]
        # Strip the brackets and quotes to get the bare node path
        node_id = node_id.split("'")[1]
        # Resolve the path to a node uuid (creating the node if needed)
        node_id = self.get_nodeid_by_fullpath(node_id)

        data = {
            "ip": n[1],
            "hostname": n[0],
            "protocols": ['ssh/2922'],
            "is_active": 'True',
            "admin_user": n[23],
            "domain": n[24],
            "platform": 'Linux',
            "nodes": node_id,
        }
        print(n)

        req = requests.post('http://ip/api/v1/assets/assets/',
                            headers=self.header_info(), data=data, verify=False)
        print(req.text)
        print(req.status_code)
        if req.status_code == 201:
            logging.info("%s add success" % node_id)
            print("%s add success" % node_id)
        else:
            logging.info("%s fail" % node_id)
            print("%s fail" % node_id)
    # Read the csv file and import each row
    def readcsv(self):
        data = list(csv.reader(open('test.csv', 'r')))
        print(len(data))
        # Start from 1 to skip the header row
        for su in range(1, len(data)):
            self.create_asset(data[su])
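To run the whole import, the entry point can simply call readcsv() instead of the node-listing test shown in step 1:

    if __name__ == '__main__':
        jp = Jumpserver()
        jp.readcsv()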

Step 4: Generate the csv file

The scenario in this fourth step varies: some data is being imported for the first time, some is newly added.

Our situation is that new data appears from time to time, and that data lives on the cloud platform.

So we need to fetch the latest data from the cloud platform regularly, decide whether each entry is new based on its creation time, and then generate the corresponding csv file. Because the file reuses the JumpServer import template, it has many columns.

This part only provides a reference script for generating the csv; adjust the generation to your own needs.
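The generation script below does not include the creation-time check mentioned above. A minimal sketch of that filter, assuming the cloud platform returns a creation timestamp per host (the field name createTime and its format are my assumptions, not the platform's documented API):

    from datetime import datetime, timedelta

    def is_new_host(item, days=1):
        # Treat a host as new if it was created within the last `days` days
        # (adjust the field name and format to your cloud platform)
        created = datetime.strptime(item['createTime'], '%Y-%m-%d %H:%M:%S')
        return datetime.now() - created <= timedelta(days=days)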

    # `result` below is the dictionary of host data pulled from the cloud platform
    l1 = []
    for i in range(len(result['body']['content'])):
        host = result['body']['content'][i]
        # Only hostname, IP, platform, protocol, privileged user and node
        # are filled in; the remaining template columns stay empty
        values = [host['name'],
                  host['portDetail'][0]['privateIp'],
                  "Linux", "['ssh/29022']",
                  "", "", "", "", "", "", "",
                  "", "", "", "", "", "", "",
                  "", "", "", "", "", "", "",
                  "60139add-4bee-43b8-8f1a-cb463ebe2fc5",  # privileged user uuid
                  "",
                  "['/Default/sys']",  # node path
                  ""]
        l1.append(values)

    # Column headers of the JumpServer import template (kept in Chinese to match it)
    name1 = ["*主机名", "*IP", "*系统平台", "协议组", "协议", "端口", "激活", "公网IP",
             "资产编号", "备注", "制造商", "型号", "序列号", "CPU型号", "CPU数量",
             "CPU核数", "CPU总数", "内存", "硬盘大小", "硬盘信息", "操作系统",
             "系统版本", "系统架构", "主机名原始", "网域", "特权用户", "节点",
             "节点名称", "标签管理"]
    test = pd.DataFrame(columns=name1, data=l1)
    fn = "jp_" + time.strftime("%Y%m%d%H%M") + ".csv"
    test.to_csv(fn, encoding='gbk', index=False)
    # If the csv looks right, you can call readcsv() here directly
    # to complete the automatic import.

Summary

The background for this work was that we had newly built a JumpServer, but the number of hosts was huge and we did not want to add them manually, so we spent some time researching the API.

Now, when a new host is ordered on the cloud platform, it is synced automatically, and you can log in to it directly through the jump server.

Some parts have not been covered here: we also split hosts across different environments, and each environment has its own privileged user, so privileged users, node names, and so on need to be distinguished accordingly.
