Ansible extension

Introduction to Ansible

Ansible is an operations and maintenance tool written in Python. My work regularly brings me into contact with Ansible, and things often need to be integrated into it, so my understanding of Ansible keeps growing.

So what exactly is Ansible? In my understanding: normally, to get something done you log into a server and run a series of commands. Ansible runs those commands for us. It can also control multiple machines at once, and orchestrate and execute tasks across them; in Ansible this orchestration is called a playbook.

So how does Ansible do it? Put simply, Ansible generates a script for the command we want to execute, uploads that script to the server over SFTP, then runs it over SSH and returns the execution result.
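The three steps can be sketched in Python. This is a simplified illustration rather than Ansible's actual code; the function name and paths are invented for the example:

```python
import shlex

def build_remote_command(interpreter, script_path, env=None):
    """Build the command line that would be run over SSH: optional
    environment assignments, then interpreter plus script path."""
    env = env or {}
    prefix = ' '.join('%s=%s' % (k, shlex.quote(v))
                      for k, v in sorted(env.items()))
    cmd = '%s %s' % (shlex.quote(interpreter), shlex.quote(script_path))
    return ('%s %s' % (prefix, cmd)).strip()

# The overall flow (upload over SFTP, execution over SSH):
#   1. module + arguments  ->  generated script
#   2. upload the script to a temporary path on the server
#   3. run build_remote_command(python, remote_tmp_path) and collect output
print(build_remote_command('/usr/bin/python', '/tmp/ansible-tmp/ping.py',
                           env={'LANG': 'C'}))
```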

So how does Ansible accomplish this? Let's look at how Ansible completes the execution of a module, starting from its modules and plugins.

PS: The following analysis comes from reading the source code after gaining some hands-on experience with Ansible, so this article assumes a basic understanding of Ansible concepts such as inventory, modules, and playbooks.

Ansible modules

A module is the smallest unit of execution in Ansible and can be written in Python, Shell, or other languages. A module defines the concrete operation steps and the parameters needed in actual use.
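For instance, a module does not even have to be Python built on Ansible's helper library: any script that reads an arguments file and prints JSON can serve as a module (this is the "non_native_want_json" style that appears in the code later). A minimal sketch:

```python
import json
import sys

def run_module(params):
    """Core module logic: echo a message back, like a tiny 'ping'."""
    msg = params.get('msg', 'hello')
    return {'changed': False, 'msg': msg}

# For this style, Ansible passes the path of a JSON arguments file as
# argv[1]; the module prints its JSON result on stdout.
if __name__ == '__main__' and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        params = json.load(f)
    print(json.dumps(run_module(params)))
```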

The script that is actually executed on the target is generated from the module.

So how does Ansible upload this script to the server and then execute it to get the result?

Ansible plugins

Connection plugin: connects to the specified server according to the given SSH parameters, and provides the interface for actually executing commands.

Shell plugin: generates the command to be executed by the connection plugin, according to the shell type (e.g. sh).

Strategy plugin: the execution strategy plugin. The default is linear, i.e. tasks are executed one after another, top to bottom. This plugin hands tasks to the executor for execution.

Action plugin: essentially the controller-side logic of every task module. If a module has no specially written action plugin, normal or async is used by default (chosen according to whether the module is async or not). These two plugins define the execution steps of a module: for example, create a temporary file locally, upload it, execute the script, and delete the script. If you want to add special steps to all modules, you can do so by extending the action plugins.
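The idea of wrapping extra steps around module execution can be sketched with a toy action plugin. In real code you would subclass ansible.plugins.action.ActionBase; here a stand-in base class keeps the sketch self-contained and runnable:

```python
class FakeActionBase(object):
    """Stand-in for ansible.plugins.action.ActionBase, assumed here so the
    sketch is self-contained; the real base class provides _execute_module()."""

    def _execute_module(self, tmp=None, task_vars=None):
        # The real implementation generates, uploads, and runs the module script.
        return {'changed': False, 'msg': 'module ran'}


class ActionModule(FakeActionBase):
    """A toy action plugin that wraps module execution with extra steps:
    the same pattern the NFS mount/umount extension below relies on."""

    def run(self, tmp=None, task_vars=None):
        steps = ['before']                      # e.g. mount an NFS share
        result = self._execute_module(tmp=tmp, task_vars=task_vars)
        steps.append('after')                   # e.g. unmount the NFS share
        result['steps'] = steps
        return result
```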

Ansible execute module process

The task's module is loaded via module_loader, and execution is driven by the run() of the strategy plugin from strategy_loader (the default strategy is linear, i.e. linear execution), which runs all tasks in order; executing one module may involve multiple tasks. While running, the strategy plugin checks the action type: a meta task is executed separately (it is not a concrete Ansible module), while other tasks are placed into the queue via _queue_task, where a WorkerProcess picks them up. When the WorkerProcess actually runs, it executes the task with a TaskExecutor. The TaskExecutor sets up the connection plugin and obtains the action plugin according to the task type (module, include, etc.), i.e. the plugin corresponding to the module. If the module has a custom action plugin, that custom action is executed; if not, normal or async is used, chosen according to the task's async attribute. The action plugin defines the order of execution and the concrete operations, such as creating a temporary directory and generating a temporary script, and each action step is executed through the connection plugin. So, to integrate extra processing for all modules in a uniform way, you can override the action plugin's methods.

Extending an Ansible instance

Our actual need is to extend the Python environment on the execution nodes: some of the Ansible modules we extend depend on third-party libraries, but managing the installation of those libraries on every node is not easy. Since executing a module essentially means running the generated script in the node's Python environment, the solution we adopted is to specify the Python environment on each node and share a single Python environment on the local network via NFS. By extending the action plugin, the NFS share is mounted on the node before execution and unmounted after execution completes. The specific implementation steps are as follows:

Extension code:

Override the _execute_module method of ActionBase

# _execute_module

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import json
import pipes

from ansible.compat.six import text_type, iteritems

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.release import __version__

try:
    from __main__ import display
except ImportError:
    from ansible.utils.display import Display
    display = Display()


class MagicStackBase(object):

    def _mount_nfs(self, ansible_nfs_src, ansible_nfs_dest):
        cmd = ['mount',ansible_nfs_src, ansible_nfs_dest]
        cmd = [pipes.quote(c) for c in cmd]
        cmd = ' '.join(cmd)
        result = self._low_level_execute_command(cmd=cmd, sudoable=True)
        return result

    def _umount_nfs(self, ansible_nfs_dest):
        cmd = ['umount', ansible_nfs_dest]
        cmd = [pipes.quote(c) for c in cmd]
        cmd = ' '.join(cmd)
        result = self._low_level_execute_command(cmd=cmd, sudoable=True)
        return result

    def _execute_module(self, module_name=None, module_args=None, tmp=None, task_vars=None, persist_files=False, delete_remote_tmp=True):
        '''
        Transfer and run a module along with its arguments.
        '''

        # display.v(task_vars)

        if task_vars is None:
            task_vars = dict()

        # if a module name was not specified for this execution, use
        # the action from the task
        if module_name is None:
            module_name = self._task.action
        if module_args is None:
            module_args = self._task.args

        # set check mode in the module arguments, if required
        if self._play_context.check_mode:
            if not self._supports_check_mode:
                raise AnsibleError("check mode is not supported for this operation")
            module_args['_ansible_check_mode'] = True
        else:
            module_args['_ansible_check_mode'] = False

        # Get the connection user for permission checks
        remote_user = task_vars.get('ansible_ssh_user') or self._play_context.remote_user

        # set no log in the module arguments, if required
        module_args['_ansible_no_log'] = self._play_context.no_log or C.DEFAULT_NO_TARGET_SYSLOG

        # set debug in the module arguments, if required
        module_args['_ansible_debug'] = C.DEFAULT_DEBUG

        # let module know we are in diff mode
        module_args['_ansible_diff'] = self._play_context.diff

        # let module know our verbosity
        module_args['_ansible_verbosity'] = display.verbosity

        # give the module information about the ansible version
        module_args['_ansible_version'] = __version__

        # set the syslog facility to be used in the module
        module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)

        # let module know about filesystems that selinux treats specially
        module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS

        (module_style, shebang, module_data) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
        if not shebang:
            raise AnsibleError("module (%s) is missing interpreter line" % module_name)

        # get nfs info for mount python packages
        ansible_nfs_src = task_vars.get("ansible_nfs_src", None)
        ansible_nfs_dest = task_vars.get("ansible_nfs_dest", None)

        # a remote tmp path may be necessary and not already created
        remote_module_path = None
        args_file_path = None
        if not tmp and self._late_needs_tmp_path(tmp, module_style):
            tmp = self._make_tmp_path(remote_user)

        if tmp:
            remote_module_filename = self._connection._shell.get_remote_filename(module_name)
            remote_module_path = self._connection._shell.join_path(tmp, remote_module_filename)
            if module_style in ['old', 'non_native_want_json']:
                # we'll also need a temp file to hold our module arguments
                args_file_path = self._connection._shell.join_path(tmp, 'args')

        if remote_module_path or module_style != 'new':
            display.debug("transferring module to remote")
            self._transfer_data(remote_module_path, module_data)
            if module_style == 'old':
                # we need to dump the module args to a k=v string in a file on
                # the remote system, which can be read and parsed by the module
                args_data = ""
                for k,v in iteritems(module_args):
                    args_data += '%s=%s ' % (k, pipes.quote(text_type(v)))
                self._transfer_data(args_file_path, args_data)
            elif module_style == 'non_native_want_json':
                self._transfer_data(args_file_path, json.dumps(module_args))
            display.debug("done transferring module to remote")

        environment_string = self._compute_environment_string()

        remote_files = None

        if args_file_path:
            remote_files = tmp, remote_module_path, args_file_path
        elif remote_module_path:
            remote_files = tmp, remote_module_path

        # Fix permissions of the tmp path and tmp files.  This should be
        # called after all files have been transferred.
        if remote_files:
            self._fixup_perms2(remote_files, remote_user)


        # mount nfs
        if ansible_nfs_src and ansible_nfs_dest:
            result = self._mount_nfs(ansible_nfs_src, ansible_nfs_dest)
            if result['rc'] != 0:
                raise AnsibleError("mount nfs failed!!! {0}".format(result['stderr']))

        cmd = ""
        in_data = None

        if self._connection.has_pipelining and self._play_context.pipelining and not C.DEFAULT_KEEP_REMOTE_FILES and module_style == 'new':
            in_data = module_data
        else:
            if remote_module_path:
                cmd = remote_module_path

        rm_tmp = None
        if tmp and "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
            if not self._play_context.become or self._play_context.become_user == 'root':
                # not sudoing or sudoing to root, so can cleanup files in the same step
                rm_tmp = tmp

        cmd = self._connection._shell.build_module_command(environment_string, shebang, cmd, arg_path=args_file_path, rm_tmp=rm_tmp)
        cmd = cmd.strip()
        sudoable = True
        if module_name == "accelerate":
            # always run the accelerate module as the user
            # specified in the play, not the sudo_user
            sudoable = False


        res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data)

        # umount nfs
        if ansible_nfs_src and ansible_nfs_dest:
            result = self._umount_nfs(ansible_nfs_dest)
            if result['rc'] != 0:
                raise AnsibleError("umount nfs failed!!! {0}".format(result['stderr']))

        if tmp and "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
            if self._play_context.become and self._play_context.become_user != 'root':
                # not sudoing to root, so maybe can't delete files as that other user
                # have to clean up temp files as original user in a second step
                tmp_rm_cmd = self._connection._shell.remove(tmp, recurse=True)
                tmp_rm_res = self._low_level_execute_command(tmp_rm_cmd, sudoable=False)
                tmp_rm_data = self._parse_returned_data(tmp_rm_res)
                if tmp_rm_data.get('rc', 0) != 0:
                    display.warning('Error deleting remote temporary files (rc: {0}, stderr: {1})'.format(tmp_rm_res.get('rc'), tmp_rm_res.get('stderr', 'No error string available.')))

        # parse the main result
        data = self._parse_returned_data(res)

        # pre-split stdout into lines, if stdout is in the data and there
        # isn't already a stdout_lines value there
        if 'stdout' in data and 'stdout_lines' not in data:
            data['stdout_lines'] = data.get('stdout', u'').splitlines()

        display.debug("done with _execute_module (%s, %s)" % (module_name, module_args))
        return data

Integrate the mixin into normal.py and async.py (remember to configure these two plugins in ansible.cfg):

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.plugins.action import ActionBase
from ansible.utils.vars import merge_hash

from common.ansible_plugins import MagicStackBase


class ActionModule(MagicStackBase, ActionBase):

    def run(self, tmp=None, task_vars=None):
        if task_vars is None:
            task_vars = dict()

        results = super(ActionModule, self).run(tmp, task_vars)
        # remove as modules might hide due to nolog
        del results['invocation']['module_args']
        results = merge_hash(results, self._execute_module(tmp=tmp, task_vars=task_vars))
        # Remove special fields from the result, which can only be set
        # internally by the executor engine. We do this only here in
        # the 'normal' action, as other action plugins may set this.
        #
        # We don't want modules to determine that running the module fires
        # notify handlers.  That's for the playbook to decide.
        for field in ('_ansible_notify',):
            if field in results:
                results.pop(field)

        return results
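As noted above, these two plugins must be configured in ansible.cfg so that Ansible loads them in place of the built-in normal and async actions. A minimal example, where the plugin directory path is an assumption for illustration:

```ini
# ansible.cfg
[defaults]
# directory containing the extended normal.py and async.py action plugins
action_plugins = /etc/ansible/plugins/action
```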

Configure ansible.cfg so that the extended plugins are used as Ansible's action plugins. The key points of the overridden method are that _execute_module needs the Python environment specified explicitly (via ansible_python_interpreter), and that the NFS mount and unmount parameters are passed in as extra variables:

ansible 51 -m mysql_db -a "state=dump name=all target=/tmp/test.sql" -i hosts -u root -v -e "ansible_nfs_src=172.16.30.170:/web/proxy_env/lib64/python2.7/site-packages ansible_nfs_dest=/root/.pyenv/versions/2.7.10/lib/python2.7/site-packages ansible_python_interpreter=/root/.pyenv/versions/2.7.10/bin/python"

 
