Setting up a Celery environment

1. Installing Celery

Deploying Celery with RabbitMQ.
Note: if your Python version is 2.6.6, use the matching pip release, pip 9.1.0.

pip install celery   (with luck this step just works; otherwise, download Celery from the official site and install it manually)

On Python 2.6 you also need kombu 3.0.35:

wget https://pypi.python.org/packages/source/k/kombu/kombu-3.0.35.tar.gz#md5=6483ac8ba7109ec606f5cb9bd084b6ef

Extract the archive, cd into the directory, and install with python setup.py install.

A quick aside here on upgrading Python:

1. Download Python:
wget https://www.python.org/ftp/python/2.7.12/Python-2.7.12.tgz
2. Extract:
tar -zxvf Python-2.7.12.tgz
3. cd into Python-2.7.12, then compile and install:
./configure
make all
make install
make clean
make distclean
4. Create a symlink so the system default python points to 2.7:
#mv /usr/bin/python /usr/bin/python2.6.6
#ln -s /usr/local/bin/python2.7 /usr/bin/python

5. After the python symlink points to 2.7, yum stops working because it is not compatible with Python 2.7, so you must pin yum to the old interpreter (note: this is important; skip it and you will get burned):
#vi /usr/bin/yum
Change the shebang at the top of the file from
#!/usr/bin/python
to
#!/usr/bin/python2.6.6

2. Installing RabbitMQ

On CentOS:
yum install -y rabbitmq-server

2.1 Start rabbitmq-server

[root@spurman tmp]# rabbitmq-server

              RabbitMQ 3.5.1. Copyright (C) 2007-2014 GoPivotal, Inc.
  ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
  ##  ##
  ##########  Logs: /var/log/rabbitmq/rabbit@yf-dba-mysql-wdtest01.log
  ######  ##        /var/log/rabbitmq/rabbit@yf-dba-mysql-wdtest01-sasl.log
  ##########
              Starting broker... completed with 0 plugins.
[root@yf-dba-mysql-wdtest01 wangdong30]# ps aux| grep rabbitmq
root     14961  0.0  0.0 225716  8656 pts/0    S+   14:11   0:00 su rabbitmq -s /bin/sh -c /usr/lib/rabbitmq/bin/rabbitmq-server
rabbitmq 14968  2.0  0.7 2399440 101136 ?      Ssl  14:11   0:01 /usr/lib64/erlang/erts-6.3/bin/beam.smp -W w -K true -A30 -P 1048576 -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.5.1/sbin/../ebin -noshell -noinput -s rabbit boot -sname rabbit@yf-dba-mysql-wdtest01 -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/var/log/rabbitmq/[email protected]"} -rabbit sasl_error_logger {file,"/var/log/rabbitmq/[email protected]"} -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/lib/rabbitmq_server-3.5.1/sbin/../plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@yf-dba-mysql-wdtest01-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@yf-dba-mysql-wdtest01" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672
rabbitmq 14983  0.0  0.0  10828   436 ?        S    14:11   0:00 /usr/lib64/erlang/erts-6.3/bin/epmd -daemon
rabbitmq 15054  0.2  0.0  10792   544 ?        Ss   14:11   0:00 inet_gethost 4
rabbitmq 15055  0.0  0.0  11004   472 ?        S    14:11   0:00 inet_gethost 4
root     15127  0.0  0.0 103260   880 pts/1    S+   14:13   0:00 grep rabbitmq

You can also pass the -detached flag to run it in the background (note: a single dash, not two):

rabbitmq-server -detached

Never stop the RabbitMQ server with kill; use rabbitmqctl instead:

rabbitmqctl stop

2.2 Create a RabbitMQ user

rabbitmqctl add_user Username Password

2.3 Grant the user permissions

(1) First, set a user tag:

rabbitmqctl set_user_tags wangdong30 administrator  # grant the administrator role

(2) Set permissions.

set_permissions [-p <vhostpath>] <user> <conf> <write> <read>, where conf, write, and read are regular expressions that grant configure, write, and read access to the resources they match.
rabbitmqctl set_permissions -p / wangdong30 '.*' '.*' '.*'
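The three trailing arguments are regular expressions matched against resource (exchange and queue) names; '.*' grants access to everything. A rough Python sketch of this kind of matching (anchoring is made explicit here, and the resource names are hypothetical):

```python
import re

def can_access(pattern, resource):
    # Anchor the pattern so it must match the whole resource name.
    return re.match("^(?:%s)$" % pattern, resource) is not None

print(can_access(".*", "getServiceInfo"))    # wildcard grants everything
print(can_access("^get.*", "getServiceInfo"))
print(can_access("^get.*", "other_queue"))
```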

(3) Set an environment variable, or the worker will refuse to run as root:

[root@spurman tmp]# celery -A tasks worker --loglevel=info
Running a worker with superuser privileges when the
worker accepts messages serialized with pickle is a very bad idea!

If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).

User information: uid=0 euid=0 gid=0 egid=0

# set the environment variable by adding the following
export C_FORCE_ROOT=1
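The variable only has to be present in the worker's environment before Celery starts; exporting it in the shell as above is the usual way, but a minimal sketch of setting it from Python (before the worker is launched) looks like this:

```python
import os

# C_FORCE_ROOT must be set before the worker process starts;
# setdefault keeps any value already exported in the shell.
os.environ.setdefault("C_FORCE_ROOT", "1")
print(os.environ["C_FORCE_ROOT"])
```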

3. A sample application

tasks.py:

from celery import Celery

app = Celery('tasks', broker='amqp://wangdong30@localhost//')

@app.task
def add(x, y):
    return x + y

The first argument to Celery is the name of the current module; this is needed so that task names can be generated automatically. The second argument, the broker keyword, specifies the URL of the message broker you want to use; here we use RabbitMQ with its default configuration. Other brokers are available, e.g. amqp://localhost for RabbitMQ or redis://localhost for Redis.
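A broker URL follows the usual scheme://user:password@host:port/vhost shape (the trailing // in the examples above denotes the default vhost /). A quick sanity check with the standard library, using a made-up password:

```python
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

# decompose a broker URL into its components
parts = urlparse("amqp://wangdong30:secret@localhost:5672//")
print(parts.scheme)    # amqp
print(parts.hostname)  # localhost
print(parts.port)      # 5672
```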
Once this is defined, start Celery (note: RabbitMQ must be running first, or you will get a connection error).

Command: # celery -A tasks worker --loglevel=info

If you start Celery before RabbitMQ, it reports that it cannot connect; once RabbitMQ is running, the worker starts successfully. Now open a Python shell to test:

>>> from tasks import add
>>> result = add.delay(3,1)
>>> result.ready()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/site-packages/celery-3.1.23-py2.6.egg/celery/result.py", line 259, in ready
    return self.state in self.backend.READY_STATES
  File "/usr/lib/python2.6/site-packages/celery-3.1.23-py2.6.egg/celery/result.py", line 394, in state
    return self._get_task_meta()['status']
  File "/usr/lib/python2.6/site-packages/celery-3.1.23-py2.6.egg/celery/result.py", line 339, in _get_task_meta
    return self._maybe_set_cache(self.backend.get_task_meta(self.id))
  File "/usr/lib/python2.6/site-packages/celery-3.1.23-py2.6.egg/celery/backends/base.py", line 307, in get_task_meta
    meta = self._get_task_meta_for(task_id)
AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# celery worker log
[2018-01-10 15:47:48,058: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2018-01-10 15:47:48,071: INFO/MainProcess] mingle: searching for neighbors
[2018-01-10 15:47:49,079: INFO/MainProcess] mingle: all alone
[2018-01-10 15:47:49,097: WARNING/MainProcess] celery@yf-dba-mysql-wdtest01.yf.sankuai.com ready.
[2018-01-10 16:00:26,403: INFO/MainProcess] Received task: tasks.add[f607ad98-a1f9-48da-82b5-97c70eadd1bf]
[2018-01-10 16:00:26,404: INFO/MainProcess] Task tasks.add[f607ad98-a1f9-48da-82b5-97c70eadd1bf] succeeded in 0.000402551027946s: 4
[2018-01-10 16:01:37,679: INFO/MainProcess] Received task: tasks.add[a42a5318-b457-4d36-8136-8ca217011453]
[2018-01-10 16:01:37,680: INFO/MainProcess] Task tasks.add[a42a5318-b457-4d36-8136-8ca217011453] succeeded in 0.000399373995606s: 4

The AttributeError above comes from the default DisabledBackend: without a result backend configured, result.ready() has no store to query. Modify the application to set a backend:
app = Celery('tasks', broker='amqp://wangdong30@localhost//', backend='amqp')

[root@yf-dba-mysql-wdtest01 tmp]# python
Python 2.6.6 (r266:84292, Aug 18 2016, 15:13:37)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from tasks import add
>>> result= add.delay(2,1)
>>> result.ready()
True
>>> result.get()
3
~~~~~~~~~~~~~~~~~~~~~~~~~~~
# celery worker log
[2018-01-10 16:05:17,540: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2018-01-10 16:05:17,552: INFO/MainProcess] mingle: searching for neighbors
[2018-01-10 16:05:18,561: INFO/MainProcess] mingle: all alone
[2018-01-10 16:05:18,572: WARNING/MainProcess] celery@yf-dba-mysql-wdtest01.yf.sankuai.com ready.
[2018-01-10 16:06:12,766: INFO/MainProcess] Received task: tasks.add[f2ec0506-c08b-4ee1-b510-dba8798f9791]
[2018-01-10 16:06:12,789: INFO/MainProcess] Task tasks.add[f2ec0506-c08b-4ee1-b510-dba8798f9791] succeeded in 0.0205878900015s: 3

4. Celery configuration file and tasks

In your Python site-packages directory (usually /usr/lib/python2.6/site-packages), create a folder proj and, inside it, three files: __init__.py, which marks proj as a Python package; celery.py; and tasks.py. celery.py contains the following:

celery.py:
#_*_ coding:utf-8 _*_
from __future__ import absolute_import
from celery import Celery

app = Celery("proj",
             broker="amqp://xxxx:xxxx@localhost//",
             backend="amqp",
             include=["proj.tasks"])

app.conf.update(
    CELERY_ROUTES={
        "proj.tasks.getServiceInfo": {"queue": "getServiceInfo"},
    }
)

if __name__ == "__main__":
    app.start()

Here we define the module name proj plus a Celery route. The remaining file is tasks.py; before showing it, a note on calling tasks:

task.apply_async(args[, kwargs[, …]])

where args and kwargs are the arguments the task receives; apply_async also accepts extra options that control how the task is executed.

There are three ways to execute a task in Celery:
1. delay, the simplest and most convenient (used in the test in section 3; it can be seen as a shortcut for apply_async);
2. apply_async, which takes extra arguments to control how the task runs;
3. app.send_task, which can execute tasks that are not registered with Celery.
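The relationship between delay and apply_async can be sketched with a stand-in object (this is not the real Celery task class, just an illustration of how delay forwards its arguments):

```python
class FakeTask(object):
    """Toy stand-in: delay(*args, **kwargs) is sugar for apply_async(args, kwargs)."""

    def apply_async(self, args=None, kwargs=None):
        args = tuple(args or ())
        kwargs = kwargs or {}
        # a real task would serialize this and publish it to the broker
        return ("queued", args, kwargs)

    def delay(self, *args, **kwargs):
        # positional and keyword arguments are packed up and forwarded
        return self.apply_async(args, kwargs)

t = FakeTask()
print(t.delay(3, 1) == t.apply_async((3, 1)))  # True
```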

tasks.py:

#_*_ coding:utf-8 _*_
from __future__ import absolute_import
from proj.celery import app
import random
import simplejson as json
import types
import time
import MySQLdb
import urllib2
import ConfigParser as cparser
import hmac
import hashlib
import base64
import sys
import MessageLib  # project-specific helper module, not shown here
reload(sys)

@app.task
def getServiceInfo(contentInfo):
    contentInfo = json.loads(contentInfo)
    serviceGroupName = contentInfo['serviceGroupName']
    dbHost = contentInfo['dbHost']
    dbPort = int(contentInfo['dbPort'])
    dbUser = contentInfo['dbUser']
    dbPasswd = contentInfo['dbPasswd']
    msgLib = MessageLib.MessageLib()
    getServiceGroupHostSql = "your SQL"

    # connect to the database and run the query logic
    #return [getServiceGroupHostSql]
    try:
        db_connection = MySQLdb.connect(host=dbHost, port=dbPort, passwd=dbPasswd, db="cmdb", user=dbUser, connect_timeout=2, charset="utf8")
        cursor = db_connection.cursor()
        cursor.execute(getServiceGroupHostSql)
        row = cursor.fetchall()
        result = []
        for line in row:
            tempMysqlHighInfo = {}
            tempMysqlHighInfo['id'] = line[0]
            tempMysqlHighInfo['name'] = line[1]
            tempMysqlHighInfo['user_id'] = line[2]
            result.append(tempMysqlHighInfo)

        resultInfo = msgLib.success_info(result)
        print resultInfo
        return resultInfo
    except Exception, e:
        errorInfo = "dbhost:%s, port:%s, error:%s" % (dbHost, dbPort, str(e))
        #return getServiceGroupHostSql,errorInfo
        return msgLib.error_info(-1, errorInfo, {})
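The row-handling loop in getServiceInfo can be exercised without a database by substituting sample tuples for cursor.fetchall(); the column values below are invented:

```python
rows = [(1, "hostA", 101), (2, "hostB", 102)]  # stand-in for cursor.fetchall()

result = []
for line in rows:
    tempMysqlHighInfo = {}
    tempMysqlHighInfo['id'] = line[0]
    tempMysqlHighInfo['name'] = line[1]
    tempMysqlHighInfo['user_id'] = line[2]
    result.append(tempMysqlHighInfo)  # collect one dict per row

print(result[0]['name'])  # hostA
print(len(result))        # 2
```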

Once everything is in place, start the Celery worker:

celery -A proj worker -Q getServiceInfo -l debug -c 6


Originally published at blog.csdn.net/spur_man/article/details/80268818