TensorFlow 1.13 Distributed Training: Reference Notes on the Tutorial and Principles

Preface

When the amount of data is large, distributed training can speed things up. Compared with single-machine single-GPU or single-machine multi-GPU training, where `with tf.device('/gpu:0')` is all you need to assign computation to a GPU, distributed training involves coordinating work across several machines and is therefore more complicated. This article gives a brief introduction to distributed TensorFlow training across multiple machines (whether each machine has one GPU or several does not matter here).

Distributed training differs from single-machine training in two main ways: 1. how training is started; 2. how the work is divided during training. The two sections below cover these in turn.

1. Finding Each Other

For single-machine training, one script is enough to tell the machine "start training now". For distributed training, however, several machines have to communicate with each other, so they first need to "get acquainted". Each machine is handed a "roster" that tells it where to find the others. That roster is the so-called ClusterSpec, and "going to find the others" simply means the script has to be run once on every machine.

Let's look at an example. Suppose we use two ports on the local machine, "localhost:2222" and "localhost:2223", to simulate two machines in a cluster; each machine's job is simply to print a sentence. We write two scripts. The first one looks like this:

import tensorflow as tf

# What each machine does (to keep things simple we just print instead of training)
c = tf.constant("Hello from server1")

# The cluster "roster"
cluster = tf.train.ClusterSpec({
    "local": ["localhost:2222", "localhost:2223"]})
# Declare the server and tell this machine which name in the roster is its own
server = tf.train.Server(cluster, job_name="local", task_index=0)
# Open a session in server mode
sess = tf.Session(server.target, config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))
server.join()

The second script looks like this:

import tensorflow as tf

# What each machine does (to keep things simple we just print instead of training)
c = tf.constant("Hello from server2")

# The cluster "roster"
cluster = tf.train.ClusterSpec({
    "local": ["localhost:2222", "localhost:2223"]})
# Declare the server and tell this machine which name in the roster is its own
server = tf.train.Server(cluster, job_name="local", task_index=1)
# Open a session in server mode
sess = tf.Session(server.target, config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))
server.join()

Let's briefly walk through these scripts. The two are almost identical; both hold the same "roster", namely:

# Declare the cluster "roster"
cluster = tf.train.ClusterSpec({
    "local": ["localhost:2222", "localhost:2223"]})

The only difference is that each script passes a different index when creating its Server, which tells it which name in the roster is its own. Under the hood, every machine starts a server, and the servers use the roster to communicate with one another.

# Server in the first script
server = tf.train.Server(cluster, job_name="local", task_index=0)
# Server in the second script
server = tf.train.Server(cluster, job_name="local", task_index=1)
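
Since the two scripts differ only in the constant they print and in task_index, in practice you can keep a single copy of the script and pass the index in on the command line. A minimal sketch of that idea (the file name server.py and the argument handling are just an illustration, not part of the original example):

import sys
import tensorflow as tf

task_index = int(sys.argv[1])    # 0 on the first machine, 1 on the second

# What each machine does
c = tf.constant("Hello from server%d" % (task_index + 1))

# The same cluster "roster" on every machine
cluster = tf.train.ClusterSpec({
    "local": ["localhost:2222", "localhost:2223"]})
server = tf.train.Server(cluster, job_name="local", task_index=task_index)

sess = tf.Session(server.target, config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))
server.join()

# Run it as:
#   python3 server.py 0    (first terminal)
#   python3 server.py 1    (second terminal)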

We now have the two scripts (in a real multi-machine setup they would live on different machines, but since this example simulates two machines with two ports on one host, they can sit in the same directory). Let's bring this "cluster" up! First open a terminal window and run the first script from that directory:

# Run the first machine (terminal window)
$ python3 server1.py

# Output
# ... N lines omitted ...
2020-04-24 14:58:58.841179: I tensorflow/core/distributed_runtime/master.cc:268] CreateSession still waiting for response from worker: /job:local/replica:0/task:1
2020-04-24 14:59:08.844255: I tensorflow/core/distributed_runtime/master.cc:268] CreateSession still waiting for response from worker: /job:local/replica:0/task:1
2020-04-24 14:59:18.847998: I tensorflow/core/distributed_runtime/master.cc:268] CreateSession still waiting for response from worker: /job:local/replica:0/task:1
2020-04-24 14:59:28.852471: I tensorflow/core/distributed_runtime/master.cc:268] CreateSession still waiting for response from worker: /job:local/replica:0/task:1
2020-04-24 14:59:38.852649: I tensorflow/core/distributed_runtime/master.cc:268] CreateSession still waiting for response from worker: /job:local/replica:0/task:1
2020-04-24 14:59:48.856933: I tensorflow/core/distributed_runtime/master.cc:268] CreateSession still waiting for response from worker: /job:local/replica:0/task:1

Ignoring the WARNING lines, the terminal keeps printing CreateSession still waiting for response from worker, which means this server is waiting for the other machine in the cluster; after all, we have not brought the second machine up yet. Now open another terminal window (standing in for the other machine) and start the second script from the same directory:

# Run the second machine (terminal window)
$ python3 server2.py

# Output
# ... N lines omitted ...
Const: (Const): /job:local/replica:0/task:0/device:CPU:0
2020-04-24 15:02:27.653508: I tensorflow/core/common_runtime/placer.cc:54] Const: (Const): /job:local/replica:0/task:0/device:CPU:0
b'Hello from server2'

As soon as the second script starts running, all (both) machines in the cluster are present and the work begins. The second machine prints b'Hello from server2'. At the same moment the first machine also gets to work:

# Output of the first machine after the second machine (terminal window) joins the cluster
Const: (Const): /job:local/replica:0/task:0/device:CPU:0
2020-04-24 15:02:28.732132: I tensorflow/core/common_runtime/placer.cc:54] Const: (Const): /job:local/replica:0/task:0/device:CPU:0
b'Hello from server1'
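
The device strings in these logs, such as /job:local/replica:0/task:0/device:CPU:0, spell out the job, task, and device on which each op was placed. If you want to pin an op to a particular machine in the cluster, you can name its task explicitly with tf.device. A minimal sketch (a variation on the first script, not part of the original example):

import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "local": ["localhost:2222", "localhost:2223"]})
server = tf.train.Server(cluster, job_name="local", task_index=0)

# Pin the op to the second machine (task 1) instead of letting the placer decide;
# the placement log will then show .../task:1/... for this op.
with tf.device("/job:local/task:1"):
    c = tf.constant("Hello from task 1")

sess = tf.Session(server.target, config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))    # completes only once task 1's server is also running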

To sum up, distributed training works like this: first, every machine has a copy of the script; second, every machine is given the same "roster", i.e. the ClusterSpec; third, the script is run on each machine to start its server; after that, the machines can communicate with one another.
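
When the tasks really do live on different machines, the only thing that changes is the host list: replace the localhost ports with each machine's reachable ip:port, give every machine the same list, and start each server with its own task_index. A minimal sketch with made-up addresses (192.168.1.101/102 are placeholders, not from the original example):

import tensorflow as tf

# The same roster is used on every machine in the cluster
cluster = tf.train.ClusterSpec({
    "local": ["192.168.1.101:2222", "192.168.1.102:2222"]})

# On the first machine start with task_index=0, on the second with task_index=1
server = tf.train.Server(cluster, job_name="local", task_index=0)
server.join()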

2. Working Together

Section 1 showed how the machines in a cluster recognize each other and start working together. This section looks at how they divide the work and cooperate to complete training. In the example above, the roster for the two machines was declared with a ClusterSpec, and there was no real division of roles; each machine simply printed a sentence.

tf.train.ClusterSpec({
    "local": ["localhost:2222", "localhost:2223"]})

A real training job is more involved: each machine is assigned a different role, usually split into ps machines and worker machines. The ps machines store the network parameters, aggregate the gradients, and apply the parameter updates, while the worker machines run the forward pass and compute the gradients in the backward pass. In that case the ClusterSpec is created like this:

# Machines are usually split into ps and worker, but the split can be adapted to the situation;
# what matters is that the code spells out what each kind of machine has to do.
tf.train.ClusterSpec({
    "ps": ["localhost:2222"],    # machines that store and update the parameters
    "worker": ["localhost:2223", "localhost:2224"]    # machines that run the forward pass and compute gradients
})

This example again uses three local ports to simulate three machines. In the dictionary passed to ClusterSpec, each key is the name of a role (job) in the cluster and each value is the list of machines assigned to that role.
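
The resulting ClusterSpec object can also be inspected programmatically, which is handy for sanity-checking a configuration; a small sketch using standard ClusterSpec accessors (the comments show the values for the cluster above):

import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"]})

print(cluster.jobs)                    # the job names, e.g. ['ps', 'worker']
print(cluster.num_tasks("worker"))     # 2
print(cluster.job_tasks("worker"))     # ['localhost:2223', 'localhost:2224']
print(cluster.task_address("ps", 0))   # 'localhost:2222'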

Now that we know how to define a cluster, let's look at how to assign each machine its task. In Section 1 we wrote two nearly identical scripts, which would be laborious and hard to maintain on a large cluster. It is better to write a single script and, when launching it on each machine, pass in that machine's "role" (ps or worker) and its "name" (ip:port) as command-line arguments. Distributed training can be run asynchronously or synchronously; the two variants are introduced below.

2.1 Asynchronous Distributed Training

We use a simple DNN to classify the MNIST dataset; the script looks like this:

# Asynchronous distributed training
#coding=utf-8
import time
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data    # loading the data is not the focus here, so we simply import the helper

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string("job_name", "worker", "ps or worker")
tf.app.flags.DEFINE_integer("task_id", 0, "Task ID of the worker/ps running the train")
tf.app.flags.DEFINE_string("ps_hosts", "localhost:2222", "ps hosts")
tf.app.flags.DEFINE_string("worker_hosts", "localhost:2223,localhost:2224", "worker hosts, separated by commas")

# Global constants
MODEL_DIR = "./distribute_model_ckpt/"
DATA_DIR = "./data/mnist/"
BATCH_SIZE = 32


# main function (tf.app.run() calls it with the parsed argument list)
def main(argv):
    # ==========  STEP1: read the data  ========== #
    mnist = input_data.read_data_sets(DATA_DIR, one_hot=True, source_url='http://yann.lecun.com/exdb/mnist/')    # load MNIST

    # ==========  STEP2: declare the cluster  ========== #
    # Build the ClusterSpec and start the server
    ps_hosts = FLAGS.ps_hosts.split(",")
    worker_hosts = FLAGS.worker_hosts.split(",")
    cluster = tf.train.ClusterSpec({
        "ps": ps_hosts, "worker": worker_hosts})    # the cluster "roster"
    server = tf.train.Server(cluster, job_name=FLAGS.job_name, task_index=FLAGS.task_id)    # declare the server

    # ==========  STEP3: what the ps machine does  ========== #
    # Division of labor: a ps machine does not run the training loop, it only hosts the variables. server.join() blocks on this line forever.
    if FLAGS.job_name == "ps":
        with tf.device("/cpu:0"):
            server.join()

    # ==========  STEP4: what the worker machines do  ========== #
    # Everything below defines what a worker machine has to do
    is_chief = (FLAGS.task_id == 0)    # the worker with task_id=0 acts as the chief

    # replica_device_setter chooses a device for every operation:
    # it places all variables on the parameter servers and the computation on the current worker machine.
    device_setter = tf.train.replica_device_setter(
        worker_device="/job:worker/task:%d" % FLAGS.task_id,
        cluster=cluster)

    # The computation carried out by this worker machine
    with tf.device(device_setter):
        # Input data
        x = tf.placeholder(name="x-input", shape=[None, 28*28], dtype=tf.float32)    # each MNIST image is 28*28 pixels
        y_ = tf.placeholder(name="y-input", shape=[None, 10], dtype=tf.float32)      # MNIST has 10 classes
        # First (hidden) layer
        with tf.variable_scope("layer1"):
            weights = tf.get_variable(name="weights", shape=[28*28, 128], initializer=tf.glorot_normal_initializer())
            biases = tf.get_variable(name="biases", shape=[128], initializer=tf.glorot_normal_initializer())
            layer1 = tf.nn.relu(tf.matmul(x, weights) + biases, name="layer1")
        # Second (output) layer
        with tf.variable_scope("layer2"):
            weights = tf.get_variable(name="weights", shape=[128, 10], initializer=tf.glorot_normal_initializer())
            biases = tf.get_variable(name="biases", shape=[10], initializer=tf.glorot_normal_initializer())
            y = tf.add(tf.matmul(layer1, weights), biases, name="y")
        pred = tf.argmax(y, axis=1, name="pred")
        global_step = tf.contrib.framework.get_or_create_global_step()    # global_step must be created explicitly, otherwise an error is raised
        # Loss and optimizer
        cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, axis=1))
        loss = tf.reduce_mean(cross_entropy)
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss, global_step=global_step)

        hooks = [tf.train.StopAtStepHook(last_step=10000)]
        config = tf.ConfigProto(
            allow_soft_placement=True,    # if True, fall back to an available device when the requested one cannot be used
            log_device_placement=False,   # if True, log the device on which every op is placed
        )

        # ==========  STEP5: open the session  ========== #
        # For distributed training, use tf.train.MonitoredTrainingSession() instead of tf.Session()
        # For details see: https://www.cnblogs.com/estragon/p/10034511.html
        with tf.train.MonitoredTrainingSession(
                master=server.target,
                is_chief=is_chief,
                checkpoint_dir=MODEL_DIR,
                hooks=hooks,
                save_checkpoint_secs=10,
                config=config) as sess:
            print("session started!")
            start_time = time.time()
            step = 0
        
            while not sess.should_stop():
                xs, ys = mnist.train.next_batch(BATCH_SIZE)    # batch_size=32
                _, loss_value, global_step_value = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys})
                if step > 0 and step % 100 == 0:
                    duration = time.time() - start_time
                    sec_per_batch = duration / global_step_value
                    print("After %d training steps(%d global steps), loss on training batch is %g (%.3f sec/batch)" % (step, global_step_value, loss_value, sec_per_batch))
                step += 1
    

if __name__ == "__main__":
    tf.app.run()

The code is fairly long, but the overall structure is clear and breaks down into five steps: 1. read the data, 2. declare the cluster, 3. the ps machine's part, 4. the worker machines' part, 5. open the session. Step 4, the worker part, contains the definition of the network and is the most involved.

Now simply place the script on the three machines of the cluster and run it on each of them. First run it on the ps machine:

# ps machine
$ python3 distribute_train.py --job_name=ps --task_id=0 --ps_hosts=localhost:2222 --worker_hosts=localhost:2223,localhost:2224

# Output
2020-04-24 17:16:44.530325: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-04-24 17:16:44.546565: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x102ccad20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-04-24 17:16:44.546582: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-04-24 17:16:44.548075: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:258] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222}
2020-04-24 17:16:44.548088: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:258] Initialize GrpcChannelCache for job worker -> {0 -> localhost:2223, 1 -> localhost:2224}
2020-04-24 17:16:44.548525: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:365] Started server with target: grpc://localhost:2222

Then run the first worker machine's script; after it starts, it waits for the other worker machine to join:

# First worker machine
$ python3 distribute_train.py --job_name=worker --task_id=0 --ps_hosts=localhost:2222 --worker_hosts=localhost:2223,localhost:2224

# ... N lines of output omitted ...
2020-04-24 17:25:41.174507: I tensorflow/core/distributed_runtime/master.cc:268] CreateSession still waiting for response from worker: /job:worker/replica:0/task:1
2020-04-24 17:25:51.176111: I tensorflow/core/distributed_runtime/master.cc:268] CreateSession still waiting for response from worker: /job:worker/replica:0/task:1
2020-04-24 17:26:01.180872: I tensorflow/core/distributed_runtime/master.cc:268] CreateSession still waiting for response from worker: /job:worker/replica:0/task:1
2020-04-24 17:26:11.184377: I tensorflow/core/distributed_runtime/master.cc:268] CreateSession still waiting for response from worker: /job:worker/replica:0/task:1

Then run the script on the second worker machine:

# Second worker machine
$ python3 distribute_train.py --job_name=worker --task_id=1 --ps_hosts=localhost:2222 --worker_hosts=localhost:2223,localhost:2224

# Output
session started!
After 100 training steps(100 global steps), loss on training batch is 1.59204 (0.004 sec/batch)
After 200 training steps(200 global steps), loss on training batch is 1.10218 (0.003 sec/batch)
After 300 training steps(300 global steps), loss on training batch is 0.71179 (0.003 sec/batch)
After 400 training steps(400 global steps), loss on training batch is 0.679103 (0.002 sec/batch)
After 500 training steps(500 global steps), loss on training batch is 0.50411 (0.002 sec/batch)
# ... N lines of output omitted ...

2.2 Synchronous Distributed Training

Again we use a DNN to classify the MNIST dataset:

# Synchronous distributed training
#coding=utf-8
import time
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data    # loading the data is not the focus here, so we simply import the helper

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string("job_name", "worker", "ps or worker")
tf.app.flags.DEFINE_integer("task_id", 0, "Task ID of the worker/ps running the train")
tf.app.flags.DEFINE_string("ps_hosts", "localhost:2222", "ps hosts")
tf.app.flags.DEFINE_string("worker_hosts", "localhost:2223,localhost:2224", "worker hosts, separated by commas")

# Global constants
MODEL_DIR = "./distribute_model_ckpt/"
DATA_DIR = "./data/mnist/"
BATCH_SIZE = 32


# main function (tf.app.run() calls it with the parsed argument list)
def main(argv):
    # ==========  STEP1: read the data  ========== #
    mnist = input_data.read_data_sets(DATA_DIR, one_hot=True, source_url='http://yann.lecun.com/exdb/mnist/')    # load MNIST

    # ==========  STEP2: declare the cluster  ========== #
    # Build the ClusterSpec and start the server
    ps_hosts = FLAGS.ps_hosts.split(",")
    worker_hosts = FLAGS.worker_hosts.split(",")
    cluster = tf.train.ClusterSpec({
        "ps": ps_hosts, "worker": worker_hosts})    # the cluster "roster"
    server = tf.train.Server(cluster, job_name=FLAGS.job_name, task_index=FLAGS.task_id)    # declare the server
    n_workers = len(worker_hosts)    # number of worker machines

    # ==========  STEP3: what the ps machine does  ========== #
    # Division of labor: a ps machine does not run the training loop, it only hosts the variables. server.join() blocks on this line forever.
    if FLAGS.job_name == "ps":
        with tf.device("/cpu:0"):
            server.join()

    # ==========  STEP4: what the worker machines do  ========== #
    # Everything below defines what a worker machine has to do
    is_chief = (FLAGS.task_id == 0)    # the worker with task_id=0 acts as the chief

    # replica_device_setter chooses a device for every operation:
    # it places all variables on the parameter servers and the computation on the current worker machine.
    device_setter = tf.train.replica_device_setter(
        worker_device="/job:worker/task:%d" % FLAGS.task_id,
        cluster=cluster)

    # The computation carried out by this worker machine
    with tf.device(device_setter):
        # Input data
        x = tf.placeholder(name="x-input", shape=[None, 28*28], dtype=tf.float32)    # each MNIST image is 28*28 pixels
        y_ = tf.placeholder(name="y-input", shape=[None, 10], dtype=tf.float32)      # MNIST has 10 classes
        # First (hidden) layer
        with tf.variable_scope("layer1"):
            weights = tf.get_variable(name="weights", shape=[28*28, 128], initializer=tf.glorot_normal_initializer())
            biases = tf.get_variable(name="biases", shape=[128], initializer=tf.glorot_normal_initializer())
            layer1 = tf.nn.relu(tf.matmul(x, weights) + biases, name="layer1")
        # Second (output) layer
        with tf.variable_scope("layer2"):
            weights = tf.get_variable(name="weights", shape=[128, 10], initializer=tf.glorot_normal_initializer())
            biases = tf.get_variable(name="biases", shape=[10], initializer=tf.glorot_normal_initializer())
            y = tf.add(tf.matmul(layer1, weights), biases, name="y")
        pred = tf.argmax(y, axis=1, name="pred")
        global_step = tf.contrib.framework.get_or_create_global_step()    # global_step must be created explicitly, otherwise an error is raised
        # Loss and optimizer
        cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, axis=1))
        loss = tf.reduce_mean(cross_entropy)
        # Synchronous updates are implemented with tf.train.SyncReplicasOptimizer
        opt = tf.train.SyncReplicasOptimizer(
            tf.train.GradientDescentOptimizer(0.01),
            replicas_to_aggregate=n_workers,
            total_num_replicas=n_workers
        )
        sync_replicas_hook = opt.make_session_run_hook(is_chief)
        train_op = opt.minimize(loss, global_step=global_step)

        hooks = [sync_replicas_hook, tf.train.StopAtStepHook(last_step=10000)]    # add the synchronous-update hook
        config = tf.ConfigProto(
            allow_soft_placement=True,    # if True, fall back to an available device when the requested one cannot be used
            log_device_placement=False,   # if True, log the device on which every op is placed
        )

        # ==========  STEP5: open the session  ========== #
        # For distributed training, use tf.train.MonitoredTrainingSession() instead of tf.Session()
        # For details see: https://www.cnblogs.com/estragon/p/10034511.html
        with tf.train.MonitoredTrainingSession(
                master=server.target,
                is_chief=is_chief,
                checkpoint_dir=MODEL_DIR,
                hooks=hooks,
                save_checkpoint_secs=10,
                config=config) as sess:
            print("session started!")
            start_time = time.time()
            step = 0
        
            while not sess.should_stop():
                xs, ys = mnist.train.next_batch(BATCH_SIZE)    # batch_size=32
                _, loss_value, global_step_value = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys})
                if step > 0 and step % 100 == 0:
                    duration = time.time() - start_time
                    sec_per_batch = duration / global_step_value
                    print("After %d training steps(%d global steps), loss on training batch is %g (%.3f sec/batch)" % (step, global_step_value, loss_value, sec_per_batch))
                step += 1
    

if __name__ == "__main__":
    tf.app.run()

Synchronous distributed training is almost the same as asynchronous distributed training; there are only two differences:

  • The optimizer: the base tf.train.GradientDescentOptimizer is wrapped in tf.train.SyncReplicasOptimizer instead of being used directly
  • The hooks: sync_replicas_hook = opt.make_session_run_hook(is_chief) is added to the hook list

Everything else is identical to the asynchronous version and is not repeated here; the condensed sketch below shows just the two changes side by side.
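For quick reference, here are only the pieces that change, pulled out of the two scripts above (loss, global_step, n_workers and is_chief are defined as in those scripts):

# Asynchronous: each worker applies its own gradients as soon as they are ready
opt = tf.train.GradientDescentOptimizer(0.01)
train_op = opt.minimize(loss, global_step=global_step)
hooks = [tf.train.StopAtStepHook(last_step=10000)]

# Synchronous: gradients from all workers are aggregated before a single update is applied
opt = tf.train.SyncReplicasOptimizer(
    tf.train.GradientDescentOptimizer(0.01),
    replicas_to_aggregate=n_workers,    # wait for this many gradients per step
    total_num_replicas=n_workers)
train_op = opt.minimize(loss, global_step=global_step)
sync_replicas_hook = opt.make_session_run_hook(is_chief)    # coordinates the replicas
hooks = [sync_replicas_hook, tf.train.StopAtStepHook(last_step=10000)]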

A good example with accompanying explanation:
https://github.com/TracyMcgrady6/Distribute_MNIST/tree/master

Reposted from blog.csdn.net/weixin_39589455/article/details/132046457