TensorFlow Federated: Machine Learning on Decentralized Data

https://www.tensorflow.org/federated/

 

  • TensorFlow Federated (TFF) is an open-source framework for machine learning and other computations on decentralized data. TFF has been developed to facilitate open research and experimentation with Federated Learning (FL), an approach to machine learning in which a shared global model is trained across many participating clients while the training data remains stored locally. For example, FL has been used to train prediction models for mobile keyboards without uploading sensitive typing data to servers.

    With TFF, developers can apply the included federated learning algorithms to their own models and data, and can also experiment with novel algorithms. The building blocks provided by TFF can likewise be used to implement non-learning computations, such as aggregated analytics over distributed data. TFF's interfaces can be divided into two layers:

  • Federated Learning (FL) API

    This layer offers a set of high-level interfaces that allow developers to apply the included implementations of federated training and evaluation to their existing TensorFlow models; an evaluation sketch follows the example below.
  • Federated Core (FC) API

    At the core of the system is a set of lower-level interfaces for concisely expressing novel federated algorithms by combining TensorFlow with distributed communication operators within a strongly-typed functional programming environment. This layer also serves as the foundation on which federated learning is built; a minimal sketch follows the example below.
  • With TFF, developers can declaratively express federated computations, which can then be deployed to diverse runtime environments. TFF includes a single-machine simulation runtime for experiments. Please visit the tutorials and try it out for yourself!
     
    import tensorflow as tf
    import tensorflow_federated as tff
    from tensorflow_federated.python.examples import mnist
    tf.compat.v1.enable_v2_behavior()

    # Load simulation data.
    source, _ = tff.simulation.datasets.emnist.load_data()
    def client_data(n):
      dataset = source.create_tf_dataset_for_client(source.client_ids[n])
      return mnist.keras_dataset_from_emnist(dataset).repeat(10).batch(20)

    # Pick a subset of client devices to participate in training.
    train_data = [client_data(n) for n in range(3)]

    # Grab a single batch of data so that TFF knows what data looks like.
    sample_batch = tf.nest.map_structure(
        lambda x: x.numpy(), next(iter(train_data[0])))

    # Wrap a Keras model for use with TFF.
    def model_fn():
      return tff.learning.from_compiled_keras_model(
          mnist.create_simple_keras_model(), sample_batch)

    # Simulate a few rounds of training with the selected client devices.
    trainer = tff.learning.build_federated_averaging_process(model_fn)
    state = trainer.initialize()
    for _ in range(5):
      state, metrics = trainer.next(state, train_data)
      print(metrics.loss)
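
    To round out the FL API example above, here is a sketch of federated evaluation. It reuses the model_fn and state from the training loop; the call to tff.learning.build_federated_evaluation matches the TFF API of the same vintage as this example, so treat it as an illustration rather than a current reference.

    # Build a federated evaluation computation from the same model_fn.
    evaluation = tff.learning.build_federated_evaluation(model_fn)

    # Evaluate the trained model weights on the selected client datasets.
    eval_metrics = evaluation(state.model, train_data)
    print(eval_metrics)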
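
    For the Federated Core (FC) API layer described above, a minimal sketch of a hand-written federated computation follows. It uses the canonical temperature-averaging example from the TFF documentation; tff.federated_mean is one of the distributed communication operators mentioned earlier, and the function and value names here are purely illustrative.

    import tensorflow as tf
    import tensorflow_federated as tff

    # Declare a computation over float32 values placed at the clients.
    @tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))
    def get_average_temperature(client_temperatures):
      # federated_mean is a distributed communication operator that
      # aggregates per-client values into a single server-side value.
      return tff.federated_mean(client_temperatures)

    # In the simulation runtime, client values are passed as a Python list.
    print(get_average_temperature([68.5, 70.3, 69.8]))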
