Kotlin Flow Exploration

reactive programming

Because Kotlin Flow is an implementation of reactive programming, let's first understand the concept of reactive programming.

First look at the Baidu Encyclopedia explanation:

Reactive programming is a programming paradigm oriented towards data flow and change propagation. This means that static or dynamic data flows can be easily expressed in a programming language, and the associated computational model will automatically propagate changing values through the data flow.

This explanation is abstract and hard to understand; the key takeaway is its core concept: the data flow.

To understand this data flow, let's first look at RxJava, a reactive programming framework under ReactiveX.

RxJava is an implementation of reactive programming. Its definition:

RxJava is a Java VM implementation of Reactive Extensions: a library for composing asynchronous and event-based programs by using observable sequences.
It extends the Observer pattern to support data/event sequences, and adds operators that allow you to compose sequences declaratively while removing concerns about issues such as low-level threading, synchronization, thread safety, and concurrent data structures.

This definition may still feel fuzzy after a first read, so let's analyze it through a simple RxJava example:

Observable.just(bitmap).map { bmp ->
    // run the time-consuming work on a worker thread: save the bitmap locally
    saveBitmap(bmp)
}.subscribeOn(Schedulers.io()).observeOn(AndroidSchedulers.mainThread()).subscribe { bitmapLocalPath ->
    // handle the local path of the saved bitmap on the main thread
    refreshImageView(bitmapLocalPath)
}

In the example above, a bitmap is saved locally and the local path is returned: source data (bitmap) → save the bitmap locally → obtain the local image path and refresh the UI. The sequence of events that occurs over time during this process can be understood as a data flow.

A data flow consists of a producer, intermediaries (intermediate operations), and a consumer:

  • Producer: the source data; it adds data to the data flow;
  • Intermediaries (intermediate operations): can modify the values emitted into the data flow, or modify the data flow itself;
  • Consumer: the result data; it consumes the values in the data flow.

Then, the data flow in the above example is:

  • Producer: the source data, the bitmap;
  • Intermediary (intermediate operation): the map operation, which saves the bitmap locally;
  • Consumer: the local image path.

Let's look at the data flow explanation in RxJava:

A dataflow in RxJava consists of a source, zero or more intermediate steps, and then a data consumer or combiner step (where the step is responsible for consuming the dataflow in some way):

source.operator1().operator2().operator3().subscribe(consumer);
source.flatMap(value -> source.operator1().operator2().operator3());

Here, if we imagine ourselves at operator2, looking to the left towards the source is called the upstream. Looking to the right towards the subscriber/consumer is called the downstream. This is often more apparent when each element is written on a separate line:

source
  .operator1()
  .operator2()
  .operator3()
  .subscribe(consumer)

This is the concept of upstream and downstream in RxJava.

In fact, by analogy with RxJava, the Flow data flow has the same upstream and downstream concepts:

flow
  .operator1()
  .operator2()
  .operator3()
  .collect(consumer)

After understanding the core concept of data flow, we have a preliminary impression of reactive programming. But there is much more to a reactive programming implementation: it also involves the observer pattern, thread scheduling, and so on. Principles aside, what are the benefits of using it in development? Its main advantages are:

  • For concurrent programming, it handles thread switching without callback hell, simplifying asynchronous code;
  • The code is elegant, concise, and easy to read and maintain.

Let's look at two business examples:

Observable.just(bitmap).map { bmp ->
    // run the time-consuming work on a worker thread: save the bitmap locally
    saveBitmap(bmp)
}.map { path ->
    // run the time-consuming work on a worker thread: upload the image to the server
    uploadBitmap(path)
}.subscribeOn(Schedulers.io()).observeOn(AndroidSchedulers.mainThread())
    .subscribe { downloadUrl ->
        // handle the returned image download URL on the main thread
    }

// batch-download files from the server
Observable.from(downloadUrls).flatMap { downloadUrl ->
    // download a single file and return the local file
    Observable.just(downloadUrl).map { url -> downloadResource(url) }
}.map { file ->
    // unzip the file
    unzipFile(file)
}.subscribeOn(Schedulers.io()).observeOn(AndroidSchedulers.mainThread())
    .subscribe { folderPath ->
        // receive the folder path
    }

Therefore, a reactive programming implementation mainly helps us solve concurrency problems, handling asynchronous events with elegant and concise code.

Kotlin coroutines and Flow together also enable reactive programming. In the Kotlin environment, combined with the Lifecycle, ViewModel, and Flow extensions provided by Android, they let us do concurrent programming on Android and manage asynchronous events with ease.

Kotlin Flow

Kotlin Flow is Kotlin's data flow, built on top of Kotlin coroutines. The previous article, Exploring Kotlin Coroutines, analyzed the general principles of coroutines: coroutines are a threading API framework provided by Kotlin that makes concurrent programming convenient. The combination of Kotlin coroutines and Flow (data flow) is therefore similar to the RxJava framework.

The two RxJava business examples above can be implemented with Kotlin coroutines and Flow as follows:

GlobalScope.launch(Dispatchers.Main) {
    flowOf(bitmap).map { bmp ->
        // run the time-consuming work on a worker thread: save the bitmap locally
        Log.d("TestFlow", "saveBitmap: ${Thread.currentThread()}")
        saveBitmap(bmp)
    }.flowOn(Dispatchers.IO).collect { bitmapLocalPath ->
        // handle the local path of the saved bitmap on the main thread
        Log.d("TestFlow", "bitmapLocalPath=$bitmapLocalPath: ${Thread.currentThread()}")
    }
}
// batch-download files from the server
GlobalScope.launch(Dispatchers.Main) {
    downloadUrls.asFlow().flatMapConcat { downloadUrl ->
        // download a single file and return the local file
        flowOf(downloadUrl).map { url ->
            Log.d("TestFlow", "downloadResource:url=$url: ${Thread.currentThread()}")
            downloadResource(url)
        }
    }.map { file ->
        // unzip the file
        Log.d("TestFlow", "unzipFile:file=${file.path}: ${Thread.currentThread()}")
        unzipFile(file)
    }.flowOn(Dispatchers.IO).collect { folderPath ->
        // receive the folder path
        Log.d("TestFlow", "folderPath=$folderPath: ${Thread.currentThread()}")
    }
}

Console output:
TestFlow: saveBitmap: Thread[DefaultDispatcher-worker-1,5,main]
TestFlow: bitmapLocalPath=/mnt/sdcard/Android/data/com.wangjiang.example/files/images/flow.png: Thread[main,5,main]

TestFlow: downloadResource:url=https://www.wangjiang.example/coroutine.zip: Thread[DefaultDispatcher-worker-1,5,main]
TestFlow: unzipFile:file=/mnt/sdcard/Android/data/com.wangjiang.example/files/zips/coroutine.zip: Thread[DefaultDispatcher-worker-1,5,main]
TestFlow: downloadResource:url=https://www.wangjiang.example/flow.zip: Thread[DefaultDispatcher-worker-1,5,main]
TestFlow: unzipFile:file=/mnt/sdcard/Android/data/com.wangjiang.example/files/zips/flow.zip: Thread[DefaultDispatcher-worker-1,5,main]
TestFlow: folderPath=/mnt/sdcard/Android/data/com.wangjiang.example/files/zips/coroutine: Thread[main,5,main]
TestFlow: folderPath=/mnt/sdcard/Android/data/com.wangjiang.example/files/zips/flow: Thread[main,5,main]

As you can see, the effect is consistent with what RxJava achieves. First, launch starts a coroutine; then a Flow is created from the source data (data production); the flow then goes through flatMapConcat and map transformations (multiple intermediate operations); and finally collect obtains the result data (data consumption). Thread switching is also involved: the time-consuming tasks started from the main thread run on a worker thread, and their results are delivered back to the main thread (flowOn specifies that the preceding intermediate operations run on the IO dispatcher). So the combination of coroutines and Flow (data flow) is an implementation of reactive programming, and it lets us write elegant asynchronous code for concurrent programming in the Kotlin environment.

Let's get familiar with coroutines and Flow separately.

coroutine concept

First, let's take a look at some concepts and APIs in coroutines.

CoroutineScope: Define the scope of the coroutine.

CoroutineScope keeps track of all coroutines it creates using launch or async. You can call scope.cancel() at any time to cancel work in progress (that is, its running coroutines). In Android, some KTX libraries provide their own CoroutineScope for certain lifecycle classes: for example, ViewModel has viewModelScope and Lifecycle has lifecycleScope. However, unlike a dispatcher, a CoroutineScope does not run coroutines.

Kotlin provides MainScope for UI components:

public fun MainScope(): CoroutineScope = ContextScope(SupervisorJob() + Dispatchers.Main)

GlobalScope is scoped to the entire lifetime of the application:

public object GlobalScope : CoroutineScope {
    /**
     * Returns [EmptyCoroutineContext].
     */
    override val coroutineContext: CoroutineContext
        get() = EmptyCoroutineContext
}

Because it lives for the entire application life cycle, it should be used with caution.

You can also define a custom scope:

val scope = CoroutineScope(Job() + Dispatchers.Main)

In addition, the Android KTX libraries extend CoroutineScope, so on Android we usually use lifecycleScope, which is tied to the Activity or Fragment life cycle, and viewModelScope, which is tied to the ViewModel life cycle.

public val Lifecycle.coroutineScope: LifecycleCoroutineScope
    get() {
        while (true) {
            val existing = mInternalScopeRef.get() as LifecycleCoroutineScopeImpl?
            if (existing != null) {
                return existing
            }
            val newScope = LifecycleCoroutineScopeImpl(
                this,
                SupervisorJob() + Dispatchers.Main.immediate
            )
            if (mInternalScopeRef.compareAndSet(null, newScope)) {
                newScope.register()
                return newScope
            }
        }
    }
public val ViewModel.viewModelScope: CoroutineScope
    get() {
        val scope: CoroutineScope? = this.getTag(JOB_KEY)
        if (scope != null) {
            return scope
        }
        return setTagIfAbsent(
            JOB_KEY,
            CloseableCoroutineScope(SupervisorJob() + Dispatchers.Main.immediate)
        )
    }

internal class CloseableCoroutineScope(context: CoroutineContext) : Closeable, CoroutineScope {
    override val coroutineContext: CoroutineContext = context

    override fun close() {
        coroutineContext.cancel()
    }
}
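As a minimal sketch of how these scopes are typically used (assuming the lifecycle-runtime-ktx and lifecycle-viewmodel-ktx artifacts, and reusing the saveBitmap and refreshImageView helpers and the bitmap variable from the earlier example):

// inside a Fragment or Activity: the coroutine is cancelled automatically when the Lifecycle is destroyed
lifecycleScope.launch {
    val path = withContext(Dispatchers.IO) { saveBitmap(bitmap) }
    refreshImageView(path) // back on the main thread
}

// inside a ViewModel: the coroutine is cancelled when the ViewModel is cleared
class SaveBitmapViewModel : ViewModel() {
    fun save(bitmap: Bitmap) {
        viewModelScope.launch {
            withContext(Dispatchers.IO) { saveBitmap(bitmap) }
        }
    }
}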

Start coroutines: launch and async

There are two ways to start a coroutine:

  • launch: starts a new coroutine and returns a Job, which can be cancelled with Job.cancel;
  • async: also starts a new coroutine, but returns an implementation of the Deferred interface, which inherits from Job; the await suspend function can be used to wait for its result (see the sketch below).
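A minimal sketch of the two builders (fetchToken and fetchUser are hypothetical suspend functions used here only for illustration):

val scope = CoroutineScope(Job() + Dispatchers.Main)

// launch: fire-and-forget; the returned Job can be used to cancel this coroutine
val job: Job = scope.launch {
    val token = fetchToken()
    Log.d("TestCoroutine", "token=$token")
}
job.cancel()

// async: returns a Deferred; await() suspends until the result is available
scope.launch {
    val userDeferred = async { fetchUser() }
    val user = userDeferred.await()
    Log.d("TestCoroutine", "user=$user")
}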

CoroutineContext: coroutine context

val scope = CoroutineScope(Job() + Dispatchers.Main)

The plus operation is defined on CoroutineScope (and, more fundamentally, on CoroutineContext) for combining context elements:

public operator fun CoroutineScope.plus(context: CoroutineContext): CoroutineScope =
    ContextScope(coroutineContext + context)

Because both Job and CoroutineDispatcher ultimately inherit from the Element interface, and Element in turn inherits from the CoroutineContext interface:

public interface Element : CoroutineContext

So Job() and Dispatchers.Main can be added together. Note that the context passed to the CoroutineScope constructor must contain a Job(); if it does not, one is created automatically:

public fun CoroutineScope(context: CoroutineContext): CoroutineScope =
    ContextScope(if (context[Job] != null) context else context + Job())
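A quick way to verify this (a sketch; the Job companion object is used as the context key for the lookup):

val scopeWithoutJob = CoroutineScope(Dispatchers.Main)     // no Job passed in
val autoJob = scopeWithoutJob.coroutineContext[Job]        // not null: a Job was added automatically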

The roles of Job and CoroutineDispatcher in the CoroutineContext are:

Job: controls the life cycle of the coroutine.
CoroutineDispatcher: dispatches work to the appropriate thread.

CoroutineDispatcher: coroutine dispatchers and threads

  • Dispatchers.Default: the default dispatcher, indicating that the coroutine should execute on a thread reserved for CPU-intensive work;
  • Dispatchers.Main: indicates that the coroutine should execute on the main thread reserved for UI operations;
  • Dispatchers.IO: indicates that the coroutine should execute on a thread reserved for I/O operations.

A dispatcher can be specified when starting a coroutine, when switching context, or via flowOn:

GlobalScope.launch(Dispatchers.Main) {
    // runs on the main thread
}
withContext(Dispatchers.IO) {
    // switches the enclosing coroutine to an I/O thread for this block
}
.flowOn(Dispatchers.IO) // runs the upstream flow operators on an I/O thread

summary

To use coroutines, first create a CoroutineScope responsible for managing them. When defining the scope, you specify a Job, which controls the life cycle of the coroutines, and a CoroutineDispatcher, which dispatches the work to the appropriate thread. Once the scope is defined, scope.launch starts a coroutine, and calling scope.launch multiple times starts multiple coroutines. scope.cancel cancels all coroutines started by the scope; to cancel a single coroutine, call cancel on the Job returned by scope.launch, since that Job controls the life cycle of that single coroutine. After a coroutine is started, work on the main thread continues to execute. Inside launch {}, withContext(Dispatchers.IO) moves execution of that block to an I/O worker thread; when the worker thread finishes the task, the result is returned to the main thread, where execution continues.

Simple example:

    // a scope that dispatches work to the main thread
    private val scope = CoroutineScope(Job() + Dispatchers.Main)

    // manages the life cycle of the corresponding coroutine
    private var job1: Job? = null

    fun exec() {
        // start a coroutine
        job1 = scope.launch {
            // run the time-consuming task on a worker thread
            withContext(Dispatchers.IO) {

            }
        }
        // start a coroutine
        val job2 = scope.launch {
            // start a child coroutine
            val taskResult1 = async {
                // run a time-consuming task on a worker thread and return a result
                withContext(Dispatchers.IO) {
                    1 // placeholder result
                }
            }
            val taskResult2 = async {
                // run a time-consuming task on a worker thread and return a result
                withContext(Dispatchers.IO) {
                    2 // placeholder result
                }
            }
            // execution continues only after both taskResult1 and taskResult2 have returned
            taskResult1.await() + taskResult2.await()
        }
    }

    fun cancelJob() {
        // cancel the coroutine corresponding to job1
        job1?.cancel("cancel job1")
    }

    fun cancelScope() {
        // cancel all coroutines started by scope
        scope.cancel("cancel scope")
    }

In the example above:

  • scope: defines a scope that dispatches work to the main thread and tracks all coroutines it creates using launch or async;
  • job1: manages the life cycle of its corresponding coroutine;
  • withContext(Dispatchers.IO): switches to a worker thread to perform the time-consuming task;
  • cancelJob: cancels the coroutine corresponding to job1;
  • cancelScope: cancels all coroutines started by scope.

Flow data flow

After understanding some basic concepts and APIs of Kotlin coroutines, we know the basic usage of coroutines. Next, let's take a look at the concepts and APIs related to Kotlin Flow.

The Flow API in Kotlin is designed to asynchronously process data flows that execute sequentially. Flow is conceptually very close to a Sequence: we can operate on a Flow just like a Sequence in Kotlin (transform, filter, map, etc.). The main difference between Kotlin Sequences and Flow is that Flow can suspend.

If you understand Kotlin Sequence, Kotlin Flow is actually very easy to understand. The earlier article on Kotlin lazy collection operations (Sequence) analyzed how Sequence works, and Flow can be understood along similar lines.

val sequenceResult = intArrayOf(1, 2, 3).asSequence().map { it * it }.toList()

MainScope().launch {
    val flowResult = intArrayOf(1, 2, 3).asFlow().map { it * it }.toList(mutableListOf())
}

The values of sequenceResult and flowResult above are both [1, 4, 9].

In a Sequence, if there is no terminal operation, the intermediate operations will not be executed. The same is true for Flow: if there is no data consumption (collect) on the flow, the intermediate operations will not be executed.

flowOf(bitmap).map { bmp ->
    // run the time-consuming work on a worker thread: save the bitmap locally
    saveBitmap(bmp)
}.flowOn(Dispatchers.Default)

In the above code, the map operation will not be executed.
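For comparison, once a terminal operator such as collect is added, the whole chain runs (a sketch reusing the saveBitmap example above):

MainScope().launch {
    flowOf(bitmap).map { bmp ->
        // now executed, on Dispatchers.Default because of flowOn below
        saveBitmap(bmp)
    }.flowOn(Dispatchers.Default).collect { bitmapLocalPath ->
        // terminal operator: consumes the value on the collector's context (the main thread here)
        Log.d("TestFlow", "bitmapLocalPath=$bitmapLocalPath")
    }
}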

A complete data flow includes: data production (flowOf, asFlow, flow{}) → intermediate operations (map, filter, etc.) → data consumption (collect, toList, toSet, etc.). These operations are described below.

Data Flow: Data Production

Data production mainly builds the data flow from a data source. You can use the Flow builder functions provided in Builders.kt, such as:

intArrayOf(1, 2, 3).asFlow().map { it * it }
val downloadUrl = "https://github.com/ReactiveX/RxJava"
flowOf(downloadUrl).map { downloadZip(it) }
(1..10).asFlow().filter { it % 2 == 0 }

Data flows are usually constructed directly with the flowOf and asFlow methods. They all create cold flows:

Cold flow: The code in this flow builder is not run until the flow is collected.

You can also build a data flow with flow{}, using the emit method to add source data to the flow:

flow<Int> {
    emit(1)
    // note: emit must be called from the flow builder's own coroutine context;
    // wrapping it in withContext would violate the flow invariant, so use flowOn
    // on the resulting flow if a different dispatcher is needed
    emit(2)
    emit(3)
}.map { it * it }
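Because these builders create cold flows, the block above does not run until the flow is collected, and it runs again for every collector; a minimal sketch:

MainScope().launch {
    val coldFlow = flow {
        Log.d("TestFlow", "builder started") // runs once per collect
        emit(1)
    }
    coldFlow.collect { Log.d("TestFlow", "first collector: $it") }
    coldFlow.collect { Log.d("TestFlow", "second collector: $it") } // "builder started" is logged again
}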

Whether built with flowOf, asFlow, or flow{}, they are all implemented on top of the FlowCollector interface:

public fun <T> flow(@BuilderInference block: suspend FlowCollector<T>.() -> Unit): Flow<T> = SafeFlow(block)

internal inline fun <T> unsafeFlow(@BuilderInference crossinline block: suspend FlowCollector<T>.() -> Unit): Flow<T> {
    return object : Flow<T> {
        override suspend fun collect(collector: FlowCollector<T>) {
            collector.block()
        }
    }
}

The emit method provided by the FlowCollector interface is responsible for adding source data to the data flow:

public fun interface FlowCollector<in T> {

    /**
     * Collects the value emitted by the upstream.
     * This method is not thread-safe and should not be invoked concurrently.
     */
    public suspend fun emit(value: T)
}

Summary: to build a data flow, you can use the Flow builders flowOf, asFlow, and flow{}, all of which use the emit method provided by FlowCollector to add source data to the data flow.

Data Flow: Intermediate Operations

Intermediate operations modify the values emitted into the data flow, or modify the data flow itself, for example the filter, map, and flatMapConcat operations:

intArrayOf(1, 2, 3).asFlow().map { it * it }.collect{ }

(1..100).asFlow().filter { it % 2 == 0 }.collect{ }

val data = hashMapOf<String, List<String>>(
    "Java" to arrayListOf<String>("xiaowang", "xiaoli"),
    "Kotlin" to arrayListOf<String>("xiaozhang", "xiaozhao")
)
flow<Map<String, List<String>>> {
    emit(data)
}.flatMapConcat {
    it.values.asFlow()
}.collect { }

There are many intermediate operators, which can roughly be divided into:

  • Transformation operators: simple transformations can use filter and map, and complex transformations can use transform;
  • Size-limiting operators: when the flow reaches the corresponding limit, its execution is cancelled; for example, take(2) takes only the first two values;
  • Dropping operators: drop result values from the flow; for example, drop(2) drops the first two values;
  • Flattening operators: flatten a flow of flows into a single flow. flatMapConcat and flattenConcat collect the incoming flows sequentially; flatMapMerge and flattenMerge collect all incoming flows concurrently and merge their values into a single flow so that values are emitted as soon as possible; flatMapLatest collects only the latest incoming flow, cancelling the previous one;
  • Combination operators: combine multiple flows. zip pairs the values of two flows and only combines when both flows have produced a value; combine combines the latest values of the two flows, re-emitting whenever either flow emits (a sketch of zip, combine, and conflate follows the code examples below);
  • Buffer operator: when data is produced faster than it is consumed, buffer runs the producer concurrently with the collector and shortens the overall collection time;
  • Conflation operator: when there is no need to process every value, conflate skips intermediate values and keeps only the most recent one;
  • flowOn operator: changes the context in which the upstream flow is executed; the operators before flowOn run on the dispatcher it specifies (Dispatchers.Default, Dispatchers.IO, Dispatchers.Main), i.e. it determines the thread on which the preceding operations execute.

The above describes the general usage scenarios of the main operators. For a detailed explanation of each operator, refer to the official documentation on asynchronous flows.

Intermediate operator code example:

(1..3).asFlow().take(2).collect {
    // collects the values 1, 2
}
(1..3).asFlow().drop(2).collect {
    // collects the value 3
}
    private fun downloadVideo(videoUrl: String): Pair<String, String> {
        return Pair(videoUrl, "videoFile")
    }

    private fun downloadAudio(audioUrl: String): Pair<String, String> {
        return Pair(audioUrl, "audioFile")
    }

    private fun downloadImage(imageUrl: String): Pair<String, String> {
        return Pair(imageUrl, "imageFile")
    }
    
  MainScope().launch {
            val imageDownloadUrls = arrayListOf<String>("image1", "image2")
            val audioDownloadUrls = arrayListOf<String>("audio1", "audio2", "audio3")
            val videoDownloadUrls = arrayListOf<String>("video1", "video2", "video3", "video4")
            val imageFlows = imageDownloadUrls.asFlow().map {
                downloadImage(it)
            }
            val audioFlows = audioDownloadUrls.asFlow().map {
                downloadAudio(it)
            }
            val videoFlows = videoDownloadUrls.asFlow().map {
                downloadVideo(it)
            }
            merge(imageFlows, audioFlows, videoFlows).flowOn(Dispatchers.IO).onEach {
                Log.d("TestFlow", "result=$it")
            }.collect()
        }
Console output:
TestFlow: result=(image1, imageFile)
TestFlow: result=(image2, imageFile)
TestFlow: result=(audio1, audioFile)
TestFlow: result=(audio2, audioFile)
TestFlow: result=(audio3, audioFile)
TestFlow: result=(video1, videoFile)
TestFlow: result=(video2, videoFile)
TestFlow: result=(video3, videoFile)
TestFlow: result=(video4, videoFile)

The merge operator merges multiple flows into a single flow and collects them concurrently, similar to RxJava's merge operator.
(1..3).asFlow().onStart {
                Log.d("TestFlow", "onStart:${Thread.currentThread()}")
            }.flowOn(Dispatchers.Main).map {
                Log.d("TestFlow", "map:$it,${Thread.currentThread()}")
                if (it % 2 == 0)
                    throw IllegalArgumentException("fatal args:$it")
                it * it
            }.catch {
                Log.d("TestFlow", "catch:${Thread.currentThread()}")
                emit(-1)
            }.flowOn(Dispatchers.IO)
                .onCompletion { Log.d("TestFlow", "onCompletion:${Thread.currentThread()}") }
                .onEach {
                    Log.d("TestFlow", "onEach:$it,${Thread.currentThread()}")
                }.collect()
Console output:
TestFlow: onStart:Thread[main,5,main]
TestFlow: map:1,Thread[DefaultDispatcher-worker-3,5,main]
TestFlow: map:2,Thread[DefaultDispatcher-worker-3,5,main]
TestFlow: catch:Thread[DefaultDispatcher-worker-3,5,main]
TestFlow: onEach:1,Thread[main,5,main]
TestFlow: onEach:-1,Thread[main,5,main]
TestFlow: onCompletion:Thread[main,5,main]

flowOn(Dispatchers.Main) specifies that onStart runs on the main thread, and flowOn(Dispatchers.IO) specifies that map and catch run on an IO thread.
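The combination and conflation operators mentioned above can be sketched as follows (the values and delays are illustrative only):

MainScope().launch {
    val numbers = (1..3).asFlow()
    val letters = flowOf("a", "b", "c")

    // zip pairs values one-to-one: 1a, 2b, 3c
    numbers.zip(letters) { n, s -> "$n$s" }.collect {
        Log.d("TestFlow", "zip: $it")
    }

    // combine re-emits whenever either flow produces a value, using each flow's latest value
    numbers.onEach { delay(100) }
        .combine(letters.onEach { delay(150) }) { n, s -> "$n$s" }
        .collect { Log.d("TestFlow", "combine: $it") }

    // conflate skips intermediate values when the collector is slower than the producer
    (1..5).asFlow().onEach { delay(100) }
        .conflate()
        .collect {
            delay(300)
            Log.d("TestFlow", "conflate: $it")
        }
}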

Summary: intermediate operations are transformations on the data flow, similar to the transformation operations of Sequence and RxJava.

Data Flow: Data Consumption

Data consumption is the use of the data flow's result values. The most commonly used terminal operator is collect, which collects the flow's result values:

(1..3).asFlow().collect {
    // collects the values 1, 2, 3
}

In addition to the collect operator, there are other terminal operators that obtain the result values of a data flow:

  • collectLatest: uses only the latest value of the data flow;
  • toList, toSet, etc.: convert the data flow's result values into a collection;
  • first: gets the first result value of the data flow;
  • single: ensures that the flow emits exactly one value;
  • reduce: accumulates the values in the data flow;
  • fold: accumulates the values in the data flow starting from a given initial value.

Terminal operator code example:

(1..3).asFlow().collectLatest {
    delay(300)
    // only 3 is received; collection of earlier values is cancelled when a new value arrives
}
// converted to a List: [1, 2, 3]
val list = (1..3).asFlow().toList()
// converted to a Set: [1, 2, 3]
val set = (1..3).asFlow().toSet()
val first = (1..3).asFlow().first()
// first is the first result value: 1
val single = (1..3).asFlow().single()
// throws an exception because the flow emits more than a single value
val reduce = (1..3).asFlow().reduce { a, b ->
    a + b
}
// reduce is 6 = 1 + 2 + 3
val fold = (1..3).asFlow().fold(10) { a, b ->
    a + b
}
// fold is 16 = 10 + 1 + 2 + 3

In addition to the terminal operators above, there are operators that are used in combination with a terminal operator:

  • onStart: called before the data flow's result values are collected;
  • onCompletion: called after the data flow's result values have been collected;
  • onEmpty: called when the data flow completes without emitting any elements (see the sketch after the example below);
  • onEach: invoked for each value of the flow as the result values are collected;
  • catch: declaratively catches exceptions that occur while collecting the data flow's results.

Code example for terminal associative operator:

 (1..3).asFlow().onStart {
                Log.d("TestFlow", "onStart")
            }.map {
                if (it % 2 == 0)
                    throw IllegalArgumentException("fatal args:$it")
                it * it
            }.catch { emit(-1) }.onCompletion { Log.d("TestFlow", "onCompletion") }.onEach {
                Log.d("TestFlow", "onEach:$it")
            }.collect()
            
Console output:
TestFlow: onStart
TestFlow: onEach:1
TestFlow: onEach:-1
TestFlow: onCompletion
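
onEmpty, which is not shown above, fires only when the flow completes without emitting anything; a minimal sketch:

MainScope().launch {
    emptyFlow<Int>().onEmpty {
        emit(-1) // provide a fallback value when nothing was emitted
    }.collect {
        Log.d("TestFlow", "onEmpty fallback: $it") // prints -1
    }
}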

Summary: when consuming a data flow, terminal operators can be combined to produce collections, accumulated values, and so on. To observe the start or completion of collecting the flow's result values, use onStart and onCompletion, and exceptions can be handled declaratively with catch.

Summary

Reactive programming can be understood as a data flow-oriented programming approach: a data source is used to build a data flow → the values in the data flow are modified → the result values of the data flow are processed. In this process, a series of events or operations happen in sequence. In the Java environment, the RxJava framework implements reactive programming by combining data flows, the observer pattern, and a threading framework; in the Kotlin environment, Kotlin coroutines and Flow combine to implement reactive programming, where coroutines are the threading framework and Flow is the data flow. Whether reactive programming is implemented with RxJava or with Kotlin coroutines and Flow, the purpose is the same: to write concurrent code and handle asynchronous events with elegant, concise, easy-to-read, and easy-to-maintain code. In addition, Android Lifecycle and ViewModel provide extension support for Kotlin coroutines and Flow, which also makes life-cycle management of asynchronous events more convenient.

The next article will explore Kotlin Flow's cold flows and hot flows.
