Kotlin Flow: cold flows and hot flows

This article analyzes the implementation principles behind cold flows and hot flows. The logic is long and involved, especially for the hot flow SharedFlow, whose implementation is fairly abstract and hard to follow. Since the article is long, it is best read in sections following the table of contents: start with the basic concepts and cold flows, then move on to the hot flows SharedFlow and StateFlow.

When reading this article, you can think with the following questions:

  1. What are cold flows and hot flows?
  2. In business development, what can cold flow and hot flow be used for or solve?
  3. What is the difference between cold flow and hot flow?
  4. What is the operating principle of cold flow?
  5. How does SharedFlow manage the data it emits?
  6. How does SharedFlow manage its subscribers?
  7. What is the difference between StateFlow and LiveData?

Technology serves the business: whether cold flows or hot flows, they need to solve practical problems in everyday development, for example:

  • Coroutines and cold flows can replace the RxJava framework for reactive programming; in Kotlin projects, coroutines plus cold flows have advantages over RxJava;
  • The hot flow SharedFlow can be used as an event bus, replacing EventBus;
  • The hot flow StateFlow can be used for state updates, replacing LiveData, and combined with MVI it can replace MVVM.

If there are mistakes in this article, they will be corrected in time. Thanks for reading.

Basic concepts

From the previous article, Exploring Kotlin Flow, we know that Kotlin Flow is Kotlin's data stream, and a data stream needs a provider (producer), intermediaries (intermediate operations), and a consumer:

  • Provider (producer): the source that adds data to the stream;
  • Intermediaries (intermediate operations): can modify the values emitted into the stream, or modify the stream itself;
  • Consumer: the end of the stream that consumes its values.
flow
  .operator1()
  .operator2()
  .operator3()
  .collect(consumer)

To create a data stream, you can use the Kotlin functions flowOf, asFlow and flow {}:

 flowOf(1, 2, 3).map { it * it }.collect {}
 
 (1..3).asFlow().map { it % 2 == 0 }.collect {}
 
 flow<Int> {
     emit(1)
     emit(2)
     emit(3)
 }.map { it * 2 }.collect {}

With the streams created above, the intermediate operations only execute when collect {} is called, which is the same behavior as Kotlin Sequences.
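For example, nothing in the pipeline below runs until collect is reached (a small runnable sketch using println instead of Log):

import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val squares = flow {
        println("producer: emitting")  // not executed until the flow is collected
        emit(1)
        emit(2)
    }.map {
        println("map: $it")            // intermediate operation, also lazy
        it * it
    }
    println("flow created, nothing has run yet")
    squares.collect { println("collect: $it") } // only now does the builder body run
}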

Besides the ways above, you can also create streams with SharedFlow and StateFlow:

 
class TestFlow {
    private val _sharedFlow = MutableSharedFlow<Int>(
        replay = 0,
        extraBufferCapacity = 0,
        onBufferOverflow = BufferOverflow.SUSPEND
    )
    val sharedFlow: SharedFlow<Int> = _sharedFlow

    fun testSharedFlow() {
        MainScope().launch {
            Log.e("Flow", "sharedFlow:emit 1")
            _sharedFlow.emit(1)
            Log.e("Flow", "sharedFlow:emit 2")
            _sharedFlow.emit(2)
        }
    }

    private val _stateFlow = MutableStateFlow<Int>(value = 1)
    val stateFlow: StateFlow<Int> = _stateFlow

    fun testStateFlow() {
        MainScope().launch {
            Log.e("Flow", "stateFlow:value 1")
            _stateFlow.value = 1
        }
    }
}

A stream created with SharedFlow or StateFlow can have zero or more collect {} collectors; it exists on its own and is not terminated by collect {}, toList, toSet or other terminal consumers.

testFlow.testSharedFlow()

testFlow.testStateFlow()

Console output:
Flow                    com.wangjiang.example                E  sharedFlow:emit 1
Flow                    com.wangjiang.example                E  sharedFlow:emit 2
Flow                    com.wangjiang.example                E  stateFlow:value 1

As you can see, even with no collect {} collector, SharedFlow and StateFlow still execute. Now add collect {} collectors and run again:

        lifecycleScope.launch {
            testFlow.sharedFlow.collect {
                Log.e("Flow", "SharedFlow Collect1: value=$it")
            }
        }
        lifecycleScope.launch {
            testFlow.sharedFlow.collect {
                Log.e("Flow", "SharedFlow Collect2: value=$it")
            }
        }
        testFlow.testSharedFlow()

        lifecycleScope.launch {
            testFlow.stateFlow.collect {
                Log.e("Flow", "StateFlow Collect1: value=$it")
            }
        }
        lifecycleScope.launch {
            testFlow.stateFlow.collect {
                Log.e("Flow", "StateFlow Collect2: value=$it")
            }
        }
        testFlow.testStateFlow()
        
Console output:
Flow                    com.wangjiang.example                E  StateFlow Collect1: value=1
Flow                    com.wangjiang.example                E  StateFlow Collect2: value=1

Flow                    com.wangjiang.example                E  sharedFlow:emit 1
Flow                    com.wangjiang.example                E  SharedFlow Collect1: value=1
Flow                    com.wangjiang.example                E  SharedFlow Collect2: value=1
Flow                    com.wangjiang.example                E  sharedFlow:emit 2
Flow                    com.wangjiang.example                E  SharedFlow Collect1: value=2
Flow                    com.wangjiang.example                E  SharedFlow Collect2: value=2

SharedFlow behaves like an event bus: it distributes events to subscribers, sharing the events among them. StateFlow behaves like LiveData: it updates to the latest state and notifies subscribers of the change.

Now cold flows and hot flows can be distinguished simply: a data stream created with flowOf, asFlow, flow {} and the like is a Flow<T>, a cold flow; it cannot exist independently of a collect {} collector, and each such stream needs a collector to form a complete data flow. A data stream created as a SharedFlow<T> or StateFlow<T> is a hot flow; it can exist independently of collectors, and can have zero or more collect {} collectors.

Cold flow

A Flow is a sequence-like cold flow—the code in the flow builder is not run until the flow is collected.

Like a sequence, a Flow needs a terminal operator: it only runs when there is a terminal operation such as collect {}, toList, or toSet:

        lifecycleScope.launch {
            val flow = flow {
                Log.e("Flow", "emit:1")
                emit(1)
                Log.e("Flow", "emit:2")
                emit(2)
            }.map {
                Log.e("Flow", "map:$it")
                it * it
            }
            flow.collect {
                Log.e("Flow", "collect:$it")
            }
        }
Console output:
Flow                    com.wangjiang.example                 E  emit:1
Flow                    com.wangjiang.example                 E  map:1
Flow                    com.wangjiang.example                 E  collect:1
Flow                    com.wangjiang.example                 E  emit:2
Flow                    com.wangjiang.example                 E  map:2
Flow                    com.wangjiang.example                 E  collect:4

When collect {} is called, data production starts: emit emits a value, the intermediate operation map transforms it, and collect consumes it. Throughout, the data flows element by element in chronological order, i.e. emit:1 → map:1 → collect:1, then emit:2 → map:2 → collect:4, not emit:1, emit:2 → map:1, map:2 → collect:1, collect:4.

Let's take a brief look at the execution principle of cold flow from an example:

class TestFlow {
    fun testColdFlow() {
        MainScope().launch {
            flow<Int> { emit(1) }.map { it * it }.collect {
                Log.e("Flow", "testColdFlow", Throwable())
            }
        }
    }
}

Run the testColdFlow method; the console prints the call stack from inside the collect lambda:

 Flow                    com.wangjiang.example                 E  testColdFlow
 
java.lang.Throwable
at com.wangjiang.example.flow.TestFlow$testColdFlow$1$3.emit(TestFlow.kt:46)
at com.wangjiang.example.flow.TestFlow$testColdFlow$1$3.emit(TestFlow.kt:45)
at com.wangjiang.example.flow.TestFlow$testColdFlow$1$invokeSuspend$$inlined$map$1$2.emit(Emitters.kt:224)
at kotlinx.coroutines.flow.internal.SafeCollectorKt$emitFun$1.invoke(SafeCollector.kt:15)
at kotlinx.coroutines.flow.internal.SafeCollectorKt$emitFun$1.invoke(SafeCollector.kt:15)
at kotlinx.coroutines.flow.internal.SafeCollector.emit(SafeCollector.kt:87)
at kotlinx.coroutines.flow.internal.SafeCollector.emit(SafeCollector.kt:66)
at com.wangjiang.example.flow.TestFlow$testColdFlow$1$1.invokeSuspend(TestFlow.kt:45)
at com.wangjiang.example.flow.TestFlow$testColdFlow$1$1.invoke(Unknown Source:14)
at com.wangjiang.example.flow.TestFlow$testColdFlow$1$1.invoke(Unknown Source:4)
at kotlinx.coroutines.flow.SafeFlow.collectSafely(Builders.kt:61)
at kotlinx.coroutines.flow.AbstractFlow.collect(Flow.kt:230)
at com.wangjiang.example.flow.TestFlow$testColdFlow$1$invokeSuspend$$inlined$map$1.collect(SafeCollector.common.kt:113)
at com.wangjiang.example.flow.TestFlow$testColdFlow$1.invokeSuspend(TestFlow.kt:45)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at android.os.Handler.handleCallback(Handler.java:900)
at android.os.Handler.dispatchMessage(Handler.java:103)
at android.os.Looper.loop(Looper.java:219)
at android.app.ActivityThread.main(ActivityThread.java:8668)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:513)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1109)

From the call stack above, the rough order is: collect → AbstractFlow.collect → SafeFlow.collectSafely → map → SafeCollector.emit → emit → TestFlow$testColdFlow$1$3.emit. (The log does not map one-to-one onto the source code; you can inspect the generated Kotlin class files in Android Studio.)

Here we focus on understanding the intermediate operation map transformation:

import kotlinx.coroutines.flow.unsafeTransform as transform

public inline fun <T, R> Flow<T>.map(crossinline transform: suspend (value: T) -> R): Flow<R> = transform { value ->
    return@transform emit(transform(value))
}

internal inline fun <T, R> Flow<T>.unsafeTransform(
    @BuilderInference crossinline transform: suspend FlowCollector<R>.(value: T) -> Unit
): Flow<R> = unsafeFlow { // Note: unsafe flow is used here, because unsafeTransform is only for internal use
    collect { value ->
        // kludge, without it Unit will be returned and TCE won't kick in, KT-28938
        return@collect transform(value)
    }
}

internal inline fun <T> unsafeFlow(@BuilderInference crossinline block: suspend FlowCollector<T>.() -> Unit): Flow<T> {
    return object : Flow<T> {
        override suspend fun collect(collector: FlowCollector<T>) {
            collector.block()
        }
    }
}

From the code above, the map process is: map → transform → unsafeTransform → unsafeFlow { } → collect {} executes the upstream flow → the upstream flow's value is received → return@collect transform(value) → return@transform emit(transform(value)) → the result of the map transformation is emitted to the next flow or to the consumer's collect.

As this process shows, the execution of a cold flow is similar to how a Kotlin Sequence executes.

So the whole cold-flow pipeline can be summarized as: the consumer's collect triggers the intermediate operations; intermediate operations such as filter and map trigger the producer; the producer then emits data, the data passes through the intermediate operations, and the transformed data is finally handed to the consumer. That is the execution principle of a cold flow: it is triggered from the bottom up, and the data then flows from the top down.
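To make that chaining concrete, here is a minimal hand-rolled map-like operator built only on the flow builder (the name myMap is hypothetical; this is a simplified sketch, not the library implementation):

import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.flow.*

// Simplified map-like operator: the returned flow, when collected,
// collects the upstream flow and emits transformed values downstream.
fun <T, R> Flow<T>.myMap(transform: suspend (T) -> R): Flow<R> = flow {
    collect { value ->         // collecting here triggers the upstream producer
        emit(transform(value)) // hand the transformed value downstream
    }
}

fun main() = runBlocking {
    flowOf(1, 2, 3)
        .myMap { it * 10 }
        .collect { println(it) } // 10, 20, 30: collect at the bottom triggers the whole chain
}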

Business scenarios

In business scenarios, Flow<T> is used for reactive programming much like RxJava, for example:

 GlobalScope.launch(Dispatchers.Main) {
            flowOf(bitmap).map { bmp ->
                //run the time-consuming work on a worker thread: save the bitmap to local storage
                saveBitmap(bmp)
            }.flowOn(Dispatchers.IO).collect { bitmapLocalPath ->
                //handle the saved bitmap's local path on the main thread
            }
        }

In Kotlin projects, you can use coroutines and cold flow to replace RxJava for reactive programming.

Summary

A cold flow needs a data producer, zero or more intermediate operations, and a data consumer to form a complete stream. Its execution principle is similar to Kotlin Sequences: when a consumer calls collect or another terminal operation, the stream is triggered from the bottom up, and data then flows from the top down.

Hot flow

Hot flows come in two kinds, SharedFlow and StateFlow, both of which exist independently of their collectors.

SharedFlow

SharedFlow is called a hot flow mainly because it lets all collectors share the values it emits, broadcasting them, and because its instances exist independently of any collectors. To understand SharedFlow, the key is to understand what shared and independent existence mean.

The following analyzes SharedFlow in terms of creation, emission and collection.

create

Create a SharedFlow with the MutableSharedFlow constructor function:

private val _sharedFlow = MutableSharedFlow<Int>(
        replay = 0,
        extraBufferCapacity = 0,
        onBufferOverflow = BufferOverflow.SUSPEND
    )
    val sharedFlow: SharedFlow<Int> = _sharedFlow

Parameter meaning:

  • replay: how many previously emitted values are re-sent to a new subscriber when it subscribes (similar to sticky data);
  • extraBufferCapacity: the number of values cached in addition to replay; while there is room in this buffer, emit does not suspend (if emit is faster than collect, emitted values are buffered);
  • onBufferOverflow: the strategy applied when the buffer is full of values waiting to be delivered (the buffer size is determined by replay plus extraBufferCapacity). The default is BufferOverflow.SUSPEND, but it can also be BufferOverflow.DROP_LATEST or BufferOverflow.DROP_OLDEST (which do what their names say).
public fun <T> MutableSharedFlow(
    replay: Int = 0,
    extraBufferCapacity: Int = 0,
    onBufferOverflow: BufferOverflow = BufferOverflow.SUSPEND
): MutableSharedFlow<T> {
    //..... omitted
    //number of cached values
    val bufferCapacity0 = replay + extraBufferCapacity
    val bufferCapacity = if (bufferCapacity0 < 0) Int.MAX_VALUE else bufferCapacity0
    return SharedFlowImpl(replay, bufferCapacity, onBufferOverflow)
}

internal open class SharedFlowImpl<T>(
    private val replay: Int,
    private val bufferCapacity: Int,
    private val onBufferOverflow: BufferOverflow
) : AbstractSharedFlow<SharedFlowSlot>(), MutableSharedFlow<T>, CancellableFlow<T>, FusibleFlow<T> {
    //..... omitted
}
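Before digging into the implementation, here is a quick sketch of what these parameters mean in practice (a runnable example with arbitrary values; replay = 1 lets a subscriber that arrives late still see the last value):

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.BufferOverflow
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val shared = MutableSharedFlow<Int>(
        replay = 1,              // keep the last value for late subscribers
        extraBufferCapacity = 2, // extra room so emit does not suspend right away
        onBufferOverflow = BufferOverflow.SUSPEND
    )

    shared.emit(1) // no subscribers yet: kept in the replay cache
    shared.emit(2) // replaces 1 in the replay cache, because replay = 1

    val job = launch {
        shared.collect { println("late subscriber got $it") } // prints 2 first
    }
    delay(100)
    shared.emit(3) // delivered to the active subscriber
    delay(100)
    job.cancel()   // SharedFlow.collect never completes on its own
}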

The MutableSharedFlow constructor function returns a SharedFlowImpl instance. Let's look at the relationships between the classes and interfaces around the SharedFlowImpl class:
(Class diagram of SharedFlowImpl and its related classes and interfaces omitted.)
From top to bottom, the responsibilities of each class or interface are:

  • Flow interface: the consumption side of the flow, i.e. the subscriber's collect. It declares public suspend fun collect(collector: FlowCollector<T>); the collector parameter is the collector of a cold or hot flow, so the Flow interface depends on the FlowCollector interface;
  • FlowCollector interface: used for collecting the flow; it can be the terminal operation or an intermediate operation. It declares public suspend fun emit(value: T); the value parameter is the value emitted by the data producer or by an intermediate operation;
  • SharedFlow interface: extends Flow and defines the public val replayCache: List<T> property, a snapshot of the replay cache (the last replay values) handed to new subscribers;
  • MutableSharedFlow interface: extends SharedFlow and FlowCollector, so it can both collect(collector: FlowCollector<T>) and emit(value: T);
  • CancellableFlow interface: extends Flow; an empty marker interface indicating that the Flow, here the SharedFlow, can be cancelled;
  • CancellableFlowImpl class: implements the collect(collector: FlowCollector<T>) method;
  • FusibleFlow interface: cooperates with BufferOverflow and the flowOn operation;
  • AbstractSharedFlow<S : AbstractSharedFlowSlot<*>> abstract class: responsible for subscriber management via AbstractSharedFlowSlot instances; extends the SynchronizedObject class;
  • SynchronizedObject class: the lock object passed to the coroutine library's synchronized(lock: SynchronizedObject, block: () -> T) function;
  • AbstractSharedFlowSlot<SharedFlowImpl<*>> abstract class: declares fun allocateLocked(flow: F): Boolean and fun freeLocked(flow: F): Array<Continuation<Unit>?>, which respectively allocate and free the slot associated with a subscriber;
  • SharedFlowSlot class: extends AbstractSharedFlowSlot, implements the allocateLocked and freeLocked abstract methods, and defines var index = -1L and var cont: Continuation<Unit>? = null, where index is the index in the buffer array of the next value to read and cont is the continuation of a subscriber waiting for new data (it wraps the subscriber);
  • BufferOverflow enum class: the buffer-overflow strategy of the flow. SUSPEND suspends the emitting upstream when the buffer is full; DROP_OLDEST drops the oldest value in the buffer on overflow and adds the new value without suspending; DROP_LATEST drops the value currently being added on overflow (so the buffer content stays unchanged), without suspending;
  • SharedFlowImpl class: the actual SharedFlow implementation; it extends the AbstractSharedFlow<SharedFlowSlot> abstract class and implements the MutableSharedFlow<T>, CancellableFlow<T> and FusibleFlow<T> interfaces.

Summing up, once a SharedFlow is created it provides these capabilities: emit can be used to send data, which involves caching, the buffer-overflow strategy and possibly suspension; collect can be used to subscribe, which involves subscriber management, value retrieval and possibly suspension.

emission

The SharedFlowImpl class implements the emit method of the FlowCollector interface. When emit is called:

  1. The call may suspend if subscribers are currently collecting the SharedFlow and onBufferOverflow = BufferOverflow.SUSPEND;
  2. If no subscribers are currently collecting the SharedFlow, the buffer is not used. If replay != 0, the most recently emitted value is simply stored in the replay cache, replacing the oldest element there; if replay = 0, the most recently emitted value is discarded;
  3. emit is a suspend function; there is a related non-suspending variant: tryEmit;
  4. emit is thread-safe and can be called from concurrent coroutines without external synchronization (see the sketch after this list).
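Point 4 can be checked with a small sketch: several coroutines emit concurrently without any external locking, and a single collector still receives every value (a runnable sketch, not a benchmark; subscriptionCount is used only to make sure the collector is registered before emitting):

import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val shared = MutableSharedFlow<Int>(extraBufferCapacity = 64)
    val received = mutableListOf<Int>()

    val collector = launch { shared.collect { received += it } }
    shared.subscriptionCount.first { it > 0 } // make sure the collector is registered first

    // 10 coroutines emit concurrently; emit is thread-safe, so no external locking is needed
    coroutineScope {
        repeat(10) { i -> launch(Dispatchers.Default) { shared.emit(i) } }
    }
    delay(100)                                  // give the collector time to drain the buffer
    println("received ${received.size} values") // 10
    collector.cancel()                          // SharedFlow.collect never completes on its own
}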

Calls to emit may suspend

First, look at the implementation of emit:

    override suspend fun emit(value: T) {
        if (tryEmit(value)) return // fast-path
        emitSuspend(value)
    }

Whether emit suspends depends mainly on the return value of tryEmit(value): if it returns true, emitSuspend(value) is not executed and emit does not suspend; otherwise emitSuspend(value) runs and emit suspends.

Let's look at one example where emit does not suspend and one where it does:

Case 1: does not suspend, the tryEmit path succeeds

class TestFlow {
    private val _sharedFlow = MutableSharedFlow<Int>(
        replay = 0,
        extraBufferCapacity = 1,
        onBufferOverflow = BufferOverflow.SUSPEND
    )
    val sharedFlow: SharedFlow<Int> = _sharedFlow

    fun testSharedFlow() {
        MainScope().launch {
            Log.e("Flow", "sharedFlow:emit 1")
            _sharedFlow.emit(1)
        }
    }
  }

lifecycleScope.launch {
  testFlow.sharedFlow.collect(object : FlowCollector<Int> {
        override suspend fun emit(value: Int) {
               Log.e("Flow", "SharedFlow Collect: value=$value", Throwable())
        }
   })
}

Console output:
Flow                    com.wangjiang.example                 E  sharedFlow:emit 1
Flow                    com.wangjiang.example                 E  SharedFlow Collect: value=1
java.lang.Throwable
at com.wangjiang.example.fragment.TestFlowFragment$initView$5$1.emit(TestFlowFragment.kt:76)
at com.wangjiang.example.fragment.TestFlowFragment$initView$5$1.emit(TestFlowFragment.kt:174)
at kotlinx.coroutines.flow.SharedFlowImpl.collect$suspendImpl(SharedFlow.kt:383)
at kotlinx.coroutines.flow.SharedFlowImpl$collect$1.invokeSuspend(Unknown Source:15)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:234)
at kotlinx.coroutines.DispatchedTaskKt.resumeUnconfined(DispatchedTask.kt:190)
at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:161)
at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:397)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl(CancellableContinuationImpl.kt:431)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$default(CancellableContinuationImpl.kt:420)
at kotlinx.coroutines.CancellableContinuationImpl.resumeWith(CancellableContinuationImpl.kt:328)
at kotlinx.coroutines.flow.SharedFlowImpl.tryEmit(SharedFlow.kt:400)
at kotlinx.coroutines.flow.SharedFlowImpl.emit$suspendImpl(SharedFlow.kt:405)
at kotlinx.coroutines.flow.SharedFlowImpl.emit(Unknown Source:0)
at com.wangjiang.example.flow.TestFlow$testSharedFlow$1.invokeSuspend(TestFlow.kt:20)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at android.os.Handler.handleCallback(Handler.java:900)
at android.os.Handler.dispatchMessage(Handler.java:103)
at android.os.Looper.loop(Looper.java:219)
at android.app.ActivityThread.main(ActivityThread.java:8668)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:513)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1109)

Now change extraBufferCapacity = 1 in the MutableSharedFlow constructor above to extraBufferCapacity = 0, keeping everything else unchanged:

Case 2: suspends, the emitSuspend path is taken

Console output:
Flow                    com.wangjiang.example                 E  sharedFlow:emit 1
Flow                    com.wangjiang.example                 E  SharedFlow Collect: value=1
java.lang.Throwable
at com.wangjiang.example.fragment.TestFlowFragment$initView$5$1.emit(TestFlowFragment.kt:76)
at com.wangjiang.example.fragment.TestFlowFragment$initView$5$1.emit(TestFlowFragment.kt:174)
at kotlinx.coroutines.flow.SharedFlowImpl.collect$suspendImpl(SharedFlow.kt:383)
at kotlinx.coroutines.flow.SharedFlowImpl$collect$1.invokeSuspend(Unknown Source:15)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:234)
at kotlinx.coroutines.DispatchedTaskKt.resumeUnconfined(DispatchedTask.kt:190)
at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:161)
at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:397)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl(CancellableContinuationImpl.kt:431)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$default(CancellableContinuationImpl.kt:420)
at kotlinx.coroutines.CancellableContinuationImpl.resumeWith(CancellableContinuationImpl.kt:328)
at kotlinx.coroutines.flow.SharedFlowImpl.emitSuspend(SharedFlow.kt:504)
at kotlinx.coroutines.flow.SharedFlowImpl.emit$suspendImpl(SharedFlow.kt:406)
at kotlinx.coroutines.flow.SharedFlowImpl.emit(Unknown Source:0)
at com.wangjiang.example.flow.TestFlow$testSharedFlow$1.invokeSuspend(TestFlow.kt:20)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at android.os.Handler.handleCallback(Handler.java:900)
at android.os.Handler.dispatchMessage(Handler.java:103)
at android.os.Looper.loop(Looper.java:219)
at android.app.ActivityThread.main(ActivityThread.java:8668)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:513)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1109)

The difference between the two logs is the call chain:

  • emit$suspendImpl → tryEmit → CancellableContinuationImpl.resumeWith → DispatchedTaskKt.resume → TestFlowFragment$initView$5$1.emit
  • emit$suspendImpl → emitSuspend → CancellableContinuationImpl.resumeWith → DispatchedTaskKt.resume → TestFlowFragment$initView$5$1.emit

Comparing the two logs: when the onBufferOverflow policy is BufferOverflow.SUSPEND, emit does not suspend if the extraBufferCapacity buffer still has room, otherwise it suspends. So we can conjecture that the values of onBufferOverflow and extraBufferCapacity affect the return value of tryEmit.

  override fun tryEmit(value: T): Boolean {
        var resumes: Array<Continuation<Unit>?> = EMPTY_RESUMES
        val emitted = synchronized(this) {
            if (tryEmitLocked(value)) {
                resumes = findSlotsToResumeLocked(resumes)
                true
            } else {
                false
            }
        }
        for (cont in resumes) cont?.resume(Unit)
        return emitted
    }

    private fun tryEmitLocked(value: T): Boolean {
        // Fast path without collectors -> no buffering
        if (nCollectors == 0) return tryEmitNoCollectorsLocked(value) // always returns true
        // With collectors we'll have to buffer
        // if the buffer is full and the subscribers consume slowly, the value cannot be handed to them directly
        if (bufferSize >= bufferCapacity && minCollectorIndex <= replayIndex) {
            when (onBufferOverflow) {
                BufferOverflow.SUSPEND -> return false // will suspend
                BufferOverflow.DROP_LATEST -> return true // just drop incoming
                BufferOverflow.DROP_OLDEST -> {} // force enqueue & drop oldest instead
            }
        }
        enqueueLocked(value)
        bufferSize++ // value was added to buffer
        // drop oldest from the buffer if it became more than bufferCapacity
        if (bufferSize > bufferCapacity) dropOldestLocked()
        // keep replaySize not larger that needed
        if (replaySize > replay) { // increment replayIndex by one
            updateBufferLocked(replayIndex + 1, minCollectorIndex, bufferEndIndex, queueEndIndex)
        }
        return true
    }

The return value of tryEmit depends on emitted, which in turn depends on the return value of tryEmitLocked. Whether tryEmitLocked returns false depends on:

if (bufferSize >= bufferCapacity && minCollectorIndex <= replayIndex) {
            when (onBufferOverflow) {
                BufferOverflow.SUSPEND -> return false // will suspend
                BufferOverflow.DROP_LATEST -> return true // just drop incoming
                BufferOverflow.DROP_OLDEST -> {} // force enqueue & drop oldest instead
            }
        }

The fields bufferSize, bufferCapacity, minCollectorIndex and replayIndex are all member fields of SharedFlowImpl.

private class SharedFlowImpl<T>(
    private val replay: Int, // how many previously emitted values are replayed to a new subscriber
    private val bufferCapacity: Int, // replay + extraBufferCapacity, the buffer capacity
    private val onBufferOverflow: BufferOverflow // buffer overflow strategy
) : AbstractSharedFlow<SharedFlowSlot>(), MutableSharedFlow<T>, CancellableFlow<T>, FusibleFlow<T> {
    /*
        Logical structure of the buffer

                  buffered values
             /-----------------------\
                          replayCache      queued emitters
                          /----------\/----------------------\
         +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
         |   | 1 | 2 | 3 | 4 | 5 | 6 | E | E | E | E | E | E |   |   |   |
         +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
               ^           ^           ^                      ^
               |           |           |                      |
              head         |      head + bufferSize     head + totalSize
               |           |           |
     index of the slowest  |    index of the fastest
      possible collector   |     possible collector
               |           |
               |     replayIndex == new collector's index
               \---------------------- /
          range of possible minCollectorIndex

          head == minOf(minCollectorIndex, replayIndex) // by definition
          totalSize == bufferSize + queueSize // by definition

       INVARIANTS:
          minCollectorIndex = activeSlots.minOf { it.index } ?: (head + bufferSize)
          replayIndex <= head + bufferSize
     */

    // Stored state
    private var buffer: Array<Any?>? = null // cache array that stores the values passed to emit
    private var replayIndex = 0L // index at which a new subscriber starts reading from the replayCache
    private var minCollectorIndex = 0L // the minimum buffer index among the currently active subscribers
    private var bufferSize = 0 // size of the buffered values region in the cache array
    private var queueSize = 0 // size of the queued emitters region in the cache array

    // Computed state
    private val head: Long get() = minOf(minCollectorIndex, replayIndex) // start position of the cache array
    private val replaySize: Int get() = (head + bufferSize - replayIndex).toInt() // size of the replay part of the cache array
    private val totalSize: Int get() = bufferSize + queueSize // number of values currently cached
    private val bufferEndIndex: Long get() = head + bufferSize // index just past the buffered values, i.e. the start of the queued emitters
    private val queueEndIndex: Long get() = head + bufferSize + queueSize // index just past the queued emitters

From the cache logic structure in SharedFlowImpl above, combined with:

MutableSharedFlow<Int>(
        replay = 0,
        extraBufferCapacity = 1 or 0,
        onBufferOverflow = BufferOverflow.SUSPEND
    )

When extraBufferCapacity = 1 and emit is called, bufferSize = 0, bufferCapacity = 1, minCollectorIndex = 0 and replayIndex = 0, so bufferSize >= bufferCapacity && minCollectorIndex <= replayIndex is false, tryEmitLocked returns true, tryEmit returns true, and emit does not suspend.

When extraBufferCapacity = 0 and emit is called, bufferSize = 0, bufferCapacity = 0, minCollectorIndex = 0 and replayIndex = 0, so bufferSize >= bufferCapacity && minCollectorIndex <= replayIndex is true; since onBufferOverflow is BufferOverflow.SUSPEND, tryEmitLocked returns false and tryEmit returns false, so emitSuspend is executed and emit suspends.

This is why a call to emit may suspend. Satisfying the condition bufferSize >= bufferCapacity && minCollectorIndex <= replayIndex means the buffer has overflowed, and at that point a strategy has to be chosen: BufferOverflow.SUSPEND, BufferOverflow.DROP_LATEST or BufferOverflow.DROP_OLDEST.
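The strategies can be compared with tryEmit, which never suspends. In the sketch below (arbitrary settings: replay = 0, extraBufferCapacity = 1, a collector that has not yet had a chance to consume), DROP_OLDEST keeps accepting values and evicts the oldest one, whereas SUSPEND would make the second tryEmit return false:

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.BufferOverflow
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val dropOldest = MutableSharedFlow<Int>(
        extraBufferCapacity = 1,
        onBufferOverflow = BufferOverflow.DROP_OLDEST
    )
    val job = launch { dropOldest.collect { println("collected $it") } }
    dropOldest.subscriptionCount.first { it > 0 } // the collector is registered and waiting

    // Both calls run before the collector gets a chance to consume (single-threaded runBlocking):
    println(dropOldest.tryEmit(1)) // true: fills the single buffer slot
    println(dropOldest.tryEmit(2)) // true: buffer full, value 1 is evicted instead of suspending
    delay(100)                     // the collector now runs and prints "collected 2" only
    // With onBufferOverflow = BufferOverflow.SUSPEND the second tryEmit would return false,
    // and emit(2) would suspend until the collector caught up.
    job.cancel()
}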

cache area

The values passed to emit are stored in buffer:

private var buffer: Array<Any?>? = null // cache array that stores the values passed to emit

The buffer consists of two regions: buffered values and queued emitters.

buffered values stores the values passed to emit(value). Its size depends on bufferCapacity = replay + extraBufferCapacity, where replay and extraBufferCapacity are the values passed to MutableSharedFlow(replay, extraBufferCapacity, onBufferOverflow), so the buffered values region is itself split into a replay part and an extraBufferCapacity part.

queued emitters stores values passed to emit(value) that have been wrapped into Emitter objects:

private suspend fun emitSuspend(value: T) = suspendCancellableCoroutine<Unit> sc@{ cont ->
        var resumes: Array<Continuation<Unit>?> = EMPTY_RESUMES
        val emitter = synchronized(this) lock@{
            // recheck buffer under lock again (make sure it is really full)
            if (tryEmitLocked(value)) {
                cont.resume(Unit)
                resumes = findSlotsToResumeLocked(resumes)
                return@lock null
            }
            // add suspended emitter to the buffer
            // wrap the value into an Emitter object and store it in the buffer, inside the queued emitters region
            Emitter(this, head + totalSize, value, cont).also {
                enqueueLocked(it)
                queueSize++ // added to queue of waiting emitters
                // synchronous shared flow might rendezvous with waiting emitter
                if (bufferCapacity == 0) resumes = findSlotsToResumeLocked(resumes)
            }
        }
        // outside of the lock: register dispose on cancellation
        emitter?.let { cont.disposeOnCancellation(it) }
        // outside of the lock: resume slots if needed
        for (r in resumes) r?.resume(Unit)
    }
    
 private class Emitter(
        @JvmField val flow: SharedFlowImpl<*>,
        @JvmField var index: Long,
        @JvmField val value: Any?,
        @JvmField val cont: Continuation<Unit>
    ) : DisposableHandle {
        override fun dispose() = flow.cancelEmitter(this)
    }

When emit suspends (the buffered values region of the buffer is full, or its size is 0), the value is wrapped into an Emitter and stored in buffer.

So there are the following paths for storing a value into the buffer:

  1. emit → tryEmit → tryEmitLocked → tryEmitNoCollectorsLocked → enqueueLocked
  2. emit → tryEmit → tryEmitLocked → enqueueLocked
  3. emit → emitSuspend → tryEmitLocked → tryEmitNoCollectorsLocked → enqueueLocked
  4. emit → emitSuspend → tryEmitLocked → enqueueLocked
  5. emit → emitSuspend → Emitter → enqueueLocked
 // enqueues item to buffer array, caller shall increment either bufferSize or queueSize
    private fun enqueueLocked(item: Any?) {
        val curSize = totalSize
        val buffer = when (val curBuffer = buffer) {
           // create a buffer array with an initial size of 2
            null -> growBuffer(null, 0, 2)
            // if the current buffer is full, grow it to twice its current size
            else -> if (curSize >= curBuffer.size) growBuffer(curBuffer, curSize,curBuffer.size * 2) else curBuffer
        }
        buffer.setBufferAt(head + curSize, item)
    }
No subscribers are collecting the current SharedFlow

When no subscriber is currently collecting the SharedFlow, storing the value in the buffer follows one of these paths:

  • emit → tryEmit → tryEmitLocked → tryEmitNoCollectorsLocked → enqueueLocked
  • emit → emitSuspend → tryEmitLocked → tryEmitNoCollectorsLocked → enqueueLocked
private fun tryEmitLocked(value: T): Boolean {
        // Fast path without collectors -> no buffering
        if (nCollectors == 0) return tryEmitNoCollectorsLocked(value) // always returns true
        // ..... omitted
        return true
    }
    
 private fun tryEmitNoCollectorsLocked(value: T): Boolean {
        assert { nCollectors == 0 }
        if (replay == 0) return true // no need to replay, just forget it now
        enqueueLocked(value) // enqueue to replayCache
        bufferSize++ // value was added to buffer
        // drop oldest from the buffer if it became more than replay
        if (bufferSize > replay) dropOldestLocked()
        minCollectorIndex = head + bufferSize // a default value (max allowed)
        return true
    }

In this case, if replay = 0, the buffer is not used at all. Otherwise the value passed to emit(value) is stored in the replayCache part of the buffered values region of the buffer array.

Subscribers are collecting the current SharedFlow

When subscribers are currently collecting the SharedFlow, and replay = 0 and extraBufferCapacity = 0, emission follows this path:

  • emit → emitSuspend → Emitter → enqueueLocked

The value passed to emit(value) is wrapped into an Emitter object and stored in the queued emitters region of the buffer array. (When there are no subscribers, the value passed to emit(value) is simply discarded.)

If replay != 0 or extraBufferCapacity != 0, emission follows one of these paths:

  • emit → tryEmit → tryEmitLocked → enqueueLocked
  • emit → emitSuspend → tryEmitLocked → enqueueLocked
  • emit → emitSuspend → Emitter → enqueueLocked

The value passed to emit(value) is stored either in the buffered values region or in the queued emitters region of the buffer array. When it lands in the buffered values region, the replayCache part is updated as well, and the result is also affected by the overflow strategy onBufferOverflow (BufferOverflow.SUSPEND, BufferOverflow.DROP_LATEST or BufferOverflow.DROP_OLDEST).

collect

From the analysis of emission above, collection means taking values out of buffer: either a plain value read directly from the buffered values region, or an Emitter object taken from the queued emitters region and unwrapped to get its value.

The SharedFlowImpl class implements the collect method of the Flow interface:

 override suspend fun collect(collector: FlowCollector<T>) {
        // allocate a SharedFlowSlot
        val slot = allocateSlot()
        try {
           // if the collector is a SubscribedFlowCollector, first tell it that the subscription starts
            if (collector is SubscribedFlowCollector) collector.onSubscription()
            // the coroutine the subscriber runs in
            val collectorJob = currentCoroutineContext()[Job]
            // infinite loop
            while (true) {
                var newValue: Any?
                // infinite loop
                while (true) {
                   // use the allocated slot to fetch a value from the buffer
                    newValue = tryTakeValue(slot) // attempt no-suspend fast path first
                    // a value was found
                    if (newValue !== NO_VALUE) break
                    // no value found: the subscriber's coroutine is suspended until emit puts new data into the buffer
                    awaitValue(slot) // await signal that the new value is available
                }
                // check that the subscriber's coroutine is still active; if not, a CancellationException is thrown and we jump to finally
                collectorJob?.ensureActive()
                // hand the new value to the subscriber
                collector.emit(newValue as T)
            }
        } finally {
            // when the subscriber is no longer active, free the allocated slot
            freeSlot(slot)
        }
    }

The main steps when a subscriber subscribes are:

  1. Allocate a SharedFlowSlot: val slot = allocateSlot()
  2. Fetch a value from the buffer through the allocated slot: newValue = tryTakeValue(slot). If a value is found, go straight to the next step; otherwise the subscriber's coroutine is suspended until emit puts new data into the buffer: awaitValue(slot)
  3. Check whether the subscriber's coroutine is still active; if not, a CancellationException is thrown and control goes straight to finally: collectorJob?.ensureActive()
  4. Hand the new value to the subscriber: collector.emit(newValue as T)
  5. When the subscriber is no longer active, free the allocated slot: freeSlot(slot)

Let's analyze allocateSlot, tryTakeValue(slot), awaitValue and freeSlot in turn.

allocateSlot

The allocateSlot method is defined in the AbstractSharedFlow abstract class:

    @Suppress("UNCHECKED_CAST")
    protected var slots: Array<S?>? = null // 用于管理给订阅者分配的 slot
        private set
    protected var nCollectors = 0 // 还存活的订阅者数量
        private set
    private var nextIndex = 0 // 分配下一个 slot 对象在 slots 数组中的索引
    private var _subscriptionCount: MutableStateFlow<Int>? = null // 用一个 StateFlow 来记录订阅者数量

    protected fun allocateSlot(): S {
        // Actually create slot under lock
        var subscriptionCount: MutableStateFlow<Int>? = null
        // 加锁
        val slot = synchronized(this) {
            // 获取一个 Array<SharedFlowSlot?> 对象
            val slots = when (val curSlots = slots) {
                // 新创建一个大小为 2 的 Array<SharedFlowSlot?> 
                null -> createSlotArray(2).also { slots = it }
                // 扩容,容量扩大为原来 Array<SharedFlowSlot?>  的 2 倍
                else -> if (nCollectors >= curSlots.size) {
                    curSlots.copyOf(2 * curSlots.size).also { slots = it }
                } else {
                    // 直接使用当前的 Array<SharedFlowSlot?>
                    curSlots
                }
            }
            // 下面为从上面的 slots 数组中获取一个 slot 对象
            var index = nextIndex
            var slot: S
            while (true) {
                slot = slots[index] ?: createSlot().also { slots[index] = it }
                index++
                if (index >= slots.size) index = 0
                // 给 slot 的属性 index 赋值,index 的值指向的缓存区 buffer 中的 index
                if ((slot as AbstractSharedFlowSlot<Any>).allocateLocked(this)) break // break when found and allocated free slot
            }
            nextIndex = index
            // 订阅者加1
            nCollectors++
            subscriptionCount = _subscriptionCount // retrieve under lock if initialized
            slot
        }
        // 订阅数量加 1
        subscriptionCount?.increment(1)
        return slot
    }

From the code above, the main job of this method is to assign a SharedFlowSlot object to the subscriber. The slot's index associates the subscriber with the position in the buffer it will read from next, i.e. it determines which values the subscriber will receive; the slot's cont is used later to suspend the subscriber's coroutine while it waits for a new value to be emitted into the buffer.

About the SharedFlowSlot class:

private class SharedFlowSlot : AbstractSharedFlowSlot<SharedFlowImpl<*>>() {
    @JvmField
    var index = -1L // index into the buffer; -1 means this slot has been freed
    //...... omitted
    override fun allocateLocked(flow: SharedFlowImpl<*>): Boolean {
        if (index >= 0) return false // not free
        index = flow.updateNewCollectorIndexLocked()
        return true
    }
    //...... omitted
}

    internal fun updateNewCollectorIndexLocked(): Long {
        val index = replayIndex
        if (index < minCollectorIndex) minCollectorIndex = index
        return index
    }

The slot assigned to the subscriber has its index variable initialized to replayIndex (index = replayIndex), which means a new subscriber starts reading values from the beginning of the replayCache.

tryTakeValue

The job of tryTakeValue is to read a value from the buffer at the index held by the SharedFlowSlot; that index may point into the buffered values region or into the queued emitters region of the buffer. When a value is successfully taken, the slot's index moves to the next position in the buffer: slot.index = index + 1:

    private fun tryTakeValue(slot: SharedFlowSlot): Any? {
        var resumes: Array<Continuation<Unit>?> = EMPTY_RESUMES
        // take the lock
        val value = synchronized(this) {
            // get the buffer index this slot points to
            val index = tryPeekLocked(slot)
            if (index < 0) {
                // no value available
                NO_VALUE
            } else {
                // remember the slot's current index
                val oldIndex = slot.index
                // read the corresponding value from the buffer at that index
                val newValue = getPeekedValueLockedAt(index)
                // the slot now points to the next position in the buffer, index + 1
                slot.index = index + 1 // points to the next index after peeked one
                // update the buffer positions and gather the continuations that can now be resumed
                resumes = updateCollectorIndexLocked(oldIndex)
                newValue
            }
        }
        for (resume in resumes) resume?.resume(Unit)
        return value
    }

Whether the index falls inside the buffered values region or the queued emitters region of the buffer is decided by the tryPeekLocked method:

// returns -1 if cannot peek value without suspension
    private fun tryPeekLocked(slot: SharedFlowSlot): Long {
        // slot.index starts at replayIndex, i.e. it points into buffered values (see updateNewCollectorIndexLocked above)
        val index = slot.index

        // if index is within the buffered values region, return it directly
        if (index < bufferEndIndex) return index
        // the logic below decides whether a value can be taken from queued emitters

        // index >= bufferEndIndex; if buffered values has capacity greater than 0, no value can be found
        if (bufferCapacity > 0) return -1L

        // here the cache array holds only queued emitters; an Emitter past the head cannot be taken, so no value is found
        // because head = minOf(minCollectorIndex, replayIndex)
        if (index > head) return -1L

        // the queue is empty, no value is found
        if (queueSize == 0) return -1L

        // take the value from within the queued emitters region
        return index
    }

awaitValue

When tryTakeValue returns NO_VALUE, that is, when tryPeekLocked returns -1L and no usable index can be found in the buffer, awaitValue is executed:

    // this is a suspending function
    private suspend fun awaitValue(slot: SharedFlowSlot): Unit = suspendCancellableCoroutine { cont ->
        synchronized(this) lock@{
            // try again to get the buffer index this slot points to
            val index = tryPeekLocked(slot) // recheck under this lock
            if (index < 0) {
                // not found: store cont into the slot, i.e. suspend the subscriber's coroutine
                slot.cont = cont // Ok -- suspending
            } else {
                // found: resume the coroutine, no need to suspend
                cont.resume(Unit) // has value, no need to suspend
                return@lock
            }
            slot.cont = cont // suspend, waiting
        }
    }

The main job of this method is to wrap the subscriber into a Continuation implementation object and suspend the subscriber's coroutine.

slot.cont is an implementation of the Continuation interface:

public interface Continuation<in T> {
    /**
     *  the coroutine this continuation is associated with
     */
    public val context: CoroutineContext

    /**
     *  resumes the associated coroutine, passing a successful or failed result value to it
     */
    public fun resumeWith(result: Result<T>)
}

Its context is the coroutine context of the subscriber's coroutine, so slot.cont holds the Continuation object associated with the subscriber.

freeSlot

freeSlot is the counterpart of allocateSlot. When the subscriber is no longer active, freeSlot is executed:

    protected fun freeSlot(slot: S) {
        // the subscription count is kept in a StateFlow
        var subscriptionCount: MutableStateFlow<Int>? = null
        // take the lock
        val resumes = synchronized(this) {
            // one less subscriber
            nCollectors--
            subscriptionCount = _subscriptionCount
            // if there are no subscribers left, the next slot allocation starts again at index 0
            if (nCollectors == 0) nextIndex = 0
            // actually release the slot object
            (slot as AbstractSharedFlowSlot<Any>).freeLocked(this)
        }
        /*
           Resume suspended coroutines.
           This can happens when the subscriber that was freed was a slow one and was holding up buffer.
           When this subscriber was freed, previously queued emitted can now wake up and are resumed here.
        */
        for (cont in resumes) cont?.resume(Unit)
        // decrement the subscription count by 1
        subscriptionCount?.increment(-1)
    }

The main job of this method: decrement the recorded number of subscribers by 1 and reset the slot's index and cont, so that index no longer points into the buffer and cont is no longer associated with the subscriber's coroutine.

private class SharedFlowSlot : AbstractSharedFlowSlot<SharedFlowImpl<*>>() {
    @JvmField
    var index = -1L // index into the buffer; -1 means this slot has been freed
    //...... omitted
    @JvmField
    var cont: Continuation<Unit>? = null // holds the continuation of a subscriber that is waiting for a new value

    //...... omitted
    override fun freeLocked(flow: SharedFlowImpl<*>): Array<Continuation<Unit>?> {
        assert { index >= 0 }
        val oldIndex = index
        index = -1L
        cont = null // cleanup continuation reference
        return flow.updateCollectorIndexLocked(oldIndex)
    }
}

The flow.updateCollectorIndexLocked(oldIndex) call inside freeLocked updates the positions of the cache array.

This concludes the analysis of SharedFlow's creation, emission and collection, and gives a general picture of its two characteristics: sharing and independent existence.

Business scenarios

Knowing how SharedFlow is created, emits and collects, and given that it is shared and exists independently of collectors, it can be used as an event bus in business code, much like the EventBus library. Below is a simple EventBus implemented with SharedFlow:

Define the event bus:

object EventBus {
    private val events = ConcurrentHashMap<String, MutableSharedFlow<Event>>()

    private fun getOrPutEventFlow(eventName: String): MutableSharedFlow<Event> {
        return events[eventName] ?: MutableSharedFlow<Event>().also { events[eventName] = it }
    }

    fun getEventFlow(event: Class<Event>): SharedFlow<Event> {
        return getOrPutEventFlow(event.simpleName).asSharedFlow()
    }

    suspend fun produceEvent(event: Event) {
        val eventName = event::class.java.simpleName
        getOrPutEventFlow(eventName).emit(event)
    }

    fun postEvent(event: Event, delay: Long = 0, scope: CoroutineScope = MainScope()) {
        scope.launch {
            delay(delay)
            produceEvent(event)
        }
    }
}

@Keep
open class Event(val value: Int) {
}

Event emission and subscription:

        lifecycleScope.launch {
            EventBus.getEventFlow(Event::class.java).collect {
                Log.e("Flow", "EventBus Collect: value=${it.value}")
            }
        }
        EventBus.postEvent(Event(1), 0, lifecycleScope)
        EventBus.postEvent(Event(2), 0)

Console output:
Flow                    com.example.wangjaing                E  EventBus Collect: value=1
Flow                    com.example.wangjaing                E  EventBus Collect: value=2

Using SharedFlow as an event bus has the following advantages:

  1. Events can be posted with a delay
  2. Sticky events can be defined (see the sketch below)
  3. Events can be made aware of the Activity or Fragment lifecycle
  4. Events are ordered
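For example, a sticky variant (advantage 2) only needs a non-zero replay, and lifecycle awareness (advantage 3) comes from collecting inside repeatOnLifecycle. Below is a sketch built on the EventBus idea above; StickyEventBus and getStickyEventFlow are hypothetical names:

import java.util.concurrent.ConcurrentHashMap
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.SharedFlow
import kotlinx.coroutines.flow.asSharedFlow

// Hypothetical sticky variant: replay = 1 keeps the last event for late subscribers.
object StickyEventBus {
    private val events = ConcurrentHashMap<String, MutableSharedFlow<Event>>()

    private fun getOrPut(name: String): MutableSharedFlow<Event> =
        events.getOrPut(name) { MutableSharedFlow(replay = 1) }

    fun getStickyEventFlow(clazz: Class<out Event>): SharedFlow<Event> =
        getOrPut(clazz.simpleName).asSharedFlow()

    suspend fun postSticky(event: Event) = getOrPut(event::class.java.simpleName).emit(event)
}

// Lifecycle-aware subscription: collection stops in STOPPED and restarts in STARTED.
// lifecycleScope.launch {
//     repeatOnLifecycle(Lifecycle.State.STARTED) {
//         StickyEventBus.getStickyEventFlow(Event::class.java).collect { /* handle it */ }
//     }
// }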

Summary

The hot flow SharedFlow exists as soon as it is created: the producer can emit data even when no consumer is collecting. Emitted data is cached, and both new and old consumers can receive it, which is how the data is shared.

Emission is affected by the MutableSharedFlow constructor parameters replay, extraBufferCapacity and onBufferOverflow, which determine whether an emit call suspends. Emitted data is managed in a buffer array split into a buffered values region and a queued emitters region. replay and extraBufferCapacity determine the size of the buffered values region; when that region is full and overflows, it is adjusted according to the overflow strategy onBufferOverflow. When replay = 0 and extraBufferCapacity = 0, or when replay != 0 or extraBufferCapacity != 0 but the buffered values region is full, the emitted value is wrapped into an Emitter and stored in the queued emitters region. In addition, the number of subscribers determines whether an emitted value is stored in the buffer or discarded. Finally, the cached data is shared with all subscribers.

Collection uses the slots: Array<SharedFlowSlot?> array to manage subscribers; each slot corresponds to one subscriber. slot.index associates the subscriber with the position in the buffer it will read next, and slot.cont associates the subscriber's coroutine with the SharedFlow. If a value can be read from the buffer at slot.index, it is handed to the subscriber directly; otherwise the subscriber is wrapped into a Continuation stored in slot.cont and its coroutine is suspended, to be resumed and given the value once the buffer has one. When the subscriber's coroutine is no longer active, its slot is released, i.e. slot.index and slot.cont are reset and the positions of the cache array are adjusted.

StateFlow

StateFlow builds on the same machinery as SharedFlow (the StateFlow interface extends SharedFlow), so it can be seen as a specialized SharedFlow.

public interface StateFlow<out T> : SharedFlow<T> {
    /**
     * The current value of this state flow.
     */
    public val value: T
}

StateFlow is also a hot flow: all collectors share the value it emits, but that value is always the latest one, and a StateFlow instance also exists independently of its collectors.

The following looks at StateFlow's creation, emission and collection. The principles are similar to SharedFlow's, so the analysis here is brief.

create

Create a StateFlow with the MutableStateFlow constructor function:

public fun <T> MutableStateFlow(value: T): MutableStateFlow<T> = StateFlowImpl(value ?: NULL)

private class StateFlowImpl<T>(
    initialState: Any // T | NULL
) : AbstractSharedFlow<StateFlowSlot>(), MutableStateFlow<T>, CancellableFlow<T>, FusibleFlow<T> {
   private val _state = atomic(initialState)
   //...... omitted
}

StateFlow must have an initial value, which is cached in an atomic holder: _state = atomic(initialState). If the value is never updated, a subscriber receives this initial value when it subscribes.
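A quick sketch of that behavior (arbitrary values): a subscriber that arrives before any update immediately receives the initial value:

import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val state = MutableStateFlow(0)          // the initial value is mandatory
    val job = launch {
        state.collect { println("got $it") } // prints "got 0" immediately, before any update
    }
    delay(50)
    state.value = 1                          // prints "got 1"
    delay(50)
    job.cancel()                             // StateFlow.collect never completes on its own
}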

emission

The StateFlowImpl class implements both the emit and tryEmit methods:

private class StateFlowImpl<T>(
    initialState: Any // T | NULL
) : AbstractSharedFlow<StateFlowSlot>(), MutableStateFlow<T>, CancellableFlow<T>, FusibleFlow<T> {
    // atomic holder that caches the current value
    private val _state = atomic(initialState) 
    private var sequence = 0 

    
    public override var value: T
        get() = NULL.unbox(_state.value)
        // on emission, update the current value and cache it in _state
        set(value) { updateState(null, value ?: NULL) }

    override fun compareAndSet(expect: T, update: T): Boolean =
        updateState(expect ?: NULL, update ?: NULL)
        
    // update the value of the atomic holder _state
    private fun updateState(expectedState: Any?, newState: Any): Boolean {
        var curSequence = 0
        // the slots associated with subscribers
        var curSlots: Array<StateFlowSlot?>? = this.slots 
        synchronized(this) {
            val oldState = _state.value
            if (expectedState != null && oldState != expectedState) return false // CAS operation
            if (oldState == newState) return true // if the current value equals the new value, no update is needed and subscribers are not notified
            // write the new value into the cache
            _state.value = newState
            curSequence = sequence
            if (curSequence and 1 == 0) { 
                curSequence++ // 
                sequence = curSequence
            } else {
                
                sequence = curSequence + 2 // change sequence to notify, keep it odd
                return true 
            }
            curSlots = slots // read current reference to collectors under lock
        }
       //...... omitted
    }

   //...... omitted
   // non-suspending emission
    override fun tryEmit(value: T): Boolean {
        this.value = value
        return true
    }
    // suspending emission
    override suspend fun emit(value: T) {
        this.value = value
    }

}

Both methods update the value stored in _state. If the current value equals the new value, nothing is updated; otherwise the value is updated and handed to the subscribers.
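A small sketch of the equality check in updateState: setting the same value again does not notify subscribers (arbitrary values):

import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val state = MutableStateFlow(1)
    val job = launch { state.collect { println("collected $it") } } // "collected 1"
    delay(50)
    state.value = 1 // equal to the current value: no update, the subscriber is not notified
    state.value = 2 // new value: the subscriber prints "collected 2"
    delay(50)
    job.cancel()
}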

collect

Following the emission analysis above, collection means reading the value from _state.

The StateFlowImpl class implements the collect method of the Flow interface:

    override suspend fun collect(collector: FlowCollector<T>): Nothing {
        // allocate a StateFlowSlot
        val slot = allocateSlot()
        try {
            // if the collector is a SubscribedFlowCollector, tell it that the subscription starts
            if (collector is SubscribedFlowCollector) collector.onSubscription()
            // the current coroutine
            val collectorJob = currentCoroutineContext()[Job]
            // remember the previously cached value
            var oldState: Any? = null
            // infinite loop
            while (true) {
                // Here the coroutine could have waited for a while to be dispatched,
                // read the currently cached value
                val newState = _state.value
                // check that the subscriber's coroutine is still active; if not, a CancellationException is thrown and we jump to finally
                collectorJob?.ensureActive()
                // if there is no previous value, or the new value differs from the previous one, hand the new value to the subscriber
                if (oldState == null || oldState != newState) {
                    collector.emit(NULL.unbox(newState))
                    // update the remembered value
                    oldState = newState
                }
                // check whether the subscriber needs to suspend
                if (!slot.takePending()) {
                    // the subscriber's coroutine is suspended until a new value is emitted into the cache
                    slot.awaitPending()
                }
            }
        } finally {
           // when the subscriber is no longer active, free the allocated slot
            freeSlot(slot)
        }
    }

The main steps when a subscriber subscribes are:

  1. Allocate a StateFlowSlot: val slot = allocateSlot()
  2. Read the current value from the _state cache: val newState = _state.value
  3. Check whether the subscriber's coroutine is still active; if not, a CancellationException is thrown and control goes straight to finally: collectorJob?.ensureActive()
  4. Hand the new value to the subscriber: collector.emit(NULL.unbox(newState))
  5. When the subscriber is no longer active, free the allocated slot: freeSlot(slot)

Business scenarios

Knowing how StateFlow is created, emits and collects, and that it shares the latest state, it can be used for state updates in business code, replacing LiveData.

For example, fetch list data from the server and display it in the UI. The following uses MVI (Model-View-Intent) to do it:

Data Layer:

class FlowRepository private constructor() {

    companion object {
        @JvmStatic
        fun newInstance(): FlowRepository = FlowRepository()
    }

    fun requestList(): Flow<List<ItemBean>> {
        val call = ServiceGenerator
            .createService(FlowListApi::class.java)
            .getList()
        return flow {
            emit(call.execute())
        }.flowOn(Dispatchers.IO).filter { it.isSuccessful }
            .map {
                it.body()?.data
            }
            .filterNotNull().catch {
                emit(emptyList())
            }.onEmpty {
                emit(emptyList())
            }
    }
}

ViewModel:

class ListViewModel : ViewModel() {

    private val repository: FlowRepository = FlowRepository.newInstance()
    
    private val _uiIntent: Channel<FlowViewIntent> = Channel()
    private val uiIntent: Flow<FlowViewIntent> = _uiIntent.receiveAsFlow()
    
    private val _uiState: MutableStateFlow<FlowViewState<List<ItemBean>>> =
        MutableStateFlow(FlowViewState.Init())
    val uiState: StateFlow<FlowViewState<List<ItemBean>>> = _uiState

    fun sendUiIntent(intent: FlowViewIntent) {
        viewModelScope.launch {
            _uiIntent.send(intent)
        }
    }

    init {
        viewModelScope.launch {
            uiIntent.collect {
                handleIntent(it)
            }
        }
    }

    private fun handleIntent(intent: FlowViewIntent) {
        viewModelScope.launch {
            repository.requestList().collect {
                if (it.isEmpty()) {
                    _uiState.emit(FlowViewState.Failure(0, "data is invalid"))
                } else {
                    _uiState.emit(FlowViewState.Success(it))
                }
            }
        }
    }
}


class FlowViewIntent

sealed class FlowViewState<T> {
    @Keep
    class Init<T> : FlowViewState<T>()

    @Keep
    class Success<T>(val result: T) : FlowViewState<T>()

    @Keep
    class Failure<T>(val code: Int, val msg: String) : FlowViewState<T>()
}

UI:

 private var isRequestingList = false
 private lateinit var listViewModel: ListViewModel

 private fun initData() {
        listViewModel = ViewModelProvider(this)[ListViewModel::class.java]
        lifecycleScope.launchWhenStarted {
            listViewModel.uiState.collect {
                when (it) {
                    is FlowViewState.Success -> {
                        showList(it.result)
                    }
                    is FlowViewState.Failure -> {
                        showListIfFail()
                    }
                    else -> {}
                }
            }
        }
        requestList()
    } 

  private fun requestList() {
        if (!isRequestingList) {
            isRequestingList = true
            listViewModel.sendUiIntent( FlowViewIntent() )
        }
    }

Replacing LiveData with StateFlow and MVVM with MVI brings the following advantages:

  1. A single source of truth: MVVM may accumulate a large number of LiveData objects, making data interaction and parallel updates hard to control; with a UIState backed by StateFlow, the only data source is the UIState;
  2. Unidirectional data flow: in MVVM, data flows both ways between UI ⇆ ViewModel, while in MVI data flows only from Data Layer → ViewModel → UI, in one direction.

Using StateFlow instead of LiveData for state updates has the following differences:

  • StateFlow requires an initial state to be passed to its constructor; LiveData does not.
  • LiveData.observe() automatically unregisters the observer when the View enters the STOPPED state, whereas collecting from StateFlow (or any other flow) does not stop automatically. To get the same behavior, collect the flow inside a Lifecycle.repeatOnLifecycle block, as sketched below.
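The UI snippet above uses launchWhenStarted; the equivalent pattern with repeatOnLifecycle would look roughly like the sketch below (it assumes the UI code lives in a Fragment and that the androidx.lifecycle runtime-ktx artifact is on the classpath):

// A sketch (hypothetical Fragment context); requires androidx.lifecycle:lifecycle-runtime-ktx.
// import androidx.lifecycle.Lifecycle
// import androidx.lifecycle.lifecycleScope
// import androidx.lifecycle.repeatOnLifecycle

private fun observeUiState() {
    viewLifecycleOwner.lifecycleScope.launch {
        viewLifecycleOwner.repeatOnLifecycle(Lifecycle.State.STARTED) {
            // Collection starts in STARTED and is cancelled in STOPPED, mirroring LiveData.observe()
            listViewModel.uiState.collect {
                when (it) {
                    is FlowViewState.Success -> showList(it.result)
                    is FlowViewState.Failure -> showListIfFail()
                    else -> {}
                }
            }
        }
    }
}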

Summary

The hot flow StateFlow is built on the same machinery as SharedFlow, so it also exists independently and shares its value. But StateFlow only caches the latest value, so new and old subscribers only receive the most recently updated value, and if a newly emitted value equals the current one, subscribers are not notified.


Origin blog.csdn.net/wangjiang_qianmo/article/details/129505497