December 7, 2021

Kotlin Native. New Memory Management Model

On August 31, JetBrains presented a new memory management model for Kotlin/Native. The focus is on thread safety, safe context and data sharing between threads, fixing memory leaks, and working without special annotations. There are also several Coroutines improvements: it is now possible to switch between contexts without freezing. All these updates are supported by Ktor in its new versions.

Let’s summarize what’s new in the suggested memory model:

  1. Multithreading without freeze(). It was claimed that we can remove all freeze() calls from our code, even from background workers, and switch between contexts and threads without any blockers or problems.
  2. AtomicReference/FreezableAtomicReference no longer produce leaks.
  3. No need for @SharedImmutable when using global constants.
  4. The producer of Worker.execute no longer has to return an isolated object graph.
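Points 1 and 4 above can be sketched as follows. This is a minimal Kotlin/Native sketch, not code from the sample: the shared `StringBuilder` and the worker setup are illustrative.

```kotlin
import kotlin.native.concurrent.TransferMode
import kotlin.native.concurrent.Worker

val worker = Worker.start()

fun appendOnWorker() {
    // Under the new MM this object is NOT frozen when it crosses threads,
    // and the producer lambda no longer has to return an isolated graph.
    val shared = StringBuilder("hello")
    worker.execute(TransferMode.SAFE, { shared }) { sb ->
        sb.append(" from worker") // would throw InvalidMutabilityException under the old MM
    }.result
}
```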

Also, there are several nuances and side effects:

  1. We still have to use freeze() with AtomicReference. To avoid it, use FreezableAtomicReference instead, or AtomicRef from atomicfu.
  2. All global constants are now initialized lazily. In the previous version, all globals were initialized immediately at start; use @EagerInitialization to keep that behaviour.
  3. There is no guarantee that a suspend function will resume its completion handler on the main thread, so on iOS we need to wrap it in DispatchQueue.main.async { … }.
  4. deinit for Swift/ObjC objects can be called on another thread.
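Nuance 2 can be sketched like this. `computeDefaults()` is a hypothetical initializer, and `@EagerInitialization` is a Kotlin/Native-only annotation:

```kotlin
// New MM: top-level properties are initialized lazily, on first access.
val lazyGlobal = computeDefaults()   // hypothetical; now runs on first read

// Opt back into the old eager behaviour for a specific global:
@EagerInitialization
val eagerGlobal = computeDefaults()  // initialized at program start, as before
```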

Speaking of Coroutines, there are also some improvements and changes. You can check them in the special version branch with new memory model support:

  1. We can work with Workers, Channels, and Flows without freeze(). By contrast, in the native-mt version the whole contents of a Channel could be unexpectedly frozen.
  2. Dispatchers.Default is now backed by the global dispatch queue.
  3. newSingleThreadContext and newFixedThreadPoolContext can be used to create a new coroutine dispatcher backed by a pool of one or several Workers.
  4. Dispatchers.Main is bound to the main dispatch queue on Darwin and to a special Worker on other Native platforms. It is not recommended for unit testing, because nothing processes the main thread queue there.
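A minimal sketch of points 2–3, assuming a `CoroutineScope` is available (the dispatcher names are illustrative):

```kotlin
import kotlinx.coroutines.*

// Dedicated dispatchers backed by Kotlin/Native Workers:
val dbDispatcher = newSingleThreadContext("DbWorker")     // a single Worker
val poolDispatcher = newFixedThreadPoolContext(4, "Pool") // a pool of 4 Workers

fun demo(scope: CoroutineScope) {
    scope.launch(dbDispatcher) {
        // runs on its own Worker; no freeze() required under the new MM
    }
}
```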

So, there are a bunch of improvements, changes, and nuances, along with some performance bugs and problems. All of them are known and described in the documentation. At the moment it is just a preview, not even an Alpha release, and the JetBrains team is still improving and developing it.

Well, let's apply all the new features to our code sample.
First, we install the correct versions of Kotlin and Coroutines:



#Common versions
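The `#Common versions` heading above presumably belonged to a `gradle.properties` snippet lost in extraction. A minimal sketch: the Kotlin version is the one mentioned later in the article, and the coroutines entry is a placeholder that should be checked against the current preview announcements.

```properties
#Common versions
kotlin.version=1.6.0-M1-139
# a kotlinx-coroutines build from the new-MM support branch
coroutines.version=<new-mm preview build>
```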

Add the correct Coroutines dependency:


val commonMain by getting {
    dependencies {
        // coroutines build with new memory model support
        implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:$coroutinesVersion")
    }
}

Important! Install Xcode 12.5 or newer: it is the minimum version compatible with 1.6.0-M1-139. If you have more than one version installed, switch to the correct one with xcode-select. Then close the Kotlin Multiplatform project and call Invalidate Caches / Restart.

Now we are going to remove all freeze() blocks from non-coroutine code:

internal fun background(block: () -> Unit) {
    val future = worker.execute(TransferMode.SAFE, { block }) { it() }
}

//Main wrapper
internal fun main(block: () -> Unit) {
    dispatch_async(dispatch_get_main_queue()) {
        block()
    }
}

Remove all freeze() calls from the parameters we use with NSURLSession. Remember, we are dealing with the native network client:

fun request(request: Request, completion: (Response) -> Unit) {
    this.completion = completion
    val responseReader = ResponseReader().apply { this.responseListener = this@HttpEngine }
    val urlSession = NSURLSession.sessionWithConfiguration(
        NSURLSessionConfiguration.defaultSessionConfiguration,
        responseReader,
        delegateQueue = NSOperationQueue.currentQueue()
    )
    val urlRequest = NSMutableURLRequest(NSURL.URLWithString(request.url)!!).apply {
        // configure HTTP method, headers and body here
    }
    val task = urlSession.dataTaskWithRequest(urlRequest)
    task.resume()
}


Also, we need to switch from AtomicReference to FreezableAtomicReference:

// Before: AtomicReference requires frozen values
internal fun <T> T.atomic(): AtomicReference<T> {
    return AtomicReference(this.share())
}

// After: FreezableAtomicReference works without freezing
internal fun <T> T.atomic(): FreezableAtomicReference<T> {
    return FreezableAtomicReference(this)
}

Apply the changes to the code:

private fun updateChunks(data: NSData) {
    var newValue = ByteArray(0)
    newValue += chunks.value
    newValue += data.toByteArray()
    chunks.value = newValue // no .share() needed anymore
}

Our code is clean and fresh and our app is flying, even though the GC still doesn't work perfectly.
Now let’s tweak our Coroutine sample:

val uiDispatcher: CoroutineContext = Dispatchers.Main
val ioDispatcher: CoroutineContext = Dispatchers.Default

We are going to use the standard Dispatchers that are available by default. To confirm the global queue is used, we print information about the coroutine context from ioDispatcher:
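A minimal way to produce that check, assuming the `mainScope` from the sample:

```kotlin
mainScope.launch(ioDispatcher) {
    // prints the Job and the dispatcher backing this context,
    // e.g. a DarwinGlobalQueueDispatcher on iOS
    println(coroutineContext)
}
```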

StandaloneCoroutine{Active}@26dbcd0, DarwinGlobalQueueDispatcher@28ea470

Now we remove all freeze() calls from Flows and Channels:

class FlowResponseReader : NSObject(),
    NSURLSessionDataDelegateProtocol {
    private var chunksFlow = MutableStateFlow(ByteArray(0))
    private var rawResponse = CompletableDeferred<Response>()

    suspend fun awaitResponse(): Response = coroutineScope {
        var chunks = ByteArray(0)
        val collector = chunksFlow
            .onEach { chunks += it }
            .launchIn(this)
        val response = rawResponse.await()
        collector.cancel()
        response.content = chunks.string()
        response
    }

    private fun updateChunks(data: NSData) {
        val bytes = data.toByteArray()
        chunksFlow.value += bytes
    }
}

It works nicely and fast. Do not forget to return the answer on the main thread:

actual override suspend fun request(request: Request): Response {
    val response = engine.request(request)
    return withContext(uiDispatcher) { response }
}

Important! To prevent memory leaks on the iOS side, it is useful to wrap blocks that create a lot of Swift/ObjC objects in autoreleasepool.
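For example (a sketch; `handleData` and `processResponse` are hypothetical helpers, not from the sample):

```kotlin
import kotlinx.cinterop.autoreleasepool
import platform.Foundation.NSData

fun handleData(data: NSData) {
    autoreleasepool {
        // autoreleased ObjC temporaries created here are drained
        // when the block exits instead of accumulating
        processResponse(data) // hypothetical ObjC-heavy work
    }
}
```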

Let's check some cases. We are going to make a request from the MainScope and specify another background dispatcher with newSingleThreadContext:

val task = urlSession.dataTaskWithRequest(urlRequest)
mainScope.launch(newSingleThreadContext("MyOwnThread")) {
    task.resume()
    println(coroutineContext)
}
[StandaloneCoroutine{Active}@384d2a0, WorkerDispatcher@384d630]

It works with no trouble. The new memory management model looks like it will be a great solution for developers and will simplify our work.

But! There can be problems with libraries that don't support the new MM yet; sometimes you may get an InvalidMutabilityException or FreezingException.
To deal with them on Kotlin 1.6.0-M1 or newer, we have to disable the built-in freezing:


# gradle.properties
kotlin.native.binary.freezing=disabled

// or in build.gradle.kts
kotlin.targets.withType(KotlinNativeTarget::class.java) {
    binaries.all {
        binaryOptions["freezing"] = "disabled"
    }
}

Read more here: https://github.com/JetBrains/kotlin/blob/master/kotlin-native/NEW_MM.md

Some pieces of the sample: