Overview
Locks are essential in iOS development. In this article we explore the implementation principle of the commonly used @synchronized.
Exploration goals
When we use @synchronized, the basic usage is:
@synchronized (self) {
// your code here
}
Here we see that @synchronized takes a parameter followed by a code block. That is its basic usage. Next we will explore a few questions:

- What parameters should be passed, and what happens if you pass nil?
- What does the code block actually turn into?
- What is the implementation principle of @synchronized?
- Why can it be nested?
A preliminary exploration of @synchronized
There is no direct implementation of @synchronized visible in the runtime source, so we cannot view the underlying implementation that way. Instead, we can write a @synchronized block in a main.m file and rewrite it to C++ with xcrun (clang -rewrite-objc) to see what the compiler generates.

In the compiled .cpp file, in addition to the system code, @synchronized produces a small wrapper around our block. In the body we simply wrote an NSLog call, and around it we can see a _SYNC_EXIT structure in which a constructor and destructor are defined, along with an id-typed member named sync_exit that receives the parameter.
After stripping out the incidental code, the basic flow is as follows:

- _sync_obj: receives the parameter passed to @synchronized
- objc_sync_enter(_sync_obj): called before the block body runs
- _sync_exit(_sync_obj): a stack object whose destructor fires when the block ends; destroying it is equivalent to calling objc_sync_exit(_sync_obj)

The whole process is not complicated. Next we study objc_sync_enter(_sync_obj).
If the incoming obj exists, the lock bookkeeping is performed on it; if obj is nil then, as the source comment puts it, "@synchronized(nil) does nothing", so no processing occurs at all. objc_sync_exit(id obj) handles its argument in a similar way.
Both functions perform an id2data() lookup on obj. objc_sync_enter then calls mutex.lock() on the resulting data, while objc_sync_exit calls mutex.tryUnlock() on it.
Analysis of SyncData

A brief look at SyncData:
typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData;        // clearly a singly linked list structure
    DisguisedPtr<objc_object> object; // wrapper around the associated object
    int32_t threadCount;              // number of THREADS using this block
    recursive_mutex_t mutex;          // recursive lock; reentrant on one thread, not across threads
} SyncData;
First, look at the core function, id2data(). The source consists mainly of three parts.
#if SUPPORT_DIRECT_THREAD_KEYS
SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
if (data) {......}
Here SUPPORT_DIRECT_THREAD_KEYS uses TLS (Thread Local Storage), a private per-thread space that the operating system provides to each thread, usually with limited capacity.
SyncCache *cache = fetch_cache(NO);
if (cache) {......}
The second part searches in the cache, with logic similar to the first path. In other words, if TLS is supported the first path is taken; otherwise the cache path is used.
// the remaining part
{......}
done:
......
In the macro definitions of LOCK_FOR_OBJ and LIST_FOR_OBJ there is an sDataLists:

#define LOCK_FOR_OBJ(obj) sDataLists[obj].lock
#define LIST_FOR_OBJ(obj) sDataLists[obj].data
static StripedMap<SyncList> sDataLists; // global hash table
Here SyncList is a structure holding a SyncData pointer and a lock:
struct SyncList {
SyncData *data;
spinlock_t lock;
constexpr SyncList() : data(nil), lock(fork_unsafe_lock) { }
};
Now look at the structure of sDataLists. Since it is a hash structure, entries are not stored in order, and the first value is not necessarily at position 0.
(StripedMap<SyncList>) $0 = {
array = {
[0] = {
value = {
data = nil
lock = {
mLock = (_os_unfair_lock_opaque = 0)
}
}
},
......
[63] = {
value = {
data = nil
lock = {
mLock = (_os_unfair_lock_opaque = 0)
}
}
},
}
}
In SyncList, data is the SyncData. So when we use @synchronized in nested form with self as the parameter each time, how is everything stored? This is why SyncData is a linked-list structure: the different operations under the same self can form a list, so all of them can be found through that same self.
Walkthrough
static SyncData* id2data(id object, enum usage why)
{
spinlock_t *lockp = &LOCK_FOR_OBJ(object);
SyncData **listp = &LIST_FOR_OBJ(object);
SyncData* result = NULL;
// 1. If TLS is supported, this path is taken; on the first entry, data does not exist yet
// 7. When the second, nested level enters, data exists and TLS is supported, so this code runs
#if SUPPORT_DIRECT_THREAD_KEYS
// Check per-thread single-entry fast cache for matching object
bool fastCacheOccupied = NO;
SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
if (data) {
fastCacheOccupied = YES;
if (data->object == object) {
// Found a match in fast cache.
uintptr_t lockCount;
result = data;
// 8. Read how many times the object has been locked; if the thread count or lock count is <= 0, the fast cache state is corrupt
lockCount = (uintptr_t)tls_get_direct(SYNC_COUNT_DIRECT_KEY);
if (result->threadCount <= 0 || lockCount <= 0) {
_objc_fatal("id2data fastcache is buggy");
}
switch(why) {
// 9. For ACQUIRE, increment the object's lock count and store it back
case ACQUIRE: {
lockCount++;
tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
break;
}
// 10. For RELEASE, decrement the object's lock count and store it back
case RELEASE:
lockCount--;
tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
// 11. When the lock count reaches zero, remove the entry from the fast cache and update result's thread count
if (lockCount == 0) {
// remove from fast cache
tls_set_direct(SYNC_DATA_DIRECT_KEY, NULL);
// atomic because may collide with concurrent ACQUIRE
// threadCount is decremented atomically, which also shows that @synchronized works across multiple threads
OSAtomicDecrement32Barrier(&result->threadCount);
}
break;
case CHECK:
// do nothing
break;
}
return result;
}
}
#endif
// 2. If TLS is not supported, this path is taken; on the first entry the cache does not exist yet
// Check per-thread cache of already-owned locks for matching object
SyncCache *cache = fetch_cache(NO);
if (cache) {
unsigned int i;
for (i = 0; i < cache->used; i++) {
SyncCacheItem *item = &cache->list[i];
if (item->data->object != object) continue;
// Found a match.
result = item->data;
if (result->threadCount <= 0 || item->lockCount <= 0) {
_objc_fatal("id2data cache is buggy");
}
switch(why) {
case ACQUIRE:
item->lockCount++;
break;
case RELEASE:
item->lockCount--;
if (item->lockCount == 0) {
// remove from per-thread cache
cache->list[i] = cache->list[--cache->used];
// atomic because may collide with concurrent ACQUIRE
OSAtomicDecrement32Barrier(&result->threadCount);
}
break;
case CHECK:
// do nothing
break;
}
return result;
}
}
// Thread cache didn't find anything.
// Walk in-use list looking for matching object
// Spinlock prevents multiple threads from creating multiple
// locks for the same new object.
// We could keep the nodes in some hash table if we find that there are
// more than 20 or so distinct locks active, but we don't do that now.
lockp->lock();
// 3. On the first entry the list is empty and why == ACQUIRE, so this branch is not taken either
{
SyncData* p;
SyncData* firstUnused = NULL;
for (p = *listp; p != NULL; p = p->nextData) {
if ( p->object == object ) {
result = p;
// atomic because may collide with concurrent RELEASE
// 12. Found the same object on the in-use list
OSAtomicIncrement32Barrier(&result->threadCount);
goto done;
}
if ( (firstUnused == NULL) && (p->threadCount == 0) )
firstUnused = p;
}
// no SyncData currently associated with object
if ( (why == RELEASE) || (why == CHECK) )
goto done;
// an unused one was found, use it
if ( firstUnused != NULL ) {
result = firstUnused;
result->object = (objc_object *)object;
result->threadCount = 1;
goto done;
}
}
// 4. The first entry ends up here: allocate a new SyncData and add it to the list
// Allocate a new SyncData and add to list.
// XXX allocating memory with a global lock held is bad practice,
// might be worth releasing the lock, allocating, and searching again.
// But since we never free these guys we won't be stuck in allocation very often.
posix_memalign((void **)&result, alignof(SyncData), sizeof(SyncData));
result->object = (objc_object *)object;
result->threadCount = 1;
new (&result->mutex) recursive_mutex_t(fork_unsafe_lock);
// 5. Insert at the head of the list
result->nextData = *listp;
*listp = result;
// 6. Store the new SyncData (in TLS or the per-thread cache below)
done:
lockp->unlock();
if (result) {
// Only new ACQUIRE should get here.
// All RELEASE and CHECK and recursive ACQUIRE are
// handled by the per-thread caches above.
if (why == RELEASE) {
// Probably some thread is incorrectly exiting
// while the object is held by another thread.
return nil;
}
if (why != ACQUIRE) _objc_fatal("id2data is buggy");
if (result->object != object) _objc_fatal("id2data is buggy");
#if SUPPORT_DIRECT_THREAD_KEYS
if (!fastCacheOccupied) {
// Save in fast thread cache
tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);
} else
#endif
{
// Save in thread cache
if (!cache) cache = fetch_cache(YES);
cache->list[cache->used].data = result;
cache->list[cache->used].lockCount = 1;
cache->used++;
}
}
return result;
}
Operations on SyncData go through mutex.lock(); with multiple levels of nesting, the same argument is locked again and again, and this reentrancy is one of the important characteristics of @synchronized.
Summary

- When @synchronized is used, a global hash table sDataLists is involved, which maintains an array of SyncList entries
- @synchronized locks and unlocks data through objc_sync_enter and objc_sync_exit, using a recursive lock internally
- The generated SyncData has two per-thread storage paths: TLS and the cache
- Reentrancy: lockCount tracks how many times an object has been locked, so the same object can be locked multiple times
- Multithreading: threadCount tracks how many threads have locked the object
- The parameter passed to @synchronized should have a lifetime longer than the data the threads operate on; passing self (e.g. the view controller) is common also because only a single SyncData chain then needs to be created