An in-depth analysis of hard-core synchronized interview questions

Preface

I haven't posted for a while, and classmates in the study group have often asked (pushed) me to speed up. To be honest, I was very pleased.

To be honest, I wanted to write about synchronized a long time ago, because its status in today's interviews is roughly on par with HashMap. Collections and concurrency are both very important knowledge systems, and HashMap and synchronized are the core of the core.

Compared with HashMap, synchronized is a bit more complicated, because its main principles live in the JVM source code. This time I spent a lot of time digging through the JVM source, but honestly the harvest was fruitful: on quite a few knowledge points, the current mainstream view is actually somewhat off.

Main text

1. A small example of the use of synchronized?

import java.util.concurrent.CountDownLatch;

public class SynchronizedTest {

    public static volatile int race = 0;

    private static CountDownLatch countDownLatch = new CountDownLatch(2);

    public static void main(String[] args) throws InterruptedException {
        // Start 2 threads in a loop to count
        for (int i = 0; i < 2; i++) {
            new Thread(() -> {
                // Each thread increments 10,000 times
                for (int j = 0; j < 10000; j++) {
                    race++;
                }
                countDownLatch.countDown();
            }).start();
        }
        // Wait until all threads have finished
        countDownLatch.await();
        // Expected output: 20,000 (2 * 10,000)
        System.out.println(race);
    }
}

This is the familiar counting example: two threads each increment the counter 10,000 times, so the expected result is 20,000, yet the actual result is always a number less than or equal to 20,000. Why does this happen?

To us, race++ looks like a single operation, but at the bottom it consists of multiple operations, so the following scenario can occur under concurrency: both threads read the same value of race, both add 1, and both write back the same result, so one increment is lost.
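As a rough illustration (constant-pool indexes will vary), running javap -c on the compiled class shows that race++ expands into a read-modify-write sequence:

getstatic     #2   // read the static field race
iconst_1           // push the constant 1
iadd               // add
putstatic     #2   // write the result back to race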

To get the correct result, we can use synchronized to protect race++, as follows:

synchronized (SynchronizedTest.class) {
    race++;
}

After synchronized is added, only the thread that wins the lock can operate on race, so the increments no longer interleave and the result is the expected 20,000.

2. What are the various locking scenarios for synchronized?

1) Acting on non-static methods, the object instance (this) is locked, and each object instance has a lock.

public synchronized void method() {}

2) Acting on static methods, it is the Class object of the class that is locked, and there is only one copy of the Class object globally. Therefore, the static method lock is equivalent to a global lock of the class, which locks all threads that call the method.

public static synchronized void method() {}

3) Acting on Lock.class, the Lock's Class object is locked, and there is only one globally.

synchronized (Lock.class) {}

4) Acting on this, the object instance is locked, and each object instance has a lock.

synchronized (this) {}

5) Acting on a static member variable, it is the static member variable object that is locked. Because it is a static variable, there is only one global.

public static Object monitor = new Object();

synchronized (monitor) {
}

Some students may be confused, but it is actually easy to remember. Remember the following two points:

1) There must be an "object" to act as a "lock".

2) For the same class, there are usually only two kinds of objects that can act as the lock: the instance object and the Class object (of which a class has only one copy). A small demo follows below.

Class object: everything static-related locks the Class object; you can also name it directly with Lock.class.

Instance object: everything non-static-related locks the instance object.
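A small illustrative demo (class and method names are made up for this sketch): a thread holding the instance lock does not block a thread acquiring the Class lock, because they are two different locks.

public class TwoKindsOfLocks {

    // Locks the instance object (this)
    public synchronized void instanceLocked() throws InterruptedException {
        System.out.println("holding the instance (this) lock");
        Thread.sleep(1000);
    }

    // Locks TwoKindsOfLocks.class
    public static synchronized void classLocked() {
        System.out.println("acquired the Class lock while the instance lock is held");
    }

    public static void main(String[] args) {
        TwoKindsOfLocks demo = new TwoKindsOfLocks();
        new Thread(() -> {
            try { demo.instanceLocked(); } catch (InterruptedException ignored) {}
        }).start();
        // Prints immediately, without waiting for the 1-second sleep above
        new Thread(TwoKindsOfLocks::classLocked).start();
    }
}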

3. Why do I need to add a synchronized lock when calling the wait/notify/notifyAll method of Object?

This question is hard or easy depending on how you look at it. It's easy because everyone remembers the classic question "the difference between sleep and wait", and a key part of the answer is: "wait releases the object lock, sleep does not". Since the lock is to be released, it must be acquired first.

It's hard because if you haven't thought of it from this angle and don't understand the underlying principle, you may be completely clueless.

The root cause is that all three methods operate on the lock object, so the lock object must be acquired first, and adding synchronized is exactly what acquires it.

Let's look at an example:

public class SynchronizedTest {

    private static final Object lock = new Object();

    public static void testWait() throws InterruptedException {
        lock.wait();
    }

    public static void testNotify() throws InterruptedException {
        lock.notify();
    }
}

In this example, wait will release the lock object, and notify/notifyAll will wake up other threads waiting to acquire the lock object to preempt the lock object.

Since you want to manipulate the lock object, you must first acquire it. Just as, if you want to give an apple to another student, you must first have the apple.

Let's look at another counterexample:

public class SynchronizedTest {

    private static final Object lock = new Object();

    public static synchronized void getLock() throws InterruptedException {
        lock.wait();
    }
}

When this method runs, IllegalMonitorStateException is thrown. Why? Haven't we clearly added synchronized to acquire a lock?

Because the synchronized on the static method getLock acquires the lock of SynchronizedTest.class, while lock.wait() tries to release the lock of the lock object.

This is equivalent to you want to give other students an apple (lock), but you only have one pear (SynchronizedTest.class).
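For completeness, a corrected version synchronizes on the same object that wait() is called on:

public class SynchronizedTest {

    private static final Object lock = new Object();

    public static void getLock() throws InterruptedException {
        synchronized (lock) { // acquire the monitor of the same object we call wait() on
            lock.wait();      // legal now: the current thread holds lock's monitor
        }
    }
}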

4. How many lists does synchronized maintain at the bottom to store blocked threads?

This question follows from the previous one. Clearly, the interviewer wants to see whether I really understand the underlying principles of synchronized.

The underlying JVM structure corresponding to synchronized is ObjectMonitor, which uses three linked lists to store blocked and waiting threads: _cxq (contention queue), _EntryList (EntryList), and _WaitSet (WaitSet).

When the thread fails to acquire the lock and enters the block, it will first be added to the _cxq linked list, and the nodes of the _cxq linked list will be further transferred to the _EntryList linked list at some point.

When the thread holding the lock releases it, the thread at the head of the _EntryList is woken up. This thread is called the heir presumptive (the presumed successor), and it will then try to acquire the lock.

When we call wait(), the thread is put into _WaitSet. When notify()/notifyAll() is called, the thread is moved back into _cxq or _EntryList; by default it is put at the head of the _cxq list.

The overall ObjectMonitor flow: a contending thread enters _cxq, is later moved to _EntryList, the head of _EntryList is woken when the lock is released, and a waiting thread sits in _WaitSet until it is notified back into the queues.

5. Why is the thread awakened when the lock is released called the "presumed successor"? Must the awakened thread acquire the lock?

Because the awakened thread does not necessarily get the lock: it still has to compete for it and may fail. So it does not necessarily become the lock's "successor"; it merely has the chance to become it. That is why we call it the presumed successor.

This is also one of the reasons why synchronized is an unfair lock.

6. Is synchronized a fair lock or an unfair lock?

Unfair lock.

7. Why is synchronized an unfair lock? Where is the injustice reflected?

In fact, the unfairness of synchronized shows up in many places in the source code, because the designers simply did not design it as a fair lock. The core points are:

1) When the thread holding the lock releases the lock, the thread will perform the following two important operations:

  1. First set the lock's owner attribute (the holder) to null.
  2. Wake up one thread in the waiting list (the presumed successor).

Between steps 1 and 2, if another thread happens to be trying to acquire the lock (for example, spinning), it can grab the lock immediately, jumping the queue.

2) When a thread fails to acquire the lock and blocks, the order in which threads are put into the list is inconsistent with the order in which they are finally woken. In other words, entering the list first does not mean being woken first.

8. With the synchronized lock added, when a thread calls wait it is clearly still inside the synchronized block. How can another thread enter the synchronized block to execute notify?

In the following example, the thread blocks at lock.wait(). At that moment execution is seemingly still inside the synchronized block, so why can another thread enter the synchronized block and execute notify()?

public class SynchronizedTest {

    private static final Object lock = new Object();

    public static void testWait() throws InterruptedException {
        synchronized (lock) {
            // Blocks here; "aa" is not printed until the thread is woken up,
            // i.e. we have not yet left the synchronized block
            lock.wait();
            System.out.println("aa");
        }
    }

    public static void testNotify() throws InterruptedException {
        synchronized (lock) {
            lock.notify();
            System.out.println("bb");
        }
    }
}

Just looking at the code does create the illusion described in the question. This is also why Object's wait() and notify() methods are used poorly by many people, myself included.

This question has to be answered from the underlying implementation. When a thread enters synchronized, it acquires the lock; but when it calls lock.wait(), although the thread is still inside the synchronized block, the lock has in fact already been released.

Therefore, other threads can acquire the lock and enter the synchronized block at this time to execute lock.notify().

9. If multiple threads have entered the wait state, does calling notify wake them in the order in which they called wait?

The answer is no. As mentioned above when explaining why synchronized is an unfair lock, threads are not woken in order.

When wait is called, the node enters the end of the _WaitSet linked list.

When notify is called, depending on the policy, the node may be moved to the head of _cxq, the tail of _cxq, the head of _EntryList, the tail of _EntryList, etc.

Therefore, the order of waking up is not necessarily the order of entering wait.
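A small experiment to see this (illustrative; the observed order varies by JVM version and platform):

public class WakeOrderTest {

    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        // Start 5 waiters, in a deterministic wait order
        for (int i = 0; i < 5; i++) {
            final int id = i;
            new Thread(() -> {
                synchronized (lock) {
                    try { lock.wait(); } catch (InterruptedException ignored) {}
                    System.out.println("woken: " + id);
                }
            }).start();
            Thread.sleep(100); // make sure thread i waits before thread i+1
        }
        // Notify them one by one and observe the printed order
        for (int i = 0; i < 5; i++) {
            synchronized (lock) { lock.notify(); }
            Thread.sleep(100);
        }
    }
}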

10. How does notifyAll wake up all waiting threads?

notify takes the head node of the _WaitSet and wakes it up.

The notifyAll process can be simply understood as looping through all the nodes of the _WaitSet and performing the notify operation on each one.

11. What lock optimizations does the JVM make?

Biased locking, lightweight locking, spin locks, adaptive spinning, lock elimination, and lock coarsening.

12. Why should we introduce biased locks and lightweight locks? Why are heavyweight locks expensive?

The bottom layer of the heavyweight lock relies on the operating system's synchronization primitives, implemented on Linux with pthread_mutex_t (a mutex).

These low-level synchronization operations involve switching between user mode and kernel mode and context switches, which are relatively time-consuming, so heavyweight lock operations carry a large overhead.

In many cases only one thread ever acquires the lock, or multiple threads acquire it alternately. Using a heavyweight lock in these cases is not cost-effective, so biased locks and lightweight locks were introduced to reduce the overhead when there is no concurrent contention.

13. Revoking and inflating a biased lock costs so much, so why use it at all?

The advantage of the biased lock is that when only one thread acquires the lock, it only needs one CAS operation to modify the markword, and afterwards each entry requires only a simple check, avoiding the CAS that a lightweight lock performs on every acquire and release.

If you know the synchronized block will be accessed by multiple threads or contention is high, you can disable biased locking with the -XX:-UseBiasedLocking parameter.

14. What usage scenarios do bias locks, lightweight locks, and heavyweight locks correspond to?

1) Bias lock

It applies when only one thread acquires the lock. When a second thread tries to acquire it, even if the first thread has already released it, the lock is still upgraded to a lightweight lock.

But there is a special case, if there is a re-bias of the bias lock, then the second thread can try to acquire the bias lock at this time.

2) Lightweight lock

It suits multiple threads acquiring the lock alternately. The difference from the biased lock is that multiple threads may acquire it, but there must be no contention; if contention occurs, it is upgraded to a heavyweight lock. Some students may ask: isn't there spinning first? Please keep reading.

3) Heavyweight lock

Applicable to multiple threads acquiring locks at the same time.

15. At what stage does the spin occur?

The spin takes place in the heavyweight lock phase.

According to 99.99% of the Internet, the spin happens in the lightweight lock phase, but after actually reading the source code (JDK8), this is not the case.

There is no spin operation in the lightweight lock phase. In the lightweight lock phase, as long as competition occurs, it directly expands into a heavyweight lock.

In the heavyweight lock phase, if acquiring the lock fails, it will try to spin to acquire the lock.

16. Why do we need to design a spin operation?

Because the suspension overhead of heavyweight locks is too large.

Generally speaking, the code in a synchronized block executes quickly, so a competing thread that spins for a short while can often obtain the lock, saving the overhead of a heavyweight-lock suspension.

17. How does adaptive spin embody self-adaptation?

The adaptive spin lock bounds the number of spins, roughly in the range 1000 to 5000.

If the current spin succeeds in acquiring the lock, the spin count is rewarded with +100; if it fails, it is penalized with -200.

Therefore, if the spin is always successful, the JVM considers that the success rate of the spin is very high, and it is worth spinning a few more times, thus increasing the number of spin attempts.

On the contrary, if the spin fails all the time, the JVM thinks that the spin is just a waste of time, and minimizes the spin as much as possible.
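A simplified sketch of the reward/penalty idea, using the numbers above (my own Java illustration, not HotSpot's actual C++ code):

import java.util.concurrent.atomic.AtomicBoolean;

class AdaptiveSpinSketch {

    private int spinBudget = 1000; // current spin budget, kept within [0, 5000]

    boolean tryAcquire(AtomicBoolean lock) {
        for (int i = 0; i < spinBudget; i++) {
            if (lock.compareAndSet(false, true)) {
                spinBudget = Math.min(spinBudget + 100, 5000); // reward a successful spin
                return true;
            }
        }
        spinBudget = Math.max(spinBudget - 200, 0); // penalize a failed spin
        return false; // caller would fall back to parking/blocking the thread
    }
}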

18. Can synchronized locks be downgraded?

The answer is yes.

Specific trigger timing: at a global safepoint, a lock-downgrade attempt is triggered while the cleanup task runs.

When the lock is degraded, the following operations are mainly performed:

1) Restore the markword object header of the lock object;

2) Reset ObjectMonitor, and then put the ObjectMonitor into the global free list, waiting for subsequent use.

19. The difference between synchronized and ReentrantLock

1) Low-level implementation: synchronized is a keyword in Java and a lock at the JVM level; ReentrantLock is a lock implementation at the JDK level.

2) Whether the lock must be released manually: with synchronized there is no manual acquire/release, and the lock is released automatically even when an exception is thrown, so it will not deadlock that way. With ReentrantLock, if an exception occurs and unlock() is not actively called, the lock may never be released and deadlock is likely, so ReentrantLock should be released in a finally block (see the sketch after this list).

3) The fairness of the lock: synchronized is an unfair lock; ReentrantLock is an unfair lock by default, but a fair lock can be selected by parameters.

4) Whether it can be interrupted: synchronized is not interruptible; ReentrantLock can be interrupted.

5) Flexibility: When using synchronized, the waiting thread will wait until the lock is acquired; the use of ReentrantLock is more flexible, with immediate return of success, response interruption, timeout, etc.

6) Performance: with the continuous optimization of synchronized in recent years, there is no longer an obvious performance difference between the two, so performance should not be the main basis for choosing between them. The official recommendation is to use synchronized whenever possible, and to fall back to Lock only when synchronized cannot meet the need.
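A minimal sketch of the patterns points 2 and 5 describe (class and method names here are illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {

    private static final ReentrantLock lock = new ReentrantLock();

    public static void doWork() {
        lock.lock();
        try {
            // critical section
        } finally {
            lock.unlock(); // always release in finally, even if an exception is thrown
        }
    }

    public static boolean doWorkWithTimeout() throws InterruptedException {
        // tryLock returns after at most the timeout instead of blocking forever
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                return true; // got the lock, do the work
            } finally {
                lock.unlock();
            }
        }
        return false; // timed out without the lock
    }
}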

20. What is the synchronized lock upgrade process?

The core process is shown in the flowchart below. Please save it and zoom in to view it. It is normal for some concepts in it to be unclear at first; they are introduced later in the article, so please keep reading.

If the image quality is fuzzy, you can download the original image from the Baidu cloud disk where I shared the interview questions.

synchronized lock flowchart

PS: From here on, the content leans toward underlying principles and is a detailed walk through the lock-upgrade flowchart. Most interviewers may not ask about it directly, but when you talk about lock upgrades you can bring up the points below.

21. The underlying implementation of synchronized

The underlying implementation of synchronized differs between methods and code blocks, as the following example shows.

/**
 * @author joonwhee
 * @date 2019/7/6
 */
public class SynchronizedDemo {

    private static final Object lock = new Object();

    public static void main(String[] args) {
        // Lock applied to a code block
        synchronized (lock) {
            System.out.println("hello world");
        }
    }

    // Lock applied to a method
    public synchronized void test() {
        System.out.println("test");
    }
}

After compiling the code, check its bytecode, the core code is as follows:

{
  public com.joonwhee.SynchronizedDemo();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1                  // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 9: 0

  public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=3, args_size=1
         0: getstatic     #2                  // Field lock:Ljava/lang/Object;
         3: dup
         4: astore_1
         5: monitorenter                      // enter the synchronized block
         6: getstatic     #3                  // Field java/lang/System.out:Ljava/io/PrintStream;
         9: ldc           #4                  // String hello world
        11: invokevirtual #5                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
        14: aload_1
        15: monitorexit                       // exit the synchronized block
        16: goto          24
        19: astore_2
        20: aload_1
        21: monitorexit                       // exit the synchronized block
        22: aload_2
        23: athrow
        24: return
      Exception table:
         from    to  target type
             6    16    19   any
            19    22    19   any

  public synchronized void test();
    descriptor: ()V
    flags: ACC_PUBLIC, ACC_SYNCHRONIZED      // ACC_SYNCHRONIZED flag
    Code:
      stack=2, locals=1, args_size=1
         0: getstatic     #3                  // Field java/lang/System.out:Ljava/io/PrintStream;
         3: ldc           #6                  // String test
         5: invokevirtual #5                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
         8: return
      LineNumberTable:
        line 20: 0
        line 21: 8
}

When synchronized modifies a code block, compilation generates monitorenter and monitorexit instructions, corresponding to entering and exiting the synchronized block. You can see there are two monitorexit instructions: the compiler wraps the block in an implicit try-finally and releases the lock in the finally. This is why synchronized does not need the lock to be released manually.

When the synchronized method is modified, the ACC_SYNCHRONIZED flag will be generated after compilation. When the method is called, the calling instruction will check whether the ACC_SYNCHRONIZED access flag of the method is set, and if it is set, it will first try to obtain the lock.

In essence the two implementations are the same; method synchronization is just done implicitly, without explicit monitorenter/monitorexit instructions in the bytecode.

22. What about Mark Word?

Before introducing Mark Word, you need to understand the memory layout of the object. In HotSpot, the storage layout of objects in the heap memory can be divided into three parts: Header, Instance Data, and Padding.

1) Object header (Header)

It mainly contains two types of information: Mark Word and type pointer.

Mark Word records the object's runtime data, such as: hashCode, GC generational age, biased flag, lock flag, biased thread ID, biased epoch, etc. The 32-bit markword layout is shown below.
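(The original figure is not reproduced here; this is the commonly documented 32-bit HotSpot layout, field widths in bits.)

State              | 32-bit markword layout
-------------------|--------------------------------------------------------------
Unlocked           | hashCode (25) | GC age (4) | biased 0 (1) | lock 01 (2)
Biased             | thread ID (23) | epoch (2) | GC age (4) | biased 1 (1) | lock 01 (2)
Lightweight locked | pointer to Lock Record (30) | lock 00 (2)
Heavyweight locked | pointer to ObjectMonitor (30) | lock 10 (2)
GC marked          | empty (30) | lock 11 (2)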

The type pointer is a pointer to the object's type metadata; the Java virtual machine uses it to determine which class the object is an instance of. If the object is an array, there is also a field recording the array length.

2) Instance Data

This is the truly useful information the object stores: the values of the fields of various types that we define in code.

3) Padding

HotSpot requires that the size of an object be an integral multiple of 8 bytes, so if the header plus instance data is not a multiple of 8 bytes, padding fills the difference.

23. Introduce Lock Record?

Lock Record: everyone should have heard of it. It is a structure used to temporarily store the lock object's markword during lightweight locking.

Lock Record is BasicObjectLock in the source code, the source code is as follows:

class BasicObjectLock VALUE_OBJ_CLASS_SPEC {
 private:
  BasicLock _lock;
  oop       _obj;
};

class BasicLock VALUE_OBJ_CLASS_SPEC {
 private:
  volatile markOop _displaced_header;
};

In fact, there are two attributes:

1) _displaced_header: temporarily stores the lock object's markword during lightweight locking; also known as the displaced mark word.

2) _obj: points to the lock object.

Besides temporarily storing the markword, Lock Records have another important function: implementing the lock reentry counter. Each reentry uses one more Lock Record to record it, but in that case its _displaced_header is null.

In this way, one Lock Record is removed per unlock. On removal, _displaced_header is checked: if it is null, this was a reentry and no real unlock is performed; otherwise this is the last Lock Record, and the real unlock happens.
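A minimal sketch of this reentry bookkeeping (my own Java illustration, not JVM code):

import java.util.ArrayDeque;
import java.util.Deque;

class ReentryBookkeepingSketch {

    private final Deque<Object> lockRecords = new ArrayDeque<>();
    // ArrayDeque rejects null elements, so a sentinel stands in for a null displaced header
    private static final Object NULL_HEADER = new Object();

    void lock(Object displacedMarkWord) {
        if (lockRecords.isEmpty()) {
            lockRecords.push(displacedMarkWord); // first acquisition stores the real mark word
        } else {
            lockRecords.push(NULL_HEADER);       // reentry: record with a null displaced header
        }
    }

    void unlock() {
        Object header = lockRecords.pop();
        if (header != NULL_HEADER) {
            // last Lock Record: restore the displaced mark word, i.e. really unlock
            System.out.println("really unlocked, restoring: " + header);
        } // otherwise it was a reentry; nothing more to do
    }
}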

24. What is anonymous bias?

So-called anonymous bias means the lock has never been acquired, i.e. it is still waiting for its first bias. Its distinguishing feature is that the thread ID in the lock object's markword is 0.

When the first thread acquires the biased lock, the thread ID changes from 0 to that thread's ID, and from then on it is never 0 again, because releasing a biased lock does not reset the thread ID.

This is why the biased lock is suitable for scenarios where only one thread acquires the lock.

25. Where is the hashCode stored in the biased lock mode?

In the biased lock state, there is no place to store hashCode.

Therefore, after an object has calculated the hashCode, it can no longer enter the biased lock state.

If an object is currently in a biased lock state and receives a request to calculate its hashCode (Object::hashCode() or System::identityHashCode(Object) method call), its bias lock state will be revoked immediately.
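To observe this, one can inspect the markword with the OpenJDK JOL tool (assumptions: the org.openjdk.jol library is on the classpath, and the program runs on JDK 8 with -XX:BiasedLockingStartupDelay=0, since biased locking is delayed at startup by default):

import org.openjdk.jol.info.ClassLayout;

public class HashCodeVsBias {

    public static void main(String[] args) {
        Object a = new Object();
        synchronized (a) {
            // Expect a biased-lock markword
            System.out.println(ClassLayout.parseInstance(a).toPrintable());
        }

        Object b = new Object();
        b.hashCode(); // the identity hashCode must live in the markword
        synchronized (b) {
            // Expect a lightweight lock instead: the object can no longer be biased
            System.out.println(ClassLayout.parseInstance(b).toPrintable());
        }
    }
}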

26. Biased locking process?

First: when biased locking is enabled, an object's biased flag is 1 after creation; when biased locking is disabled, the flag is 0 after creation.

Locking process:

1) Find a free Lock Record from the stack frame of the current thread, and point the obj attribute to the current lock object.

2) When acquiring a biased lock, various judgments will be made first. As shown in the locking flowchart, there are only two scenarios that can try to acquire the lock in the end: anonymous bias and batch re-bias.

3) Use CAS to try to fill your thread ID into the lock object markword. If the modification is successful, the lock will be acquired.

4) If it is not one of the two scenarios in step 2, or the CAS fails, the biased lock is revoked and upgraded to a lightweight lock.

5) If the thread successfully acquires the biased lock, then every time it enters the synchronized block afterwards, it only needs to check whether the thread ID in the lock object's markword is its own; if so, it enters directly, with almost no extra overhead.

Unlocking process:

Unlocking a biased lock is very simple: the Lock Record's obj attribute is set to null. The key point is that the thread ID in the lock object's markword is not restored to 0.

In the bias lock process, the status change of the markword is shown in the following figure:

27. Batch re-biasing and batch cancellation? heuristic algorithm?

Above we mentioned the batch re-biasing. At the same time as the batch re-biasing, there is also batch cancellation. The official collectively refers to the two as "heuristic algorithms."

Why introduce heuristic algorithms?

From the introduction above, we know that when only one thread acquires the lock, the biased lock needs only one CAS on first entry into the synchronized block, and afterwards only a simple check per entry, so the overhead is basically negligible. Thus, in the single-acquiring-thread scenario, the performance gain from biased locks is considerable.

But if another thread tries to acquire the lock, the biased lock must be revoked to the lock-free state or upgraded to a lightweight lock. Revoking a biased lock has a definite cost: if our usage pattern has multi-threaded contention that causes a large number of bias revocations, biased locking actually degrades performance.

JVM developers obtained the following two points of view through analysis:

Viewpoint 1: For some objects, biased locks are obviously unhelpful. For example, a producer-consumer queue involving two or more threads. Such objects must have lock contention, and many such objects may be allocated during program execution.

This view describes a scenario where there is a lot of lock competition. For this scenario, a simple and rude method is to directly disable the biased lock, but this method is not optimal.

Because such scenes may be only a small part of the whole service, giving up biased-lock optimization entirely because of them is obviously not cost-effective. Ideally we could recognize such objects and disable biased locking only for them.

Batch undo is the optimization of this scene.

Viewpoint 2: In some cases it is beneficial to re-bias a group of objects to another thread, especially when one thread allocates many objects and performs the initial synchronization on each of them, but another thread performs the subsequent work on them.

We know the original intent of biased locking is the single-acquiring-thread scenario. The second half of this viewpoint actually fits that scenario, but because of the first half those objects cannot enjoy the benefit, so what the JVM developers had to do was recognize this scenario and optimize for it.

For this scenario, the official introduction of batch re-biased to optimize.

Batch re-bias

The JVM chooses class as the granularity and maintains a bias revocation counter for each class. Whenever an object of the class has its bias revoked, the counter is incremented.

When the counter exceeds the batch rebias threshold (default 20), the JVM considers scenario 2 above to be hit and performs a batch rebias for the whole class.

Each class also has a markword (the prototype header). In the biased state the markword contains an epoch field. When an instance of the class is created, the instance's epoch is assigned the class's epoch value, so under normal circumstances the epoch of the instance and the epoch of the class are equal.

When batch re-biasing occurs, epoch comes in handy.

When a batch rebias occurs, the class's epoch is first incremented; then the stacks of all live threads are traversed to find all of the class's lock objects currently in the biased state, and their epoch values are updated to the new value.

The lock instances not currently held by any thread keep the old epoch, which is now 1 less than the class's epoch. The next time another thread tries to acquire such a lock object, it will not upgrade straight to a lightweight lock just because the thread ID in the markword is non-zero (i.e. it was once acquired by another thread); instead, it uses CAS to try to acquire the biased lock, achieving the batch rebias optimization.

PS: Corresponds to the selection box of "The epoch of the lock object is equal to the epoch of the class?" in the locking flowchart.

Batch revocation

Batch revocation is the follow-up process of batch re-biasing. It also uses class as the granularity and also uses the bias revocation counter.

After a batch rebias, every subsequent bias revocation computes the interval since the previous revocation. If the interval exceeds the specified time (default 25 seconds), the JVM considers the batch rebias effective, since revocations are now infrequent, and resets the revocation counter to 0.

If instead the counter keeps climbing rapidly after the batch rebias and exceeds the batch revocation threshold (default 40), the JVM concludes that instances of this class clearly suffer lock contention and are unsuitable for biased locking, which triggers the batch revocation.

Batch revocation: the class's markword is set to the non-biasable lock-free state, i.e. biased flag 0 and lock flag 01. Then the stacks of all live threads are traversed to find all of the class's lock objects in the biased state and revoke their bias.

After that, when a thread tries to acquire a lock instance of this class, it sees that the class's markword is not in the biased mode, knows biased locking is disabled for the class, and goes straight to the lightweight lock path.

PS: Corresponds to the selection box of "Is the lock object's class in a biased mode?" in the locking flowchart.

28. Lightweight lock process?

Locking process:

If biased locking is disabled, or the biased lock has been upgraded, the lightweight locking process begins.

1) Find a free Lock Record from the stack frame of the current thread, and the obj attribute points to the lock object.

2) Set a copy of the lock object's markword to the lock-free state and store it in the Lock Record's displaced_header attribute.

3) Use CAS to modify the markword in the object header into a pointer to the Lock Record.

The relationship between the thread stack and the lock object at this point is shown in the figure below. You can see that for the two reentries, the displaced_header is filled with null.

Unlocking process:

1) Assign the value of obj property to null.

2) Use CAS to restore the displaced mark word temporarily stored in the displaced_header attribute back into the lock object's markword.
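A minimal Java sketch of the lightweight-lock idea (my own illustration using AtomicReference; the real JVM does this with CAS on the object header):

import java.util.concurrent.atomic.AtomicReference;

class ThinLockSketch {

    static final class LockRecord {
        Object displacedHeader; // temporarily holds the unlocked mark word
    }

    // Stands in for the object's mark word: null means "unlocked",
    // otherwise it points to the owner's lock record
    private final AtomicReference<LockRecord> markWord = new AtomicReference<>(null);

    boolean tryLock(LockRecord lr) {
        lr.displacedHeader = "unlocked-mark";    // step 2: stash the lock-free mark word
        return markWord.compareAndSet(null, lr); // step 3: CAS mark word -> lock record pointer
    }

    boolean unlock(LockRecord lr) {
        return markWord.compareAndSet(lr, null); // CAS the displaced mark word back
    }
}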

29. Heavyweight lock process?

Locking process:

When there is competition for lightweight locks, they will expand into heavyweight locks.

1) Assign an ObjectMonitor and fill in the relevant attributes.

2) Modify the lock object's markword to: the ObjectMonitor address + the heavyweight lock flag (10).

3) Try to acquire the lock; if that fails, try spinning to acquire it.

4) If it still fails after several attempts, the thread is wrapped as an ObjectWaiter, inserted into the cxq list, and the current thread blocks.

5) When the lock holder releases the lock, a node in the list is woken; the woken node tries to acquire the lock again, and on success removes itself from the cxq (or EntryList) list.

The relationship between the thread stack, lock object, and ObjectMonitor at this time is shown in the following figure:

The core attributes of ObjectMonitor are as follows:

ObjectMonitor() {
    _header       = NULL; // original object header (markword) of the lock object
    _count        = 0;    // number of threads contending for the lock; roughly _WaitSet count + _EntryList count
    _waiters      = 0,    // number of threads waiting after calling wait()
    _recursions   = 0;    // lock reentry count
    _object       = NULL; // pointer to the lock object
    _owner        = NULL; // thread currently holding the lock
    _WaitSet      = NULL; // threads that have called wait()
    _WaitSetLock  = 0;    // lock protecting the _WaitSet list
    _Responsible  = NULL;
    _succ         = NULL; // heir presumptive (presumed successor)
    _cxq          = NULL; // threads waiting for the lock; a thread that loses the race is put on cxq first, then moved to _EntryList
    FreeNext      = NULL; // next free ObjectMonitor
    _EntryList    = NULL; // threads waiting for the lock; the head node is the first candidate to acquire it
    _SpinFreq     = 0;
    _SpinClock    = 0;
    OwnerIsThread = 0;    // whether _owner points to the owning thread or a BasicLock: 1 = thread, 0 = BasicLock; the latter occurs when a lightweight lock inflates
    _previous_owner_tid = 0; // thread id of the monitor's previous owner
}

Unlocking process:

1) Decrement the reentry counter by 1 (the _recursions attribute in ObjectMonitor).

2) Release the lock first: set the owner attribute (the lock holder) to null. At this point other threads, such as a spinning thread, can already acquire the lock.

3) Wake up the next thread node from the EntryList or cxq linked list.

Finally

I am Jiuhui, a programmer who insists on sharing original, practical technical content. My goal is to help you get the offers you want from big tech companies. See you in the next issue.