[Java Intern] Daily Interview Questions: Operating Systems

  • With the autumn recruitment season approaching, use this series to prepare for your summer internship. I wish you a little better every day! Day 15
  • This article summarizes operating-system interview questions; follow-up posts will be updated daily.

1. Briefly describe processes and threads and the differences between them.

  • The fundamental difference: a process is the basic unit of operating-system resource allocation, while a thread is the basic unit of CPU scheduling and execution.
  • Containment relationship: a process contains at least one thread.
  • Environment: the operating system can run multiple processes (programs) at the same time, and multiple threads can execute concurrently within the same process (through CPU scheduling, only one thread executes on a core during each time slice).
  • Memory allocation: the system allocates separate memory space for each process; apart from CPU time, the system does not allocate memory to individual threads (the resources a thread uses come from the process it belongs to), so the threads of a process share that process's resources.
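
The memory point above can be seen directly in Java: threads of one process share the heap, while each thread keeps its own stack. A minimal sketch (class and method names are illustrative, not from the original article):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two threads in the same process share the heap: both increment one
// counter object. Each thread still has its own private stack variables.
public class SharedHeapDemo {
    static final AtomicInteger counter = new AtomicInteger(0); // heap: shared

    public static int runTwoThreads() throws InterruptedException {
        Runnable task = () -> {
            int local = 0;                 // stack: private to each thread
            for (int i = 0; i < 1000; i++) {
                local++;                   // invisible to the other thread
                counter.incrementAndGet(); // visible to both threads
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        return counter.get();              // both threads' work is combined
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTwoThreads()); // 2000
    }
}
```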

2. What are the communication methods between processes?

  • Pipe: a pipe is a half-duplex communication method; data flows in only one direction, and a pipe can only be used between related processes. "Related" usually means a parent-child relationship.
  • Named pipe (FIFO): an anonymous pipe can only connect two related processes, but through a named pipe (FIFO), unrelated processes can also exchange data.
  • Message queue:
  • A message queue is a linked list of messages with a specific format, stored in the kernel and identified by a message queue identifier.
  • A message queue allows one or more processes to write messages to it and read messages from it.
  • Pipes and named pipes deliver data strictly first-in, first-out. A message queue supports random access: messages need not be read in FIFO order and can be read by message type, which is an advantage over a FIFO.
  • Shared memory: shared memory is a memory region that multiple processes can map and access directly.
  • Semaphore: a semaphore is a counter that controls access to shared resources by multiple processes. It is usually used as a locking mechanism to prevent other processes from accessing a shared resource while one process is using it.
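
Real message queues are kernel objects (e.g. System V or POSIX message queues), but the write/read pattern they support can be sketched in-process with Java's `BlockingQueue`. This is a loose analogy only; `MessageQueueSketch` is a hypothetical name:

```java
import java.util.concurrent.LinkedBlockingQueue;

// In-process analogy for a message queue: one thread writes a message,
// another reads it. A real IPC message queue lives in the kernel, but
// the offer/take pattern is the same.
public class MessageQueueSketch {
    public static String exchange() throws InterruptedException {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
        Thread writer = new Thread(() -> queue.offer("hello from writer"));
        writer.start();
        writer.join();            // writer has finished enqueueing
        return queue.take();      // would block until a message is available
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(exchange()); // hello from writer
    }
}
```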

3. What are the ways to synchronize threads?

  • Mutex: only the thread that currently owns the mutex object may access the shared resource. Because there is only one mutex, the shared resource cannot be accessed by multiple threads at the same time.
  • Semaphore: allows multiple threads to access the same resource concurrently, but caps the maximum number of threads that may access it at once.
  • Event (signal): keeps threads synchronized through notify/wait operations, and also makes it easy to implement priority-based wake-up among threads.
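
The semaphore idea maps directly onto `java.util.concurrent.Semaphore`. A minimal sketch (class name and the 2-permit limit are illustrative): five threads contend, but at most two are ever inside the guarded section at once.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// A Semaphore with 2 permits lets at most 2 threads into the guarded
// section at once; we track the highest concurrency actually observed.
public class SemaphoreDemo {
    public static int maxObservedConcurrency() throws InterruptedException {
        Semaphore permits = new Semaphore(2);
        AtomicInteger inside = new AtomicInteger(0), max = new AtomicInteger(0);
        Runnable task = () -> {
            try {
                permits.acquire();                    // blocks if 2 threads are inside
                int now = inside.incrementAndGet();
                max.accumulateAndGet(now, Math::max); // record peak concurrency
                Thread.sleep(20);                     // simulate work with the resource
                inside.decrementAndGet();
                permits.release();
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        Thread[] ts = new Thread[5];
        for (int i = 0; i < ts.length; i++) { ts[i] = new Thread(task); ts[i].start(); }
        for (Thread t : ts) t.join();
        return max.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(maxObservedConcurrency()); // never more than 2
    }
}
```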

4. Process states and the transitions between them

The basic states of a process are: ready, running, and blocked.

The transitions between process states are:

  • Ready → running: the scheduler selects the process and assigns it the CPU.
  • Running → ready: the time slice expires, or a higher-priority process preempts it.
  • Running → blocked: the process waits for an event such as I/O completion.
  • Blocked → ready: the awaited event occurs.

5. The state of the Java thread

The traditional description of the Java thread lifecycle uses these states:

  • New
  • Runnable (ready)
  • Running
  • Blocked:
  • Waiting blocked (wait)
  • Synchronization blocked (competing for a monitor lock)
  • Other blocked (sleep, join, blocking I/O)
  • Dead (terminated)

Note that the `java.lang.Thread.State` enum itself defines six states: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, and TERMINATED.
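
The endpoints of this lifecycle can be observed directly through `java.lang.Thread.State` (the class name below is illustrative):

```java
// java.lang.Thread.State reports a thread's lifecycle state directly:
// NEW before start(), TERMINATED after the run() body has finished.
public class ThreadStateDemo {
    public static Thread.State[] observeStates() throws InterruptedException {
        Thread t = new Thread(() -> { });   // empty task: finishes immediately
        Thread.State before = t.getState(); // NEW: created but not started
        t.start();
        t.join();                           // wait until the thread finishes
        Thread.State after = t.getState();  // TERMINATED: run() has returned
        return new Thread.State[] { before, after };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : observeStates()) System.out.println(s);
    }
}
```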

6. What are the scheduling algorithms for processes?

  • First-come, first-served (FCFS)
  • Shortest job first (SJF)
  • Priority scheduling
  • Round-robin (time slice) scheduling
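
As a sketch of how such algorithms are compared, FCFS waiting time can be computed directly. Assuming (my assumption, not stated above) that all jobs arrive at time 0 in submission order, each job waits for the total burst time of everything ahead of it:

```java
// First-come-first-served: each job waits for all jobs ahead of it.
// With burst times {3, 5, 2}, the waits are 0, 3, and 8,
// so the average waiting time is (0 + 3 + 8) / 3 = 11/3.
public class FcfsDemo {
    public static double averageWait(int[] bursts) {
        int elapsed = 0, totalWait = 0;
        for (int burst : bursts) {
            totalWait += elapsed;  // this job waited for everything before it
            elapsed += burst;      // CPU is busy for this job's burst
        }
        return (double) totalWait / bursts.length;
    }

    public static void main(String[] args) {
        System.out.println(averageWait(new int[] {3, 5, 2}));
    }
}
```

Running SJF on the same jobs (order 2, 3, 5) would give waits of 0, 2, and 5, a lower average, which is why SJF is optimal for mean waiting time.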

Reference article: Several commonly used operating system process scheduling algorithms

7. The causes of deadlocks, what are the necessary conditions for deadlocks, how to prevent deadlocks, and how to avoid deadlocks?

**Causes of deadlock:** competition for resources, and an improper order of process execution.

Four necessary conditions for deadlock:

  • Mutual exclusion: a resource can be used by only one process at a time; once it is assigned to a process, other processes must wait until the holder releases it.
  • No preemption: resources a process has acquired cannot be forcibly taken away before it finishes with them; only the holding process can release them.
  • Hold and wait: a process holding at least one resource requests another resource and, while waiting (even if blocked), does not release the resources it already holds.
  • Circular wait: several processes form a head-to-tail circular chain in which each waits for a resource held by the next.

**Prevention of deadlock:** break any one of the four necessary conditions above.

**Avoidance of deadlock:** grant a resource request only if the system remains in a safe state afterwards, for example by using the banker's algorithm.

Removal of deadlock:

  • Forcibly terminate one or more deadlocked processes to break the circular wait chain.
  • Forcibly preempt the resources that deadlocked processes are contending for until the deadlock is resolved.
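
A standard way to break the circular-wait condition in code is a global lock ordering: every thread acquires locks in the same order, so no cycle can form. A minimal sketch (names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Breaking circular wait: every thread acquires lockA before lockB,
// so two threads can never hold each other's next lock.
public class LockOrderingDemo {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();
    static int sharedValue = 0;

    static void updateBoth() {
        lockA.lock();          // always A first...
        try {
            lockB.lock();      // ...then B, in every thread
            try { sharedValue++; } finally { lockB.unlock(); }
        } finally { lockA.unlock(); }
    }

    public static int run() throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100; i++) updateBoth(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100; i++) updateBoth(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return sharedValue;    // all 200 increments complete, no deadlock possible
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 200
    }
}
```

If t1 took A-then-B while t2 took B-then-A, each could end up holding one lock and waiting for the other forever; the shared ordering makes that cycle impossible.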

8. Context switching of the process

Switching the CPU from one process to another is called a process context switch. Processes are managed and scheduled by the kernel, so process switching can only happen in kernel mode.

9. Do you understand the memory management mechanism of the operating system? What are the methods of memory management?

  • Block management: the memory-management method of early operating systems. Memory is divided into fixed-size blocks, each holding exactly one process. When a program needs memory, the operating system allocates it a whole block; if the program needs only a little space, most of the block is wasted. The unused space inside each block is called a fragment.
  • Page management: main memory is divided into fixed-size pages of equal length, much smaller than the blocks above, which improves memory utilization and reduces fragmentation. Page management maps logical addresses to physical addresses through a page table.
  • Segment management: paging improves utilization further, but a page is an arbitrary slice of the address space with no meaning to the program. Segment management divides the address space into segments of varying length, and, most importantly, each segment carries real meaning: a main-program segment MAIN, a subroutine segment X, a data segment D, a stack segment S, and so on. Segment management maps logical addresses to physical addresses through a segment table.

10. Do you understand CPU addressing? Why do you need virtual address space?

The processor uses an addressing scheme called virtual addressing. With virtual addressing, the CPU works with virtual addresses, which must be translated into physical addresses before the real physical memory can be accessed. The hardware that performs this translation is a component inside the CPU called the memory management unit (MMU). Virtual address space is needed because it isolates processes from one another (a process cannot read or corrupt another process's memory), gives every program a uniform, contiguous view of memory, and allows a process to use more memory than is physically installed by paging to disk.
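
The core of the MMU's job can be sketched as arithmetic. Assuming (my simplification) a single-level page table and 4 KiB pages, translation splits the virtual address into a page number and an offset:

```java
// MMU sketch: split a virtual address into (page number, offset), look
// the page number up in a page table, and glue the resulting frame
// number to the offset. Assumes 4 KiB pages (12 offset bits).
public class AddressTranslationDemo {
    static final int OFFSET_BITS = 12;                     // 4 KiB pages

    public static long translate(long virtual, long[] pageTable) {
        long page   = virtual >>> OFFSET_BITS;             // high bits: page number
        long offset = virtual & ((1L << OFFSET_BITS) - 1); // low bits: offset
        long frame  = pageTable[(int) page];               // page-table lookup
        return (frame << OFFSET_BITS) | offset;            // physical address
    }

    public static void main(String[] args) {
        long[] pageTable = {7, 2, 5};   // page i maps to frame pageTable[i]
        // virtual 0x1ABC = page 1, offset 0xABC -> frame 2 -> physical 0x2ABC
        System.out.println(Long.toHexString(translate(0x1ABCL, pageTable)));
    }
}
```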

11. What are user mode and kernel mode?

A computer system runs two kinds of programs: system programs and application programs. To ensure that system programs are not damaged by applications, intentionally or otherwise, the CPU has two states: user mode and kernel mode.

  • User mode: the CPU has only limited access to memory; all application programs run in this mode.
  • Kernel mode: the CPU runs operating-system code and can access all data in memory, including peripherals.

12. What are the advantages and disadvantages of segmentation, paging, and segment-paging memory management?

Memory management methods: block management, page management, segment management, and segment-page management.

Segment management:

  • In segmented storage management, the program's address space is divided into segments such as a code segment, a data segment, and a stack segment; each process thus has a two-dimensional (segment, offset) address space, and processes are independent of one another. The advantage of segmentation is that there is no internal fragmentation, because a segment's size varies to fit its contents. However, swapping segments in and out produces external fragmentation: for example, swapping out a 5K segment and loading a 4K segment in its place leaves a 1K external fragment.

Paging management:

  • In paged storage management, the program's logical address space is divided into fixed-size pages, and physical memory is divided into page frames of the same size. When the program is loaded, any page can be placed in any frame, and the frames need not be contiguous, so allocation is fully discrete. The advantage of paging is that there is no external fragmentation (the page size is fixed), but there is internal fragmentation (the last page of a process is usually not full).
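
The internal fragmentation mentioned above is simple arithmetic: a process receives whole pages, so the slack in its last page is wasted. A small sketch with an assumed 4 KiB page size:

```java
// Internal fragmentation under paging: a process gets whole pages, so
// the last page is usually partly wasted. With 4 KiB pages, a 10 KiB
// process needs 3 pages (12 KiB) and wastes 2 KiB inside the last page.
public class FragmentationDemo {
    public static int internalFragmentation(int processSize, int pageSize) {
        int pages = (processSize + pageSize - 1) / pageSize; // round up
        return pages * pageSize - processSize;               // wasted bytes
    }

    public static void main(String[] args) {
        System.out.println(internalFragmentation(10 * 1024, 4 * 1024)); // 2048
    }
}
```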

Segment-paging management:

  • The segment-paging mechanism combines the advantages of segmentation and paging. Simply put, the address space is first divided into several segments, and each segment is then divided into pages; segments are discrete with respect to one another, and the pages within a segment are discrete too.

13. Tell me about the I/O model of the operating system? What is I/O multiplexing?

An I/O request has two stages:

  • Waiting-for-resource stage: an I/O request generally needs a particular resource (such as a disk, RAM, or a file). If the resource is still held by a previous user and has not been released, the I/O request blocks until the resource becomes available.
  • Using-resource stage: the actual sending or receiving of data.

In the waiting stage, I/O is divided into the blocking I/O model and the non-blocking I/O model:

  • Blocking I/O model: when the resource is unavailable, the I/O request blocks until a result (data or a timeout) comes back.
  • Non-blocking I/O model: when the resource is unavailable, the I/O request does not block but returns immediately, indicating that the resource is not ready; the process then polls repeatedly to check whether the resource has become available.

In the using-resource stage, I/O is divided into the synchronous I/O model and the asynchronous I/O model:

  • Synchronous I/O model: the application blocks while sending or receiving data until the transfer succeeds or a failure result is returned.
  • Asynchronous I/O model: the application returns immediately after issuing the send or receive; the data passes through the operating system's buffers, and the operating system notifies the application of success or failure once the transfer completes.

I/O multiplexing (the I/O multiplexing model):

  • I/O multiplexing lets one thread block on multiple I/O operations at once: it monitors many read and write channels simultaneously and returns as soon as any of them becomes readable or writable.

Because plain blocking I/O can block on only one I/O operation at a time, while this model blocks on many, it is called multiplexing.
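
In Java, this model is exposed through `java.nio.channels.Selector`. A minimal sketch: a `Pipe` stands in for a network socket (my substitution, to keep the example self-contained), and `select()` blocks until the watched channel is readable.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// I/O multiplexing with java.nio: one Selector watches a channel, and
// select() blocks until at least one registered channel is ready.
public class MultiplexDemo {
    public static int readyChannels() throws IOException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);           // required before register
        pipe.source().register(selector, SelectionKey.OP_READ);

        pipe.sink().write(ByteBuffer.wrap(new byte[] {42})); // make data readable
        int ready = selector.select();  // blocks until >= 1 channel is ready
        selector.close();
        pipe.sink().close();
        pipe.source().close();
        return ready;                   // number of ready channels
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readyChannels()); // 1
    }
}
```

In a real server, the selector would watch many sockets at once, and one thread would service whichever subset `select()` reports as ready.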

14. What is the producer consumer model?

For the producer:

  • When a producer has produced an item, it first checks whether the buffer has an empty slot. If so, it puts the item in; otherwise the buffer is full because consumers have not kept up, and the producer blocks and waits (it will be woken once a consumer consumes some data).
  • Likewise, if several producers compete for execution at the same time, only one can enter the producing state; the others are blocked.

For the consumer:

  • When a consumer wants to consume an item, it first checks whether the buffer contains data. If so, it consumes one item; otherwise it blocks and waits (it will be woken once a producer produces data).
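
The behavior described above maps directly onto Java's `wait`/`notifyAll` on a bounded buffer. A minimal sketch (class name, buffer capacity of 2, and the 5-item workload are illustrative choices):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Producer-consumer with a bounded buffer: the producer waits when the
// buffer is full, the consumer waits when it is empty, and each side
// notifies the other after changing the buffer.
public class ProducerConsumerDemo {
    private final Deque<Integer> buffer = new ArrayDeque<>();
    private final int capacity = 2;

    public synchronized void produce(int item) throws InterruptedException {
        while (buffer.size() == capacity) wait(); // full: block the producer
        buffer.addLast(item);
        notifyAll();                              // wake a waiting consumer
    }

    public synchronized int consume() throws InterruptedException {
        while (buffer.isEmpty()) wait();          // empty: block the consumer
        int item = buffer.removeFirst();
        notifyAll();                              // wake a waiting producer
        return item;
    }

    public static int run() throws InterruptedException {
        ProducerConsumerDemo pc = new ProducerConsumerDemo();
        final int[] sum = {0};
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 5; i++) pc.produce(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < 5; i++) sum[0] += pc.consume(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start(); consumer.start();
        producer.join(); consumer.join();
        return sum[0];                            // 1 + 2 + 3 + 4 + 5
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 15
    }
}
```

The `while` loops (rather than `if`) guard against spurious wakeups: a woken thread re-checks the condition before proceeding.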

Summarizing these interview questions takes quite a bit of time. Articles will be updated from time to time, sometimes several in one day. If this helps you review and consolidate your knowledge, please like, comment, and share; much more content is on the way!
