Operating system: a detailed explanation of processor scheduling (scheduling levels and algorithms such as FCFS, SPF, RR, etc.)

Table of Contents

Scheduling level

Processor scheduling algorithm

Evaluation index

Non-preemptive (non-deprivative)

Non-preemptive priority scheduling algorithm

First come first served (FCFS)

Short process first (SPF)

Response Ratio Priority Algorithm (HRRN)

Preemptive (deprivative)

Shortest remaining time first (SRTN)

Time slice round-robin scheduling algorithm (RR)

Preemptive priority scheduling algorithm

Multi-level feedback queue

Go implementation of HRRN

Design

Full code

Result

Reference


Scheduling level

1. High-level scheduling (also called long-term scheduling or job scheduling) schedules jobs. Its main function is to decide, according to some algorithm, which jobs in the backup queue on external storage are brought into memory; processes are created for them, the necessary resources are allocated, and they are placed in the ready queue. High-level scheduling is mainly used in multiprogrammed batch systems; time-sharing and real-time systems usually do not have it.
2. Low-level scheduling (also called process scheduling or short-term scheduling) schedules processes (or kernel-level threads). Its main function is to decide, according to some algorithm, which process in the ready queue gets the processor; the dispatcher then assigns the processor to the selected process. Process scheduling is the most basic kind of scheduling and must be present in all three types of OS: multiprogrammed batch, time-sharing, and real-time.
3. Intermediate scheduling (also called memory scheduling) is introduced mainly to improve memory utilization and system throughput. Processes that cannot run for the time being are swapped out to external storage to wait; their state is then called ready-resident-on-external-storage (or suspended). When they become runnable again and memory has room, intermediate scheduling decides which of the ready processes on external storage are reloaded into memory; their state changes back to ready and they wait in the ready queue.

Scheduling level comparison

  • Advanced (high-level): external memory -> memory (job-oriented); lowest frequency; state change: none -> created -> ready
  • Intermediate: external memory -> memory (process-oriented); medium frequency; state change: suspended -> ready (blocked-suspended -> blocked)
  • Low-level: memory -> CPU; highest frequency; state change: ready -> running

Processor scheduling algorithm

Let's take low-level scheduling as an example

The main tasks of process scheduling are:

  • Save the processor context . When scheduling, the processor context of the current process must be saved, such as the program counter and the contents of the general-purpose registers.
  • Select a process according to a certain algorithm . The scheduler selects a process from the ready queue according to a certain algorithm, changes its state to running , and prepares to assign the processor to it.
  • Assign the processor to the process . The dispatcher assigns the processor to the process: the processor-context information stored in the selected process's process control block is loaded into the corresponding registers, and control of the processor is handed to the process, which resumes from its last breakpoint.

Evaluation index

CPU utilization

CPU busy time / total time

System throughput

Number of processes completed / total time

Turnaround time

  • Turnaround time = process completion time - process submission time
  • Average turnaround time = sum of turnaround times of all processes / number of processes
  • Weighted turnaround time = process turnaround time / process running time; weighted turnaround time >= 1, and the closer it is to 1, the higher the user satisfaction
  • Average weighted turnaround time = sum of weighted turnaround times / number of processes

Waiting time

Waiting time = turnaround time - running time (time spent waiting for I/O is counted as running time here)

Average waiting time = sum of waiting times / number of processes

Response time

The time from when the user submits the request to the first response.

Non-preemptive (non-deprivative)

Suitable for early batch operating systems

Non-preemptive priority scheduling algorithm

Among the processes that have arrived, the one with the highest priority is selected for scheduling. If the priority is the same, the one that arrives first is selected.

Example

(The priority values are not listed here; assume they are such that they produce the order below.)

Scheduling sequence: P1->P3->P2->P4

  • P1: arrives at 0, runs 7, ends at 7
  • P2: arrives at 2, runs 4, ends at 12
  • P3: arrives at 4, runs 1, ends at 8
  • P4: arrives at 5, runs 4, ends at 16

Turnaround time

  • P1: 7-0=7
  • P2: 12-2=10
  • P3: 8-4=4
  • P4: 16-5=11

Average turnaround time: 8

Weighted turnaround time:

  • P1: 7/7=1
  • P2: 10/4=2.5
  • P3: 4/1=4
  • P4: 11/4=2.75

Average weighted turnaround time: 2.56

Waiting time:

  • P1: 7-7=0
  • P2: 10-4=6
  • P3: 4-1=3
  • P4: 11-4=7

Average waiting time: 4

First come first served (FCFS)

First come, first served: processes are dispatched in order of arrival

Advantages: fair, simple to implement, and never causes starvation

Disadvantages: favorable to processes with long running times, unfavorable to processes with short running times

Example

Consider the following processes. Given the arrival times and running times, compute each process's turnaround time, weighted turnaround time, and waiting time, along with the corresponding averages.

  • P1: arrives at 0, runs 7, ends at 7
  • P2: arrives at 2, runs 4, ends at 11
  • P3: arrives at 4, runs 1, ends at 12
  • P4: arrives at 5, runs 4, ends at 16

Turnaround time

  • P1: 7-0=7
  • P2: 11-2=9
  • P3: 12-4=8
  • P4: 16-5=11

Average turnaround time: 8.75

Weighted turnaround time:

  • P1: 7/7=1
  • P2: 9/4=2.25
  • P3: 8/1=8
  • P4: 11/4=2.75

Average weighted turnaround time: 3.5

Waiting time:

  • P1: 7-7=0
  • P2: 9-4=5
  • P3: 8-1=7
  • P4: 11-4=7

Average waiting time: 4.75

Short process first (SPF)

Because of the shortcomings of FCFS, SPF was proposed. Shortest process first: the process with the shortest running time has the highest priority; if running times are equal, the one that arrived first runs first.

Advantages: when processes arrive at nearly the same time, it yields the minimum average turnaround time and average waiting time

Disadvantages: favorable to short jobs, unfavorable to long jobs, and may cause starvation

Example

The processes are the same as above; the scheduling order is P1->P3->P2->P4

  • P1: arrives at 0, runs 7, ends at 7
  • P2: arrives at 2, runs 4, ends at 12
  • P3: arrives at 4, runs 1, ends at 8
  • P4: arrives at 5, runs 4, ends at 16

Turnaround time

  • P1: 7-0=7
  • P2: 12-2=10
  • P3: 8-4=4
  • P4: 16-5=11

Average turnaround time: 8

Weighted turnaround time:

  • P1: 7/7=1
  • P2: 10/4=2.5
  • P3: 4/1=4
  • P4: 11/4=2.75

Average weighted turnaround time: 2.56

Waiting time:

  • P1: 7-7=0
  • P2: 10-4=6
  • P3: 4-1=3
  • P4: 11-4=7

Average waiting time: 4

Response Ratio Priority Algorithm (HRRN)

Taking both waiting time and running time into account, HRRN combines FCFS and SPF . A long job's waiting time grows, so its response ratio rises, and it does not starve.

Response ratio = (waiting time + running time) / running time; the response ratio is >= 1, and a larger value means higher priority. Note that the response ratios of the remaining processes are recomputed each time a process finishes.

Example

The timeline is as follows (at each instant, the first process listed is the one selected to run; response ratios are in parentheses; finished processes are listed at the end without a ratio):

0: P1 (7/7, 7)

7: P3((3+1)/1=4) , P2((5+4)/4=2.25), P4((2+4)/4=1.5), P1

8: P2((6+4)/4=2.5) , P4((3+4)/4=1.75), P3 , P1

12: P4((7+4)/4=2.75) , P2 , P3, P1

16: P4 , P2, P3, P1

Scheduling sequence: P1->P3->P2->P4

Turnaround time

  • P1: 7-0=7
  • P2: 12-2=10
  • P3: 8-4=4
  • P4: 16-5=11

Average turnaround time: 8

Weighted turnaround time:

  • P1: 7/7=1
  • P2: 10/4=2.5
  • P3: 4/1=4
  • P4: 11/4=2.75

Average weighted turnaround time: 2.56

Waiting time:

  • P1: 7-7=0
  • P2: 10-4=6
  • P3: 4-1=3
  • P4: 11-4=7

Average waiting time: 4

Preemptive (deprivative)

Suitable for time-sharing and real-time operating systems

Shortest remaining time first (SRTN)

Shortest Remaining Time Next: the process with the shortest remaining time runs first; if remaining times are equal, the one that arrived first runs. Pay attention to the instants when a new process arrives and when a process completes .

Example

The timeline is as follows (at each instant, the first process listed is the one that runs; numbers in parentheses are remaining times; finished processes show 0):

0: P1(7)

2: P2(4) , P1(5)

4: P3(1) , P2(2), P1(5)

5: P2(2) , P4(4), P1(5), P3(0)

7: P4(4) , P1(5), P2(0) , P3(0)

11: P1(5) , P4(0) , P2(0), P3(0)

16: P1(0) , P4(0), P2(0), P3(0)

Scheduling sequence: P1->P2->P3->P2->P4->P1

  • P1: arrives at 0, runs 7, ends at 16
  • P2: arrives at 2, runs 4, ends at 7
  • P3: arrives at 4, runs 1, ends at 5
  • P4: arrives at 5, runs 4, ends at 11

Turnaround time

  • P1: 16-0=16
  • P2: 7-2=5
  • P3: 5-4=1
  • P4: 11-5=6

Average turnaround time: 7

Weighted turnaround time:

  • P1: 16/7≈2.29
  • P2: 5/4=1.25
  • P3: 1/1=1
  • P4: 6/4=1.5

Average weighted turnaround time: ≈1.51

Waiting time:

  • P1: 16-7=9
  • P2: 5-4=1
  • P3: 1-1=0
  • P4: 6-4=2

Average waiting time: 3

Time slice round-robin scheduling algorithm (RR)

Round-Robin: processes in the ready queue take turns receiving a time slice; a newly arrived process is placed ahead of a process whose slice expires at the same instant. If a process finishes before its time slice is used up, it releases the CPU immediately.

If the time slice is so large that every process can finish within one slice, round-robin degenerates into first-come, first-served and response times grow. So the time slice cannot be too large.
On the other hand, process scheduling and switching have a time cost (saving and restoring the execution context), so if the time slice is too small, switches become too frequent: the system spends much of its time on switching, and the proportion of time actually spent executing processes shrinks. So the time slice cannot be too small either.

Example (time slice = 2):

0: P1 arrives; P1 runs

2: P2 arrives; P1's time slice expires; ready queue: P2(4), P1(5); P2 runs

4: P3 arrives; P2's time slice expires; ready queue: P1(5), P3(1), P2(2); P1 runs

5: P4 arrives; ready queue: P3(1), P2(2), P4(4)

6: P1's time slice expires; ready queue: P3(1), P2(2), P4(4), P1(3); P3 runs

7: P3 finishes ; ready queue: P2(2), P4(4), P1(3); P2 runs

9: P2 finishes ; ready queue: P4(4), P1(3); P4 runs

11: P4's time slice expires; ready queue: P1(3), P4(2); P1 runs

13: P1's time slice expires; ready queue: P4(2), P1(1); P4 runs

15: P4 finishes ; ready queue: P1(1); P1 runs

16: P1 finishes

Scheduling sequence: P1->P2->P1->P3->P2->P4->P1->P4->P1

  • P1: arrives at 0, runs 7, first runs at 0, ends at 16
  • P2: arrives at 2, runs 4, first runs at 2, ends at 9
  • P3: arrives at 4, runs 1, first runs at 6, ends at 7
  • P4: arrives at 5, runs 4, first runs at 9, ends at 15

Response time:

  • P1: 0-0=0
  • P2: 2-2=0
  • P3: 6-4=2
  • P4: 9-5=4

Preemptive priority scheduling algorithm

Used more in real-time operating systems. Pay attention to changes in the ready queue (a newly arrived process may have a higher priority than the running one) and to the instants when a process releases the CPU.

Advantages: priority can be dynamically adjusted in real time, suitable for real-time operating systems, very flexible

Disadvantages: if high-priority processes keep arriving, low-priority processes may starve

  • System process priority is higher than user process
  • The foreground process has a higher priority than the background process
  • The operating system favors I/O-bound processes

Example

Scheduling sequence: P1->P2->P3->P2->P4->P1

  • P1: arrives at 0, runs 7, priority 1, ends at 16
  • P2: arrives at 2, runs 4, priority 2, ends at 7
  • P3: arrives at 4, runs 1, priority 3, ends at 5
  • P4: arrives at 5, runs 4, priority 2, ends at 11

The timeline is as follows (numbers in parentheses are remaining time and priority; a larger priority number means higher priority):

0: P1 arrives; ready queue: P1(7,1); P1 runs

2: P2 arrives with higher priority; P2 runs ; P1(5,1) returns to the ready queue

4: P3 arrives with higher priority; P3 runs ; ready queue: P2(2,2), P1(5,1)

5: P3 finishes ; P4 arrives; ready queue: P2(2,2), P4(4,2), P1(5,1); P2 runs

7: P2 finishes ; ready queue: P4(4,2), P1(5,1); P4 runs

11: P4 finishes ; ready queue: P1(5,1); P1 runs

16: P1 finishes

Response time:

  • P1: 0-0=0
  • P2: 2-2=0
  • P3: 4-4=0
  • P4: 7-5=2

Multi-level feedback queue

A trade-off that combines the strengths of the preceding algorithms.

Set up multiple ready queues, with queue priority going from high to low and time slices going from small to large :

  • A newly arrived process first enters the level-1 queue and waits for its time slice according to the FCFS principle.
  • If a process uses up its time slice without finishing , it moves to the tail of the next-level queue . If it is already in the lowest-level queue, it returns to the tail of that queue .
  • Only when the level-k queue is empty is a time slice given to the process at the head of the level-(k+1) queue .
  • A process preempted of the processor goes back to the tail of its original queue .

FCFS and SPF are too simple, so let's implement HRRN.

Go implementation of HRRN

Design

Process structure design

// Process
type Process struct {
	Pid      int    // process id
	Pname    string // process name
	Runtime  int    // running time
	Priority int    // priority number
}

The priority field is not used here; it is kept so that other algorithms can be implemented later on the same structure.

Design of Ready Queue Node

// Node is a ready-queue node
type Node struct {
	p        *Process // process
	Arrtime  int      // arrival time
	Waittime int      // waiting time
}

The ready queue records when each process arrived and how long it has been waiting. The queue is implemented with a slice; in C++ you could use std::vector or another STL container.

Processor design

// Processor
type Processor struct {
	p *Process
}

// Run executes the current process for one time unit
func (p *Processor) Run() bool {
	println("running", p.p.Pname)
	p.p.Runtime--
	if p.p.Runtime == 0 {
		println(p.p.Pname, "finish")
		p.p = nil // remove the process from memory
		return true
	}
	return false
}

The processor holds the current process and can run it; when the process finishes, it is removed from memory.

Full code

package main

import (
	"fmt"
)

// Process
type Process struct {
	Pid      int    // process id
	Pname    string // process name
	Runtime  int    // running time
	Priority int    // priority number
}

// New creates a process with a fixed priority number
func New(pid, runtime int, pname string) *Process {
	return &Process{
		Pid:      pid,
		Pname:    pname,
		Runtime:  runtime,
		Priority: 0,
	}
}

// Node is a ready-queue node
type Node struct {
	p        *Process // process
	Arrtime  int      // arrival time
	Waittime int      // waiting time
}

// Pop removes the process at index from the ready queue
func Pop(b *[]Node, index int) {
	// ----- index out of range -----
	if index < 0 || index >= len(*b) {
		return
	}
	// ----- remove from the front -----
	if index == 0 {
		*b = (*b)[1:]
		return
	}
	// ----- remove from the back -----
	if index == len(*b)-1 {
		*b = (*b)[:len(*b)-1]
		return
	}
	// ----- remove from the middle -----
	*b = append((*b)[0:index], (*b)[index+1:]...)
}

// Processor
type Processor struct {
	p *Process
}

// Run executes the current process for one time unit
func (p *Processor) Run() bool {
	println("running", p.p.Pname)
	p.p.Runtime--
	if p.p.Runtime == 0 {
		println(p.p.Pname, "finish")
		p.p = nil // remove the process from memory
		return true
	}
	return false
}

// ProcessCome handles process arrivals
func ProcessCome(queue []Node, time int) []Node {
	if time == 0 {
		queue = append(queue, Node{
			p:       New(0, 7, "P1"),
			Arrtime: 0,
		})
		fmt.Printf("%d %s arrives\n", time, "P1")
	}
	if time == 2 {
		queue = append(queue, Node{
			p:       New(1, 4, "P2"),
			Arrtime: 2,
		})
		fmt.Printf("%d %s arrives\n", time, "P2")
	}
	if time == 4 {
		queue = append(queue, Node{
			p:       New(2, 1, "P3"),
			Arrtime: 4,
		})
		fmt.Printf("%d %s arrives\n", time, "P3")
	}
	if time == 5 {
		queue = append(queue, Node{
			p:       New(3, 4, "P4"),
			Arrtime: 5,
		})
		fmt.Printf("%d %s arrives\n", time, "P4")
	}
	return queue
}

// HRRN selects a process from the ready queue and assigns it to the processor
func HRRN(queue []Node, p *Processor) {
	// whether the process on the processor has finished
	finish := true
	for time := 0; ; time++ {
		queue = ProcessCome(queue, time)
		println("queue length:", len(queue))
		// no process in the ready queue or on the processor: stop
		if len(queue) == 0 && p.p == nil {
			break
		}
		// the processor needs a new process
		if finish == true {
			max := 0.0
			index := -1
			var runP Node
			// scan the ready queue
			for i, q := range queue {
				ratio := 1.0 + float64(q.Waittime)/float64(q.p.Runtime)
				fmt.Printf("%s response ratio %f\n", q.p.Pname, ratio)
				if ratio > max {
					max = ratio
					index = i
					runP = q
				} else if ratio == max {
					// equal response ratios: the earlier arrival runs first
					if runP.Arrtime > q.Arrtime {
						index = i
						runP = q
					}
				}
			}
			// move the selected process onto the processor
			p.p = runP.p
			// and remove it from the ready queue
			Pop(&queue, index)
		}
		// run the process on the processor
		if p.p != nil {
			finish = p.Run()
		}
		// queued processes accumulate waiting time
		for i := range queue {
			queue[i].Waittime++
		}
	}
	fmt.Println("no process in the ready queue; the processor is listening")
}

func main() {
	// ready queue
	queue := make([]Node, 0)
	// processor
	p := Processor{}
	HRRN(queue, &p)
}

Result

0 P1 arrives
queue length: 1
P1 response ratio 1.000000
running P1
queue length: 0
running P1
2 P2 arrives
queue length: 1
running P1
queue length: 1
running P1
4 P3 arrives
queue length: 2
running P1
5 P4 arrives
queue length: 3
running P1
queue length: 3
running P1
P1 finish
queue length: 3
P2 response ratio 2.250000
P3 response ratio 4.000000
P4 response ratio 1.500000
running P3
P3 finish
queue length: 2
P2 response ratio 2.500000
P4 response ratio 1.750000
running P2
queue length: 1
running P2
queue length: 1
running P2
queue length: 1
running P2
P2 finish
queue length: 1
P4 response ratio 2.750000
running P4
queue length: 0
running P4
queue length: 0
running P4
queue length: 0
running P4
P4 finish
queue length: 0
no process in the ready queue; the processor is listening

If there is no process in either the ready queue or on the processor, the processor should stay in a listening state rather than shut down. Since the process information is hard-coded in the ProcessCome function instead of being entered by the user, the program simply terminates here; after all, the point is only to get familiar with the algorithm. The core of the algorithm: whenever the process on the processor finishes, the process with the highest response ratio is selected and dispatched to the processor; when response ratios are equal, the earlier arrival is served first.

Reference

"Computer Operating System Tang Xiaodan, Tang Xiaoying, etc."