Operating System Unit 2 Part 2 Notes | BEU BTech CSE (New Syllabus 2025)

2025 Operating System Unit 2 Part 2 notes for BEU / AKU BTech CSE 3rd Semester based on the new syllabus. Covers SJF, SRTF, Round Robin, preemptive and non-preemptive scheduling, multiprocessor scheduling, RM, EDF, thread types, and multithreading models with simple explanations for exam preparation.


Operating System Unit 2 is one of the most scoring chapters in the BEU / AKU BTech CSE 3rd semester new syllabus. This unit covers some of the most important core topics such as processes, process states, PCB, context switching, threads, multithreading models, CPU scheduling, FCFS, SJF, SRTF, Round Robin, multiprocessor scheduling, and real-time scheduling algorithms like RM and EDF.

These OS notes are prepared strictly according to the latest BEU CSE syllabus and explained in a simple, exam-oriented way so that any student can understand the concepts clearly. If you are searching for OS Unit 2 notes for BTech, Operating System notes for BEU, or 3rd-semester process scheduling notes, this guide will help you revise everything quickly and score full marks.

Bihar Engineering University BTech Computer Science 3rd semester Operating System Unit 1 notes are already covered; you can visit NotesNav.

BEU 3RD SEMESTER OPERATING SYSTEMS UNIT-2 NOTES (Part 2). Visit Part 1 for the first half of this unit.

Unit-2.0. Processes: Definition, Process Relationship, Different states of a Process, Process State transitions, Process Control Block (PCB), Context switching.

Thread: Definition, Various states, Benefits of threads, Types of threads, Concept of multithreads.

Process Scheduling: Foundation and Scheduling objectives, Types of Schedulers, Scheduling criteria: CPU utilization, Throughput, Turnaround Time, Waiting Time, Response Time; Scheduling algorithms: Pre-emptive and Non-pre-emptive, FCFS, SJF, RR; Multiprocessor scheduling; Real-Time scheduling: RM and EDF.

Preemptive Scheduling

Preemptive scheduling allows the operating system to stop a running process and replace it with another process. The OS can take back the CPU at any time, usually when a higher-priority process arrives or when a time slice (quantum) finishes. This makes the system more responsive because important or short tasks can interrupt long tasks whenever needed.

Important Points

  • The OS can forcefully remove a process from the CPU.

  • A higher-priority process can interrupt a lower-priority one.

  • Used in time-sharing systems where fairness and responsiveness are important.

  • Allows shorter jobs to complete faster because they can preempt longer jobs.

  • Helps reduce response time in interactive systems.

  • Requires more context switching, which increases overhead.

  • Algorithms like Round Robin, SRTF, and Preemptive Priority use this approach.

 

Non-Preemptive Scheduling

Non-preemptive scheduling does not allow the OS to interrupt a running process. Once a process gets the CPU, it continues until it finishes or voluntarily gives up the CPU (e.g., for I/O). Because of this, scheduling is simpler, but responsiveness can be lower. Long processes may delay shorter ones, making the system feel slow.

Important Points

  • A running process keeps the CPU until it completes or blocks for I/O.

  • No forced CPU removal by the OS.

  • Simple to implement because no frequent context switching.

  • Can lead to long waiting times if a long job arrives first.

  • Suitable for batch systems where tasks are processed fully.

  • Less overhead because fewer context switches occur.

  • Algorithms like FCFS, SJF (Non-Preemptive), and Non-Preemptive Priority follow this model.

List of Scheduling Algorithms

Non-Preemptive Algorithms

  • First Come First Serve (FCFS)

  • Shortest Job First (SJF)

  • Non-Preemptive Priority Scheduling

Preemptive Algorithms

  • Shortest Remaining Time First (SRTF)

  • Preemptive Priority Scheduling

  • Round Robin (RR)

Multiprocessor Scheduling

  • Load Sharing

  • Symmetric Multiprocessing (SMP) Scheduling

  • Asymmetric Multiprocessing Scheduling

Real-Time Scheduling Algorithms

  • Rate Monotonic (RM)

  • Earliest Deadline First (EDF)

 


 

Difference Between Preemptive and Non-Preemptive Scheduling

| Preemptive Scheduling | Non-Preemptive Scheduling |
| --- | --- |
| The OS can forcibly stop a running process and assign the CPU to another process. | Once a process starts execution, it cannot be interrupted until it finishes or blocks. |
| CPU allocation changes based on events like the arrival of a higher-priority or shorter job. | CPU allocation is simple: the current process continues until completion. |
| More suitable for interactive and real-time systems because response time is better. | Suitable for batch systems where tasks run to completion without needing interaction. |
| Supports algorithms like Preemptive Priority, Round Robin (RR), SRTF. | Supports algorithms like FCFS, SJF (non-preemptive), Non-Preemptive Priority. |
| Provides better responsiveness because urgent tasks can run immediately. | Response time is poor if a long task starts first and blocks shorter ones. |
| Requires more context switching, increasing overhead. | Very little context switching → low overhead and simpler implementation. |
| Reduces waiting time for short processes by interrupting long ones. | Short processes may wait long behind one big process → convoy effect. |
| Complex to implement due to frequent switching and priority updates. | Simple to implement because scheduling decisions are straightforward. |
| Difficult to predict exact completion time because task execution keeps switching. | Easy to predict because a process runs continuously once scheduled. |
| No starvation guarantee unless techniques like aging are used. | May cause starvation in priority-based non-preemptive scheduling, but not in FCFS. |
| Helps CPU-bound and interactive tasks share the CPU smoothly. | CPU-bound processes dominate; interactive tasks suffer. |

FCFS (First Come First Serve) Scheduling

FCFS is the simplest CPU scheduling method where the process that arrives first is served first, just like people standing in a queue. The operating system places every incoming process in the ready queue, and the CPU selects the one that has been waiting the longest. Once a process gets the CPU, it runs until it finishes because FCFS is non-preemptive, meaning no other process can interrupt it. This makes FCFS easy to implement, but it can cause long waiting times when a large process arrives before smaller ones. Because the CPU follows strict arrival order, the performance depends heavily on the order in which processes come into the system.

Characteristics of FCFS

  • FCFS selects the process based on arrival order, giving the CPU to the one that entered the ready queue first.

  • Works on a simple FIFO (First In First Out) queue.

  • It is non-preemptive, so once the CPU starts running a process, it continues until the process finishes without interruption.

  • FCFS is very simple to implement because it only requires a basic queue where processes wait in the order they arrive.

  • This scheduling can lead to long delays for short processes when a long job arrives first, a problem known as the “convoy effect.”

  • Waiting time and turnaround time can become high if the arrival order is unfavorable, especially when burst times vary a lot.

  • FCFS works well when all processes have similar CPU burst times or when the system load is low.

  • It does not consider process priority or burst time, so sometimes the CPU is not used in the most efficient way.

  • FCFS is fair in the sense that every process is treated equally based only on arrival time.

How FCFS Works (Simple Example)

Suppose three processes arrive in this order:

| Process | Arrival Time | Burst Time |
| --- | --- | --- |
| P1 | 0 | 5 |
| P2 | 1 | 3 |
| P3 | 2 | 2 |

Execution order = P1 → P2 → P3 (because P1 arrived first)

Even though P3 is the shortest, it waits until P1 and P2 finish.

Advantages of FCFS Scheduling

  • FCFS is very simple to understand and implement because it uses a normal FIFO queue.

  • It has very low overhead since no complex scheduling decisions or priorities are needed.

  • FCFS is fair in the sense that processes are served strictly in the order they arrive.

  • Works well for batch systems where tasks are large and user interaction is not required.

  • There is no risk of starvation because every process will eventually get the CPU.

  • Easy to maintain and debug due to straightforward execution flow.

  • Suitable for systems where predictability and order of arrival are important.

 

Disadvantages of FCFS Scheduling

  • Long processes delay all shorter processes behind them, causing the convoy effect.

  • Average waiting time can become very high if long jobs arrive early.

  • Not suitable for time-sharing or interactive systems because response time becomes poor.

  • The algorithm is non-preemptive, so the OS cannot stop a running process even if a more important task arrives.

  • Short processes suffer heavily if they are unlucky in arrival order.

  • Overall utilization may drop when many I/O-bound processes sit waiting behind one long CPU-bound job (the convoy effect), leaving I/O devices idle.

  • Does not consider process priorities or burst times, making it inefficient for mixed workloads.

 

Given Question

Processes come in this order (FCFS always follows the given order):

  • P₁ → Burst Time = 3

  • P₂ → Burst Time = 10

  • P₃ → Burst Time = 5

  • P₄ → Burst Time = 2

Arrival times are not written, so we assume all arrive at the same time (time = 0).


STEP 1 → Write processes in arrival order

Since FCFS means first come first serve, and all arrive together:

P₁ → P₂ → P₃ → P₄
This is the running order.


STEP 2 → Start from time 0 and run one process completely

Imagine the CPU is like a single-person barber shop.
The first customer (P₁) sits first and finishes fully…
Then the next customer (P₂), and so on.

We build a time line.

P₁ runs first

P₁ needs 3 time units.

  • It starts at time 0

  • It finishes at time 3

So completion time of P₁ = 3

P₂ runs next

P₂ needs 10 time units.

  • It starts when P₁ finished → time 3

  • It finishes at 3 + 10 = 13

Completion time of P₂ = 13

P₃ runs next

P₃ needs 5 time units.

  • It starts when P₂ finished → time 13

  • It finishes at 13 + 5 = 18

Completion time of P₃ = 18

P₄ runs last

P₄ needs 2 time units.

  • It starts when P₃ finished → time 18

  • It finishes at 18 + 2 = 20

Completion time of P₄ = 20


Gantt Chart (Timeline)

  • From 0 to 3 → P₁ runs

  • From 3 to 13 → P₂ runs

  • From 13 to 18 → P₃ runs

  • From 18 to 20 → P₄ runs


STEP 3 → Turnaround Time (TAT)

Easy formula:
TAT = Completion Time – Arrival Time

Arrival time = 0 for all, so
TAT = Completion Time

So:

  • P₁ → TAT = 3

  • P₂ → TAT = 13

  • P₃ → TAT = 18

  • P₄ → TAT = 20


STEP 4 → Waiting Time (WT)

Waiting time means:
“How long the process waited in the ready queue (not running)?”

Formula:
WT = TAT – Burst Time

Let's calculate:

  • P₁ → WT = 3 – 3 = 0

  • P₂ → WT = 13 – 10 = 3

  • P₃ → WT = 18 – 5 = 13

  • P₄ → WT = 20 – 2 = 18


STEP 5 → Average values

Average Turnaround Time

(3 + 13 + 18 + 20) ÷ 4
= 54 ÷ 4
= 13.5

Average Waiting Time

(0 + 3 + 13 + 18) ÷ 4
= 34 ÷ 4
= 8.5


FINAL ANSWERS

  • Average Turnaround Time = 13.5

  • Average Waiting Time = 8.5
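
The same calculation can be automated. Below is a minimal Python sketch that reproduces the averages above (the helper name fcfs and the printed labels are illustrative choices, not part of the syllabus):

```python
# FCFS: serve processes strictly in arrival order, one at a time.
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time) in arrival order."""
    time, results = 0, []
    for name, arrival, burst in processes:
        start = max(time, arrival)         # CPU idles if nothing has arrived yet
        completion = start + burst
        turnaround = completion - arrival  # TAT = CT - AT
        waiting = turnaround - burst       # WT = TAT - BT
        results.append((name, completion, turnaround, waiting))
        time = completion
    return results

res = fcfs([("P1", 0, 3), ("P2", 0, 10), ("P3", 0, 5), ("P4", 0, 2)])
print("Avg TAT =", sum(r[2] for r in res) / len(res))  # 13.5
print("Avg WT  =", sum(r[3] for r in res) / len(res))  # 8.5
```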

Shortest Job First (SJF) Scheduling

Shortest Job First is a CPU scheduling method where the process with the smallest CPU burst time is selected first. Instead of executing processes based on arrival order, SJF prioritizes the jobs that will finish the quickest, which helps reduce overall waiting and turnaround time.

It is a non-preemptive algorithm in its basic form, so once a process starts running, it continues until completion.

A major challenge is that SJF depends on knowing the CPU burst time in advance, which real operating systems cannot accurately predict.

 SJF also has a preemptive variation known as Shortest Remaining Time First (SRTF), where a currently running process can be interrupted if a new process arrives with a shorter remaining burst.

Important Points

  • Selects the process with the smallest burst time for execution.

  • Basic SJF is non-preemptive: once started, a process runs until it finishes.

  • SJF provides the lowest possible average waiting time compared to all non-preemptive algorithms.

  • Requires the burst time of each process, which is difficult to predict in real operating systems.

  • Real systems estimate burst time using previous CPU usage patterns, but prediction is not always accurate.

  • Has a preemptive variant called Shortest Remaining Time First (SRTF) where CPU can be taken away if a shorter process arrives.

  • Can cause starvation for long processes if short processes keep entering the system.

  • Works best in batch systems where burst times are known in advance.

  • Not suitable for interactive or real-time systems due to its non-preemptive nature (in SJF version).

How it works

  • When multiple processes are available, the one with the shortest burst time is selected.

  • In the non-preemptive version, once a process begins, it finishes its execution without interruption.

  • In the preemptive version (SRTF), the system continuously checks if a new process has a shorter remaining time. If so, it switches to that process. 

Example scenario

Imagine processes arrive at time 0 with burst times 5, 3, and 1.

  • Non-preemptive SJF: The process with burst time 1 runs first, then the one with burst time 3, and finally the one with burst time 5.

  • Preemptive SJF (SRTF): The process with burst time 1 still runs first and completes. But if a new process with a burst of 0.5 arrives while the process with burst time 3 is running, the running process is preempted and the newcomer runs first, because its remaining time (0.5) is shorter.

 

Advantages of SJF Scheduling

  • SJF gives the lowest average waiting time among all non-preemptive algorithms because short processes finish quickly and do not block others for long.

  • It provides very good turnaround time performance, especially when most processes are short.

  • The algorithm increases overall system throughput because more jobs finish in a shorter period.

  • Small tasks get completed early, which improves user satisfaction and system efficiency in batch environments.

  • Works well when the CPU burst times are known or predictable, such as offline job scheduling or controlled environments.

  • Reduces unnecessary delays because long jobs do not run before short jobs unless they arrived earlier.

 

Disadvantages of SJF Scheduling

  • SJF requires knowing or estimating CPU burst time, which real operating systems cannot determine exactly, making it impractical without prediction methods.

  • It can cause starvation for long processes because frequent short jobs keep getting priority and long ones may wait indefinitely.

  • Not suitable for interactive or real-time systems because it is non-preemptive; once a long job starts, it cannot be interrupted.

  • If burst time estimation is wrong, the algorithm may behave unpredictably or inefficiently.

  • Implementing burst time prediction (like exponential averaging) adds extra complexity to the scheduler.

  • Arrival order may become irrelevant, so sometimes a slightly earlier process waits longer if a shorter job appears later.

  • The non-preemptive nature of SJF makes the system less responsive during long CPU-bound tasks.

 

SJF (Non-Preemptive) Example

Question

Four processes arrive at the same time, and we need to schedule them using Shortest Job First (SJF).
Find completion time, turnaround time, waiting time, and averages.

| Process | Arrival Time | Burst Time |
| --- | --- | --- |
| P₁ | 0 | 7 |
| P₂ | 0 | 4 |
| P₃ | 0 | 1 |
| P₄ | 0 | 5 |

Answer: Since all processes arrive at the same time, we first check arrival time (all are 0).
After that, SJF says we must always choose the process with the smallest burst time.

Burst times are: 7, 4, 1, 5.
So the order becomes (smallest burst time first): P₃ → P₂ → P₄ → P₁.

Now we start the CPU timeline at time 0.
P₃ runs first and finishes at time 1.
Then P₂ starts at time 1 and finishes at time 5.
Then P₄ runs from 5 to 10.
Finally P₁ runs from 10 to 17.

Once we know completion time, turnaround time is simply completion time minus arrival time.
Since arrival time is 0 for all, turnaround time becomes the same as completion time.
Waiting time is turnaround time minus burst time.

Final Table

| Process | AT | BT | CT | TAT = CT - AT | WT = TAT - BT |
| --- | --- | --- | --- | --- | --- |
| P₃ | 0 | 1 | 1 | 1 - 0 = 1 | 1 - 1 = 0 |
| P₂ | 0 | 4 | 5 | 5 - 0 = 5 | 5 - 4 = 1 |
| P₄ | 0 | 5 | 10 | 10 - 0 = 10 | 10 - 5 = 5 |
| P₁ | 0 | 7 | 17 | 17 - 0 = 17 | 17 - 7 = 10 |
 

Average turnaround time = (1 + 5 + 10 + 17) ÷ 4 = 8.25
Average waiting time = (0 + 1 + 5 + 10) ÷ 4 = 4
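
For quick verification, here is a small Python sketch of non-preemptive SJF under the same assumption that all processes arrive at time 0 (the function name sjf is an illustrative choice):

```python
# Non-preemptive SJF with all arrivals at t = 0: just sort by burst time.
def sjf(processes):
    """processes: list of (name, burst_time), all assumed ready at t = 0."""
    time, rows = 0, []
    for name, burst in sorted(processes, key=lambda p: p[1]):
        time += burst     # completion time of this process
        tat = time        # arrival = 0, so TAT = CT
        wt = tat - burst  # WT = TAT - BT
        rows.append((name, time, tat, wt))
    return rows

rows = sjf([("P1", 7), ("P2", 4), ("P3", 1), ("P4", 5)])
print("Avg TAT =", sum(r[2] for r in rows) / len(rows))  # 8.25
print("Avg WT  =", sum(r[3] for r in rows) / len(rows))  # 4.0
```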

 

Difference Between SJF and SRTF

| SJF (Shortest Job First) | SRTF (Shortest Remaining Time First) |
| --- | --- |
| Non-preemptive: once a process starts, it runs until it finishes. | Preemptive: a running process can be interrupted anytime. |
| Priority is based on the full burst time of the process. | Priority is based on the remaining burst time, which changes continuously. |
| A newly arrived process cannot replace the running process, even if it is shorter. | A newly arrived process can replace the current process if it has a shorter remaining time. |
| Simpler to implement because the CPU is not interrupted once a job starts. | More complex because scheduling decisions change whenever a new process enters. |
| May give longer average response time because short jobs wait for ongoing big jobs. | Gives excellent response time because short jobs run immediately. |
| Very little context switching (almost none). | High context switching because the CPU may change tasks frequently. |
| Starvation can occur for long processes if shorter processes keep appearing. | Starvation is even more likely because short tasks repeatedly preempt long tasks. |
| Used in batch systems where execution time is predictable. | Used where responsiveness is important and short tasks must be served immediately. |
| Decision taken only when the CPU becomes free. | Decision taken continuously whenever any new process arrives. |
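
To see the preemptive side of this comparison in action, here is a tick-by-tick SRTF sketch; the process data (name, arrival, burst) is an assumed example, not taken from the notes:

```python
# SRTF: at every time unit, run the arrived process with the least
# remaining burst time; a shorter newcomer preempts the current one.
def srtf(processes):
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:            # nothing has arrived yet: CPU idles
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1  # run the chosen process for one tick
        time += 1
        if remaining[current] == 0:
            finish[current] = time
            del remaining[current]
    return finish

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
# {'P3': 4, 'P2': 7, 'P1': 14}
```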

Round Robin (RR) Scheduling

Round Robin is a CPU scheduling algorithm designed for time-sharing systems. Every process gets a fixed time slice called a time quantum, and after using its quantum, the process moves to the back of the ready queue. This ensures that all processes get a fair chance to run and no process can hold the CPU for too long.

RR makes the system responsive because even long processes are forced to pause and let others run. It is widely used in modern operating systems because it prevents starvation and gives fast response time for interactive applications.

Important Points

  • Each process gets the CPU for a fixed time quantum (e.g., 2 ms, 4 ms).

  • When a process exhausts its quantum, it is moved to the back of the ready queue.

  • RR is preemptive, meaning the OS forcibly stops the process after the quantum ends.

  • Ensures fairness because every process gets CPU time in rotation.

  • Very suitable for time-sharing systems, interactive tasks, and operating systems with many users.

  • The choice of time quantum affects performance:

    • Too small → too many context switches

    • Too large → behaves like FCFS

  • Does not cause starvation because each process regularly receives CPU time.

  • Response time is very good because all processes get CPU quickly.

 

How it works

Time Quantum: A small, fixed unit of time is set for each process. This is also called the time slice.

Circular Queue: A ready queue, typically a circular queue, holds processes ready to be executed.

Execution: The scheduler picks the process at the front of the queue and lets it run for a duration up to the time quantum.

Preemption: If the process finishes before the time quantum, it is removed from the system. If the time quantum expires before the process finishes, the CPU is preempted (forcefully taken).

Re-queueing: The preempted process is placed at the end of the circular queue to wait for its next turn.

Turn-based: The scheduler then moves to the next process in the queue, repeating the cycle until all processes are complete.
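
A short Python sketch of this cycle, using a FIFO deque as the ready queue (the quantum of 2 and the burst times are assumed example values):

```python
from collections import deque

# Round Robin: each process runs for at most one quantum, then goes
# to the back of the ready queue if it still has work left.
def round_robin(processes, quantum):
    """processes: list of (name, burst_time), all assumed ready at t = 0."""
    queue, time, finish = deque(processes), 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run up to one quantum
        time += run
        remaining -= run
        if remaining == 0:
            finish[name] = time              # process completed
        else:
            queue.append((name, remaining))  # preempted: requeue at the back
    return finish

print(round_robin([("P1", 5), ("P2", 3), ("P3", 2)], quantum=2))
# {'P3': 6, 'P2': 9, 'P1': 10}
```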

 

Advantages of Round Robin

  • Provides excellent fairness because all processes get equal CPU time.

  • Ensures no starvation, even when many processes are present.

  • Gives good response time, making it ideal for interactive systems.

  • Easy to implement using a simple circular queue structure.

  • Offers better performance than FCFS for CPU-bound processes.

  • Every process gets a predictable, repeated chance to run.

  • Works well for multitasking and multiuser operating systems.

 

Disadvantages of Round Robin

  • Too many context switches if time quantum is very small, reducing CPU efficiency.

  • Higher overhead because switching happens frequently.

  • If the time quantum is too large, RR behaves like FCFS and loses its benefits.

  • Turnaround time can be large for long processes since they repeatedly wait for their quantum.

  • Not optimal for minimizing waiting time or turnaround time.

  • Performance depends heavily on the correct choice of time quantum.

Multiprocessor Scheduling

Multiprocessor scheduling refers to the methods the operating system uses to assign processes and threads to multiple CPUs or cores in a system. Unlike single-CPU scheduling, where only one process runs at a time, multiprocessor systems allow several processes to run simultaneously on different processors. The scheduler must decide how to distribute work across all CPUs so that none remain idle while others are overloaded. Multiprocessor scheduling improves performance, increases throughput, and supports true parallelism, but also brings challenges such as load balancing, processor affinity, and synchronization between running tasks.

Important Points

  • The OS must decide which process runs on which CPU among multiple processors.

  • Tasks can run in parallel, improving performance and reducing execution time.

  • The scheduler tries to keep all CPUs busy to avoid idle time and maximize throughput.

  • Load balancing is necessary to distribute tasks evenly across processors.

  • Some systems avoid unnecessary movement of tasks between CPUs to maintain cache affinity.

  • Multiprocessor scheduling must manage race conditions and synchronization between threads.

  • Used in modern systems like multicore laptops, servers, smartphones, and cloud machines.

  • Supports true parallelism where multiple CPU cores execute different tasks at the same time.

  • Multiprocessor systems may use Symmetric or Asymmetric scheduling.

 

Types of Multiprocessor Scheduling

  1. Asymmetric Multiprocessing (AMP)

  2. Symmetric Multiprocessing (SMP)

  3. Processor Affinity (Soft & Hard Affinity)

  4. Load Sharing / Load Balancing

  5. Gang Scheduling

  6. Multicore Scheduling

  7. Real-Time Multiprocessor Scheduling

  8. Space-Shared Scheduling

  9. Time-Shared Scheduling

Asymmetric Multiprocessing (AMP)

Asymmetric Multiprocessing is a scheduling method where only one processor, called the master processor, controls the entire system. The master CPU performs all operating system tasks such as scheduling, memory management, and handling interrupts. The remaining processors, called slave processors, run only user programs and depend on the master for all OS-related decisions. This structure reduces scheduling complexity because only one CPU makes the decisions, but it can create a bottleneck if too many tasks depend on the master. AMP is commonly used in older systems, small embedded devices, or specialized hardware where simple control is more important than high performance.

Important Points

  • Only one processor acts as the master and makes all scheduling decisions.

  • Other processors (slave CPUs) execute user programs but do not run OS code.

  • Reduces complexity because only one CPU handles all OS tasks.

  • Easier to design and implement than symmetric multiprocessing.

  • Can create a bottleneck if the master processor becomes overloaded.

  • Not suitable for large systems requiring high scalability or parallelism.

  • Useful in embedded systems or devices with limited resources.

  • Provides predictable and centralized control in real-time environments.

  • Slave processors must wait for the master to assign tasks, reducing flexibility.

Symmetric Multiprocessing (SMP)

Symmetric Multiprocessing is a scheduling model where all processors are equal, and each one can run both user programs and operating system tasks.

There is no master-slave structure; instead, every CPU works independently while sharing the same memory, OS, and ready queue. Any processor can execute any task, and all processors participate in scheduling decisions.

This makes the system faster, more balanced, and more scalable because workload is distributed evenly across all CPUs. SMP is used in almost all modern computers, laptops, servers, and smartphones because it provides high performance and efficient parallelism.

Common Ready Queue: All CPUs access a shared queue of ready processes.
Private Ready Queues: Each CPU has its own queue of ready processes.

 

Important Points

  • All processors are equal; no master or slave CPU exists.

  • Each processor can run OS functions, schedule tasks, and execute user processes.

  • Uses a shared ready queue, allowing load balancing across processors.

  • Improves performance because multiple CPUs work simultaneously.

  • Provides true parallelism and faster execution for multi-threaded programs.

  • More scalable than AMP because adding more CPUs enhances performance.

  • Requires proper synchronization to avoid conflicts in shared memory.

  • Used in modern operating systems like Windows, Linux, Android, macOS.

  • Better resource utilization because no single CPU becomes a bottleneck.

  • Slightly more complex to design due to shared OS access and race conditions.

 

Difference Between AMP and SMP

| Asymmetric Multiprocessing (AMP) | Symmetric Multiprocessing (SMP) |
| --- | --- |
| Only one CPU is the master; it performs all OS operations. | All CPUs are equal; every CPU can run OS code and user processes. |
| Slave CPUs execute only user processes and depend on the master for scheduling. | All CPUs participate in scheduling and execute OS tasks independently. |
| Simple to design because only one CPU controls the system. | More complex due to shared OS access and synchronization issues. |
| Can become a bottleneck if the master CPU gets overloaded. | Better performance because workload is shared across all CPUs. |
| Less scalable: adding more CPUs does not increase performance much. | Highly scalable: adding CPUs significantly improves performance. |
| Used in older systems and small embedded devices. | Used in modern systems: Windows, Linux, Android, servers, multicore CPUs. |
| OS code runs only on the master CPU. | OS code can run on any CPU. |
| No load balancing; each slave waits for the master. | Has load balancing: tasks are distributed across CPUs. |
| Lower parallel performance due to central control. | High parallel performance because tasks run truly in parallel. |
| Easier debugging because only one CPU handles OS actions. | Harder debugging due to concurrency and race conditions. |

Processor Affinity

Processor Affinity is the scheduling technique where the operating system tries to keep a process running on the same CPU it used previously. This is done because each CPU has its own cache memory, and when a process continues on the same CPU, it can reuse cached data, leading to faster execution. Moving a process to another CPU forces it to rebuild its cache, which slows down performance. Affinity helps improve speed, reduces cache misses, and avoids unnecessary migration of tasks across processors.

Benefits: Data stays in the CPU cache & reduces cache misses and improves performance.

 

Important Points

  • Keeps a process attached to the same CPU it used earlier to improve performance.

  • Reduces cache misses because the CPU still holds the process’s previous data.

  • Prevents unnecessary shifting of processes between CPUs.

  • Improves throughput and stability in multiprocessor systems.

  • Two Forms:

    • Soft Affinity: The OS tries to keep a process on the same CPU but may move it if needed.

    • Hard Affinity: A process is strictly restricted to specific CPU(s).

  • Helps stabilize real-time and long-running applications.

  • Used in servers, gaming systems, high-performance computing, and multicore CPUs.

  • Can reduce scheduler flexibility if used too aggressively.
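
On Linux, hard affinity can be observed directly from Python through the standard-library calls os.sched_getaffinity and os.sched_setaffinity (Linux-only; pinning the current process to CPU 0 below is just an illustrative choice):

```python
import os

# Restrict the calling process (pid 0 = self) to a fixed CPU set.
print("Allowed CPUs before:", os.sched_getaffinity(0))
os.sched_setaffinity(0, {0})   # hard affinity: run only on CPU 0
print("Allowed CPUs after: ", os.sched_getaffinity(0))
```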

Load Sharing / Load Balancing

Load Sharing (Load Balancing) is a scheduling approach where the operating system distributes processes evenly across all CPUs so no processor becomes overloaded while others stay idle. The main goal is to use every CPU efficiently, improve throughput, and reduce waiting time. The scheduler constantly checks the load on each CPU and shifts tasks from busy CPUs to free ones when necessary. This avoids bottlenecks and ensures smooth processing in multiprocessor systems.

Important Points

  • Distributes processes evenly across all processors to avoid overload.

  • Prevents situations where one CPU is heavily loaded while another CPU is idle.

  • Improves system performance, throughput, and response time.

  • May involve moving processes between CPUs if imbalance occurs.

  • Can be global (one shared queue for all CPUs) or local (each CPU has its own queue).

  • Helps keep all cores active, especially in multicore systems.

  • Requires careful migration because frequent shifting may harm cache performance.

  • Used in modern operating systems like Linux, Windows, Android, and macOS.

  • Very important in cloud systems, servers, and high-traffic environments.
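
A toy sketch of the idea: assign each incoming task to the currently least-loaded CPU (the CPU count and burst values are assumptions for illustration):

```python
# Greedy load sharing: every new task goes to the CPU whose total
# assigned burst time is currently smallest.
def assign(tasks, cpu_count):
    loads = [0] * cpu_count
    placement = {}
    for name, burst in tasks:
        cpu = loads.index(min(loads))  # least-loaded CPU wins
        placement[name] = cpu
        loads[cpu] += burst
    return placement, loads

placement, loads = assign([("T1", 8), ("T2", 4), ("T3", 6), ("T4", 2)], cpu_count=2)
print(placement)  # {'T1': 0, 'T2': 1, 'T3': 1, 'T4': 0}
print(loads)      # [10, 10] -> both CPUs equally busy
```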

Real-Time Scheduling

Real-time scheduling is used in systems where tasks must finish within fixed time limits. The operating system must decide the order of tasks in such a way that every task meets its deadline. These systems are used where timing is critical, and even a small delay can cause failure.

Two widely used real-time scheduling algorithms are Rate Monotonic (RM) and Earliest Deadline First (EDF). Both help the processor decide which real-time task should run first so that all deadlines are met.

Important Points

  • Real-time systems are used in medical devices, robots, automotive systems, aircraft control, and industrial machines.

  • Tasks may be periodic (repeat after fixed time) or aperiodic (random).

  • Failure to meet deadlines can cause critical system failures.

  • Two important algorithms in real-time scheduling are Rate Monotonic (RM) and Earliest Deadline First (EDF).

  • The scheduler focuses on finishing tasks before their deadlines.

  • Real-time scheduling focuses on predictable timing instead of maximum CPU utilization.

  • Tasks are divided into hard real-time (must always meet deadlines) and soft real-time (missing a deadline is undesirable but not catastrophic).

Rate Monotonic Scheduling (RM)

Rate Monotonic Scheduling is a fixed-priority real-time scheduling algorithm where tasks with shorter periods are given higher priority. A task that repeats more frequently gets a higher chance to run before others so that it never misses its deadline.

Rate Monotonic is a static priority scheduling algorithm.
Task priority is decided before execution based on task period — the shorter the period, the higher the priority. Tasks that occur more frequently are considered more important, so they get the CPU first. RM is simple and predictable but works best only for periodic real-time tasks.

 

Important Points

  • RM is a preemptive algorithm, meaning a high-priority task can interrupt a lower-priority one.

  • Priorities are assigned only once based on the task period and never change during execution.

  • A task with the smallest period gets the highest priority because it needs the CPU more frequently.

  • RM guarantees that if the total CPU utilization stays within the schedulability limit, all deadlines will be met.

  • RM works well for periodic tasks where each task repeats after a fixed time interval.

  • Priority is fixed (static) and never changes during execution.

  • If high-frequency tasks are heavy, low-frequency tasks may miss deadlines.

  • One limitation is that RM cannot always schedule tasks perfectly when their periods are not harmonically related.

  • RM is commonly used in embedded systems, robotics, and control systems where predictable timing is important.

Advantages of Rate Monotonic Scheduling (RM)

  • RM is simple to understand and implement because priorities are fixed and never change during execution.

  • Tasks with shorter periods automatically get higher priority, making RM predictable and easy to analyze mathematically.

  • Very stable and reliable for hard real-time systems where failure is not acceptable.

  • Works well when tasks are periodic and follow a fixed repeating schedule.

  • Easier to test and verify because all scheduling decisions are based on static priorities.

  • Lower risk of unexpected behavior compared to dynamic algorithms.

  • RM ensures that frequent tasks (which are usually more important) always get CPU time first.

 

Examples

  1. Airbag control systems in cars where frequent safety checks must run at high priority.

  2. Pacemakers and medical monitoring devices that repeatedly check heartbeat signals.

  3. Industrial robots where repeated sensor readings must be processed at strict intervals.

  4. Flight control systems where periodic control loops must run at fixed, guaranteed timing.

  5. Embedded systems in washing machines, microwave ovens, and printers where operations repeat in cycles.

Disadvantages of Rate Monotonic Scheduling (RM)

  • RM works only for periodic tasks; it does not handle aperiodic or unpredictable tasks well.

  • CPU utilization is limited to about 69% for guaranteed scheduling (the Liu & Layland bound n(2^(1/n) - 1), which approaches ln 2 ≈ 0.693 as n grows). If the total load is higher, RM may miss deadlines.

  • If a high-frequency task has a long burst time, it can dominate CPU and cause lower-frequency tasks to miss deadlines.

  • Not suitable when task deadlines differ from their periods (RM assumes Deadline = Period).

  • Priorities never change, even when deadlines are close, which may lead to inefficient scheduling.

  • Can cause starvation for long-period tasks if high-rate tasks consume most CPU time.

  • Harder to adapt when system workload changes (no flexibility).

 

Real-Life Example of RM (Easy to Remember)

Example: Controlling a Robot Arm

Imagine a robot arm in a factory that performs periodic tasks:

  • Task T1: Check safety sensor every 10 ms

  • Task T2: Adjust motor speed every 20 ms

  • Task T3: Update display panel every 100 ms

According to RM:
✔ Task with smallest period → highest priority

So the priority order becomes:

Highest Priority: T1 (10 ms)
Medium Priority: T2 (20 ms)
Lowest Priority: T3 (100 ms)

Why?
Because safety must be checked most frequently — it is most important.
Therefore:

  • T1 interrupts T2 and T3 if needed.

  • T2 interrupts T3.

  • T3 only runs when both T1 and T2 are idle.

This ensures the robot always reacts quickly to safety issues while still updating the display and adjusting motors.
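
Whether such a task set is guaranteed to meet its deadlines under RM can be checked with the Liu & Layland utilization test, U = Σ(Cᵢ/Tᵢ) ≤ n(2^(1/n) - 1). Below is a sketch using the example's periods with assumed compute times (the C values are illustrative, not from the notes):

```python
# Liu & Layland schedulability test for Rate Monotonic scheduling.
# Guaranteed schedulable if U = sum(C_i / T_i) <= n * (2**(1/n) - 1);
# the bound tends to ln 2 ~ 0.693 (the "69% limit") as n grows.
def rm_schedulable(tasks):
    """tasks: list of (compute_time, period)."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u, bound, u <= bound

u, bound, ok = rm_schedulable([(2, 10), (4, 20), (10, 100)])
print(f"U = {u:.3f}, bound = {bound:.3f}, guaranteed = {ok}")
# U = 0.500, bound = 0.780, guaranteed = True
```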

 

Earliest Deadline First (EDF)

Earliest Deadline First is a dynamic real-time scheduling algorithm that always runs the task whose deadline is closest. The priority keeps changing during execution because every time a new task arrives or a deadline approaches, the scheduler chooses the task that will miss its deadline soonest. This makes EDF very flexible and suitable for systems where timing changes frequently.

 

Important Points

  • Priority is dynamic → changes based on deadlines.

  • The task with the nearest deadline always runs first.

  • Works for both periodic and aperiodic tasks.

  • More flexible and powerful than RM (utilization limit = 100%).

  • Provides better CPU usage and can handle heavier workloads.

  • Suitable for soft and hard real-time systems.

  • Requires continuous tracking of deadlines, which increases complexity.

  • Can cause frequent context switching if deadlines are close to each other.

Advantages of Earliest Deadline First (EDF)

  • EDF uses dynamic priority, so the system always focuses on the task whose deadline is closest, ensuring timely execution.

  • Achieves up to 100% CPU utilization, meaning it can schedule tasks even when the processor load is very high, unlike RM, which is limited to about 69%.

  • Works for both periodic and aperiodic tasks, making it flexible and suitable for real-world systems with unpredictable workloads.

  • More efficient and powerful than RM because priorities adjust according to time pressure, not fixed periods.

  • Helps avoid unnecessary delays by always choosing the task that is most urgent.

  • Reduces the chance of missing deadlines because the scheduler continuously adapts to current situations.

  • Suitable for systems where workload changes frequently and tasks do not have fixed periods.

Disadvantages of Earliest Deadline First (EDF)

  • Since priorities change continuously, EDF requires more computational overhead and more frequent scheduling decisions.

  • If many tasks have deadlines very close together, EDF may cause frequent context switching, reducing CPU efficiency.

  • Harder to analyze formally compared to RM because priorities are dynamic and depend on time.

  • If the CPU becomes slightly overloaded, EDF can quickly become unstable and start missing deadlines.

  • More difficult to implement in low-resource embedded systems due to its complexity.

  • Needs accurate deadline information; wrong or inconsistent deadlines can break the schedule.

  • Managing dynamic priorities increases the risk of timing errors if the system is not tuned properly.

Real-Life Example of EDF (Easy to Remember)

Example: Mobile Phone Notification System

Imagine your smartphone receives different notifications, each with its own urgency:

  • Task T1: Alarm ringing → deadline 5 seconds

  • Task T2: Incoming message → deadline 20 seconds

  • Task T3: App updates downloading in background → deadline 3 minutes

  • Task T4: Backup sync → deadline 10 minutes

According to EDF:
✔ Task with the closest deadline gets the CPU first.

So priority order becomes:

Highest Priority: Alarm (5 sec deadline)
Then → Incoming message
Then → App update
Finally → Backup sync

Why?
Because missing the alarm deadline means the user won't hear it on time — it is the most urgent.
The phone dynamically changes priorities as deadlines get closer.

This is exactly how EDF works:
always pick the task whose deadline is nearest, even if another task arrived earlier.
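
A tiny Python sketch of that rule, with deadlines loosely based on the phone example (the job tuples and time units are assumptions for illustration):

```python
# EDF: always run the pending job whose absolute deadline is nearest.
def edf_schedule(jobs):
    """jobs: list of [name, remaining_time, absolute_deadline].
    Returns (name, finish_time, met_deadline) in completion order."""
    jobs = [list(j) for j in jobs]
    order, time = [], 0
    while jobs:
        job = min(jobs, key=lambda j: j[2])  # earliest deadline first
        job[1] -= 1                          # run it for one time unit
        time += 1
        if job[1] == 0:
            order.append((job[0], time, time <= job[2]))
            jobs.remove(job)
    return order

print(edf_schedule([["alarm", 1, 5], ["message", 3, 20], ["update", 10, 180]]))
# [('alarm', 1, True), ('message', 4, True), ('update', 14, True)]
```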

Difference Between RM and EDF

| Rate Monotonic Scheduling (RM) | Earliest Deadline First (EDF) |
| --- | --- |
| Assigns static priority: once priority is given, it never changes during execution. | Assigns dynamic priority: priorities continuously change based on deadlines. |
| Priority depends on the period of the task: shorter period → higher priority. | Priority depends on the deadline: the task with the nearest deadline gets the highest priority. |
| Works only when tasks are periodic and repeat at a fixed rate. | Works for periodic and aperiodic tasks because deadlines control scheduling. |
| Assumes Deadline = Period for correctness. | Deadline can differ from the period, and EDF still works correctly. |
| Has a strict CPU utilization limit of about 69% for guaranteed scheduling. | Can achieve 100% CPU utilization without missing deadlines (theoretically optimal). |
| Very easy to implement because priority rules are fixed and simple. | Harder to implement due to continuous tracking of deadlines and dynamic priority changes. |
| Gives predictable, stable behaviour → good for hard real-time systems. | More flexible and responsive → good for general real-time systems with mixed workloads. |
| Low flexibility because priorities never change even if deadlines are near. | Highly flexible because priorities adapt instantly as deadlines approach. |
| Can cause starvation for long-period tasks if high-frequency tasks dominate the CPU. | Less starvation risk because every task gains priority as its deadline comes near. |
| Scheduling analysis is simple and mathematically easy to verify. | Scheduling analysis is complex because priorities keep changing. |
| Performs poorly if tasks do not strictly follow periodic behaviour. | Performs well even when tasks have irregular release times. |
| More stable when system load is low to moderate. | Can become unstable if overloaded or if many deadlines clash at the same time. |
| Suitable for safety-critical embedded systems with fixed cycles (robots, controllers). | Suitable for systems with unpredictable timing, like mobile devices, multimedia, and soft real-time tasks. |
