Operating System Unit 2 Notes | BEU BTech CSE 3rd Sem (New Syllabus)
Complete Operating System Unit 2 notes for BEU / AKU BTech CSE 3rd Semester. Covers processes, process states, PCB, context switching, threads, multithreading, FCFS, SJF, SRTF, Round Robin, multiprocessor scheduling, RM, EDF and more. Updated as per the new syllabus with clear explanations for exams.

Operating System Unit 2 is one of the most scoring chapters in the BEU / AKU BTech CSE 3rd semester new syllabus. This unit covers some of the most important core topics such as processes, process states, PCB, context switching, threads, multithreading models, CPU scheduling, FCFS, SJF, SRTF, Round Robin, multiprocessor scheduling, and real-time scheduling algorithms like RM and EDF.
These OS notes are prepared strictly according to the latest BEU CSE syllabus and explained in a simple, exam-oriented way so that any student can understand the concepts clearly. If you are searching for OS Unit 2 notes for BTech, Operating System notes for BEU, or 3rd-semester process scheduling notes, this guide will help you revise everything quickly and score full marks.
Unit 1 notes for Bihar Engineering University BTech Computer Science 3rd Semester Operating System are already covered; you can visit NotesNav.
BEU 3RD SEMESTER OPERATING SYSTEMS UNIT-2 NOTES (Part 1). Visit the Part-2 notes for the remaining topics.
Unit-2.0. Processes: Definition, Process Relationship, Different states of a Process, Process State transitions, Process Control Block (PCB), Context switching.
Thread: Definition, Various states, Benefits of threads, Types of threads, Concept of multithreading.
Process Scheduling: Foundation and Scheduling objectives, Types of Schedulers, Scheduling criteria: CPU utilization, Throughput, Turnaround Time, Waiting Time, Response Time; Scheduling algorithms: Pre-emptive and Non-pre-emptive, FCFS, SJF, RR; Multiprocessor scheduling; Real-Time scheduling: RM and EDF.
PROCESS
A process is a running instance of a program that lives inside memory and moves step-by-step under the control of the operating system. When a program starts, the OS gives it its own memory space, CPU state, file access rights, and other resources so it can run independently.
The process is not just instructions, it includes everything needed for execution such as variables, stack, program counter, temporary data, open files, and runtime context. While it runs, the OS actively manages it—pausing it, resuming it, switching it, or ending it based on system needs. Multiple processes can run together because the OS shares CPU time among them, giving the user the illusion that everything is running at once.
For example, when we write a program in C or C++ and compile it, the compiler creates binary code. The original code and binary code are both programs. When we actually run the binary code, it becomes a process.
A process is an 'active' entity, in contrast to a program, which is considered a 'passive' entity.
A single program can create many processes when run multiple times; for example, when we open a .exe or binary file multiple times, multiple instances begin (multiple processes are created).
Important Points
A process is a running program with its own memory and execution environment
OS gives every process a unique PID
Contains code, data, stack, heap, and CPU context
Runs independently from other processes
Needs OS scheduling to get CPU time
Can be paused, resumed, or stopped at any time
Multiple processes create multitasking in a system
Programs become processes only when loaded into memory
Difference Between Program and Process
| Program | Process |
| --- | --- |
| A program is a passive entity. It is just a file containing a set of instructions stored on disk, doing nothing by itself until it is loaded for execution. | A process is an active entity. It is a running instance of a program that is currently being executed by the CPU with its own data and state. |
| It exists as a static sequence of instructions, the same each time you open it. | It exists as a dynamic activity whose contents (register values, variables, program counter) keep changing while it runs. |
| A program is stored in secondary memory such as a hard disk, SSD, or pen drive. | A process resides in primary memory (RAM) while it is executing. |
| A program by itself does not need the CPU, registers, or RAM because it is not running. | A process requires CPU time, registers, stack, heap, and memory space because it is actually executing. |
| For one program file, there is usually one copy on disk, even if many users can open it. | From one program, the OS can create multiple processes, all running independently with their own data and state. |
| A program does not have states like ready, running, or waiting, because it is not active. | A process moves through states such as new, ready, running, waiting/blocked, and terminated. |
| A program is mainly concerned with what to do (the logic and instructions). | A process is concerned with doing it (executing the instructions using system resources). |
| If a program file is deleted, only the stored instructions are lost. | If a process is killed, its current execution, data in memory, and resource usage are affected. |
| Example: a “chrome.exe” file stored in your system. | Example: three running Chrome windows or tabs; each one is a separate process created from chrome.exe. |
Process Relationship
A process relationship describes how processes are connected to each other during execution. When one process creates another, they form a parent–child relationship. The parent process starts the child, gives it resources, and may wait for it to finish. A child process can also create its own children, forming a process tree. These relationships help the operating system manage process creation, communication, completion, and resource sharing. The OS tracks all these relationships to maintain order, prevent resource leaks, and allow controlled communication between related processes.
Important Points
A process that starts another process becomes the parent, and the started process becomes the child.
Parent and child processes form a process tree, where each node represents a process.
The parent allocates resources to the child, such as memory, files, and CPU time.
A child can inherit certain properties from the parent, like environment variables or open files.
A parent may wait for its child to finish before continuing its own execution.
If the parent finishes before the child, the child may become an orphan process.
If a child finishes before the parent retrieves its exit status, it becomes a zombie process.
Each child process can create more child processes, forming a hierarchy of processes.
The OS ensures proper cleanup of child processes to avoid resource wastage.
Parent and child processes can communicate using IPC methods such as pipes, shared memory, or message passing.
The OS uses system calls like fork(), exec(), and wait() to manage process relationships (see the sketch below).
Process relationships help organize tasks and make the system predictable and manageable.
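These relationships can be seen directly in a short C program. Below is a minimal sketch for POSIX systems using fork() and wait(): the parent creates a child, waits for it, and collects its exit status so no zombie is left behind (the exit code 42 is just an illustrative value).

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a child process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {                  /* child branch */
        printf("child: PID=%d, parent=%d\n", getpid(), getppid());
        exit(42);                    /* child's exit status */
    }
    int status;                      /* parent branch */
    waitpid(pid, &status, 0);        /* reap the child: prevents a zombie */
    if (WIFEXITED(status))
        printf("parent: child %d exited with %d\n", pid, WEXITSTATUS(status));
    return 0;
}
```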
Diagram of Process Relationship
Different States of a Process
A process moves through several states during its lifetime, from the moment it is created until it finishes. Each state represents a specific condition of the process, such as waiting for the CPU, using the CPU, or waiting for input. The operating system keeps track of these states to manage scheduling, resource allocation, and process execution smoothly.
List of All Process States
New
Ready
Running
Waiting / Blocked
Terminated
(Optional in some OS) Suspended Ready
(Optional) Suspended Blocked
New State (Process Creation State)
The New state is the very first stage of a process when it has just been created but has not yet entered the ready queue. At this moment, the operating system is preparing everything the process needs to start running. This includes loading the program into memory, creating the Process Control Block (PCB), assigning a unique process ID, allocating initial memory space, and checking if enough resources are available. The process is not ready to execute yet; it is still in the setup phase. Only after the OS finishes all the initial preparations does the process move from the New state to the Ready state.
Important Points
The process is newly created and has not started execution yet.
The operating system is still allocating memory and preparing the process structure.
A unique Process ID (PID) is assigned by the OS during this stage.
The OS creates the Process Control Block (PCB) containing all initial information about the process.
The program’s instructions may be loaded from the disk into memory at this stage.
The OS checks if resources like memory, CPU time, and I/O devices are available for the new process.
The process cannot be scheduled by the CPU while in this state because setup is incomplete.
If insufficient resources exist, the OS may delay the process in this state.
After all preparation is successful, the process is moved to the Ready state.
Real-Life Example
When you tap the Chrome app icon on your phone or double-click Chrome.exe on a laptop, the OS does not immediately show the browser window. First, the OS creates the process, loads required files, assigns a PID, and prepares internal structures. During this short moment (usually a fraction of a second), the process is in the New state.
Ready State
The Ready state is the stage where a process has been fully prepared by the operating system and is now waiting for the CPU to execute it. All required resources such as memory, program instructions, and PCB details are already set, so the process does not need anything else except CPU time. The process stays in the ready queue along with other processes, and the CPU scheduler chooses one process from this queue when the CPU becomes free. The process is not running yet, but it is completely ready to run at the next available CPU turn.
Important Points
The process has completed all initial setup and is fully prepared for execution.
It stays in the ready queue, waiting for the CPU to assign execution time.
The CPU scheduler selects a process from this queue based on the scheduling algorithm.
The process has all required resources except the CPU; memory and data structures are already assigned.
It can immediately start execution as soon as the CPU becomes available.
Multiple processes can exist in the ready queue at the same time.
The process may stay in this state longer if many other processes are ahead in the queue.
Transition from ready to running occurs when the CPU scheduler dispatches the process.
The process does not perform any meaningful work in this state; it only waits for CPU access.
This state is essential for implementing multitasking because many processes appear ready to run.
Real-Life Example
When you open multiple apps on your phone—like WhatsApp, YouTube, and Instagram—only one of them runs at a time, but the others remain prepared in the background. These apps are not closed; they are waiting for the processor. This waiting-but-ready condition is exactly the Ready State.
Running State
The Running state is the stage where the process is actually being executed by the CPU. In this state, the process’s instructions are actively running, its registers are being updated, and it is using CPU time to perform its tasks. Only one process per CPU core can be in the running state at any moment because the CPU can execute only one instruction stream at a time per core. The operating system moves a process from the ready state to the running state when the CPU becomes available. While running, the process may finish normally, get interrupted by the OS, or move into a waiting state if it needs input/output or another event.
Important Points
The process is currently using the CPU and executing its instructions step by step.
Only one process per CPU core can be in this state at any given time.
The OS scheduler selects a process from the ready queue and dispatches it to the CPU.
CPU registers, program counter, and process variables are continuously updated while the process runs.
A running process may lose the CPU due to time slice completion (in preemptive OS).
A running process may voluntarily leave the CPU if it needs input/output operation.
The process may complete successfully and move to the terminated state after execution.
The OS continuously monitors the running process to maintain fairness and prevent any single process from monopolizing the CPU.
The running state is the most active phase of the process lifecycle.
Real-Life Example
When you are watching a video on YouTube or typing a message on WhatsApp, the app currently interacting with you is the one in the Running State. At that moment, the CPU is executing only that app’s instructions, while other apps remain in the ready or waiting state.
Waiting / Blocked State
The Waiting or Blocked state is the stage where a process cannot continue execution because it is waiting for some external event to happen. Even though the process is active, it cannot use the CPU until the event it needs is completed. Typical events include reading data from disk, waiting for user input, receiving data from the network, or waiting for another process to send a message. Since the CPU cannot help the process during this time, the operating system temporarily removes the process from the ready queue and places it into the waiting/blocked queue. Once the required event is completed, the process moves back to the ready state.
Important Points
The process is paused because it is waiting for an event such as input/output completion, network response, or a resource becoming available.
It cannot be given the CPU while waiting because it has nothing useful to execute.
This state prevents the CPU from being wasted on processes that cannot make progress.
The OS moves the process to a waiting queue specifically reserved for blocked processes.
Common reasons for entering this state include disk I/O, keyboard input, network delay, or waiting for another process through IPC.
A process may also become blocked while waiting for a lock or semaphore in synchronization.
When the event is completed, the OS changes the process state from waiting to ready.
The process does not compete for CPU time until it returns to the ready state.
A blocked process cannot return to running directly; it must first move to the ready queue.
This state is essential for multitasking because it prevents stalled processes from blocking CPU usage.
Real-Life Example
When you download a file in a browser, the download process waits for data from the internet. During this waiting time, the process cannot run further because it needs the next chunk of data. While waiting for the network response, the process is in the Waiting/Blocked State. Once the data arrives, it returns to the ready queue.
Terminated State
The Terminated state is the final stage of a process when it has finished its execution or has been stopped by the operating system. At this point, the process no longer needs the CPU, memory, or any system resources. The operating system removes the process from the process table, clears its memory space, closes all open files, and frees the resources it was using. A process may enter this state after completing its instructions successfully, after the user closes the program, or if the OS forcibly ends it due to an error. Once a process reaches the terminated state, it cannot return to execution.
Important Points
The process has completed execution or has been stopped forcibly.
The operating system removes the process entry from the process table.
All resources used by the process—memory, files, CPU registers—are released back to the system.
The OS cleans up the Process Control Block (PCB) for the process.
The program may have ended normally, through user action, or because of an error or crash.
A terminated process cannot move to any other state; it is permanently finished.
Some OSes show a short “exit” or “zombie” stage before final cleanup, especially in UNIX/Linux systems.
The OS may notify the parent process that the child has finished execution.
If the parent does not collect exit status, the process may remain as a zombie for a short time in UNIX-like systems.
After cleanup, the memory space used by the process becomes available for new processes.
Real-Life Example
When you close a game or app on your phone or computer, the process for that app finishes running. The OS frees the RAM used by the app, closes all related files, and removes the process from its tracking list. At that point, the app’s process is in the Terminated State.
Suspended Ready State
The Suspended Ready state is the stage where a process is ready to run but is temporarily moved out of the main memory (RAM) and placed into secondary storage. This happens when the operating system needs to free RAM for higher-priority processes or when too many processes are waiting. Even though the process is swapped out of memory, it is still considered “ready” because it does not need any event to continue; it only needs to be brought back into RAM. When memory becomes available, the OS moves the process back to the ready queue, allowing it to compete for CPU time.
Important Points
The process is ready to execute but has been moved out of RAM to disk.
It is not blocked; it just cannot run because it is not currently in main memory.
The OS swaps the process out to manage limited RAM efficiently.
The process remains in a ready-like condition but outside memory.
When memory is available, the process is swapped back into RAM.
After swapping in, it returns directly to the ready queue.
Suspension is often controlled by the OS, but sometimes by the user (e.g., pausing a program).
It helps the OS maintain multitasking even with limited memory.
The process does not lose its execution state; it is stored on disk temporarily.
Real-Life Example
When your laptop runs too many apps and RAM becomes full, the OS may push a background app (like a game or editor) to disk. The app is not closed and needs no event—it simply waits for memory to return. This is Suspended Ready.
Suspended Blocked State
The Suspended Blocked state is the stage where a process is both waiting for an event and has been moved out of main memory into secondary storage. The process cannot run even if the CPU becomes free because it is not in memory, and it also cannot continue running until the event it is waiting for is completed. This is usually done to free RAM from processes that are not only inactive but also blocked, making it a more efficient memory management decision.
Important Points
The process is waiting for an event such as I/O completion, message arrival, or resource availability.
It is also swapped out of RAM and moved to secondary storage.
Even if the event completes, the process must first be swapped back into memory before running.
The OS uses suspended blocked state to optimize RAM usage during heavy load.
This state prevents memory waste by keeping inactive blocked processes on disk.
The process moves back to the Blocked state after being swapped in when memory becomes available.
It reduces the number of active blocked processes occupying RAM.
Suitable for long-waiting or low-priority processes that do not need immediate attention.
Real-Life Example
Suppose a background app is downloading a large file and waiting for slow network data. If RAM becomes low, the OS may move this blocked app out of memory. It is both waiting and suspended, which is exactly the Suspended Blocked state.
Process State Transitions
Process state transitions describe how a process moves from one state to another during its lifetime. A process does not stay in a single state; it shifts between new, ready, running, waiting, suspended, and terminated depending on what it needs and what the operating system decides. The OS controls all transitions to ensure that processes get CPU time, manage input/output requests properly, and release resources when finished. These transitions allow multitasking to work smoothly by ensuring the CPU is always used efficiently while processes wait, run, pause, or complete.
Important Points
A process moves from New → Ready when the OS finishes initial setup and loads the process into memory.
A process moves from Ready → Running when the CPU scheduler selects it and assigns CPU time.
A process moves from Running → Waiting/Blocked when it needs an external event such as disk input, keyboard input, or a network response.
A process moves from Waiting → Ready when the waiting event is completed and it is prepared to use the CPU again.
A process moves from Running → Ready when its time slice expires in a preemptive OS or when a higher-priority process arrives.
A process moves from Running → Terminated after it finishes execution or is forcefully stopped.
A process moves from Ready → Suspended Ready when RAM is full and the OS needs to swap it out to disk.
A process moves from Waiting → Suspended Blocked when it is blocked and the OS decides to save memory by swapping it out.
A process moves from Suspended Ready → Ready after being swapped back into RAM.
A process moves from Suspended Blocked → Waiting when it is swapped back into RAM but is still waiting for the event.
These transitions ensure efficient CPU use by preventing blocked or suspended processes from wasting processor time.
The OS uses internal data structures (like the PCB) to track each process during these transitions.
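The legal transitions listed above can be expressed as a tiny checker, shown below. This encoding is purely illustrative; real operating systems represent states in their own internal structures.

```c
#include <stdbool.h>

enum state { NEW, READY, RUNNING, WAITING, SUSP_READY, SUSP_BLOCKED, TERMINATED };

/* Returns true only for the transitions described in this section. */
bool can_transition(enum state from, enum state to) {
    switch (from) {
    case NEW:          return to == READY;
    case READY:        return to == RUNNING || to == SUSP_READY;
    case RUNNING:      return to == READY || to == WAITING || to == TERMINATED;
    case WAITING:      return to == READY || to == SUSP_BLOCKED;
    case SUSP_READY:   return to == READY;
    case SUSP_BLOCKED: return to == WAITING;
    default:           return false;   /* TERMINATED is final */
    }
}
```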
Simple Diagram of Process State Transitions
Process Control Block (PCB)
A Process Control Block is a data structure that the operating system creates and maintains for every process. It acts like the complete identity card of a process, storing all information the OS needs to manage, control, schedule, and track that process.
Whenever a process is created, the OS builds a PCB for it, and whenever the process is switched out or resumed, the OS updates the PCB with the latest values.
Without the PCB, the operating system would not know the process’s current state, its memory usage, or where to continue execution after an interruption. It is the central structure that allows multitasking and context switching to work properly.
Important Points
The PCB stores the current state of the process such as new, ready, running, or waiting.
It contains the Process ID (PID) which uniquely identifies the process in the system.
It holds the program counter, which tells the OS the next instruction to execute when the process resumes.
It keeps all CPU register values so the OS can restore them during context switching.
It stores memory information such as base address, limit address, page tables, and segment tables.
It contains CPU scheduling information, including priority, scheduling queue pointers, and time quantum details.
It keeps I/O information such as open files, devices in use, and pending I/O requests.
It stores accounting information, including CPU usage time and execution history.
It helps the OS perform context switching by saving and restoring process data.
Each process has its own PCB, and the OS keeps all PCBs in a system-wide process table.
The PCB remains in memory as long as the process exists and is deleted only when the process terminates.
It allows the OS to track and manage multiple processes in a multitasking environment.
Real-Life Example
When you switch from watching a YouTube video to replying on WhatsApp and then come back to YouTube, the CPU switches between processes. The OS uses each process’s PCB to remember what the CPU was doing earlier, such as the video time, registers, and memory state. Without the PCB, the OS would forget everything when switching tasks.
Diagram of PCB
Structure of PCB (All Components)
Below is the complete structure of a PCB and explanation of every block.
Pointer
A pointer refers to the next process in the scheduling queue.
The OS uses it to link multiple PCBs together in ready, waiting, or suspended queues.
It helps the OS quickly manage lists of processes.
Process State
Stores the current state of the process such as new, ready, running, waiting, or terminated.
The OS checks this field to decide what the process is currently doing.
Process ID (PID)
A unique number assigned to every process by the OS.
Helps identify the process in the system table.
Program Counter
Holds the memory address of the next instruction the process will execute.
When the process is paused, the OS saves the address of the next instruction to execute here so the process can resume correctly.
CPU Registers
Stores the current values of all CPU registers used by the process.
Includes accumulator, index registers, stack pointer, general-purpose registers.
Essential for context switching because the OS must restore these values when the process resumes.
CPU Scheduling Information
Stores priority number, scheduling queues, and other scheduling parameters.
Helps the CPU scheduler decide when and how the process should run.
Memory Management Information
Contains memory-related data such as base address, limit address, page tables, segment tables.
Allows the OS to protect memory and map the process’s address space.
Accounting Information
Stores data about CPU usage, execution time, process start time, and time limits.
Used by the OS to track performance, billing, and resource usage.
I/O Status Information
Stores list of open files, I/O devices in use, pending device requests.
Helps the OS manage file systems and device interactions.
Process Privileges
Contains permission levels assigned to the process.
Determines what the process is allowed to access in the system.
Process Parent and Child Pointers
Holds information about which process created this process.
Maintains process relationships and process tree structures.
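Putting the blocks above together, a simplified PCB might be declared as the C structure below. This is a hypothetical layout for illustration only; real kernels use far larger structures (for example, Linux's task_struct).

```c
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    struct pcb     *next;            /* pointer: links PCBs in a scheduling queue */
    enum proc_state state;           /* process state */
    int             pid;             /* process ID */
    uint64_t        program_counter; /* address of the next instruction */
    uint64_t        registers[16];   /* saved CPU register values */
    int             priority;        /* CPU scheduling information */
    void           *page_table;      /* memory management information */
    uint64_t        cpu_time_used;   /* accounting information */
    int             open_files[16];  /* I/O status: open file descriptors */
    struct pcb     *parent;          /* parent/child relationship pointer */
};
```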
Context Switching
Context switching is the process in which the operating system temporarily stops a running process, saves all the information required to resume it later, and then loads the saved information of another process to start its execution.
Since only one process can use a CPU core at a time, the OS must switch rapidly between processes to give the appearance of multitasking. During a context switch, the OS stores the current process’s CPU registers, program counter, memory details, and other states into its PCB. Then it loads the next process’s PCB and restores all its values so that it continues exactly from where it stopped. Context switching allows smooth multitasking, process scheduling, and responsiveness, even on systems with a single CPU.
Important Points
The OS saves the current process’s execution details (registers, program counter, stack pointer) into its PCB.
The OS loads another process’s saved details from its PCB to resume that process.
It happens whenever the CPU needs to stop one process and run another.
Context switching is essential for multitasking and time-sharing systems.
It allows a single CPU core to run many processes in a fast switching cycle.
It ensures no process loses progress when it is paused.
It happens frequently in preemptive systems when the time slice of a process finishes.
It also occurs when a running process moves to waiting/blocked state.
Context switching takes some time because the OS must save and load data; this time is called context switch overhead.
The OS tries to minimize context switches because they reduce overall CPU efficiency.
Without context switching, multiple processes could not share the CPU smoothly.
PCB plays the central role in saving and restoring the process state during switching.
Real-Life Example
When you are typing a message on WhatsApp and a call arrives, the CPU immediately pauses WhatsApp, saves its current execution details, and switches to the Phone app. After the call ends, the CPU loads WhatsApp’s saved data from its PCB and brings you back exactly where you left. This switching between apps is exactly what context switching does inside the OS.
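The logic of a context switch can be sketched in C as below. The helper functions save_cpu_state() and load_cpu_state() are hypothetical stand-ins for the architecture-specific assembly a real kernel would use; here they are stubs so the sketch compiles.

```c
#include <stdio.h>

enum state { READY, RUNNING, WAITING };

struct pcb {
    int        pid;
    enum state st;
    /* a real PCB also holds saved registers, program counter, etc. */
};

static void save_cpu_state(struct pcb *p) { printf("save state of PID %d\n", p->pid); }
static void load_cpu_state(struct pcb *p) { printf("load state of PID %d\n", p->pid); }

void context_switch(struct pcb *current, struct pcb *next) {
    save_cpu_state(current);   /* store registers, PC, stack pointer into the PCB */
    current->st = READY;       /* or WAITING, if it blocked on I/O */
    next->st = RUNNING;
    load_cpu_state(next);      /* restore the next process's saved context */
    /* execution now continues exactly where 'next' was paused */
}

int main(void) {
    struct pcb a = {1, RUNNING}, b = {2, READY};
    context_switch(&a, &b);    /* the dispatcher would invoke this on a switch */
    return 0;
}
```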
Context Switching Diagram
Thread
A thread is the smallest independent flow of execution inside a process. When a program is running as a process, it can divide its work into multiple threads so different tasks can progress at the same time.
All threads of a process share the same memory, files, and resources, but each thread runs with its own stack, program counter, and execution path. Because threads share the same environment, they can communicate quickly without needing heavy mechanisms like separate processes. This makes threads useful for tasks that need parallel execution, quick responses, and better use of CPU time.
Threads make a program faster, more responsive, and able to perform many tasks simultaneously without creating multiple heavy processes. Because threads share the same memory space, switching between them is faster and more efficient than switching between processes.
Each thread has:
A program counter
A register set
A stack space
Important Points
A thread is the smallest executable unit of a program.
Multiple threads can exist inside one process and run independently.
Threads share the same memory, code, data, and files of the parent process.
Each thread has its own program counter, stack, and CPU registers.
Threads allow multitasking within a single process.
Thread creation is faster and lighter than creating a new process.
Communication between threads is easier since they use the same memory.
If one thread crashes, it may affect the entire process because memory is shared.
Threads help improve program performance and responsiveness.
Modern applications like browsers, games, and servers rely heavily on multithreading.
Real-Life Example
In a web browser, one thread loads the webpage, another thread handles user input like scrolling, a third thread downloads files, and another thread plays audio/video. All these tasks run inside the same browser process but operate independently. This is the real working of threads.
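A minimal POSIX threads sketch of this idea is shown below: two threads in one process share the same global data, while each keeps its own stack and arguments (compile with gcc threads.c -lpthread; the file name is arbitrary).

```c
#include <pthread.h>
#include <stdio.h>

static const char *shared_msg = "shared by all threads";  /* common process memory */

static void *worker(void *arg) {
    int id = *(int *)arg;          /* local copy lives on this thread's own stack */
    printf("thread %d reads: %s\n", id, shared_msg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);   /* spawn two threads */
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);                    /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}
```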
Thread Diagram
Thread States
A thread, like a process, moves through different states during its lifetime. These states describe what the thread is currently doing—whether it is ready to run, actually running, waiting for an event, or finished. Understanding these states helps explain how multithreading works inside an operating system and how multiple threads share the CPU smoothly.
List of All Thread States
New
Ready
Running
Waiting / Blocked
Terminated
(Note: Some systems also include Timed Waiting, but the core OS concept uses the above five.)
Explanation of Each Thread State
New
A thread enters the new state when it has just been created but has not started its execution yet. The operating system allocates internal structures and prepares the thread to begin running. It has not yet been scheduled to use the CPU.
Important Points
Thread has been created but not started.
OS prepares its initial data like stack and registers.
Thread cannot run until moved to the ready state.
Example
When you open a browser, it creates several threads. At the moment they are created but not yet running, they are in the New state.
Ready
A thread is in the ready state when it is fully prepared to run and is waiting for the CPU. It has everything needed to execute, but it must wait because the CPU is currently busy with another thread.
Important Points
Thread has all required resources except the CPU.
It waits in the ready queue for CPU scheduling.
It can start execution immediately once the CPU is available.
Multiple threads can be in the ready state at the same time.
Example
In a game, one thread updates graphics while others remain ready in the background, waiting for the CPU. These background threads are in the Ready state.
Running
A thread enters the running state when the CPU starts executing its instructions. Only one thread per CPU core can be in this state at a time. The OS keeps updating the thread’s registers and execution progress while it runs.
Important Points
Thread is actively using the CPU.
Registers and program counter keep updating.
The thread may run until completion or until preempted.
Thread may leave running state if time slice expires or if it needs I/O.
Example
You play a video on YouTube. The thread decoding the video is currently in the Running state.
Waiting / Blocked
A thread enters this state when it cannot continue execution because it needs an external event. During this time, it cannot run even if the CPU is free. It waits until the event happens.
Important Points
Thread is paused because it needs I/O, input, or a resource.
Cannot use the CPU until the event is completed.
OS places it into a waiting or blocked queue.
When the event finishes, the thread returns to the ready state.
Example
A thread downloading a file waits for network data. While waiting, it enters the Blocked state.
Terminated
A thread is in the terminated state when it has finished execution, or the OS has stopped it. After this, the thread is removed from the system and its resources are released.
Important Points
Thread has completed its task or has been forced to stop.
OS cleans up its memory, stack, and internal structures.
The thread cannot run again once terminated.
Example
When a browser finishes rendering a page and closes its worker thread, that thread moves to the Terminated state.
Benefits of Threads
Threads help a program break its tasks into smaller independent units that can run at the same time. Because threads inside the same process share memory and resources, they communicate faster, switch quicker, and use less overhead compared to separate processes. This makes applications smoother, more responsive, and able to do many operations in parallel, especially on multi-core processors. Threads are essential for modern software like browsers, games, servers, and mobile apps.
Threads reduce the overall execution time of a program because multiple tasks can run at the same time, allowing work to be completed faster than a single-threaded program.
Thread creation consumes less memory because all threads share the same address space of the parent process instead of creating new copies of data.
Switching between threads is faster because the OS does not need to change the entire process memory; it only needs to switch thread-specific registers and the program counter.
Threads keep applications responsive by running heavy tasks in the background while the main thread handles user interactions smoothly.
Threads make better use of multi-core processors by allowing different threads to run on different CPU cores at the same time, increasing overall performance.
Communication between threads is easier and faster because they share common data and memory, avoiding complex inter-process communication methods.
Threads allow parallel execution of independent tasks, such as loading a webpage, rendering images, downloading files, and handling user input all at once.
Server applications benefit greatly because one thread can handle each client request, allowing the server to manage thousands of users simultaneously.
Threads improve system efficiency by reducing idle CPU time, especially when one thread is waiting for I/O while another thread continues using the processor.
Threads help structure large programs by dividing tasks into smaller independent units, making the program easier to manage, scale, and debug.
Types of Threads
User-Level Threads (ULT)
Kernel-Level Threads (KLT)
Hybrid Threads (Combination of User + Kernel Threads)
Single-threaded Model
Multithreaded Model
Many-to-One Model
One-to-One Model
Many-to-Many Model
User-Level Threads (ULT)
User-Level Threads are threads that are completely created and managed in user space by a thread library, not by the operating system. The OS sees the entire multithreaded program as just one single process, while all thread operations happen inside the application itself.
Important Points
All thread operations such as creation, deletion, and scheduling happen in user space.
The OS kernel does not know how many threads exist inside the process.
Switching between threads is very fast because it does not require system calls.
Thread creation is lightweight and requires very little overhead.
A blocking system call blocks the entire process because the OS cannot switch to another thread.
ULT cannot take advantage of multiple CPU cores since the OS schedules only the process, not the threads inside it.
The application can define its own scheduling algorithm because the OS is not involved.
Works even on operating systems that do not support kernel-level threading.
Used in older Java "green threads" and some language runtime coroutine systems.
Real-Life Example
Many programming language runtimes, like early Java green threads or cooperative coroutine libraries in Python, used user-level threading, where the language runtime handled all thread operations instead of the operating system.
Advantages of ULT
Can be implemented on an OS that doesn't support multithreading.
Simple representation, since a thread has only a program counter, register set, and stack space.
Simple to create, since no kernel intervention is needed.
Thread switching is fast since no OS calls need to be made.
Disadvantages of ULT
Little or no coordination between the threads and the kernel.
If one thread causes a page fault, the entire process blocks.
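To make user-level switching concrete, below is a tiny cooperative threading sketch using the ucontext API (available on Linux/glibc, though deprecated in newer POSIX versions). The kernel sees only one thread; all switching happens in user space, which is exactly why a single blocking call would stall every user thread.

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, worker_ctx;
static char worker_stack[64 * 1024];       /* each user thread needs its own stack */

static void worker(void) {
    puts("worker: running in user space");
    swapcontext(&worker_ctx, &main_ctx);   /* cooperative yield back to main */
    puts("worker: resumed, finishing");
}

int main(void) {
    getcontext(&worker_ctx);
    worker_ctx.uc_stack.ss_sp = worker_stack;
    worker_ctx.uc_stack.ss_size = sizeof worker_stack;
    worker_ctx.uc_link = &main_ctx;        /* return here when worker finishes */
    makecontext(&worker_ctx, worker, 0);

    puts("main: switching to worker");
    swapcontext(&main_ctx, &worker_ctx);   /* user-level context switch: no kernel call */
    puts("main: back, resuming worker");
    swapcontext(&main_ctx, &worker_ctx);
    puts("main: done");
    return 0;
}
```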
Kernel-Level Threads (KLT)
Kernel-Level Threads are threads that are fully created, managed, and scheduled by the operating system kernel. Unlike user-level threads, the OS is aware of each individual thread and treats them separately. Because the kernel controls them directly, these threads can run on multiple CPU cores at the same time and can continue execution even if one thread becomes blocked.
Important Points
The operating system kernel performs creation, deletion, and scheduling of threads.
Each thread is known to the OS, so the OS can manage them individually.
A blocking system call affects only the blocked thread, not the entire process.
KLT can run on multiple CPU cores because the kernel can schedule different threads on different processors.
Switching between threads is slower than ULT because it requires a mode switch between user and kernel space.
Thread creation has more overhead compared to user-level threads because kernel involvement is required.
KLT allows true parallelism on multi-core systems.
Operating systems like Windows, Linux, and macOS use kernel-level threading.
Useful for applications that require strong CPU parallelism, such as servers and multimedia programs.
Advantages of Kernel-Level Threads
If one thread blocks due to I/O or waiting, the rest of the threads in the same process continue running smoothly.
They use multiple CPU cores effectively, allowing true parallel execution.
The OS scheduler can make better decisions because it knows about all threads individually.
More reliable for applications that need high performance or heavy multitasking.
Ideal for servers and large applications that need to manage many simultaneous tasks.
Disadvantages of Kernel-Level Threads
Switching between threads is slower because it requires kernel mode involvement.
Thread creation has more overhead and consumes more system resources.
Frequent context switches may reduce performance in systems with many threads.
Implementation is more complex and requires OS support.
Hybrid Threads (ULT + KLT Combination)
Hybrid threads combine the features of User-Level Threads and Kernel-Level Threads to overcome the limitations of both models. In this approach, the application creates many user-level threads, but only a few kernel-level threads are created by the OS. The user threads are mapped onto the kernel threads, allowing the application to enjoy fast user-level switching while still gaining the benefit of true parallelism through kernel scheduling. This model provides flexibility, efficiency, and better performance, especially in systems where large numbers of threads are needed.
Important Points
Combines the speed of user-level threads with the power of kernel-level scheduling.
Many user-level threads are mapped onto fewer kernel-level threads (many-to-many model).
User-level switching is fast because it does not always involve the kernel.
Kernel-level threads allow the program to use multiple CPU cores.
If one user-level thread blocks, the kernel can still schedule another thread mapped to a different kernel thread.
Helps reduce kernel overhead since fewer kernel threads are created.
Applications can choose custom scheduling for user-level threads.
Supported in systems like Solaris and some advanced threading libraries.
Advantages of Hybrid Threads
Achieves both fast switching (from ULT) and true parallel execution (from KLT).
Reduces kernel overhead because not every user thread needs to be a kernel thread.
Offers better scalability for applications requiring thousands of threads.
If a thread blocks, other threads can continue running because the kernel manages multiple kernel threads.
Provides great flexibility in scheduling because user-level schedulers can work on top of kernel-level threads.
Disadvantages of Hybrid Threads
More complex to design and maintain compared to pure ULT or KLT.
Mapping user threads onto kernel threads requires careful coordination.
Debugging becomes harder because two schedulers are involved (user-level and kernel-level).
Performance depends heavily on how well user threads are mapped to kernel threads.
Single-Threaded Model
A single-threaded model is a design in which a process contains only one thread of execution. This means the program can perform only one task at a time, step by step, without any parallel activity. Since there is only one flow of control, the program must complete one operation before moving to the next. If this single thread becomes busy or blocked, the entire process must wait, making the application less responsive. Single-threaded programs are simple to design and debug but are limited in performance when handling multiple tasks.
Important Points
The process contains only one thread, so it can execute only one operation at a time.
There is no internal parallelism because no other thread exists to perform work alongside the main thread.
If the thread blocks due to input/output or waiting for a resource, the entire process becomes blocked.
Responsiveness is lower because the program cannot continue other work while waiting.
The design is simple because the developer does not need to manage shared memory, synchronization, or thread conflicts.
Debugging is easier compared to multithreaded programs, as there is only one path of execution.
Performance is limited on multi-core systems because only one core can be used at a time.
Suitable for small, simple tasks that do not require background operations or heavy computation.
Many older applications and simple command-line tools follow this model.
Multithreaded Model
A multithreaded model is a design where a single process contains multiple threads running independently but sharing the same memory and resources. Each thread performs a separate task, allowing the program to do many operations at the same time. This improves responsiveness, performance, and multitasking within the application. If one thread is waiting for input, other threads can continue running without delay. Modern applications like browsers, servers, games, and mobile apps rely heavily on this model to handle background tasks, user interactions, networking, and computation all together.
Important Points
A single process contains multiple threads that run independently but share the same data and memory space.
The program becomes faster because different tasks run in parallel or overlap in execution.
The application stays responsive because one thread can handle user input while others perform background work.
If one thread blocks for input/output, other threads can continue running without affecting the entire process.
This model uses system resources efficiently, especially on multi-core processors where threads can run on different cores.
Communication between threads is simple because they use shared memory instead of complex inter-process communication.
Multithreaded programs can divide large tasks into smaller parallel units, improving performance.
Servers use multiple threads to handle many clients at the same time, increasing throughput.
Requires careful synchronization to prevent issues when multiple threads access shared data.
Used in modern software like browsers (Chrome tabs), media players, IDEs, games, and mobile apps.
Many-to-One Model
The Many-to-One model is a threading approach where many user-level threads are mapped to a single kernel thread. All threads are created and managed in user space, and the operating system sees the entire group as one single thread. Because only one kernel thread exists, only one user thread can run at a time, even on multi-core processors. This model is simple and fast for user-level switching, but it has major limitations in parallel execution and blocking behavior.
Important Points
Many user-level threads are connected to one kernel thread, so the OS treats them as a single execution unit.
Thread management such as creation, switching, and scheduling happens entirely in user space.
Switching between threads is very fast because no kernel involvement is required.
A blocking system call from any thread blocks the entire process since the kernel has only one thread to work with.
No real parallelism is possible, even if the system has multiple CPU cores, because the kernel thread runs on only one core at a time.
The model is simple to implement and works even on OSes that do not support kernel threading.
It is efficient for applications with many short tasks that don’t require true parallel execution.
Communication between threads is easy because they share the same memory of the parent process.
Used historically in early Java “green threads” and other user-level thread libraries.
One-to-One Model
The One-to-One model maps each user-level thread directly to a separate kernel-level thread. This means every thread created by the application is also recognized and managed individually by the operating system. Because each user thread becomes a kernel thread, the OS can schedule multiple threads on multiple CPU cores at the same time, allowing true parallel execution. This model removes the blocking limitations of the Many-to-One model, but it increases overhead because creating too many kernel threads can consume more system resources.
Important Points
Each user thread has a separate corresponding kernel thread managed by the operating system.
The OS schedules every thread independently, enabling parallel execution on multiple CPU cores.
A blocking system call affects only the blocked thread; other threads in the process continue running normally.
Thread creation and switching involve kernel activity, making it slower and heavier than user-level threading.
High thread count may cause performance issues because each thread consumes kernel memory and scheduling time.
This model provides strong responsiveness and high CPU utilization for multitasking applications.
Modern operating systems like Windows, Linux, and macOS use this model by default.
Ideal for applications that need real concurrency, such as servers, browsers, and multicore optimized programs.
It avoids the major disadvantage of the Many-to-One model where one blocked thread freezes the entire process.
Many-to-Many Model
The Many-to-Many model allows many user-level threads to be mapped onto a smaller or equal number of kernel-level threads. This means the application can create hundreds or thousands of user threads, but the operating system manages only a limited number of kernel threads underneath. The OS schedules kernel threads across multiple CPU cores, while the user-level thread library schedules user threads onto those kernel threads. This model provides both efficiency and parallel execution, offering the flexibility of user-level threading with the performance of kernel-level threading.
Important Points
Many user-level threads are mapped to fewer or equal kernel-level threads, giving flexibility in how threads are used.
User threads are created and managed in user space, making them faster and lighter.
Kernel threads provide actual parallelism by running on multiple CPU cores.
A blocking user thread blocks only the kernel thread it is mapped to; other user threads can still run via other kernel threads.
This model allows scalable applications because creating user threads is fast and low-cost.
The OS handles kernel thread scheduling, while the application handles user thread scheduling.
Helps avoid the limitations of the Many-to-One model by enabling parallelism.
Reduces the overhead of the One-to-One model because fewer kernel threads are created.
Applications can run many more threads than the number of available CPU cores.
Commonly used in advanced thread libraries and operating systems like Solaris.
Concept of Multithreading
Multithreading is a technique where a single process is divided into multiple threads that run independently but share the same memory and resources. Instead of creating many heavy processes, a program creates lightweight threads that can handle different parts of the work at the same time. This allows the application to run faster, stay responsive, and efficiently use modern multi-core CPUs. Multithreading makes it possible for a program to perform background work and user interactions together without slowing down or freezing.
Important Points
Multiple threads exist inside one process, each running independently.
All threads share the same code section, data section, and memory space of the parent process.
Thread creation is faster and uses fewer resources compared to creating full processes.
Multithreading improves performance by allowing multiple operations to run at the same time.
If one thread is waiting for I/O, the other threads continue working without blocking the whole process.
Threads can run on different CPU cores to achieve true parallel execution.
Communication between threads is easy because they share memory and variables.
Useful for breaking large tasks into smaller parts that run concurrently.
Makes programs more responsive by separating background work from user actions.
Needs careful synchronization to avoid conflicts when threads access shared data.
How Multithreading Works
Multithreading works by dividing the program into multiple threads, where each thread handles a specific task. These threads are scheduled by the operating system or a thread library to run either one after another or in parallel on different CPU cores. When a thread needs to wait for input/output, another thread immediately gets CPU time, keeping the application active. The OS keeps track of each thread’s state, saves its progress when switching, and resumes it later, allowing smooth multitasking within one process.
Important Points
The OS or thread library schedules threads to run one after another or in parallel.
Each thread has its own program counter, stack, and register values.
All threads access the same shared memory of the parent process.
When one thread gets blocked, another thread is selected to run immediately.
Context switching between threads is fast because they share memory.
The OS may run different threads on different CPU cores for parallelism.
The process continues smoothly because threads take turns using the CPU.
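The synchronization point mentioned above can be demonstrated with a short POSIX threads sketch: two threads increment one shared counter, and the mutex is what makes the final value predictable. Removing the lock/unlock pair typically yields a result below 200000 because the increments race.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* only one thread updates at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add, NULL);
    pthread_create(&t2, NULL, add, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the mutex held */
    return 0;
}
```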
Real-Life Example of Multithreading
Example: Using a Web Browser
A browser uses many threads inside one process:
One thread loads the webpage
One thread handles user typing and scrolling
One thread downloads files
One thread plays audio/video
One thread runs JavaScript
All of these tasks happen together without the browser freezing.
This is exactly how multithreading works.
Example: Mobile Apps
When using Instagram:
One thread loads images
One thread handles screen touch
One thread uploads your post
One thread plays videos
One thread fetches notifications
All running smoothly inside one app.
Advantages of Multithreading
Multithreading improves the performance of a program by allowing different tasks to run at the same time.
It keeps applications responsive because background tasks do not block user-interface interactions.
Threads use less memory than processes since they share the same address space.
Context switching between threads is faster than switching between processes.
It increases CPU usage efficiency because threads can run when others are blocked.
Makes good use of multi-core processors by executing different threads on different cores.
Useful for real-time applications where continuous interaction and fast response are required.
Helps divide large problems into smaller parallel parts for better performance.
Improves server performance by handling multiple client requests simultaneously.
Disadvantages of Multithreading
Managing multiple threads increases complexity in program design.
When threads share data, they may create conflicts or incorrect results without proper synchronization.
Debugging multithreaded programs is difficult because errors may appear randomly depending on timing.
Improperly handled threads can cause deadlocks, where two or more threads wait for each other forever.
Context switching between many threads may increase overhead and reduce performance.
If one thread misbehaves or corrupts shared data, it can affect the entire process.
Testing becomes harder because thread execution order is unpredictable.
Process Scheduling
Process scheduling is the method used by the operating system to decide which process or thread should use the CPU at any given moment. Because the number of processes is always more than the number of CPUs available, the OS must select one process, let it run, pause it, and then switch to another process. This makes multitasking possible. The scheduler ensures that every process gets the CPU fairly, efficiently, and at the right time, improving system performance and responsiveness.
Important Points
It decides which process should run next and how long it gets the CPU.
It keeps the CPU busy by selecting a ready process whenever the CPU becomes free.
It makes multitasking possible by switching between processes quickly.
The OS uses scheduling algorithms to manage how processes move from ready to running.
The goal is to increase speed, reduce waiting, and improve system efficiency.
Scheduling is essential in preemptive systems where the OS can stop a running process.
It helps balance CPU use among high-priority and low-priority processes.
Scheduling decisions affect user experience, system throughput, and overall performance.
Scheduling Objectives
Scheduling objectives describe what the OS tries to achieve when deciding which process should get the CPU. These goals ensure that the system runs efficiently, responds quickly, and fairly shares CPU time among processes.
Important Points
The scheduler aims to keep the CPU busy as much as possible by reducing idle time.
It tries to minimize the time a process stays waiting in the ready queue.
It ensures a short response time so interactive programs feel fast and responsive.
It aims to minimize turnaround time so jobs finish as early as possible.
It tries to maximize throughput by completing as many processes as possible in a given time.
It must maintain fairness so no process is starved or ignored.
It should support priority scheduling so important tasks run earlier.
It must handle both CPU-bound and I/O-bound processes efficiently.
It must manage context switching carefully to avoid wasting CPU time.
It must support real-time needs in systems where deadlines are important.
Schedulers
Schedulers are special components of the operating system responsible for selecting which process should enter the ready queue, which process should get the CPU, and which process should be moved out of the system. Different schedulers handle these tasks at different stages of a process’s life cycle. They work together to manage process creation, CPU assignment, and process completion. Each scheduler plays its own role so the OS can handle thousands of processes smoothly without confusion or slowdown.
Types of Schedulers
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long-Term Scheduler (Job Scheduler)
The long-term scheduler decides which new processes should be admitted into the ready queue. It controls the degree of multiprogramming by choosing how many processes should be in the system at one time. If too many processes enter, the system slows down; if too few enter, the CPU becomes underutilized.
Important Points
Selects which new processes will enter the ready queue.
Controls the total number of processes in the system (degree of multiprogramming).
Ensures the CPU is neither overloaded nor underloaded.
Balances CPU-bound and I/O-bound processes to improve efficiency.
Runs less frequently because job admission does not happen constantly.
Helps maintain smooth long-term system performance.
Real-Life Example
When you open many apps at once, your phone may refuse to open new heavy apps due to memory limits. This is controlled by a long-term scheduling decision.
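A rough sketch of this admission decision, assuming a hypothetical MAX_DEGREE limit and made-up job names:

```python
MAX_DEGREE = 4                 # assumed maximum degree of multiprogramming
active = ["P1", "P2", "P3"]    # processes already admitted to the system
new_jobs = ["P4", "P5", "P6"]  # new jobs waiting for admission

for job in new_jobs:
    if len(active) < MAX_DEGREE:
        active.append(job)     # admitted into the ready queue
        print(job, "admitted")
    else:
        print(job, "held back until system load drops")
```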
Short-Term Scheduler (CPU Scheduler)
The short-term scheduler is responsible for selecting which process from the ready queue should be given the CPU next. It makes decisions very frequently, sometimes hundreds of times per second. This scheduler is the key to multitasking and determines how responsive the system feels.
Important Points
Selects a ready process to run on the CPU.
Makes decisions very frequently, often many times per second.
Directly affects system responsiveness and speed.
Performs context switching when changing from one process to another.
Uses scheduling algorithms like FCFS, SJF, and Round Robin.
Decides which process moves from ready → running.
Real-Life Example
When you switch between apps quickly, the short-term scheduler decides which app gets CPU time at each moment.
Medium-Term Scheduler (Swapper)
The medium-term scheduler decides which processes should be temporarily removed from RAM and stored in secondary memory (swap space) to free up resources. It helps the system handle memory shortages smoothly by suspending processes that are not currently needed.
Important Points
Selects processes to be swapped out of memory during heavy load.
Moves processes to suspended ready or suspended blocked states.
Helps control memory usage when RAM becomes full.
Improves responsiveness by freeing space for active processes.
Runs when the OS detects memory pressure or too many active processes.
Plays an important role in systems using swapping or virtual memory.
Real-Life Example
On a laptop with low RAM, when multiple apps are open, some background apps get “frozen” or moved out of RAM. This is handled by the medium-term scheduler.
CPU Utilization
CPU utilization measures how effectively the CPU is being used during system operation. The goal of the operating system is to keep the CPU as busy as possible because idle CPU time is wasted time. A high CPU utilization means the scheduler is selecting processes efficiently, reducing idle gaps, and ensuring the processor is working most of the time. This is important in both general systems and servers where efficient resource use directly affects performance.
Important Points
Shows the percentage of time the CPU stays busy doing productive work.
Higher CPU utilization means better system efficiency and less idle time.
Scheduling algorithms aim to maximize this value.
Poor scheduling leads to low utilization, causing the CPU to sit idle.
Ideal utilization in real systems typically ranges between 70% and 90%.
Crucial for high-load systems like servers and multitasking environments.
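CPU Utilization = (CPU Busy Time ÷ Total Time) × 100
For example (hypothetical values): if the CPU is busy for 45 seconds out of a 50-second window, CPU Utilization = (45 ÷ 50) × 100 = 90%.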
Throughput
Throughput refers to the number of processes that complete execution in a given amount of time. It tells how productive the system is. A higher throughput means the scheduler is able to complete more processes within the same time period, making the system more efficient. Throughput depends on scheduling algorithm choice, process type, CPU–I/O ratio, and system load.
Important Points
Indicates how many processes finish execution per unit time.
Higher throughput means more work is being completed.
Shorter processes increase throughput, while longer processes reduce it.
Scheduling algorithms like SJF and RR can significantly affect throughput.
Systems with heavy I/O wait time typically have lower throughput.
Important for environments where many jobs must finish quickly (batch systems, servers).
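Throughput = Number of processes completed ÷ Total time taken
For example (hypothetical values): if 5 processes finish in 20 seconds, Throughput = 5 ÷ 20 = 0.25 processes per second.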
CPU Scheduling Time Parameters (Scheduling Performance Metrics)
These are the basic timing values used to calculate scheduling results in CPU scheduling algorithms.
They include:
Arrival Time
Burst Time
Completion Time
Turnaround Time
Waiting Time
Response Time
Arrival Time
Arrival Time is the moment when a process first enters the ready queue and becomes available for scheduling. Before this time, the operating system does not consider the process for CPU execution. It marks the starting point from which the OS tracks the process’s progress. Every scheduling algorithm uses arrival time to decide the order in which processes should be handled, especially when new processes keep entering the system at different times.
Important Points
It is the time when a process enters the ready queue for the first time.
The OS begins tracking the process only from this point.
A process cannot be scheduled before its arrival time.
Different processes can have different arrival times, depending on when they were submitted.
Arrival time helps determine scheduling order in algorithms like FCFS, SJF, and RR.
Used in calculations for waiting time, turnaround time, and response time.
Important for real-time systems where tasks appear at different moments.
Burst Time (CPU Burst Time)
Burst Time is the total amount of actual CPU time a process needs to complete its execution. It represents how long the CPU must work on that process if it runs without interruption (no preemption). Every scheduling algorithm uses burst time to decide which process should get the CPU next. Shorter burst times usually help complete processes faster, while longer burst times can delay others. Burst time is one of the most important values in CPU scheduling because it directly influences waiting time, turnaround time, and response time.
Important Points
It is the total time the CPU needs to finish the process’s execution.
It does not include waiting time or blocked time — only pure CPU working time.
Burst time may be known in advance (in theoretical problems) or estimated (in real OS).
Used heavily in algorithms like SJF, SRTF, RR, FCFS, and priority scheduling.
Shorter burst time improves system throughput.
Long burst time processes may slow down smaller processes.
In preemptive scheduling, the remaining burst time decreases after each execution slice.
An I/O-bound process may have several separate CPU bursts, with I/O bursts in between.
Formulas Related to Burst Time
Turnaround Time = Completion Time – Arrival Time
Waiting Time = Turnaround Time – Burst Time
Response Time = First Time CPU is Assigned – Arrival Time
Completion Time = Start Time + Burst Time (for non-preemptive execution; under FCFS with all processes arriving at time 0, the start time is the sum of the burst times of all earlier processes)
Remaining Burst Time = Original Burst Time – Time Already Executed
Total CPU Burst = Sum of burst times of all processes
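The sketch below applies these formulas in Python under FCFS ordering, using hypothetical arrival and burst values; each process starts as soon as it has arrived and the CPU is free.

```python
# Hypothetical processes: (name, arrival time, burst time)
processes = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 5)]

clock = 0
for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
    start = max(clock, arrival)        # CPU idles if the process has not arrived yet
    completion = start + burst         # Completion Time
    turnaround = completion - arrival  # Turnaround Time = Completion Time - Arrival Time
    waiting = turnaround - burst       # Waiting Time = Turnaround Time - Burst Time
    response = start - arrival         # Response Time = first CPU assignment - Arrival Time
    clock = completion
    print(f"{name}: CT={completion}, TAT={turnaround}, WT={waiting}, RT={response}")
```

For the values above, this prints CT = 4, 7, 12 and TAT = 4, 6, 10; note that under FCFS the response time equals the waiting time, because each process gets the CPU exactly once.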
Completion Time
Completion Time is the exact moment when a process finishes all its CPU execution and exits the system. It marks the endpoint of the process’s life cycle. Once the process reaches its completion time, the operating system considers it fully executed, and no more CPU work remains for that process. Completion time depends on the scheduling algorithm, the arrival order, burst times, and the waiting period of other processes. It is an important value used to calculate turnaround time and waiting time.
Important Points
It is the final time at which a process completes its execution.
The OS records completion time when the process leaves the CPU for the last time.
Completion time varies depending on scheduling order and CPU availability.
It is affected by arrival time, burst time, preemption, and waiting periods.
Used to calculate Turnaround Time.
Used indirectly to calculate Waiting Time.
In preemptive scheduling, completion time occurs after the last remaining burst finishes.
In non-preemptive algorithms, completion time is easier to calculate because processes run to completion once started.
Completion Time = Time when the process leaves CPU for the last time
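For example (hypothetical values): if a process starts running at time 4 and needs 3 more units of CPU with no preemption, its Completion Time = 4 + 3 = 7.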
Turnaround Time
The total amount of time spent by the process from its arrival to its completion is called turnaround time.
OR
Turnaround Time is the total time taken by a process from the moment it enters the ready queue until it completes its execution. It represents the overall time spent in the system, including waiting time, execution time, and any delays caused by scheduling. Turnaround time shows how long a process takes from start to finish and is one of the most important measurements for evaluating the effectiveness of scheduling algorithms. Lower turnaround time means faster job completion and better system performance.
Important Points
It is the total time a process spends inside the system.
Includes waiting time, CPU burst time, and any delays.
Turnaround time depends heavily on the scheduling method used.
Processes with short burst time usually have small turnaround time.
Algorithms like SJF and SRTF reduce turnaround time effectively.
Used to measure how fast the system completes tasks.
Long waiting queues and long burst processes increase turnaround time.
Important in batch systems where job completion speed matters.
Turnaround Time = Completion Time – Arrival Time
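For example (hypothetical values): if a process arrives at time 2 and completes at time 10, Turnaround Time = 10 – 2 = 8 units.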
Waiting Time
Waiting Time is the total time a process spends waiting in the ready queue before it gets the CPU. It does not include CPU execution time or time spent in blocked/waiting state for I/O. It only counts the time the process is ready to run but has not been scheduled yet. Lower waiting time means the CPU scheduler is efficient, while higher waiting time indicates delays caused by long queues or inefficient scheduling algorithms.
Important Points
It is the total time a process spends in the ready queue.
Does not include CPU burst time or I/O waiting time.
Waiting time increases when processes wait behind long jobs.
Preemptive scheduling can reduce waiting time by giving short jobs quicker access.
Priority scheduling may increase waiting time for lower-priority processes.
Round Robin prevents any single process from waiting very long, though its average waiting time may be higher than SJF's.
Used to calculate turnaround time.
A key factor for user experience in interactive systems.
Waiting Time = Turnaround Time – Burst Time
Formulas for Waiting Time
Main Formula
Waiting Time = Turnaround Time – Burst Time
Using Completion and Arrival Time
Waiting Time = (Completion Time – Arrival Time) – Burst Time
Average Waiting Time
Average Waiting Time = (Sum of all waiting times) ÷ Number of processes
Gantt Chart Interpretation
Waiting Time = Total time spent in the ready queue, summed over every wait before each CPU execution.
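For example (hypothetical values): if a process has Turnaround Time = 8 units and Burst Time = 5 units, Waiting Time = 8 – 5 = 3 units.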
Response Time
Response Time is the time taken from when a process enters the ready queue (arrival) to the moment it gets the CPU for the very first time. It measures how quickly the system reacts to a request, especially important in interactive systems like mobile apps, browsers, and real-time applications. A lower response time means the user experiences faster feedback from the system. Even if the process has not completed, response time focuses only on the delay before the process’s first execution.
Important Points
It is the time between arrival of the process and the first CPU allocation.
Response time does not depend on when the process finishes — only when it starts.
Very important for interactive systems where quick reaction is required.
Preemptive algorithms like RR and SRTF tend to give low response time.
Long processes ahead in FCFS increase response time for new processes.
Response time improves user experience by reducing the delay before execution starts.
If a process waits too long before its first run, system responsiveness feels slow.
Once the process gets the CPU for the first time, response time is decided and does not change later.
Response Time = First Time CPU is Assigned – Arrival Time
Formulas for Response Time
Main Formula
Response Time = First Time CPU is Assigned – Arrival Time
If first execution time is F and arrival time is A
Response Time = F – A
Average Response Time
Average Response Time = (Sum of all response times) ÷ Number of processes
Gantt Chart Interpretation
Response time = time from arrival → first scheduling.
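For example (hypothetical values): if a process arrives at time 3 and first gets the CPU at time 7, Response Time = 7 – 3 = 4 units.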
Next: Operating System Unit-2 Notes (Part 2)