Operating Systems: Concepts and Principles, Lecture notes of Operating Systems

These notes explore the core principles, architectural components, and operation of operating systems: process management and scheduling, memory allocation, file systems, and I/O. Whether you are a student grasping the essentials of OS design or a professional deepening your knowledge, they provide a foundation in operating system theory and practice, including the challenges and solutions that arise in modern computing environments.

Typology: Lecture notes

2023/2024

Uploaded on 09/13/2023

remeeee 🇮🇳


Operating Systems

โ— An Operating System can be defined as an interface between user and hardware. It

is responsible for the execution of all the processes, Resource Allocation, CPU management, File Management and many other tasks. The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner.

โ— Types of Operating Systems :

  1. Batch OS โ€“ A set of similar jobs are stored in the main memory for execution. A job gets assigned to the CPU, only when the execution of the previous job completes.
  2. Multiprogramming OS โ€“ The main memory consists of jobs waiting for CPU time. The OS selects one of the processes and assigns it to the CPU. Whenever the executing process needs to wait for any other operation (like I/O), the OS selects another process from the job queue and assigns it to the CPU. This way, the CPU is never kept idle and the user gets the flavor of getting multiple tasks done at once.
  3. Multitasking OS โ€“ Multitasking OS combines the benefits of Multiprogramming OS and CPU scheduling to perform quick switches between jobs. The switch is so quick that the user can interact with each program as it runs.
  4. Time Sharing OS โ€“ Time-sharing systems require interaction with the user to instruct the OS to perform various tasks. The OS responds with an output. The instructions are usually given through an input device like the keyboard.
  5. Real Time OS โ€“ Real-Time OS are usually built for dedicated systems to accomplish a specific set of tasks within deadlines. โ— Process : A process is a program under execution. The value of the program counter (PC) indicates the address of the next instruction of the process being executed. Each process is represented by a Process Control Block (PCB). โ— Process Scheduling:
  1. Arrival Time – Time at which the process arrives in the ready queue.
  2. Completion Time – Time at which the process completes its execution.
  3. Burst Time – Time required by the process for CPU execution.
  4. Turnaround Time – Difference between completion time and arrival time. Turnaround Time = Completion Time - Arrival Time
  5. Waiting Time (WT) – Difference between turnaround time and burst time. Waiting Time = Turnaround Time - Burst Time

● Thread (Important): A thread is a lightweight process and forms the basic unit of CPU utilization. A process can perform more than one task at the same time by including multiple threads.
● A thread has its own program counter, register set, and stack.
● A thread shares resources with other threads of the same process: the code section, the data section, open files, and signals.
Note: A child process of a given process is created with the fork() system call (threads, by contrast, are created through a thread library). A process that executes n fork() calls generates 2^n - 1 child processes; for example, 3 fork() calls yield 2^3 - 1 = 7 children.
There are two types of threads:
● User threads (implemented by users)
● Kernel threads (implemented by the OS)

● Scheduling Algorithms:
  1. First Come First Serve (FCFS): The simplest scheduling algorithm; processes are scheduled according to their arrival times.
  2. Shortest Job First (SJF): Processes with the shortest burst time are scheduled first.
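The scheduling metrics above can be exercised with a small FCFS simulation. This is a minimal sketch (not part of the notes); the process names and times are hypothetical:

```python
# FCFS scheduling sketch: processes run to completion in arrival order.
# Each process is (name, arrival_time, burst_time).
def fcfs(processes):
    time, results = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival) + burst   # CPU idles until arrival if needed
        turnaround = time - arrival         # Turnaround = Completion - Arrival
        waiting = turnaround - burst        # Waiting = Turnaround - Burst
        results[name] = (time, turnaround, waiting)
    return results

# P1 runs 0-4, P2 runs 4-7, P3 runs 7-8.
print(fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]))
```

Each entry maps a process to its (completion, turnaround, waiting) times, matching the formulas in items 4 and 5 above.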

  1. Race Condition – The final output of the code depends on the order in which shared variables are accessed. This is termed a race condition. A solution to the critical section problem must satisfy the following three conditions:
  2. Mutual Exclusion – If a process Pi is executing in its critical section, then no other process is allowed to enter the critical section.
  3. Progress – If no process is executing in the critical section, then the decision of a process to enter the critical section cannot be made by any process that is executing in its remainder section, and the selection of the process cannot be postponed indefinitely.
  4. Bounded Waiting – There exists a bound on the number of times other processes can enter the critical section after a process has made a request to enter it and before that request is granted.

● Synchronization Tools:
  1. Semaphore – A semaphore is a protected variable (an abstract data type) used to lock the resource being used; its value indicates the status of a common resource. There are two types of semaphores:
     ● Binary semaphores – take only the values 0 and 1, and are used to implement mutual exclusion and to synchronize concurrent processes.
     ● Counting semaphores – integer variables whose value can range over an unrestricted domain.
  2. Mutex – A mutex provides mutual exclusion: either the producer or the consumer can hold the key (the mutex) and proceed with its work. As long as the buffer is being filled by the producer, the consumer must wait, and vice versa. At any point in time, only one thread can work with the entire buffer; the concept can be generalized using semaphores.

● Deadlocks (Important): A situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process. Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions):

  1. Mutual Exclusion – At least one resource is non-sharable (only one process can use it at a time).
  2. Hold and Wait – A process is holding at least one resource and waiting for additional resources.
  3. No Preemption – A resource cannot be taken from a process unless the process releases it.
  4. Circular Wait – A set of processes is waiting for each other in circular form.

● Methods for handling deadlock: There are three ways to handle deadlock:
  1. Deadlock prevention or avoidance – The idea is to never let the system enter a deadlock state.
  2. Deadlock detection and recovery – Let deadlock occur, then detect it and recover, e.g., by preemption.
  3. Ignore the problem altogether – If deadlock is very rare, let it happen and reboot the system. This is the approach that both Windows and UNIX take.

● Banker's algorithm is used to avoid deadlock; it is one of the deadlock-avoidance methods. It is named after the banking system, where a bank never allocates its available cash in such a way that it can no longer satisfy the needs of all its customers.
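The safety check at the heart of Banker's algorithm can be sketched as follows. This is a minimal illustration (not from the notes); the process count, resource vectors, and example values are hypothetical:

```python
# Banker's algorithm safety check: the system is safe if every process can
# finish in some order using only the currently available resources.
# need[i][j] = remaining demand of process i for resource type j.
def is_safe(available, allocation, need):
    work = available[:]                  # resources currently free
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            # Process i can run to completion if its remaining need fits in work.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # On completion it releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)

# Hypothetical 2-process, 2-resource-type example: safe, because each
# process's remaining need can be satisfied in turn.
print(is_safe([1, 1], [[1, 0], [0, 1]], [[1, 1], [0, 1]]))
```

If no process's remaining need can be satisfied, the loop stops and the state is reported unsafe, so the request that would lead there is denied.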

Note: โ— Best fit does not necessarily give the best results for memory allocation. โ— The cause of external fragmentation is the condition in Fixed partitioning and Variable partitioning saying that the entire process should be allocated in a contiguous memory location.Therefore Paging is used.

  1. Paging – Physical memory is divided into equal-sized frames, and logical (virtual) memory is divided into pages of the same size, so a page of virtual memory fits exactly into a frame of physical memory.
  2. Segmentation – Segmentation is implemented to give users a view of memory. The logical address space is a collection of segments. Segmentation can be implemented with or without the use of paging.

● Page Fault: A page fault is a type of interrupt, raised by the hardware when a running program accesses a memory page that is mapped into the virtual address space but not loaded in physical memory.

● Page Replacement Algorithms (Important):
  1. First In First Out (FIFO) – The simplest page replacement algorithm. The operating system keeps all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal. For example, consider the page reference string 1, 3, 0, 3, 5, 6 and 3 page slots. Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots -> 3 page faults. When 3 comes, it is already in memory -> 0 page faults. Then 5 comes; it is not in memory, so it replaces the oldest page, 1 -> 1 page fault. Finally 6 comes; it is also not in memory, so it replaces the oldest page, 3 -> 1 page fault.
Belady's anomaly: Belady's anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the FIFO page replacement algorithm. For example, for the reference string 3 2 1 0 3 2 4 3 2 1 0 4 with 3 slots we get 9 total page faults, but if we increase to 4 slots we get 10 page faults.
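A short simulation (a sketch, not part of the notes) reproduces Belady's anomaly for the reference string above:

```python
# FIFO page replacement: count page faults for a reference string.
from collections import deque

def fifo_faults(refs, frames):
    queue, faults = deque(), 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == frames:
                queue.popleft()      # evict the oldest resident page
            queue.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 vs 10: Belady's anomaly
```

Adding a fourth frame makes the fault count go up, not down, exactly as the anomaly predicts.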

  2. Optimal Page Replacement – The page replaced is the one that will not be used for the longest duration of time in the future. Consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots. Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots -> 4 page faults. 0 is already there -> 0 page faults. When 3 comes it takes the place of 7, because 7 is not used for the longest duration of time in the future -> 1 page fault. 0 is already there -> 0 page faults. 4 takes the place of 1 -> 1 page fault. The rest of the reference string causes 0 page faults because those pages are already in memory. Optimal page replacement is perfect but not possible in practice, as an operating system cannot know future requests. Its use is to set up a benchmark against which other replacement algorithms can be analyzed.
  3. Least Recently Used (LRU) – The page replaced is the one that is least recently used. Take the same page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 with 4 page slots. Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots -> 4 page faults. 0 is already there -> 0 page faults. When 3 comes it takes the place of 7, because 7 is the least recently used -> 1 page fault. 0 is already in memory -> 0 page faults. 4 takes the place of 1 -> 1 page fault. The rest of the reference string causes 0 page faults because those pages are already in memory.
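The LRU walkthrough above can be checked with a small simulation (a sketch, not part of the notes):

```python
# LRU page replacement: evict the resident page whose most recent use is
# furthest in the past.
def lru_faults(refs, frames):
    memory, last_used, faults = [], {}, 0
    for t, page in enumerate(refs):
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                # Victim = resident page with the oldest last-use time.
                victim = min(memory, key=lambda p: last_used[p])
                memory.remove(victim)
            memory.append(page)
        last_used[page] = t              # record this reference
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 4))  # 6 page faults, matching the walkthrough above
```

On this particular string LRU happens to match Optimal (6 faults); on other strings it generally incurs more.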

● Disk Scheduling Algorithms:
  1. SCAN: The disk arm moves in one direction, servicing the requests in its path, until it reaches the end of the disk; there it reverses its direction and again services the requests arriving in its path. So, this algorithm works like an elevator and hence is also known as the elevator algorithm.

  2. CSCAN: In the SCAN algorithm, the disk arm rescans the path it has already scanned after reversing its direction, so it may be possible that too many requests are waiting at the other end, or that zero or few requests are pending in the area just scanned. In CSCAN, after reaching the end of the disk the arm instead jumps back to the other end without servicing requests on the return trip and scans in the same direction again, which distributes waiting times more uniformly.
  3. LOOK: Similar to the SCAN disk scheduling algorithm, except that the disk arm, instead of going to the end of the disk, goes only as far as the last request to be serviced in front of the head and then reverses its direction from there. It thus prevents the extra delay caused by unnecessary traversal to the end of the disk.
  4. CLOOK: As LOOK is similar to SCAN, CLOOK is similar to the CSCAN disk scheduling algorithm. In CLOOK, the disk arm, instead of going to the end of the disk, goes only to the last request to be serviced in front of the head and then jumps from there to the other end's last request. It also prevents the extra delay caused by unnecessary traversal to the end of the disk.
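The LOOK behavior can be made concrete with a sketch (the request queue and head position below are hypothetical, not from the notes):

```python
# LOOK disk scheduling: service requests in the current direction only as far
# as the last pending request, then reverse.
def look_order(requests, head, direction="right"):
    right = sorted(r for r in requests if r >= head)           # ahead of head
    left = sorted((r for r in requests if r < head), reverse=True)  # behind head
    return right + left if direction == "right" else left + right

order = look_order([82, 170, 43, 140, 24, 16, 190], head=50)
seek = sum(abs(b - a) for a, b in zip([50] + order, order))
print(order, seek)  # total head movement for this queue: 314 cylinders
```

Note the arm turns around at cylinder 190 (the last pending request), not at the physical end of the disk; that turnaround point is the only difference from SCAN.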

Key Terms

โ— Real-time system is used in the case when rigid-time requirements have been

placed on the operation of a processor. It contains well defined and fixed time

constraints.

โ— A monolithic kernel is a kernel which includes all operating system code in a

single executable image.

โ— Micro kernel: Microkernel is the kernel which runs minimal performance affecting

services for the operating system. In the microkernel operating system all other

operations are performed by the processor.

Macro Kernel: Macro Kernel is a combination of micro and monolithic kernel.

โ— Re-entrancy : It is a very useful memory saving technique that is used for multi-

programmed time sharing systems. It provides functionality that multiple users

can share a single copy of a program during the same period. It has two key

aspects:The program code cannot modify itself and the local data for each user

process must be stored separately.
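The two aspects can be illustrated with a sketch (the functions below are hypothetical, not from the notes): a reentrant routine keeps all state in its arguments and local variables, while a non-reentrant one mutates shared state.

```python
# Non-reentrant: depends on shared mutable state, so concurrent callers
# sharing this one copy of the code would interfere with each other.
history = []
def tally_shared(x):
    history.append(x)                 # shared data modified by every caller
    return sum(history)

# Reentrant: all state lives in the arguments and local variables, so any
# number of callers can safely share a single copy of the code.
def tally(own_history, x):
    own_history = own_history + [x]   # caller-private data, never shared
    return own_history, sum(own_history)

h, total = tally([], 5)
h, total = tally(h, 7)
print(total)  # 12
```

Each caller of `tally` supplies its own history, mirroring the rule that per-user local data must be stored separately while the code itself is shared read-only.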

โ— Demand paging specifies that if an area of memory is not currently being used,

it is swapped to disk to make room for an application's need.

โ— Virtual memory (Imp) is a very useful memory management technique which

enables processes to execute outside of memory. This technique is especially

used when an executing program cannot fit in the physical memory.

โ— RAID stands for Redundant Array of Independent Disks. It is used to store the

same data redundantly to improve the overall performance. There are 7 RAID

levels.