Definitions:

  • CPU burst (time): the interval during which the process executes on the CPU
  • I/O burst (time): the interval during which the process waits for I/O to complete
  • CPU-I/O burst cycle: process execution alternates between the two
    • a process typically needs a CPU burst, then an I/O wait, then another CPU burst, …
    • Some processes are CPU-bound (few, long CPU bursts) and others are I/O-bound (many short CPU bursts, frequent I/O waits)

Context switching:

  • Switching the CPU core to another process
  • requires performing a state save of the current process
  • requires a state restore of a different process
  • pure overhead: no useful work is done while switching

Dispatcher:

  • gives control of the CPU to the process selected by the CPU scheduler; this involves:
    • Switching context
    • Switching to user mode
    • Jumping to the proper location in the user program to restart that program
  • Dispatch latency: the time it takes for the dispatcher to stop one process and start another running (save the state of the old process and restore the state of the new one)

Scheduling scheme:

  • Scheduling decisions may take place under the following four circumstances, when a process:
    1. Switches from running to waiting (for example, an I/O request or wait())
    2. Switches from running to ready (for example, when an interrupt occurs)
    3. Switches from waiting to ready (for example, at completion of I/O)
    4. Terminates
  • Nonpreemptive scheduling scheme:
    • for circumstances 1 and 4 there is no scheduling choice for the running process: once the CPU is allocated, it is kept until the process terminates or switches to the waiting state
    • less context-switching overhead
    • Algorithms:
      • First-come, first-served (FCFS): the order of requests decides the serving order (see the FCFS sketch after this list)
        • Convoy effect
        • short processes queued behind a long process must wait through its entire long CPU burst before it finally moves on to I/O
      • Shortest-job-first (SJF): the CPU is allocated to the process with the shortest next CPU burst
        • provably optimal: it gives the minimum average waiting time for a given set of processes
        • the exact length of the next burst is unknown, but it can be estimated from the lengths of previous CPU bursts, e.g., with an exponential average (see the prediction sketch after this list)
  • Preemptive scheduling scheme:
    • for circumstances 2 and 3 there is a scheduling choice: the running process can be preempted in the middle of its CPU burst, so processes compete for CPU time
    • Algorithms:
      • Shortest-remaining-time-first (SRTF): preemptive SJF; at any moment the CPU runs the process with the smallest remaining burst time, i.e., the one closest to completing its burst (see the SRTF sketch after this list)
        • more context-switching overhead
        • needs to know (or estimate) burst times beforehand
      • Round Robin (RR): each process runs for at most one time quantum, then is preempted and moved to the tail of the ready queue (see the RR sketch after this list)
      • Priority scheduling: the CPU is allocated to the process with the highest priority (smallest integer = highest priority)
        • reflects the relative importance of each process
        • starvation for low-priority processes (can be mitigated by aging)
  • Multilevel Queue: the ready queue is partitioned into multiple queues; each queue has its own scheduling algorithm
    • flexible queueing mechanism for various kinds of processes
    • starvation for processes in lower-priority queues
  • Multilevel Feedback Queue: a process can be moved between the various queues; defined by the following parameters:
    • Number of queues
    • Scheduling algorithms for each queue
    • Method to determine when to upgrade a process
    • Method to determine when to demote a process
    • Method to determine which queue a process will enter when it needs service
  • Scheduling criteria (see the metrics sketch after this list):
    • maximize CPU utilization: keep the CPU as busy as possible (0-100%)
    • maximize throughput: # of processes that complete their execution per time unit
    • minimize turnaround time: t_completion - t_arrival (amount of time to execute a particular process, from submission to completion)
    • minimize waiting time: amount of time a process has spent waiting in the ready queue
      • average waiting time: t_avg-waiting = (total waiting time) / (# of processes)
    • minimize response time: t_first-response - t_submission (amount of time from when a request was submitted until the first response is produced)
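
A minimal FCFS sketch in Python, illustrating the convoy effect described above. The process names and burst times (P1/P2/P3 with 24/3/3 ms) are made up for illustration, and all processes are assumed to arrive at time 0.

```python
# Hypothetical workload: one long CPU-bound process followed by two short ones,
# all assumed to arrive at time 0, to show the convoy effect under FCFS.
bursts = {"P1": 24, "P2": 3, "P3": 3}    # process name -> CPU burst time (ms)

def fcfs_waiting_times(bursts):
    """Waiting time of each process when served strictly in arrival order."""
    waiting, clock = {}, 0
    for name, burst in bursts.items():   # dicts preserve insertion order
        waiting[name] = clock            # time spent so far in the ready queue
        clock += burst                   # the process runs its whole burst
    return waiting

w = fcfs_waiting_times(bursts)
print(w)                                 # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(w.values()) / len(w))          # average waiting time: 17.0
```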
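
A sketch of the exponential-averaging estimate commonly used to predict the next CPU burst for SJF, tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n. The value of alpha, the initial guess tau0, and the sample burst lengths are assumptions chosen only for illustration.

```python
def predict_next_burst(previous_bursts, alpha=0.5, tau0=10):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.

    previous_bursts: measured lengths of past CPU bursts, oldest first.
    alpha: weight of the most recent measurement; tau0: initial guess.
    """
    tau = tau0
    for t in previous_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 0.5 and an initial guess of 10, past bursts of 6, 4 and 6
# time units yield a prediction of 6.0 for the next burst.
print(predict_next_burst([6, 4, 6]))
```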
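
A simple SRTF sketch, simulated one time unit at a time; the arrival times and burst lengths below are hypothetical.

```python
def srtf(arrival, burst):
    """Shortest-remaining-time-first, simulated one time unit at a time."""
    remaining = dict(burst)
    clock, completion = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:
            clock += 1                    # CPU idle until the next arrival
            continue
        p = min(ready, key=lambda name: remaining[name])  # smallest remaining time
        remaining[p] -= 1                 # run the chosen process for one unit
        clock += 1
        if remaining[p] == 0:
            completion[p] = clock
            del remaining[p]
    return completion

# Hypothetical workload with staggered arrivals (times in ms).
arrival = {"P1": 0, "P2": 1, "P3": 2, "P4": 3}
burst   = {"P1": 8, "P2": 4, "P3": 9, "P4": 5}
print(srtf(arrival, burst))               # {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}
```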
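
A Round Robin sketch under the assumption that all processes arrive at time 0; the 4 ms quantum and the workload are illustrative only.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin for processes that all arrive at time 0.

    Returns the completion time of each process.
    """
    remaining = dict(bursts)
    ready = deque(bursts)                 # FIFO ready queue of process names
    clock, completion = 0, {}
    while ready:
        name = ready.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = clock      # finished its CPU burst
        else:
            ready.append(name)            # quantum expired: back to the tail
    return completion

# Hypothetical workload and a 4 ms quantum.
print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))   # {'P2': 7, 'P3': 10, 'P1': 30}
```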
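
A small helper that computes the per-process criteria above from hypothetical timestamps. The waiting-time formula (turnaround minus burst) assumes the process does no I/O.

```python
def metrics(arrival, first_run, completion, burst):
    """Turnaround, waiting and response time for a single process."""
    turnaround = completion - arrival     # total time spent in the system
    waiting = turnaround - burst          # time in the ready queue (no I/O assumed)
    response = first_run - arrival        # time until the first CPU allocation
    return turnaround, waiting, response

# Hypothetical process: arrives at t=0, first scheduled at t=2,
# needs a 5 ms CPU burst, and completes at t=9.
print(metrics(arrival=0, first_run=2, completion=9, burst=5))  # (9, 4, 2)
```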

Algorithm evaluation for CPU scheduling:

  • Deterministic modeling: take a predetermined workload and compare the algorithms on it using the criteria above, e.g., average waiting time (see the comparison sketch after this list)
    • simple and fast
    • results hold only for the given workload; they do not generalize to every case
  • Queueing models: describe the system probabilistically, i.e., approximate the distribution of CPU and I/O bursts (commonly exponential, described by its mean) and of process arrivals (commonly a Poisson distribution)
    • complex and only an approximation
  • Simulation: simulate the computer system state to reflect the activities of the devices, the processes, and the scheduler. Gather statistics indicating algorithm performance
    • high cost to build and run
    • results are close to real-life behavior
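
A deterministic-modeling sketch: the same fixed workload (made-up burst times, all processes arriving at time 0) is evaluated under FCFS order and under SJF order, comparing average waiting time.

```python
def avg_waiting(bursts, order):
    """Average waiting time when bursts run non-preemptively in 'order'."""
    clock, total = 0, 0
    for name in order:
        total += clock                    # this process waited until now
        clock += bursts[name]
    return total / len(order)

# Predetermined workload, as deterministic modeling requires; the names and
# burst times (ms) are made up, and all processes arrive at time 0.
bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

fcfs_order = list(bursts)                         # serve in arrival order
sjf_order = sorted(bursts, key=bursts.get)        # shortest burst first

print("FCFS:", avg_waiting(bursts, fcfs_order))   # 28.0
print("SJF :", avg_waiting(bursts, sjf_order))    # 13.0
```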