Top 10 List of Week 08

  1. Basic Concepts of Scheduling
    Almost all programs alternate between bursts of CPU number crunching and waiting for I/O of some kind (even a simple fetch from memory takes a long time relative to CPU speeds). In a simple system running a single process, the time spent waiting for I/O is wasted, and those CPU cycles are lost forever. A scheduling system allows one process to use the CPU while another is waiting for I/O, thereby making full use of otherwise lost CPU cycles.
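
    As a minimal sketch of this idea, the toy program below uses made-up numbers (a 10 ms CPU burst followed by a 90 ms I/O wait per process) to show how approximate CPU utilization grows when more processes are available to run during each other's I/O waits.

    ```c
    #include <stdio.h>

    /* Hypothetical workload: each process alternates a 10 ms CPU burst
     * with a 90 ms I/O wait.  With one process the CPU sits idle during
     * the wait; with more processes a scheduler can hand the CPU to
     * another ready process, so utilization rises toward 100%. */
    int main(void) {
        const double cpu_burst = 10.0, io_wait = 90.0;

        for (int nprocs = 1; nprocs <= 10; nprocs++) {
            double utilization = nprocs * cpu_burst / (cpu_burst + io_wait);
            if (utilization > 1.0)
                utilization = 1.0;           /* CPU cannot exceed 100% busy */
            printf("%2d process(es): approx CPU utilization %3.0f%%\n",
                   nprocs, utilization * 100.0);
        }
        return 0;
    }
    ```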

  2. Scheduling Criteria
    Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm depends on various factors. Many criteria have been suggested for comparing CPU scheduling algorithms, such as CPU utilization, throughput, turnaround time, waiting time, and response time. On this page you will find an explanation of each criterion.
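
    As a rough illustration of two of these criteria, the sketch below computes waiting time and turnaround time for a hypothetical FCFS workload: three processes with burst times of 24, 3, and 3 ms, all arriving at time 0 (the numbers are example values only).

    ```c
    #include <stdio.h>

    /* Toy FCFS example with made-up burst times (ms), all arriving at t = 0.
     * Turnaround time = completion - arrival; waiting time = turnaround - burst. */
    int main(void) {
        int burst[] = {24, 3, 3};               /* P1, P2, P3 */
        int n = sizeof burst / sizeof burst[0];
        int time = 0, total_wait = 0, total_turnaround = 0;

        for (int i = 0; i < n; i++) {
            int waiting = time;                 /* time spent in the ready queue */
            time += burst[i];                   /* process runs to completion */
            int turnaround = time;              /* arrival was at t = 0 */
            total_wait += waiting;
            total_turnaround += turnaround;
            printf("P%d: waiting=%2d  turnaround=%2d\n", i + 1, waiting, turnaround);
        }
        printf("avg waiting=%.2f  avg turnaround=%.2f\n",
               (double)total_wait / n, (double)total_turnaround / n);
        return 0;
    }
    ```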

  3. Preemptive vs Non-Preemptive Scheduling
    Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state. Non-preemptive scheduling is used when a process terminates or switches from the running state to the waiting state. On this page you will find a comparison chart of preemptive and non-preemptive scheduling.
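
    The toy simulation below contrasts the two approaches on the same made-up workload: non-preemptive FCFS lets each process run to completion, while preemptive round robin forces the running process back to the ready queue when its time quantum expires. The burst times and quantum are example values only.

    ```c
    #include <stdio.h>

    #define N 3

    /* Made-up workload: three CPU-bound processes arriving at t = 0. */
    static const int BURST[N] = {24, 3, 3};

    /* Non-preemptive FCFS: each process runs to completion. */
    static double fcfs_avg_wait(void) {
        int time = 0, wait = 0;
        for (int i = 0; i < N; i++) { wait += time; time += BURST[i]; }
        return (double)wait / N;
    }

    /* Preemptive round robin: a running process is pushed back to the
     * ready queue when its quantum expires. */
    static double rr_avg_wait(int quantum) {
        int remaining[N], done = 0, time = 0, wait = 0;
        for (int i = 0; i < N; i++) remaining[i] = BURST[i];
        while (done < N) {
            for (int i = 0; i < N; i++) {
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                time += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) { wait += time - BURST[i]; done++; }
            }
        }
        return (double)wait / N;
    }

    int main(void) {
        printf("FCFS (non-preemptive) avg wait: %.2f\n", fcfs_avg_wait());
        printf("RR q=4 (preemptive)   avg wait: %.2f\n", rr_avg_wait(4));
        return 0;
    }
    ```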

  4. Process Contention Scope (PCS) and System Contention Scope (SCS)
    With PCS, contention takes place among threads within the same process: the thread library schedules the highest-priority PCS thread onto an available LWP (the priority is specified by the application developer at thread creation). With SCS, contention takes place among all threads in the system: the kernel schedules kernel-level threads onto a CPU.
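
    With POSIX threads, the contention scope can be requested through pthread_attr_setscope (PTHREAD_SCOPE_PROCESS for PCS, PTHREAD_SCOPE_SYSTEM for SCS), as in the sketch below; note that Linux/NPTL supports only system scope. Compile with -pthread.

    ```c
    #include <pthread.h>
    #include <stdio.h>

    static void *work(void *arg) {
        (void)arg;
        printf("thread running\n");
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_attr_t attr;
        int scope, err;

        pthread_attr_init(&attr);
        pthread_attr_getscope(&attr, &scope);
        printf("default scope: %s\n",
               scope == PTHREAD_SCOPE_SYSTEM ? "SCS (system)" : "PCS (process)");

        /* Request system contention scope; on Linux/NPTL asking for
         * PTHREAD_SCOPE_PROCESS instead would fail with ENOTSUP. */
        err = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
        if (err != 0)
            fprintf(stderr, "pthread_attr_setscope failed: %d\n", err);

        pthread_create(&tid, &attr, work, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }
    ```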

  5. Completely Fair Scheduler (CFS)
    CFS stands for “Completely Fair Scheduler,” and is the new “desktop” process scheduler implemented by Ingo Molnar and merged in Linux 2.6.23. It is the replacement for the previous vanilla scheduler’s SCHED_OTHER interactivity code.
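
    The sketch below is a much-simplified model of the CFS idea, not kernel code: each task accumulates virtual runtime scaled by its nice-level weight, and the task with the smallest vruntime runs next. The weights used here are rough approximations of the kernel's roughly 1.25x-per-nice-step weight table.

    ```c
    #include <stdio.h>

    /* Simplified CFS model: pick the task with the smallest vruntime and
     * charge it runtime scaled inversely to its weight, so heavier (more
     * negative nice) tasks accumulate vruntime more slowly and run more. */
    struct task { const char *name; double weight; double vruntime; };

    static struct task *pick_next(struct task *t, int n) {
        struct task *best = &t[0];
        for (int i = 1; i < n; i++)
            if (t[i].vruntime < best->vruntime) best = &t[i];
        return best;
    }

    int main(void) {
        /* nice 0 has weight 1024; each nice step changes it by about 1.25x
         * (approximate values, not the kernel's exact prio_to_weight table). */
        struct task tasks[] = {
            {"nice -5", 1024 * 3.05, 0.0},
            {"nice  0", 1024.0,      0.0},
            {"nice +5", 1024 / 3.05, 0.0},
        };
        int n = 3, runs[3] = {0, 0, 0};

        for (int tick = 0; tick < 3000; tick++) {
            struct task *cur = pick_next(tasks, n);
            cur->vruntime += 1.0 * (1024.0 / cur->weight);  /* 1 ms slice, scaled */
            runs[cur - tasks]++;
        }
        for (int i = 0; i < n; i++)
            printf("%s ran %d of 3000 ticks\n", tasks[i].name, runs[i]);
        return 0;
    }
    ```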

  6. Real-Time Scheduling
    A real-time scheduling system is composed of the scheduler, a clock, and the processing hardware elements. In a real-time system, a process or task must be schedulable: tasks are accepted by the system and completed as specified by their deadlines, which depends on the characteristics of the scheduling algorithm.
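
    On Linux, a task can request one of the POSIX real-time policies with sched_setscheduler, as in the sketch below. This needs root (or CAP_SYS_NICE); otherwise the call fails with EPERM and the process stays under the default policy.

    ```c
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    /* Request the POSIX real-time FIFO policy for the calling process. */
    int main(void) {
        struct sched_param sp;
        memset(&sp, 0, sizeof sp);
        sp.sched_priority = 50;   /* valid static priorities are 1..99 */

        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
            fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
            return 1;
        }
        printf("now running under SCHED_FIFO, priority %d\n", sp.sched_priority);
        /* ... real-time work with a known deadline would go here ... */
        return 0;
    }
    ```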

  7. Difference between Asymmetric and Symmetric Multiprocessing
    Asymmetric multiprocessing is the use of two or more processors handled by one master processor, while symmetric multiprocessing is the use of two or more self-scheduling processors sharing a common memory space. On this page you will find a table of differences between asymmetric and symmetric multiprocessing.

  8. Processor Affinity
    Processors contain cache memory, which speeds up repeated accesses to the same memory locations. If a process were to switch from one processor to another each time it got a time slice, the data in the cache (for that process) would have to be invalidated and reloaded from main memory, obviating the benefit of the cache. Most SMP systems therefore try to keep a process on the same processor, a policy known as processor affinity.
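
    On Linux, hard affinity can be requested with sched_setaffinity, as sketched below: the process is pinned to CPU 0 so its cached data is not repeatedly discarded by migrations to other cores. _GNU_SOURCE is needed for the cpu_set_t macros.

    ```c
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    /* Pin the calling process to CPU 0 (hard affinity on Linux). */
    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                       /* allow only CPU 0 */

        if (sched_setaffinity(0, sizeof set, &set) == -1) {
            fprintf(stderr, "sched_setaffinity: %s\n", strerror(errno));
            return 1;
        }
        printf("process now restricted to CPU 0\n");
        /* cache-sensitive work would run here, always on the same core */
        return 0;
    }
    ```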

  9. Load Balancing
    An important goal in a multiprocessor system is to balance the load between processors, so that one processor won't sit idle while another is overloaded. Systems using a common ready queue are naturally self-balancing and do not need any special handling. Most systems, however, maintain a separate ready queue for each processor. On this page load balancing is briefly explained.
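
    The toy sketch below imitates push migration with made-up per-CPU queue lengths: a balancer repeatedly finds the busiest and the idlest queue and moves one task across until the queues are roughly even. It only illustrates the idea and is not how the kernel's load balancer is implemented.

    ```c
    #include <stdio.h>

    #define NCPU 4

    /* Toy push migration: move tasks from the busiest to the idlest
     * run queue until the lengths differ by at most one. */
    int main(void) {
        int queue_len[NCPU] = {6, 1, 0, 3};     /* made-up queue lengths */

        for (int round = 0; round < 16; round++) {
            int busiest = 0, idlest = 0;
            for (int i = 1; i < NCPU; i++) {
                if (queue_len[i] > queue_len[busiest]) busiest = i;
                if (queue_len[i] < queue_len[idlest])  idlest  = i;
            }
            if (queue_len[busiest] - queue_len[idlest] <= 1)
                break;                          /* already balanced enough */
            queue_len[busiest]--;               /* "push" one task ...   */
            queue_len[idlest]++;                /* ... to the idle CPU   */
        }
        for (int i = 0; i < NCPU; i++)
            printf("cpu%d: %d tasks\n", i, queue_len[i]);
        return 0;
    }
    ```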

  10. Soft Real-Time vs Hard Real-Time
    Soft real-time systems suffer degraded performance if their timing needs cannot be met; an example is streaming video. Hard real-time systems fail completely if their timing needs cannot be met; examples include assembly-line robotics and automobile air-bag deployment.