
OS Arch

This is my cheatsheet from this wikibook.

An operating system is the fundamental bits of software that make your computer function. Your computer is a number crunching machine. Specifically, your processor performs calculations all day long. Your processor has to do various things for different applications. All of the applications on your computer fight for processor time. It is the job of your OS to ensure that all applications efficiently perform their function.

OSes do some of this with threads and processes. Let's explain what a process is first. When any application starts, the OS starts that application inside a process. The OS then schedules CPU time for that process. Each process has its own list of open files, its own reserved amount of memory, and its own scheduled time with the CPU. However, there are times when two different processes need to communicate with each other. This is called IPC, or inter-process communication. A process is like a boat with one occupant. The occupant, by virtue of being the only occupant, is the captain. He dictates where the ship goes, who eats the food, etc.

To communicate effectively between processes, an operating system uses locks, mutexes, semaphores, etc.

IPC can be done via:

1) socket programming (TCP/UDP)
2) pipes
3) shared memory
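As a minimal sketch of the pipe flavor of IPC, here is a parent and child process talking over a pipe using Python's multiprocessing module (the function names `worker` and `demo` are just illustrative):

```python
from multiprocessing import Process, Pipe

def worker(conn):
    conn.send("hello from the child process")  # write one message into the pipe
    conn.close()

def demo():
    parent_end, child_end = Pipe()             # two connected endpoints
    p = Process(target=worker, args=(child_end,))
    p.start()
    message = parent_end.recv()                # blocks until the child sends
    p.join()
    return message

if __name__ == "__main__":
    print(demo())  # -> hello from the child process
```

The same send/receive shape applies to sockets; only the endpoint setup differs.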

On the other hand, a thread is a small clone of a process. A thread inherits much of the information that its parent process has: it can look at the files its parent process has open, and it can look at the variables its parent process has set. A thread takes less time to spawn than a process, so you can think of it as a light-weight process.
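The "thread sees its parent's variables" point can be shown in a few lines; unlike a separate process, the thread reads the main thread's data directly with no IPC (the variable names here are just illustrative):

```python
import threading

shopping_list = ["milk", "eggs"]   # a variable set by the main thread
seen = []

def reader():
    # the thread sees the process's variables directly -- no pipe or socket needed
    seen.extend(shopping_list)

t = threading.Thread(target=reader)
t.start()
t.join()
print(seen)  # -> ['milk', 'eggs']
```

Had `reader` run in a separate process instead, it would have received a copy, and `seen` in the parent would have stayed empty.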

./images/OS-structure.png

4 main kernel types

monolithic

microkernel

A monolithic kernel runs everything in kernel space, except for user applications. A microkernel runs as few things as possible in kernel space. A true microkernel will only have the virtual file system, IPC, and scheduling inside the kernel. Everything else is outside the kernel.

So device drivers and file servers run as user space processes.

Theoretically, a microkernel is more stable than a monolithic kernel. Consider that in a microkernel the network stack runs as a process in user space. If this user-space process crashes, the whole system does not crash. In fact, the network stack could be restarted, and processes that were using it could connect to the new instance.

There is a problem with this model though. State is not preserved when the network server (a user-space process) crashes and restarts. A file being transferred over, say, SSH may not be aware that its connection was lost.

hybrid

exokernel

More Processes

Interrupts

Context switch

A hybrid kernel tries to mix a micro-kernel and a monolithic kernel. An exokernel is a kernel where each application connects to a sub-operating system. It is a recent development. An interrupt is a signal sent to the CPU that something has happened that demands its immediate attention. This can come from the keyboard (when a key is pressed), or from a number of other sources. The CPU is told that some process needs its attention. The CPU then saves information about its current process, does some computation for the other process, and when it has more time, it gets back to processing the saved process. The CPU has to save the process's stack (the variables that the program has defined). Most of this information is saved in the stack itself.

Scheduling

First come, first served

Context switches are expensive, so the OS tries to avoid doing them. A processor can only do one thing at a time; it cannot perform computations on two different processes at once. Therefore it has to schedule when processes can access the CPU. There are several different strategies. First come, first served is the simplest scheduler, and it is a poor choice. It does not allow for preemption: if a process takes a long time, it must run to completion before anything else can run.

Shortest Process Next

This means that a small process must wait for a VERY LONG process to finish. Shortest Process Next schedules the shortest process as the next one to run. This is an OK way to schedule, but it effectively stops long processes from ever accessing the CPU; this is called starvation.

Shortest Remaining Time

It also runs into the halting problem: since we do not know how long a process will take to execute, we can never really know whether a process will ever finish. Run a program and wait 5 minutes; you still cannot tell. Shortest Remaining Time allows short processes to skip ahead of long processes as they appear.
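The waiting-time difference between first-come-first-served and shortest-process-next is easy to see with a toy simulation (the burst times here are made up for illustration):

```python
def avg_wait(burst_times):
    """Average waiting time when jobs run back to back in the given order."""
    wait, elapsed = 0, 0
    for burst in burst_times:
        wait += elapsed          # this job waited for everything before it
        elapsed += burst
    return wait / len(burst_times)

arrivals = [24, 3, 3]            # one long job arrives first, two short ones behind it

fcfs = avg_wait(arrivals)        # first come, first served: run in arrival order
spn  = avg_wait(sorted(arrivals))  # shortest process next: run shortest first

print(fcfs, spn)  # -> 17.0 3.0
```

Under FCFS the short jobs sit behind the long one (average wait 17); SPN drops the average to 3, at the price of possibly starving the long job.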

Round Robin

Preemption

Priority Scheduling

Concurrency

deadlock

    It can still have problems with the /halting problem/ or /starvation/. Round Robin scheduling gives each process a fixed slice of CPU time at periodic, separate points in time. When a process uses up its slice, it is put back at the end of the queue. Preemption allows a process to interrupt another process because it has a higher priority. Each process has a priority; higher priority processes get access to the CPU sooner than lower ones. In the Linux kernel, priority numbers are nice numbers. Concurrency boils down to 4 main issues:
  • Concurrent processes or threads often need to access shared data and shared resources
  • The OS must control access to shared data, otherwise data can become inconsistent, corrupted, deleted, etc.
  • Maintaining consistent data requires orderly scheduling of cooperating processes
  • Various programs may cause race conditions
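The Round Robin "use your slice, go to the back of the queue" loop can be sketched in a few lines (job names and times are made up for illustration):

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: {name: remaining_time}. Returns the order in which jobs finish."""
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                 # the job gets one time slice
        if remaining > 0:
            queue.append((name, remaining))  # not done: back of the queue
        else:
            finished.append(name)
    return finished

print(round_robin({"long": 5, "short": 2}, quantum=2))  # -> ['short', 'long']
```

Note that the short job finishes first even though the long job arrived ahead of it; no job can monopolize the CPU for more than one quantum at a time.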

A deadlock occurs when two processes are waiting for the other to give up a resource. Neither gives up the resource, and both processes wait indefinitely.

This is like two people trying to draw a straight line. One reaches for the pencil, the other reaches for the ruler. Neither can draw a straight line, so they wait.
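A standard way to avoid the pencil-and-ruler standoff is to make everyone acquire the resources in the same fixed order; this sketch shows that discipline with two threads (the resource names are just the analogy above):

```python
import threading

pencil = threading.Lock()
ruler  = threading.Lock()

def draw_line(name, log):
    # Both workers acquire resources in the SAME order: pencil, then ruler.
    # If one took pencil-then-ruler and the other ruler-then-pencil, each
    # could grab one resource and wait forever for the other: deadlock.
    with pencil:
        with ruler:
            log.append(name)

log = []
workers = [threading.Thread(target=draw_line, args=(n, log)) for n in ("a", "b")]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(sorted(log))  # -> ['a', 'b']  (both lines get drawn)
```

With a global ordering on resources, a cycle of processes each holding what the next one wants can never form.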

When two processes are fighting over software-based resources (such as memory), this is a soft (software) lock.

livelock

When two processes are fighting over hardware, this is a hard (hardware) lock. Two processes are in livelock when both keep executing but neither makes any progress.

Inter-process communication

Locks

A good example is when I need to buy a new social security card because someone stole money from my account. Getting a new social security card costs $10, but all of my money is in the bank. So I go to the bank to get money out, but they ask me to prove who I am via a social security card, which I don't have. A lock is a method of preventing multiple threads from using, modifying, or changing a resource at the same time. For example, if two people try to take a drink out of the same cup at the same time, neither will be very successful. Locks are meant to ensure that those kinds of things do not happen.
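The cup analogy translates directly to code: without a lock, two threads doing a read-modify-write on the same counter can lose updates; with the lock, every increment lands. A minimal sketch (names are illustrative):

```python
import threading

cup_lock = threading.Lock()
sips = {"count": 0}

def drink():
    for _ in range(10000):
        with cup_lock:          # only one thread "holds the cup" at a time
            sips["count"] += 1  # the read-modify-write is now safe

threads = [threading.Thread(target=drink) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sips["count"])  # -> 40000: no increments were lost
```

Remove the `with cup_lock:` line and the final count can come up short, because two threads may read the same old value before either writes back.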

Signals

Semaphore

Most locking mechanisms are advisory locks: threads work together in order to properly use resources. Most locks are not mandatory locks. Signals are software communications to a process. They are messages that tell a process about things that it needs to do. Signals can tell a process when a certain region of memory is accessed, or signals can be used to tell a process to terminate, etc. A semaphore is a variable that signifies when a process can access a resource. Suppose 100 people want to ride a roller coaster, but the ride only seats 20 at a time. When the ride is empty, the semaphore value is 20. When 20 people get on the ride, the semaphore value becomes 0. This kind of semaphore is called a counting semaphore.
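Signal delivery can be sketched with Python's signal module; this POSIX-only example registers a handler and then sends a signal to the current process (the handler name and list are illustrative):

```python
import os
import signal

received = []

def handler(signum, frame):
    # the OS delivers the signal asynchronously; this runs in response
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)   # register interest in SIGUSR1
os.kill(os.getpid(), signal.SIGUSR1)     # send the signal to ourselves
print(received == [signal.SIGUSR1])      # the handler has run by now
```

Termination works the same way: `SIGTERM` asks a process to exit, and a process may install a handler for it, but `SIGKILL` cannot be caught at all.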

Interestingly, a semaphore that can only be 1 or 0 is called a binary semaphore, and these are usually used to make mutexes.
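The roller coaster maps directly onto a counting semaphore; here is a small sketch with 3 seats and 10 riders (the numbers are shrunk from the 20/100 in the text just to keep the demo quick):

```python
import threading

SEATS = 3
ride = threading.BoundedSemaphore(SEATS)   # counting semaphore, starts at 3

boarded = []

def rider(n):
    with ride:             # acquire: take a seat (blocks while the value is 0)
        boarded.append(n)  # ...ride...
                           # release on exit: the seat frees up for the next rider

riders = [threading.Thread(target=rider, args=(i,)) for i in range(10)]
for r in riders:
    r.start()
for r in riders:
    r.join()
print(len(boarded))  # -> 10: everyone rides, at most 3 at a time
```

A `BoundedSemaphore(1)` behaves like the binary semaphore described above, which is essentially how a mutex is built.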

Hardware provides atomic operations (such as test-and-set) to ensure this stuff works.

Monitors

Shared Memory

One disadvantage of semaphores is that processes that want to use a resource, but must wait, often just poll over and over again to see if they can use the resource. This is called busy waiting, or a spinlock. A monitor is a resource that can be used safely by all threads without having to worry about corruption. High-level languages may have support for them, but C just has threads. Shared memory means processes all play in the same sandbox, but they each get their own sections to play with.
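The monitor idea, a lock plus a condition variable so waiters sleep instead of spinning, can be sketched with a one-slot producer/consumer (all names here are illustrative):

```python
import threading

# A tiny "monitor": a shared slot guarded by a condition variable, so the
# consumer sleeps until notified instead of busy-waiting in a spinlock.
slot = {"value": None}
cond = threading.Condition()

def producer():
    with cond:
        slot["value"] = "ready"
        cond.notify()                  # wake the waiting consumer

def consumer(out):
    with cond:
        while slot["value"] is None:   # no polling loop on the CPU:
            cond.wait()                # wait() releases the lock and sleeps
        out.append(slot["value"])

out = []
c = threading.Thread(target=consumer, args=(out,))
p = threading.Thread(target=producer)
c.start(); p.start()
c.join(); p.join()
print(out)  # -> ['ready']
```

The `while` loop around `wait()` is the standard pattern: re-check the condition after waking, since a wakeup alone does not guarantee the state you wanted.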

thread

The name is more literal than it first appears: with shared memory, two processes really can map the same region of memory at the same time; that is the point. Doing so without synchronization would be foolish, like throwing gas on a campfire, which is why access to shared regions is guarded with locks or semaphores.
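Two processes mapping the same region can be shown with Python's `multiprocessing.shared_memory` (Python 3.8+); the child attaches to the parent's block by name and writes a byte the parent then reads (function names are illustrative):

```python
from multiprocessing import Process
from multiprocessing import shared_memory

def child(name):
    shm = shared_memory.SharedMemory(name=name)  # attach to the same region
    shm.buf[0] = 42                              # write a byte the parent will see
    shm.close()

def demo():
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        p = Process(target=child, args=(shm.name,))
        p.start()
        p.join()
        return shm.buf[0]                        # read what the child wrote
    finally:
        shm.close()
        shm.unlink()                             # free the region

if __name__ == "__main__":
    print(demo())  # -> 42
```

No copying happens here: both processes' virtual addresses map onto the same physical frames, which is what makes shared memory the fastest IPC mechanism.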

Memory Management

A thread is a sub-process with fewer rights, running inside a process. Memory management is what an operating system does to ensure that the computer does not run out of RAM.

Physical memory

Physical memory is the amount of RAM available to your operating system. When someone says a physical address, they are referring to physical memory.

If an operating system only used physical memory and did not use virtual memory, this would be called segmented memory. But no operating system does this, because the benefits of virtual memory far exceed the downsides.

Virtual memory

Virtual memory is fake memory that maps onto physical memory (RAM). A system might have 1 GB of physical memory but 2 GB of virtual memory (swap comes in handy here). This is useful in a variety of ways; one is that the OS can rearrange virtual memory in ways that it cannot rearrange physical memory.

When someone says logical address, they are referring to a virtual address.

It can be thought of as being like LVM, or logical volume management. Logical volume management lets you create, resize, and manipulate partitions on your hard drive with ease. LVM lets one maintain many partitions and constantly move things around without breaking the underlying actual partitions. Virtual memory does something similar.

Suppose Firefox needs 50MB of memory. The application does not want to manage 20 contiguous MB here, another 20MB elsewhere, and the last 10MB somewhere else, like so:

firefox   grep   firefox   emacs   firefox
20MB      2MB    20MB      7MB     10MB

So instead of Firefox managing separate and discrete regions of memory, the physical memory is mapped onto virtual memory, so the above table becomes:

firefox   grep   emacs
50MB      2MB    7MB

Firefox is unaware that it is making the operating system's job so much harder.

Virtual memory is also great for inter-process communication, because numerous processes can share the same logical address space. Sharing logical address space makes COW (copy-on-write) possible.

Page memory

Page memory essentially means translating virtual memory to physical memory with an MMU (memory management unit).
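What the MMU does on every access can be sketched in a few lines: split the virtual address into a page number and an offset, look the page up in a page table, and glue the offset onto the physical frame (the 4096-byte page size and the table entries are assumptions for the sketch):

```python
PAGE_SIZE = 4096  # bytes per page; a common size, assumed here

# Toy page table: virtual page number -> physical frame number
page_table = {0: 7, 1: 2, 2: 9}

def translate(virtual_address):
    """What an MMU does: split the address, swap the page for a frame."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page]            # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2, offset 4 = 8196
```

The real MMU does this in hardware with help from a TLB cache; a lookup that finds no entry raises a page fault, which the OS handles by loading the page or killing the process.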

The heap