(Notes compiled while reading the well-known operating systems textbook OSTEP (Operating Systems: Three Easy Pieces). I have corrected and annotated the book's code, and attached screenshots of my own runs of that code to replace the book's original figures; the book's detailed explanations of how the code runs have also been adjusted accordingly, so that they match my results.)

2.0 Beginning

If you are taking an undergraduate operating systems course, you should already have some idea of what a computer program does when it runs. If not, this book (and the corresponding course) is going to be difficult, so you should probably stop reading this book, or run to the nearest bookstore and quickly consume the necessary background material before continuing (both Patt & Patel [PP03] and Bryant & O’Hallaron [BOH10] are pretty great books).

So what happens when a program runs?

Well, a running program does one very simple thing: it executes instructions. Many millions (and these days, even billions) of times every second, the processor fetches an instruction from memory, decodes it (i.e., figures out which instruction this is), and executes it (i.e., it does the thing that it is supposed to do, like add two numbers together, access memory, check a condition, jump to a function, and so forth). After it is done with this instruction, the processor moves on to the next instruction, and so on, and so on, until the program finally completes.

(Of course, modern processors do many bizarre and frightening things underneath the hood to make programs run faster, e.g., executing multiple instructions at once, and even issuing and completing them out of order! But that is not our concern here; we are just concerned with the simple model most programs assume: that instructions seemingly execute one at a time, in an orderly and sequential fashion.)
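
To make that simple model concrete, here is a toy sketch of ours (not from the book, and nothing like real hardware) of the fetch-decode-execute cycle for an imaginary machine with just two instructions.

#include <stdio.h>

enum { ADD, HALT };                         // two imaginary opcodes
int program[] = { ADD, ADD, ADD, HALT };    // "memory" holding the instructions

int main(void)
{
    int pc = 0;                             // program counter: which instruction is next
    int acc = 0;                            // an accumulator register
    while (1) {
        int inst = program[pc];             // fetch the instruction from memory
        if (inst == ADD)                    // decode: figure out which instruction this is
            acc = acc + 1;                  // execute: do the thing it is supposed to do
        else if (inst == HALT)
            break;
        pc = pc + 1;                        // move on to the next instruction
    }
    printf("acc = %d\n", acc);              // prints "acc = 3"
    return 0;
}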

Thus, we have just described the basics of the Von Neumann model of computing. Sounds simple, right? But in this class, we will be learning that while a program runs, a lot of other wild things are going on with the primary goal of making the system easy to use.

There is a body of software, in fact, that is responsible for making it easy to run programs (even allowing you to seemingly run many at the same time), allowing programs to share memory, enabling programs to interact with devices, and other fun stuff like that. That body of software is called the operating system (OS), as it is in charge of making sure the system operates correctly and efficiently in an easy-to-use manner.

The primary way the OS does this is through a general technique that we call virtualization. That is, the OS takes a physical resource (such as the processor, or memory, or a disk) and transforms it into a more general, powerful, and easy-to-use virtual form of itself. Thus, we sometimes refer to the operating system as a virtual machine.

THE CRUX OF THE PROBLEM:

HOW TO VIRTUALIZE RESOURCES

One central question we will answer in this book is quite simple: how does the operating system virtualize resources? This is the crux of our problem. Why the OS does this is not the main question, as the answer should be obvious: it makes the system easier to use. Thus, we focus on the how: what mechanisms and policies are implemented by the OS to attain virtualization? How does the OS do so efficiently? What hardware support is needed?

We will use the “crux of the problem”, in shaded boxes such as this one, as a way to call out specific problems we are trying to solve in building an operating system. Thus, within a note on a particular topic, you may find one or more cruces (yes, this is the proper plural) which highlight the problem. The details within the chapter, of course, present the solution, or at least the basic parameters of a solution.

Of course, in order to allow users to tell the OS what to do and thus make use of the features of the virtual machine (such as running a program, or allocating memory, or accessing a file), the OS also provides some interfaces (APIs) that you can call. A typical OS, in fact, exports a few hundred system calls that are available to applications. Because the OS provides these calls to run programs, access memory and devices, and other related actions, we also sometimes say that the OS provides a standard library to applications.
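
To make the “standard library” point concrete, here is a tiny sketch of ours (not from the book): the same kind of output produced once through the C library’s printf() and once through the raw write() system call that the library itself eventually uses.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("hello via the standard library\n");                        // a C library call
    ssize_t n = write(STDOUT_FILENO, "hello via a system call\n", 24); // a direct system call
    (void)n;  // ignore the byte count here; real code would check it
    return 0;
}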

Finally, because virtualization allows many programs to run (thus sharing the CPU), and many programs to concurrently access their own instructions and data (thus sharing memory), and many programs to access devices (thus sharing disks and so forth), the OS is sometimes known as a resource manager. Each of the CPU, memory, and disk is a resource of the system; it is thus the operating system’s role to manage those resources, doing so efficiently or fairly or indeed with many other possible goals in mind. To understand the role of the OS a little bit better, let’s take a look at some examples.

2.1 Virtualizing The CPU

Code 2.1 presents our first program. It doesn’t do much. In fact, all it does is call sleep(), a function that pauses execution for one second and then returns. Then, it prints out the string that the user passed in on the command line, and repeats, forever.

Code 2.1: Loops And Prints (cpu.c)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>  // Include for sleep
#include <assert.h>
#include "common.h"
 
// main function takes command-line arguments (argc & argv)
int main(int argc, char *argv[])
{
	// It checks whether exactly one command-line argument is provided.
	// If not, it prints an error message to stderr and exits with a status code of 1.
	if (argc != 2)
	{
		fprintf(stderr, "usage: cpu <string>\n");
		exit(1);
	}
	// If exactly one argument is provided, it will be stored in the str variable.
	char *str = argv[1];
    // The program enters an infinite loop using while (1)
	while (1)
	{
		// Inside the loop, it sleeps for 1 second using the sleep(1) function from the <unistd.h> library.
		// This means that it pauses execution for 1 second in each iteration of the loop.
		sleep(1);
		// After the sleep, it prints the string str to the standard output (stdout)
		printf("%s\n", str);
	}
	return 0;
}

Note: In the same directory as “cpu.c”, we need a header file named “common.h” with the following content:

#ifndef __COMMON_H
#define __COMMON_H
 
// Commonly used basic headers
#include <time.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <strings.h>
#include <stdbool.h>
 
// Global error codes
#include <errno.h>
 
// File I/O operations
#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
 
// Math library
#include <math.h>
 
// Directory operations
#include <dirent.h>
 
#endif

Let’s say we save this file as cpu.c and decide to compile and run it on a system with a single processor (or CPU as we will sometimes call it). Here is what we will see:

Figure 2.1

Not too interesting of a run: the system begins running the program, which sleeps until a second has elapsed. Once a second has passed, the code prints the input string passed in by the user (in this example, the letter “A”), and continues. Note the program will run forever; by pressing “Control-c” (which on UNIX-based systems will terminate the program running in the foreground) we can halt the program.

Now, let’s do the same thing, but this time, let’s run many different instances of this same program. Figure 2.2 shows the results of this slightly more complicated example.

Figure 2.2 Running Many Programs At Once

Well, now things are getting a little more interesting. Even though we have only one processor, somehow all four of these programs seem to be running at the same time! How does this magic happen?

(Note how we ran four processes at the same time, by using the “&” symbol. Doing so runs a job in the background in the zsh shell, which means that the user is able to immediately issue their next command, which in this case is another program to run. If you’re using a different shell (e.g., tcsh), it works slightly differently; read documentation online for details.)

It turns out that the operating system, with some help from the hardware, is in charge of this illusion, i.e., the illusion that the system has a very large number of virtual CPUs. Turning a single CPU (or small set of them) into a seemingly infinite number of CPUs and thus allowing many programs to seemingly run at once is what we call virtualizing the CPU, the focus of the first major part of this book.

Of course, to run programs, and stop them, and otherwise tell the OS which programs to run, there need to be some interfaces (APIs) that you can use to communicate your desires to the OS. We’ll talk about these APIs throughout this book; indeed, they are the major way in which most users interact with operating systems.

You might also notice that the ability to run multiple programs at once raises all sorts of new questions. For example, if two programs want to run at a particular time, which should run? This question is answered by a policy of the OS; policies are used in many different places within an OS to answer these types of questions, and thus we will study them as we learn about the basic mechanisms that operating systems implement (such as the ability to run multiple programs at once). Hence the role of the OS as a resource manager.
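
To give a flavor of what a policy looks like, here is a toy sketch of ours (not from the book and not a real scheduler): the policy function only decides which of two ready jobs should run next; the mechanism that actually performs the switch is separate machinery we will study later.

#include <stdio.h>

struct job { const char *name; int runtime_so_far; };

// Policy: pick the job that has received the least CPU time so far (a fairness-flavored choice).
static struct job *pick_next(struct job *a, struct job *b)
{
    return (a->runtime_so_far <= b->runtime_so_far) ? a : b;
}

int main(void)
{
    struct job j1 = { "cpu A", 5 };
    struct job j2 = { "cpu B", 2 };
    printf("next to run: %s\n", pick_next(&j1, &j2)->name);  // prints "cpu B"
    return 0;
}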

2.2 Virtualizing Memory

Now let’s consider memory. The model of physical memory presented by modern machines is very simple. Memory is just an array of bytes; to read memory, one must specify an address to be able to access the data stored there; to write (or update) memory, one must also specify the data to be written to the given address.

Memory is accessed all the time when a program is running. A program keeps all of its data structures in memory, and accesses them through various instructions, like loads and stores or other explicit instructions that access memory in doing their work. Don’t forget that each instruction of the program is in memory too; thus memory is accessed on each instruction fetch.
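
As a tiny sketch of ours (not from the book), this model can be acted out in a few lines of C: pick an address, store data to it, and load it back.

#include <stdio.h>

int main(void)
{
    unsigned char memory[16] = {0};       // pretend this small array is physical memory
    unsigned char *address = &memory[4];  // an address within it
    *address = 42;                        // a store: write the data 42 to that address
    unsigned char value = *address;       // a load: read back the data stored there
    printf("value at %p is %d\n", (void *)address, value);
    return 0;
}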

Let’s take a look at a program (in Code 2.2) that allocates some memory by calling malloc():

Code 2.2: A Program That Accesses Memory (mem.c)

#include <unistd.h>  // for POSIX system calls and constants.
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>  // for assertion checks.
#include "common.h"
 
int main(int argc, char *argv[])
{
    // It dynamically allocates memory for an integer
    // and assigns the pointer p to the allocated memory.
    int *p = malloc(sizeof(int));
    // An assertion is used to check whether the allocation was successful,
    // and if not, the program will terminate with an error message.
    assert(p != NULL);
    // It prints the process ID (PID) and the memory address pointed to by p.
    printf("(%d) address pointed to by p: %p\n", getpid(), p);
    // It initializes the integer pointed to by p with the value 0.
    *p = 0;
    // It enters an infinite loop,
    // where it increments the value of the integer pointed to by p every second
    // and prints the updated value along with the PID.
    while (1)
    {
        sleep(1);
        *p = *p + 1;
        printf("(%d) p: %d\n", getpid(), *p);
    }
    return 0;
}

The output of this program can be found here:

Figure 2.3

The program does a couple of things. First, it allocates some memory. Then, it prints out the address of the memory, and then puts the number zero into the first slot of the newly allocated memory. Finally, it loops, delaying for a second and incrementing the value stored at the address held in p. With every print statement, it also prints out what is called the process identifier (the PID) of the running program. This PID is unique per running process.

Again, this first result is not too interesting. The newly allocated memory is at address 0x55ceb4e992a0 (See Figure 2.3). As the program runs, it slowly updates the value and prints out the result.

Now, we again run multiple instances of this same program to see what happens (See Figure 2.4). We see from the example that each running program has allocated memory at the same address (0x558c9b7682a0), and yet each seems to be updating the value at 0x558c9b7682a0 independently! It is as if each running program has its own private memory, instead of sharing the same physical memory with other running programs.

Figure 2.4 Running The Memory Program Multiple Times

(For this example to work, you need to make sure address-space randomization is disabled; randomization, as it turns out, can be a good defense against certain kinds of security flaws. Read more about it on your own, especially if you want to learn how to break into computer systems via stack-smashing attacks. Not that we would recommend such a thing...)

Indeed, that is exactly what is happening here as the OS is virtualizing memory. Each process accesses its own private virtual address space (sometimes just called its address space), which the OS somehow maps onto the physical memory of the machine. A memory reference within one running program does not affect the address space of other processes (or the OS itself); as far as the running program is concerned, it has physical memory all to itself. The reality, however, is that physical memory is a shared resource, managed by the operating system. Exactly how all of this is accomplished is also the subject of the first part of this book, on the topic of virtualization.

2.3 Concurrency

Another main theme of this book is concurrency. We use this conceptual term to refer to a host of problems that arise, and must be addressed, when working on many things at once (i.e., concurrently) in the same program. The problems of concurrency arose first within the operating system itself; as you can see in the examples above on virtualization, the OS is juggling many things at once, first running one process, then another, and so forth. As it turns out, doing so leads to some deep and interesting problems.

Unfortunately, the problems of concurrency are no longer limited just to the OS itself. Indeed, modern multi-threaded programs exhibit the same problems. Let us demonstrate with an example of a multi-threaded program.

Code 2.3: A Multi-threaded Program (threads.c)

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
// The header pthread.h declares pthread_t, pthread_create(), and pthread_join().
// pthread_t is the data type used in C to represent a POSIX thread identifier;
// POSIX threads are a standard for creating and managing threads in a multi-threaded program.
#include "common.h"
 
volatile int counter = 0;
// The 'volatile' keyword is used to indicate to the compiler
// that the variable may be changed by external factors not known to the compiler,
// so it should not optimize away reads and writes to it.
int loops;
// Declare a global variable 'loops' without initialization, 
// which will be set based on the command-line argument.
 
//Define a function 'worker' that will be executed by the threads.
void *worker(void *arg)
{
    int i;
    for (i = 0; i < loops; i++)
        counter++;
    return NULL;
}
 
 
int main(int argc, char *argv[])
{
    // Check if there is exactly one command-line argument provided
    if (argc != 2)
    {
        fprintf(stderr, "usage: threads <value>\n");
        exit(1);
    }
 
    // Parse the command-line argument as an integer and store it in the 'loops' variable.
    loops = atoi(argv[1]);
    // Declare two thread handles ('p1' and 'p2'); each thread will execute the 'worker' function.
    pthread_t p1, p2;
    // Print the initial value of 'counter'
    printf("Initial value : %d\n", counter);
    pthread_create(&p1, NULL, worker, NULL);
    pthread_create(&p2, NULL, worker, NULL);
    // Wait for both threads to complete using 'pthread_join'.
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    // Print the final value of 'counter'.
    printf("Final value : %d\n", counter);
    return 0;
}

Although you might not understand this example fully at the moment (and we’ll learn a lot more about it in later chapters, in the section of the book on concurrency), the basic idea is simple. The main program creates two threads using pthread_create(). You can think of a thread as a function running within the same memory space as other functions, with more than one of them active at a time. In this example, each thread starts running in a routine called worker(), in which it simply increments a counter in a loop for loops number of times.

Below is a transcript of what happens when we run this program with the input value for the variable loops set to 1000. The value of loops determines how many times each of the two workers will increment the shared counter in a loop. When the program is run with the value of loops set to 1000, what do you expect the final value of counter to be?

Figure 2.5

As you probably guessed, when the two threads are finished, the final value of the counter is 2000, as each thread incremented the counter 1000 times. Indeed, when the input value of loops is set to N, we would expect the final output of the program to be 2N. But life is not so simple, as it turns out. Let’s run the same program, but with higher values for loops, and see what happens:

Figure 2.6

In this run, when we gave an input value of 100,000, instead of getting a final value of 200,000, we instead first get 111728. Then, when we run the program a second time, we not only again get the wrong value, but also a different value than the last time. In fact, if you run the program over and over with high values of loops, you may find that sometimes you even get the right answer! So why is this happening?

As it turns out, the reason for these odd and unusual outcomes relates to how instructions are executed, which is one at a time. Unfortunately, a key part of the program above, where the shared counter is incremented, takes three instructions: one to load the value of the counter from memory into a register, one to increment it, and one to store it back into memory. Because these three instructions do not execute atomically (all at once), strange things can happen. It is this problem of concurrency that we will address in great detail in the second part of this book.
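
Here is a rough sketch of ours (not actual compiler output) of those three steps hiding inside counter++; the local variable reg stands in for a CPU register.

#include <stdio.h>

int counter = 0;

int main(void)
{
    int reg;
    reg = counter;      // 1. load the value of the counter from memory into a register
    reg = reg + 1;      // 2. increment it
    counter = reg;      // 3. store it back into memory
    printf("counter = %d\n", counter);
    // If a thread is interrupted between these steps, another thread's concurrent
    // update can be lost, which is why the totals above come out wrong.
    return 0;
}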

THE CRUX OF THE PROBLEM:

HOW TO BUILD CORRECT CONCURRENT PROGRAMS

When there are many concurrently executing threads within the same memory space, how can we build a correctly working program? What primitives are needed from the OS? What mechanisms should be provided by the hardware? How can we use them to solve the problems of concurrency?
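
As a preview of one answer (a sketch of ours, not the book's solution at this point), the OS and its thread library provide locks; wrapping the shared update in a POSIX mutex makes the counter program from Code 2.3 produce the expected total. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

static volatile int counter = 0;
static int loops = 100000;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < loops; i++) {
        pthread_mutex_lock(&lock);    // enter the critical section
        counter++;                    // only one thread updates the counter at a time
        pthread_mutex_unlock(&lock);  // leave the critical section
    }
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    pthread_create(&p1, NULL, worker, NULL);
    pthread_create(&p2, NULL, worker, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("Final value : %d\n", counter);  // now reliably 2 * loops
    return 0;
}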

2.4 Persistence

The third major theme of the course is persistence. In system memory, data can be easily lost, as devices such as DRAM store values in a volatile manner; when power goes away or the system crashes, any data in memory is lost. Thus, we need hardware and software to be able to store data persistently; such storage is thus critical to any system as users care a great deal about their data.

The hardware comes in the form of some kind of input/output or I/O device; in modern systems, a hard drive is a common repository for long-lived information, although solid-state drives (SSDs) are making headway in this arena as well.

The software in the operating system that usually manages the disk is called the file system; it is thus responsible for storing any files the user creates in a reliable and efficient manner on the disks of the system.

Unlike the abstractions provided by the OS for the CPU and memory, the OS does not create a private, virtualized disk for each application. Rather, it is assumed that often times, users will want to share information that is in files. For example, when writing a C program, you might first use an editor to create and edit the C file. Once done, you might use the compiler to turn the source code into an executable (e.g., gcc -o main main.c). When you’re finished, you might run the new executable (e.g., ./main). Thus, you can see how files are shared across different processes. First, the editor creates a file that serves as input to the compiler; the compiler uses that input file to create a new executable file (in many steps — take a compiler course for details); finally, the new executable is then run. And thus a new program is born!

To understand this better, let’s look at some code. Code 2.4 presents a program that creates a file (named “file”) containing the string “hello world”.

Code 2.4: A Program That Does I/O (io.c)

#include <stdio.h>     // for standard input/output functions.
#include <unistd.h>    // for POSIX system calls and constants.
#include <assert.h>    // for assertion checks.
#include <fcntl.h>     // for file control options.
#include <sys/types.h> // for data types used in system calls.
#include <sys/stat.h>  // for the mode constants (e.g., S_IRWXU) passed to open().
 
// main function takes command-line arguments (argc and argv)
int main(int argc, char *argv[])
{
    // Opens a file named "file" in the current directory for writing only (O_WRONLY),
    // creating it if it doesn't exist (O_CREAT) and truncating it if it does (O_TRUNC),
    // with owner permissions set to read, write, and execute (S_IRWXU).
    int fd = open("file", O_WRONLY | O_CREAT | O_TRUNC, S_IRWXU);
    // Checks if the file descriptor (fd) is valid (greater than -1)
    // using the assert macro. If not, the program will terminate.
    assert(fd > -1);
    // Writes the string "hello world\n" to the opened file using the write system call.
    // It writes 13 bytes: the 12 characters of the string plus its terminating null byte.
    int rc = write(fd, "hello world\n", 13);
    // Checks if the return value of the write function (rc) is equal to 13 (the number of bytes written).
    // If not, it means the write operation didn't complete as expected, the program will terminate.
    assert(rc == 13);
    // Closes the file using the close system call to release the associated resources.
    close(fd);
    return 0;
}

The result of running "io.c" is that a file named "file" is created (or truncated, if it already exists) in the current directory, and "hello world" followed by a newline is written into it, as shown in Figure 2.7:

Figure 2.7

To accomplish this task, the program makes three calls into the operating system. The first, a call to open(), opens the file and creates it; the second, write(), writes some data to the file; the third, close(), simply closes the file thus indicating the program won’t be writing any more data to it. These system calls are routed to the part of the operating system called the file system, which then handles the requests and returns some kind of error code to the user.
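
For comparison, here is a sketch of ours (not from the book) of the same three calls written with explicit error checking; when any of them fails, the OS reports an error code through errno, which strerror() turns into a readable message.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("file", O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
    if (fd < 0) {                                 // open() reports failure via -1 and errno
        fprintf(stderr, "open failed: %s\n", strerror(errno));
        return 1;
    }
    ssize_t rc = write(fd, "hello world\n", 12);  // write the 12 characters, without the null byte this time
    if (rc != 12)                                 // short or failed writes show up here
        fprintf(stderr, "write failed or was short: %s\n", strerror(errno));
    if (close(fd) != 0)                           // even close() can fail and set errno
        fprintf(stderr, "close failed: %s\n", strerror(errno));
    return 0;
}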

You might be wondering what the OS does in order to actually write to disk. We would show you but you’d have to promise to close your eyes first; it is that unpleasant. The file system has to do a fair bit of work: first figuring out where on disk this new data will reside, and then keeping track of it in various structures the file system maintains. Doing so requires issuing I/O requests to the underlying storage device, to either read existing structures or update (write) them. As anyone who has written a device driver (A device driver is some code in the operating system that knows how to deal with a specific device. We will talk more about devices and device drivers later.) knows, getting a device to do something on your behalf is an intricate and detailed process. It requires a deep knowledge of the low-level device interface and its exact semantics. Fortunately, the OS provides a standard and simple way to access devices through its system calls. Thus, the OS is sometimes seen as a standard library.

Of course, there are many more details in how devices are accessed, and how file systems manage data persistently atop said devices. For performance reasons, most file systems first delay such writes for a while, hoping to batch them into larger groups. To handle the problems of system crashes during writes, most file systems incorporate some kind of intricate write protocol, such as journaling or copy-on-write, carefully ordering writes to disk to ensure that if a failure occurs during the write sequence, the system can recover to a reasonable state afterwards. To make different common operations efficient, file systems employ many different data structures and access methods, from simple lists to complex B-trees. If all of this doesn’t make sense yet, good! We’ll be talking about all of this quite a bit more in the third part of this book on persistence, where we’ll discuss devices and I/O in general, and then disks, RAIDs, and file systems in great detail.

2.5 Design Goals

So now you have some idea of what an OS actually does: it takes physical resources, such as a CPU, memory, or disk, and virtualizes them. It handles tough and tricky issues related to concurrency. And it stores files persistently, thus making them safe over the long-term. Given that we want to build such a system, we want to have some goals in mind to help focus our design and implementation and make trade-offs as necessary; finding the right set of trade-offs is a key to building systems.

One of the most basic goals is to build up some abstractions in order to make the system convenient and easy to use. Abstractions are fundamental to everything we do in computer science. Abstraction makes it possible to write a large program by dividing it into small and understandable pieces, to write such a program in a high-level language like C without thinking about assembly, to write code in assembly without thinking about logic gates, and to build a processor out of gates without thinking too much about transistors. Abstraction is so fundamental that sometimes we forget its importance, but we won’t here; thus, in each section, we’ll discuss some of the major abstractions that have developed over time, giving you a way to think about pieces of the OS.

One goal in designing and implementing an operating system is to provide high performance; another way to say this is our goal is to minimize the overheads of the OS. Virtualization and making the system easy to use are well worth it, but not at any cost; thus, we must strive to provide virtualization and other OS features without excessive overhead. These overheads arise in a number of forms: extra time (more instructions) and extra space (in memory or on disk). We’ll seek solutions that minimize one or the other or both, if possible. Perfection, however, is not always attainable, something we will learn to notice and (where appropriate) tolerate.

Another goal will be to provide protection between applications, as well as between the OS and applications. Because we wish to allow many programs to run at the same time, we want to make sure that the malicious or accidental bad behavior of one does not harm others; we certainly don’t want an application to be able to harm the OS itself (as that would affect all programs running on the system). Protection is at the heart of one of the main principles underlying an operating system, which is that of isolation; isolating processes from one another is the key to protection and thus underlies much of what an OS must do.

The operating system must also run non-stop; when it fails, all applications running on the system fail as well. Because of this dependence, operating systems often strive to provide a high degree of reliability. As operating systems grow ever more complex (sometimes containing millions of lines of code), building a reliable operating system is quite a challenge, and indeed, much of the ongoing research in the field focuses on this exact problem.

Other goals make sense: energy-efficiency is important in our increasingly green world; security (an extension of protection, really) against malicious applications is critical, especially in these highly-networked times; mobility is increasingly important as OSes are run on smaller and smaller devices. Depending on how the system is used, the OS will have different goals and thus likely be implemented in at least slightly different ways. However, as we will see, many of the principles we will present on how to build an OS are useful on a range of different devices.

2.6 Summary

Thus, we have an introduction to the OS. Today’s operating systems make systems relatively easy to use, and virtually all operating systems you use today have been influenced by the developments we will discuss throughout the book.

Unfortunately, due to time constraints, there are a number of parts of the OS we won’t cover in the book. For example, there is a lot of networking code in the operating system; we leave it to you to take the networking class to learn more about that. Similarly, graphics devices are particularly important; take the graphics course to expand your knowledge in that direction. Finally, some operating system books talk a great deal about security; we will do so in the sense that the OS must provide protection between running programs and give users the ability to protect their files, but we won’t delve into deeper security issues that one might find in a security course.

However, there are many important topics that we will cover, including the basics of virtualization of the CPU and memory, concurrency, and persistence via devices and file systems. Don’t worry! While there is a lot of ground to cover, most of it is quite cool, and at the end of the road, you’ll have a new appreciation for how computer systems really work. Now get to work!
