Fools ignore complexity. Pragmatists suffer it. Some can avoid it.
Geniuses remove it. -- Perlis's Programming Proverb #58 (1982)
», __file__)
In general, virtualization refers to the abstraction of computer
resources. This chapter is primarily concerned with <em> server
virtualization</em>, a concept which makes it possible to run
more than one operating system simultaneously and independently
of each other on a single physical computer. We first describe
the different virtualization frameworks, then quickly specialize
in Linux OS-level virtualization and its virtual machines called <em>
containers</em>. Container platforms for Linux are built on top of
<em>namespaces</em> and <em>control groups</em>, the low-level kernel
features which implement abstraction and isolation of processes. We
look at both concepts in some detail. One of the earliest container
platforms for Linux is <em> LXC </em> (Linux containers), which is
discussed in a dedicated section.
»)
SECTION(«Virtualization Frameworks»)
The origins of server virtualization date back to the 1960s. The
first virtual machine was created as a collaboration between IBM
(International Business Machines) and MIT (Massachusetts Institute
of Technology). Since then, many different approaches have been
designed, resulting in several <em> Virtualization Frameworks</em>. All
frameworks promise to improve resource utilization and availability, to
reduce costs, and to provide greater flexibility. While some of these
benefits might be real, they do not come for free. The costs include
a new single point of failure (the host), decreased performance,
added complexity, and increased maintenance effort due to extensive
debugging, documentation, and maintenance of the VMs. This chapter
briefly describes the three main virtualization frameworks. We list
the advantages and disadvantages of each and give some examples.
SUBSECTION(«Software Virtualization (Emulation)»)
This virtualization framework does not play a significant role in
server virtualization; it is only included for completeness. Emulation
means to imitate a complete hardware architecture in software,
including peripheral devices. All CPU instructions and hardware
interrupts are interpreted by the emulator rather than being run by
native hardware. Since this approach has a large performance penalty,
it is only suitable when speed is not critical. For this reason,
emulation is typically employed for ancient hardware like arcade
game systems and home computers such as the Commodore 64. Despite
the performance penalty, emulation is valuable because it allows
applications and operating systems to run on the current platform as
they did in their original environment.

Examples: Bochs, MAME, VICE.
SUBSECTION(«Paravirtualization and Hardware-Assisted Virtualization»)
These virtualization frameworks are characterized by the presence
of a <em> hypervisor</em>, also known as <em> Virtual Machine
Monitor</em>, which translates system calls from the VMs to native
hardware requests. In contrast to Software Virtualization, the
host OS does not emulate hardware resources but offers special
APIs to the VMs. If the presented interface is different from that
of the underlying hardware, the term <em> paravirtualization </em>
is used. The guest OS then has to include modified
(paravirtualized) drivers. In 2005 AMD and Intel added hardware
virtualization instructions to the CPUs and IOMMUs (Input/Output memory
management units) to the chipsets. This allowed VMs to directly execute
privileged instructions and use peripheral devices. This so-called <em>
Hardware-Assisted Virtualization </em> allows unmodified operating
systems to run on the VMs.
The main advantage of Hardware-Assisted Virtualization is its
flexibility, as the host OS does not need to match the OS running on
the VMs. The disadvantages are hardware compatibility constraints and
performance loss. Although these days all hardware has virtualization
support, there are still significant differences in performance between
the host and the VM. Moreover, peripheral devices like storage hardware
have to be compatible with the chipset to make use of the IOMMU.

Examples: KVM (with QEMU as hypervisor), Xen, UML.
SUBSECTION(«OS-level Virtualization (Containers)»)
OS-level Virtualization is a technique for lightweight virtualization.
The abstractions are built directly into the kernel and no
hypervisor is needed. In this context the term "virtual machine"
is inaccurate, which is why OS-level VMs go by different
names. On Linux, they are called <em> containers</em>; other
operating systems call them <em> jails </em> or <em> zones</em>. We
shall exclusively use "container" from now on. All containers share
a single kernel, so the OS running in the container has to match the
host OS. However, each container has its own root file system, so
containers can differ in user space. For example, different containers
can run different Linux distributions. Since programs running in a
container use the normal system call interface to communicate with
the kernel, OS-level Virtualization does not require hardware support
for efficient performance. In fact, OS-level Virtualization imposes
almost no overhead.
OS-level Virtualization is superior to the alternatives because of its
simplicity and its performance. The only disadvantage is the lack of
flexibility. It is simply not an option if some of the VMs must run
different operating systems than the host.

Examples: LXC, Singularity, Docker.
<ul>
<li> On any Linux system, check if the processor supports virtualization
by running <code> cat /proc/cpuinfo</code>. Hint: svm and vmx. </li>
<li> Hypervisors come in two flavors called <em> native </em> and <em>
hosted</em>. Explain the difference and the pros and cons of either
flavor. Is QEMU a native or a hosted hypervisor? </li>
<li> Find the AMD Programmer's Manual online. The chapter on
"Secure Virtual Machine" describes the CPU instructions related to
Hardware-Assisted Virtualization. Glance over this chapter to get an
idea of the complexity of Hardware-Assisted Virtualization. </li>
</ul>
<ul>
<li> Recall the concept of <em> direct memory access </em> (DMA)
and explain why DMA is a problem for virtualization. Which of the
three virtualization frameworks of this chapter are affected by this
problem? </li>
<li> Compare AMD's <em> Rapid Virtualization Indexing </em> to Intel's
<em> Extended Page Tables</em>. </li>
<li> Suppose a hacker gained root access to a VM and wishes to proceed
from there to also gain full control over the host OS. Discuss the threat
model in the context of the three virtualization frameworks covered
in this section. </li>
</ul>
»)
SECTION(«Namespaces»)
Namespaces partition the set of processes into disjoint subsets
with local scope. Where traditional Unix systems provided only
a single system-wide instance of each resource, shared by all
processes, the namespace abstractions make it possible to give
processes the illusion of living in their own isolated instance. Linux
implements the following six different types of namespaces: mount
(Linux-2.4.x, 2002), IPC (Linux-2.6.19, 2006), UTS (Linux-2.6.19,
2006), PID (Linux-2.6.24, 2008), network (Linux-2.6.29, 2009),
UID (Linux-3.8, 2013). For OS-level virtualization all six
namespace types are typically employed to make the containers look
like independent systems.
Before we look at each namespace type, we briefly describe how
namespaces are created and how information related to namespaces can
be obtained for a process.
SUBSECTION(«Namespace API»)
<p> Initially, there is only a single namespace of each type called the
<em> root namespace</em>. All processes belong to this namespace. The
<code> clone(2) </code> system call is a generalization of the classic
<code> fork(2) </code> which allows privileged users to create new
namespaces by passing one or more of the six <code> CLONE_NEW* </code>
flags. The child process is made a member of the new namespace. Calling
plain <code> fork(2) </code> or <code> clone(2) </code> with no
<code> CLONE_NEW* </code> flag lets the newly created process inherit the
namespaces from its parent. There are two additional system calls,
<code> setns(2) </code> and <code> unshare(2)</code>, which both
change the namespace(s) of the calling process without creating a
new process. For the latter, there is a user command, also called
<code> unshare(1)</code>, which makes the namespace API available to
scripts. </p>
<p> The <code> /proc/$PID </code> directory of each process contains a
<code> ns </code> subdirectory which contains one file per namespace
type. The inode number of this file is the <em> namespace ID</em>.
Hence, by running <code> stat(1) </code> one can tell whether
two different processes belong to the same namespace. Normally a
namespace ceases to exist when the last process in the namespace
terminates. However, by opening <code> /proc/$PID/ns/$TYPE </code>
one can prevent the namespace from disappearing. </p>
SUBSECTION(«UTS Namespaces»)
UTS is short for <em> UNIX Time-sharing System</em>. The old-fashioned
word "Time-sharing" has been replaced by <em> multitasking</em>,
but the old name lives on in the <code> uname(2) </code> system
call which fills out the fields of a <code> struct utsname</code>.
On return, the <code> nodename </code> field of this structure
contains the hostname which was set by a previous call to <code>
sethostname(2)</code>. Similarly, the <code> domainname </code> field
contains the string that was set with <code> setdomainname(2)</code>.
UTS namespaces provide isolation of these two system identifiers. That
is, processes in different UTS namespaces might see different host- and
domain names. Changing the host- or domainname affects only processes
which belong to the same UTS namespace as the process which called
<code> sethostname(2) </code> or <code> setdomainname(2)</code>.
SUBSECTION(«Mount Namespaces»)
The <em> mount namespaces </em> are the oldest Linux namespace
type. This is kind of natural since they are supposed to overcome
well-known limitations of the venerable <code> chroot(2) </code>
system call which was introduced in 1979. Mount namespaces isolate
the mount points seen by processes so that processes in different
mount namespaces can have different views of the file system hierarchy.

As with other namespace types, new mount namespaces are created by
calling <code> clone(2) </code> or <code> unshare(2)</code>. The
new mount namespace starts out with a copy of the caller's mount
point list. However, with more than one mount namespace the <code>
mount(2) </code> and <code> umount(2) </code> system calls no longer
operate on a global set of mount points. Whether or not a mount
or unmount operation has an effect on processes in different mount
namespaces than the caller's is determined by the configurable <em>
mount propagation </em> rules. By default, modifications to the list
of mount points affect only the processes which are in the same
mount namespace as the process which initiated the modification. This
setting is controlled by the <em> propagation type </em> of the
mount point. Besides the obvious private and shared types, there is
also the <code> MS_SLAVE </code> propagation type which lets mount
and unmount events propagate from a "master" to its "slaves"
but not the other way round.
SUBSECTION(«Network Namespaces»)
Network namespaces not only partition the set of processes, as all
six namespace types do, but also the set of network interfaces. That
is, each physical or virtual network interface belongs to one (and
only one) network namespace. Initially, all interfaces are in the
root network namespace. This can be changed with the command <code>
ip link set iface netns PID</code>. Processes only see interfaces
whose network namespace matches the one they belong to. This lets
processes in different network namespaces have different ideas about
which network devices exist. Each network namespace has its own IP
stack, IP routing table and TCP and UDP ports. This makes it possible
to start, for example, many <code> sshd(8) </code> processes which
all listen on "their own" TCP port 22.
An OS-level virtualization framework typically leaves physical
interfaces in the root network namespace but creates a dedicated
network namespace and a virtual interface pair for each container. One
end of the pair is left in the root namespace while the other end is
configured to belong to the dedicated namespace, which contains all
processes of the container.
SUBSECTION(«PID Namespaces»)
This namespace type allows a process to have more than one process
ID. Unlike network interfaces which disappear when they enter a
different network namespace, a process is still visible in the root
namespace after it has entered a different PID namespace. Besides its
existing PID it gets a second PID which is only valid inside the target
namespace. Similarly, when a new PID namespace is created by passing
the <code> CLONE_NEWPID </code> flag to <code> clone(2)</code>, the
child process gets some unused PID in the original PID namespace
but PID 1 in the new namespace.
As a consequence, processes in different PID namespaces can have the
same PID. In particular, there can be arbitrarily many "init" processes,
which all have PID 1. The usual rules for PID 1 apply within each PID
namespace. That is, orphaned processes are reparented to the init
process, and it is a fatal error if the init process terminates,
causing all processes in the namespace to terminate as well. PID
namespaces can be nested, but under normal circumstances they are
not, so we won't discuss nesting.
Since each process in a non-root PID namespace also has a PID in the
root PID namespace, processes in the root PID namespace can "see" all
processes but not vice versa. Hence a process in the root namespace can
send signals to all processes while processes in the child namespace
can only send signals to processes in their own namespace.
Processes can be moved from the root PID namespace into a child
PID namespace but not the other way round. Moreover, a process can
instruct the kernel to create subsequent child processes in a different
PID namespace.
SUBSECTION(«User Namespaces»)
User namespaces have been implemented rather late compared to other
namespace types. The implementation was completed in 2013. The purpose
of user namespaces is to isolate user and group IDs. Initially there
is only one user namespace, the <em> initial namespace </em> to which
all processes belong. As with all namespace types, a new user namespace
is created with <code> unshare(2) </code> or <code> clone(2)</code>.
The UID and GID of a process can be different in different
namespaces. In particular, an unprivileged process may have UID
0 inside a user namespace. When a process is created in a new
namespace or a process joins an existing user namespace, it gains full
privileges in this namespace. However, the process has no additional
privileges in the parent/previous namespace. Moreover, a certain flag
is set for the process which prevents the process from entering yet
another namespace with elevated privileges. In particular it does not
keep its privileges when it returns to its original namespace. User
namespaces can be nested, but we don't discuss nesting here.
Each user namespace has an <em> owner</em>, which is the effective user
ID (EUID) of the process which created the namespace. Any process
in the root user namespace whose EUID matches the owner ID has all
capabilities in the child namespace.
If <code> CLONE_NEWUSER </code> is specified together with other
<code> CLONE_NEW* </code> flags in a single <code> clone(2) </code>
or <code> unshare(2) </code> call, the user namespace is guaranteed
to be created first, giving the child/caller privileges over the
remaining namespaces created by the call.
It is possible to map UIDs and GIDs between namespaces. The <code>
/proc/$PID/uid_map </code> and <code> /proc/$PID/gid_map </code> files
are used to get and set the mappings. We will only talk about UID
mappings in the sequel because the mechanism for the GID mappings is
analogous. When the <code> /proc/$PID/uid_map </code> (pseudo-)file is
read, the contents are computed on the fly and depend on both the user
namespace to which process <code> $PID </code> belongs and the user
namespace of the calling process. Each line contains three numbers
which specify the mapping for a range of UIDs. The numbers have
to be interpreted in one of two ways, depending on whether the two
processes belong to the same user namespace or not. All system calls
which deal with UIDs transparently translate UIDs by consulting these
maps. A map for a newly created namespace is established by writing
UID triples <em> once </em> to <em> one </em> <code> uid_map </code>
file. Subsequent writes will fail.
SUBSECTION(«IPC Namespaces»)
System V interprocess communication (IPC) subsumes three different
mechanisms which enable unrelated processes to communicate with each
other. These mechanisms, known as <em> message queues</em>, <em>
semaphores </em> and <em> shared memory</em>, predate Linux by at
least a decade. They are mandated by the POSIX standard, so every Unix
system has to implement the prescribed API. The common characteristic
of the System V IPC mechanisms is that their objects are addressed
by system-wide IPC <em> identifiers</em> rather than by pathnames.
IPC namespaces isolate these resources so that processes in different
IPC namespaces have different views of the existing IPC identifiers.
When a new IPC namespace is created, it starts out with all three
identifier sets empty. Newly created IPC objects are only visible
for processes which belong to the same IPC namespace as the process
which created the object.
<ul>
<li> Examine <code> /proc/$$/mounts</code>,
<code>/proc/$$/mountinfo</code>, and <code>/proc/$$/mountstats</code>.
</li>
<li> Recall the concept of a <em> bind mount</em>. Describe the
sequence of mount operations a container implementation would need
to perform in order to set up a container whose root file system
is mounted on, say, <code> /mnt </code> before the container is
started. </li>
<li> What should happen on the attempt to change a read-only mount
to be read-write from inside of a container? </li>
<li> Compile and run <code> <a
href="#uts_namespace_example">uts-ns.c</a></code>, a minimal C
program which illustrates how to create a new UTS namespace. Explain
each line of the source code. </li>
<li> Run <code> ls -l /proc/$$/ns </code> to see the namespaces of
the shell. Run <code> stat -L /proc/$$/ns/uts </code> and confirm
that the inode number coincides with the number shown in the target
of the link of the <code> ls </code> output. </li>
<li> Discuss why creating a namespace is a privileged operation. </li>
<li> What is the parent process ID of the init process? Examine the
fourth field of <code> /proc/1/stat </code> to confirm. </li>
<li> It is possible for a process in a PID namespace to have a parent
which is outside of this namespace. This is certainly the case for
the process with PID 1. Can this also happen for a different process?
</li>
<li> Examine the <code> <a
href="#pid_namespace_example">pid-ns.c</a></code> program. Will the
two numbers printed as <code> PID </code> and <code> child PID </code>
be the same? What will be the PPID number? Compile and run the program
to see if your guess was correct. </li>
<li> Create a veth interface pair. Check that both ends of the pair are
visible with <code> ip link show</code>. Start a second shell in a
different network namespace and confirm by running the same command
that no network interfaces exist in this namespace. In the original
namespace, set the namespace of one end of the pair to the process ID
of the second shell and confirm that the interface "moved" from one
namespace to the other. Configure (different) IP addresses on both ends
of the pair and transfer data through the ethernet tunnel between the
two shell processes which reside in different network namespaces. </li>
<li> Loopback, bridge, ppp and wireless are <em> network namespace
local devices</em>, meaning that the namespace of such devices can
not be changed. Explain why. Run <code> ethtool -k iface </code>
to find out which devices are network namespace local. </li>
<li> In a user namespace where the <code> uid_map </code> file has
not been written, system calls like <code> setuid(2) </code> which
change process UIDs fail. Why? </li>
<li> What should happen if a set-user-ID program is executed inside
of a user namespace and the on-disk UID of the program is not a mapped
UID? </li>
<li> Is it possible for a UID to map to different user names even if
no user namespaces are in use? </li>
</ul>
The <code> shmctl(2) </code> system call performs operations on a System V
shared memory segment. It operates on a <code> shmid_ds </code> structure
which contains in the <code> shm_lpid </code> field the PID of the process
which last attached or detached the segment. Describe the implications this API
detail has on the interaction between IPC and PID namespaces.
»)
SECTION(«Control Groups»)
<em> Control groups </em> (cgroups) allow processes to be grouped
and organized hierarchically in a tree. Each control group contains
processes which can be monitored or controlled as a unit, for example
by limiting the resources they can occupy. Several <em> controllers
</em> exist (CPU, memory, I/O, etc.), some of which actually impose
control while others only provide identification and relay control
to separate mechanisms. Unfortunately, control groups are not easy to
understand because the controllers are implemented in an inconsistent
way and because of the rather chaotic relationship between them.
In 2014 it was decided to rework the cgroup subsystem of the Linux
kernel. To keep existing applications working, the original cgroup
implementation, now called <em> cgroup-v1</em>, was retained and a
second, incompatible, cgroup implementation was designed. Cgroup-v2
aims to address the shortcomings of the first version, including its
inefficiency, inconsistency and the lack of interoperability among
controllers. The cgroup-v2 API was made official in 2016. Version 1
continues to work even if both implementations are active.
Both cgroup implementations provide a pseudo file system that
must be mounted in order to define and configure cgroups. The two
pseudo file systems may be mounted at the same time (on different
mountpoints). For both cgroup versions, the standard <code> mkdir(2)
</code> system call creates a new cgroup. To add a process to a cgroup
one must write its PID to one of the files in the pseudo file system.

We will cover both cgroup versions because as of 2018-11 many
applications still rely on cgroup-v1 and cgroup-v2 still lacks some
of the functionality of cgroup-v1. However, we will not look at
all controllers.
SUBSECTION(«CPU controllers»)
These controllers regulate the distribution of CPU cycles. The <em>
cpuset </em> controller of cgroup-v1 is the oldest cgroup controller;
it was implemented before the cgroup-v1 subsystem existed, which is
why it provides its own pseudo file system which is usually mounted at
<code>/dev/cpuset</code>. This file system is only kept for backwards
compatibility and is otherwise equivalent to the corresponding part of
the cgroup pseudo file system. The cpuset controller links subsets
of CPUs to cgroups so that the processes in a cgroup are confined to
run only on the CPUs of "their" subset.
The CPU controller of cgroup-v2, which is simply called "cpu", works
differently. Instead of specifying the set of admissible CPUs for a
cgroup, one defines the ratio of CPU cycles for the cgroup. Work to
support CPU partitioning like the cpuset controller of cgroup-v1 is in
progress and expected to be ready in 2019.
SUBSECTION(«Devices»)
The device controller of cgroup-v1 imposes mandatory access control
for device-special files. It tracks the <code> open(2) </code> and
<code> mknod(2) </code> system calls and enforces the restrictions
defined in the <em> device access whitelist </em> of the cgroup the
calling process belongs to.
Processes in the root cgroup have full permissions. Other cgroups
inherit the device permissions from their parent. A child cgroup
never has more permission than its parent.
Cgroup-v2 takes a completely different approach to device access
control. It is implemented on top of BPF, the <em> Berkeley packet
filter</em>. Hence this controller is not listed in the cgroup-v2
pseudo file system.
SUBSECTION(«Freezer»)
Both cgroup-v1 and cgroup-v2 implement a <em>freezer</em> controller,
which provides the ability to stop ("freeze") all processes in a
cgroup to free up resources for other tasks. The stopped processes can
be continued ("thawed") as a unit later. This is similar to sending
<code>SIGSTOP/SIGCONT</code> to all processes, but avoids some problems
with corner cases. The v2 version was added in 2019-07. It is available
from Linux-5.2 onwards.
SUBSECTION(«Memory»)
Cgroup-v1 offers three controllers related to memory management. First
there is the cpuset controller described above which can be instructed
to let processes allocate only memory which is close to the CPUs
of the cpuset. This makes sense on NUMA (non-uniform memory access)
systems where the memory access time for a given CPU depends on the
memory location. Second, the <em> hugetlb </em> controller manages
distribution and usage of <em> huge pages</em>. Third, there is the
<em> memory resource </em> controller which provides a number of
files in the cgroup pseudo file system to limit process memory usage,
swap usage and the usage of memory by the kernel on behalf of the
process. The most important tunable of the memory resource controller
is <code> limit_in_bytes</code>.
The cgroup-v2 version of the memory controller is rather more complex
because it attempts to limit direct and indirect memory usage of
the processes in a cgroup in a bullet-proof way. It is designed to
restrain even malicious processes which try to slow down or crash
the system by indirectly allocating memory. For example, a process
could try to create many threads or file descriptors which all cause a
(small) memory allocation in the kernel. Besides several tunables and
statistics, the memory controller provides the <code> memory.events
</code> file whose contents change whenever a state transition
for the cgroup occurs, for example when processes start to get
throttled because the high memory boundary was exceeded. This file
could be monitored by a <em> management agent </em> to take appropriate
actions. The main mechanism to control the memory usage is the <code>
memory.high </code> file.
SUBSECTION(«I/O»)

I/O controllers regulate the distribution of I/O resources among
cgroups. The throttling policy of cgroup-v2 can be used to enforce I/O
rate limits on arbitrary block devices, for example on a logical volume
provided by the logical volume manager (LVM). Read and write bandwidth
may be throttled independently. Moreover, the number of IOPS (I/O
operations per second) may also be throttled. The I/O controller of
cgroup-v1 is called <em> blkio </em> while for cgroup-v2 it is simply
called <em> io</em>. The features of the v1 and v2 I/O controllers
are identical but the filenames of the pseudo files and the syntax
for setting I/O limits differ. The exercises ask the reader to try
out both versions.
There is no cgroup-v2 controller for multi-queue schedulers so far.
However, there is the <em> I/O Latency </em> controller for cgroup-v2
which works for arbitrary block devices and all I/O schedulers. It
features <em> I/O workload protection </em> for the processes in
a cgroup. This works by throttling the processes in cgroups that
have a higher latency target than the protected cgroup. The
throttling is performed by lowering the depth of the request queue
of the affected devices.
<ul>
<li> Run <code> mount -t cgroup none /var/cgroup </code> and <code>
mount -t cgroup2 none /var/cgroup2 </code> to mount both cgroup pseudo
file systems and explore the files they provide. </li>
<li> Learn how to put the current shell into a new cgroup.
Hints: For v1, start with <code> echo 0 > cpuset.mems && echo 0 >
cpuset.cpus</code>. For v2: First activate controllers for the cgroup
in the parent directory. </li>
<li> Set up the cpuset controller so that your shell process has only
access to a single CPU core. Test that the limitation is enforced by
running <code>stress -c 2</code>. </li>
<li> Repeat the above for the cgroup-v2 CPU controller. Hint: <code>
echo 1000000 1000000 > cpu.max</code>. </li>
<li> In a cgroup with one bash process, start a simple loop that prints
some output: <code> while :; do date; sleep 1; done</code>. Freeze
the cgroup by writing the string <code> FROZEN </code>
to a suitable <code> freezer.state </code> file in the cgroup-v1 file
system. Then unfreeze the cgroup by writing <code> THAWED </code>
to the same file. Find out how one can tell whether a given cgroup
is frozen. </li>
<li> Pick a block device to throttle. Estimate its maximal read
bandwidth by running a command like <code> ddrescue /dev/sdX
/dev/null</code>. Enforce a read bandwidth rate of 1M/s for the
device by writing a string of the form <code> "$MAJOR:$MINOR $((1024 *
1024))" </code> to a file named <code> blkio.throttle.read_bps_device
</code> in the cgroup-v1 pseudo file system. Check that the bandwidth
was indeed throttled by running the above <code> ddrescue </code>
command again. </li>
<li> Repeat the previous exercise, but this time use the cgroup-v2
interface for the I/O controller. Hint: write a string of the form
<code> "$MAJOR:$MINOR rbps=$((1024 * 1024))" </code> to a file named
<code>io.max</code>. </li>
</ul>

<ul>
<li> In one terminal running <code> bash</code>, start a second <code>
bash </code> process and print its PID with <code> echo $$</code>.
Guess what happens if you run <code> kill -STOP $PID; kill -CONT
$PID</code> from a second terminal, where <code> $PID </code>
is the PID that was printed in the first terminal. Try it out,
explain the observed behaviour and discuss its impact on the freezer
controller. Repeat the experiment but this time use the freezer
controller to stop and restart the bash process. </li>
</ul>
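The signal half of this exercise can be tried without any cgroup
setup. The sketch below uses a <code> sleep </code> process as a
stand-in for the second bash process; the point to observe is that
<code> ps </code> reports state <code> T </code> (stopped) after
<code> SIGSTOP</code>, and that <code> SIGCONT</code>, unlike a
freezer thaw, is a signal the target process could catch with a
handler:

```shell
sleep 100 &			# stand-in for the second bash process
PID=$!
kill -STOP "$PID"		# same as running kill -STOP from another terminal
sleep 1				# give the kernel time to update the process state
STATE=$(ps -o stat= -p "$PID" | tr -d ' ')
echo "state after SIGSTOP: $STATE"
kill -CONT "$PID"		# resume the stopped process
kill "$PID"			# clean up
```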
»)
SECTION(«Linux Containers (LXC)»)
Containers provide resource management through control groups and
resource isolation through namespaces. A <em> container platform </em>
is thus a software layer implemented on top of these features. Given a
directory containing a Linux root file system, starting the container
is a simple matter: First <code> clone(2) </code> is called with the
proper <code> NEW_* </code> flags to create a new process in a suitable
set of namespaces. The child process then creates a cgroup for the
container and puts itself into it. The final step is to let the child
process hand over control to the container's <code> /sbin/init </code>
by calling <code> exec(2)</code>. When the last process in the newly
created namespaces exits, the namespaces disappear and the parent
process removes the cgroup. The details are a bit more complicated,
but the above covers the essence of what the container startup command
has to do.
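The sequence just described can be approximated from the shell with
<code> unshare(1) </code> from util-linux instead of a direct <code>
clone(2) </code> call. In the sketch below the root file system path
is the one used in the exercises, the cgroup part is omitted, and
the command is only executed if it has a chance to work:

```shell
ROOTFS=/media/lxc/buru		# root file system of the container (assumption)
# New UTS, IPC, network, PID and mount namespaces, then hand over
# control to the container's /sbin/init. Creating a cgroup for the
# container and putting the child into it is left out here.
START="unshare --uts --ipc --net --pid --mount --fork chroot $ROOTFS /sbin/init"
if [ "$(id -u)" -eq 0 ] && [ -x "$ROOTFS/sbin/init" ]; then
	$START
fi
echo "$START"
```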
Many container platforms offer additional features not to be discussed
here, like downloading and unpacking a file system image from the
internet, or supplying the root file system for the container by other
means, for example by creating an LVM snapshot of a master image.
LXC is a comparatively simple container platform which can be used to
start a single daemon in a container, or to boot a container from
a root file system as described above. It provides several <code>
lxc-* </code> commands to start, stop and maintain containers.
LXC version 1 is much simpler than subsequent versions, and is still
being maintained, so we only discuss this version of LXC here.
An LXC container is defined by a configuration file in
the format described in <code> lxc.conf(5)</code>. A <a
href="#minimal_lxc_config_file"> minimal configuration </a> which
defines a network device and requests CPU and memory isolation has
as few as 10 lines (not counting comments). With the configuration
file and the root file system in place, the container can be started
by running <code> lxc-start -n $NAME</code>. One can log in to the
container on the local pseudo terminal or via ssh (provided the sshd
package is installed). The container can be stopped by executing
<code> halt </code> from within the container, or by running <code>
lxc-stop </code> on the host system. <code> lxc-ls </code> and
<code> lxc-info</code> print information about containers, and <code>
lxc-cgroup </code> changes the settings of the cgroup associated with
a container.
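A typical round trip through these commands might look as follows.
The container name <code> buru </code> is the one used in the
exercises, and the commands are only executed if LXC is installed
and we are root:

```shell
NAME=buru
if command -v lxc-start > /dev/null && [ "$(id -u)" -eq 0 ]; then
	lxc-start -n "$NAME" -d			# boot the container in the background
	lxc-info -n "$NAME"			# print its state and the PID of its init
	lxc-cgroup -n "$NAME" cpuset.cpus 0	# adjust the cgroup at runtime
	lxc-stop -n "$NAME"			# clean shutdown
fi
echo "$NAME"
```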
The exercises ask the reader to install the LXC package from source,
and to set up a minimal container running Ubuntu-18.04.
<ul>
<li> Clone the LXC git repository from <code>
https://github.com/lxc/lxc</code> and check out the <code> stable-1.0
</code> branch. Generate the build system with <code>
./autogen.sh</code>, compile with <code> ./configure && make</code>,
and install with <code> sudo make install</code>. </li>
<li> Download a minimal Ubuntu root file system with a command like
<code> debootstrap --download-only --include isc-dhcp-client bionic
/media/lxc/buru/ http://de.archive.ubuntu.com/ubuntu</code>. </li>

<li> Set up an ethernet bridge as described in the <a
href="./Networking.html#link_layer">Link Layer</a> section of the
chapter on networking. </li>

<li> Examine the <a href="#minimal_lxc_config_file"> minimal
configuration file </a> for the container and copy it to <code>
/var/lib/lxc/buru/config</code>. Adjust host name, MAC address and
the name of the bridge interface. </li>

<li> Start the container with <code> lxc-start -n buru</code>. </li>

<li> While the container is running, investigate the control files of the
cgroup pseudo file system. Identify the pseudo files which describe the
CPU and memory limit. </li>

<li> Come up with a suitable <code> lxc-cgroup </code> command
to change the cpuset and the memory of the container while it is
running. </li>

<li> On the host system, create a loop device and a file system on
it. Mount the file system on a subdirectory of the root file system
of the container. Note that the mount is not visible from within the
container. Come up with a way to make it visible without restarting
the container. </li>
</ul>
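The loop device exercise might be approached as in the sketch below.
The backing file and mount point are assumptions, root privileges
are required for the real work, and the final comment only hints
at one possible solution, namely repeating the mount from within
the container's mount namespace with <code> nsenter(1)</code>:

```shell
IMG=/tmp/loop.img		# hypothetical backing file
MNT=/media/lxc/buru/mnt		# mount point inside the container's rootfs
if [ "$(id -u)" -eq 0 ] && [ -d /media/lxc/buru ]; then
	dd if=/dev/zero of="$IMG" bs=1M count=64
	LOOPDEV=$(losetup -f --show "$IMG")	# attach the first free loop device
	mkfs.ext4 -q "$LOOPDEV"
	mkdir -p "$MNT" && mount "$LOOPDEV" "$MNT"
	# The mount stays invisible in the container because the container
	# has its own mount namespace. One way to make it visible is to
	# repeat the mount from within that namespace, for example with
	# nsenter -t $INIT_PID -m, where $INIT_PID is the PID of the
	# container's init process as reported by lxc-info.
fi
echo "$IMG"
```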
HOMEWORK(«Compare the features of LXC versions 1, 2 and 3.»)
SUBSECTION(«UTS Namespace Example»)

<pre>
<code>
#define _GNU_SOURCE
#include &lt;sys/utsname.h&gt;
#include &lt;sched.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;unistd.h&gt;
static void print_hostname_and_exit(const char *pfx)
{
	struct utsname uts;

	uname(&uts);
	printf("%s: %s\n", pfx, uts.nodename);
	exit(EXIT_SUCCESS);
}

/* Runs in a new UTS namespace: the hostname change is not visible
 * outside of it. Error checking is omitted for brevity. */
static int child(void *arg)
{
	sethostname("jesus", 5);
	print_hostname_and_exit("child");
	return 0;	/* never reached */
}

#define STACK_SIZE (64 * 1024)
static char child_stack[STACK_SIZE];

int main(int argc, char *argv[])
{
	clone(child, child_stack + STACK_SIZE, CLONE_NEWUTS, NULL);
	print_hostname_and_exit("parent");
}
</code>
</pre>
SUBSECTION(«PID Namespace Example»)

<pre>
<code>
#define _GNU_SOURCE
#include &lt;sched.h&gt;
#include &lt;unistd.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;stdio.h&gt;
static int child(void *arg)
{
	/* We are PID 1 in the new namespace. Our parent belongs to a
	 * different PID namespace, hence getppid() returns 0. */
	printf("PID: %d, PPID: %d\n", (int)getpid(), (int)getppid());
	return 0;
}

#define STACK_SIZE (64 * 1024)
static char child_stack[STACK_SIZE];

int main(int argc, char *argv[])
{
	pid_t pid = clone(child, child_stack + STACK_SIZE, CLONE_NEWPID, NULL);

	printf("child PID: %d\n", (int)pid);
	exit(EXIT_SUCCESS);
}
</code>
</pre>
SUBSECTION(«Minimal LXC Config File»)

<pre>
<code>
# Employ cgroups to limit the CPUs and the amount of memory the container is
# allowed to use.
lxc.cgroup.cpuset.cpus = 0-1
lxc.cgroup.memory.limit_in_bytes = 2G

# So that the container starts out with a fresh UTS namespace that
# has already set its hostname.
lxc.utsname = buru

# LXC does not play ball if we don't set the type of the network device.
# It will always be veth.
lxc.network.type = veth

# This sets the name of the veth pair which is visible on the host. This
# way it is easy to tell which interface belongs to which container.
lxc.network.veth.pair = buru

# Of course we need to tell LXC where the root file system of the container
# is located. LXC will automatically mount a couple of pseudo file systems
# for the container, including /proc and /sys.
lxc.rootfs = /media/lxc/buru

# so that we can assign a fixed address via DHCP
lxc.network.hwaddr = ac:de:48:32:35:cf
# /dev/kmsg must NOT be a symlink to /dev/console inside the container. On
# the host it should be a real device; in a container it must not exist at
# all. If /dev/kmsg points to /dev/console, systemd-journald reads from
# /dev/kmsg and writes what it has read to /dev/console, from where it is
# fed back to /dev/kmsg, ad infinitum. This feedback loop makes
# systemd-journald consume all available CPU time.
#
# Make sure to remove /var/lib/lxc/${container}/rootfs.dev/kmsg
lxc.kmsg = 0
lxc.network.link = br39

# This is needed for lxc-console
lxc.tty = 4
</code>
</pre>
SECTION(«Further Reading»)

<ul>
<li> <a href="https://lwn.net/Articles/782876/">The creation of the
io.latency block I/O controller</a>, by Josef Bacik. </li>
</ul>