:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under Documentation/cgroup-v1/.
.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device controller
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. Misc
       5-8-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------
"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.

What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups that make up the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy cannot be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before the controllers
are put to use after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible.  To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries.  This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace.  The mount option is
        ignored on non-init namespace mounts.  Please refer to the
        Delegation section for details.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees.  This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace.  The mount
        option is ignored on non-init namespace mounts.
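
As an illustration, the hierarchy could be mounted with the
"nsdelegate" option enabled (the mount point here is an assumption)::

  # mount -t cgroup2 -o nsdelegate none /sys/fs/cgroup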

Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
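
For example, the current shell could be moved into a child cgroup like
this (the cgroup name is an assumption)::

  # mkdir test-cgroup
  # echo $$ > test-cgroup/cgroup.procs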

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)

Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is single direction::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  "cgroup.type" file will report "domain (invalid)" in
these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.
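
Putting these rules together, a threaded subtree might be set up as in
the following sketch; the cgroup names, the use of the cpu controller
and $TID are assumptions::

  # mkdir threads-test
  # echo $$ > threads-test/cgroup.procs
  # mkdir threads-test/t1 threads-test/t2
  # echo threaded > threads-test/t1/cgroup.type
  # echo threaded > threads-test/t2/cgroup.type
  # cat threads-test/cgroup.type
  domain threaded
  # echo "+cpu" > threads-test/cgroup.subtree_control
  # echo $TID > threads-test/t1/cgroup.threads

Making t1 threaded turns threads-test into a threaded domain, after
which a threaded controller such as cpu can be enabled even though
threads-test still contains processes.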

[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's 0.  After the one
process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.
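
For instance, a clean-up agent could wait for the notification with
inotify; a minimal sketch, assuming the inotify-tools package and the
hierarchy above::

  # inotifywait -e modify B/cgroup.events
  # cat B/cgroup.events
  populated 0
  frozen 0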

Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either they
all succeed or all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.

Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.
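
As a sketch of what this means in practice (the paths, controller
names and the exact shell error message are assumptions), a controller
which the parent hasn't enabled can't be enabled further down::

  # cat cgroup.subtree_control
  cpu
  # cat child/cgroup.controllers
  cpu
  # echo "+memory" > child/cgroup.subtree_control
  sh: echo: write error: No such file or directory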

No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
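
A minimal sketch of that transfer, assuming a populated cgroup and the
memory controller being available::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control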

Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types.  Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
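
For the first delegation method, the setup might look like the
following sketch, where the mount point, cgroup name and user U0 are
assumptions::

  # mkdir /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown U0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown U0 /sys/fs/cgroup/delegated/cgroup.subtree_control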

Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.

Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.
549 Avoid Name Collisions
550 ~~~~~~~~~~~~~~~~~~~~~
552 Interface files for a cgroup and its children cgroups occupy the same
553 directory and it is possible to create children cgroups which collide
554 with interface files.
556 All cgroup core interface files are prefixed with "cgroup." and each
557 controller's interface files are prefixed with the controller name and
558 a dot. A controller's name is composed of lower case alphabets and
559 '_'s but never begins with an '_' so it can be used as the prefix
560 character for collision avoidance. Also, interface file names won't
561 start or end with terms which are often used in categorizing workloads
562 such as job, service, slice, unit or workload.
564 cgroup doesn't do anything to prevent name collisions and it's the
565 user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.

Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
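
For example, with two active siblings holding the assumed weights of
200 and 100, the first receives two thirds of the contended resource::

  # echo 200 > A/cpu.weight
  # echo 100 > B/cpu.weight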

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.

Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
allocation.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

	VAL0\n
	VAL1\n
	...

  Space separated values
  (when read-only or multiple values can be written at once)

	VAL0 VAL1 ...\n

  Flat keyed

	KEY0 VAL0\n
	KEY1 VAL1\n
	...

  Nested keyed

	KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
	KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
	...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.

Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.  Also,
  informational files on the root cgroup which end up showing global
  information available elsewhere shouldn't exist.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or the following::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.

Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled.  It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line.  The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup.  The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root.  Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line.  The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup.  The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows space separated list of all controllers available to
        the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups.  Starts out empty.

        When read, it shows space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        Space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers.  A controller
        name prefixed with '+' enables the controller and '-'
        disables.  If a controller appears more than once on the list,
        the last one is effective.  When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file.  The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file.  The default is "max".

        Maximum allowed descent depth below the current cgroup.
        If the actual descent depth is equal or larger,
        an attempt to create a new child cgroup will fail.
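
        For example, nesting below the current cgroup could be capped
        at one level like this (an illustrative sketch; the kernel
        rejects the mkdir with EAGAIN and the exact shell message may
        vary)::

          # echo 1 > cgroup.max.depth
          # mkdir child
          # mkdir child/grandchild
          mkdir: cannot create directory 'child/grandchild': Resource temporarily unavailable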

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups.  A cgroup
                becomes dying after being deleted by a user.  The
                cgroup will remain in dying state for some undefined
                time (which can depend on system load) before being
                completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding limits, which were active at the moment of
                cgroup deletion.

  cgroup.freeze
        A read-write single value file which exists on non-root
        cgroups.  Allowed values are "0" and "1".  The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups.  This means that all belonging processes
        will be stopped and will not run until the cgroup is
        explicitly unfrozen.  Freezing of the cgroup may take some
        time; when this action is completed, the "frozen" value in the
        cgroup.events control file will be updated to "1" and the
        corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by
        settings of any ancestor cgroups.  If any of the ancestor
        cgroups is frozen, the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal
        signal.  They also can enter and leave a frozen cgroup: either
        by an explicit move by a user, or if freezing of the cgroup
        races with fork().  If a process is moved to a frozen cgroup,
        it stops.  If a process is moved out of a frozen cgroup, it
        becomes running.

        The frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty)
        cgroup, as well as create new sub-cgroups.
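
        A freeze/unfreeze cycle might look like the following sketch,
        assuming a populated non-root cgroup::

          # echo 1 > cgroup.freeze
          # cat cgroup.events
          populated 1
          frozen 1
          # echo 0 > cgroup.freeze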


Controllers
===========

CPU
---

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup.  Be aware that system management software may already
have placed RT processes into nonroot cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.

CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file which exists on non-root cgroups.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats:

        - usage_usec
        - user_usec
        - system_usec

        and the following three when the controller is enabled:

        - nr_periods
        - nr_throttled
        - throttled_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups.  The default is "100".

        The weight in the range [1, 10000].

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2).  Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit.  It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration.  "max" for $MAX indicates no limit.  If only
        one number is written, $MAX is updated.
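
        For example, a group could be limited to half a CPU with the
        following (the values are illustrative)::

          # echo "50000 100000" > cpu.max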

  cpu.pressure
        A read-only nested-key file which exists on non-root cgroups.

        Shows pressure stall information for CPU.  See
        Documentation/accounting/psi.txt for details.

Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.

Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        Hard memory protection.  If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions.  If there is no
        unprotected reclaimable memory available, the OOM killer
        is invoked.

        The effective min boundary is limited by the memory.min
        values of all ancestor cgroups.  If there is memory.min
        overcommitment (a child cgroup or cgroups are requiring more
        protected memory than the parent will allow), then each child
        cgroup will get the part of the parent's protection
        proportional to its actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes,
        its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        Best-effort memory protection.  If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless memory can be reclaimed
        from unprotected cgroups.

        The effective low boundary is limited by the memory.low
        values of all ancestor cgroups.  If there is memory.low
        overcommitment (a child cgroup or cgroups are requiring more
        protected memory than the parent will allow), then each child
        cgroup will get the part of the parent's protection
        proportional to its actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage throttle limit.  This is the main mechanism to
        control memory usage of a cgroup.  If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached.

  memory.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage hard limit.  This is the final protection
        mechanism.  If a cgroup's memory usage reaches this limit and
        can't be reduced, the OOM killer is invoked in the cgroup.
        Under certain circumstances, the usage may go over the limit
        temporarily.

        This is the ultimate protection mechanism.  As long as the
        high limit is used and monitored properly, this limit's
        utility is limited to providing the final safety net.

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups.  The default value is "0".

        Determines whether the cgroup should be treated as
        an indivisible workload by the OOM killer.  If set,
        all tasks belonging to the cgroup or to its descendants
        (if the memory cgroup is not a leaf cgroup) are killed
        together or not at all.  This can be used to avoid
        partial kills to guarantee workload integrity.

        Tasks with the OOM protection (oom_score_adj set to -1000)
        are treated as an exception and are never killed.

        If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of
        the memory.oom.group values of ancestor cgroups.

  memory.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          low
                The number of times the cgroup is reclaimed due to
                high memory pressure even though its usage is under
                the low boundary.  This usually indicates that the low
                boundary is over-committed.

          high
                The number of times processes of the cgroup are
                throttled and routed to perform direct memory reclaim
                because the high memory boundary was exceeded.  For a
                cgroup whose memory usage is capped by the high limit
                rather than global memory pressure, this event's
                occurrences are expected.

          max
                The number of times the cgroup's memory usage was
                about to go over the max boundary.  If direct reclaim
                fails to bring it down, the cgroup goes to OOM state.

          oom
                The number of times the cgroup's memory usage reached
                the limit and allocation was about to fail.

                Depending on context, the result could be an
                invocation of the OOM killer and a retried allocation,
                or a failed allocation.

                A failed allocation, in its turn, could be returned to
                userspace as -ENOMEM or silently ignored in cases like
                disk readahead.  For now, OOM in a memory cgroup kills
                tasks only if the shortage has happened inside a page
                fault.

                This event is not raised if the OOM killer is not
                considered as an option, e.g. for failed high-order
                allocations.

          oom_kill
                The number of processes belonging to this cgroup
                killed by any kind of OOM killer.

  memory.stat
        A read-only flat-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        on the state and past events of the memory management system.

        All memory amounts are in bytes.

        The entries are ordered to be human readable, and new entries
        can show up in the middle.  Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

          anon
                Amount of memory used in anonymous mappings such as
                brk(), sbrk(), and mmap(MAP_ANONYMOUS)

          file
                Amount of memory used to cache filesystem data,
                including tmpfs and shared memory.

          kernel_stack
                Amount of memory allocated to kernel stacks.

          slab
                Amount of memory used for storing in-kernel data
                structures.

          sock
                Amount of memory used in network transmission buffers

          shmem
                Amount of cached filesystem data that is swap-backed,
                such as tmpfs, shm segments, shared anonymous mmap()s

          file_mapped
                Amount of cached filesystem data mapped with mmap()

          file_dirty
                Amount of cached filesystem data that was modified but
                not yet written back to disk

          file_writeback
                Amount of cached filesystem data that was modified and
                is currently being written back to disk

          anon_thp
                Amount of memory used in anonymous mappings backed by
                transparent hugepages

          inactive_anon, active_anon, inactive_file, active_file, unevictable
                Amount of memory, swap-backed and filesystem-backed,
                on the internal memory management lists used by the
                page reclaim algorithm

          slab_reclaimable
                Part of "slab" that might be reclaimed, such as
                dentries and inodes.

          slab_unreclaimable
                Part of "slab" that cannot be reclaimed on memory
                pressure.

          pgfault
                Total number of page faults incurred

          pgmajfault
                Number of major page faults incurred

          workingset_refault
                Number of refaults of previously evicted pages

          workingset_activate
                Number of refaulted pages that were immediately
                activated

          workingset_nodereclaim
                Number of times a shadow node has been reclaimed

          pgrefill
                Amount of scanned pages (in an active LRU list)

          pgscan
                Amount of scanned pages (in an inactive LRU list)

          pgsteal
                Amount of reclaimed pages

          pgactivate
                Amount of pages moved to the active LRU list

          pgdeactivate
                Amount of pages moved to the inactive LRU list

          pglazyfree
                Amount of pages postponed to be freed under memory
                pressure

          pglazyfreed
                Amount of reclaimed lazyfree pages

          thp_fault_alloc
                Number of transparent hugepages which were allocated
                to satisfy a page fault, including COW faults.  This
                counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
                is not set.

          thp_collapse_alloc
                Number of transparent hugepages which were allocated
                to allow collapsing an existing range of pages.  This
                counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
                is not set.

  memory.swap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of swap currently being used by the cgroup
        and its descendants.

  memory.swap.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Swap usage hard limit.  If a cgroup's swap usage reaches this
        limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          max
                The number of times the cgroup's swap usage was about
                to go over the max boundary and swap allocation
                failed.

          fail
                The number of times swap allocation failed either
                because of running out of swap system-wide or max
                limit.

        When reduced under the current usage, the existing swap
        entries are reclaimed gradually and the swap usage may stay
        higher than the limit for an extended period of time.  This
        reduces the impact on the workload and memory management.

  memory.pressure
        A read-only nested-key file which exists on non-root cgroups.

        Shows pressure stall information for memory.  See
        Documentation/accounting/psi.txt for details.

Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.
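
For example, a management agent might cap a workload with
"memory.high" and keep a hard limit well above it as the safety net
(the cgroup name and values here are assumptions)::

  # echo 4G > workload/memory.high
  # echo 5G > workload/memory.max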

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also operate as
performant with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; unfortunately, memory pressure monitoring mechanism isn't
implemented yet.

Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is non-deterministic; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.

IO
--

The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.

IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
        A read-only nested-keyed file which exists on non-root
        cgroups.

        Lines are keyed by $MAJ:$MIN device numbers and not ordered.
        The following nested keys are defined.

          ======        =====================
          rbytes        Bytes read
          wbytes        Bytes written
          rios          Number of read IOs
          wios          Number of write IOs
          dbytes        Bytes discarded
          dios          Number of discard IOs
          ======        =====================

        An example read output follows::

          8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
          8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.weight
        A read-write flat-keyed file which exists on non-root cgroups.
        The default is "default 100".

        The first line is the default weight applied to devices
        without specific override.  The rest are overrides keyed by
        $MAJ:$MIN device numbers and not ordered.  The weights are in
        the range [1, 10000] and specify the relative amount of IO
        time the cgroup can use in relation to its siblings.

        The default weight can be updated by writing either "default
        $WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
        "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

        An example read output follows::

          default 100
          8:16 200
          8:0 50

  io.max
        A read-write nested-keyed file which exists on non-root
        cgroups.

        BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
        device numbers and not ordered.  The following nested keys are
        defined.

          =====         ==================================
          rbps          Max read bytes per second
          wbps          Max write bytes per second
          riops         Max read IO operations per second
          wiops         Max write IO operations per second
          =====         ==================================

        When writing, any number of nested key-value pairs can be
        specified in any order.  "max" can be specified as the value
        to remove a specific limit.  If the same key is specified
        multiple times, the outcome is undefined.

        BPS and IOPS are measured in each IO direction and IOs are
        delayed if limit is reached.  Temporary bursts are allowed.

        Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

          echo "8:16 rbps=2097152 wiops=120" > io.max

        Reading returns the following::

          8:16 rbps=2097152 wbps=max riops=max wiops=120

        Write IOPS limit can be removed by writing the following::

          echo "8:16 wiops=max" > io.max

        Reading now returns the following::

          8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
        A read-only nested-key file which exists on non-root cgroups.

        Shows pressure stall information for IO.  See
        Documentation/accounting/psi.txt for details.

Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism.  Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs.  The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain.  Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem.  Currently, cgroup writeback is implemented on ext2, ext4
and btrfs.  On other filesystems, all writeback IOs are attributed to
the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well.  In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected.  It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
        These ratios apply the same to cgroup writeback with the
        amount of available memory capped by limits imposed by the
        memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
        For cgroup writeback, this is calculated into a ratio against
        total available memory and applied the same way as
        vm.dirty[_background]_ratio.

IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection.  You provide a group
with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a lower latency target than the
protected workload.

The limits are only applied at the peer level in the hierarchy.  This means that
in the diagram below, only groups A, B, and C will influence each other, and
groups D and F will influence each other.  Group G will influence nobody::

                        [root]
                /          |            \
                A          B            C
               /  \        |
              D    F       G

So the ideal way to configure this is to set io.latency in groups A, B, and C.
Generally you do not want to set a value lower than the latency your device
supports.  Experiment to find the value that works best for your workload.
Start at higher than the expected latency for your device and watch the
avg_lat value in io.stat for your workload group to get an idea of the
latency you see during normal operation.  Use the avg_lat value as a basis for
your real setting, setting at 10-15% higher than the value in io.stat.
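
For example, a group might be given a 10ms latency target like this
(the device numbers and the value are assumptions)::

  # echo "8:16 target=10000" > io.latency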

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving; so as long as everybody is meeting their latency
target the controller doesn't do anything.  Once a group starts missing its
target it begins throttling any peer group that has a higher target than itself.
This throttling takes 2 forms:

- Queue depth throttling.  This is the number of outstanding IOs a group is
  allowed to have.  We will clamp down relatively quickly, starting at no limit
  and going all the way down to 1 IO at a time.

- Artificial delay induction.  There are certain types of IO that cannot be
  throttled without possibly adversely affecting higher priority groups.  This
  includes swapping and metadata IO.  These types of IO are allowed to occur
  normally, however they are "charged" to the originating group.  If the
  originating group is being throttled you will see the use_delay and delay
  fields in io.stat increase.  The delay value is how many microseconds that are
  being added to any process that runs in this group.  Because this number can
  grow quite large if there is a lot of swapping or metadata IO occurring we
  limit the individual delay events to 1 second at a time.

Once the victimized group starts meeting its latency target again it will start
unthrottling any peer groups that were throttled previously.  If the victimized
group simply stops doing IO the global counter will unthrottle appropriately.
1601 IO Latency Interface Files
1602 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1605 This takes a similar format as the other controllers.
1607 "MAJOR:MINOR target=<target time in microseconds"
1610 If the controller is enabled you will see extra stats in io.stat in
1611 addition to the normal ones.
  depth
        This is the current queue depth for the group.
  avg_lat
        This is an exponential moving average with a decay rate of 1/exp
        bound by the sampling interval.  The decay rate interval can be
        calculated by multiplying the win value in io.stat by the
        corresponding number of samples based on the win value.
  win
        The sampling window size in milliseconds.  This is the minimum
        duration of time between evaluation events.  Windows only elapse
        with IO activity.  Idle periods extend the most recent window.
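Putting these together, an io.stat line for a group with io.latency
enabled might look like the following; the device number and all field
values are illustrative::

  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0 depth=8 avg_lat=327 win=100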
PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.
1634 The number of tasks in a cgroup can be exhausted in ways which other
1635 controllers cannot prevent, thus warranting its own controller. For
1636 example, a fork bomb is likely to exhaust the number of tasks before
1637 hitting memory restrictions.
Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.
PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".
1650 Hard limit of number of processes.
  pids.current
        A read-only single value file which exists on all cgroups.
        The number of processes currently in the cgroup and its
        descendants.
1658 Organisational operations are not blocked by cgroup policies, so it is
1659 possible to have pids.current > pids.max. This can be done by either
1660 setting the limit to be smaller than pids.current, or attaching enough
1661 processes to the cgroup such that pids.current is larger than
1662 pids.max. However, it is not possible to violate a cgroup PID policy
1663 through fork() or clone(). These will return -EAGAIN if the creation
1664 of a new process would cause a cgroup policy to be violated.
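A minimal sketch of the limit in action (the cgroup path is
illustrative and assumes the pids controller has been enabled for this
cgroup; the exact error text depends on the shell)::

  # mkdir /sys/fs/cgroup/test
  # echo 1 > /sys/fs/cgroup/test/pids.max
  # echo $$ > /sys/fs/cgroup/test/cgroup.procs
  # /bin/true
  sh: fork: Resource temporarily unavailable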
1670 The "cpuset" controller provides a mechanism for constraining
1671 the CPU and memory node placement of tasks to only the resources
1672 specified in the cpuset interface files in a task's current cgroup.
1673 This is especially valuable on large NUMA systems where placing jobs
1674 on properly sized subsets of the systems with careful processor and
1675 memory placement to reduce cross-node memory access and contention
1676 can improve overall system performance.
1678 The "cpuset" controller is hierarchical. That means the controller
1679 cannot use CPUs or memory nodes not allowed in its parent.
1682 Cpuset Interface Files
1683 ~~~~~~~~~~~~~~~~~~~~~~
  cpuset.cpus
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.
1689 It lists the requested CPUs to be used by tasks within this
1690 cgroup. The actual list of CPUs to be granted, however, is
1691 subjected to constraints imposed by its parent and can differ
1692 from the requested CPUs.
        The CPU numbers are comma-separated numbers or ranges.
        For example::

          # cat cpuset.cpus
          0-4,6,8-10
1700 An empty value indicates that the cgroup is using the same
1701 setting as the nearest cgroup ancestor with a non-empty
1702 "cpuset.cpus" or all the available CPUs if none is found.
1704 The value of "cpuset.cpus" stays constant until the next update
1705 and won't be affected by any CPU hotplug events.
1707 cpuset.cpus.effective
1708 A read-only multiple values file which exists on all
1709 cpuset-enabled cgroups.
1711 It lists the onlined CPUs that are actually granted to this
1712 cgroup by its parent. These CPUs are allowed to be used by
1713 tasks within the current cgroup.
1715 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
1716 all the CPUs from the parent cgroup that can be available to
1717 be used by this cgroup. Otherwise, it should be a subset of
1718 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
1719 can be granted. In this case, it will be treated just like an
1720 empty "cpuset.cpus".
1722 Its value will be affected by CPU hotplug events.
  cpuset.mems
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.
1728 It lists the requested memory nodes to be used by tasks within
1729 this cgroup. The actual list of memory nodes granted, however,
1730 is subjected to constraints imposed by its parent and can differ
1731 from the requested memory nodes.
        The memory node numbers are comma-separated numbers or ranges.
        For example::

          # cat cpuset.mems
          0-1,3
        An empty value indicates that the cgroup is using the same
        setting as the nearest cgroup ancestor with a non-empty
        "cpuset.mems" or all the available memory nodes if none
        is found.
1744 The value of "cpuset.mems" stays constant until the next update
        and won't be affected by any memory node hotplug events.
1747 cpuset.mems.effective
1748 A read-only multiple values file which exists on all
1749 cpuset-enabled cgroups.
1751 It lists the onlined memory nodes that are actually granted to
1752 this cgroup by its parent. These memory nodes are allowed to
1753 be used by tasks within the current cgroup.
1755 If "cpuset.mems" is empty, it shows all the memory nodes from the
1756 parent cgroup that will be available to be used by this cgroup.
1757 Otherwise, it should be a subset of "cpuset.mems" unless none of
1758 the memory nodes listed in "cpuset.mems" can be granted. In this
1759 case, it will be treated just like an empty "cpuset.mems".
        Its value will be affected by memory node hotplug events.
1763 cpuset.cpus.partition
1764 A read-write single value file which exists on non-root
1765 cpuset-enabled cgroups. This flag is owned by the parent cgroup
1766 and is not delegatable.
        It accepts only the following input values when written to.

          "root"   - a partition root
          "member" - a non-root member of a partition
1773 When set to be a partition root, the current cgroup is the
1774 root of a new partition or scheduling domain that comprises
1775 itself and all its descendants except those that are separate
1776 partition roots themselves and their descendants. The root
1777 cgroup is always a partition root.
        There are constraints on where a partition root can be set.
        It can only be set in a cgroup if all the following conditions
        are met.
        1) The "cpuset.cpus" is not empty and the list of CPUs is
           exclusive, i.e. they are not shared by any of its siblings.
1785 2) The parent cgroup is a partition root.
1786 3) The "cpuset.cpus" is also a proper subset of the parent's
1787 "cpuset.cpus.effective".
        4) There are no child cgroups with cpuset enabled.  This
           eliminates corner cases that would have to be handled if
           such a condition were allowed.
1792 Setting it to partition root will take the CPUs away from the
1793 effective CPUs of the parent cgroup. Once it is set, this
1794 file cannot be reverted back to "member" if there are any child
1795 cgroups with cpuset enabled.
        A parent partition cannot distribute all its CPUs to its
        child partitions.  There must be at least one cpu left in the
        parent partition.
        Once it becomes a partition root, changes to "cpuset.cpus" are
        generally allowed as long as the first condition above is true,
        the change will not take away all the CPUs from the parent
        partition and the new "cpuset.cpus" value is a superset of its
        children's "cpuset.cpus" values.
        Sometimes, external factors like changes to ancestors'
        "cpuset.cpus" or cpu hotplug can cause the state of the partition
        root to change.  On read, the "cpuset.cpus.partition" file
        can show the following values.
1812 "member" Non-root member of a partition
1813 "root" Partition root
1814 "root invalid" Invalid partition root
1816 It is a partition root if the first 2 partition root conditions
1817 above are true and at least one CPU from "cpuset.cpus" is
1818 granted by the parent cgroup.
1820 A partition root can become invalid if none of CPUs requested
1821 in "cpuset.cpus" can be granted by the parent cgroup or the
1822 parent cgroup is no longer a partition root itself. In this
1823 case, it is not a real partition even though the restriction
1824 of the first partition root condition above will still apply.
1825 The cpu affinity of all the tasks in the cgroup will then be
1826 associated with CPUs in the nearest ancestor partition.
1828 An invalid partition root can be transitioned back to a
1829 real partition root if at least one of the requested CPUs
1830 can now be granted by its parent. In this case, the cpu
1831 affinity of all the tasks in the formerly invalid partition
1832 will be associated to the CPUs of the newly formed partition.
1833 Changing the partition state of an invalid partition root to
1834 "member" is always allowed even if child cpusets are present.
Device controller
-----------------

The device controller manages access to device files.  It includes both
1841 creation of new device files (using mknod), and access to the
1842 existing device files.
1844 Cgroup v2 device controller has no interface files and is implemented
1845 on top of cgroup BPF. To control access to device files, a user may
1846 create bpf programs of the BPF_CGROUP_DEVICE type and attach them
1847 to cgroups. On an attempt to access a device file, corresponding
1848 BPF programs will be executed, and depending on the return value
1849 the attempt will succeed or fail with -EPERM.
1851 A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
1852 structure, which describes the device access attempt: access type
1853 (mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM, otherwise
it succeeds.
1857 An example of BPF_CGROUP_DEVICE program may be found in the kernel
1858 source tree in the tools/testing/selftests/bpf/dev_cgroup.c file.
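One possible way to load and attach such a program is with bpftool;
the object file name, pin path and cgroup path below are illustrative,
and the exact invocation may vary by bpftool version::

  # bpftool prog load dev_cgroup.o /sys/fs/bpf/dev_cgroup
  # bpftool cgroup attach /sys/fs/cgroup/A device pinned /sys/fs/bpf/dev_cgroup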
1864 The "rdma" controller regulates the distribution and accounting of
1867 RDMA Interface Files
1868 ~~~~~~~~~~~~~~~~~~~~
  rdma.max
        A read-write nested-keyed file that exists for all the cgroups
        except root that describes current configured resource limit
        for a RDMA/IB device.
1875 Lines are keyed by device name and are not ordered.
1876 Each line contains space separated resource name and its configured
1877 limit that can be distributed.
1879 The following nested keys are defined.
1881 ========== =============================
1882 hca_handle Maximum number of HCA Handles
1883 hca_object Maximum number of HCA Objects
1884 ========== =============================
1886 An example for mlx4 and ocrdma device follows::
1888 mlx4_0 hca_handle=2 hca_object=2000
1889 ocrdma1 hca_handle=3 hca_object=max
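Limits are configured by writing the same nested-key format back to
the file; a sketch with illustrative device name and values::

  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max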
  rdma.current
        A read-only file that describes current resource usage.
        It exists for all the cgroups except root.
1895 An example for mlx4 and ocrdma device follows::
1897 mlx4_0 hca_handle=1 hca_object=20
1898 ocrdma1 hca_handle=1 hca_object=23
Misc
----

perf_event
~~~~~~~~~~

perf_event controller, if not mounted on a legacy hierarchy, is
1908 automatically enabled on the v2 hierarchy so that perf events can
1909 always be filtered by cgroup v2 path. The controller can still be
1910 moved to a legacy hierarchy after v2 hierarchy is populated.
1913 Non-normative information
1914 -------------------------
1916 This section contains information that isn't considered to be a part of
1917 the stable kernel API and so is subject to change.
1920 CPU controller root cgroup process behaviour
1921 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup.  This child cgroup weight is dependent on its thread nice
level.
1928 For details of this mapping see sched_prio_to_weight array in
1929 kernel/sched/core.c file (values from this array should be scaled
1930 appropriately so the neutral - nice 0 - value is 100 instead of 1024).
1933 IO controller root cgroup process behaviour
1934 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1936 Root cgroup processes are hosted in an implicit leaf child node.
1937 When distributing IO resources this implicit child node is taken into
1938 account as if it was a normal child cgroup of the root cgroup with a
1939 weight value of 200.
Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
1949 "/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
1950 flag can be used with clone(2) and unshare(2) to create a new cgroup
1951 namespace. The process running inside the cgroup namespace will have
1952 its "/proc/$PID/cgroup" output restricted to cgroupns root. The
1953 cgroupns root is the cgroup of the process at the time of creation of
1954 the cgroup namespace.
1956 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
1957 complete path of the cgroup of a process. In a container setup where
1958 a set of cgroups and namespaces are intended to isolate processes the
1959 "/proc/$PID/cgroup" file may leak potential system level information
to the isolated processes.  For example::
1962 # cat /proc/self/cgroup
1963 0::/batchjobs/container_id1
1965 The path '/batchjobs/container_id1' can be considered as system-data
1966 and undesirable to expose to the isolated processes. cgroup namespace
1967 can be used to restrict visibility of this path. For example, before
1968 creating a cgroup namespace, one would see::
1970 # ls -l /proc/self/ns/cgroup
1971 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
1972 # cat /proc/self/cgroup
1973 0::/batchjobs/container_id1
1975 After unsharing a new namespace, the view changes::
1977 # ls -l /proc/self/ns/cgroup
1978 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/
1982 When some thread from a multi-threaded process unshares its cgroup
1983 namespace, the new cgroupns gets applied to the entire process (all
1984 the threads). This is natural for the v2 hierarchy; however, for the
1985 legacy hierarchies, this may be unexpected.
1987 A cgroup namespace is alive as long as there are processes inside or
1988 mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.
The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
1997 process calling unshare(2) is running. For example, if a process in
1998 /batchjobs/container_id1 cgroup calls unshare, cgroup
1999 /batchjobs/container_id1 becomes the cgroupns root. For the
2000 init_cgroup_ns, this is the real root ('/') cgroup.
2002 The cgroupns root cgroup does not change even if the namespace creator
2003 process later moves to a different cgroup::
2005 # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
2009 # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1
2013 Each process gets its namespace-specific view of "/proc/$PID/cgroup"
2015 Processes running inside the cgroup namespace will be able to see
2016 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
2021 # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
From the initial cgroup namespace, the real cgroup path will be
visible::
2028 $ cat /proc/7353/cgroup
2029 0::/batchjobs/container_id1/sub_cgrp_1
2031 From a sibling cgroup namespace (that is, a namespace rooted at a
2032 different cgroup), the cgroup path relative to its own cgroup
2033 namespace root will be shown. For instance, if PID 7353's cgroup
2034 namespace root is at '/batchjobs/container_id2', then it will see::
2036 # cat /proc/7353/cgroup
2037 0::/../container_id2/sub_cgrp_1
Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
2043 Migration and setns(2)
2044 ----------------------
2046 Processes inside a cgroup namespace can move into and out of the
2047 namespace root if they have proper access to external cgroups. For
2048 example, from inside a namespace with cgroupns root at
2049 /batchjobs/container_id1, and assuming that the global hierarchy is
2050 still accessible inside cgroupns::
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
2054 # echo 7353 > batchjobs/container_id2/cgroup.procs
2055 # cat /proc/7353/cgroup
2056 0::/../container_id2
2058 Note that this kind of setup is not encouraged. A task inside cgroup
2059 namespace should only be exposed to its own cgroupns hierarchy.
2061 setns(2) to another cgroup namespace is allowed when:
2063 (a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns
No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching
process under the target cgroup namespace root.
2072 Interaction with Other Namespaces
2073 ---------------------------------
2075 Namespace specific cgroup hierarchy can be mounted by a process
2076 running inside a non-init cgroup namespace::
2078 # mount -t cgroup2 none $MOUNT_POINT
2080 This will mount the unified cgroup hierarchy with cgroupns root as the
filesystem root.  The process needs CAP_SYS_ADMIN against its user and
mount namespaces.
2084 The virtualization of /proc/self/cgroup file combined with restricting
2085 the view of cgroup hierarchy by namespace-private cgroupfs mount
2086 provides a properly isolated cgroup view inside the container.
2089 Information on Kernel Programming
2090 =================================
2092 This section contains kernel programming information in the areas
2093 where interacting with cgroup is necessary. cgroup core and
2094 controllers are not covered.
2097 Filesystem Support for Writeback
2098 --------------------------------
2100 A filesystem can support cgroup writeback by updating
2101 address_space_operations->writepage[s]() to annotate bio's using the
2102 following two functions.
2104 wbc_init_bio(@wbc, @bio)
2105 Should be called for each bio carrying writeback data and
2106 associates the bio with the inode's owner cgroup and the
corresponding request queue.  This must be called after
a queue (device) has been associated with the bio and
before submission.
2111 wbc_account_io(@wbc, @page, @bytes)
2112 Should be called for each data segment being written out.
2113 While this function doesn't care exactly when it's called
2114 during the writeback session, it's the easiest and most
2115 natural to call it as data segments are added to a bio.
2117 With writeback bio's annotated, cgroup support can be enabled per
2118 super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
2119 selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.
2123 wbc_init_bio() binds the specified bio to its cgroup. Depending on
2124 the configuration, the bio may be executed at a lower priority and if
2125 the writeback session is holding shared resources, e.g. a journal
2126 entry, may lead to priority inversion. There is no one easy solution
2127 for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
2132 Deprecated v1 Core Features
2133 ===========================
2135 - Multiple hierarchies including named ones are not supported.
- None of the v1 mount options is supported.
2139 - The "tasks" file is removed and "cgroup.procs" is not sorted.
2141 - "cgroup.clone_children" is removed.
- /proc/cgroups is meaningless for v2.  Use the "cgroup.controllers"
  file at the root instead, as shown below.
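An illustration, assuming cgroup2 is mounted at /sys/fs/cgroup; the
listed controllers vary by system and configuration::

  # cat /sys/fs/cgroup/cgroup.controllers
  cpu io memory pids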
2147 Issues with v1 and Rationales for v2
2148 ====================================
2150 Multiple Hierarchies
2151 --------------------
2153 cgroup v1 allowed an arbitrary number of hierarchies and each
2154 hierarchy could host any number of controllers. While this seemed to
2155 provide a high level of flexibility, it wasn't useful in practice.
2157 For example, as there is only one instance of each controller, utility
2158 type controllers such as freezer which can be useful in all
2159 hierarchies could only be used in one. The issue is exacerbated by
2160 the fact that controllers couldn't be moved to another hierarchy once
2161 hierarchies were populated. Another issue was that all controllers
2162 bound to a hierarchy were forced to have exactly the same view of the
2163 hierarchy. It wasn't possible to vary the granularity depending on
2164 the specific controller.
2166 In practice, these issues heavily limited which controllers could be
2167 put on the same hierarchy and most configurations resorted to putting
2168 each controller on its own hierarchy. Only closely related ones, such
2169 as the cpu and cpuacct controllers, made sense to be put on the same
2170 hierarchy. This often meant that userland ended up managing multiple
2171 similar hierarchies repeating the same steps on each hierarchy
2172 whenever a hierarchy management operation was necessary.
2174 Furthermore, support for multiple hierarchies came at a steep cost.
2175 It greatly complicated cgroup core implementation but more importantly
2176 the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
2179 There was no limit on how many hierarchies there might be, which meant
2180 that a thread's cgroup membership couldn't be described in finite
2181 length. The key might contain any number of entries and was unlimited
2182 in length, which made it highly awkward to manipulate and led to
2183 addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of proliferating number
of hierarchies.
2187 Also, as a controller couldn't have any expectation regarding the
2188 topologies of hierarchies other controllers might be on, each
2189 controller had to assume that all other controllers were attached to
2190 completely orthogonal hierarchies. This made it impossible, or at
2191 least very cumbersome, for controllers to cooperate with each other.
2193 In most use cases, putting controllers on hierarchies which are
2194 completely orthogonal to each other isn't necessary. What usually is
2195 called for is the ability to have differing levels of granularity
2196 depending on the specific controller. In other words, hierarchy may
2197 be collapsed from leaf towards root when viewed from specific
2198 controllers. For example, a given configuration might not care about
2199 how memory is distributed beyond a certain level while still wanting
2200 to control how CPU cycles are distributed.
Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
2207 This didn't make sense for some controllers and those controllers
2208 ended up implementing different ways to ignore such situations but
2209 much more importantly it blurred the line between API exposed to
2210 individual applications and system management interface.
2212 Generally, in-process knowledge is available only to the process
2213 itself; thus, unlike service-level organization of processes,
2214 categorizing threads of a process requires active participation from
2215 the application which owns the target process.
2217 cgroup v1 had an ambiguously defined delegation model which got abused
2218 in combination with thread granularity. cgroups were delegated to
2219 individual applications so that they can create and manage their own
2220 sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API
exposed to lay programs.
2225 exposed this way. For a process to access its own knobs, it has to
2226 extract the path on the target hierarchy from /proc/self/cgroup,
2227 construct the path by appending the name of the knob to the path, open
2228 and then read and/or write to it. This is not only extremely clunky
and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing can guarantee
that the process would actually be operating on its own sub-hierarchy.
2233 cgroup controllers implemented a number of knobs which would never be
2234 accepted as public APIs because they were just adding control knobs to
2235 system-management pseudo filesystem. cgroup ended up with interface
2236 knobs which were not properly abstracted or refined and directly
2237 revealed kernel internal details. These knobs got exposed to
2238 individual applications through the ill-defined delegation mechanism
2239 effectively abusing cgroup as a shortcut to implementing public APIs
2240 without going through the required scrutiny.
This was painful for both userland and kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.
2247 Competition Between Inner Nodes and Threads
2248 -------------------------------------------
2250 cgroup v1 allowed threads to be in any cgroups which created an
2251 interesting problem where threads belonging to a parent cgroup and its
2252 children cgroups competed for resources. This was nasty as two
2253 different types of entities competed and there was no obvious way to
2254 settle it. Different controllers did different things.
2256 The cpu controller considered threads and cgroups as equivalents and
2257 mapped nice levels to cgroup weights. This worked for some cases but
2258 fell flat when children wanted to be allocated specific ratios of CPU
2259 cycles and the number of internal threads fluctuated - the ratios
2260 constantly changed as the number of competing entities fluctuated.
2261 There also were other issues. The mapping from nice level to weight
2262 wasn't obvious or universal, and there were various other knobs which
2263 simply weren't available for threads.
2265 The io controller implicitly created a hidden leaf node for each
2266 cgroup to host the threads. The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.
2273 The memory controller didn't have a way to control what happened
2274 between internal tasks and child cgroups and the behavior was not
2275 clearly defined. There were attempts to add ad-hoc behaviors and
2276 knobs to tailor the behavior to specific workloads which would have
2277 led to problems extremely difficult to resolve in the long term.
2279 Multiple controllers struggled with internal tasks and came up with
2280 different ways to deal with it; unfortunately, all the approaches were
2281 severely flawed and, furthermore, the widely different behaviors
2282 made cgroup as a whole highly inconsistent.
This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.
2288 Other Interface Issues
2289 ----------------------
2291 cgroup v1 grew without oversight and developed a large number of
2292 idiosyncrasies and inconsistencies. One issue on the cgroup core side
2293 was how an empty cgroup was notified - a userland helper binary was
2294 forked and executed for each event. The event delivery wasn't
2295 recursive or delegatable. The limitations of the mechanism also led
to in-kernel event delivery filtering mechanism further complicating
the interface.
2299 Controller interfaces were problematic too. An extreme example is
2300 controllers completely ignoring hierarchical organization and treating
2301 all cgroups as if they were all located directly under the root
2302 cgroup. Some controllers exposed a large amount of inconsistent
2303 implementation details to userland.
2305 There also was no consistency across controllers. When a new cgroup
2306 was created, some controllers defaulted to not imposing extra
2307 restrictions while others disallowed any resource usage until
2308 explicitly configured. Configuration knobs for the same type of
2309 control used widely differing naming schemes and formats. Statistics
2310 and information knobs were named arbitrarily and used different
2311 formats and units even in the same controller.
2313 cgroup v2 establishes common conventions where appropriate and updates
2314 controllers so that they expose minimal and consistent interfaces.
2317 Controller Issues and Remedies
2318 ------------------------------
Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
2324 that is per default unset. As a result, the set of cgroups that
2325 global reclaim prefers is opt-in, rather than opt-out. The costs for
2326 optimizing these mostly negative lookups are so high that the
2327 implementation, despite its enormous size, does not even provide the
2328 basic desirable behavior. First off, the soft limit has no
2329 hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are located
2331 in the hierarchy. This makes subtree delegation impossible. Second,
2332 the soft limit reclaim pass is so aggressive that it not just
2333 introduces high allocation latencies into the system, but also impacts
2334 system performance due to overreclaim, to the point where the feature
2335 becomes self-defeating.
2337 The memory.low boundary on the other hand is a top-down allocated
2338 reserve. A cgroup enjoys reclaim protection when it's within its low,
2339 which makes delegation of subtrees possible.
2341 The original high boundary, the hard limit, is defined as a strict
2342 limit that can not budge, even if the OOM killer has to be called.
2343 But this generally goes against the goal of making the most out of the
2344 available memory. The memory consumption of workloads varies during
2345 runtime, and that requires users to overcommit. But doing that with a
2346 strict upper limit requires either a fairly accurate prediction of the
2347 working set size or adding slack to the limit. Since working set size
2348 estimation is hard and error prone, and getting it wrong results in
2349 OOM kills, most users tend to err on the side of a looser limit and
2350 end up wasting precious resources.
2352 The memory.high boundary on the other hand can be set much more
2353 conservatively. When hit, it throttles allocations by forcing them
2354 into direct reclaim to work off the excess, but it never invokes the
2355 OOM killer. As a result, a high boundary that is chosen too
2356 aggressively will not terminate the processes, but instead it will
2357 lead to gradual performance degradation. The user can monitor this
2358 and make corrections until the minimal memory footprint that still
2359 gives acceptable performance is found.
2361 In extreme cases, with many concurrent allocations and a complete
2362 breakdown of reclaim progress within the group, the high boundary can
2363 be exceeded. But even then it's mostly better to satisfy the
2364 allocation from the slack available in other groups or the rest of the
2365 system than killing the group. Otherwise, memory.max is there to
2366 limit this type of spillover and ultimately contain buggy or even
2367 malicious applications.
2369 Setting the original memory.limit_in_bytes below the current usage was
2370 subject to a race condition, where concurrent charges could cause the
2371 limit setting to fail. memory.max on the other hand will first set the
2372 limit to prevent new charges, and then reclaim and OOM kill until the
2373 new limit is met - or the task writing to memory.max is killed.
2375 The combined memory+swap accounting and limiting is replaced by real
2376 control over swap space.
2378 The main argument for a combined memory+swap facility in the original
2379 cgroup design was that global or parental pressure would always be
2380 able to swap all anonymous memory of a child group, regardless of the
2381 child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing their
anonymous memory in a tight loop - and an admin can not assume full
2384 swappability when overcommitting untrusted jobs.
2386 For trusted jobs, on the other hand, a combined counter is not an
2387 intuitive userspace interface, and it flies in the face of the idea
2388 that cgroup controllers should account and limit specific physical
2389 resources. Swap space is a resource like all others in the system,
2390 and that's why unified hierarchy allows distributing it separately.