License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them (even
if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                             # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                         270
GPL-2.0+ WITH Linux-syscall-note                        169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
LGPL-2.1+ WITH Linux-syscall-note                        15
GPL-1.0+ WITH Linux-syscall-note                         14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
LGPL-2.0+ WITH Linux-syscall-note                         4
LGPL-2.1 WITH Linux-syscall-note                          3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
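For reference, the identifier goes on the first line of each file in whatever
comment style that file type uses; a sketch of the common forms (as later
codified in the kernel's license-rules documentation) looks like:

  // SPDX-License-Identifier: GPL-2.0        (C source files)
  /* SPDX-License-Identifier: GPL-2.0 */     (C headers, assembly)
  # SPDX-License-Identifier: GPL-2.0         (Makefiles, Kconfig, scripts)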
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 10:07:57 -04:00
|
|
|
# SPDX-License-Identifier: GPL-2.0
|
2005-04-16 18:20:36 -04:00
|
|
|
#
|
|
|
|
# Makefile for some libs needed in the kernel.
|
|
|
|
#
|
|
|
|
|
2020-07-07 05:21:16 -04:00
|
|
|
ccflags-remove-$(CONFIG_FUNCTION_TRACER) += $(CC_FLAGS_FTRACE)
|
2008-07-17 11:40:48 -04:00
|
|
|
|
kernel: add kcov code coverage
kcov provides code coverage collection for coverage-guided fuzzing
(randomized testing). Coverage-guided fuzzing is a testing technique
that uses coverage feedback to determine new interesting inputs to a
system. A notable user-space example is AFL
(http://lcamtuf.coredump.cx/afl/). However, this technique is not
widely used for kernel testing due to missing compiler and kernel
support.
kcov does not aim to collect as much coverage as possible. It aims to
collect more or less stable coverage that is a function of syscall inputs.
To achieve this goal it does not collect coverage in soft/hard
interrupts, and instrumentation of some inherently non-deterministic or
non-interesting parts of the kernel is disabled (e.g. scheduler, locking).
Currently there is a single coverage collection mode (tracing), but the
API anticipates additional collection modes. Initially I also
implemented a second mode which exposes coverage in a fixed-size hash
table of counters (what Quentin used in his original patch). I've
dropped the second mode for simplicity.
This patch adds the necessary support on the kernel side. The complementary
compiler support was added in gcc revision 231296.
We've used this support to build syzkaller system call fuzzer, which has
found 90 kernel bugs in just 2 months:
https://github.com/google/syzkaller/wiki/Found-Bugs
We've also found 30+ bugs in our internal systems with syzkaller.
Another (yet unexplored) direction where kcov coverage would greatly
help is more traditional "blob mutation". For example, mounting a
random blob as a filesystem, or receiving a random blob over wire.
Why not gcov? A typical fuzzing loop looks as follows: (1) reset
coverage, (2) execute a bit of code, (3) collect coverage, repeat. A
typical coverage can be just a dozen basic blocks (e.g. an invalid
input). In such a context gcov becomes prohibitively expensive, as the
reset/collect coverage steps depend on the total number of basic
blocks/edges in the program (in the case of the kernel it is about 2M).
The cost of kcov depends only on the number of executed basic blocks/edges.
On top of that, the kernel requires per-thread coverage because there are always
background threads and unrelated processes that also produce coverage.
With inlined gcov instrumentation per-thread coverage is not possible.
kcov exposes kernel PCs and control flow to user-space which is
insecure. But debugfs should not be mapped as user accessible.
Based on a patch by Quentin Casasnovas.
[akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
[akpm@linux-foundation.org: unbreak allmodconfig]
[akpm@linux-foundation.org: follow x86 Makefile layout standards]
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: syzkaller <syzkaller@googlegroups.com>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Tavis Ormandy <taviso@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Kees Cook <keescook@google.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: David Drysdale <drysdale@google.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
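As a rough illustration of how a user-space fuzzer consumes this interface (a
sketch based on the kcov debugfs API added here; the constants are taken from
the uapi <linux/kcov.h> header, error handling omitted):

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/kcov.h>

  #define COVER_SIZE (64 << 10)   /* buffer size, in unsigned longs */

  int main(void)
  {
      int fd = open("/sys/kernel/debug/kcov", O_RDWR);
      unsigned long *cover, n, i;

      ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
      cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC);  /* collect PCs for this task */
      __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
      read(-1, NULL, 0);                      /* the syscall under test */
      n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
      for (i = 0; i < n; i++)
          printf("0x%lx\n", cover[i + 1]);    /* kernel PCs that were hit */
      ioctl(fd, KCOV_DISABLE, 0);
      munmap(cover, COVER_SIZE * sizeof(unsigned long));
      close(fd);
      return 0;
  }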
2016-03-22 17:27:30 -04:00
|
|
|
# These files are disabled because they produce lots of non-interesting and/or
|
|
|
|
# flaky coverage that is not a function of syscall inputs. For example,
|
|
|
|
# rbtree can be global and individual rotations don't correlate with inputs.
|
|
|
|
KCOV_INSTRUMENT_string.o := n
|
|
|
|
KCOV_INSTRUMENT_rbtree.o := n
|
|
|
|
KCOV_INSTRUMENT_list_debug.o := n
|
|
|
|
KCOV_INSTRUMENT_debugobjects.o := n
|
|
|
|
KCOV_INSTRUMENT_dynamic_debug.o := n
|
2020-01-31 01:17:35 -05:00
|
|
|
KCOV_INSTRUMENT_fault-inject.o := n
|
kernel: add kcov code coverage
2016-03-22 17:27:30 -04:00
|
|
|
|
2020-08-19 10:08:16 -04:00
|
|
|
# string.o implements standard library functions like memset/memcpy etc.
|
|
|
|
# Use -ffreestanding to ensure that the compiler does not try to "optimize"
|
|
|
|
# them into calls to themselves.
|
|
|
|
CFLAGS_string.o := -ffreestanding
|
|
|
|
|
2019-04-29 18:22:58 -04:00
|
|
|
# Early boot use of cmdline, don't instrument it
|
|
|
|
ifdef CONFIG_AMD_MEM_ENCRYPT
|
|
|
|
KASAN_SANITIZE_string.o := n
|
|
|
|
|
2020-08-19 10:08:16 -04:00
|
|
|
CFLAGS_string.o += -fno-stack-protector
|
2019-04-29 18:22:58 -04:00
|
|
|
endif
|
|
|
|
|
2007-10-07 03:24:34 -04:00
|
|
|
lib-y := ctype.o string.o vsprintf.o cmdline.o \
|
2017-11-07 16:30:10 -05:00
|
|
|
rbtree.o radix-tree.o timerqueue.o xarray.o \
|
Maple Tree: add new data structure
Patch series "Introducing the Maple Tree"
The maple tree is an RCU-safe range based B-tree designed to use modern
processor cache efficiently. There are a number of places in the kernel
that a non-overlapping range-based tree would be beneficial, especially
one with a simple interface. If you use an rbtree with other data
structures to improve performance or an interval tree to track
non-overlapping ranges, then this is for you.
The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf
nodes. With the increased branching factor, it is significantly shorter
than the rbtree so it has fewer cache misses. The removal of the linked
list between subsequent entries also reduces the cache misses and the need
to pull in the previous and next VMA during many tree alterations.
The first user that is covered in this patch set is the vm_area_struct,
where three data structures are replaced by the maple tree: the augmented
rbtree, the vma cache, and the linked list of VMAs in the mm_struct. The
long term goal is to reduce or remove the mmap_lock contention.
The plan is to get to the point where we use the maple tree in RCU mode.
Readers will not block for writers. A single write operation will be
allowed at a time. A reader re-walks if stale data is encountered. VMAs
would be RCU enabled and this mode would be entered once multiple tasks
are using the mm_struct.
Davidlohr said
: Yes I like the maple tree, and at this stage I don't think we can ask for
: more from this series wrt the MM - albeit there seems to still be some
: folks reporting breakage. Fundamentally I see Liam's work to (re)move
: complexity out of the MM (not to say that the actual maple tree is not
: complex) by consolidating the three complementary data structures very
: much worth it considering performance does not take a hit. This was very
: much a turn off with the range locking approach, which worst case scenario
: incurred in prohibitive overhead. Also as Liam and Matthew have
: mentioned, RCU opens up a lot of nice performance opportunities, and in
: addition academia[1] has shown outstanding scalability of address spaces
: with the foundation of replacing the locked rbtree with RCU aware trees.
Similar work has been discovered in the academic press:
https://pdos.csail.mit.edu/papers/rcuvm:asplos12.pdf
Sheer coincidence. We designed our tree with the intention of solving the
hardest problem first. Upon settling on a b-tree variant and a rough
outline, we researched ranged based b-trees and RCU b-trees and did find
that article. So it was nice to find reassurances that we were on the
right path, but our design choice of using ranges made that paper unusable
for us.
This patch (of 70):
The maple tree is an RCU-safe range based B-tree designed to use modern
processor cache efficiently. There are a number of places in the kernel
that a non-overlapping range-based tree would be beneficial, especially
one with a simple interface. If you use an rbtree with other data
structures to improve performance or an interval tree to track
non-overlapping ranges, then this is for you.
The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf
nodes. With the increased branching factor, it is significantly shorter
than the rbtree so it has fewer cache misses. The removal of the linked
list between subsequent entries also reduces the cache misses and the need
to pull in the previous and next VMA during many tree alterations.
The first user that is covered in this patch set is the vm_area_struct,
where three data structures are replaced by the maple tree: the augmented
rbtree, the vma cache, and the linked list of VMAs in the mm_struct. The
long term goal is to reduce or remove the mmap_lock contention.
The plan is to get to the point where we use the maple tree in RCU mode.
Readers will not block for writers. A single write operation will be
allowed at a time. A reader re-walks if stale data is encountered. VMAs
would be RCU enabled and this mode would be entered once multiple tasks
are using the mm_struct.
There are additional BUG_ON() calls added within the tree, most of which
are in debug code. These will be replaced with WARN_ON() calls in the
future. There are also additional BUG_ON() calls within the code which
will also be reduced in number at a later date. These exist to catch
things such as out-of-range accesses which would crash anyway.
Link: https://lkml.kernel.org/r/20220906194824.2110408-1-Liam.Howlett@oracle.com
Link: https://lkml.kernel.org/r/20220906194824.2110408-2-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Tested-by: David Howells <dhowells@redhat.com>
Tested-by: Sven Schnelle <svens@linux.ibm.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
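As a minimal sketch of that simple interface (using the basic mtree_* calls
from include/linux/maple_tree.h; the stored object here is illustrative):

  #include <linux/maple_tree.h>
  #include <linux/errno.h>
  #include <linux/gfp.h>

  static DEFINE_MTREE(mt);          /* statically initialized, internally locked */
  static int item = 42;             /* illustrative entry to store */

  static int maple_demo(void)
  {
      void *entry;
      int ret;

      /* Associate &item with every index in the range 10..20. */
      ret = mtree_store_range(&mt, 10, 20, &item, GFP_KERNEL);
      if (ret)
          return ret;

      entry = mtree_load(&mt, 15);  /* any index in the range finds it */
      mtree_erase(&mt, 15);         /* drops the whole range containing 15 */
      mtree_destroy(&mt);
      return entry == &item ? 0 : -ENOENT;
  }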
2022-09-06 15:48:39 -04:00
|
|
|
maple_tree.o idr.o extable.o irq_regs.o argv_split.o \
|
2023-05-16 02:38:12 -04:00
|
|
|
flex_proportions.o ratelimit.o \
|
2012-12-14 13:03:23 -05:00
|
|
|
is_single_threaded.o plist.o decompress.o kobject_uevent.o \
|
alpha: Remove custom dec_and_lock() implementation
Alpha provides a custom implementation of dec_and_lock(). The function
is split into two parts:
- atomic_add_unless() + return 0 (fast path in assembly)
- remaining part including locking (slow path in C)
Comparing the result of the alpha implementation with the generic
implementation compiled by gcc it looks like the fast path is optimized
by avoiding a stack frame (and reloading the GP), register store and all
this. This is only done in the slowpath.
After marking the slowpath (atomic_dec_and_lock_1()) as "noinline" and
doing the slowpath in C (the atomic_add_unless(atomic, -1, 1) part) I
noticed differences in the resulting assembly:
- the GP is still reloaded
- atomic_add_unless() adds more memory barriers compared to the custom
assembly
- the custom assembly here does "load, sub, beq" while
atomic_add_unless() does "load, cmpeq, add, bne". This is okay because
it compares against zero after subtraction while the generic code
compares against 1 before.
I'm not sure if avoiding the stack frame (and GP reloading) brings a lot
in terms of performance. Regarding the different barriers, Peter
Zijlstra says:
|refcount decrement needs to be a RELEASE operation, such that all the
|load/stores to the object happen before we decrement the refcount.
|
|Otherwise things like:
|
| obj->foo = 5;
| refcnt_dec(&obj->ref);
|
|can be re-ordered, which then allows fun scenarios like:
|
| CPU0 CPU1
|
| refcnt_dec(&obj->ref);
| if (dec_and_test(&obj->ref))
| free(obj);
| obj->foo = 5; // oops UaF
|
|
|This means (for alpha) that there should be a memory barrier _before_
|the decrement, however the dec_and_lock asm thing only has one _after_,
|which, per the above, is too late.
|
|The generic version using add_unless will result in memory barrier
|before and after (because that is the rule for atomic ops with a return
|value) which is strictly too many barriers for the refcount story, but
|who knows what other ordering requirements code has.
Remove the custom alpha implementation of dec_and_lock() and if it is an
issue (performance wise) then the fast path could still be inlined.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: linux-alpha@vger.kernel.org
Link: https://lkml.kernel.org/r/20180606115918.GG12198@hirez.programming.kicks-ass.net
Link: https://lkml.kernel.org/r20180612161621.22645-2-bigeasy@linutronix.de
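For context, the generic helper that Alpha now uses is typically called like
this (the object, list and lock below are illustrative, not from this patch):

  #include <linux/atomic.h>
  #include <linux/list.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>

  struct obj {
      atomic_t refcnt;
      struct list_head node;
  };

  static LIST_HEAD(obj_list);
  static DEFINE_SPINLOCK(obj_list_lock);

  static void put_obj(struct obj *obj)
  {
      /* Decrement; the lock is taken only if the count actually hit zero. */
      if (atomic_dec_and_lock(&obj->refcnt, &obj_list_lock)) {
          list_del(&obj->node);     /* still protected by obj_list_lock */
          spin_unlock(&obj_list_lock);
          kfree(obj);
      }
  }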
2018-06-12 12:16:19 -04:00
|
|
|
earlycpio.o seq_buf.o siphash.o dec_and_lock.o \
|
2022-07-25 12:39:17 -04:00
|
|
|
nmi_backtrace.o win_minmax.o memcat_p.o \
|
2023-10-17 09:56:50 -04:00
|
|
|
buildid.o objpool.o
|
[PATCH] Add initial implementation of klist helpers.
This klist interface provides a couple of structures that wrap around
struct list_head to provide explicit list "head" (struct klist) and
list "node" (struct klist_node) objects. For struct klist, a spinlock
is included that protects access to the actual list itself. struct
klist_node provides a pointer to the klist that owns it and a kref
reference count that indicates the number of current users of that node
in the list.
The entire point is to provide an interface for iterating over a list
that is safe and allows for modification of the list during the
iteration (e.g. insertion and removal), including modification of the
current node on the list.
It works using a 3rd object type - struct klist_iter - that is declared
and initialized before an iteration. klist_next() is used to acquire the
next element in the list. It returns NULL if there are no more items.
Internally, that routine takes the klist's lock, decrements the reference
count of the previous klist_node and increments the count of the next
klist_node. It then drops the lock and returns.
There are primitives for adding and removing nodes to/from a klist.
When deleting, klist_del() will simply decrement the reference count.
Only when the count goes to 0 is the node removed from the list.
klist_remove() will try to delete the node from the list and block
until it is actually removed. This is useful for objects (like devices)
that have been removed from the system and must be freed (but must wait
until all accessors have finished).
Signed-off-by: Patrick Mochel <mochel@digitalimplant.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
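A minimal sketch of the interface described above, written against the klist
API as it appears in current kernels (the containing struct is illustrative):

  #include <linux/klist.h>
  #include <linux/kernel.h>
  #include <linux/printk.h>

  struct item {
      struct klist_node n;          /* embedded node, refcounted by klist */
      int value;
  };

  static struct klist items;

  static void klist_demo(struct item *it)
  {
      struct klist_iter iter;
      struct klist_node *kn;

      klist_init(&items, NULL, NULL);       /* no get/put callbacks */
      klist_add_tail(&it->n, &items);

      klist_iter_init(&items, &iter);
      /* klist_next() pins the returned node, so the list may be modified
       * (nodes added or removed) while we are walking it. */
      while ((kn = klist_next(&iter))) {
          struct item *cur = container_of(kn, struct item, n);
          pr_info("item value %d\n", cur->value);
      }
      klist_iter_exit(&iter);
  }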
2005-03-21 14:45:16 -05:00
|
|
|
|
2018-02-13 02:28:34 -05:00
|
|
|
lib-$(CONFIG_PRINTK) += dump_stack.o
|
2022-08-09 13:36:34 -04:00
|
|
|
lib-$(CONFIG_SMP) += cpumask.o
|
2006-03-25 06:08:08 -05:00
|
|
|
|
2011-12-13 04:36:20 -05:00
|
|
|
lib-y += kobject.o klist.o
|
2013-09-02 14:58:20 -04:00
|
|
|
obj-y += lockref.o
|
2005-04-16 18:20:36 -04:00
|
|
|
|
2019-05-14 18:43:05 -04:00
|
|
|
obj-y += bcd.o sort.o parser.o debug_locks.o random32.o \
|
2015-02-12 18:02:21 -05:00
|
|
|
bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
|
2019-05-14 18:43:05 -04:00
|
|
|
list_sort.o uuid.o iov_iter.o clz_ctz.o \
|
2023-09-11 10:39:43 -04:00
|
|
|
bsearch.o find_bit.o llist.o lwq.o memweight.o kfifo.o \
|
2022-06-27 05:51:59 -04:00
|
|
|
percpu-refcount.o rhashtable.o base64.o \
|
2023-03-23 16:55:31 -04:00
|
|
|
once.o refcount.o rcuref.o usercopy.o errseq.o bucket_locks.o \
|
2023-10-07 19:35:10 -04:00
|
|
|
generic-radix-tree.o bitmap-str.o
|
2024-03-01 15:27:30 -05:00
|
|
|
obj-$(CONFIG_STRING_KUNIT_TEST) += string_kunit.o
|
2013-04-30 18:27:30 -04:00
|
|
|
obj-y += string_helpers.o
|
2024-03-01 15:27:31 -05:00
|
|
|
obj-$(CONFIG_STRING_HELPERS_KUNIT_TEST) += string_helpers_kunit.o
|
2015-02-12 18:02:21 -05:00
|
|
|
obj-y += hexdump.o
|
2016-01-20 17:58:44 -05:00
|
|
|
obj-$(CONFIG_TEST_HEXDUMP) += test_hexdump.o
|
2011-03-22 19:34:40 -04:00
|
|
|
obj-y += kstrtox.o
|
2018-02-06 18:38:27 -05:00
|
|
|
obj-$(CONFIG_FIND_BIT_BENCHMARK) += find_bit_benchmark.o
|
2014-05-08 17:10:52 -04:00
|
|
|
obj-$(CONFIG_TEST_BPF) += test_bpf.o
|
2022-12-08 09:31:28 -05:00
|
|
|
test_dhry-objs := dhry_1.o dhry_2.o dhry_run.o
|
|
|
|
obj-$(CONFIG_TEST_DHRY) += test_dhry.o
|
2014-07-14 17:38:12 -04:00
|
|
|
obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
|
2020-06-04 19:50:27 -04:00
|
|
|
obj-$(CONFIG_TEST_BITOPS) += test_bitops.o
|
|
|
|
CFLAGS_test_bitops.o += -Werror
|
2022-08-23 02:12:21 -04:00
|
|
|
obj-$(CONFIG_CPUMASK_KUNIT_TEST) += cpumask_kunit.o
|
2017-07-12 17:33:43 -04:00
|
|
|
obj-$(CONFIG_TEST_SYSCTL) += test_sysctl.o
|
2023-09-08 12:03:21 -04:00
|
|
|
obj-$(CONFIG_TEST_IOV_ITER) += kunit_iov_iter.o
|
2022-01-19 21:09:15 -05:00
|
|
|
obj-$(CONFIG_HASH_KUNIT_TEST) += test_hash.o
|
2018-06-18 16:59:29 -04:00
|
|
|
obj-$(CONFIG_TEST_IDA) += test_ida.o
|
2018-04-10 19:32:58 -04:00
|
|
|
obj-$(CONFIG_TEST_UBSAN) += test_ubsan.o
|
2018-06-25 18:59:34 -04:00
|
|
|
CFLAGS_test_ubsan.o += $(call cc-disable-warning, vla)
|
2024-01-30 17:12:55 -05:00
|
|
|
CFLAGS_test_ubsan.o += $(call cc-disable-warning, unused-but-set-variable)
|
2018-04-10 19:32:58 -04:00
|
|
|
UBSAN_SANITIZE_test_ubsan.o := y
|
2015-02-13 17:39:53 -05:00
|
|
|
obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
|
2017-05-08 18:55:26 -04:00
|
|
|
obj-$(CONFIG_TEST_LIST_SORT) += test_list_sort.o
|
2020-02-14 02:51:29 -05:00
|
|
|
obj-$(CONFIG_TEST_MIN_HEAP) += test_min_heap.o
|
2015-02-13 17:39:53 -05:00
|
|
|
obj-$(CONFIG_TEST_LKM) += test_module.o
|
vmalloc: add test driver to analyse vmalloc allocator
This adds a new kernel module for analysis of the vmalloc allocator. It is
only enabled as a module. There are two main uses for this module:
performance evaluation and stressing of the vmalloc subsystem.
It consists of several test cases. As of now there are 8. The module
has five parameters we can specify to change its behaviour.
1) run_test_mask - set of tests to be run
id: 1, name: fix_size_alloc_test
id: 2, name: full_fit_alloc_test
id: 4, name: long_busy_list_alloc_test
id: 8, name: random_size_alloc_test
id: 16, name: fix_align_alloc_test
id: 32, name: random_size_align_alloc_test
id: 64, name: align_shift_alloc_test
id: 128, name: pcpu_alloc_test
By default all tests are in the run test mask. If you want to select some
specific tests it is possible to pass the mask. For example, for the first,
second and fourth tests we pass the value 11.
2) test_repeat_count - how many times each test should be repeated
By default it is one time per test. It is possible to pass any number;
the higher the value, the longer the tests take.
3) test_loop_count - internal test loop counter. By default it is set
to 1000000.
4) single_cpu_test - use one CPU to run the tests
By default this parameter is set to false. It means that all online
CPUs execute tests. By setting it to 1, the tests are executed by
first online CPU only.
5) sequential_test_order - run tests in sequential order
By default this parameter is set to false. It means that before running
tests the order is shuffled. It is possible to make it sequential, just
set it to 1.
Performance analysis:
In order to evaluate performance of vmalloc allocations, usually it
makes sense to use only one CPU that runs tests, use sequential order,
number of repeat tests can be different as well as set of test mask.
For example if we want to run all tests, to use one CPU and repeat each
test 3 times. Insert the module passing following parameters:
single_cpu_test=1 sequential_test_order=1 test_repeat_count=3
with following output:
<snip>
Summary: fix_size_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 901177 usec
Summary: full_fit_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 1039341 usec
Summary: long_busy_list_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 11775763 usec
Summary: random_size_alloc_test passed 3: failed: 0 repeat: 3 loops: 1000000 avg: 6081992 usec
Summary: fix_align_alloc_test passed: 3 failed: 0 repeat: 3, loops: 1000000 avg: 2003712 usec
Summary: random_size_align_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 2895689 usec
Summary: align_shift_alloc_test passed: 0 failed: 3 repeat: 3 loops: 1000000 avg: 573 usec
Summary: pcpu_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 95802 usec
All test took CPU0=192945605995 cycles
<snip>
The align_shift_alloc_test is expected to fail.
Stressing:
In order to stress the vmalloc subsystem we run all available test cases
on all available CPUs simultaneously. In order to prevent a constant behaviour
pattern, the test cases array is shuffled by default to randomize the order
of test execution.
For example, to run all tests (default), use all online CPUs (default)
with shuffled order (default) and repeat each test 30 times, the command
would be:
modprobe vmalloc_test test_repeat_count=30
Expected results are: the system stays alive, there are no BUG_ONs or kernel
panics, the tests complete, and there are no memory leaks.
[urezki@gmail.com: fix 32-bit builds]
Link: http://lkml.kernel.org/r/20190106214839.ffvjvmrn52uqog7k@pc636
[urezki@gmail.com: make CONFIG_TEST_VMALLOC depend on CONFIG_MMU]
Link: http://lkml.kernel.org/r/20190219085441.s6bg2gpy4esny5vw@pc636
Link: http://lkml.kernel.org/r/20190103142108.20744-3-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-05 18:43:34 -05:00
|
|
|
obj-$(CONFIG_TEST_VMALLOC) += test_vmalloc.o
|
2015-01-29 09:40:25 -05:00
|
|
|
obj-$(CONFIG_TEST_RHASHTABLE) += test_rhashtable.o
|
2017-02-24 18:01:07 -05:00
|
|
|
obj-$(CONFIG_TEST_SORT) += test_sort.o
|
2015-02-13 17:39:53 -05:00
|
|
|
obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
|
2015-08-03 05:42:57 -04:00
|
|
|
obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_keys.o
|
|
|
|
obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_key_base.o
|
2022-09-04 17:40:45 -04:00
|
|
|
obj-$(CONFIG_TEST_DYNAMIC_DEBUG) += test_dynamic_debug.o
|
2015-11-06 19:30:29 -05:00
|
|
|
obj-$(CONFIG_TEST_PRINTF) += test_printf.o
|
2021-05-14 12:12:05 -04:00
|
|
|
obj-$(CONFIG_TEST_SCANF) += test_scanf.o
|
lib/bitmap: workaround const_eval test build failure
When building with Clang, and when KASAN and GCOV_PROFILE_ALL are both
enabled, the test fails to build [1]:
>> lib/test_bitmap.c:920:2: error: call to '__compiletime_assert_239' declared with 'error' attribute: BUILD_BUG_ON failed: !__builtin_constant_p(res)
BUILD_BUG_ON(!__builtin_constant_p(res));
^
include/linux/build_bug.h:50:2: note: expanded from macro 'BUILD_BUG_ON'
BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
^
include/linux/build_bug.h:39:37: note: expanded from macro 'BUILD_BUG_ON_MSG'
#define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
^
include/linux/compiler_types.h:352:2: note: expanded from macro 'compiletime_assert'
_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
^
include/linux/compiler_types.h:340:2: note: expanded from macro '_compiletime_assert'
__compiletime_assert(condition, msg, prefix, suffix)
^
include/linux/compiler_types.h:333:4: note: expanded from macro '__compiletime_assert'
prefix ## suffix(); \
^
<scratch space>:185:1: note: expanded from here
__compiletime_assert_239
Originally it was attributed to s390, which now looks wrong. The
issue is not related to the bitmap code itself, but it breaks the build for
a given configuration.
Disabling the const_eval test under that config may potentially hide other
bugs. Instead, work around it by disabling GCOV for test_bitmap until
the compiler gets fixed.
[1] https://github.com/ClangBuiltLinux/linux/issues/1874
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202307171254.yFcH97ej-lkp@intel.com/
Fixes: dc34d5036692 ("lib: test_bitmap: add compile-time optimization/evaluations assertions")
Co-developed-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
2023-07-17 15:17:03 -04:00
|
|
|
|
2016-02-19 09:24:00 -05:00
|
|
|
obj-$(CONFIG_TEST_BITMAP) += test_bitmap.o
|
lib/bitmap: workaround const_eval test build failure
2023-07-17 15:17:03 -04:00
|
|
|
ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_KASAN),yy)
|
|
|
|
# FIXME: Clang breaks test_bitmap_const_eval when KASAN and GCOV are enabled
|
|
|
|
GCOV_PROFILE_test_bitmap.o := n
|
|
|
|
endif
|
|
|
|
|
2016-05-30 10:40:41 -04:00
|
|
|
obj-$(CONFIG_TEST_UUID) += test_uuid.o
|
2017-11-07 14:57:46 -05:00
|
|
|
obj-$(CONFIG_TEST_XARRAY) += test_xarray.o
|
2022-10-28 14:04:30 -04:00
|
|
|
obj-$(CONFIG_TEST_MAPLE_TREE) += test_maple_tree.o
|
2017-02-03 04:29:06 -05:00
|
|
|
obj-$(CONFIG_TEST_PARMAN) += test_parman.o
|
kmod: add test driver to stress test the module loader
This adds a new stress test driver for kmod: the kernel module loader.
The new stress test driver, test_kmod, is only enabled as a module right
now. It should be possible to load this as built-in and load tests
early (refer to the force_init_test module parameter), however since a
lot of tests can get a system out of memory fast, we leave this disabled
for now.
Using a system with 1024 MiB of RAM can *easily* get your kernel OOM
fast with this test driver.
The test_kmod driver exposes API knobs for us to fine tune simple
request_module() and get_fs_type() calls. Since these API calls only
allow each one parameter a test driver for these is rather simple.
Other factors that can help out test driver though are the number of
calls we issue and knowing current limitations of each. This exposes
configuration as much as possible through userspace to be able to build
tests directly from userspace.
Since it allows multiple misc devices, it will eventually (once we add a
knob to let us create new devices at will) also be possible to perform
more tests in parallel, provided you have enough memory.
We only enable tests we know work as of right now.
Demo screenshots:
# tools/testing/selftests/kmod/kmod.sh
kmod_test_0001_driver: OK! - loading kmod test
kmod_test_0001_driver: OK! - Return value: 256 (MODULE_NOT_FOUND), expected MODULE_NOT_FOUND
kmod_test_0001_fs: OK! - loading kmod test
kmod_test_0001_fs: OK! - Return value: -22 (-EINVAL), expected -EINVAL
kmod_test_0002_driver: OK! - loading kmod test
kmod_test_0002_driver: OK! - Return value: 256 (MODULE_NOT_FOUND), expected MODULE_NOT_FOUND
kmod_test_0002_fs: OK! - loading kmod test
kmod_test_0002_fs: OK! - Return value: -22 (-EINVAL), expected -EINVAL
kmod_test_0003: OK! - loading kmod test
kmod_test_0003: OK! - Return value: 0 (SUCCESS), expected SUCCESS
kmod_test_0004: OK! - loading kmod test
kmod_test_0004: OK! - Return value: 0 (SUCCESS), expected SUCCESS
kmod_test_0005: OK! - loading kmod test
kmod_test_0005: OK! - Return value: 0 (SUCCESS), expected SUCCESS
kmod_test_0006: OK! - loading kmod test
kmod_test_0006: OK! - Return value: 0 (SUCCESS), expected SUCCESS
kmod_test_0005: OK! - loading kmod test
kmod_test_0005: OK! - Return value: 0 (SUCCESS), expected SUCCESS
kmod_test_0006: OK! - loading kmod test
kmod_test_0006: OK! - Return value: 0 (SUCCESS), expected SUCCESS
XXX: add test restult for 0007
Test completed
You can also request for specific tests:
# tools/testing/selftests/kmod/kmod.sh -t 0001
kmod_test_0001_driver: OK! - loading kmod test
kmod_test_0001_driver: OK! - Return value: 256 (MODULE_NOT_FOUND), expected MODULE_NOT_FOUND
kmod_test_0001_fs: OK! - loading kmod test
kmod_test_0001_fs: OK! - Return value: -22 (-EINVAL), expected -EINVAL
Test completed
Lastly, the current available number of tests:
# tools/testing/selftests/kmod/kmod.sh --help
Usage: tools/testing/selftests/kmod/kmod.sh [ -t <4-number-digit> ]
Valid tests: 0001-0009
0001 - Simple test - 1 thread for empty string
0002 - Simple test - 1 thread for modules/filesystems that do not exist
0003 - Simple test - 1 thread for get_fs_type() only
0004 - Simple test - 2 threads for get_fs_type() only
0005 - multithreaded tests with default setup - request_module() only
0006 - multithreaded tests with default setup - get_fs_type() only
0007 - multithreaded tests with default setup test request_module() and get_fs_type()
0008 - multithreaded - push kmod_concurrent over max_modprobes for request_module()
0009 - multithreaded - push kmod_concurrent over max_modprobes for get_fs_type()
The following test cases currently fail, as such they are not currently
enabled by default:
# tools/testing/selftests/kmod/kmod.sh -t 0008
# tools/testing/selftests/kmod/kmod.sh -t 0009
To be sure to run them as intended please unload both of the modules:
o test_module
o xfs
And ensure they are not loaded on your system prior to testing them. If
you use these partitions for your rootfs you can change the default test
driver used for get_fs_type() by exporting it into your environment. For
example of other test defaults you can override refer to kmod.sh
allow_user_defaults().
Behind the scenes this is how we fine tune at a test case prior to
hitting a trigger to run it:
cat /sys/devices/virtual/misc/test_kmod0/config
echo -n "2" > /sys/devices/virtual/misc/test_kmod0/config_test_case
echo -n "ext4" > /sys/devices/virtual/misc/test_kmod0/config_test_fs
echo -n "80" > /sys/devices/virtual/misc/test_kmod0/config_num_threads
cat /sys/devices/virtual/misc/test_kmod0/config
echo -n "1" > /sys/devices/virtual/misc/test_kmod0/config_num_threads
Finally to trigger:
echo -n "1" > /sys/devices/virtual/misc/test_kmod0/trigger_config
The kmod.sh script uses the above constructs to build different test cases.
A bit of interpretation of the current failures follows, first two
premises:
a) When request_module() is used userspace figures out an optimized
version of module order for us. Once it finds the modules it needs, as
per depmod symbol dep map, it will finit_module() the respective
modules which are needed for the original request_module() request.
b) We have an optimization in place whereby if a kernel uses
request_module() on a module already loaded we never bother userspace
as the module already is loaded. This is all handled by kernel/kmod.c.
A few things to consider to help identify root causes of issues:
0) kmod 19 has a broken heuristic for modules being assumed to be
built-in to your kernel and will return 0 even though request_module()
failed. Upgrade to a newer version of kmod.
1) A get_fs_type() call for "xfs" will request_module() for "fs-xfs",
not for "xfs". The optimization in kernel described in b) fails to
catch if we have a lot of consecutive get_fs_type() calls. The reason
is the optimization in place does not look for aliases. This means two
consecutive get_fs_type() calls will bump kmod_concurrent, whereas
request_module() will not.
This is one explanation why test case 0009 fails at least once for
get_fs_type().
2) If a module fails to load --- for whatever reason (kmod_concurrent
limit reached, file not yet present due to rootfs switch, out of
memory) we have a period of time during which module request for the
same name either with request_module() or get_fs_type() will *also*
fail to load even if the file for the module is ready.
This explains why *multiple* NULLs are possible on test 0009.
3) finit_module() consumes quite a bit of memory.
4) Filesystems typically also have more dependent modules than other
modules; it's important to note though that even though a get_fs_type()
call does not incur additional kmod_concurrent bumps, since userspace
loads dependencies it finds it needs via finit_module_fd(), it *will*
take much more memory to load a module with a lot of dependencies.
Because of 3) and 4) we will easily run into out of memory failures with
certain tests. For instance test 0006 fails on qemu with 1024 MiB of RAM.
It panics a box after reaping all userspace processes and still not
having enough memory to reap.
[arnd@arndb.de: add dependencies for test module]
Link: http://lkml.kernel.org/r/20170630154834.3689272-1-arnd@arndb.de
Link: http://lkml.kernel.org/r/20170628223155.26472-3-mcgrof@kernel.org
Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
Cc: Jessica Yu <jeyu@redhat.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Michal Marek <mmarek@suse.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-14 17:50:08 -04:00
|
|
|
obj-$(CONFIG_TEST_KMOD) += test_kmod.o
|
2017-09-08 19:15:31 -04:00
|
|
|
obj-$(CONFIG_TEST_DEBUG_VIRTUAL) += test_debug_virtual.o
|
2018-10-05 08:43:05 -04:00
|
|
|
obj-$(CONFIG_TEST_MEMCAT_P) += test_memcat_p.o
|
2018-11-14 03:22:28 -05:00
|
|
|
obj-$(CONFIG_TEST_OBJAGG) += test_objagg.o
|
2019-07-01 17:39:01 -04:00
|
|
|
obj-$(CONFIG_TEST_BLACKHOLE_DEV) += test_blackhole_dev.o
|
2019-07-16 19:27:27 -04:00
|
|
|
obj-$(CONFIG_TEST_MEMINIT) += test_meminit.o
|
lib/test_lockup: test module to generate lockups
CONFIG_TEST_LOCKUP=m adds module "test_lockup" that helps to make sure
that watchdogs and lockup detectors are working properly.
Depending on module parameters test_lockup could emulate soft or hard
lockup, "hung task", hold arbitrary lock, allocate bunch of pages.
Also it could generate series of lockups with cooling-down periods, in
this way it could be used as "ping" for locks or page allocator. Loop
checks signals between iteration thus could be stopped by ^C.
# modinfo test_lockup
...
parm: time_secs:lockup time in seconds, default 0 (uint)
parm: time_nsecs:nanoseconds part of lockup time, default 0 (uint)
parm: cooldown_secs:cooldown time between iterations in seconds, default 0 (uint)
parm: cooldown_nsecs:nanoseconds part of cooldown, default 0 (uint)
parm: iterations:lockup iterations, default 1 (uint)
parm: all_cpus:trigger lockup at all cpus at once (bool)
parm: state:wait in 'R' running (default), 'D' uninterruptible, 'K' killable, 'S' interruptible state (charp)
parm: use_hrtimer:use high-resolution timer for sleeping (bool)
parm: iowait:account sleep time as iowait (bool)
parm: lock_read:lock read-write locks for read (bool)
parm: lock_single:acquire locks only at one cpu (bool)
parm: reacquire_locks:release and reacquire locks/irq/preempt between iterations (bool)
parm: touch_softlockup:touch soft-lockup watchdog between iterations (bool)
parm: touch_hardlockup:touch hard-lockup watchdog between iterations (bool)
parm: call_cond_resched:call cond_resched() between iterations (bool)
parm: measure_lock_wait:measure lock wait time (bool)
parm: lock_wait_threshold:print lock wait time longer than this in nanoseconds, default off (ulong)
parm: disable_irq:disable interrupts: generate hard-lockups (bool)
parm: disable_softirq:disable bottom-half irq handlers (bool)
parm: disable_preempt:disable preemption: generate soft-lockups (bool)
parm: lock_rcu:grab rcu_read_lock: generate rcu stalls (bool)
parm: lock_mmap_sem:lock mm->mmap_sem: block procfs interfaces (bool)
parm: lock_rwsem_ptr:lock rw_semaphore at address (ulong)
parm: lock_mutex_ptr:lock mutex at address (ulong)
parm: lock_spinlock_ptr:lock spinlock at address (ulong)
parm: lock_rwlock_ptr:lock rwlock at address (ulong)
parm: alloc_pages_nr:allocate and free pages under locks (uint)
parm: alloc_pages_order:page order to allocate (uint)
parm: alloc_pages_gfp:allocate pages with this gfp_mask, default GFP_KERNEL (uint)
parm: alloc_pages_atomic:allocate pages with GFP_ATOMIC (bool)
parm: reallocate_pages:free and allocate pages between iterations (bool)
Parameters for locking by address are unsafe and taint the kernel. With
CONFIG_DEBUG_SPINLOCK=y they at least check magics for embedded spinlocks.
Examples:
task hang in D-state:
modprobe test_lockup time_secs=1 iterations=60 state=D
task hang in io-wait D-state:
modprobe test_lockup time_secs=1 iterations=60 state=D iowait
softlockup:
modprobe test_lockup time_secs=1 iterations=60 state=R
hardlockup:
modprobe test_lockup time_secs=1 iterations=60 state=R disable_irq
system-wide hardlockup:
modprobe test_lockup time_secs=1 iterations=60 state=R \
disable_irq all_cpus
rcu stall:
modprobe test_lockup time_secs=1 iterations=60 state=R \
lock_rcu touch_softlockup
lock mmap_sem / block procfs interfaces:
modprobe test_lockup time_secs=1 iterations=60 state=S lock_mmap_sem
lock tasklist_lock for read / block forks:
TASKLIST_LOCK=$(awk '$3 == "tasklist_lock" {print "0x"$1}' /proc/kallsyms)
modprobe test_lockup time_secs=1 iterations=60 state=R \
disable_irq lock_read lock_rwlock_ptr=$TASKLIST_LOCK
lock namespace_sem / block vfs mount operations:
NAMESPACE_SEM=$(awk '$3 == "namespace_sem" {print "0x"$1}' /proc/kallsyms)
modprobe test_lockup time_secs=1 iterations=60 state=S \
lock_rwsem_ptr=$NAMESPACE_SEM
lock cgroup mutex / block cgroup operations:
CGROUP_MUTEX=$(awk '$3 == "cgroup_mutex" {print "0x"$1}' /proc/kallsyms)
modprobe test_lockup time_secs=1 iterations=60 state=S \
lock_mutex_ptr=$CGROUP_MUTEX
ping cgroup_mutex every second and measure maximum lock wait time:
modprobe test_lockup cooldown_secs=1 iterations=60 state=S \
lock_mutex_ptr=$CGROUP_MUTEX reacquire_locks measure_lock_wait
[linux@roeck-us.net: rename disable_irq to fix build error]
Link: http://lkml.kernel.org/r/20200317133614.23152-1-linux@roeck-us.net
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Dmitry Monakhov <dmtrmonakhov@yandex-team.ru>
Cc: Colin Ian King <colin.king@canonical.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Link: http://lkml.kernel.org/r/158132859146.2797.525923171323227836.stgit@buzz
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-06 23:09:47 -04:00
|
|
|
obj-$(CONFIG_TEST_LOCKUP) += test_lockup.o
|
2020-04-22 15:50:26 -04:00
|
|
|
obj-$(CONFIG_TEST_HMM) += test_hmm.o
|
2020-10-13 19:56:04 -04:00
|
|
|
obj-$(CONFIG_TEST_FREE_PAGES) += test_free_pages.o
|
2021-10-25 21:51:30 -04:00
|
|
|
obj-$(CONFIG_KPROBES_SANITY_TEST) += test_kprobes.o
|
2021-12-04 23:21:56 -05:00
|
|
|
obj-$(CONFIG_TEST_REF_TRACKER) += test_ref_tracker.o
|
2022-03-15 10:02:35 -04:00
|
|
|
CFLAGS_test_fprobe.o += $(CC_FLAGS_FTRACE)
|
|
|
|
obj-$(CONFIG_FPROBE_SANITY_TEST) += test_fprobe.o
|
2023-10-17 09:56:51 -04:00
|
|
|
obj-$(CONFIG_TEST_OBJPOOL) += test_objpool.o
|
|
|
|
|
2020-06-18 10:37:37 -04:00
|
|
|
#
|
|
|
|
# CFLAGS for compiling floating point code inside the kernel. x86/Makefile turns
|
|
|
|
# off the generation of FPU/SSE* instructions for kernel proper but FPU_FLAGS
|
|
|
|
# get appended last to CFLAGS and thus override those previous compiler options.
|
|
|
|
#
|
2020-12-11 16:36:35 -05:00
|
|
|
FPU_CFLAGS := -msse -msse2
|
2020-06-18 10:37:37 -04:00
|
|
|
ifdef CONFIG_CC_IS_GCC
|
|
|
|
# Stack alignment mismatch, proceed with caution.
|
|
|
|
# GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3
|
|
|
|
# (8B stack alignment).
|
|
|
|
# See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53383
|
|
|
|
#
|
|
|
|
# The "-msse" in the first argument is there so that the
|
|
|
|
# -mpreferred-stack-boundary=3 build error:
|
|
|
|
#
|
|
|
|
# -mpreferred-stack-boundary=3 is not between 4 and 12
|
|
|
|
#
|
|
|
|
# can be triggered. Otherwise gcc doesn't complain.
|
2020-12-11 16:36:35 -05:00
|
|
|
FPU_CFLAGS += -mhard-float
|
2020-06-18 10:37:37 -04:00
|
|
|
FPU_CFLAGS += $(call cc-option,-msse -mpreferred-stack-boundary=3,-mpreferred-stack-boundary=4)
|
|
|
|
endif
|
|
|
|
|
|
|
|
obj-$(CONFIG_TEST_FPU) += test_fpu.o
|
|
|
|
CFLAGS_test_fpu.o += $(FPU_CFLAGS)
|
|
|
|
|
2023-02-24 20:45:30 -05:00
|
|
|
# Some KUnit files (hooks.o) need to be built-in even when KUnit is a module,
|
|
|
|
# so we can't just use obj-$(CONFIG_KUNIT).
|
|
|
|
ifdef CONFIG_KUNIT
|
|
|
|
obj-y += kunit/
|
kunit: Add "hooks" to call into KUnit when it's built as a module
KUnit has several macros and functions intended for use from non-test
code. These hooks, currently the kunit_get_current_test() and
kunit_fail_current_test() macros, didn't work when CONFIG_KUNIT=m.
In order to support this case, the required functions and static data
need to be available unconditionally, even when KUnit itself is not
built-in. The new 'hooks.c' file is therefore always included, and has
both the static key required for kunit_get_current_test(), and a table
of function pointers in struct kunit_hooks_table. This is filled in with
the real implementations by kunit_install_hooks(), which is kept in
hooks-impl.h and called when the kunit module is loaded.
This can be extended for future features which require similar
"hook" behaviour, such as static stubs, by simply adding new entries to
the struct, and the appropriate code to set them.
Fixed white-space errors during commit:
Shuah Khan <skhan@linuxfoundation.org>
Resolved merge conflicts with:
db105c37a4d6 ("kunit: Export kunit_running()")
This patch supersedes the above.
Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Rae Moar <rmoar@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
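The pattern reads roughly like the sketch below; the names are illustrative
stand-ins, not the actual kunit symbols, which live in the new hooks.c and
hooks-impl.h files this patch adds:

  /* Built-in unconditionally, even when KUnit itself is a module. */
  struct example_hooks_table {
      void (*fail_current_test)(const char *file, int line, const char *msg);
  };

  struct example_hooks_table example_hooks;   /* NULL pointers until kunit loads */

  /* Called from kunit's module init to install the real implementations. */
  void example_install_hooks(struct example_hooks_table *impl)
  {
      example_hooks = *impl;
  }

  /* Non-test code can use the hook whether or not kunit is loaded. */
  static inline void example_fail_current_test(const char *file, int line)
  {
      if (example_hooks.fail_current_test)
          example_hooks.fail_current_test(file, line, "failure injected");
  }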
2023-01-28 02:10:07 -05:00
|
|
|
endif
|
2019-09-23 05:02:36 -04:00
|
|
|
|
2005-04-16 18:20:36 -04:00
|
|
|
ifeq ($(CONFIG_DEBUG_KOBJECT),y)
|
|
|
|
CFLAGS_kobject.o += -DDEBUG
|
|
|
|
CFLAGS_kobject_uevent.o += -DDEBUG
|
|
|
|
endif
|
|
|
|
|
2015-03-20 21:50:01 -04:00
|
|
|
obj-$(CONFIG_DEBUG_INFO_REDUCED) += debug_info.o
|
|
|
|
CFLAGS_debug_info.o += $(call cc-option, -femit-struct-debug-detailed=any)
|
|
|
|
|
2019-06-12 12:19:53 -04:00
|
|
|
obj-y += math/ crypto/
|
2019-05-14 18:43:05 -04:00
|
|
|
|
2007-02-11 10:41:31 -05:00
|
|
|
obj-$(CONFIG_GENERIC_IOMAP) += iomap.o
|
2007-08-22 17:01:36 -04:00
|
|
|
obj-$(CONFIG_HAS_IOMEM) += iomap_copy.o devres.o
|
|
|
|
obj-$(CONFIG_CHECK_SIGNATURE) += check_signature.o
|
2006-07-03 03:24:48 -04:00
|
|
|
obj-$(CONFIG_DEBUG_LOCKING_API_SELFTESTS) += locking-selftest.o
|
2010-03-05 11:34:46 -05:00
|
|
|
|
2019-11-04 12:22:19 -05:00
|
|
|
lib-y += logic_pio.o
|
2018-03-14 14:15:50 -04:00
|
|
|
|
2021-03-05 07:19:52 -05:00
|
|
|
lib-$(CONFIG_INDIRECT_IOMEM) += logic_iomem.o
|
|
|
|
|
lib: Add register read/write tracing support
Generic MMIO read/write i.e., __raw_{read,write}{b,l,w,q} accessors
are typically used to read/write from/to memory mapped registers
and can cause hangs or some undefined behaviour in the following few
cases,
* If the access to the register space is unclocked, for example: if
there is an access to multimedia(MM) block registers without MM
clocks.
* If the register space is protected and not set to be accessible from
non-secure world, for example: only EL3 (EL: Exception level) access
is allowed and any EL2/EL1 access is forbidden.
* If xPU(memory/register protection units) is controlling access to
certain memory/register space for specific clients.
and more...
Such cases usually result in instant reboot/SErrors/NOC or interconnect
hangs, and tracing these register accesses can be very helpful for debugging
such issues during initial development stages and also in later stages.
So use ftrace trace events to log such MMIO register accesses, which
provides a rich feature set such as early enablement of trace events,
filtering capability, dumping ftrace logs on the console, and more.
Sample output:
rwmmio_write: __qcom_geni_serial_console_write+0x160/0x1e0 width=32 val=0xa0d5d addr=0xfffffbfffdbff700
rwmmio_post_write: __qcom_geni_serial_console_write+0x160/0x1e0 width=32 val=0xa0d5d addr=0xfffffbfffdbff700
rwmmio_read: qcom_geni_serial_poll_bit+0x94/0x138 width=32 addr=0xfffffbfffdbff610
rwmmio_post_read: qcom_geni_serial_poll_bit+0x94/0x138 width=32 val=0x0 addr=0xfffffbfffdbff610
Co-developed-by: Sai Prakash Ranjan <quic_saipraka@quicinc.com>
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
Signed-off-by: Sai Prakash Ranjan <quic_saipraka@quicinc.com>
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-05-18 12:44:14 -04:00
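Conceptually, the tracing wraps each accessor with a pre- and post-access
event, roughly as in the sketch below (illustrative; the event arguments
mirror the sample output above rather than the exact lib/trace_readwrite.c
interface):

/* hypothetical traced 32-bit MMIO write */
static inline void example_traced_writel(u32 val, volatile void __iomem *addr)
{
        trace_rwmmio_write(_THIS_IP_, val, 32, addr);      /* rwmmio_write */
        __raw_writel(val, addr);
        trace_rwmmio_post_write(_THIS_IP_, val, 32, addr); /* rwmmio_post_write */
}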
|
|
|
obj-$(CONFIG_TRACE_MMIO_ACCESS) += trace_readwrite.o
|
|
|
|
|
2006-12-06 23:39:16 -05:00
|
|
|
obj-$(CONFIG_GENERIC_HWEIGHT) += hweight.o
|
2010-03-05 11:34:46 -05:00
|
|
|
|
2009-11-20 14:13:39 -05:00
|
|
|
obj-$(CONFIG_BTREE) += btree.o
|
2014-03-17 08:21:54 -04:00
|
|
|
obj-$(CONFIG_INTERVAL_TREE) += interval_tree.o
|
Add a generic associative array implementation.
Add a generic associative array implementation that can be used as the
container for keyrings, thereby massively increasing the capacity available
whilst also speeding up searching in keyrings that contain a lot of keys.
This may also be useful in FS-Cache for tracking cookies.
Documentation is added into Documentation/associative_array.txt
Some of the properties of the implementation are:
(1) Objects are opaque pointers. The implementation does not care where they
point (if anywhere) or what they point to (if anything).
[!] NOTE: Pointers to objects _must_ be zero in the two least significant
bits.
(2) Objects do not need to contain linkage blocks for use by the array. This
permits an object to be located in multiple arrays simultaneously.
Rather, the array is made up of metadata blocks that point to objects.
(3) Objects are labelled as being one of two types (the type is a bool value).
This information is stored in the array, but has no consequence to the
array itself or its algorithms.
(4) Objects require index keys to locate them within the array.
(5) Index keys must be unique. Inserting an object with the same key as one
already in the array will replace the old object.
(6) Index keys can be of any length and can be of different lengths.
(7) Index keys should encode the length early on, before any variation due to
length is seen.
(8) Index keys can include a hash to scatter objects throughout the array.
(9) The array can be iterated over. The objects will not necessarily come out in
key order.
(10) The array can be iterated whilst it is being modified, provided the RCU
readlock is being held by the iterator. Note, however, under these
circumstances, some objects may be seen more than once. If this is a
problem, the iterator should lock against modification. Objects will not
be missed, however, unless deleted.
(11) Objects in the array can be looked up by means of their index key.
(12) Objects can be looked up whilst the array is being modified, provided the
RCU readlock is being held by the thread doing the look up.
The implementation uses a tree of 16-pointer nodes internally that are indexed
on each level by nibbles from the index key. To improve memory efficiency,
shortcuts can be emplaced to skip over what would otherwise be a series of
single-occupancy nodes. Further, nodes pack leaf object pointers into spare
space in the node rather than making an extra branch until as such time an
object needs to be added to a full node.
Signed-off-by: David Howells <dhowells@redhat.com>
2013-09-24 05:35:17 -04:00
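A conceptual sketch of the node layout described above (a tree of 16-pointer
nodes indexed by successive nibbles of the index key); the structure is
illustrative, not the actual lib/assoc_array.c internals:

#define EX_FAN_OUT 16   /* one slot per nibble value */

struct ex_node {
        struct ex_node *parent;
        /* each slot holds a child node, a leaf object pointer or NULL;
         * leaves can be told apart because object pointers must have
         * their two least significant bits clear (see note (1) above) */
        void *slots[EX_FAN_OUT];
};

/* descend one level: pick the slot named by the next nibble of the key */
static void *ex_descend(const struct ex_node *node, unsigned long key, int level)
{
        unsigned int nibble = (key >> (level * 4)) & 0xf;

        return node->slots[nibble];
}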
|
|
|
obj-$(CONFIG_ASSOCIATIVE_ARRAY) += assoc_array.o
|
2005-06-21 20:14:34 -04:00
|
|
|
obj-$(CONFIG_DEBUG_PREEMPT) += smp_processor_id.o
|
list: Introduce CONFIG_LIST_HARDENED
Numerous production kernel configs (see [1, 2]) are choosing to enable
CONFIG_DEBUG_LIST, which is also being recommended by KSPP for hardened
configs [3]. The motivation behind this is that the option can be used
as a security hardening feature (e.g. CVE-2019-2215 and CVE-2019-2025
are mitigated by the option [4]).
The feature has never been designed with performance in mind, yet common
list manipulation is happening across hot paths all over the kernel.
Introduce CONFIG_LIST_HARDENED, which performs list pointer checking
inline, and only upon list corruption calls the reporting slow path.
To generate optimal machine code with CONFIG_LIST_HARDENED:
1. Elide checking for pointer values which upon dereference would
result in an immediate access fault (i.e. minimal hardening
checks). The trade-off is lower-quality error reports.
2. Use the __preserve_most function attribute (available with Clang,
but not yet with GCC) to minimize the code footprint for calling
the reporting slow path. As a result, function size of callers is
reduced by avoiding saving registers before calling the rarely
called reporting slow path.
Note that all TUs in lib/Makefile already disable function tracing,
including list_debug.c, and __preserve_most's implied notrace has
no effect in this case.
3. Because the inline checks are a subset of the full set of checks in
__list_*_valid_or_report(), always return false if the inline
checks failed. This avoids redundant compare and conditional
branch right after return from the slow path.
As a side-effect of the checks being inline, if the compiler can prove
some condition to always be true, it can completely elide some checks.
Since DEBUG_LIST is functionally a superset of LIST_HARDENED, the
Kconfig variables are changed to reflect that: DEBUG_LIST selects
LIST_HARDENED, whereas LIST_HARDENED itself has no dependency on
DEBUG_LIST.
Running netperf with CONFIG_LIST_HARDENED (using a Clang compiler with
"preserve_most") shows throughput improvements, in my case of ~7% on
average (up to 20-30% on some test cases).
Link: https://r.android.com/1266735 [1]
Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2]
Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3]
Link: https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-wild-exploit.html [4]
Signed-off-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20230811151847.1594958-3-elver@google.com
Signed-off-by: Kees Cook <keescook@chromium.org>
2023-08-11 11:18:40 -04:00
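The shape of the inline check plus out-of-line report described above, as a
hedged sketch (the real checks in list_debug.c are more thorough; only the
structure is shown here):

/* slow path: reached only on corruption, reports and returns false */
bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
                                struct list_head *next);

/* fast path: cheap inline checks compiled into every caller */
static __always_inline bool example_list_add_valid(struct list_head *new,
                                                   struct list_head *prev,
                                                   struct list_head *next)
{
        if (likely(next->prev == prev && prev->next == next &&
                   new != prev && new != next))
                return true;
        /* corruption: take the rarely executed reporting path */
        return __list_add_valid_or_report(new, prev, next);
}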
|
|
|
obj-$(CONFIG_LIST_HARDENED) += list_debug.o
|
2008-04-30 03:55:01 -04:00
|
|
|
obj-$(CONFIG_DEBUG_OBJECTS) += debugobjects.o
|
2005-04-16 18:20:36 -04:00
|
|
|
|
2006-12-08 05:36:25 -05:00
|
|
|
obj-$(CONFIG_BITREVERSE) += bitrev.o
|
2020-05-08 11:39:35 -04:00
|
|
|
obj-$(CONFIG_LINEAR_RANGES) += linear_ranges.o
|
2019-05-02 16:23:29 -04:00
|
|
|
obj-$(CONFIG_PACKING) += packing.o
|
2005-04-16 18:20:36 -04:00
|
|
|
obj-$(CONFIG_CRC_CCITT) += crc-ccitt.o
|
2005-08-17 07:17:26 -04:00
|
|
|
obj-$(CONFIG_CRC16) += crc16.o
|
2008-06-25 11:22:42 -04:00
|
|
|
obj-$(CONFIG_CRC_T10DIF)+= crc-t10dif.o
|
2006-06-12 10:17:04 -04:00
|
|
|
obj-$(CONFIG_CRC_ITU_T) += crc-itu-t.o
|
2005-04-16 18:20:36 -04:00
|
|
|
obj-$(CONFIG_CRC32) += crc32.o
|
lib: add crc64 calculation routines
Patch series "add crc64 calculation as kernel library", v5.
This patchset adds basic implementation of crc64 calculation as a Linux
kernel library. Since bcache already does crc64 by itself, this patchset
also modifies bcache code to use the new crc64 library routine.
Currently bcache is the only user of crc64 calculation; another potential
user is bcachefs, which is on its way into the mainline kernel. Therefore
it makes sense to make crc64 calculation a public library.
bcache uses crc64 as its storage checksum; if a change to the crc lib routines
produces an inconsistent result, the unmatched checksum may make bcache
'think' the on-disk data is corrupted. Such a change should be avoided or
detected as early as possible. Therefore a patch is being prepared which
adds a crc test framework, to check consistency of different calculations.
This patch (of 2):
Add rewritten crc64 calculation routines for the Linux kernel. The CRC64
polynomial arithmetic follows the ECMA-182 specification, inspired by the CRC
paper of Dr. Ross N. Williams (see
http://www.ross.net/crc/download/crc_v3.txt) and other public domain
implementations.
All the changes work in this way,
- When Linux kernel is built, host program lib/gen_crc64table.c will be
compiled to lib/gen_crc64table and executed.
- The output of gen_crc64table execution is an array used as a lookup
table (for POLY 0x42f0e1eba9ea3693) which contains 256 64-bit
numbers; this table is dumped into the header file lib/crc64table.h.
- Then the header file is included by lib/crc64.c for normal 64bit crc
calculation.
- Function declaration of the crc64 calculation routines is placed in
include/linux/crc64.h
Currently bcache is the only user of crc64_be(); another potential user is
bcachefs, which is on its way into the mainline kernel. Therefore it makes
sense to move crc64 calculation into lib/crc64.c as public code.
[colyli@suse.de: fix review comments from v4]
Link: http://lkml.kernel.org/r/20180726053352.2781-2-colyli@suse.de
Link: http://lkml.kernel.org/r/20180718165545.1622-2-colyli@suse.de
Signed-off-by: Coly Li <colyli@suse.de>
Co-developed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Noah Massey <noah.massey@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-22 00:57:11 -04:00
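The generated table makes the calculation a simple byte-at-a-time loop; a
hedged sketch of the idea (illustrative; the table name and exact form follow
from the generated lib/crc64table.h):

static u64 example_crc64_be(u64 crc, const void *p, size_t len)
{
        const unsigned char *data = p;
        size_t i;

        /* fold in one byte per step using the 256-entry lookup table */
        for (i = 0; i < len; i++)
                crc = (crc << 8) ^ crc64table[(crc >> 56) ^ data[i]];

        return crc;
}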
|
|
|
obj-$(CONFIG_CRC64) += crc64.o
|
2017-02-24 18:00:49 -05:00
|
|
|
obj-$(CONFIG_CRC32_SELFTEST) += crc32test.o
|
2017-06-06 17:08:39 -04:00
|
|
|
obj-$(CONFIG_CRC4) += crc4.o
|
2007-07-17 07:04:03 -04:00
|
|
|
obj-$(CONFIG_CRC7) += crc7.o
|
2005-04-16 18:20:36 -04:00
|
|
|
obj-$(CONFIG_LIBCRC32C) += libcrc32c.o
|
2011-05-31 05:22:15 -04:00
|
|
|
obj-$(CONFIG_CRC8) += crc8.o
|
2022-03-03 15:13:10 -05:00
|
|
|
obj-$(CONFIG_CRC64_ROCKSOFT) += crc64-rocksoft.o
|
lib: Add xxhash module
Adds xxhash kernel module with xxh32 and xxh64 hashes. xxhash is an
extremely fast non-cryptographic hash algorithm for checksumming.
The zstd compression and decompression modules added in the next patch
require xxhash. I extracted it out from zstd since it is useful on its
own. I copied the code from the upstream XXHash source repository and
translated it into kernel style. I ran benchmarks and tests in the kernel
and tests in userland.
I benchmarked xxhash as a special character device. I ran in four modes,
no-op, xxh32, xxh64, and crc32. The no-op mode simply copies the data to
kernel space and ignores it. The xxh32, xxh64, and crc32 modes compute
hashes on the copied data. I also ran it with four different buffer sizes.
The benchmark file is located in the upstream zstd source repository under
`contrib/linux-kernel/xxhash_test.c` [1].
I ran the benchmarks on a Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM.
The VM is running on a MacBook Pro with a 3.1 GHz Intel Core i7 processor,
16 GB of RAM, and a SSD. I benchmarked using the file `filesystem.squashfs`
from `ubuntu-16.10-desktop-amd64.iso`, which is 1,536,217,088 B large.
Run the following commands for the benchmark:
modprobe xxhash_test
mknod xxhash_test c 245 0
time cp filesystem.squashfs xxhash_test
The time is reported by the time of the userland `cp`.
The GB/s is computed with
1,536,217,088 B / time(buffer size, hash)
which includes the time to copy from userland.
The Adjusted GB/s is computed with
1,536,217,088 B / (time(buffer size, hash) - time(buffer size, none)).
| Buffer Size (B) | Hash | Time (s) | GB/s | Adjusted GB/s |
|-----------------|-------|----------|------|---------------|
| 1024 | none | 0.408 | 3.77 | - |
| 1024 | xxh32 | 0.649 | 2.37 | 6.37 |
| 1024 | xxh64 | 0.542 | 2.83 | 11.46 |
| 1024 | crc32 | 1.290 | 1.19 | 1.74 |
| 4096 | none | 0.380 | 4.04 | - |
| 4096 | xxh32 | 0.645 | 2.38 | 5.79 |
| 4096 | xxh64 | 0.500 | 3.07 | 12.80 |
| 4096 | crc32 | 1.168 | 1.32 | 1.95 |
| 8192 | none | 0.351 | 4.38 | - |
| 8192 | xxh32 | 0.614 | 2.50 | 5.84 |
| 8192 | xxh64 | 0.464 | 3.31 | 13.60 |
| 8192 | crc32 | 1.163 | 1.32 | 1.89 |
| 16384 | none | 0.346 | 4.43 | - |
| 16384 | xxh32 | 0.590 | 2.60 | 6.30 |
| 16384 | xxh64 | 0.466 | 3.30 | 12.80 |
| 16384 | crc32 | 1.183 | 1.30 | 1.84 |
Tested in userland using the test-suite in the zstd repo under
`contrib/linux-kernel/test/XXHashUserlandTest.cpp` [2] by mocking the
kernel functions. A line in each branch of every function in `xxhash.c`
was commented out to ensure that the test-suite fails. Additionally
tested while testing zstd and with SMHasher [3].
[1] https://phabricator.intern.facebook.com/P57526246
[2] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/test/XXHashUserlandTest.cpp
[3] https://github.com/aappleby/smhasher
zstd source repository: https://github.com/facebook/zstd
XXHash source repository: https://github.com/cyan4973/xxhash
Signed-off-by: Nick Terrell <terrelln@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
2017-08-04 16:19:17 -04:00
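Kernel users hash a buffer through <linux/xxhash.h>; a minimal usage sketch,
assuming the xxh64(data, length, seed) form:

#include <linux/types.h>
#include <linux/xxhash.h>

static u64 example_checksum(const void *buf, size_t len)
{
        const u64 seed = 0;     /* any fixed seed works; 0 is a common choice */

        return xxh64(buf, len, seed);
}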
|
|
|
obj-$(CONFIG_XXHASH) += xxhash.o
|
2005-06-21 20:15:02 -04:00
|
|
|
obj-$(CONFIG_GENERIC_ALLOCATOR) += genalloc.o
|
2005-04-16 18:20:36 -04:00
|
|
|
|
2015-05-07 13:49:14 -04:00
|
|
|
obj-$(CONFIG_842_COMPRESS) += 842/
|
|
|
|
obj-$(CONFIG_842_DECOMPRESS) += 842/
|
2005-04-16 18:20:36 -04:00
|
|
|
obj-$(CONFIG_ZLIB_INFLATE) += zlib_inflate/
|
|
|
|
obj-$(CONFIG_ZLIB_DEFLATE) += zlib_deflate/
|
lib/zlib: add s390 hardware support for kernel zlib_deflate
Patch series "S390 hardware support for kernel zlib", v3.
With IBM z15 mainframe the new DFLTCC instruction is available. It
implements deflate algorithm in hardware (Nest Acceleration Unit - NXU)
with estimated compression and decompression performance orders of
magnitude faster than the current zlib.
This patchset adds s390 hardware compression support to kernel zlib.
The code is based on the userspace zlib implementation:
https://github.com/madler/zlib/pull/410
The coding style is also preserved for future maintainability. There is
only a limited set of userspace zlib functions represented in the kernel.
Apart from that, all the memory allocation should be performed in
advance. Thus, the workarea structures are extended with the parameter
lists required for the DEFLATE CONVENTION CALL instruction.
Since kernel zlib itself does not support gzip headers, only Adler-32
checksum is processed (also can be produced by DFLTCC facility). Like
it was implemented for userspace, kernel zlib will compress in hardware
on level 1, and in software on all other levels. Decompression will
always happen in hardware (when enabled).
Two DFLTCC compression calls produce the same results only when they
both are made on machines of the same generation, and when the
respective buffers have the same offset relative to the start of the
page. Therefore care should be taken when using hardware compression
when reproducible results are desired. However it does always produce
the standard conform output which can be inflated anyway.
The new kernel command line parameter 'dfltcc' is introduced to
configure s390 zlib hardware support:
Format: { on | off | def_only | inf_only | always }
on: s390 zlib hardware support for compression on
level 1 and decompression (default)
off: No s390 zlib hardware support
def_only: s390 zlib hardware support for deflate
only (compression on level 1)
inf_only: s390 zlib hardware support for inflate
only (decompression)
always: Same as 'on' but ignores the selected compression
level always using hardware support (used for debugging)
The main purpose of the integration of the NXU support into the kernel
zlib is the use of hardware deflate in btrfs filesystem with on-the-fly
compression enabled. Apart from that, hardware support can also be used
during boot for decompressing the kernel or the ramdisk image
With the patch for btrfs expanding zlib buffer from 1 to 4 pages (patch
6) the following performance results have been achieved using the
ramdisk with btrfs. These are relative numbers based on throughput rate
and compression ratio for zlib level 1:
Input data Deflate rate Inflate rate Compression ratio
NXU/Software NXU/Software NXU/Software
stream of zeroes 1.46 1.02 1.00
random ASCII data 10.44 3.00 0.96
ASCII text (dickens) 6.21 3.33 0.94
binary data (vmlinux) 8.37 3.90 1.02
This means that s390 hardware deflate can provide up to 10 times faster
compression (on level 1) and up to 4 times faster decompression (refers
to all compression levels) for btrfs zlib.
Disclaimer: Performance results are based on IBM internal tests using DD
command-line utility on btrfs on a Fedora 30 based internal driver in
native LPAR on a z15 system. Results may vary based on individual
workload, configuration and software levels.
This patch (of 9):
Create zlib_dfltcc library with the s390 DEFLATE CONVERSION CALL
implementation and related compression functions. Update zlib_deflate
functions with the hooks for s390 hardware support and adjust workspace
structures with extra parameter lists required for hardware deflate.
Link: http://lkml.kernel.org/r/20200103223334.20669-2-zaslonko@linux.ibm.com
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Mikhail Zaslonko <zaslonko@linux.ibm.com>
Co-developed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Cc: Chris Mason <clm@fb.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David Sterba <dsterba@suse.com>
Cc: Eduard Shishkin <edward6@linux.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Richard Purdie <rpurdie@rpsys.net>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-01-31 01:16:17 -05:00
|
|
|
obj-$(CONFIG_ZLIB_DFLTCC) += zlib_dfltcc/
|
2005-04-16 18:20:36 -04:00
|
|
|
obj-$(CONFIG_REED_SOLOMON) += reed_solomon/
|
lib: add shared BCH ECC library
This is a new software BCH encoding/decoding library, similar to the shared
Reed-Solomon library.
Binary BCH (Bose-Chaudhuri-Hocquenghem) codes are widely used to correct
errors in NAND flash devices requiring more than 1-bit ecc correction; they
are generally better suited for NAND flash than RS codes because NAND bit
errors do not occur in bursts. Latest SLC NAND devices typically require at
least 4-bit ecc protection per 512 bytes block.
This library provides software encoding/decoding, but may also be used with
ASIC/SoC hardware BCH engines to perform error correction. It is currently
being used for this purpose on an OMAP3630 board (4bit/8bit HW BCH). It
has also been used to decode raw dumps of NAND devices with on-die BCH ecc
engines (e.g. Micron 4bit ecc SLC devices).
Latest NAND devices (including SLC) can exhibit high error rates (typically
a dozen or more bitflips per hour during stress tests); in order to
minimize the performance impact of error correction, this library
implements recently developed algorithms for fast polynomial root finding
(see bch.c header for details) instead of the traditional exhaustive Chien
root search; a few performance figures are provided below:
Platform: arm926ejs @ 468 MHz, 32 KiB icache, 16 KiB dcache
BCH ecc : 4-bit per 512 bytes
Encoding average throughput: 250 Mbits/s
Error correction time (compared with Chien search):
average worst average (Chien) worst (Chien)
----------------------------------------------------------
1 bit 8.5 µs 11 µs 200 µs 383 µs
2 bit 9.7 µs 12.5 µs 477 µs 728 µs
3 bit 18.1 µs 20.6 µs 758 µs 1010 µs
4 bit 19.5 µs 23 µs 1028 µs 1280 µs
In the above figures, "worst" is meant in terms of error pattern, not in
terms of cache miss / page faults effects (not taken into account here).
The library has been extensively tested on the following platforms: x86,
x86_64, arm926ejs, omap3630, qemu-ppc64, qemu-mips.
Signed-off-by: Ivan Djelic <ivan.djelic@parrot.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2011-03-11 05:05:32 -05:00
|
|
|
obj-$(CONFIG_BCH) += bch.o
|
2007-07-10 20:22:24 -04:00
|
|
|
obj-$(CONFIG_LZO_COMPRESS) += lzo/
|
|
|
|
obj-$(CONFIG_LZO_DECOMPRESS) += lzo/
|
2013-07-08 19:01:49 -04:00
|
|
|
obj-$(CONFIG_LZ4_COMPRESS) += lz4/
|
|
|
|
obj-$(CONFIG_LZ4HC_COMPRESS) += lz4/
|
2013-07-08 19:01:46 -04:00
|
|
|
obj-$(CONFIG_LZ4_DECOMPRESS) += lz4/
|
lib: Add zstd modules
Add zstd compression and decompression kernel modules.
zstd offers a wide variety of compression speed and quality trade-offs.
It can compress at speeds approaching lz4, and quality approaching lzma.
zstd decompresses at speeds more than twice as fast as zlib, and
decompression speed remains roughly the same across all compression levels.
The code was ported from the upstream zstd source repository. The
`linux/zstd.h` header was modified to match linux kernel style.
The cross-platform and allocation code was stripped out. Instead zstd
requires the caller to pass a preallocated workspace. The source files
were clang-formatted [1] to match the Linux Kernel style as much as
possible. Otherwise, the code was unmodified. We would like to avoid
as much further manual modification to the source code as possible, so it
will be easier to keep the kernel zstd up to date.
I benchmarked zstd compression as a special character device. I ran zstd
and zlib compression at several levels, as well as performing no
compression, which measures the time spent copying the data to kernel space.
Data is passed to the compressor 4096 B at a time. The benchmark file is
located in the upstream zstd source repository under
`contrib/linux-kernel/zstd_compress_test.c` [2].
I ran the benchmarks on a Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM.
The VM is running on a MacBook Pro with a 3.1 GHz Intel Core i7 processor,
16 GB of RAM, and a SSD. I benchmarked using `silesia.tar` [3], which is
211,988,480 B large. Run the following commands for the benchmark:
sudo modprobe zstd_compress_test
sudo mknod zstd_compress_test c 245 0
sudo cp silesia.tar zstd_compress_test
The time is reported by the time of the userland `cp`.
The MB/s is computed with
211,988,480 B / time(method)
which includes the time to copy from userland.
The Adjusted MB/s is computed with
211,988,480 B / (time(method) - time(none)).
The memory reported is the amount of memory the compressor requests.
| Method | Size (B) | Time (s) | Ratio | MB/s | Adj MB/s | Mem (MB) |
|----------|----------|----------|-------|---------|----------|----------|
| none | 211988480 | 0.100 | 1 | 2119.88 | - | - |
| zstd -1 | 73645762 | 1.044 | 2.878 | 203.05 | 224.56 | 1.23 |
| zstd -3 | 66988878 | 1.761 | 3.165 | 120.38 | 127.63 | 2.47 |
| zstd -5 | 65001259 | 2.563 | 3.261 | 82.71 | 86.07 | 2.86 |
| zstd -10 | 60165346 | 13.242 | 3.523 | 16.01 | 16.13 | 13.22 |
| zstd -15 | 58009756 | 47.601 | 3.654 | 4.45 | 4.46 | 21.61 |
| zstd -19 | 54014593 | 102.835 | 3.925 | 2.06 | 2.06 | 60.15 |
| zlib -1 | 77260026 | 2.895 | 2.744 | 73.23 | 75.85 | 0.27 |
| zlib -3 | 72972206 | 4.116 | 2.905 | 51.50 | 52.79 | 0.27 |
| zlib -6 | 68190360 | 9.633 | 3.109 | 22.01 | 22.24 | 0.27 |
| zlib -9 | 67613382 | 22.554 | 3.135 | 9.40 | 9.44 | 0.27 |
I benchmarked zstd decompression using the same method on the same machine.
The benchmark file is located in the upstream zstd repo under
`contrib/linux-kernel/zstd_decompress_test.c` [4]. The memory reported is
the amount of memory required to decompress data compressed with the given
compression level. If you know the maximum size of your input, you can
reduce the memory usage of decompression irrespective of the compression
level.
| Method | Time (s) | MB/s | Adjusted MB/s | Memory (MB) |
|----------|----------|---------|---------------|-------------|
| none | 0.025 | 8479.54 | - | - |
| zstd -1 | 0.358 | 592.15 | 636.60 | 0.84 |
| zstd -3 | 0.396 | 535.32 | 571.40 | 1.46 |
| zstd -5 | 0.396 | 535.32 | 571.40 | 1.46 |
| zstd -10 | 0.374 | 566.81 | 607.42 | 2.51 |
| zstd -15 | 0.379 | 559.34 | 598.84 | 4.61 |
| zstd -19 | 0.412 | 514.54 | 547.77 | 8.80 |
| zlib -1 | 0.940 | 225.52 | 231.68 | 0.04 |
| zlib -3 | 0.883 | 240.08 | 247.07 | 0.04 |
| zlib -6 | 0.844 | 251.17 | 258.84 | 0.04 |
| zlib -9 | 0.837 | 253.27 | 287.64 | 0.04 |
Tested in userland using the test-suite in the zstd repo under
`contrib/linux-kernel/test/UserlandTest.cpp` [5] by mocking the kernel
functions. Fuzz tested using libfuzzer [6] with the fuzz harnesses under
`contrib/linux-kernel/test/{RoundTripCrash.c,DecompressCrash.c}` [7] [8]
with ASAN, UBSAN, and MSAN. Additionally, it was tested while testing the
BtrFS and SquashFS patches coming next.
[1] https://clang.llvm.org/docs/ClangFormat.html
[2] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/zstd_compress_test.c
[3] http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia
[4] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/zstd_decompress_test.c
[5] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/test/UserlandTest.cpp
[6] http://llvm.org/docs/LibFuzzer.html
[7] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/test/RoundTripCrash.c
[8] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/test/DecompressCrash.c
zstd source repository: https://github.com/facebook/zstd
Signed-off-by: Nick Terrell <terrelln@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
2017-08-09 22:35:53 -04:00
|
|
|
obj-$(CONFIG_ZSTD_COMPRESS) += zstd/
|
|
|
|
obj-$(CONFIG_ZSTD_DECOMPRESS) += zstd/
|
2011-01-12 20:01:22 -05:00
|
|
|
obj-$(CONFIG_XZ_DEC) += xz/
|
2009-07-13 06:35:12 -04:00
|
|
|
obj-$(CONFIG_RAID6_PQ) += raid6/
|
2005-04-16 18:20:36 -04:00
|
|
|
|
2009-01-08 18:14:17 -05:00
|
|
|
lib-$(CONFIG_DECOMPRESS_GZIP) += decompress_inflate.o
|
|
|
|
lib-$(CONFIG_DECOMPRESS_BZIP2) += decompress_bunzip2.o
|
|
|
|
lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
|
decompressors: add boot-time XZ support
This implements the API defined in <linux/decompress/generic.h> which is
used for kernel, initramfs, and initrd decompression. This patch together
with the first patch is enough for XZ-compressed initramfs and initrd;
XZ-compressed kernel will need arch-specific changes.
The buffering requirements described in decompress_unxz.c are stricter
than with gzip, so the relevant changes should be done to the
arch-specific code when adding support for XZ-compressed kernel.
Similarly, the heap size in arch-specific pre-boot code may need to be
increased (30 KiB is enough).
The XZ decompressor needs memmove(), memeq() (memcmp() == 0), and
memzero() (memset(ptr, 0, size)), which aren't available in all
arch-specific pre-boot environments. I'm including simple versions in
decompress_unxz.c, but a cleaner solution would naturally be nicer.
Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Alain Knaff <alain@knaff.lu>
Cc: Albin Tonnerre <albin.tonnerre@free-electrons.com>
Cc: Phillip Lougher <phillip@lougher.demon.co.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-12 20:01:23 -05:00
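The simple versions mentioned above are essentially one-liners; a sketch of
what decompress_unxz.c provides when the pre-boot environment lacks them:

static bool memeq(const void *a, const void *b, size_t size)
{
        return memcmp(a, b, size) == 0;
}

static void memzero(void *buf, size_t size)
{
        memset(buf, 0, size);
}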
|
|
|
lib-$(CONFIG_DECOMPRESS_XZ) += decompress_unxz.o
|
2010-01-08 17:42:46 -05:00
|
|
|
lib-$(CONFIG_DECOMPRESS_LZO) += decompress_unlzo.o
|
2013-07-08 19:01:46 -04:00
|
|
|
lib-$(CONFIG_DECOMPRESS_LZ4) += decompress_unlz4.o
|
2020-07-30 15:08:35 -04:00
|
|
|
lib-$(CONFIG_DECOMPRESS_ZSTD) += decompress_unzstd.o
|
2009-01-05 16:48:31 -05:00
|
|
|
|
2005-06-24 02:49:52 -04:00
|
|
|
obj-$(CONFIG_TEXTSEARCH) += textsearch.o
|
[LIB]: Knuth-Morris-Pratt textsearch algorithm
Implements a linear-time string-matching algorithm due to Knuth,
Morris, and Pratt [1]. Their algorithm avoids the explicit
computation of the transition function DELTA altogether. Its
matching time is O(n), for n being length(text), using just an
auxiliary function PI[1..m], for m being length(pattern),
precomputed from the pattern in time O(m). The array PI allows
the transition function DELTA to be computed efficiently
"on the fly" as needed. Roughly speaking, for any state
"q" = 0,1,...,m and any character "a" in SIGMA, the value
PI["q"] contains the information that is independent of "a" and
is needed to compute DELTA("q", "a") [2]. Since the array PI
has only m entries, whereas DELTA has O(m|SIGMA|) entries, we
save a factor of |SIGMA| in the preprocessing time by computing
PI rather than DELTA.
[1] Cormen, Leiserson, Rivest, Stein
Introduction to Algorithms, 2nd Edition, MIT Press
[2] See finite automata theory
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-23 23:58:37 -04:00
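For reference, the auxiliary function PI can be computed in O(m) with the
classic prefix-function loop; a generic sketch (0-indexed, not the exact
ts_kmp.c code):

/* pi[q] = length of the longest proper prefix of pattern[0..q]
 * that is also a suffix of it */
static void example_compute_prefix(const unsigned char *pattern, unsigned int m,
                                   unsigned int *pi)
{
        unsigned int k = 0, q;

        pi[0] = 0;
        for (q = 1; q < m; q++) {
                while (k > 0 && pattern[k] != pattern[q])
                        k = pi[k - 1];
                if (pattern[k] == pattern[q])
                        k++;
                pi[q] = k;
        }
}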
|
|
|
obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
|
2005-08-25 19:12:22 -04:00
|
|
|
obj-$(CONFIG_TEXTSEARCH_BM) += ts_bm.o
|
2005-06-23 23:59:16 -04:00
|
|
|
obj-$(CONFIG_TEXTSEARCH_FSM) += ts_fsm.o
|
2006-06-23 05:05:40 -04:00
|
|
|
obj-$(CONFIG_SMP) += percpu_counter.o
|
2006-09-12 03:04:40 -04:00
|
|
|
obj-$(CONFIG_AUDIT_GENERIC) += audit.o
|
2014-03-15 01:48:00 -04:00
|
|
|
obj-$(CONFIG_AUDIT_COMPAT_GENERIC) += compat_audit.o
|
2005-06-23 23:49:30 -04:00
|
|
|
|
2018-04-03 09:34:58 -04:00
|
|
|
obj-$(CONFIG_IOMMU_HELPER) += iommu-helper.o
|
2006-12-08 05:39:43 -05:00
|
|
|
obj-$(CONFIG_FAULT_INJECTION) += fault-inject.o
|
2020-10-15 23:13:46 -04:00
|
|
|
obj-$(CONFIG_FAULT_INJECTION_USERCOPY) += fault-inject-usercopy.o
|
2012-07-30 17:43:02 -04:00
|
|
|
obj-$(CONFIG_NOTIFIER_ERROR_INJECTION) += notifier-error-inject.o
|
2012-07-30 17:43:07 -04:00
|
|
|
obj-$(CONFIG_PM_NOTIFIER_ERROR_INJECT) += pm-notifier-error-inject.o
|
2015-11-28 07:45:28 -05:00
|
|
|
obj-$(CONFIG_NETDEV_NOTIFIER_ERROR_INJECT) += netdev-notifier-error-inject.o
|
2012-07-30 17:43:10 -04:00
|
|
|
obj-$(CONFIG_MEMORY_NOTIFIER_ERROR_INJECT) += memory-notifier-error-inject.o
|
2012-12-13 18:32:52 -05:00
|
|
|
obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) += \
|
|
|
|
of-reconfig-notifier-error-inject.o
|
2018-01-12 12:55:03 -05:00
|
|
|
obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
|
2005-09-29 17:42:42 -04:00
|
|
|
|
[PATCH] Generic BUG implementation
This patch adds common handling for kernel BUGs, for use by architectures as
they wish. The code is derived from arch/powerpc.
The advantages of having common BUG handling are:
- consistent BUG reporting across architectures
- shared implementation of out-of-line file/line data
- implement CONFIG_DEBUG_BUGVERBOSE consistently
This means that the inline impact of BUG is just the illegal instruction
itself, which is an improvement for i386 and x86-64.
A BUG is represented in the instruction stream as an illegal instruction,
which has file/line information associated with it. This extra information is
stored in the __bug_table section in the ELF file.
When the kernel gets an illegal instruction, it first confirms it might
possibly be from a BUG (ie, in kernel mode, the right illegal instruction).
It then calls report_bug(). This searches __bug_table for a matching
instruction pointer, and if found, prints the corresponding file/line
information. If report_bug() determines that it wasn't a BUG which caused the
trap, it returns BUG_TRAP_TYPE_NONE.
Some architectures (powerpc) implement WARN using the same mechanism; if the
illegal instruction was the result of a WARN, then report_bug() returns
BUG_TRAP_TYPE_WARN; otherwise it returns BUG_TRAP_TYPE_BUG.
lib/bug.c keeps a list of loaded modules which can be searched for __bug_table
entries. The architecture must call
module_bug_finalize()/module_bug_cleanup() from its corresponding
module_finalize/cleanup functions.
Unsetting CONFIG_DEBUG_BUGVERBOSE will reduce the kernel size by some amount.
At the very least, filename and line information will not be recorded for each
BUG, but architectures may decide to store no extra information per BUG at
all.
Unfortunately, gcc doesn't have a general way to mark an asm() as noreturn, so
architectures will generally have to include an infinite loop (or similar) in
the BUG code, so that gcc knows execution won't continue beyond that point.
gcc does have a __builtin_trap() operator which may be useful to achieve the
same effect, unfortunately it cannot be used to actually implement the BUG
itself, because there's no way to get the instruction's address for use in
generating the __bug_table entry.
[randy.dunlap@oracle.com: Handle BUG=n, GENERIC_BUG=n to prevent build errors]
[bunk@stusta.de: include/linux/bug.h must always #include <linux/module.h>]
Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Andi Kleen <ak@muc.de>
Cc: Hugh Dickens <hugh@veritas.com>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-08 05:36:19 -05:00
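Conceptually, each BUG site contributes one record to the __bug_table
section, roughly of the shape sketched below (illustrative; the real
struct bug_entry layout is arch- and config-dependent and may use relative
pointers):

struct example_bug_entry {
        unsigned long   bug_addr;       /* address of the trapping instruction */
        const char      *file;          /* only with CONFIG_DEBUG_BUGVERBOSE */
        unsigned short  line;
        unsigned short  flags;          /* e.g. marks a WARN vs a plain BUG */
};

/* report_bug() searches this table for the faulting instruction pointer
 * and, if found, prints the file/line information */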
|
|
|
lib-$(CONFIG_GENERIC_BUG) += bug.o
|
|
|
|
|
2008-07-25 22:45:59 -04:00
|
|
|
obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o
|
|
|
|
|
2020-06-08 00:40:14 -04:00
|
|
|
obj-$(CONFIG_DYNAMIC_DEBUG_CORE) += dynamic_debug.o
|
dyndbg: cleanup dynamic usage in ib_srp.c
Currently, in dynamic_debug.h we only provide
DEFINE_DYNAMIC_DEBUG_METADATA() and DYNAMIC_DEBUG_BRANCH()
definitions if CONFIG_DYNAMIC_DEBUG_CORE is enabled. Thus, drivers
such as infiniband srp (see: drivers/infiniband/ulp/srp/ib_srp.c)
must provide their own definitions for !CONFIG_DYNAMIC_DEBUG_CORE.
Thus, let's move this !CONFIG_DYNAMIC_DEBUG_CORE case into dynamic_debug.h.
However, the dynamic debug interfaces should really only be defined
if CONFIG_DYNAMIC_DEBUG is set or CONFIG_DYNAMIC_DEBUG_CORE is set along
with DYNAMIC_DEBUG_MODULE (see:
Documentation/admin-guide/dynamic-debug-howto.rst). Thus, the
undefined case becomes: !(CONFIG_DYNAMIC_DEBUG ||
(CONFIG_DYNAMIC_DEBUG_CORE && DYNAMIC_DEBUG_MODULE)).
With those changes in place, we can remove the !CONFIG_DYNAMIC_DEBUG_CORE
case from ib_srp.c.
This change was prompted by a build breakage in ib_srp.c stemming
from the inclusion of dynamic_debug.h unconditionally in module.h, due
to commit 7deabd674988 ("dyndbg: use the module notifier callbacks").
In that case, if we have CONFIG_DYNAMIC_DEBUG_CORE=y and
CONFIG_DYNAMIC_DEBUG=n then the definitions for
DEFINE_DYNAMIC_DEBUG_METADATA() and DYNAMIC_DEBUG_BRANCH() are defined
once in ib_srp.c and then again in dynamic_debug.h. This had been
working prior to the above referenced commit because dynamic_debug.h
was only pulled into ib_srp.c conditionally via printk.h if
CONFIG_DYNAMIC_DEBUG was set.
Also, the exported functions in lib/dynamic_debug.c itself may
not have a prototype if CONFIG_DYNAMIC_DEBUG=n and
CONFIG_DYNAMIC_DEBUG_CORE=y. This would trigger the -Wmissing-prototypes
warning.
The exported functions are behind (include/linux/dynamic_debug.h):
if defined(CONFIG_DYNAMIC_DEBUG) || \
(defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
Thus, by adding -DDYNAMIC_DEBUG_MODULE to the lib/Makefile we
can ensure that the exported functions have a prototype in all cases,
since lib/dynamic_debug.c is built whenever
CONFIG_DYNAMIC_DEBUG_CORE=y.
Fixes: 7deabd674988 ("dyndbg: use the module notifier callbacks")
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/oe-kbuild-all/202303071444.sIbZTDCy-lkp@intel.com/
Signed-off-by: Jason Baron <jbaron@akamai.com>
[mcgrof: adjust commit log, and remove urldefense from URL]
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
2023-03-10 16:27:28 -05:00
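Spelled out as a preprocessor guard, the condition quoted above looks like
this (the comments stand in for the actual macro bodies):

#if defined(CONFIG_DYNAMIC_DEBUG) || \
        (defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
/* real DEFINE_DYNAMIC_DEBUG_METADATA() / DYNAMIC_DEBUG_BRANCH() definitions */
#else
/* inert fallbacks, so drivers such as ib_srp.c need no private copies */
#endif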
|
|
|
#ensure exported functions have prototypes
|
|
|
|
CFLAGS_dynamic_debug.o := -DDYNAMIC_DEBUG_MODULE
|
|
|
|
|
printf: add support for printing symbolic error names
It has been suggested several times to extend vsnprintf() to be able
to convert the numeric value of ENOSPC to print "ENOSPC". This
implements that as a %p extension: With %pe, one can do
if (IS_ERR(foo)) {
pr_err("Sorry, can't do that: %pe\n", foo);
return PTR_ERR(foo);
}
instead of what is seen in quite a few places in the kernel:
if (IS_ERR(foo)) {
pr_err("Sorry, can't do that: %ld\n", PTR_ERR(foo));
return PTR_ERR(foo);
}
If the value passed to %pe is an ERR_PTR, but the library function
errname() added here doesn't know about the value, the value is simply
printed in decimal. If the value passed to %pe is not an ERR_PTR, we
treat it as an ordinary %p and thus print the hashed value (passing
non-ERR_PTR values to %pe indicates a bug in the caller, but we can't
do much about that).
With my embedded hat on, and because it's not very invasive to do,
I've made it possible to remove this. The errname() function and
associated lookup tables take up about 3K. For most, that's probably
quite acceptable and a price worth paying for more readable
dmesg (once this starts getting used), while for those that disable
printk() it's of very little use - I don't see a
procfs/sysfs/seq_printf() file reasonably making use of this - and
they clearly want to squeeze vmlinux as much as possible. Hence the
default y if PRINTK.
The symbols to include have been found by massaging the output of
find arch include -iname 'errno*.h' | xargs grep -E 'define\s*E'
In the cases where some common aliasing exists
(e.g. EAGAIN=EWOULDBLOCK on all platforms, EDEADLOCK=EDEADLK on most),
I've moved the more popular one (in terms of 'git grep -w Efoo | wc)
to the bottom so that one takes precedence.
Link: http://lkml.kernel.org/r/20191015190706.15989-1-linux@rasmusvillemoes.dk
To: "Jonathan Corbet" <corbet@lwn.net>
To: linux-kernel@vger.kernel.org
Cc: "Andy Shevchenko" <andy.shevchenko@gmail.com>
Cc: "Andrew Morton" <akpm@linux-foundation.org>
Cc: "Joe Perches" <joe@perches.com>
Cc: linux-doc@vger.kernel.org
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Acked-by: Uwe Kleine-König <uwe@kleine-koenig.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
[andy.shevchenko@gmail.com: use abs()]
Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
2019-10-15 15:07:05 -04:00
|
|
|
obj-$(CONFIG_SYMBOLIC_ERRNAME) += errname.o
|
driver core: basic infrastructure for per-module dynamic debug messages
Base infrastructure to enable per-module debug messages.
I've introduced CONFIG_DYNAMIC_PRINTK_DEBUG, which when enabled centralizes
control of debugging statements on a per-module basis in one /proc file,
currently, <debugfs>/dynamic_printk/modules. When, CONFIG_DYNAMIC_PRINTK_DEBUG,
is not set, debugging statements can still be enabled as before, often by
defining 'DEBUG' for the proper compilation unit. Thus, this patch set has no
effect when CONFIG_DYNAMIC_PRINTK_DEBUG is not set.
The infrastructure currently ties into all pr_debug() and dev_dbg() calls. That
is, if CONFIG_DYNAMIC_PRINTK_DEBUG is set, all pr_debug() and dev_dbg() calls
can be dynamically enabled/disabled on a per-module basis.
Future plans include extending this functionality to subsystems that define
their own debug levels and flags.
Usage:
Dynamic debugging is controlled by the debugfs file,
<debugfs>/dynamic_printk/modules. This file contains a list of the modules that
can be enabled. The format of the file is as follows:
<module_name> <enabled=0/1>
.
.
.
<module_name> : Name of the module in which the debug call resides
<enabled=0/1> : whether the messages are enabled or not
For example:
snd_hda_intel enabled=0
fixup enabled=1
driver enabled=0
Enable a module:
$echo "set enabled=1 <module_name>" > dynamic_printk/modules
Disable a module:
$echo "set enabled=0 <module_name>" > dynamic_printk/modules
Enable all modules:
$echo "set enabled=1 all" > dynamic_printk/modules
Disable all modules:
$echo "set enabled=0 all" > dynamic_printk/modules
Finally, passing "dynamic_printk" at the command line enables
debugging for all modules. This mode can be turned off via the above
disable command.
[gkh: minor cleanups and tweaks to make the build work quietly]
Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2008-08-12 16:46:19 -04:00
|
|
|
|
2009-03-04 01:53:30 -05:00
|
|
|
obj-$(CONFIG_NLATTR) += nlattr.o
|
driver core: basic infrastructure for per-module dynamic debug messages
Base infrastructure to enable per-module debug messages.
I've introduced CONFIG_DYNAMIC_PRINTK_DEBUG, which when enabled centralizes
control of debugging statements on a per-module basis in one /proc file,
currently, <debugfs>/dynamic_printk/modules. When, CONFIG_DYNAMIC_PRINTK_DEBUG,
is not set, debugging statements can still be enabled as before, often by
defining 'DEBUG' for the proper compilation unit. Thus, this patch set has no
effect when CONFIG_DYNAMIC_PRINTK_DEBUG is not set.
The infrastructure currently ties into all pr_debug() and dev_dbg() calls. That
is, if CONFIG_DYNAMIC_PRINTK_DEBUG is set, all pr_debug() and dev_dbg() calls
can be dynamically enabled/disabled on a per-module basis.
Future plans include extending this functionality to subsystems that define
their own debug levels and flags.
Usage:
Dynamic debugging is controlled by the debugfs file,
<debugfs>/dynamic_printk/modules. This file contains a list of the modules that
can be enabled. The format of the file is as follows:
<module_name> <enabled=0/1>
.
.
.
<module_name> : Name of the module in which the debug call resides
<enabled=0/1> : whether the messages are enabled or not
For example:
snd_hda_intel enabled=0
fixup enabled=1
driver enabled=0
Enable a module:
$echo "set enabled=1 <module_name>" > dynamic_printk/modules
Disable a module:
$echo "set enabled=0 <module_name>" > dynamic_printk/modules
Enable all modules:
$echo "set enabled=1 all" > dynamic_printk/modules
Disable all modules:
$echo "set enabled=0 all" > dynamic_printk/modules
Finally, passing "dynamic_printk" at the command line enables
debugging for all modules. This mode can be turned off via the above
disable command.
[gkh: minor cleanups and tweaks to make the build work quietly]
Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2008-08-12 16:46:19 -04:00
|
|
|
|
2009-09-25 19:07:19 -04:00
|
|
|
obj-$(CONFIG_LRU_CACHE) += lru_cache.o
|
|
|
|
|
2009-05-13 18:56:38 -04:00
|
|
|
obj-$(CONFIG_GENERIC_CSUM) += checksum.o
|
|
|
|
|
2009-06-12 17:10:05 -04:00
|
|
|
obj-$(CONFIG_GENERIC_ATOMIC64) += atomic64.o
|
|
|
|
|
2010-02-24 04:54:24 -05:00
|
|
|
obj-$(CONFIG_ATOMIC64_SELFTEST) += atomic64_test.o
|
|
|
|
|
2011-01-19 06:03:25 -05:00
|
|
|
obj-$(CONFIG_CPU_RMAP) += cpu_rmap.o
|
|
|
|
|
2017-03-17 20:35:23 -04:00
|
|
|
obj-$(CONFIG_CLOSURES) += closure.o
|
|
|
|
|
dql: Dynamic queue limits
Implementation of dynamic queue limits (dql). This is a library which
allows a queue limit to be dynamically managed. The goal of dql is
to set the queue limit (the number of objects allowed onto the queue)
as low as possible without allowing the queue to be starved.
dql would be used with a queue which has these properties:
1) Objects are queued up to some limit which can be expressed as a
count of objects.
2) Periodically a completion process executes which retires consumed
objects.
3) Starvation occurs when limit has been reached, all queued data has
actually been consumed but completion processing has not yet run,
so queuing new data is blocked.
4) Minimizing the amount of queued data is desirable.
A canonical example of such a queue would be a NIC HW transmit queue.
The queue limit is dynamic, it will increase or decrease over time
depending on the workload. The queue limit is recalculated each time
completion processing is done. Increases occur when the queue is
starved and can exponentially increase over successive intervals.
Decreases occur when more data is being maintained in the queue than
needed to prevent starvation. The number of extra objects, or "slack",
is measured over successive intervals, and to avoid hysteresis the
limit is only reduced by the minimum slack seen over a configurable
time period.
dql API provides routines to manage the queue:
- dql_init is called to initialize the dql structure
- dql_reset is called to reset dynamic values
- dql_queued called when objects are being enqueued
- dql_avail returns availability in the queue
- dql_completed is called when objects have been consumed in the queue
Configuration consists of:
- max_limit, maximum limit
- min_limit, minimum limit
- slack_hold_time, time to measure instances of slack before reducing
queue limit
Signed-off-by: Tom Herbert <therbert@google.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-11-28 11:32:35 -05:00
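A rough sketch of how a driver's transmit and completion paths would use the
routines listed above (structure only; see <linux/dynamic_queue_limits.h> for
the exact interface):

/* enqueue path: account for objects just posted to the hardware queue */
static void example_xmit(struct dql *dql, unsigned int count)
{
        dql_queued(dql, count);
        if (dql_avail(dql) < 0) {
                /* limit reached: stop queuing new objects */
        }
}

/* completion path: retire consumed objects, which recalculates the limit */
static void example_tx_complete(struct dql *dql, unsigned int count)
{
        dql_completed(dql, count);
        if (dql_avail(dql) >= 0) {
                /* room again: queuing can resume */
        }
}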
|
|
|
obj-$(CONFIG_DQL) += dynamic_queue_limits.o
|
|
|
|
|
2014-08-06 19:09:23 -04:00
|
|
|
obj-$(CONFIG_GLOB) += glob.o
|
2017-02-24 18:00:52 -05:00
|
|
|
obj-$(CONFIG_GLOB_SELFTEST) += globtest.o
|
2014-08-06 19:09:23 -04:00
|
|
|
|
2019-01-10 10:33:17 -05:00
|
|
|
obj-$(CONFIG_DIMLIB) += dim/
|
2012-01-17 10:12:03 -05:00
|
|
|
obj-$(CONFIG_SIGNATURE) += digsig.o
|
2011-08-31 07:05:16 -04:00
|
|
|
|
2016-01-20 17:59:12 -05:00
|
|
|
lib-$(CONFIG_CLZ_TAB) += clz_tab.o
|
2012-02-01 17:17:54 -05:00
|
|
|
|
2012-05-24 16:12:28 -04:00
|
|
|
obj-$(CONFIG_GENERIC_STRNCPY_FROM_USER) += strncpy_from_user.o
|
2012-05-26 14:06:38 -04:00
|
|
|
obj-$(CONFIG_GENERIC_STRNLEN_USER) += strnlen_user.o
|
2012-05-24 16:12:28 -04:00
|
|
|
|
2013-06-04 12:46:26 -04:00
|
|
|
obj-$(CONFIG_GENERIC_NET_UTILS) += net_utils.o
|
|
|
|
|
lib: scatterlist: add sg splitting function
Sometimes a scatter-gather has to be split into several chunks, or sub
scatter lists. This happens for example if a scatter list will be
handled by multiple DMA channels, each one filling a part of it.
A concrete example comes with the media V4L2 API, where the scatter list
is allocated from userspace to hold an image, regardless of the
knowledge of how many DMAs will fill it:
- in a simple RGB565 case, one DMA will pump data from the camera ISP
to memory
- in the trickier YUV422 case, 3 DMAs will pump data from the camera
ISP pipes, one for pipe Y, one for pipe U and one for pipe V
For these cases, it is necessary to split the original scatter list into
multiple scatter lists, which is the purpose of this patch.
The guarantees that are required for this patch are :
- the intersection of spans of any couple of resulting scatter lists is
empty.
- the union of spans of all resulting scatter lists is a subrange of
the span of the original scatter list.
- streaming DMA API operations (mapping, unmapping) should not happen
on both the resulting and the original scatter lists. It's either
the former or the latter ones.
- the caller is responsible for calling kfree() on the resulting
scatterlists.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-08 04:44:10 -04:00
|
|
|
obj-$(CONFIG_SG_SPLIT) += sg_split.o
|
2016-04-04 17:48:11 -04:00
|
|
|
obj-$(CONFIG_SG_POOL) += sg_pool.o
|
2019-11-06 20:43:31 -05:00
|
|
|
obj-$(CONFIG_MEMREGION) += memregion.o
|
lib: add support for stmp-style devices
MX23/28 use IP cores which follow a register layout I have first seen on
STMP3xxx SoCs. In this layout, every register actually has four u32:
1.) to store a value directly
2.) a SET register where every 1-bit sets the corresponding bit,
others are unaffected
3.) same with a CLR register
4.) same with a TOG (toggle) register
Also, the 2 MSBs in register 0 are always the same and can be used to reset
the IP core.
All this is strictly speaking not mach-specific (but IP core specific) and,
thus, doesn't need to be in mach-mxs/include. At least mx6 also uses IP cores
following this stmp-style. So:
Introduce a stmp-style device, put the code and defines for that in a public
place (lib/), and let drivers for stmp-style devices select that code.
To avoid regressions and ease reviewing, the actual code is simply copied from
mach-mxs. It definitely wants updates, but those need a separate patch series.
Voila, mach dependency gone, reusable code introduced. Note that I didn't
remove the duplicated code from mach-mxs yet, first the drivers have to be
converted.
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Acked-by: Dong Aisheng <dong.aisheng@linaro.org>
2011-08-31 14:35:40 -04:00
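The layout described above lets a driver set, clear or toggle bits without a
read-modify-write cycle; a hedged sketch of the convention (offsets follow
the SET/CLR/TOG scheme described above; the names are illustrative):

#define EX_STMP_OFFSET_SET      0x4     /* write 1s to set bits */
#define EX_STMP_OFFSET_CLR      0x8     /* write 1s to clear bits */
#define EX_STMP_OFFSET_TOG      0xc     /* write 1s to toggle bits */

static void ex_stmp_set_bits(void __iomem *reg, u32 mask)
{
        writel(mask, reg + EX_STMP_OFFSET_SET);
}

static void ex_stmp_clear_bits(void __iomem *reg, u32 mask)
{
        writel(mask, reg + EX_STMP_OFFSET_CLR);
}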
|
|
|
obj-$(CONFIG_STMP_DEVICE) += stmp_device.o
|
2015-11-10 08:56:14 -05:00
|
|
|
obj-$(CONFIG_IRQ_POLL) += irq_poll.o
|
lib: add support for stmp-style devices
MX23/28 use IP cores which follow a register layout I have first seen on
STMP3xxx SoCs. In this layout, every register actually has four u32:
1.) to store a value directly
2.) a SET register where every 1-bit sets the corresponding bit,
others are unaffected
3.) same with a CLR register
4.) same with a TOG (toggle) register
Also, the 2 MSBs in register 0 are always the same and can be used to reset
the IP core.
All this is strictly speaking not mach-specific (but IP core specific) and,
thus, doesn't need to be in mach-mxs/include. At least mx6 also uses IP cores
following this stmp-style. So:
Introduce a stmp-style device, put the code and defines for that in a public
place (lib/), and let drivers for stmp-style devices select that code.
To avoid regressions and ease reviewing, the actual code is simply copied from
mach-mxs. It definitely wants updates, but those need a separate patch series.
Voila, mach dependency gone, reusable code introduced. Note that I didn't
remove the duplicated code from mach-mxs yet, first the drivers have to be
converted.
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Acked-by: Dong Aisheng <dong.aisheng@linaro.org>
2011-08-31 14:35:40 -04:00
|
|
|
|
2022-04-01 17:40:29 -04:00
|
|
|
obj-$(CONFIG_POLYNOMIAL) += polynomial.o
|
|
|
|
|
2020-04-06 23:10:19 -04:00
|
|
|
# stackdepot.c should not be instrumented or call instrumented functions.
|
|
|
|
# Prevent the compiler from calling builtins like memcmp() or bcmp() from this
|
|
|
|
# file.
|
|
|
|
CFLAGS_stackdepot.o += -fno-builtin
|
2016-03-25 17:22:08 -04:00
|
|
|
obj-$(CONFIG_STACKDEPOT) += stackdepot.o
|
|
|
|
KASAN_SANITIZE_stackdepot.o := n
|
2022-09-15 11:03:46 -04:00
|
|
|
# In particular, instrumenting stackdepot.c with KMSAN will result in infinite
|
|
|
|
# recursion.
|
|
|
|
KMSAN_SANITIZE_stackdepot.o := n
|
2016-10-11 16:54:47 -04:00
|
|
|
KCOV_INSTRUMENT_stackdepot.o := n
|
2016-03-25 17:22:08 -04:00
|
|
|
|
2021-12-04 23:21:55 -05:00
|
|
|
obj-$(CONFIG_REF_TRACKER) += ref_tracker.o
|
|
|
|
|
2014-02-04 11:11:10 -05:00
|
|
|
libfdt_files = fdt.o fdt_ro.o fdt_wip.o fdt_rw.o fdt_sw.o fdt_strerror.o \
|
2019-12-08 22:03:44 -05:00
|
|
|
fdt_empty_tree.o fdt_addresses.o
|
2012-07-05 12:12:38 -04:00
|
|
|
$(foreach file, $(libfdt_files), \
|
2019-01-24 22:41:38 -05:00
|
|
|
$(eval CFLAGS_$(file) = -I $(srctree)/scripts/dtc/libfdt))
|
2012-07-05 12:12:38 -04:00
|
|
|
lib-$(CONFIG_LIBFDT) += $(libfdt_files)
|
|
|
|
|
2022-04-05 22:30:59 -04:00
|
|
|
obj-$(CONFIG_BOOT_CONFIG) += bootconfig.o
|
2022-04-05 22:31:19 -04:00
|
|
|
obj-$(CONFIG_BOOT_CONFIG_EMBED) += bootconfig-data.o
|
|
|
|
|
|
|
|
$(obj)/bootconfig-data.o: $(obj)/default.bconf
|
|
|
|
|
|
|
|
targets += default.bconf
|
|
|
|
filechk_defbconf = cat $(or $(real-prereqs), /dev/null)
|
|
|
|
$(obj)/default.bconf: $(CONFIG_BOOT_CONFIG_EMBED_FILE) FORCE
|
|
|
|
$(call filechk,defbconf)
|
2020-01-10 11:03:32 -05:00
|
|
|
|
2012-10-08 19:30:39 -04:00
|
|
|
obj-$(CONFIG_RBTREE_TEST) += rbtree_test.o
|
rbtree: add prio tree and interval tree tests
Patch 1 implements support for interval trees, on top of the augmented
rbtree API. It also adds synthetic tests to compare the performance of
interval trees vs prio trees. Short answers is that interval trees are
slightly faster (~25%) on insert/erase, and much faster (~2.4 - 3x)
on search. It is debatable how realistic the synthetic test is, and I have
not made such measurements yet, but my impression is that interval trees
would still come out faster.
Patch 2 uses a preprocessor template to make the interval tree generic,
and uses it as a replacement for the vma prio_tree.
Patch 3 takes the other prio_tree user, kmemleak, and converts it to use
a basic rbtree. We don't actually need the augmented rbtree support here
because the intervals are always non-overlapping.
Patch 4 removes the now-unused prio tree library.
Patch 5 proposes an additional optimization to rb_erase_augmented, now
providing it as an inline function so that the augmented callbacks can be
inlined in. This provides an additional 5-10% performance improvement
for the interval tree insert/erase benchmark. There is a maintenance cost
as it exposes augmented rbtree users to some of the rbtree library internals;
however I think this cost shouldn't be too high as I expect the augmented
rbtree will always have far fewer users than the base rbtree.
I should probably add a quick summary of why I think it makes sense to
replace prio trees with augmented rbtree based interval trees now. One of
the drivers is that we need augmented rbtrees for Rik's vma gap finding
code, and once you have them, it just makes sense to use them for interval
trees as well, as this is the simpler and more well known algorithm. prio
trees, in comparison, seem *too* clever: they impose an additional 'heap'
constraint on the tree, which they use to guarantee a faster worst-case
complexity of O(k+log N) for stabbing queries in a well-balanced prio
tree, vs O(k*log N) for interval trees (where k=number of matches,
N=number of intervals). Now this sounds great, but in practice prio trees
don't realize this theoretical benefit. First, the additional constraint
makes them harder to update, so that the kernel implementation has to
simplify things by balancing them like a radix tree, which is not always
ideal. Second, the fact that there are both index and heap properties
makes both tree manipulation and search more complex, which results in a
higher multiplicative time constant. As it turns out, the simple interval
tree algorithm ends up running faster than the more clever prio tree.
This patch:
Add two test modules:
- prio_tree_test measures the performance of lib/prio_tree.c, both for
insertion/removal and for stabbing searches
- interval_tree_test measures the performance of a library of equivalent
functionality, built using the augmented rbtree support.
In order to support the second test module, lib/interval_tree.c is
introduced. It is kept separate from the interval_tree_test main file
for two reasons: first we don't want to provide an unfair advantage
over prio_tree_test by having everything in a single compilation unit,
and second there is the possibility that the interval tree functionality
could get some non-test users in the kernel over time.
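As a rough usage sketch of that library (hedged: this assumes the present-day
interface in include/linux/interval_tree.h, where the rb_root_cached form of
the root arrived later than this patch; the values are only illustrative):

#include <linux/interval_tree.h>
#include <linux/printk.h>
#include <linux/slab.h>

static struct rb_root_cached itree = RB_ROOT_CACHED;

static void interval_tree_example(void)
{
	struct interval_tree_node *node, *it;

	node = kzalloc(sizeof(*node), GFP_KERNEL);
	if (!node)
		return;
	node->start = 10;	/* intervals are [start, last], both inclusive */
	node->last = 19;
	interval_tree_insert(node, &itree);

	/* Stabbing query: visit every node overlapping [15, 15]. */
	for (it = interval_tree_iter_first(&itree, 15, 15); it;
	     it = interval_tree_iter_next(it, 15, 15))
		pr_info("overlap [%lu, %lu]\n", it->start, it->last);
}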
Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
obj-$(CONFIG_INTERVAL_TREE_TEST) += interval_tree_test.o
obj-$(CONFIG_PERCPU_TEST) += percpu_test.o

obj-$(CONFIG_ASN1) += asn1_decoder.o
obj-$(CONFIG_ASN1_ENCODER) += asn1_encoder.o

obj-$(CONFIG_FONT_SUPPORT) += fonts/

hostprogs := gen_crc32table
hostprogs += gen_crc64table
clean-files := crc32table.h
lib: add crc64 calculation routines
Patch series "add crc64 calculation as kernel library", v5.
This patchset adds a basic implementation of crc64 calculation as a Linux
kernel library. Since bcache already does crc64 by itself, this patchset
also modifies the bcache code to use the new crc64 library routine.
Currently bcache is the only user of crc64 calculation; another potential
user is bcachefs, which is on its way into the mainline kernel. Therefore
it makes sense to make crc64 calculation a public library.
bcache uses crc64 as its storage checksum; if a change to the crc lib routines
produces an inconsistent result, the unmatched checksum may make bcache
'think' the on-disk data is corrupted. Such a change should be avoided or
detected as early as possible. Therefore a patch is being prepared which
adds a crc test framework, to check consistency of different calculations.
This patch (of 2):
Add rewritten crc64 calculation routines for the Linux kernel. The CRC64
polynomial arithmetic follows the ECMA-182 specification, inspired by the
CRC paper of Dr. Ross N. Williams (see
http://www.ross.net/crc/download/crc_v3.txt) and other public domain
implementations.
All the changes work in this way:
- When the Linux kernel is built, the host program lib/gen_crc64table.c is
  compiled to lib/gen_crc64table and executed.
- The output of the gen_crc64table run is a lookup table (for POLY
  0x42f0e1eba9ea3693) containing 256 64-bit numbers; this table is dumped
  into the header file lib/crc64table.h.
- Then the header file is included by lib/crc64.c for normal 64-bit CRC
  calculation.
- Function declarations of the crc64 calculation routines are placed in
  include/linux/crc64.h
Currently bcache is the only user of crc64_be(); another potential user is
bcachefs, which is on its way into the mainline kernel. Therefore it makes
sense to move crc64 calculation into lib/crc64.c as public code.
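A minimal hedged sketch of a caller, assuming the crc64_be() declaration this
series puts in include/linux/crc64.h (the zero seed is only an illustrative
choice):

#include <linux/crc64.h>
#include <linux/types.h>

/* Checksum a buffer with the ECMA-182, big-endian bit-order CRC64. */
static u64 checksum_buf(const void *data, size_t len)
{
	return crc64_be(0, data, len);
}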
[colyli@suse.de: fix review comments from v4]
Link: http://lkml.kernel.org/r/20180726053352.2781-2-colyli@suse.de
Link: http://lkml.kernel.org/r/20180718165545.1622-2-colyli@suse.de
Signed-off-by: Coly Li <colyli@suse.de>
Co-developed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Noah Massey <noah.massey@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
clean-files += crc64table.h

$(obj)/crc32.o: $(obj)/crc32table.h

quiet_cmd_crc32 = GEN $@
cmd_crc32 = $< > $@

$(obj)/crc32table.h: $(obj)/gen_crc32table
	$(call cmd,crc32)
$(obj)/crc64.o: $(obj)/crc64table.h

quiet_cmd_crc64 = GEN $@
cmd_crc64 = $< > $@

$(obj)/crc64table.h: $(obj)/gen_crc64table
	$(call cmd,crc64)

#
# Build a fast OID lookup registry from include/linux/oid_registry.h
#
obj-$(CONFIG_OID_REGISTRY) += oid_registry.o
$(obj)/oid_registry.o: $(obj)/oid_registry_data.c

$(obj)/oid_registry_data.c: $(srctree)/include/linux/oid_registry.h \
			$(src)/build_OID_registry
	$(call cmd,build_OID_registry)

quiet_cmd_build_OID_registry = GEN $@
cmd_build_OID_registry = perl $(srctree)/$(src)/build_OID_registry $< $@
clean-files += oid_registry_data.c
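The generated table backs look_up_OID(); a hedged sketch of a caller, assuming
the interface declared in include/linux/oid_registry.h (the byte string is the
DER payload of OID 1.2.840.113549.1.1.1, rsaEncryption, shown for illustration):

#include <linux/oid_registry.h>
#include <linux/types.h>

/* DER-encoded payload (tag and length stripped) of 1.2.840.113549.1.1.1 */
static const u8 rsa_oid[] = {
	0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d, 0x01, 0x01, 0x01,
};

static bool oid_is_rsa_encryption(void)
{
	return look_up_OID(rsa_oid, sizeof(rsa_oid)) == OID_rsaEncryption;
}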
obj-$(CONFIG_UCS2_STRING) += ucs2_string.o
obj-$(CONFIG_UBSAN) += ubsan.o
UBSAN_SANITIZE_ubsan.o := n
KASAN_SANITIZE_ubsan.o := n
KCSAN_SANITIZE_ubsan.o := n
CFLAGS_ubsan.o := -fno-stack-protector $(DISABLE_STACKLEAK_PLUGIN)
obj-$(CONFIG_SBITMAP) += sbitmap.o
obj-$(CONFIG_PARMAN) += parman.o

obj-y += group_cpus.o

# GCC library routines
obj-$(CONFIG_GENERIC_LIB_ASHLDI3) += ashldi3.o
obj-$(CONFIG_GENERIC_LIB_ASHRDI3) += ashrdi3.o
obj-$(CONFIG_GENERIC_LIB_LSHRDI3) += lshrdi3.o
obj-$(CONFIG_GENERIC_LIB_MULDI3) += muldi3.o
obj-$(CONFIG_GENERIC_LIB_CMPDI2) += cmpdi2.o
obj-$(CONFIG_GENERIC_LIB_UCMPDI2) += ucmpdi2.o
obj-$(CONFIG_OBJAGG) += objagg.o

# pldmfw library
obj-$(CONFIG_PLDMFW) += pldmfw/

# KUnit tests
CFLAGS_bitfield_kunit.o := $(DISABLE_STRUCTLEAK_PLUGIN)
obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o
obj-$(CONFIG_CHECKSUM_KUNIT) += checksum_kunit.o
obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o
obj-$(CONFIG_HASHTABLE_KUNIT_TEST) += hashtable_test.o
obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
obj-$(CONFIG_BITS_TEST) += test_bits.o
obj-$(CONFIG_CMDLINE_KUNIT_TEST) += cmdline_kunit.o
obj-$(CONFIG_SLUB_KUNIT_TEST) += slub_kunit.o
obj-$(CONFIG_MEMCPY_KUNIT_TEST) += memcpy_kunit.o
obj-$(CONFIG_IS_SIGNED_TYPE_KUNIT_TEST) += is_signed_type_kunit.o
overflow: Introduce overflows_type() and castable_to_type()
Implement a robust overflows_type() macro to test if a variable or
constant value would overflow another variable or type. This can be
used as a constant expression for static_assert() (which requires a
constant expression[1][2]) when used on constant values. This must be
constructed manually, since __builtin_add_overflow() does not produce
a constant expression[3].
Additionally add castable_to_type(), similar to __same_type(), but for
checking if a constant value would overflow if cast to a given type.
Add unit tests for overflows_type(), __same_type(), and castable_to_type()
to the existing KUnit "overflow" test:
[16:03:33] ================== overflow (21 subtests) ==================
...
[16:03:33] [PASSED] overflows_type_test
[16:03:33] [PASSED] same_type_test
[16:03:33] [PASSED] castable_to_type_test
[16:03:33] ==================== [PASSED] overflow =====================
[16:03:33] ============================================================
[16:03:33] Testing complete. Ran 21 tests: passed: 21
[16:03:33] Elapsed time: 24.022s total, 0.002s configuring, 22.598s building, 0.767s running
[1] https://en.cppreference.com/w/c/language/_Static_assert
[2] C11 standard (ISO/IEC 9899:2011): 6.7.10 Static assertions
[3] https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins.html
6.56 Built-in Functions to Perform Arithmetic with Overflow Checking
Built-in Function: bool __builtin_add_overflow (type1 a, type2 b,
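A hedged sketch of the intended compile-time usage (macro names as introduced
by this patch; the constants are only illustrative):

#include <linux/build_bug.h>
#include <linux/overflow.h>
#include <linux/types.h>

/* All of these are evaluated at compile time and generate no code. */
static_assert(!overflows_type(255, u8));	/* 255 fits in u8 */
static_assert(overflows_type(256, u8));		/* 256 does not */
static_assert(castable_to_type(-1, s8));	/* -1 survives a cast to s8 */
static_assert(!castable_to_type(-1, u8));	/* ... but not a cast to u8 */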
Cc: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Tom Rix <trix@redhat.com>
Cc: Daniel Latypov <dlatypov@google.com>
Cc: Vitor Massaru Iha <vitor@massaru.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: linux-hardening@vger.kernel.org
Cc: llvm@lists.linux.dev
Co-developed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20221024201125.1416422-1-gwan-gyeong.mun@intel.com
CFLAGS_overflow_kunit.o = $(call cc-disable-warning, tautological-constant-out-of-range-compare)
obj-$(CONFIG_OVERFLOW_KUNIT_TEST) += overflow_kunit.o
CFLAGS_stackinit_kunit.o += $(call cc-disable-warning, switch-unreachable)
obj-$(CONFIG_STACKINIT_KUNIT_TEST) += stackinit_kunit.o
kunit/fortify: Validate __alloc_size attribute results
Validate the effect of the __alloc_size attribute on allocators. If the
compiler doesn't support __builtin_dynamic_object_size(), skip the
associated tests.
(For GCC, just remove the "--make_options" line below...)
$ ./tools/testing/kunit/kunit.py run --arch x86_64 \
--kconfig_add CONFIG_FORTIFY_SOURCE=y \
--make_options LLVM=1
fortify
...
[15:16:30] ================== fortify (10 subtests) ===================
[15:16:30] [PASSED] known_sizes_test
[15:16:30] [PASSED] control_flow_split_test
[15:16:30] [PASSED] alloc_size_kmalloc_const_test
[15:16:30] [PASSED] alloc_size_kmalloc_dynamic_test
[15:16:30] [PASSED] alloc_size_vmalloc_const_test
[15:16:30] [PASSED] alloc_size_vmalloc_dynamic_test
[15:16:30] [PASSED] alloc_size_kvmalloc_const_test
[15:16:30] [PASSED] alloc_size_kvmalloc_dynamic_test
[15:16:30] [PASSED] alloc_size_devm_kmalloc_const_test
[15:16:30] [PASSED] alloc_size_devm_kmalloc_dynamic_test
[15:16:30] ===================== [PASSED] fortify =====================
[15:16:30] ============================================================
[15:16:30] Testing complete. Ran 10 tests: passed: 10
[15:16:31] Elapsed time: 8.348s total, 0.002s configuring, 6.923s building, 1.075s running
For earlier GCC prior to version 12, the dynamic tests will be skipped:
[15:18:59] ================== fortify (10 subtests) ===================
[15:18:59] [PASSED] known_sizes_test
[15:18:59] [PASSED] control_flow_split_test
[15:18:59] [PASSED] alloc_size_kmalloc_const_test
[15:18:59] [SKIPPED] alloc_size_kmalloc_dynamic_test
[15:18:59] [PASSED] alloc_size_vmalloc_const_test
[15:18:59] [SKIPPED] alloc_size_vmalloc_dynamic_test
[15:18:59] [PASSED] alloc_size_kvmalloc_const_test
[15:18:59] [SKIPPED] alloc_size_kvmalloc_dynamic_test
[15:18:59] [PASSED] alloc_size_devm_kmalloc_const_test
[15:18:59] [SKIPPED] alloc_size_devm_kmalloc_dynamic_test
[15:18:59] ===================== [PASSED] fortify =====================
[15:18:59] ============================================================
[15:18:59] Testing complete. Ran 10 tests: passed: 6, skipped: 4
[15:18:59] Elapsed time: 11.965s total, 0.002s configuring, 10.540s building, 1.068s running
Cc: David Gow <davidgow@google.com>
Cc: linux-hardening@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
CFLAGS_fortify_kunit.o += $(call cc-disable-warning, unsequenced)
CFLAGS_fortify_kunit.o += $(call cc-disable-warning, stringop-overread)
CFLAGS_fortify_kunit.o += $(call cc-disable-warning, stringop-truncation)
CFLAGS_fortify_kunit.o += $(DISABLE_STRUCTLEAK_PLUGIN)
obj-$(CONFIG_FORTIFY_KUNIT_TEST) += fortify_kunit.o
obj-$(CONFIG_STRCAT_KUNIT_TEST) += strcat_kunit.o
obj-$(CONFIG_STRSCPY_KUNIT_TEST) += strscpy_kunit.o
obj-$(CONFIG_SIPHASH_KUNIT_TEST) += siphash_kunit.o

obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o

obj-$(CONFIG_FIRMWARE_TABLE) += fw_table.o

# FORTIFY_SOURCE compile-time behavior tests
TEST_FORTIFY_SRCS = $(wildcard $(srctree)/$(src)/test_fortify/*-*.c)
TEST_FORTIFY_LOGS = $(patsubst $(srctree)/$(src)/%.c, %.log, $(TEST_FORTIFY_SRCS))
TEST_FORTIFY_LOG = test_fortify.log

quiet_cmd_test_fortify = TEST $@
cmd_test_fortify = $(CONFIG_SHELL) $(srctree)/scripts/test_fortify.sh \
		$< $@ "$(NM)" $(CC) $(c_flags) \
fortify: Detect struct member overflows in memcpy() at compile-time
memcpy() is dead; long live memcpy()
tl;dr: In order to eliminate a large class of common buffer overflow
flaws that continue to persist in the kernel, have memcpy() (under
CONFIG_FORTIFY_SOURCE) perform bounds checking of the destination struct
member when they have a known size. This would have caught all of the
memcpy()-related buffer write overflow flaws identified in at least the
last three years.
Background and analysis:
While stack-based buffer overflow flaws are largely mitigated by stack
canaries (and similar) features, heap-based buffer overflow flaws continue
to regularly appear in the kernel. Many classes of heap buffer overflows
are mitigated by FORTIFY_SOURCE when using the strcpy() family of
functions, but a significant number remain exposed through the memcpy()
family of functions.
At its core, FORTIFY_SOURCE uses the compiler's __builtin_object_size()
internal[0] to determine the available size at a target address based on
the compile-time known structure layout details. It operates in two
modes: outer bounds (0) and inner bounds (1). In mode 0, the size of the
enclosing structure is used. In mode 1, the size of the specific field
is used. For example:
struct object {
u16 scalar1; /* 2 bytes */
char array[6]; /* 6 bytes */
u64 scalar2; /* 8 bytes */
u32 scalar3; /* 4 bytes */
u32 scalar4; /* 4 bytes */
} instance;
__builtin_object_size(instance.array, 0) == 22, since the remaining size
of the enclosing structure starting from "array" is 22 bytes (6 + 8 +
4 + 4).
__builtin_object_size(instance.array, 1) == 6, since the remaining size
of the specific field "array" is 6 bytes.
The initial implementation of FORTIFY_SOURCE used mode 0 because there
were many cases of both strcpy() and memcpy() functions being used to
write (or read) across multiple fields in a structure. For example,
it would catch this, which is writing 2 bytes beyond the end of
"instance":
memcpy(&instance.array, data, 25);
While this didn't protect against overwriting adjacent fields in a given
structure, it would at least stop overflows from reaching beyond the
end of the structure into neighboring memory, and provided a meaningful
mitigation of a subset of buffer overflow flaws. However, many desirable
targets remain within the enclosing structure (for example function
pointers).
As it happened, there were very few cases of strcpy() family functions
intentionally writing beyond the end of a string buffer. Once all known
cases were removed from the kernel, the strcpy() family was tightened[1]
to use mode 1, providing greater mitigation coverage.
What remains is switching memcpy() to mode 1 as well, but making the
switch is much more difficult because of how frustrating it can be to
find existing "normal" uses of memcpy() that expect to write (or read)
across multiple fields. The root cause of the problem is that the C
language lacks a common pattern to indicate the intent of an author's
use of memcpy(), and is further complicated by the available compile-time
and run-time mitigation behaviors.
The FORTIFY_SOURCE mitigation comes in two halves: the compile-time half,
when both the buffer size _and_ the length of the copy is known, and the
run-time half, when only the buffer size is known. If neither size is
known, there is no bounds checking possible. At compile-time when the
compiler sees that a length will always exceed a known buffer size,
a warning can be deterministically emitted. For the run-time half,
the length is tested against the known size of the buffer, and the
overflowing operation is detected. (The performance overhead for these
tests is virtually zero.)
It is relatively easy to find compile-time false-positives since a warning
is always generated. Fixing the false positives, however, can be very
time-consuming as there are hundreds of instances. While it's possible
some over-read conditions could lead to kernel memory exposures, the bulk
of the risk comes from the run-time flaws where the length of a write
may end up being attacker-controlled and lead to an overflow.
Many of the compile-time false-positives take a form similar to this:
memcpy(&instance.scalar2, data, sizeof(instance.scalar2) +
sizeof(instance.scalar3));
and the run-time ones are similar, but lack a constant expression for the
size of the copy:
memcpy(instance.array, data, length);
The former is meant to cover multiple fields (though its style has been
frowned upon more recently), but has been technically legal. Both lack
any expressivity in the C language about the author's _intent_ in a way
that a compiler can check when the length isn't known at compile time.
A comment doesn't work well because what's needed is something a compiler
can directly reason about. Is a given memcpy() call expected to overflow
into neighbors? Is it not? By using the new struct_group() macro, this
intent can be much more easily encoded.
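A hedged sketch of that encoding, reusing the example structure above
(struct_group() is the helper from include/linux/stddef.h; names here are
illustrative, not taken from any particular driver):

#include <linux/stddef.h>
#include <linux/string.h>
#include <linux/types.h>

struct object {
	u16 scalar1;
	/* Copies are expected to span these two members: */
	struct_group(burst,
		char array[6];
		u64 scalar2;
	);
	u32 scalar3;
	u32 scalar4;
};

static void fill_burst(struct object *instance, const void *data)
{
	/* FORTIFY_SOURCE can bound this by sizeof(instance->burst)
	 * instead of falling back to the enclosing structure. */
	memcpy(&instance->burst, data, sizeof(instance->burst));
}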
It is not as easy to find the run-time false-positives since the code path
to exercise a seemingly out-of-bounds condition that is actually expected
may not be trivially reachable. Tightening the restrictions to block an
operation for a false positive will either potentially create a greater
flaw (if a copy is truncated by the mitigation), or destabilize the kernel
(e.g. with a BUG()), making things completely useless for the end user.
As a result, tightening the memcpy() restriction (when there is a
reasonable level of uncertainty about the number of false positives) needs
to first WARN() with no truncation. (Though any sufficiently paranoid
end-user can always opt to set the panic_on_warn=1 sysctl.) Once enough
development time has passed, the mitigation can be further intensified.
(Note that this patch is only the compile-time checking step, which is
a prerequisite to doing run-time checking, which will come in future
patches.)
Given the potential frustrations of weeding out all the false positives
when tightening the run-time checks, it is reasonable to wonder if these
changes would actually add meaningful protection. Looking at just the
last three years, there are 23 identified flaws with a CVE that mention
"buffer overflow", and 11 are memcpy()-related buffer overflows.
(For the remaining 12: 7 are array index overflows that would be
mitigated by systems built with CONFIG_UBSAN_BOUNDS=y: CVE-2019-0145,
CVE-2019-14835, CVE-2019-14896, CVE-2019-14897, CVE-2019-14901,
CVE-2019-17666, CVE-2021-28952. 2 are miscalculated allocation
sizes which could be mitigated with memory tagging: CVE-2019-16746,
CVE-2019-2181. 1 is an iovec buffer bug maybe mitigated by memory tagging:
CVE-2020-10742. 1 is a type confusion bug mitigated by stack canaries:
CVE-2020-10942. 1 is a string handling logic bug with no mitigation I'm
aware of: CVE-2021-28972.)
At my last count on an x86_64 allmodconfig build, there are 35,294
calls to memcpy(). With callers instrumented to report all places
where the buffer size is known but the length remains unknown (i.e. a
run-time bounds check is added), we can count how many new run-time
bounds checks are added when the destination and source arguments of
memcpy() are changed to use "mode 1" bounds checking: 1,276. This means
for the future run-time checking, there is a worst-case upper bound
of 3.6% false positives to fix. In addition, there were around 150 new
compile-time warnings to evaluate and fix (which have now been fixed).
With this instrumentation it's also possible to compare the places where
the known 11 memcpy() flaw overflows manifested against the resulting
list of potential new run-time bounds checks, as a measure of potential
efficacy of the tightened mitigation. Much to my surprise, horror, and
delight, all 11 flaws would have been detected by the newly added run-time
bounds checks, making this a distinctly clear mitigation improvement: 100%
coverage for known memcpy() flaws, with a possible 2 orders of magnitude
gain in coverage over existing but undiscovered run-time dynamic length
flaws (i.e. 1265 newly covered sites in addition to the 11 known), against
only <4% of all memcpy() callers maybe gaining a false positive run-time
check, with only about 150 new compile-time instances needing evaluation.
Specifically these would have been mitigated:
CVE-2020-24490 https://git.kernel.org/linus/a2ec905d1e160a33b2e210e45ad30445ef26ce0e
CVE-2020-12654 https://git.kernel.org/linus/3a9b153c5591548612c3955c9600a98150c81875
CVE-2020-12653 https://git.kernel.org/linus/b70261a288ea4d2f4ac7cd04be08a9f0f2de4f4d
CVE-2019-14895 https://git.kernel.org/linus/3d94a4a8373bf5f45cf5f939e88b8354dbf2311b
CVE-2019-14816 https://git.kernel.org/linus/7caac62ed598a196d6ddf8d9c121e12e082cac3a
CVE-2019-14815 https://git.kernel.org/linus/7caac62ed598a196d6ddf8d9c121e12e082cac3a
CVE-2019-14814 https://git.kernel.org/linus/7caac62ed598a196d6ddf8d9c121e12e082cac3a
CVE-2019-10126 https://git.kernel.org/linus/69ae4f6aac1578575126319d3f55550e7e440449
CVE-2019-9500 https://git.kernel.org/linus/1b5e2423164b3670e8bc9174e4762d297990deff
no-CVE-yet https://git.kernel.org/linus/130f634da1af649205f4a3dd86cbe5c126b57914
no-CVE-yet https://git.kernel.org/linus/d10a87a3535cce2b890897914f5d0d83df669c63
To accelerate the review of potential run-time false positives, it's
also worth noting that it is possible to partially automate checking
by examining the memcpy() buffer argument to check for the destination
struct member having a neighboring array member. It is reasonable to
expect that the vast majority of run-time false positives would look like
the already evaluated and fixed compile-time false positives, where the
most common pattern is neighboring arrays. (And, FWIW, many of the
compile-time fixes were actual bugs, so it is reasonable to assume we'll
have similar cases of actual bugs getting fixed for run-time checks.)
Implementation:
Tighten the memcpy() destination buffer size checking to use the actual
("mode 1") target buffer size as the bounds check instead of their
enclosing structure's ("mode 0") size. Use a common inline for memcpy()
(and memmove() in a following patch), since all the tests are the
same. All new cross-field memcpy() uses must use the struct_group() macro
or similar to target a specific range of fields, so that FORTIFY_SOURCE
can reason about the size and safety of the copy.
For now, cross-member "mode 1" _read_ detection at compile-time will be
limited to W=1 builds, since it is, unfortunately, very common. As the
priority is solving write overflows, read overflows will be part of a
future phase (and can be fixed in parallel, for anyone wanting to look
at W=1 build output).
For run-time, the "mode 0" size checking and mitigation is left unchanged,
with "mode 1" to be added in stages. In this patch, no new run-time
checks are added. Future patches will first bounds-check writes,
and only perform a WARN() for now. This way any missed run-time false
positives can be flushed out over the coming several development cycles,
but system builders who have tested their workloads to be WARN()-free
can enable the panic_on_warn=1 sysctl to immediately gain a mitigation
against this class of buffer overflows. Once that is under way, run-time
bounds-checking of reads can be similarly enabled.
Related classes of flaws that will remain unmitigated:
- memcpy() with flexible array structures, as the compiler does not
currently have visibility into the size of the trailing flexible
array. These can be fixed in the future by refactoring such cases
to use a new set of flexible array structure helpers to perform the
common serialization/deserialization code patterns doing allocation
and/or copying.
- memcpy() with raw pointers (e.g. void *, char *, etc), or otherwise
having their buffer size unknown at compile time, have no good
mitigation beyond memory tagging (and even that would only protect
against inter-object overflow, not intra-object neighboring field
overflows), or refactoring. Some kind of "fat pointer" solution is
likely needed to gain proper size-of-buffer awareness. (e.g. see
struct membuf)
- type confusion where a higher level type's allocation size does
not match the resulting cast type eventually passed to a deeper
memcpy() call where the compiler cannot see the true type. In
theory, greater static analysis could catch these, and the use
of -Warray-bounds will help find some of these.
[0] https://gcc.gnu.org/onlinedocs/gcc/Object-Size-Checking.html
[1] https://git.kernel.org/linus/6a39e62abbafd1d58d1722f40c7d26ef379c6a2f
Signed-off-by: Kees Cook <keescook@chromium.org>
$(call cc-disable-warning,fortify-source) \
-DKBUILD_EXTRA_WARN1
targets += $(TEST_FORTIFY_LOGS)
clean-files += $(TEST_FORTIFY_LOGS)
clean-files += $(addsuffix .o, $(TEST_FORTIFY_LOGS))

$(obj)/test_fortify/%.log: $(src)/test_fortify/%.c \
		$(src)/test_fortify/test_fortify.h \
		$(srctree)/include/linux/fortify-string.h \
		$(srctree)/scripts/test_fortify.sh \
		FORCE
	$(call if_changed,test_fortify)

quiet_cmd_gen_fortify_log = GEN $@
cmd_gen_fortify_log = cat </dev/null $(filter-out FORCE,$^) 2>/dev/null > $@ || true

targets += $(TEST_FORTIFY_LOG)
clean-files += $(TEST_FORTIFY_LOG)

$(obj)/$(TEST_FORTIFY_LOG): $(addprefix $(obj)/, $(TEST_FORTIFY_LOGS)) FORCE
	$(call if_changed,gen_fortify_log)

# Fake dependency to trigger the fortify tests.
ifeq ($(CONFIG_FORTIFY_SOURCE),y)
$(obj)/string.o: $(obj)/$(TEST_FORTIFY_LOG)
endif