x86, kaslr: fix module lock ordering problem
There was a potential lock ordering problem with the module kASLR patch
("x86, kaslr: randomize module base load address"): get_module_load_offset()
took module_mutex, which lockdep reports as a deadlock cycle against the
existing text_mutex --> kprobe_insn_slots.mutex --> module_mutex chain.
Fix this by removing the use of module_mutex there and creating a new
mutex, module_kaslr_mutex, dedicated to protecting the module base address
offset value.
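The fix reduces to a familiar pattern: a lazily initialized value guarded
by its own dedicated lock, so computing the offset never has to take an
unrelated subsystem mutex. Below is a minimal userspace sketch of that
pattern; pthreads and rand() stand in for the kernel primitives, and every
name here (offset_mutex, get_load_offset) is an illustrative stand-in, not
kernel code:

/*
 * Sketch only: a set-once value protected by a mutex that is taken
 * nowhere else, so it cannot join any larger lock chain.
 */
#include <pthread.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

static pthread_mutex_t offset_mutex = PTHREAD_MUTEX_INITIALIZER;
static unsigned long load_offset;       /* 0 means "not chosen yet" */

static unsigned long get_load_offset(void)
{
        pthread_mutex_lock(&offset_mutex);
        /* Pick the offset once; it then stays fixed until restart. */
        if (load_offset == 0)
                load_offset = ((unsigned long)rand() % 1024 + 1) * PAGE_SIZE;
        pthread_mutex_unlock(&offset_mutex);
        return load_offset;
}

Every caller then agrees on the same randomized offset, and because the
dedicated mutex protects only this one value, it adds no new ordering
constraints.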
Chain exists of:
  text_mutex --> kprobe_insn_slots.mutex --> module_mutex

[    0.515561]  Possible unsafe locking scenario:
[    0.515561]
[    0.515561]        CPU0                    CPU1
[    0.515561]        ----                    ----
[    0.515561]   lock(module_mutex);
[    0.515561]                                lock(kprobe_insn_slots.mutex);
[    0.515561]                                lock(module_mutex);
[    0.515561]   lock(text_mutex);
[    0.515561]
[    0.515561]  *** DEADLOCK ***
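What lockdep shows above is a classic AB-BA inversion, with text_mutex as
an intermediate link in the chain. Reduced to two locks for brevity, a
hypothetical pthread sketch of the same shape (illustrative only; lock_a
and lock_b merely play the roles of module_mutex and
kprobe_insn_slots.mutex):

#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* "module_mutex" */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* "kprobe_insn_slots.mutex" */

static void *cpu0(void *arg)
{
        pthread_mutex_lock(&lock_a);    /* lock(module_mutex); */
        pthread_mutex_lock(&lock_b);    /* patching path reaches the insn slot lock */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
}

static void *cpu1(void *arg)
{
        pthread_mutex_lock(&lock_b);    /* lock(kprobe_insn_slots.mutex); */
        pthread_mutex_lock(&lock_a);    /* lock(module_mutex); */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
}

int main(void)
{
        pthread_t t0, t1;

        pthread_create(&t0, NULL, cpu0, NULL);
        pthread_create(&t1, NULL, cpu1, NULL);
        pthread_join(t0, NULL); /* may never return: each thread holds */
        pthread_join(t1, NULL); /* the lock the other is waiting for   */
        return 0;
}

Giving the load offset its own mutex takes module_mutex out of this path
entirely, so no such cycle can form.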
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Andy Honig <ahonig@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 4948313..e69f988 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -48,6 +48,9 @@
 static unsigned long module_load_offset;
 static int randomize_modules = 1;
 
+/* Mutex protects the module_load_offset. */
+static DEFINE_MUTEX(module_kaslr_mutex);
+
 static int __init parse_nokaslr(char *p)
 {
         randomize_modules = 0;
@@ -58,7 +61,7 @@
 static unsigned long int get_module_load_offset(void)
 {
         if (randomize_modules) {
-                mutex_lock(&module_mutex);
+                mutex_lock(&module_kaslr_mutex);
                 /*
                  * Calculate the module_load_offset the first time this
                  * code is called. Once calculated it stays the same until
@@ -67,7 +70,7 @@
                 if (module_load_offset == 0)
                         module_load_offset =
                                 (get_random_int() % 1024 + 1) * PAGE_SIZE;
-                mutex_unlock(&module_mutex);
+                mutex_unlock(&module_kaslr_mutex);
         }
         return module_load_offset;
 }