author    | Anthony G. Basile <blueness@gentoo.org> | 2015-10-24 09:03:23 -0400
committer | Anthony G. Basile <blueness@gentoo.org> | 2015-10-24 09:03:23 -0400
commit    | 3f2329e91facf2ecbd85a83340a1bafe4c6f278b (patch)
tree      | ec762187daaf6107d45563ba6bdbe8927d922bc6
parent    | grsecurity-3.1-4.2.3-201510202025 (diff)
download  | hardened-patchset-3f2329e91facf2ecbd85a83340a1bafe4c6f278b.tar.gz
          | hardened-patchset-3f2329e91facf2ecbd85a83340a1bafe4c6f278b.tar.bz2
          | hardened-patchset-3f2329e91facf2ecbd85a83340a1bafe4c6f278b.zip
grsecurity-3.1-4.2.4-201510222059
-rw-r--r-- | 4.2.4/0000_README (renamed from 4.2.3/0000_README) | 6
-rw-r--r-- | 4.2.4/1003_linux-4.2.4.patch | 10010
-rw-r--r-- | 4.2.4/4420_grsecurity-3.1-4.2.4-201510222059.patch (renamed from 4.2.3/4420_grsecurity-3.1-4.2.3-201510202025.patch) | 1543
-rw-r--r-- | 4.2.4/4425_grsec_remove_EI_PAX.patch (renamed from 4.2.3/4425_grsec_remove_EI_PAX.patch) | 0
-rw-r--r-- | 4.2.4/4427_force_XATTR_PAX_tmpfs.patch (renamed from 4.2.3/4427_force_XATTR_PAX_tmpfs.patch) | 0
-rw-r--r-- | 4.2.4/4430_grsec-remove-localversion-grsec.patch (renamed from 4.2.3/4430_grsec-remove-localversion-grsec.patch) | 0
-rw-r--r-- | 4.2.4/4435_grsec-mute-warnings.patch (renamed from 4.2.3/4435_grsec-mute-warnings.patch) | 0
-rw-r--r-- | 4.2.4/4440_grsec-remove-protected-paths.patch (renamed from 4.2.3/4440_grsec-remove-protected-paths.patch) | 0
-rw-r--r-- | 4.2.4/4450_grsec-kconfig-default-gids.patch (renamed from 4.2.3/4450_grsec-kconfig-default-gids.patch) | 0
-rw-r--r-- | 4.2.4/4465_selinux-avc_audit-log-curr_ip.patch (renamed from 4.2.3/4465_selinux-avc_audit-log-curr_ip.patch) | 0
-rw-r--r-- | 4.2.4/4470_disable-compat_vdso.patch (renamed from 4.2.3/4470_disable-compat_vdso.patch) | 0
-rw-r--r-- | 4.2.4/4475_emutramp_default_on.patch (renamed from 4.2.3/4475_emutramp_default_on.patch) | 0
12 files changed, 10388 insertions, 1171 deletions
diff --git a/4.2.3/0000_README b/4.2.4/0000_README index 08cde44..a7f6aae 100644 --- a/4.2.3/0000_README +++ b/4.2.4/0000_README @@ -2,7 +2,11 @@ README ----------------------------------------------------------------------------- Individual Patch Descriptions: ----------------------------------------------------------------------------- -Patch: 4420_grsecurity-3.1-4.2.3-201510202025.patch +Patch: 1003_linux-4.2.4.patch +From: http://www.kernel.org +Desc: Linux 4.2.4 + +Patch: 4420_grsecurity-3.1-4.2.4-201510222059.patch From: http://www.grsecurity.net Desc: hardened-sources base patch from upstream grsecurity diff --git a/4.2.4/1003_linux-4.2.4.patch b/4.2.4/1003_linux-4.2.4.patch new file mode 100644 index 0000000..a7e5a43 --- /dev/null +++ b/4.2.4/1003_linux-4.2.4.patch @@ -0,0 +1,10010 @@ +diff --git a/Documentation/HOWTO b/Documentation/HOWTO +index 93aa860..21152d3 100644 +--- a/Documentation/HOWTO ++++ b/Documentation/HOWTO +@@ -218,16 +218,16 @@ The development process + Linux kernel development process currently consists of a few different + main kernel "branches" and lots of different subsystem-specific kernel + branches. These different branches are: +- - main 3.x kernel tree +- - 3.x.y -stable kernel tree +- - 3.x -git kernel patches ++ - main 4.x kernel tree ++ - 4.x.y -stable kernel tree ++ - 4.x -git kernel patches + - subsystem specific kernel trees and patches +- - the 3.x -next kernel tree for integration tests ++ - the 4.x -next kernel tree for integration tests + +-3.x kernel tree ++4.x kernel tree + ----------------- +-3.x kernels are maintained by Linus Torvalds, and can be found on +-kernel.org in the pub/linux/kernel/v3.x/ directory. Its development ++4.x kernels are maintained by Linus Torvalds, and can be found on ++kernel.org in the pub/linux/kernel/v4.x/ directory. Its development + process is as follows: + - As soon as a new kernel is released a two weeks window is open, + during this period of time maintainers can submit big diffs to +@@ -262,20 +262,20 @@ mailing list about kernel releases: + released according to perceived bug status, not according to a + preconceived timeline." + +-3.x.y -stable kernel tree ++4.x.y -stable kernel tree + --------------------------- + Kernels with 3-part versions are -stable kernels. They contain + relatively small and critical fixes for security problems or significant +-regressions discovered in a given 3.x kernel. ++regressions discovered in a given 4.x kernel. + + This is the recommended branch for users who want the most recent stable + kernel and are not interested in helping test development/experimental + versions. + +-If no 3.x.y kernel is available, then the highest numbered 3.x ++If no 4.x.y kernel is available, then the highest numbered 4.x + kernel is the current stable kernel. + +-3.x.y are maintained by the "stable" team <stable@vger.kernel.org>, and ++4.x.y are maintained by the "stable" team <stable@vger.kernel.org>, and + are released as needs dictate. The normal release period is approximately + two weeks, but it can be longer if there are no pressing problems. A + security-related problem, instead, can cause a release to happen almost +@@ -285,7 +285,7 @@ The file Documentation/stable_kernel_rules.txt in the kernel tree + documents what kinds of changes are acceptable for the -stable tree, and + how the release process works. + +-3.x -git patches ++4.x -git patches + ------------------ + These are daily snapshots of Linus' kernel tree which are managed in a + git repository (hence the name.) 
These patches are usually released +@@ -317,9 +317,9 @@ revisions to it, and maintainers can mark patches as under review, + accepted, or rejected. Most of these patchwork sites are listed at + http://patchwork.kernel.org/. + +-3.x -next kernel tree for integration tests ++4.x -next kernel tree for integration tests + --------------------------------------------- +-Before updates from subsystem trees are merged into the mainline 3.x ++Before updates from subsystem trees are merged into the mainline 4.x + tree, they need to be integration-tested. For this purpose, a special + testing repository exists into which virtually all subsystem trees are + pulled on an almost daily basis: +diff --git a/Makefile b/Makefile +index a6edbb1..a952801 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 4 + PATCHLEVEL = 2 +-SUBLEVEL = 3 ++SUBLEVEL = 4 + EXTRAVERSION = + NAME = Hurr durr I'ma sheep + +diff --git a/arch/arc/plat-axs10x/axs10x.c b/arch/arc/plat-axs10x/axs10x.c +index e7769c3..ac79491 100644 +--- a/arch/arc/plat-axs10x/axs10x.c ++++ b/arch/arc/plat-axs10x/axs10x.c +@@ -402,6 +402,8 @@ static void __init axs103_early_init(void) + unsigned int num_cores = (read_aux_reg(ARC_REG_MCIP_BCR) >> 16) & 0x3F; + if (num_cores > 2) + arc_set_core_freq(50 * 1000000); ++ else if (num_cores == 2) ++ arc_set_core_freq(75 * 1000000); + #endif + + switch (arc_get_core_freq()/1000000) { +diff --git a/arch/arm/Makefile b/arch/arm/Makefile +index 7451b44..2c2b28e 100644 +--- a/arch/arm/Makefile ++++ b/arch/arm/Makefile +@@ -54,6 +54,14 @@ AS += -EL + LD += -EL + endif + ++# ++# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and ++# later may result in code being generated that handles signed short and signed ++# char struct members incorrectly. So disable it. ++# (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65932) ++# ++KBUILD_CFLAGS += $(call cc-option,-fno-ipa-sra) ++ + # This selects which instruction set is used. 
+ # Note that GCC does not numerically define an architecture version + # macro, but instead defines a whole series of macros which makes +diff --git a/arch/arm/boot/dts/exynos5420.dtsi b/arch/arm/boot/dts/exynos5420.dtsi +index 534f27c..fa8107d 100644 +--- a/arch/arm/boot/dts/exynos5420.dtsi ++++ b/arch/arm/boot/dts/exynos5420.dtsi +@@ -1118,7 +1118,7 @@ + interrupt-parent = <&combiner>; + interrupts = <3 0>; + clock-names = "sysmmu", "master"; +- clocks = <&clock CLK_SMMU_FIMD1M0>, <&clock CLK_FIMD1>; ++ clocks = <&clock CLK_SMMU_FIMD1M1>, <&clock CLK_FIMD1>; + power-domains = <&disp_pd>; + #iommu-cells = <0>; + }; +diff --git a/arch/arm/boot/dts/imx6qdl-rex.dtsi b/arch/arm/boot/dts/imx6qdl-rex.dtsi +index 3373fd9..a5035624 100644 +--- a/arch/arm/boot/dts/imx6qdl-rex.dtsi ++++ b/arch/arm/boot/dts/imx6qdl-rex.dtsi +@@ -35,7 +35,6 @@ + compatible = "regulator-fixed"; + reg = <1>; + pinctrl-names = "default"; +- pinctrl-0 = <&pinctrl_usbh1>; + regulator-name = "usbh1_vbus"; + regulator-min-microvolt = <5000000>; + regulator-max-microvolt = <5000000>; +@@ -47,7 +46,6 @@ + compatible = "regulator-fixed"; + reg = <2>; + pinctrl-names = "default"; +- pinctrl-0 = <&pinctrl_usbotg>; + regulator-name = "usb_otg_vbus"; + regulator-min-microvolt = <5000000>; + regulator-max-microvolt = <5000000>; +diff --git a/arch/arm/boot/dts/omap3-beagle.dts b/arch/arm/boot/dts/omap3-beagle.dts +index a547411..67659a0 100644 +--- a/arch/arm/boot/dts/omap3-beagle.dts ++++ b/arch/arm/boot/dts/omap3-beagle.dts +@@ -202,7 +202,7 @@ + + tfp410_pins: pinmux_tfp410_pins { + pinctrl-single,pins = < +- 0x194 (PIN_OUTPUT | MUX_MODE4) /* hdq_sio.gpio_170 */ ++ 0x196 (PIN_OUTPUT | MUX_MODE4) /* hdq_sio.gpio_170 */ + >; + }; + +diff --git a/arch/arm/boot/dts/omap5-uevm.dts b/arch/arm/boot/dts/omap5-uevm.dts +index 275618f..5771a14 100644 +--- a/arch/arm/boot/dts/omap5-uevm.dts ++++ b/arch/arm/boot/dts/omap5-uevm.dts +@@ -174,8 +174,8 @@ + + i2c5_pins: pinmux_i2c5_pins { + pinctrl-single,pins = < +- 0x184 (PIN_INPUT | MUX_MODE0) /* i2c5_scl */ +- 0x186 (PIN_INPUT | MUX_MODE0) /* i2c5_sda */ ++ 0x186 (PIN_INPUT | MUX_MODE0) /* i2c5_scl */ ++ 0x188 (PIN_INPUT | MUX_MODE0) /* i2c5_sda */ + >; + }; + +diff --git a/arch/arm/boot/dts/sun7i-a20.dtsi b/arch/arm/boot/dts/sun7i-a20.dtsi +index 6a63f30..f5f384c 100644 +--- a/arch/arm/boot/dts/sun7i-a20.dtsi ++++ b/arch/arm/boot/dts/sun7i-a20.dtsi +@@ -107,7 +107,7 @@ + 720000 1200000 + 528000 1100000 + 312000 1000000 +- 144000 900000 ++ 144000 1000000 + >; + #cooling-cells = <2>; + cooling-min-level = <0>; +diff --git a/arch/arm/kernel/kgdb.c b/arch/arm/kernel/kgdb.c +index a6ad93c..fd9eefc 100644 +--- a/arch/arm/kernel/kgdb.c ++++ b/arch/arm/kernel/kgdb.c +@@ -259,15 +259,17 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt) + if (err) + return err; + +- patch_text((void *)bpt->bpt_addr, +- *(unsigned int *)arch_kgdb_ops.gdb_bpt_instr); ++ /* Machine is already stopped, so we can use __patch_text() directly */ ++ __patch_text((void *)bpt->bpt_addr, ++ *(unsigned int *)arch_kgdb_ops.gdb_bpt_instr); + + return err; + } + + int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt) + { +- patch_text((void *)bpt->bpt_addr, *(unsigned int *)bpt->saved_instr); ++ /* Machine is already stopped, so we can use __patch_text() directly */ ++ __patch_text((void *)bpt->bpt_addr, *(unsigned int *)bpt->saved_instr); + + return 0; + } +diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c +index 54272e0..7d5379c 100644 +--- a/arch/arm/kernel/perf_event.c ++++ 
b/arch/arm/kernel/perf_event.c +@@ -795,8 +795,10 @@ static int of_pmu_irq_cfg(struct arm_pmu *pmu) + + /* Don't bother with PPIs; they're already affine */ + irq = platform_get_irq(pdev, 0); +- if (irq >= 0 && irq_is_percpu(irq)) ++ if (irq >= 0 && irq_is_percpu(irq)) { ++ cpumask_setall(&pmu->supported_cpus); + return 0; ++ } + + irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL); + if (!irqs) +diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c +index 423663e..586eef2 100644 +--- a/arch/arm/kernel/signal.c ++++ b/arch/arm/kernel/signal.c +@@ -343,12 +343,17 @@ setup_return(struct pt_regs *regs, struct ksignal *ksig, + */ + thumb = handler & 1; + +-#if __LINUX_ARM_ARCH__ >= 7 ++#if __LINUX_ARM_ARCH__ >= 6 + /* +- * Clear the If-Then Thumb-2 execution state +- * ARM spec requires this to be all 000s in ARM mode +- * Snapdragon S4/Krait misbehaves on a Thumb=>ARM +- * signal transition without this. ++ * Clear the If-Then Thumb-2 execution state. ARM spec ++ * requires this to be all 000s in ARM mode. Snapdragon ++ * S4/Krait misbehaves on a Thumb=>ARM signal transition ++ * without this. ++ * ++ * We must do this whenever we are running on a Thumb-2 ++ * capable CPU, which includes ARMv6T2. However, we elect ++ * to do this whenever we're on an ARMv6 or later CPU for ++ * simplicity. + */ + cpsr &= ~PSR_IT_MASK; + #endif +diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S +index 702740d..51a5950 100644 +--- a/arch/arm/kvm/interrupts_head.S ++++ b/arch/arm/kvm/interrupts_head.S +@@ -515,8 +515,7 @@ ARM_BE8(rev r6, r6 ) + + mrc p15, 0, r2, c14, c3, 1 @ CNTV_CTL + str r2, [vcpu, #VCPU_TIMER_CNTV_CTL] +- bic r2, #1 @ Clear ENABLE +- mcr p15, 0, r2, c14, c3, 1 @ CNTV_CTL ++ + isb + + mrrc p15, 3, rr_lo_hi(r2, r3), c14 @ CNTV_CVAL +@@ -529,6 +528,9 @@ ARM_BE8(rev r6, r6 ) + mcrr p15, 4, r2, r2, c14 @ CNTVOFF + + 1: ++ mov r2, #0 @ Clear ENABLE ++ mcr p15, 0, r2, c14, c3, 1 @ CNTV_CTL ++ + @ Allow physical timer/counter access for the host + mrc p15, 4, r2, c14, c1, 0 @ CNTHCTL + orr r2, r2, #(CNTHCTL_PL1PCEN | CNTHCTL_PL1PCTEN) +diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c +index 7b42012..6984342 100644 +--- a/arch/arm/kvm/mmu.c ++++ b/arch/arm/kvm/mmu.c +@@ -1792,8 +1792,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, + if (vma->vm_flags & VM_PFNMAP) { + gpa_t gpa = mem->guest_phys_addr + + (vm_start - mem->userspace_addr); +- phys_addr_t pa = (vma->vm_pgoff << PAGE_SHIFT) + +- vm_start - vma->vm_start; ++ phys_addr_t pa; ++ ++ pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT; ++ pa += vm_start - vma->vm_start; + + /* IO region dirty page logging not allowed */ + if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES) +diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c +index 9bdf547..5697819 100644 +--- a/arch/arm/mach-exynos/mcpm-exynos.c ++++ b/arch/arm/mach-exynos/mcpm-exynos.c +@@ -20,6 +20,7 @@ + #include <asm/cputype.h> + #include <asm/cp15.h> + #include <asm/mcpm.h> ++#include <asm/smp_plat.h> + + #include "regs-pmu.h" + #include "common.h" +@@ -70,7 +71,31 @@ static int exynos_cpu_powerup(unsigned int cpu, unsigned int cluster) + cluster >= EXYNOS5420_NR_CLUSTERS) + return -EINVAL; + +- exynos_cpu_power_up(cpunr); ++ if (!exynos_cpu_power_state(cpunr)) { ++ exynos_cpu_power_up(cpunr); ++ ++ /* ++ * This assumes the cluster number of the big cores(Cortex A15) ++ * is 0 and the Little cores(Cortex A7) is 1. 
++ * When the system was booted from the Little core, ++ * they should be reset during power up cpu. ++ */ ++ if (cluster && ++ cluster == MPIDR_AFFINITY_LEVEL(cpu_logical_map(0), 1)) { ++ /* ++ * Before we reset the Little cores, we should wait ++ * the SPARE2 register is set to 1 because the init ++ * codes of the iROM will set the register after ++ * initialization. ++ */ ++ while (!pmu_raw_readl(S5P_PMU_SPARE2)) ++ udelay(10); ++ ++ pmu_raw_writel(EXYNOS5420_KFC_CORE_RESET(cpu), ++ EXYNOS_SWRESET); ++ } ++ } ++ + return 0; + } + +diff --git a/arch/arm/mach-exynos/regs-pmu.h b/arch/arm/mach-exynos/regs-pmu.h +index b761433..fba9068 100644 +--- a/arch/arm/mach-exynos/regs-pmu.h ++++ b/arch/arm/mach-exynos/regs-pmu.h +@@ -513,6 +513,12 @@ static inline unsigned int exynos_pmu_cpunr(unsigned int mpidr) + #define SPREAD_ENABLE 0xF + #define SPREAD_USE_STANDWFI 0xF + ++#define EXYNOS5420_KFC_CORE_RESET0 BIT(8) ++#define EXYNOS5420_KFC_ETM_RESET0 BIT(20) ++ ++#define EXYNOS5420_KFC_CORE_RESET(_nr) \ ++ ((EXYNOS5420_KFC_CORE_RESET0 | EXYNOS5420_KFC_ETM_RESET0) << (_nr)) ++ + #define EXYNOS5420_BB_CON1 0x0784 + #define EXYNOS5420_BB_SEL_EN BIT(31) + #define EXYNOS5420_BB_PMOS_EN BIT(7) +diff --git a/arch/arm/plat-pxa/ssp.c b/arch/arm/plat-pxa/ssp.c +index ad9529c..daa1a65 100644 +--- a/arch/arm/plat-pxa/ssp.c ++++ b/arch/arm/plat-pxa/ssp.c +@@ -107,7 +107,6 @@ static const struct of_device_id pxa_ssp_of_ids[] = { + { .compatible = "mvrl,pxa168-ssp", .data = (void *) PXA168_SSP }, + { .compatible = "mrvl,pxa910-ssp", .data = (void *) PXA910_SSP }, + { .compatible = "mrvl,ce4100-ssp", .data = (void *) CE4100_SSP }, +- { .compatible = "mrvl,lpss-ssp", .data = (void *) LPSS_SSP }, + { }, + }; + MODULE_DEVICE_TABLE(of, pxa_ssp_of_ids); +diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c +index e8ca6ea..13671a9 100644 +--- a/arch/arm64/kernel/efi.c ++++ b/arch/arm64/kernel/efi.c +@@ -258,7 +258,8 @@ static bool __init efi_virtmap_init(void) + */ + if (!is_normal_ram(md)) + prot = __pgprot(PROT_DEVICE_nGnRE); +- else if (md->type == EFI_RUNTIME_SERVICES_CODE) ++ else if (md->type == EFI_RUNTIME_SERVICES_CODE || ++ !PAGE_ALIGNED(md->phys_addr)) + prot = PAGE_KERNEL_EXEC; + else + prot = PAGE_KERNEL; +diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S +index 08cafc5..0f03a8f 100644 +--- a/arch/arm64/kernel/entry-ftrace.S ++++ b/arch/arm64/kernel/entry-ftrace.S +@@ -178,6 +178,24 @@ ENTRY(ftrace_stub) + ENDPROC(ftrace_stub) + + #ifdef CONFIG_FUNCTION_GRAPH_TRACER ++ /* save return value regs*/ ++ .macro save_return_regs ++ sub sp, sp, #64 ++ stp x0, x1, [sp] ++ stp x2, x3, [sp, #16] ++ stp x4, x5, [sp, #32] ++ stp x6, x7, [sp, #48] ++ .endm ++ ++ /* restore return value regs*/ ++ .macro restore_return_regs ++ ldp x0, x1, [sp] ++ ldp x2, x3, [sp, #16] ++ ldp x4, x5, [sp, #32] ++ ldp x6, x7, [sp, #48] ++ add sp, sp, #64 ++ .endm ++ + /* + * void ftrace_graph_caller(void) + * +@@ -204,11 +222,11 @@ ENDPROC(ftrace_graph_caller) + * only when CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST is enabled. + */ + ENTRY(return_to_handler) +- str x0, [sp, #-16]! 
++ save_return_regs + mov x0, x29 // parent's fp + bl ftrace_return_to_handler// addr = ftrace_return_to_hander(fp); + mov x30, x0 // restore the original return address +- ldr x0, [sp], #16 ++ restore_return_regs + ret + END(return_to_handler) + #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ +diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c +index 94d98cd..27c3e6f 100644 +--- a/arch/arm64/mm/fault.c ++++ b/arch/arm64/mm/fault.c +@@ -278,6 +278,7 @@ retry: + * starvation. + */ + mm_flags &= ~FAULT_FLAG_ALLOW_RETRY; ++ mm_flags |= FAULT_FLAG_TRIED; + goto retry; + } + } +diff --git a/arch/m68k/include/asm/linkage.h b/arch/m68k/include/asm/linkage.h +index 5a822bb..066e74f 100644 +--- a/arch/m68k/include/asm/linkage.h ++++ b/arch/m68k/include/asm/linkage.h +@@ -4,4 +4,34 @@ + #define __ALIGN .align 4 + #define __ALIGN_STR ".align 4" + ++/* ++ * Make sure the compiler doesn't do anything stupid with the ++ * arguments on the stack - they are owned by the *caller*, not ++ * the callee. This just fools gcc into not spilling into them, ++ * and keeps it from doing tailcall recursion and/or using the ++ * stack slots for temporaries, since they are live and "used" ++ * all the way to the end of the function. ++ */ ++#define asmlinkage_protect(n, ret, args...) \ ++ __asmlinkage_protect##n(ret, ##args) ++#define __asmlinkage_protect_n(ret, args...) \ ++ __asm__ __volatile__ ("" : "=r" (ret) : "0" (ret), ##args) ++#define __asmlinkage_protect0(ret) \ ++ __asmlinkage_protect_n(ret) ++#define __asmlinkage_protect1(ret, arg1) \ ++ __asmlinkage_protect_n(ret, "m" (arg1)) ++#define __asmlinkage_protect2(ret, arg1, arg2) \ ++ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2)) ++#define __asmlinkage_protect3(ret, arg1, arg2, arg3) \ ++ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3)) ++#define __asmlinkage_protect4(ret, arg1, arg2, arg3, arg4) \ ++ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \ ++ "m" (arg4)) ++#define __asmlinkage_protect5(ret, arg1, arg2, arg3, arg4, arg5) \ ++ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \ ++ "m" (arg4), "m" (arg5)) ++#define __asmlinkage_protect6(ret, arg1, arg2, arg3, arg4, arg5, arg6) \ ++ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \ ++ "m" (arg4), "m" (arg5), "m" (arg6)) ++ + #endif +diff --git a/arch/mips/kernel/cps-vec.S b/arch/mips/kernel/cps-vec.S +index 9f71c06..209ded1 100644 +--- a/arch/mips/kernel/cps-vec.S ++++ b/arch/mips/kernel/cps-vec.S +@@ -39,6 +39,7 @@ + mfc0 \dest, CP0_CONFIG, 3 + andi \dest, \dest, MIPS_CONF3_MT + beqz \dest, \nomt ++ nop + .endm + + .section .text.cps-vec +@@ -223,10 +224,9 @@ LEAF(excep_ejtag) + END(excep_ejtag) + + LEAF(mips_cps_core_init) +-#ifdef CONFIG_MIPS_MT ++#ifdef CONFIG_MIPS_MT_SMP + /* Check that the core implements the MT ASE */ + has_mt t0, 3f +- nop + + .set push + .set mips64r2 +@@ -310,8 +310,9 @@ LEAF(mips_cps_boot_vpes) + PTR_ADDU t0, t0, t1 + + /* Calculate this VPEs ID. 
If the core doesn't support MT use 0 */ ++ li t9, 0 ++#ifdef CONFIG_MIPS_MT_SMP + has_mt ta2, 1f +- li t9, 0 + + /* Find the number of VPEs present in the core */ + mfc0 t1, CP0_MVPCONF0 +@@ -330,6 +331,7 @@ LEAF(mips_cps_boot_vpes) + /* Retrieve the VPE ID from EBase.CPUNum */ + mfc0 t9, $15, 1 + and t9, t9, t1 ++#endif + + 1: /* Calculate a pointer to this VPEs struct vpe_boot_config */ + li t1, VPEBOOTCFG_SIZE +@@ -337,7 +339,7 @@ LEAF(mips_cps_boot_vpes) + PTR_L ta3, COREBOOTCFG_VPECONFIG(t0) + PTR_ADDU v0, v0, ta3 + +-#ifdef CONFIG_MIPS_MT ++#ifdef CONFIG_MIPS_MT_SMP + + /* If the core doesn't support MT then return */ + bnez ta2, 1f +@@ -451,7 +453,7 @@ LEAF(mips_cps_boot_vpes) + + 2: .set pop + +-#endif /* CONFIG_MIPS_MT */ ++#endif /* CONFIG_MIPS_MT_SMP */ + + /* Return */ + jr ra +diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c +index 008b337..4ceac5c 100644 +--- a/arch/mips/kernel/setup.c ++++ b/arch/mips/kernel/setup.c +@@ -338,7 +338,7 @@ static void __init bootmem_init(void) + if (end <= reserved_end) + continue; + #ifdef CONFIG_BLK_DEV_INITRD +- /* mapstart should be after initrd_end */ ++ /* Skip zones before initrd and initrd itself */ + if (initrd_end && end <= (unsigned long)PFN_UP(__pa(initrd_end))) + continue; + #endif +@@ -371,6 +371,14 @@ static void __init bootmem_init(void) + max_low_pfn = PFN_DOWN(HIGHMEM_START); + } + ++#ifdef CONFIG_BLK_DEV_INITRD ++ /* ++ * mapstart should be after initrd_end ++ */ ++ if (initrd_end) ++ mapstart = max(mapstart, (unsigned long)PFN_UP(__pa(initrd_end))); ++#endif ++ + /* + * Initialize the boot-time allocator with low memory only. + */ +diff --git a/arch/mips/loongson64/common/env.c b/arch/mips/loongson64/common/env.c +index f6c44dd..d6d07ad 100644 +--- a/arch/mips/loongson64/common/env.c ++++ b/arch/mips/loongson64/common/env.c +@@ -64,6 +64,9 @@ void __init prom_init_env(void) + } + if (memsize == 0) + memsize = 256; ++ ++ loongson_sysconf.nr_uarts = 1; ++ + pr_info("memsize=%u, highmemsize=%u\n", memsize, highmemsize); + #else + struct boot_params *boot_p; +diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c +index eeaf024..815892e 100644 +--- a/arch/mips/mm/dma-default.c ++++ b/arch/mips/mm/dma-default.c +@@ -100,7 +100,7 @@ static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp) + else + #endif + #if defined(CONFIG_ZONE_DMA) && !defined(CONFIG_ZONE_DMA32) +- if (dev->coherent_dma_mask < DMA_BIT_MASK(64)) ++ if (dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8)) + dma_flag = __GFP_DMA; + else + #endif +diff --git a/arch/mips/net/bpf_jit_asm.S b/arch/mips/net/bpf_jit_asm.S +index e927260..dabf417 100644 +--- a/arch/mips/net/bpf_jit_asm.S ++++ b/arch/mips/net/bpf_jit_asm.S +@@ -64,8 +64,20 @@ sk_load_word_positive: + PTR_ADDU t1, $r_skb_data, offset + lw $r_A, 0(t1) + #ifdef CONFIG_CPU_LITTLE_ENDIAN ++# if defined(__mips_isa_rev) && (__mips_isa_rev >= 2) + wsbh t0, $r_A + rotr $r_A, t0, 16 ++# else ++ sll t0, $r_A, 24 ++ srl t1, $r_A, 24 ++ srl t2, $r_A, 8 ++ or t0, t0, t1 ++ andi t2, t2, 0xff00 ++ andi t1, $r_A, 0xff00 ++ or t0, t0, t2 ++ sll t1, t1, 8 ++ or $r_A, t0, t1 ++# endif + #endif + jr $r_ra + move $r_ret, zero +@@ -80,8 +92,16 @@ sk_load_half_positive: + PTR_ADDU t1, $r_skb_data, offset + lh $r_A, 0(t1) + #ifdef CONFIG_CPU_LITTLE_ENDIAN ++# if defined(__mips_isa_rev) && (__mips_isa_rev >= 2) + wsbh t0, $r_A + seh $r_A, t0 ++# else ++ sll t0, $r_A, 24 ++ andi t1, $r_A, 0xff00 ++ sra t0, t0, 16 ++ srl t1, t1, 8 ++ or $r_A, t0, t1 ++# endif + #endif + jr $r_ra + 
move $r_ret, zero +@@ -148,23 +168,47 @@ sk_load_byte_positive: + NESTED(bpf_slow_path_word, (6 * SZREG), $r_sp) + bpf_slow_path_common(4) + #ifdef CONFIG_CPU_LITTLE_ENDIAN ++# if defined(__mips_isa_rev) && (__mips_isa_rev >= 2) + wsbh t0, $r_s0 + jr $r_ra + rotr $r_A, t0, 16 +-#endif ++# else ++ sll t0, $r_s0, 24 ++ srl t1, $r_s0, 24 ++ srl t2, $r_s0, 8 ++ or t0, t0, t1 ++ andi t2, t2, 0xff00 ++ andi t1, $r_s0, 0xff00 ++ or t0, t0, t2 ++ sll t1, t1, 8 ++ jr $r_ra ++ or $r_A, t0, t1 ++# endif ++#else + jr $r_ra +- move $r_A, $r_s0 ++ move $r_A, $r_s0 ++#endif + + END(bpf_slow_path_word) + + NESTED(bpf_slow_path_half, (6 * SZREG), $r_sp) + bpf_slow_path_common(2) + #ifdef CONFIG_CPU_LITTLE_ENDIAN ++# if defined(__mips_isa_rev) && (__mips_isa_rev >= 2) + jr $r_ra + wsbh $r_A, $r_s0 +-#endif ++# else ++ sll t0, $r_s0, 8 ++ andi t1, $r_s0, 0xff00 ++ andi t0, t0, 0xff00 ++ srl t1, t1, 8 ++ jr $r_ra ++ or $r_A, t0, t1 ++# endif ++#else + jr $r_ra + move $r_A, $r_s0 ++#endif + + END(bpf_slow_path_half) + +diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c +index 05ea8fc..4816fe2 100644 +--- a/arch/powerpc/kvm/book3s.c ++++ b/arch/powerpc/kvm/book3s.c +@@ -827,12 +827,15 @@ int kvmppc_h_logical_ci_load(struct kvm_vcpu *vcpu) + unsigned long size = kvmppc_get_gpr(vcpu, 4); + unsigned long addr = kvmppc_get_gpr(vcpu, 5); + u64 buf; ++ int srcu_idx; + int ret; + + if (!is_power_of_2(size) || (size > sizeof(buf))) + return H_TOO_HARD; + ++ srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); + ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, size, &buf); ++ srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx); + if (ret != 0) + return H_TOO_HARD; + +@@ -867,6 +870,7 @@ int kvmppc_h_logical_ci_store(struct kvm_vcpu *vcpu) + unsigned long addr = kvmppc_get_gpr(vcpu, 5); + unsigned long val = kvmppc_get_gpr(vcpu, 6); + u64 buf; ++ int srcu_idx; + int ret; + + switch (size) { +@@ -890,7 +894,9 @@ int kvmppc_h_logical_ci_store(struct kvm_vcpu *vcpu) + return H_TOO_HARD; + } + ++ srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); + ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, addr, size, &buf); ++ srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx); + if (ret != 0) + return H_TOO_HARD; + +diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c +index 68d067a..a9f753f 100644 +--- a/arch/powerpc/kvm/book3s_hv.c ++++ b/arch/powerpc/kvm/book3s_hv.c +@@ -2178,7 +2178,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) + vc->runner = vcpu; + if (n_ceded == vc->n_runnable) { + kvmppc_vcore_blocked(vc); +- } else if (should_resched()) { ++ } else if (need_resched()) { + vc->vcore_state = VCORE_PREEMPT; + /* Let something else run */ + cond_resched_lock(&vc->lock); +diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S +index 76408cf..437f643 100644 +--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S ++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S +@@ -1171,6 +1171,7 @@ mc_cont: + bl kvmhv_accumulate_time + #endif + ++ mr r3, r12 + /* Increment exit count, poke other threads to exit */ + bl kvmhv_commence_exit + nop +diff --git a/arch/powerpc/platforms/pasemi/msi.c b/arch/powerpc/platforms/pasemi/msi.c +index 27f2b18..ff1bb4b 100644 +--- a/arch/powerpc/platforms/pasemi/msi.c ++++ b/arch/powerpc/platforms/pasemi/msi.c +@@ -63,6 +63,7 @@ static struct irq_chip mpic_pasemi_msi_chip = { + static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev) + { + struct msi_desc *entry; ++ irq_hw_number_t hwirq; + + pr_debug("pasemi_msi_teardown_msi_irqs, pdev 
%p\n", pdev); + +@@ -70,10 +71,10 @@ static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev) + if (entry->irq == NO_IRQ) + continue; + ++ hwirq = virq_to_hw(entry->irq); + irq_set_msi_desc(entry->irq, NULL); +- msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap, +- virq_to_hw(entry->irq), ALLOC_CHUNK); + irq_dispose_mapping(entry->irq); ++ msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap, hwirq, ALLOC_CHUNK); + } + + return; +diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c +index 765d8ed..fd16f86 100644 +--- a/arch/powerpc/platforms/powernv/pci.c ++++ b/arch/powerpc/platforms/powernv/pci.c +@@ -99,6 +99,7 @@ void pnv_teardown_msi_irqs(struct pci_dev *pdev) + struct pci_controller *hose = pci_bus_to_host(pdev->bus); + struct pnv_phb *phb = hose->private_data; + struct msi_desc *entry; ++ irq_hw_number_t hwirq; + + if (WARN_ON(!phb)) + return; +@@ -106,10 +107,10 @@ void pnv_teardown_msi_irqs(struct pci_dev *pdev) + list_for_each_entry(entry, &pdev->msi_list, list) { + if (entry->irq == NO_IRQ) + continue; ++ hwirq = virq_to_hw(entry->irq); + irq_set_msi_desc(entry->irq, NULL); +- msi_bitmap_free_hwirqs(&phb->msi_bmp, +- virq_to_hw(entry->irq) - phb->msi_base, 1); + irq_dispose_mapping(entry->irq); ++ msi_bitmap_free_hwirqs(&phb->msi_bmp, hwirq - phb->msi_base, 1); + } + } + #endif /* CONFIG_PCI_MSI */ +diff --git a/arch/powerpc/sysdev/fsl_msi.c b/arch/powerpc/sysdev/fsl_msi.c +index 5236e54..691e8e5 100644 +--- a/arch/powerpc/sysdev/fsl_msi.c ++++ b/arch/powerpc/sysdev/fsl_msi.c +@@ -128,15 +128,16 @@ static void fsl_teardown_msi_irqs(struct pci_dev *pdev) + { + struct msi_desc *entry; + struct fsl_msi *msi_data; ++ irq_hw_number_t hwirq; + + list_for_each_entry(entry, &pdev->msi_list, list) { + if (entry->irq == NO_IRQ) + continue; ++ hwirq = virq_to_hw(entry->irq); + msi_data = irq_get_chip_data(entry->irq); + irq_set_msi_desc(entry->irq, NULL); +- msi_bitmap_free_hwirqs(&msi_data->bitmap, +- virq_to_hw(entry->irq), 1); + irq_dispose_mapping(entry->irq); ++ msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1); + } + + return; +diff --git a/arch/powerpc/sysdev/mpic_u3msi.c b/arch/powerpc/sysdev/mpic_u3msi.c +index fc46ef3..4c3165f 100644 +--- a/arch/powerpc/sysdev/mpic_u3msi.c ++++ b/arch/powerpc/sysdev/mpic_u3msi.c +@@ -107,15 +107,16 @@ static u64 find_u4_magic_addr(struct pci_dev *pdev, unsigned int hwirq) + static void u3msi_teardown_msi_irqs(struct pci_dev *pdev) + { + struct msi_desc *entry; ++ irq_hw_number_t hwirq; + + list_for_each_entry(entry, &pdev->msi_list, list) { + if (entry->irq == NO_IRQ) + continue; + ++ hwirq = virq_to_hw(entry->irq); + irq_set_msi_desc(entry->irq, NULL); +- msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap, +- virq_to_hw(entry->irq), 1); + irq_dispose_mapping(entry->irq); ++ msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap, hwirq, 1); + } + + return; +diff --git a/arch/powerpc/sysdev/ppc4xx_msi.c b/arch/powerpc/sysdev/ppc4xx_msi.c +index 6eb21f2..060f237 100644 +--- a/arch/powerpc/sysdev/ppc4xx_msi.c ++++ b/arch/powerpc/sysdev/ppc4xx_msi.c +@@ -124,16 +124,17 @@ void ppc4xx_teardown_msi_irqs(struct pci_dev *dev) + { + struct msi_desc *entry; + struct ppc4xx_msi *msi_data = &ppc4xx_msi; ++ irq_hw_number_t hwirq; + + dev_dbg(&dev->dev, "PCIE-MSI: tearing down msi irqs\n"); + + list_for_each_entry(entry, &dev->msi_list, list) { + if (entry->irq == NO_IRQ) + continue; ++ hwirq = virq_to_hw(entry->irq); + irq_set_msi_desc(entry->irq, NULL); +- msi_bitmap_free_hwirqs(&msi_data->bitmap, +- virq_to_hw(entry->irq), 1); + 
irq_dispose_mapping(entry->irq); ++ msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1); + } + } + +diff --git a/arch/s390/boot/compressed/Makefile b/arch/s390/boot/compressed/Makefile +index d478811..fac6ac9 100644 +--- a/arch/s390/boot/compressed/Makefile ++++ b/arch/s390/boot/compressed/Makefile +@@ -10,7 +10,7 @@ targets += misc.o piggy.o sizes.h head.o + + KBUILD_CFLAGS := -m64 -D__KERNEL__ $(LINUX_INCLUDE) -O2 + KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING +-KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks ++KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks -msoft-float + KBUILD_CFLAGS += $(call cc-option,-mpacked-stack) + KBUILD_CFLAGS += $(call cc-option,-ffreestanding) + +diff --git a/arch/s390/kernel/compat_signal.c b/arch/s390/kernel/compat_signal.c +index fe8d692..c78ba51 100644 +--- a/arch/s390/kernel/compat_signal.c ++++ b/arch/s390/kernel/compat_signal.c +@@ -48,6 +48,19 @@ typedef struct + struct ucontext32 uc; + } rt_sigframe32; + ++static inline void sigset_to_sigset32(unsigned long *set64, ++ compat_sigset_word *set32) ++{ ++ set32[0] = (compat_sigset_word) set64[0]; ++ set32[1] = (compat_sigset_word)(set64[0] >> 32); ++} ++ ++static inline void sigset32_to_sigset(compat_sigset_word *set32, ++ unsigned long *set64) ++{ ++ set64[0] = (unsigned long) set32[0] | ((unsigned long) set32[1] << 32); ++} ++ + int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from) + { + int err; +@@ -303,10 +316,12 @@ COMPAT_SYSCALL_DEFINE0(sigreturn) + { + struct pt_regs *regs = task_pt_regs(current); + sigframe32 __user *frame = (sigframe32 __user *)regs->gprs[15]; ++ compat_sigset_t cset; + sigset_t set; + +- if (__copy_from_user(&set.sig, &frame->sc.oldmask, _SIGMASK_COPY_SIZE32)) ++ if (__copy_from_user(&cset.sig, &frame->sc.oldmask, _SIGMASK_COPY_SIZE32)) + goto badframe; ++ sigset32_to_sigset(cset.sig, set.sig); + set_current_blocked(&set); + if (restore_sigregs32(regs, &frame->sregs)) + goto badframe; +@@ -323,10 +338,12 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn) + { + struct pt_regs *regs = task_pt_regs(current); + rt_sigframe32 __user *frame = (rt_sigframe32 __user *)regs->gprs[15]; ++ compat_sigset_t cset; + sigset_t set; + +- if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set))) ++ if (__copy_from_user(&cset, &frame->uc.uc_sigmask, sizeof(cset))) + goto badframe; ++ sigset32_to_sigset(cset.sig, set.sig); + set_current_blocked(&set); + if (compat_restore_altstack(&frame->uc.uc_stack)) + goto badframe; +@@ -397,7 +414,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set, + return -EFAULT; + + /* Create struct sigcontext32 on the signal stack */ +- memcpy(&sc.oldmask, &set->sig, _SIGMASK_COPY_SIZE32); ++ sigset_to_sigset32(set->sig, sc.oldmask); + sc.sregs = (__u32)(unsigned long __force) &frame->sregs; + if (__copy_to_user(&frame->sc, &sc, sizeof(frame->sc))) + return -EFAULT; +@@ -458,6 +475,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set, + static int setup_rt_frame32(struct ksignal *ksig, sigset_t *set, + struct pt_regs *regs) + { ++ compat_sigset_t cset; + rt_sigframe32 __user *frame; + unsigned long restorer; + size_t frame_size; +@@ -505,11 +523,12 @@ static int setup_rt_frame32(struct ksignal *ksig, sigset_t *set, + store_sigregs(); + + /* Create ucontext on the signal stack. 
*/ ++ sigset_to_sigset32(set->sig, cset.sig); + if (__put_user(uc_flags, &frame->uc.uc_flags) || + __put_user(0, &frame->uc.uc_link) || + __compat_save_altstack(&frame->uc.uc_stack, regs->gprs[15]) || + save_sigregs32(regs, &frame->uc.uc_mcontext) || +- __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)) || ++ __copy_to_user(&frame->uc.uc_sigmask, &cset, sizeof(cset)) || + save_sigregs_ext32(regs, &frame->uc.uc_mcontext_ext)) + return -EFAULT; + +diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S +index 8cb3e43..d330840 100644 +--- a/arch/x86/entry/entry_64.S ++++ b/arch/x86/entry/entry_64.S +@@ -1219,7 +1219,18 @@ END(error_exit) + + /* Runs on exception stack */ + ENTRY(nmi) ++ /* ++ * Fix up the exception frame if we're on Xen. ++ * PARAVIRT_ADJUST_EXCEPTION_FRAME is guaranteed to push at most ++ * one value to the stack on native, so it may clobber the rdx ++ * scratch slot, but it won't clobber any of the important ++ * slots past it. ++ * ++ * Xen is a different story, because the Xen frame itself overlaps ++ * the "NMI executing" variable. ++ */ + PARAVIRT_ADJUST_EXCEPTION_FRAME ++ + /* + * We allow breakpoints in NMIs. If a breakpoint occurs, then + * the iretq it performs will take us out of NMI context. +@@ -1270,9 +1281,12 @@ ENTRY(nmi) + * we don't want to enable interrupts, because then we'll end + * up in an awkward situation in which IRQs are on but NMIs + * are off. ++ * ++ * We also must not push anything to the stack before switching ++ * stacks lest we corrupt the "NMI executing" variable. + */ + +- SWAPGS ++ SWAPGS_UNSAFE_STACK + cld + movq %rsp, %rdx + movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp +diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h +index 9ebc3d0..2350ab7 100644 +--- a/arch/x86/include/asm/msr-index.h ++++ b/arch/x86/include/asm/msr-index.h +@@ -311,6 +311,7 @@ + /* C1E active bits in int pending message */ + #define K8_INTP_C1E_ACTIVE_MASK 0x18000000 + #define MSR_K8_TSEG_ADDR 0xc0010112 ++#define MSR_K8_TSEG_MASK 0xc0010113 + #define K8_MTRRFIXRANGE_DRAM_ENABLE 0x00040000 /* MtrrFixDramEn bit */ + #define K8_MTRRFIXRANGE_DRAM_MODIFY 0x00080000 /* MtrrFixDramModEn bit */ + #define K8_MTRR_RDMEM_WRMEM_MASK 0x18181818 /* Mask: RdMem|WrMem */ +diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h +index dca71714..b12f810 100644 +--- a/arch/x86/include/asm/preempt.h ++++ b/arch/x86/include/asm/preempt.h +@@ -90,9 +90,9 @@ static __always_inline bool __preempt_count_dec_and_test(void) + /* + * Returns true when we need to resched and can (barring IRQ state). 
+ */ +-static __always_inline bool should_resched(void) ++static __always_inline bool should_resched(int preempt_offset) + { +- return unlikely(!raw_cpu_read_4(__preempt_count)); ++ return unlikely(raw_cpu_read_4(__preempt_count) == preempt_offset); + } + + #ifdef CONFIG_PREEMPT +diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h +index 9d51fae..eaba080 100644 +--- a/arch/x86/include/asm/qspinlock.h ++++ b/arch/x86/include/asm/qspinlock.h +@@ -39,18 +39,27 @@ static inline void queued_spin_unlock(struct qspinlock *lock) + } + #endif + +-#define virt_queued_spin_lock virt_queued_spin_lock +- +-static inline bool virt_queued_spin_lock(struct qspinlock *lock) ++#ifdef CONFIG_PARAVIRT ++#define virt_spin_lock virt_spin_lock ++static inline bool virt_spin_lock(struct qspinlock *lock) + { + if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) + return false; + +- while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0) +- cpu_relax(); ++ /* ++ * On hypervisors without PARAVIRT_SPINLOCKS support we fall ++ * back to a Test-and-Set spinlock, because fair locks have ++ * horrible lock 'holder' preemption issues. ++ */ ++ ++ do { ++ while (atomic_read(&lock->val) != 0) ++ cpu_relax(); ++ } while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0); + + return true; + } ++#endif /* CONFIG_PARAVIRT */ + + #include <asm-generic/qspinlock.h> + +diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c +index c42827e..25f9093 100644 +--- a/arch/x86/kernel/alternative.c ++++ b/arch/x86/kernel/alternative.c +@@ -338,10 +338,15 @@ done: + + static void __init_or_module optimize_nops(struct alt_instr *a, u8 *instr) + { ++ unsigned long flags; ++ + if (instr[0] != 0x90) + return; + ++ local_irq_save(flags); + add_nops(instr + (a->instrlen - a->padlen), a->padlen); ++ sync_core(); ++ local_irq_restore(flags); + + DUMP_BYTES(instr, a->instrlen, "%p: [%d:%d) optimized NOPs: ", + instr, a->instrlen - a->padlen, a->padlen); +diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c +index cde732c..307a498 100644 +--- a/arch/x86/kernel/apic/apic.c ++++ b/arch/x86/kernel/apic/apic.c +@@ -336,6 +336,13 @@ static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen) + apic_write(APIC_LVTT, lvtt_value); + + if (lvtt_value & APIC_LVT_TIMER_TSCDEADLINE) { ++ /* ++ * See Intel SDM: TSC-Deadline Mode chapter. In xAPIC mode, ++ * writing to the APIC LVTT and TSC_DEADLINE MSR isn't serialized. ++ * According to Intel, MFENCE can do the serialization here. 
++ */ ++ asm volatile("mfence" : : : "memory"); ++ + printk_once(KERN_DEBUG "TSC deadline timer enabled\n"); + return; + } +diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c +index 206052e..5880b48 100644 +--- a/arch/x86/kernel/apic/io_apic.c ++++ b/arch/x86/kernel/apic/io_apic.c +@@ -2522,6 +2522,7 @@ void __init setup_ioapic_dest(void) + int pin, ioapic, irq, irq_entry; + const struct cpumask *mask; + struct irq_data *idata; ++ struct irq_chip *chip; + + if (skip_ioapic_setup == 1) + return; +@@ -2545,9 +2546,9 @@ void __init setup_ioapic_dest(void) + else + mask = apic->target_cpus(); + +- irq_set_affinity(irq, mask); ++ chip = irq_data_get_irq_chip(idata); ++ chip->irq_set_affinity(idata, mask, false); + } +- + } + #endif + +diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c +index 6326ae2..1b09c42 100644 +--- a/arch/x86/kernel/cpu/perf_event_intel.c ++++ b/arch/x86/kernel/cpu/perf_event_intel.c +@@ -2102,9 +2102,12 @@ static struct event_constraint * + intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx, + struct perf_event *event) + { +- struct event_constraint *c1 = cpuc->event_constraint[idx]; ++ struct event_constraint *c1 = NULL; + struct event_constraint *c2; + ++ if (idx >= 0) /* fake does < 0 */ ++ c1 = cpuc->event_constraint[idx]; ++ + /* + * first time only + * - static constraint: no change across incremental scheduling calls +diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c +index e068d66..74ca2fe 100644 +--- a/arch/x86/kernel/crash.c ++++ b/arch/x86/kernel/crash.c +@@ -185,10 +185,9 @@ void native_machine_crash_shutdown(struct pt_regs *regs) + } + + #ifdef CONFIG_KEXEC_FILE +-static int get_nr_ram_ranges_callback(unsigned long start_pfn, +- unsigned long nr_pfn, void *arg) ++static int get_nr_ram_ranges_callback(u64 start, u64 end, void *arg) + { +- int *nr_ranges = arg; ++ unsigned int *nr_ranges = arg; + + (*nr_ranges)++; + return 0; +@@ -214,7 +213,7 @@ static void fill_up_crash_elf_data(struct crash_elf_data *ced, + + ced->image = image; + +- walk_system_ram_range(0, -1, &nr_ranges, ++ walk_system_ram_res(0, -1, &nr_ranges, + get_nr_ram_ranges_callback); + + ced->max_nr_ranges = nr_ranges; +diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c +index 58bcfb6..ebb5657 100644 +--- a/arch/x86/kernel/paravirt.c ++++ b/arch/x86/kernel/paravirt.c +@@ -41,10 +41,18 @@ + #include <asm/timer.h> + #include <asm/special_insns.h> + +-/* nop stub */ +-void _paravirt_nop(void) +-{ +-} ++/* ++ * nop stub, which must not clobber anything *including the stack* to ++ * avoid confusing the entry prologues. ++ */ ++extern void _paravirt_nop(void); ++asm (".pushsection .entry.text, \"ax\"\n" ++ ".global _paravirt_nop\n" ++ "_paravirt_nop:\n\t" ++ "ret\n\t" ++ ".size _paravirt_nop, . - _paravirt_nop\n\t" ++ ".type _paravirt_nop, @function\n\t" ++ ".popsection"); + + /* identity function, which can be inlined */ + u32 _paravirt_ident_32(u32 x) +diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c +index f6b9163..a90ac95 100644 +--- a/arch/x86/kernel/process_64.c ++++ b/arch/x86/kernel/process_64.c +@@ -497,27 +497,59 @@ void set_personality_ia32(bool x32) + } + EXPORT_SYMBOL_GPL(set_personality_ia32); + ++/* ++ * Called from fs/proc with a reference on @p to find the function ++ * which called into schedule(). This needs to be done carefully ++ * because the task might wake up and we might look at a stack ++ * changing under us. 
++ */ + unsigned long get_wchan(struct task_struct *p) + { +- unsigned long stack; +- u64 fp, ip; ++ unsigned long start, bottom, top, sp, fp, ip; + int count = 0; + + if (!p || p == current || p->state == TASK_RUNNING) + return 0; +- stack = (unsigned long)task_stack_page(p); +- if (p->thread.sp < stack || p->thread.sp >= stack+THREAD_SIZE) ++ ++ start = (unsigned long)task_stack_page(p); ++ if (!start) ++ return 0; ++ ++ /* ++ * Layout of the stack page: ++ * ++ * ----------- topmax = start + THREAD_SIZE - sizeof(unsigned long) ++ * PADDING ++ * ----------- top = topmax - TOP_OF_KERNEL_STACK_PADDING ++ * stack ++ * ----------- bottom = start + sizeof(thread_info) ++ * thread_info ++ * ----------- start ++ * ++ * The tasks stack pointer points at the location where the ++ * framepointer is stored. The data on the stack is: ++ * ... IP FP ... IP FP ++ * ++ * We need to read FP and IP, so we need to adjust the upper ++ * bound by another unsigned long. ++ */ ++ top = start + THREAD_SIZE - TOP_OF_KERNEL_STACK_PADDING; ++ top -= 2 * sizeof(unsigned long); ++ bottom = start + sizeof(struct thread_info); ++ ++ sp = READ_ONCE(p->thread.sp); ++ if (sp < bottom || sp > top) + return 0; +- fp = *(u64 *)(p->thread.sp); ++ ++ fp = READ_ONCE(*(unsigned long *)sp); + do { +- if (fp < (unsigned long)stack || +- fp >= (unsigned long)stack+THREAD_SIZE) ++ if (fp < bottom || fp > top) + return 0; +- ip = *(u64 *)(fp+8); ++ ip = READ_ONCE(*(unsigned long *)(fp + sizeof(unsigned long))); + if (!in_sched_functions(ip)) + return ip; +- fp = *(u64 *)fp; +- } while (count++ < 16); ++ fp = READ_ONCE(*(unsigned long *)fp); ++ } while (count++ < 16 && p->state != TASK_RUNNING); + return 0; + } + +diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c +index 7437b41..dc9af7a 100644 +--- a/arch/x86/kernel/tsc.c ++++ b/arch/x86/kernel/tsc.c +@@ -21,6 +21,7 @@ + #include <asm/hypervisor.h> + #include <asm/nmi.h> + #include <asm/x86_init.h> ++#include <asm/geode.h> + + unsigned int __read_mostly cpu_khz; /* TSC clocks / usec, not used here */ + EXPORT_SYMBOL(cpu_khz); +@@ -1013,15 +1014,17 @@ EXPORT_SYMBOL_GPL(mark_tsc_unstable); + + static void __init check_system_tsc_reliable(void) + { +-#ifdef CONFIG_MGEODE_LX +- /* RTSC counts during suspend */ ++#if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC) ++ if (is_geode_lx()) { ++ /* RTSC counts during suspend */ + #define RTSC_SUSP 0x100 +- unsigned long res_low, res_high; ++ unsigned long res_low, res_high; + +- rdmsr_safe(MSR_GEODE_BUSCONT_CONF0, &res_low, &res_high); +- /* Geode_LX - the OLPC CPU has a very reliable TSC */ +- if (res_low & RTSC_SUSP) +- tsc_clocksource_reliable = 1; ++ rdmsr_safe(MSR_GEODE_BUSCONT_CONF0, &res_low, &res_high); ++ /* Geode_LX - the OLPC CPU has a very reliable TSC */ ++ if (res_low & RTSC_SUSP) ++ tsc_clocksource_reliable = 1; ++ } + #endif + if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE)) + tsc_clocksource_reliable = 1; +diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c +index 8e0c084..2d32b67 100644 +--- a/arch/x86/kvm/svm.c ++++ b/arch/x86/kvm/svm.c +@@ -513,7 +513,7 @@ static void skip_emulated_instruction(struct kvm_vcpu *vcpu) + struct vcpu_svm *svm = to_svm(vcpu); + + if (svm->vmcb->control.next_rip != 0) { +- WARN_ON(!static_cpu_has(X86_FEATURE_NRIPS)); ++ WARN_ON_ONCE(!static_cpu_has(X86_FEATURE_NRIPS)); + svm->next_rip = svm->vmcb->control.next_rip; + } + +@@ -865,64 +865,6 @@ static void svm_disable_lbrv(struct vcpu_svm *svm) + set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0); + 
} + +-#define MTRR_TYPE_UC_MINUS 7 +-#define MTRR2PROTVAL_INVALID 0xff +- +-static u8 mtrr2protval[8]; +- +-static u8 fallback_mtrr_type(int mtrr) +-{ +- /* +- * WT and WP aren't always available in the host PAT. Treat +- * them as UC and UC- respectively. Everything else should be +- * there. +- */ +- switch (mtrr) +- { +- case MTRR_TYPE_WRTHROUGH: +- return MTRR_TYPE_UNCACHABLE; +- case MTRR_TYPE_WRPROT: +- return MTRR_TYPE_UC_MINUS; +- default: +- BUG(); +- } +-} +- +-static void build_mtrr2protval(void) +-{ +- int i; +- u64 pat; +- +- for (i = 0; i < 8; i++) +- mtrr2protval[i] = MTRR2PROTVAL_INVALID; +- +- /* Ignore the invalid MTRR types. */ +- mtrr2protval[2] = 0; +- mtrr2protval[3] = 0; +- +- /* +- * Use host PAT value to figure out the mapping from guest MTRR +- * values to nested page table PAT/PCD/PWT values. We do not +- * want to change the host PAT value every time we enter the +- * guest. +- */ +- rdmsrl(MSR_IA32_CR_PAT, pat); +- for (i = 0; i < 8; i++) { +- u8 mtrr = pat >> (8 * i); +- +- if (mtrr2protval[mtrr] == MTRR2PROTVAL_INVALID) +- mtrr2protval[mtrr] = __cm_idx2pte(i); +- } +- +- for (i = 0; i < 8; i++) { +- if (mtrr2protval[i] == MTRR2PROTVAL_INVALID) { +- u8 fallback = fallback_mtrr_type(i); +- mtrr2protval[i] = mtrr2protval[fallback]; +- BUG_ON(mtrr2protval[i] == MTRR2PROTVAL_INVALID); +- } +- } +-} +- + static __init int svm_hardware_setup(void) + { + int cpu; +@@ -989,7 +931,6 @@ static __init int svm_hardware_setup(void) + } else + kvm_disable_tdp(); + +- build_mtrr2protval(); + return 0; + + err: +@@ -1144,39 +1085,6 @@ static u64 svm_compute_tsc_offset(struct kvm_vcpu *vcpu, u64 target_tsc) + return target_tsc - tsc; + } + +-static void svm_set_guest_pat(struct vcpu_svm *svm, u64 *g_pat) +-{ +- struct kvm_vcpu *vcpu = &svm->vcpu; +- +- /* Unlike Intel, AMD takes the guest's CR0.CD into account. +- * +- * AMD does not have IPAT. To emulate it for the case of guests +- * with no assigned devices, just set everything to WB. If guests +- * have assigned devices, however, we cannot force WB for RAM +- * pages only, so use the guest PAT directly. +- */ +- if (!kvm_arch_has_assigned_device(vcpu->kvm)) +- *g_pat = 0x0606060606060606; +- else +- *g_pat = vcpu->arch.pat; +-} +- +-static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) +-{ +- u8 mtrr; +- +- /* +- * 1. MMIO: trust guest MTRR, so same as item 3. +- * 2. No passthrough: always map as WB, and force guest PAT to WB as well +- * 3. Passthrough: can't guarantee the result, try to trust guest. +- */ +- if (!is_mmio && !kvm_arch_has_assigned_device(vcpu->kvm)) +- return 0; +- +- mtrr = kvm_mtrr_get_guest_memory_type(vcpu, gfn); +- return mtrr2protval[mtrr]; +-} +- + static void init_vmcb(struct vcpu_svm *svm, bool init_event) + { + struct vmcb_control_area *control = &svm->vmcb->control; +@@ -1260,6 +1168,7 @@ static void init_vmcb(struct vcpu_svm *svm, bool init_event) + * It also updates the guest-visible cr0 value. + */ + (void)kvm_set_cr0(&svm->vcpu, X86_CR0_NW | X86_CR0_CD | X86_CR0_ET); ++ kvm_mmu_reset_context(&svm->vcpu); + + save->cr4 = X86_CR4_PAE; + /* rdx = ?? 
*/ +@@ -1272,7 +1181,6 @@ static void init_vmcb(struct vcpu_svm *svm, bool init_event) + clr_cr_intercept(svm, INTERCEPT_CR3_READ); + clr_cr_intercept(svm, INTERCEPT_CR3_WRITE); + save->g_pat = svm->vcpu.arch.pat; +- svm_set_guest_pat(svm, &save->g_pat); + save->cr3 = 0; + save->cr4 = 0; + } +@@ -3347,16 +3255,6 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) + case MSR_VM_IGNNE: + vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data); + break; +- case MSR_IA32_CR_PAT: +- if (npt_enabled) { +- if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data)) +- return 1; +- vcpu->arch.pat = data; +- svm_set_guest_pat(svm, &svm->vmcb->save.g_pat); +- mark_dirty(svm->vmcb, VMCB_NPT); +- break; +- } +- /* fall through */ + default: + return kvm_set_msr_common(vcpu, msr); + } +@@ -4191,6 +4089,11 @@ static bool svm_has_high_real_mode_segbase(void) + return true; + } + ++static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) ++{ ++ return 0; ++} ++ + static void svm_cpuid_update(struct kvm_vcpu *vcpu) + { + } +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c +index 83b7b5c..aa9e822 100644 +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -6134,6 +6134,8 @@ static __init int hardware_setup(void) + memcpy(vmx_msr_bitmap_longmode_x2apic, + vmx_msr_bitmap_longmode, PAGE_SIZE); + ++ set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */ ++ + if (enable_apicv) { + for (msr = 0x800; msr <= 0x8ff; msr++) + vmx_disable_intercept_msr_read_x2apic(msr); +@@ -8632,17 +8634,22 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) + u64 ipat = 0; + + /* For VT-d and EPT combination +- * 1. MMIO: guest may want to apply WC, trust it. ++ * 1. MMIO: always map as UC + * 2. EPT with VT-d: + * a. VT-d without snooping control feature: can't guarantee the +- * result, try to trust guest. So the same as item 1. ++ * result, try to trust guest. + * b. VT-d with snooping control feature: snooping control feature of + * VT-d engine can guarantee the cache correctness. Just set it + * to WB to keep consistent with host. So the same as item 3. + * 3. EPT without VT-d: always map as WB and set IPAT=1 to keep + * consistent with host MTRR + */ +- if (!is_mmio && !kvm_arch_has_noncoherent_dma(vcpu->kvm)) { ++ if (is_mmio) { ++ cache = MTRR_TYPE_UNCACHABLE; ++ goto exit; ++ } ++ ++ if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) { + ipat = VMX_EPT_IPAT_BIT; + cache = MTRR_TYPE_WRBACK; + goto exit; +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 8f0f6ec..32c6e6a 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -2388,6 +2388,8 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) + case MSR_IA32_LASTINTFROMIP: + case MSR_IA32_LASTINTTOIP: + case MSR_K8_SYSCFG: ++ case MSR_K8_TSEG_ADDR: ++ case MSR_K8_TSEG_MASK: + case MSR_K7_HWCR: + case MSR_VM_HSAVE_PA: + case MSR_K8_INT_PENDING_MSG: +diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c +index 3fba623..f9977a7 100644 +--- a/arch/x86/mm/init_64.c ++++ b/arch/x86/mm/init_64.c +@@ -1132,7 +1132,7 @@ void mark_rodata_ro(void) + * has been zapped already via cleanup_highmem(). 
+ */ + all_end = roundup((unsigned long)_brk_end, PMD_SIZE); +- set_memory_nx(rodata_start, (all_end - rodata_start) >> PAGE_SHIFT); ++ set_memory_nx(text_end, (all_end - text_end) >> PAGE_SHIFT); + + rodata_test(); + +diff --git a/arch/x86/pci/intel_mid_pci.c b/arch/x86/pci/intel_mid_pci.c +index 2706230..7553921 100644 +--- a/arch/x86/pci/intel_mid_pci.c ++++ b/arch/x86/pci/intel_mid_pci.c +@@ -35,6 +35,9 @@ + + #define PCIE_CAP_OFFSET 0x100 + ++/* Quirks for the listed devices */ ++#define PCI_DEVICE_ID_INTEL_MRFL_MMC 0x1190 ++ + /* Fixed BAR fields */ + #define PCIE_VNDR_CAP_ID_FIXED_BAR 0x00 /* Fixed BAR (TBD) */ + #define PCI_FIXED_BAR_0_SIZE 0x04 +@@ -214,10 +217,27 @@ static int intel_mid_pci_irq_enable(struct pci_dev *dev) + if (dev->irq_managed && dev->irq > 0) + return 0; + +- if (intel_mid_identify_cpu() == INTEL_MID_CPU_CHIP_TANGIER) ++ switch (intel_mid_identify_cpu()) { ++ case INTEL_MID_CPU_CHIP_TANGIER: + polarity = 0; /* active high */ +- else ++ ++ /* Special treatment for IRQ0 */ ++ if (dev->irq == 0) { ++ /* ++ * TNG has IRQ0 assigned to eMMC controller. But there ++ * are also other devices with bogus PCI configuration ++ * that have IRQ0 assigned. This check ensures that ++ * eMMC gets it. ++ */ ++ if (dev->device != PCI_DEVICE_ID_INTEL_MRFL_MMC) ++ return -EBUSY; ++ } ++ break; ++ default: + polarity = 1; /* active low */ ++ break; ++ } ++ + ioapic_set_alloc_attr(&info, dev_to_node(&dev->dev), 1, polarity); + + /* +diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c +index e4308fe..c6835bf 100644 +--- a/arch/x86/platform/efi/efi.c ++++ b/arch/x86/platform/efi/efi.c +@@ -705,6 +705,70 @@ out: + } + + /* ++ * Iterate the EFI memory map in reverse order because the regions ++ * will be mapped top-down. The end result is the same as if we had ++ * mapped things forward, but doesn't require us to change the ++ * existing implementation of efi_map_region(). ++ */ ++static inline void *efi_map_next_entry_reverse(void *entry) ++{ ++ /* Initial call */ ++ if (!entry) ++ return memmap.map_end - memmap.desc_size; ++ ++ entry -= memmap.desc_size; ++ if (entry < memmap.map) ++ return NULL; ++ ++ return entry; ++} ++ ++/* ++ * efi_map_next_entry - Return the next EFI memory map descriptor ++ * @entry: Previous EFI memory map descriptor ++ * ++ * This is a helper function to iterate over the EFI memory map, which ++ * we do in different orders depending on the current configuration. ++ * ++ * To begin traversing the memory map @entry must be %NULL. ++ * ++ * Returns %NULL when we reach the end of the memory map. ++ */ ++static void *efi_map_next_entry(void *entry) ++{ ++ if (!efi_enabled(EFI_OLD_MEMMAP) && efi_enabled(EFI_64BIT)) { ++ /* ++ * Starting in UEFI v2.5 the EFI_PROPERTIES_TABLE ++ * config table feature requires us to map all entries ++ * in the same order as they appear in the EFI memory ++ * map. That is to say, entry N must have a lower ++ * virtual address than entry N+1. This is because the ++ * firmware toolchain leaves relative references in ++ * the code/data sections, which are split and become ++ * separate EFI memory regions. Mapping things ++ * out-of-order leads to the firmware accessing ++ * unmapped addresses. ++ * ++ * Since we need to map things this way whether or not ++ * the kernel actually makes use of ++ * EFI_PROPERTIES_TABLE, let's just switch to this ++ * scheme by default for 64-bit. 
++ */ ++ return efi_map_next_entry_reverse(entry); ++ } ++ ++ /* Initial call */ ++ if (!entry) ++ return memmap.map; ++ ++ entry += memmap.desc_size; ++ if (entry >= memmap.map_end) ++ return NULL; ++ ++ return entry; ++} ++ ++/* + * Map the efi memory ranges of the runtime services and update new_mmap with + * virtual addresses. + */ +@@ -714,7 +778,8 @@ static void * __init efi_map_regions(int *count, int *pg_shift) + unsigned long left = 0; + efi_memory_desc_t *md; + +- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) { ++ p = NULL; ++ while ((p = efi_map_next_entry(p))) { + md = p; + if (!(md->attribute & EFI_MEMORY_RUNTIME)) { + #ifdef CONFIG_X86_64 +diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c +index 11d6fb4..777ad2f 100644 +--- a/arch/x86/xen/enlighten.c ++++ b/arch/x86/xen/enlighten.c +@@ -33,6 +33,10 @@ + #include <linux/memblock.h> + #include <linux/edd.h> + ++#ifdef CONFIG_KEXEC_CORE ++#include <linux/kexec.h> ++#endif ++ + #include <xen/xen.h> + #include <xen/events.h> + #include <xen/interface/xen.h> +@@ -1800,6 +1804,21 @@ static struct notifier_block xen_hvm_cpu_notifier = { + .notifier_call = xen_hvm_cpu_notify, + }; + ++#ifdef CONFIG_KEXEC_CORE ++static void xen_hvm_shutdown(void) ++{ ++ native_machine_shutdown(); ++ if (kexec_in_progress) ++ xen_reboot(SHUTDOWN_soft_reset); ++} ++ ++static void xen_hvm_crash_shutdown(struct pt_regs *regs) ++{ ++ native_machine_crash_shutdown(regs); ++ xen_reboot(SHUTDOWN_soft_reset); ++} ++#endif ++ + static void __init xen_hvm_guest_init(void) + { + if (xen_pv_domain()) +@@ -1819,6 +1838,10 @@ static void __init xen_hvm_guest_init(void) + x86_init.irqs.intr_init = xen_init_IRQ; + xen_hvm_init_time_ops(); + xen_hvm_init_mmu_ops(); ++#ifdef CONFIG_KEXEC_CORE ++ machine_ops.shutdown = xen_hvm_shutdown; ++ machine_ops.crash_shutdown = xen_hvm_crash_shutdown; ++#endif + } + #endif + +diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c +index d6283b3..9cc48d1d 100644 +--- a/block/blk-cgroup.c ++++ b/block/blk-cgroup.c +@@ -387,6 +387,9 @@ static void blkg_destroy_all(struct request_queue *q) + blkg_destroy(blkg); + spin_unlock(&blkcg->lock); + } ++ ++ q->root_blkg = NULL; ++ q->root_rl.blkg = NULL; + } + + /* +diff --git a/block/blk-mq.c b/block/blk-mq.c +index 176262e..c699026 100644 +--- a/block/blk-mq.c ++++ b/block/blk-mq.c +@@ -1807,7 +1807,6 @@ static void blk_mq_map_swqueue(struct request_queue *q) + + hctx = q->mq_ops->map_queue(q, i); + cpumask_set_cpu(i, hctx->cpumask); +- cpumask_set_cpu(i, hctx->tags->cpumask); + ctx->index_hw = hctx->nr_ctx; + hctx->ctxs[hctx->nr_ctx++] = ctx; + } +@@ -1847,6 +1846,14 @@ static void blk_mq_map_swqueue(struct request_queue *q) + hctx->next_cpu = cpumask_first(hctx->cpumask); + hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH; + } ++ ++ queue_for_each_ctx(q, ctx, i) { ++ if (!cpu_online(i)) ++ continue; ++ ++ hctx = q->mq_ops->map_queue(q, i); ++ cpumask_set_cpu(i, hctx->tags->cpumask); ++ } + } + + static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set) +diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c +index 764280a..e9fd32e 100644 +--- a/drivers/base/cacheinfo.c ++++ b/drivers/base/cacheinfo.c +@@ -148,7 +148,11 @@ static void cache_shared_cpu_map_remove(unsigned int cpu) + + if (sibling == cpu) /* skip itself */ + continue; ++ + sib_cpu_ci = get_cpu_cacheinfo(sibling); ++ if (!sib_cpu_ci->info_list) ++ continue; ++ + sib_leaf = sib_cpu_ci->info_list + index; + cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map); + 
cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map); +@@ -159,6 +163,9 @@ static void cache_shared_cpu_map_remove(unsigned int cpu) + + static void free_cache_attributes(unsigned int cpu) + { ++ if (!per_cpu_cacheinfo(cpu)) ++ return; ++ + cache_shared_cpu_map_remove(cpu); + + kfree(per_cpu_cacheinfo(cpu)); +@@ -514,8 +521,7 @@ static int cacheinfo_cpu_callback(struct notifier_block *nfb, + break; + case CPU_DEAD: + cache_remove_dev(cpu); +- if (per_cpu_cacheinfo(cpu)) +- free_cache_attributes(cpu); ++ free_cache_attributes(cpu); + break; + } + return notifier_from_errno(rc); +diff --git a/drivers/base/property.c b/drivers/base/property.c +index f3f6d16..37a7bb7 100644 +--- a/drivers/base/property.c ++++ b/drivers/base/property.c +@@ -27,9 +27,10 @@ + */ + void device_add_property_set(struct device *dev, struct property_set *pset) + { +- if (pset) +- pset->fwnode.type = FWNODE_PDATA; ++ if (!pset) ++ return; + ++ pset->fwnode.type = FWNODE_PDATA; + set_secondary_fwnode(dev, &pset->fwnode); + } + EXPORT_SYMBOL_GPL(device_add_property_set); +diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c +index 5799a0b..c8941f3 100644 +--- a/drivers/base/regmap/regmap-debugfs.c ++++ b/drivers/base/regmap/regmap-debugfs.c +@@ -32,8 +32,7 @@ static DEFINE_MUTEX(regmap_debugfs_early_lock); + /* Calculate the length of a fixed format */ + static size_t regmap_calc_reg_len(int max_val, char *buf, size_t buf_size) + { +- snprintf(buf, buf_size, "%x", max_val); +- return strlen(buf); ++ return snprintf(NULL, 0, "%x", max_val); + } + + static ssize_t regmap_name_read_file(struct file *file, +@@ -432,7 +431,7 @@ static ssize_t regmap_access_read_file(struct file *file, + /* If we're in the region the user is trying to read */ + if (p >= *ppos) { + /* ...but not beyond it */ +- if (buf_pos >= count - 1 - tot_len) ++ if (buf_pos + tot_len + 1 >= count) + break; + + /* Format the register */ +diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c +index deb3f00..7676575 100644 +--- a/drivers/block/xen-blkback/xenbus.c ++++ b/drivers/block/xen-blkback/xenbus.c +@@ -212,6 +212,9 @@ static int xen_blkif_map(struct xen_blkif *blkif, grant_ref_t *gref, + + static int xen_blkif_disconnect(struct xen_blkif *blkif) + { ++ struct pending_req *req, *n; ++ int i = 0, j; ++ + if (blkif->xenblkd) { + kthread_stop(blkif->xenblkd); + wake_up(&blkif->shutdown_wq); +@@ -238,13 +241,28 @@ static int xen_blkif_disconnect(struct xen_blkif *blkif) + /* Remove all persistent grants and the cache of ballooned pages. 
*/
+ xen_blkbk_free_caches(blkif);
+
++ /* Check that there is no request in use */
++ list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
++ list_del(&req->free_list);
++
++ for (j = 0; j < MAX_INDIRECT_SEGMENTS; j++)
++ kfree(req->segments[j]);
++
++ for (j = 0; j < MAX_INDIRECT_PAGES; j++)
++ kfree(req->indirect_pages[j]);
++
++ kfree(req);
++ i++;
++ }
++
++ WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));
++ blkif->nr_ring_pages = 0;
++
+ return 0;
+ }
+
+ static void xen_blkif_free(struct xen_blkif *blkif)
+ {
+- struct pending_req *req, *n;
+- int i = 0, j;
+
+ xen_blkif_disconnect(blkif);
+ xen_vbd_free(&blkif->vbd);
+@@ -257,22 +275,6 @@ static void xen_blkif_free(struct xen_blkif *blkif)
+ BUG_ON(!list_empty(&blkif->free_pages));
+ BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
+
+- /* Check that there is no request in use */
+- list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
+- list_del(&req->free_list);
+-
+- for (j = 0; j < MAX_INDIRECT_SEGMENTS; j++)
+- kfree(req->segments[j]);
+-
+- for (j = 0; j < MAX_INDIRECT_PAGES; j++)
+- kfree(req->indirect_pages[j]);
+-
+- kfree(req);
+- i++;
+- }
+-
+- WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));
+-
+ kmem_cache_free(xen_blkif_cachep, blkif);
+ }
+
+diff --git a/drivers/clk/samsung/clk-cpu.c b/drivers/clk/samsung/clk-cpu.c
+index 3a1fe07..dd02356 100644
+--- a/drivers/clk/samsung/clk-cpu.c
++++ b/drivers/clk/samsung/clk-cpu.c
+@@ -161,7 +161,7 @@ static int exynos_cpuclk_pre_rate_change(struct clk_notifier_data *ndata,
+ * the values for DIV_COPY and DIV_HPM dividers need not be set.
+ */
+ div0 = cfg_data->div0;
+- if (test_bit(CLK_CPU_HAS_DIV1, &cpuclk->flags)) {
++ if (cpuclk->flags & CLK_CPU_HAS_DIV1) {
+ div1 = cfg_data->div1;
+ if (readl(base + E4210_SRC_CPU) & E4210_MUX_HPM_MASK)
+ div1 = readl(base + E4210_DIV_CPU1) &
+@@ -182,7 +182,7 @@ static int exynos_cpuclk_pre_rate_change(struct clk_notifier_data *ndata,
+ alt_div = DIV_ROUND_UP(alt_prate, tmp_rate) - 1;
+ WARN_ON(alt_div >= MAX_DIV);
+
+- if (test_bit(CLK_CPU_NEEDS_DEBUG_ALT_DIV, &cpuclk->flags)) {
++ if (cpuclk->flags & CLK_CPU_NEEDS_DEBUG_ALT_DIV) {
+ /*
+ * In Exynos4210, ATB clock parent is also mout_core. So
+ * ATB clock also needs to be maintained at safe speed.
+@@ -203,7 +203,7 @@ static int exynos_cpuclk_pre_rate_change(struct clk_notifier_data *ndata, + writel(div0, base + E4210_DIV_CPU0); + wait_until_divider_stable(base + E4210_DIV_STAT_CPU0, DIV_MASK_ALL); + +- if (test_bit(CLK_CPU_HAS_DIV1, &cpuclk->flags)) { ++ if (cpuclk->flags & CLK_CPU_HAS_DIV1) { + writel(div1, base + E4210_DIV_CPU1); + wait_until_divider_stable(base + E4210_DIV_STAT_CPU1, + DIV_MASK_ALL); +@@ -222,7 +222,7 @@ static int exynos_cpuclk_post_rate_change(struct clk_notifier_data *ndata, + unsigned long mux_reg; + + /* find out the divider values to use for clock data */ +- if (test_bit(CLK_CPU_NEEDS_DEBUG_ALT_DIV, &cpuclk->flags)) { ++ if (cpuclk->flags & CLK_CPU_NEEDS_DEBUG_ALT_DIV) { + while ((cfg_data->prate * 1000) != ndata->new_rate) { + if (cfg_data->prate == 0) + return -EINVAL; +@@ -237,7 +237,7 @@ static int exynos_cpuclk_post_rate_change(struct clk_notifier_data *ndata, + writel(mux_reg & ~(1 << 16), base + E4210_SRC_CPU); + wait_until_mux_stable(base + E4210_STAT_CPU, 16, 1); + +- if (test_bit(CLK_CPU_NEEDS_DEBUG_ALT_DIV, &cpuclk->flags)) { ++ if (cpuclk->flags & CLK_CPU_NEEDS_DEBUG_ALT_DIV) { + div |= (cfg_data->div0 & E4210_DIV0_ATB_MASK); + div_mask |= E4210_DIV0_ATB_MASK; + } +diff --git a/drivers/clk/ti/clk-3xxx.c b/drivers/clk/ti/clk-3xxx.c +index 757636d..4ab28cf 100644 +--- a/drivers/clk/ti/clk-3xxx.c ++++ b/drivers/clk/ti/clk-3xxx.c +@@ -163,7 +163,6 @@ static struct ti_dt_clk omap3xxx_clks[] = { + DT_CLK(NULL, "gpio2_ick", "gpio2_ick"), + DT_CLK(NULL, "wdt3_ick", "wdt3_ick"), + DT_CLK(NULL, "uart3_ick", "uart3_ick"), +- DT_CLK(NULL, "uart4_ick", "uart4_ick"), + DT_CLK(NULL, "gpt9_ick", "gpt9_ick"), + DT_CLK(NULL, "gpt8_ick", "gpt8_ick"), + DT_CLK(NULL, "gpt7_ick", "gpt7_ick"), +@@ -308,6 +307,7 @@ static struct ti_dt_clk am35xx_clks[] = { + static struct ti_dt_clk omap36xx_clks[] = { + DT_CLK(NULL, "omap_192m_alwon_fck", "omap_192m_alwon_fck"), + DT_CLK(NULL, "uart4_fck", "uart4_fck"), ++ DT_CLK(NULL, "uart4_ick", "uart4_ick"), + { .node_name = NULL }, + }; + +diff --git a/drivers/clk/ti/clk-7xx.c b/drivers/clk/ti/clk-7xx.c +index 63b8323..0eb82107 100644 +--- a/drivers/clk/ti/clk-7xx.c ++++ b/drivers/clk/ti/clk-7xx.c +@@ -16,7 +16,6 @@ + #include <linux/clkdev.h> + #include <linux/clk/ti.h> + +-#define DRA7_DPLL_ABE_DEFFREQ 180633600 + #define DRA7_DPLL_GMAC_DEFFREQ 1000000000 + #define DRA7_DPLL_USB_DEFFREQ 960000000 + +@@ -312,27 +311,12 @@ static struct ti_dt_clk dra7xx_clks[] = { + int __init dra7xx_dt_clk_init(void) + { + int rc; +- struct clk *abe_dpll_mux, *sys_clkin2, *dpll_ck, *hdcp_ck; ++ struct clk *dpll_ck, *hdcp_ck; + + ti_dt_clocks_register(dra7xx_clks); + + omap2_clk_disable_autoidle_all(); + +- abe_dpll_mux = clk_get_sys(NULL, "abe_dpll_sys_clk_mux"); +- sys_clkin2 = clk_get_sys(NULL, "sys_clkin2"); +- dpll_ck = clk_get_sys(NULL, "dpll_abe_ck"); +- +- rc = clk_set_parent(abe_dpll_mux, sys_clkin2); +- if (!rc) +- rc = clk_set_rate(dpll_ck, DRA7_DPLL_ABE_DEFFREQ); +- if (rc) +- pr_err("%s: failed to configure ABE DPLL!\n", __func__); +- +- dpll_ck = clk_get_sys(NULL, "dpll_abe_m2x2_ck"); +- rc = clk_set_rate(dpll_ck, DRA7_DPLL_ABE_DEFFREQ * 2); +- if (rc) +- pr_err("%s: failed to configure ABE DPLL m2x2!\n", __func__); +- + dpll_ck = clk_get_sys(NULL, "dpll_gmac_ck"); + rc = clk_set_rate(dpll_ck, DRA7_DPLL_GMAC_DEFFREQ); + if (rc) +diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c +index 0136dfc..7c2a738 100644 +--- a/drivers/cpufreq/acpi-cpufreq.c ++++ b/drivers/cpufreq/acpi-cpufreq.c +@@ -146,6 +146,9 @@ 
static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf)
+ {
+ struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);
+
++ if (unlikely(!data))
++ return -ENODEV;
++
+ return cpufreq_show_cpus(data->freqdomain_cpus, buf);
+ }
+
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index 528a82bf..99a4065 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -255,7 +255,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ rcu_read_unlock();
+
+ tol_uV = opp_uV * priv->voltage_tolerance / 100;
+- if (regulator_is_supported_voltage(cpu_reg, opp_uV,
++ if (regulator_is_supported_voltage(cpu_reg,
++ opp_uV - tol_uV,
+ opp_uV + tol_uV)) {
+ if (opp_uV < min_uV)
+ min_uV = opp_uV;
+diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
+index b60698b..bc2a55b 100644
+--- a/drivers/crypto/marvell/cesa.h
++++ b/drivers/crypto/marvell/cesa.h
+@@ -687,6 +687,33 @@ static inline u32 mv_cesa_get_int_mask(struct mv_cesa_engine *engine)
+
+ int mv_cesa_queue_req(struct crypto_async_request *req);
+
++/*
++ * Helper function that indicates whether a crypto request needs to be
++ * cleaned up or not after being enqueued using mv_cesa_queue_req().
++ */
++static inline int mv_cesa_req_needs_cleanup(struct crypto_async_request *req,
++ int ret)
++{
++ /*
++ * The queue still had some space, the request was queued
++ * normally, so there's no need to clean it up.
++ */
++ if (ret == -EINPROGRESS)
++ return false;
++
++ /*
++ * The queue had no space left, but since the request is
++ * flagged with CRYPTO_TFM_REQ_MAY_BACKLOG, it was added to
++ * the backlog and will be processed later. There's no need to
++ * clean it up.
++ */
++ if (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
++ return false;
++
++ /* Request wasn't queued, we need to clean it up */
++ return true;
++}
++
+ /* TDMA functions */
+
+ static inline void mv_cesa_req_dma_iter_init(struct mv_cesa_dma_iter *iter,
+diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
+index 0745cf3..3df2f4e 100644
+--- a/drivers/crypto/marvell/cipher.c
++++ b/drivers/crypto/marvell/cipher.c
+@@ -189,7 +189,6 @@ static inline void mv_cesa_ablkcipher_prepare(struct crypto_async_request *req,
+ {
+ struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
+ struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
+-
+ creq->req.base.engine = engine;
+
+ if (creq->req.base.type == CESA_DMA_REQ)
+@@ -431,7 +430,7 @@ static int mv_cesa_des_op(struct ablkcipher_request *req,
+ return ret;
+
+ ret = mv_cesa_queue_req(&req->base);
+- if (ret && ret != -EINPROGRESS)
++ if (mv_cesa_req_needs_cleanup(&req->base, ret))
+ mv_cesa_ablkcipher_cleanup(req);
+
+ return ret;
+@@ -551,7 +550,7 @@ static int mv_cesa_des3_op(struct ablkcipher_request *req,
+ return ret;
+
+ ret = mv_cesa_queue_req(&req->base);
+- if (ret && ret != -EINPROGRESS)
++ if (mv_cesa_req_needs_cleanup(&req->base, ret))
+ mv_cesa_ablkcipher_cleanup(req);
+
+ return ret;
+@@ -693,7 +692,7 @@ static int mv_cesa_aes_op(struct ablkcipher_request *req,
+ return ret;
+
+ ret = mv_cesa_queue_req(&req->base);
+- if (ret && ret != -EINPROGRESS)
++ if (mv_cesa_req_needs_cleanup(&req->base, ret))
+ mv_cesa_ablkcipher_cleanup(req);
+
+ return ret;
+diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
+index ae9272e..e8d0d71 100644
+--- a/drivers/crypto/marvell/hash.c
++++ b/drivers/crypto/marvell/hash.c
+@@ -739,10 +739,8 @@
static int mv_cesa_ahash_update(struct ahash_request *req) + return 0; + + ret = mv_cesa_queue_req(&req->base); +- if (ret && ret != -EINPROGRESS) { ++ if (mv_cesa_req_needs_cleanup(&req->base, ret)) + mv_cesa_ahash_cleanup(req); +- return ret; +- } + + return ret; + } +@@ -766,7 +764,7 @@ static int mv_cesa_ahash_final(struct ahash_request *req) + return 0; + + ret = mv_cesa_queue_req(&req->base); +- if (ret && ret != -EINPROGRESS) ++ if (mv_cesa_req_needs_cleanup(&req->base, ret)) + mv_cesa_ahash_cleanup(req); + + return ret; +@@ -791,7 +789,7 @@ static int mv_cesa_ahash_finup(struct ahash_request *req) + return 0; + + ret = mv_cesa_queue_req(&req->base); +- if (ret && ret != -EINPROGRESS) ++ if (mv_cesa_req_needs_cleanup(&req->base, ret)) + mv_cesa_ahash_cleanup(req); + + return ret; +diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c +index 40afa2a..da7917a 100644 +--- a/drivers/dma/at_xdmac.c ++++ b/drivers/dma/at_xdmac.c +@@ -455,6 +455,15 @@ static struct at_xdmac_desc *at_xdmac_alloc_desc(struct dma_chan *chan, + return desc; + } + ++void at_xdmac_init_used_desc(struct at_xdmac_desc *desc) ++{ ++ memset(&desc->lld, 0, sizeof(desc->lld)); ++ INIT_LIST_HEAD(&desc->descs_list); ++ desc->direction = DMA_TRANS_NONE; ++ desc->xfer_size = 0; ++ desc->active_xfer = false; ++} ++ + /* Call must be protected by lock. */ + static struct at_xdmac_desc *at_xdmac_get_desc(struct at_xdmac_chan *atchan) + { +@@ -466,7 +475,7 @@ static struct at_xdmac_desc *at_xdmac_get_desc(struct at_xdmac_chan *atchan) + desc = list_first_entry(&atchan->free_descs_list, + struct at_xdmac_desc, desc_node); + list_del(&desc->desc_node); +- desc->active_xfer = false; ++ at_xdmac_init_used_desc(desc); + } + + return desc; +@@ -797,10 +806,7 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, + list_add_tail(&desc->desc_node, &first->descs_list); + } + +- prev->lld.mbr_nda = first->tx_dma_desc.phys; +- dev_dbg(chan2dev(chan), +- "%s: chain lld: prev=0x%p, mbr_nda=%pad\n", +- __func__, prev, &prev->lld.mbr_nda); ++ at_xdmac_queue_desc(chan, prev, first); + first->tx_dma_desc.flags = flags; + first->xfer_size = buf_len; + first->direction = direction; +@@ -878,14 +884,14 @@ at_xdmac_interleaved_queue_desc(struct dma_chan *chan, + + if (xt->src_inc) { + if (xt->src_sgl) +- chan_cc |= AT_XDMAC_CC_SAM_UBS_DS_AM; ++ chan_cc |= AT_XDMAC_CC_SAM_UBS_AM; + else + chan_cc |= AT_XDMAC_CC_SAM_INCREMENTED_AM; + } + + if (xt->dst_inc) { + if (xt->dst_sgl) +- chan_cc |= AT_XDMAC_CC_DAM_UBS_DS_AM; ++ chan_cc |= AT_XDMAC_CC_DAM_UBS_AM; + else + chan_cc |= AT_XDMAC_CC_DAM_INCREMENTED_AM; + } +diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c +index cf1c87f..bedce03 100644 +--- a/drivers/dma/dw/core.c ++++ b/drivers/dma/dw/core.c +@@ -1591,7 +1591,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata) + INIT_LIST_HEAD(&dw->dma.channels); + for (i = 0; i < nr_channels; i++) { + struct dw_dma_chan *dwc = &dw->chan[i]; +- int r = nr_channels - i - 1; + + dwc->chan.device = &dw->dma; + dma_cookie_init(&dwc->chan); +@@ -1603,7 +1602,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata) + + /* 7 is highest priority & 0 is lowest. 
*/ + if (pdata->chan_priority == CHAN_PRIORITY_ASCENDING) +- dwc->priority = r; ++ dwc->priority = nr_channels - i - 1; + else + dwc->priority = i; + +@@ -1622,6 +1621,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata) + /* Hardware configuration */ + if (autocfg) { + unsigned int dwc_params; ++ unsigned int r = DW_DMA_MAX_NR_CHANNELS - i - 1; + void __iomem *addr = chip->regs + r * sizeof(u32); + + dwc_params = dma_read_byaddr(addr, DWC_PARAMS); +diff --git a/drivers/dma/pxa_dma.c b/drivers/dma/pxa_dma.c +index ddcbbf5..95bdbbe 100644 +--- a/drivers/dma/pxa_dma.c ++++ b/drivers/dma/pxa_dma.c +@@ -888,6 +888,7 @@ pxad_tx_prep(struct virt_dma_chan *vc, struct virt_dma_desc *vd, + struct dma_async_tx_descriptor *tx; + struct pxad_chan *chan = container_of(vc, struct pxad_chan, vc); + ++ INIT_LIST_HEAD(&vd->node); + tx = vchan_tx_prep(vc, vd, tx_flags); + tx->tx_submit = pxad_tx_submit; + dev_dbg(&chan->vc.chan.dev->device, +diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c +index 43b57b0..ca94f47 100644 +--- a/drivers/extcon/extcon.c ++++ b/drivers/extcon/extcon.c +@@ -126,7 +126,7 @@ static int find_cable_index_by_id(struct extcon_dev *edev, const unsigned int id + + static int find_cable_id_by_name(struct extcon_dev *edev, const char *name) + { +- unsigned int id = -EINVAL; ++ int id = -EINVAL; + int i = 0; + + /* Find the id of extcon cable */ +@@ -143,7 +143,7 @@ static int find_cable_id_by_name(struct extcon_dev *edev, const char *name) + + static int find_cable_index_by_name(struct extcon_dev *edev, const char *name) + { +- unsigned int id; ++ int id; + + if (edev->max_supported == 0) + return -EINVAL; +@@ -159,7 +159,7 @@ static int find_cable_index_by_name(struct extcon_dev *edev, const char *name) + static bool is_extcon_changed(u32 prev, u32 new, int idx, bool *attached) + { + if (((prev >> idx) & 0x1) != ((new >> idx) & 0x1)) { +- *attached = new ? true : false; ++ *attached = ((new >> idx) & 0x1) ? true : false; + return true; + } + +@@ -378,7 +378,7 @@ EXPORT_SYMBOL_GPL(extcon_get_cable_state_); + */ + int extcon_get_cable_state(struct extcon_dev *edev, const char *cable_name) + { +- unsigned int id; ++ int id; + + id = find_cable_id_by_name(edev, cable_name); + if (id < 0) +@@ -426,7 +426,7 @@ EXPORT_SYMBOL_GPL(extcon_set_cable_state_); + int extcon_set_cable_state(struct extcon_dev *edev, + const char *cable_name, bool cable_state) + { +- unsigned int id; ++ int id; + + id = find_cable_id_by_name(edev, cable_name); + if (id < 0) +diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c +index e29560e..950c87f 100644 +--- a/drivers/firmware/efi/libstub/arm-stub.c ++++ b/drivers/firmware/efi/libstub/arm-stub.c +@@ -13,6 +13,7 @@ + */ + + #include <linux/efi.h> ++#include <linux/sort.h> + #include <asm/efi.h> + + #include "efistub.h" +@@ -305,6 +306,44 @@ fail: + */ + #define EFI_RT_VIRTUAL_BASE 0x40000000 + ++static int cmp_mem_desc(const void *l, const void *r) ++{ ++ const efi_memory_desc_t *left = l, *right = r; ++ ++ return (left->phys_addr > right->phys_addr) ? 1 : -1; ++} ++ ++/* ++ * Returns whether region @left ends exactly where region @right starts, ++ * or false if either argument is NULL. 
++ */ ++static bool regions_are_adjacent(efi_memory_desc_t *left, ++ efi_memory_desc_t *right) ++{ ++ u64 left_end; ++ ++ if (left == NULL || right == NULL) ++ return false; ++ ++ left_end = left->phys_addr + left->num_pages * EFI_PAGE_SIZE; ++ ++ return left_end == right->phys_addr; ++} ++ ++/* ++ * Returns whether region @left and region @right have compatible memory type ++ * mapping attributes, and are both EFI_MEMORY_RUNTIME regions. ++ */ ++static bool regions_have_compatible_memory_type_attrs(efi_memory_desc_t *left, ++ efi_memory_desc_t *right) ++{ ++ static const u64 mem_type_mask = EFI_MEMORY_WB | EFI_MEMORY_WT | ++ EFI_MEMORY_WC | EFI_MEMORY_UC | ++ EFI_MEMORY_RUNTIME; ++ ++ return ((left->attribute ^ right->attribute) & mem_type_mask) == 0; ++} ++ + /* + * efi_get_virtmap() - create a virtual mapping for the EFI memory map + * +@@ -317,33 +356,52 @@ void efi_get_virtmap(efi_memory_desc_t *memory_map, unsigned long map_size, + int *count) + { + u64 efi_virt_base = EFI_RT_VIRTUAL_BASE; +- efi_memory_desc_t *out = runtime_map; ++ efi_memory_desc_t *in, *prev = NULL, *out = runtime_map; + int l; + +- for (l = 0; l < map_size; l += desc_size) { +- efi_memory_desc_t *in = (void *)memory_map + l; ++ /* ++ * To work around potential issues with the Properties Table feature ++ * introduced in UEFI 2.5, which may split PE/COFF executable images ++ * in memory into several RuntimeServicesCode and RuntimeServicesData ++ * regions, we need to preserve the relative offsets between adjacent ++ * EFI_MEMORY_RUNTIME regions with the same memory type attributes. ++ * The easiest way to find adjacent regions is to sort the memory map ++ * before traversing it. ++ */ ++ sort(memory_map, map_size / desc_size, desc_size, cmp_mem_desc, NULL); ++ ++ for (l = 0; l < map_size; l += desc_size, prev = in) { + u64 paddr, size; + ++ in = (void *)memory_map + l; + if (!(in->attribute & EFI_MEMORY_RUNTIME)) + continue; + ++ paddr = in->phys_addr; ++ size = in->num_pages * EFI_PAGE_SIZE; ++ + /* + * Make the mapping compatible with 64k pages: this allows + * a 4k page size kernel to kexec a 64k page size kernel and + * vice versa. + */ +- paddr = round_down(in->phys_addr, SZ_64K); +- size = round_up(in->num_pages * EFI_PAGE_SIZE + +- in->phys_addr - paddr, SZ_64K); +- +- /* +- * Avoid wasting memory on PTEs by choosing a virtual base that +- * is compatible with section mappings if this region has the +- * appropriate size and physical alignment. (Sections are 2 MB +- * on 4k granule kernels) +- */ +- if (IS_ALIGNED(in->phys_addr, SZ_2M) && size >= SZ_2M) +- efi_virt_base = round_up(efi_virt_base, SZ_2M); ++ if (!regions_are_adjacent(prev, in) || ++ !regions_have_compatible_memory_type_attrs(prev, in)) { ++ ++ paddr = round_down(in->phys_addr, SZ_64K); ++ size += in->phys_addr - paddr; ++ ++ /* ++ * Avoid wasting memory on PTEs by choosing a virtual ++ * base that is compatible with section mappings if this ++ * region has the appropriate size and physical ++ * alignment. 
(Sections are 2 MB on 4k granule kernels) ++ */ ++ if (IS_ALIGNED(in->phys_addr, SZ_2M) && size >= SZ_2M) ++ efi_virt_base = round_up(efi_virt_base, SZ_2M); ++ else ++ efi_virt_base = round_up(efi_virt_base, SZ_64K); ++ } + + in->virt_addr = efi_virt_base + in->phys_addr - paddr; + efi_virt_base += size; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c +index b4d36f0..c098d76 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c +@@ -140,7 +140,7 @@ void amdgpu_irq_preinstall(struct drm_device *dev) + */ + int amdgpu_irq_postinstall(struct drm_device *dev) + { +- dev->max_vblank_count = 0x001fffff; ++ dev->max_vblank_count = 0x00ffffff; + return 0; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c +index 2abc661..ddcfbf3 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c +@@ -543,46 +543,60 @@ static int amdgpu_uvd_cs_msg(struct amdgpu_uvd_cs_ctx *ctx, + return -EINVAL; + } + +- if (msg_type == 1) { ++ switch (msg_type) { ++ case 0: ++ /* it's a create msg, calc image size (width * height) */ ++ amdgpu_bo_kunmap(bo); ++ ++ /* try to alloc a new handle */ ++ for (i = 0; i < AMDGPU_MAX_UVD_HANDLES; ++i) { ++ if (atomic_read(&adev->uvd.handles[i]) == handle) { ++ DRM_ERROR("Handle 0x%x already in use!\n", handle); ++ return -EINVAL; ++ } ++ ++ if (!atomic_cmpxchg(&adev->uvd.handles[i], 0, handle)) { ++ adev->uvd.filp[i] = ctx->parser->filp; ++ return 0; ++ } ++ } ++ ++ DRM_ERROR("No more free UVD handles!\n"); ++ return -EINVAL; ++ ++ case 1: + /* it's a decode msg, calc buffer sizes */ + r = amdgpu_uvd_cs_msg_decode(msg, ctx->buf_sizes); + amdgpu_bo_kunmap(bo); + if (r) + return r; + +- } else if (msg_type == 2) { ++ /* validate the handle */ ++ for (i = 0; i < AMDGPU_MAX_UVD_HANDLES; ++i) { ++ if (atomic_read(&adev->uvd.handles[i]) == handle) { ++ if (adev->uvd.filp[i] != ctx->parser->filp) { ++ DRM_ERROR("UVD handle collision detected!\n"); ++ return -EINVAL; ++ } ++ return 0; ++ } ++ } ++ ++ DRM_ERROR("Invalid UVD handle 0x%x!\n", handle); ++ return -ENOENT; ++ ++ case 2: + /* it's a destroy msg, free the handle */ + for (i = 0; i < AMDGPU_MAX_UVD_HANDLES; ++i) + atomic_cmpxchg(&adev->uvd.handles[i], handle, 0); + amdgpu_bo_kunmap(bo); + return 0; +- } else { +- /* it's a create msg */ +- amdgpu_bo_kunmap(bo); +- +- if (msg_type != 0) { +- DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type); +- return -EINVAL; +- } +- +- /* it's a create msg, no special handling needed */ +- } +- +- /* create or decode, validate the handle */ +- for (i = 0; i < AMDGPU_MAX_UVD_HANDLES; ++i) { +- if (atomic_read(&adev->uvd.handles[i]) == handle) +- return 0; +- } + +- /* handle not found try to alloc a new one */ +- for (i = 0; i < AMDGPU_MAX_UVD_HANDLES; ++i) { +- if (!atomic_cmpxchg(&adev->uvd.handles[i], 0, handle)) { +- adev->uvd.filp[i] = ctx->parser->filp; +- return 0; +- } ++ default: ++ DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type); ++ return -EINVAL; + } +- +- DRM_ERROR("No more free UVD handles!\n"); ++ BUG(); + return -EINVAL; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +index 9a4e3b6..b07402f 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +@@ -787,7 +787,7 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev, + int r; + + if (mem) { +- addr = mem->start << PAGE_SHIFT; ++ addr = 
(u64)mem->start << PAGE_SHIFT; + if (mem->mem_type != TTM_PL_TT) + addr += adev->vm_manager.vram_base_offset; + } else { +diff --git a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c +index ae8caca..e605574 100644 +--- a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c ++++ b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c +@@ -1279,8 +1279,7 @@ amdgpu_atombios_encoder_setup_dig(struct drm_encoder *encoder, int action) + amdgpu_atombios_encoder_setup_dig_encoder(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0); + } + if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) +- amdgpu_atombios_encoder_setup_dig_transmitter(encoder, +- ATOM_TRANSMITTER_ACTION_LCD_BLON, 0, 0); ++ amdgpu_atombios_encoder_set_backlight_level(amdgpu_encoder, dig->backlight_level); + if (ext_encoder) + amdgpu_atombios_encoder_setup_external_encoder(encoder, ext_encoder, ATOM_ENABLE); + } else { +diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c +index 4efd671..9488ea6 100644 +--- a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c ++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c +@@ -224,11 +224,11 @@ static int uvd_v4_2_suspend(void *handle) + int r; + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + +- r = uvd_v4_2_hw_fini(adev); ++ r = amdgpu_uvd_suspend(adev); + if (r) + return r; + +- r = amdgpu_uvd_suspend(adev); ++ r = uvd_v4_2_hw_fini(adev); + if (r) + return r; + +diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c +index b756bd9..d0ed998 100644 +--- a/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c +@@ -220,11 +220,11 @@ static int uvd_v5_0_suspend(void *handle) + int r; + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + +- r = uvd_v5_0_hw_fini(adev); ++ r = amdgpu_uvd_suspend(adev); + if (r) + return r; + +- r = amdgpu_uvd_suspend(adev); ++ r = uvd_v5_0_hw_fini(adev); + if (r) + return r; + +diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c +index 49aa931..345eb76 100644 +--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c +@@ -214,11 +214,11 @@ static int uvd_v6_0_suspend(void *handle) + int r; + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + +- r = uvd_v6_0_hw_fini(adev); ++ r = amdgpu_uvd_suspend(adev); + if (r) + return r; + +- r = amdgpu_uvd_suspend(adev); ++ r = uvd_v6_0_hw_fini(adev); + if (r) + return r; + +diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c +index 68552da..4f58a1e 100644 +--- a/drivers/gpu/drm/amd/amdgpu/vi.c ++++ b/drivers/gpu/drm/amd/amdgpu/vi.c +@@ -1290,7 +1290,8 @@ static int vi_common_early_init(void *handle) + case CHIP_CARRIZO: + adev->has_uvd = true; + adev->cg_flags = 0; +- adev->pg_flags = AMDGPU_PG_SUPPORT_UVD | AMDGPU_PG_SUPPORT_VCE; ++ /* Disable UVD pg */ ++ adev->pg_flags = /* AMDGPU_PG_SUPPORT_UVD | */AMDGPU_PG_SUPPORT_VCE; + adev->external_rev_id = adev->rev_id + 0x1; + if (amdgpu_smc_load_fw && smc_enabled) + adev->firmware.smu_load = true; +diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c +index eb603f1de..969e789 100644 +--- a/drivers/gpu/drm/drm_dp_mst_topology.c ++++ b/drivers/gpu/drm/drm_dp_mst_topology.c +@@ -804,8 +804,6 @@ static void drm_dp_destroy_mst_branch_device(struct kref *kref) + struct drm_dp_mst_port *port, *tmp; + bool wake_tx = false; + +- cancel_work_sync(&mstb->mgr->work); +- + /* + * destroy all ports - don't need 
lock + * as there are no more references to the mst branch +@@ -863,29 +861,33 @@ static void drm_dp_destroy_port(struct kref *kref) + { + struct drm_dp_mst_port *port = container_of(kref, struct drm_dp_mst_port, kref); + struct drm_dp_mst_topology_mgr *mgr = port->mgr; ++ + if (!port->input) { + port->vcpi.num_slots = 0; + + kfree(port->cached_edid); + +- /* we can't destroy the connector here, as +- we might be holding the mode_config.mutex +- from an EDID retrieval */ ++ /* ++ * The only time we don't have a connector ++ * on an output port is if the connector init ++ * fails. ++ */ + if (port->connector) { ++ /* we can't destroy the connector here, as ++ * we might be holding the mode_config.mutex ++ * from an EDID retrieval */ ++ + mutex_lock(&mgr->destroy_connector_lock); + list_add(&port->next, &mgr->destroy_connector_list); + mutex_unlock(&mgr->destroy_connector_lock); + schedule_work(&mgr->destroy_connector_work); + return; + } ++ /* no need to clean up vcpi ++ * as if we have no connector we never setup a vcpi */ + drm_dp_port_teardown_pdt(port, port->pdt); +- +- if (!port->input && port->vcpi.vcpi > 0) +- drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi); + } + kfree(port); +- +- (*mgr->cbs->hotplug)(mgr); + } + + static void drm_dp_put_port(struct drm_dp_mst_port *port) +@@ -1115,12 +1117,21 @@ static void drm_dp_add_port(struct drm_dp_mst_branch *mstb, + char proppath[255]; + build_mst_prop_path(port, mstb, proppath, sizeof(proppath)); + port->connector = (*mstb->mgr->cbs->add_connector)(mstb->mgr, port, proppath); +- ++ if (!port->connector) { ++ /* remove it from the port list */ ++ mutex_lock(&mstb->mgr->lock); ++ list_del(&port->next); ++ mutex_unlock(&mstb->mgr->lock); ++ /* drop port list reference */ ++ drm_dp_put_port(port); ++ goto out; ++ } + if (port->port_num >= 8) { + port->cached_edid = drm_get_edid(port->connector, &port->aux.ddc); + } + } + ++out: + /* put reference to this port */ + drm_dp_put_port(port); + } +@@ -1978,6 +1989,8 @@ void drm_dp_mst_topology_mgr_suspend(struct drm_dp_mst_topology_mgr *mgr) + drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, + DP_MST_EN | DP_UPSTREAM_IS_SRC); + mutex_unlock(&mgr->lock); ++ flush_work(&mgr->work); ++ flush_work(&mgr->destroy_connector_work); + } + EXPORT_SYMBOL(drm_dp_mst_topology_mgr_suspend); + +@@ -2661,7 +2674,7 @@ static void drm_dp_destroy_connector_work(struct work_struct *work) + { + struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct drm_dp_mst_topology_mgr, destroy_connector_work); + struct drm_dp_mst_port *port; +- ++ bool send_hotplug = false; + /* + * Not a regular list traverse as we have to drop the destroy + * connector lock before destroying the connector, to avoid AB->BA +@@ -2684,7 +2697,10 @@ static void drm_dp_destroy_connector_work(struct work_struct *work) + if (!port->input && port->vcpi.vcpi > 0) + drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi); + kfree(port); ++ send_hotplug = true; + } ++ if (send_hotplug) ++ (*mgr->cbs->hotplug)(mgr); + } + + /** +@@ -2737,6 +2753,7 @@ EXPORT_SYMBOL(drm_dp_mst_topology_mgr_init); + */ + void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr) + { ++ flush_work(&mgr->work); + flush_work(&mgr->destroy_connector_work); + mutex_lock(&mgr->payload_lock); + kfree(mgr->payloads); +diff --git a/drivers/gpu/drm/drm_lock.c b/drivers/gpu/drm/drm_lock.c +index f861361..4924d381 100644 +--- a/drivers/gpu/drm/drm_lock.c ++++ b/drivers/gpu/drm/drm_lock.c +@@ -61,6 +61,9 @@ int drm_legacy_lock(struct drm_device *dev, void *data, + struct drm_master 
*master = file_priv->master; + int ret = 0; + ++ if (drm_core_check_feature(dev, DRIVER_MODESET)) ++ return -EINVAL; ++ + ++file_priv->lock_count; + + if (lock->context == DRM_KERNEL_CONTEXT) { +@@ -153,6 +156,9 @@ int drm_legacy_unlock(struct drm_device *dev, void *data, struct drm_file *file_ + struct drm_lock *lock = data; + struct drm_master *master = file_priv->master; + ++ if (drm_core_check_feature(dev, DRIVER_MODESET)) ++ return -EINVAL; ++ + if (lock->context == DRM_KERNEL_CONTEXT) { + DRM_ERROR("Process %d using kernel context %d\n", + task_pid_nr(current), lock->context); +diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c +index 198fc3c..17522f7 100644 +--- a/drivers/gpu/drm/i915/intel_bios.c ++++ b/drivers/gpu/drm/i915/intel_bios.c +@@ -42,7 +42,7 @@ find_section(const void *_bdb, int section_id) + const struct bdb_header *bdb = _bdb; + const u8 *base = _bdb; + int index = 0; +- u16 total, current_size; ++ u32 total, current_size; + u8 current_id; + + /* skip to first section */ +@@ -57,6 +57,10 @@ find_section(const void *_bdb, int section_id) + current_size = *((const u16 *)(base + index)); + index += 2; + ++ /* The MIPI Sequence Block v3+ has a separate size field. */ ++ if (current_id == BDB_MIPI_SEQUENCE && *(base + index) >= 3) ++ current_size = *((const u32 *)(base + index + 1)); ++ + if (index + current_size > total) + return NULL; + +@@ -859,6 +863,12 @@ parse_mipi(struct drm_i915_private *dev_priv, const struct bdb_header *bdb) + return; + } + ++ /* Fail gracefully for forward incompatible sequence block. */ ++ if (sequence->version >= 3) { ++ DRM_ERROR("Unable to parse MIPI Sequence Block v3+\n"); ++ return; ++ } ++ + DRM_DEBUG_DRIVER("Found MIPI sequence block\n"); + + block_size = get_blocksize(sequence); +diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c +index 7c6225c..4649bd2 100644 +--- a/drivers/gpu/drm/qxl/qxl_display.c ++++ b/drivers/gpu/drm/qxl/qxl_display.c +@@ -618,7 +618,7 @@ static int qxl_crtc_mode_set(struct drm_crtc *crtc, + adjusted_mode->hdisplay, + adjusted_mode->vdisplay); + +- if (qcrtc->index == 0) ++ if (bo->is_primary == false) + recreate_primary = true; + + if (bo->surf.stride * bo->surf.height > qdev->vram_size) { +@@ -886,13 +886,15 @@ static enum drm_connector_status qxl_conn_detect( + drm_connector_to_qxl_output(connector); + struct drm_device *ddev = connector->dev; + struct qxl_device *qdev = ddev->dev_private; +- int connected; ++ bool connected = false; + + /* The first monitor is always connected */ +- connected = (output->index == 0) || +- (qdev->client_monitors_config && +- qdev->client_monitors_config->count > output->index && +- qxl_head_enabled(&qdev->client_monitors_config->heads[output->index])); ++ if (!qdev->client_monitors_config) { ++ if (output->index == 0) ++ connected = true; ++ } else ++ connected = qdev->client_monitors_config->count > output->index && ++ qxl_head_enabled(&qdev->client_monitors_config->heads[output->index]); + + DRM_DEBUG("#%d connected: %d\n", output->index, connected); + if (!connected) +diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c +index c387259..65adb9c 100644 +--- a/drivers/gpu/drm/radeon/atombios_encoders.c ++++ b/drivers/gpu/drm/radeon/atombios_encoders.c +@@ -1624,8 +1624,9 @@ radeon_atom_encoder_dpms_avivo(struct drm_encoder *encoder, int mode) + } else + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + if (radeon_encoder->devices & 
(ATOM_DEVICE_LCD_SUPPORT)) { +- args.ucAction = ATOM_LCD_BLON; +- atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); ++ struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; ++ ++ atombios_set_backlight_level(radeon_encoder, dig->backlight_level); + } + break; + case DRM_MODE_DPMS_STANDBY: +@@ -1706,8 +1707,7 @@ radeon_atom_encoder_dpms_dig(struct drm_encoder *encoder, int mode) + atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0); + } + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) +- atombios_dig_transmitter_setup(encoder, +- ATOM_TRANSMITTER_ACTION_LCD_BLON, 0, 0); ++ atombios_set_backlight_level(radeon_encoder, dig->backlight_level); + if (ext_encoder) + atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE); + break; +diff --git a/drivers/hv/hv_utils_transport.c b/drivers/hv/hv_utils_transport.c +index ea7ba5e..6a9d80a 100644 +--- a/drivers/hv/hv_utils_transport.c ++++ b/drivers/hv/hv_utils_transport.c +@@ -186,7 +186,7 @@ int hvutil_transport_send(struct hvutil_transport *hvt, void *msg, int len) + return -EINVAL; + } else if (hvt->mode == HVUTIL_TRANSPORT_NETLINK) { + cn_msg = kzalloc(sizeof(*cn_msg) + len, GFP_ATOMIC); +- if (!msg) ++ if (!cn_msg) + return -ENOMEM; + cn_msg->id.idx = hvt->cn_id.idx; + cn_msg->id.val = hvt->cn_id.val; +diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c +index bd1c99d..2aaedbe 100644 +--- a/drivers/hwmon/nct6775.c ++++ b/drivers/hwmon/nct6775.c +@@ -354,6 +354,10 @@ static const u16 NCT6775_REG_TEMP_CRIT[ARRAY_SIZE(nct6775_temp_label) - 1] + + /* NCT6776 specific data */ + ++/* STEP_UP_TIME and STEP_DOWN_TIME regs are swapped for all chips but NCT6775 */ ++#define NCT6776_REG_FAN_STEP_UP_TIME NCT6775_REG_FAN_STEP_DOWN_TIME ++#define NCT6776_REG_FAN_STEP_DOWN_TIME NCT6775_REG_FAN_STEP_UP_TIME ++ + static const s8 NCT6776_ALARM_BITS[] = { + 0, 1, 2, 3, 8, 21, 20, 16, /* in0.. 
in7 */
+ 17, -1, -1, -1, -1, -1, -1, /* in8..in14 */
+@@ -3528,8 +3532,8 @@ static int nct6775_probe(struct platform_device *pdev)
+ data->REG_FAN_PULSES = NCT6776_REG_FAN_PULSES;
+ data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
+ data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
+- data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
+- data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
++ data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
++ data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
+ data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
+ data->REG_PWM[0] = NCT6775_REG_PWM;
+ data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
+@@ -3600,8 +3604,8 @@ static int nct6775_probe(struct platform_device *pdev)
+ data->REG_FAN_PULSES = NCT6779_REG_FAN_PULSES;
+ data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
+ data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
+- data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
+- data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
++ data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
++ data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
+ data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
+ data->REG_PWM[0] = NCT6775_REG_PWM;
+ data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
+@@ -3677,8 +3681,8 @@ static int nct6775_probe(struct platform_device *pdev)
+ data->REG_FAN_PULSES = NCT6779_REG_FAN_PULSES;
+ data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
+ data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
+- data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
+- data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
++ data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
++ data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
+ data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
+ data->REG_PWM[0] = NCT6775_REG_PWM;
+ data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index d851e18..85761b7 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -3012,9 +3012,16 @@ isert_get_dataout(struct iscsi_conn *conn, struct iscsi_cmd *cmd, bool recovery)
+ static int
+ isert_immediate_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd, int state)
+ {
+- int ret;
++ struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd);
++ int ret = 0;
+
+ switch (state) {
++ case ISTATE_REMOVE:
++ spin_lock_bh(&conn->cmd_lock);
++ list_del_init(&cmd->i_conn_node);
++ spin_unlock_bh(&conn->cmd_lock);
++ isert_put_cmd(isert_cmd, true);
++ break;
+ case ISTATE_SEND_NOPIN_WANT_RESPONSE:
+ ret = isert_put_nopin(cmd, conn, false);
+ break;
+@@ -3379,6 +3386,41 @@ isert_wait4flush(struct isert_conn *isert_conn)
+ wait_for_completion(&isert_conn->wait_comp_err);
+ }
+
++/**
++ * isert_put_unsol_pending_cmds() - Drop commands waiting for
++ * unsolicited dataout
++ * @conn: iscsi connection
++ *
++ * We might still have commands that are waiting for unsolicited
++ * dataout messages.
We must put the extra reference on those ++ * before blocking on the target_wait_for_session_cmds ++ */ ++static void ++isert_put_unsol_pending_cmds(struct iscsi_conn *conn) ++{ ++ struct iscsi_cmd *cmd, *tmp; ++ static LIST_HEAD(drop_cmd_list); ++ ++ spin_lock_bh(&conn->cmd_lock); ++ list_for_each_entry_safe(cmd, tmp, &conn->conn_cmd_list, i_conn_node) { ++ if ((cmd->cmd_flags & ICF_NON_IMMEDIATE_UNSOLICITED_DATA) && ++ (cmd->write_data_done < conn->sess->sess_ops->FirstBurstLength) && ++ (cmd->write_data_done < cmd->se_cmd.data_length)) ++ list_move_tail(&cmd->i_conn_node, &drop_cmd_list); ++ } ++ spin_unlock_bh(&conn->cmd_lock); ++ ++ list_for_each_entry_safe(cmd, tmp, &drop_cmd_list, i_conn_node) { ++ list_del_init(&cmd->i_conn_node); ++ if (cmd->i_state != ISTATE_REMOVE) { ++ struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd); ++ ++ isert_info("conn %p dropping cmd %p\n", conn, cmd); ++ isert_put_cmd(isert_cmd, true); ++ } ++ } ++} ++ + static void isert_wait_conn(struct iscsi_conn *conn) + { + struct isert_conn *isert_conn = conn->context; +@@ -3397,8 +3439,9 @@ static void isert_wait_conn(struct iscsi_conn *conn) + isert_conn_terminate(isert_conn); + mutex_unlock(&isert_conn->mutex); + +- isert_wait4cmds(conn); + isert_wait4flush(isert_conn); ++ isert_put_unsol_pending_cmds(conn); ++ isert_wait4cmds(conn); + isert_wait4logout(isert_conn); + + queue_work(isert_release_wq, &isert_conn->release_work); +diff --git a/drivers/irqchip/irq-atmel-aic5.c b/drivers/irqchip/irq-atmel-aic5.c +index 459bf44..7e077bf 100644 +--- a/drivers/irqchip/irq-atmel-aic5.c ++++ b/drivers/irqchip/irq-atmel-aic5.c +@@ -88,28 +88,36 @@ static void aic5_mask(struct irq_data *d) + { + struct irq_domain *domain = d->domain; + struct irq_domain_chip_generic *dgc = domain->gc; +- struct irq_chip_generic *gc = dgc->gc[0]; ++ struct irq_chip_generic *bgc = dgc->gc[0]; ++ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); + +- /* Disable interrupt on AIC5 */ +- irq_gc_lock(gc); ++ /* ++ * Disable interrupt on AIC5. We always take the lock of the ++ * first irq chip as all chips share the same registers. ++ */ ++ irq_gc_lock(bgc); + irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR); + irq_reg_writel(gc, 1, AT91_AIC5_IDCR); + gc->mask_cache &= ~d->mask; +- irq_gc_unlock(gc); ++ irq_gc_unlock(bgc); + } + + static void aic5_unmask(struct irq_data *d) + { + struct irq_domain *domain = d->domain; + struct irq_domain_chip_generic *dgc = domain->gc; +- struct irq_chip_generic *gc = dgc->gc[0]; ++ struct irq_chip_generic *bgc = dgc->gc[0]; ++ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); + +- /* Enable interrupt on AIC5 */ +- irq_gc_lock(gc); ++ /* ++ * Enable interrupt on AIC5. We always take the lock of the ++ * first irq chip as all chips share the same registers. ++ */ ++ irq_gc_lock(bgc); + irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR); + irq_reg_writel(gc, 1, AT91_AIC5_IECR); + gc->mask_cache |= d->mask; +- irq_gc_unlock(gc); ++ irq_gc_unlock(bgc); + } + + static int aic5_retrigger(struct irq_data *d) +diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c +index c00e2db..9a791dd 100644 +--- a/drivers/irqchip/irq-gic-v3-its.c ++++ b/drivers/irqchip/irq-gic-v3-its.c +@@ -921,8 +921,10 @@ retry_baser: + * non-cacheable as well. 
+ */ + shr = tmp & GITS_BASER_SHAREABILITY_MASK; +- if (!shr) ++ if (!shr) { + cache = GITS_BASER_nC; ++ __flush_dcache_area(base, alloc_size); ++ } + goto retry_baser; + } + +@@ -1163,6 +1165,8 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id, + return NULL; + } + ++ __flush_dcache_area(itt, sz); ++ + dev->its = its; + dev->itt = itt; + dev->nr_ites = nr_ites; +diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig +index 9ad35f7..433fb9d 100644 +--- a/drivers/leds/Kconfig ++++ b/drivers/leds/Kconfig +@@ -229,7 +229,7 @@ config LEDS_LP55XX_COMMON + tristate "Common Driver for TI/National LP5521/5523/55231/5562/8501" + depends on LEDS_LP5521 || LEDS_LP5523 || LEDS_LP5562 || LEDS_LP8501 + select FW_LOADER +- select FW_LOADER_USER_HELPER_FALLBACK ++ select FW_LOADER_USER_HELPER + help + This option supports common operations for LP5521/5523/55231/5562/8501 + devices. +diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c +index beabfbc..ca51d58 100644 +--- a/drivers/leds/led-class.c ++++ b/drivers/leds/led-class.c +@@ -228,12 +228,15 @@ static int led_classdev_next_name(const char *init_name, char *name, + { + unsigned int i = 0; + int ret = 0; ++ struct device *dev; + + strlcpy(name, init_name, len); + +- while (class_find_device(leds_class, NULL, name, match_name) && +- (ret < len)) ++ while ((ret < len) && ++ (dev = class_find_device(leds_class, NULL, name, match_name))) { ++ put_device(dev); + ret = snprintf(name, len, "%s_%u", init_name, ++i); ++ } + + if (ret >= len) + return -ENOMEM; +diff --git a/drivers/macintosh/windfarm_core.c b/drivers/macintosh/windfarm_core.c +index 3ee198b..cc7ece1 100644 +--- a/drivers/macintosh/windfarm_core.c ++++ b/drivers/macintosh/windfarm_core.c +@@ -435,7 +435,7 @@ int wf_unregister_client(struct notifier_block *nb) + { + mutex_lock(&wf_lock); + blocking_notifier_chain_unregister(&wf_client_list, nb); +- wf_client_count++; ++ wf_client_count--; + if (wf_client_count == 0) + wf_stop_thread(); + mutex_unlock(&wf_lock); +diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c +index e51de52..48b5890 100644 +--- a/drivers/md/bitmap.c ++++ b/drivers/md/bitmap.c +@@ -1997,7 +1997,8 @@ int bitmap_resize(struct bitmap *bitmap, sector_t blocks, + if (bitmap->mddev->bitmap_info.offset || bitmap->mddev->bitmap_info.file) + ret = bitmap_storage_alloc(&store, chunks, + !bitmap->mddev->bitmap_info.external, +- bitmap->cluster_slot); ++ mddev_is_clustered(bitmap->mddev) ++ ? bitmap->cluster_slot : 0); + if (ret) + goto err; + +diff --git a/drivers/md/dm-cache-policy-cleaner.c b/drivers/md/dm-cache-policy-cleaner.c +index 240c9f0..8a09645 100644 +--- a/drivers/md/dm-cache-policy-cleaner.c ++++ b/drivers/md/dm-cache-policy-cleaner.c +@@ -436,7 +436,7 @@ static struct dm_cache_policy *wb_create(dm_cblock_t cache_size, + static struct dm_cache_policy_type wb_policy_type = { + .name = "cleaner", + .version = {1, 0, 0}, +- .hint_size = 0, ++ .hint_size = 4, + .owner = THIS_MODULE, + .create = wb_create + }; +diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c +index 0f48fed..0d28c5b 100644 +--- a/drivers/md/dm-crypt.c ++++ b/drivers/md/dm-crypt.c +@@ -968,7 +968,8 @@ static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone); + + /* + * Generate a new unfragmented bio with the given size +- * This should never violate the device limitations ++ * This should never violate the device limitations (but only because ++ * max_segment_size is being constrained to PAGE_SIZE). 
+ * + * This function may be called concurrently. If we allocate from the mempool + * concurrently, there is a possibility of deadlock. For example, if we have +@@ -2058,9 +2059,20 @@ static int crypt_iterate_devices(struct dm_target *ti, + return fn(ti, cc->dev, cc->start, ti->len, data); + } + ++static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits) ++{ ++ /* ++ * Unfortunate constraint that is required to avoid the potential ++ * for exceeding underlying device's max_segments limits -- due to ++ * crypt_alloc_buffer() possibly allocating pages for the encryption ++ * bio that are not as physically contiguous as the original bio. ++ */ ++ limits->max_segment_size = PAGE_SIZE; ++} ++ + static struct target_type crypt_target = { + .name = "crypt", +- .version = {1, 14, 0}, ++ .version = {1, 14, 1}, + .module = THIS_MODULE, + .ctr = crypt_ctr, + .dtr = crypt_dtr, +@@ -2072,6 +2084,7 @@ static struct target_type crypt_target = { + .message = crypt_message, + .merge = crypt_merge, + .iterate_devices = crypt_iterate_devices, ++ .io_hints = crypt_io_hints, + }; + + static int __init dm_crypt_init(void) +diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c +index 2daa677..1257d48 100644 +--- a/drivers/md/dm-raid.c ++++ b/drivers/md/dm-raid.c +@@ -329,8 +329,7 @@ static int validate_region_size(struct raid_set *rs, unsigned long region_size) + */ + if (min_region_size > (1 << 13)) { + /* If not a power of 2, make it the next power of 2 */ +- if (min_region_size & (min_region_size - 1)) +- region_size = 1 << fls(region_size); ++ region_size = roundup_pow_of_two(min_region_size); + DMINFO("Choosing default region size of %lu sectors", + region_size); + } else { +diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c +index d2bbe8c..75aef24 100644 +--- a/drivers/md/dm-thin.c ++++ b/drivers/md/dm-thin.c +@@ -4333,6 +4333,10 @@ static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits) + { + struct thin_c *tc = ti->private; + struct pool *pool = tc->pool; ++ struct queue_limits *pool_limits = dm_get_queue_limits(pool->pool_md); ++ ++ if (!pool_limits->discard_granularity) ++ return; /* pool's discard support is disabled */ + + limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT; + limits->max_discard_sectors = 2048 * 1024 * 16; /* 16G */ +diff --git a/drivers/md/dm.c b/drivers/md/dm.c +index 0d7ab20..3e32f4e 100644 +--- a/drivers/md/dm.c ++++ b/drivers/md/dm.c +@@ -2952,8 +2952,6 @@ static void __dm_destroy(struct mapped_device *md, bool wait) + + might_sleep(); + +- map = dm_get_live_table(md, &srcu_idx); +- + spin_lock(&_minor_lock); + idr_replace(&_minor_idr, MINOR_ALLOCED, MINOR(disk_devt(dm_disk(md)))); + set_bit(DMF_FREEING, &md->flags); +@@ -2967,14 +2965,14 @@ static void __dm_destroy(struct mapped_device *md, bool wait) + * do not race with internal suspend. 
+ */ + mutex_lock(&md->suspend_lock); ++ map = dm_get_live_table(md, &srcu_idx); + if (!dm_suspended_md(md)) { + dm_table_presuspend_targets(map); + dm_table_postsuspend_targets(map); + } +- mutex_unlock(&md->suspend_lock); +- + /* dm_put_live_table must be before msleep, otherwise deadlock is possible */ + dm_put_live_table(md, srcu_idx); ++ mutex_unlock(&md->suspend_lock); + + /* + * Rare, but there may be I/O requests still going to complete, +diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c +index efb654e..0875e5e 100644 +--- a/drivers/md/raid0.c ++++ b/drivers/md/raid0.c +@@ -83,7 +83,7 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) + char b[BDEVNAME_SIZE]; + char b2[BDEVNAME_SIZE]; + struct r0conf *conf = kzalloc(sizeof(*conf), GFP_KERNEL); +- bool discard_supported = false; ++ unsigned short blksize = 512; + + if (!conf) + return -ENOMEM; +@@ -98,6 +98,9 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) + sector_div(sectors, mddev->chunk_sectors); + rdev1->sectors = sectors * mddev->chunk_sectors; + ++ blksize = max(blksize, queue_logical_block_size( ++ rdev1->bdev->bd_disk->queue)); ++ + rdev_for_each(rdev2, mddev) { + pr_debug("md/raid0:%s: comparing %s(%llu)" + " with %s(%llu)\n", +@@ -134,6 +137,18 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) + } + pr_debug("md/raid0:%s: FINAL %d zones\n", + mdname(mddev), conf->nr_strip_zones); ++ /* ++ * now since we have the hard sector sizes, we can make sure ++ * chunk size is a multiple of that sector size ++ */ ++ if ((mddev->chunk_sectors << 9) % blksize) { ++ printk(KERN_ERR "md/raid0:%s: chunk_size of %d not multiple of block size %d\n", ++ mdname(mddev), ++ mddev->chunk_sectors << 9, blksize); ++ err = -EINVAL; ++ goto abort; ++ } ++ + err = -ENOMEM; + conf->strip_zone = kzalloc(sizeof(struct strip_zone)* + conf->nr_strip_zones, GFP_KERNEL); +@@ -188,19 +203,12 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) + } + dev[j] = rdev1; + +- if (mddev->queue) +- disk_stack_limits(mddev->gendisk, rdev1->bdev, +- rdev1->data_offset << 9); +- + if (rdev1->bdev->bd_disk->queue->merge_bvec_fn) + conf->has_merge_bvec = 1; + + if (!smallest || (rdev1->sectors < smallest->sectors)) + smallest = rdev1; + cnt++; +- +- if (blk_queue_discard(bdev_get_queue(rdev1->bdev))) +- discard_supported = true; + } + if (cnt != mddev->raid_disks) { + printk(KERN_ERR "md/raid0:%s: too few disks (%d of %d) - " +@@ -261,28 +269,6 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) + (unsigned long long)smallest->sectors); + } + +- /* +- * now since we have the hard sector sizes, we can make sure +- * chunk size is a multiple of that sector size +- */ +- if ((mddev->chunk_sectors << 9) % queue_logical_block_size(mddev->queue)) { +- printk(KERN_ERR "md/raid0:%s: chunk_size of %d not valid\n", +- mdname(mddev), +- mddev->chunk_sectors << 9); +- goto abort; +- } +- +- if (mddev->queue) { +- blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9); +- blk_queue_io_opt(mddev->queue, +- (mddev->chunk_sectors << 9) * mddev->raid_disks); +- +- if (!discard_supported) +- queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, mddev->queue); +- else +- queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mddev->queue); +- } +- + pr_debug("md/raid0:%s: done.\n", mdname(mddev)); + *private_conf = conf; + +@@ -433,12 +419,6 @@ static int raid0_run(struct mddev *mddev) + if (md_check_no_bitmap(mddev)) + return -EINVAL; + 
+- if (mddev->queue) { +- blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors); +- blk_queue_max_write_same_sectors(mddev->queue, mddev->chunk_sectors); +- blk_queue_max_discard_sectors(mddev->queue, mddev->chunk_sectors); +- } +- + /* if private is not null, we are here after takeover */ + if (mddev->private == NULL) { + ret = create_strip_zones(mddev, &conf); +@@ -447,6 +427,29 @@ static int raid0_run(struct mddev *mddev) + mddev->private = conf; + } + conf = mddev->private; ++ if (mddev->queue) { ++ struct md_rdev *rdev; ++ bool discard_supported = false; ++ ++ blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors); ++ blk_queue_max_write_same_sectors(mddev->queue, mddev->chunk_sectors); ++ blk_queue_max_discard_sectors(mddev->queue, mddev->chunk_sectors); ++ ++ blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9); ++ blk_queue_io_opt(mddev->queue, ++ (mddev->chunk_sectors << 9) * mddev->raid_disks); ++ ++ rdev_for_each(rdev, mddev) { ++ disk_stack_limits(mddev->gendisk, rdev->bdev, ++ rdev->data_offset << 9); ++ if (blk_queue_discard(bdev_get_queue(rdev->bdev))) ++ discard_supported = true; ++ } ++ if (!discard_supported) ++ queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, mddev->queue); ++ else ++ queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mddev->queue); ++ } + + /* calculate array device size */ + md_set_array_sectors(mddev, raid0_size(mddev, 0, 0)); +diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c +index 9e3fdbd..2f4503a 100644 +--- a/drivers/mmc/core/core.c ++++ b/drivers/mmc/core/core.c +@@ -134,9 +134,11 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq) + int err = cmd->error; + + /* Flag re-tuning needed on CRC errors */ +- if (err == -EILSEQ || (mrq->sbc && mrq->sbc->error == -EILSEQ) || ++ if ((cmd->opcode != MMC_SEND_TUNING_BLOCK && ++ cmd->opcode != MMC_SEND_TUNING_BLOCK_HS200) && ++ (err == -EILSEQ || (mrq->sbc && mrq->sbc->error == -EILSEQ) || + (mrq->data && mrq->data->error == -EILSEQ) || +- (mrq->stop && mrq->stop->error == -EILSEQ)) ++ (mrq->stop && mrq->stop->error == -EILSEQ))) + mmc_retune_needed(host); + + if (err && cmd->retries && mmc_host_is_spi(host)) { +diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c +index 99a9c90..79979e9 100644 +--- a/drivers/mmc/core/host.c ++++ b/drivers/mmc/core/host.c +@@ -457,7 +457,7 @@ int mmc_of_parse(struct mmc_host *host) + 0, &cd_gpio_invert); + if (!ret) + dev_info(host->parent, "Got CD GPIO\n"); +- else if (ret != -ENOENT) ++ else if (ret != -ENOENT && ret != -ENOSYS) + return ret; + + /* +@@ -481,7 +481,7 @@ int mmc_of_parse(struct mmc_host *host) + ret = mmc_gpiod_request_ro(host, "wp", 0, false, 0, &ro_gpio_invert); + if (!ret) + dev_info(host->parent, "Got WP GPIO\n"); +- else if (ret != -ENOENT) ++ else if (ret != -ENOENT && ret != -ENOSYS) + return ret; + + if (of_property_read_bool(np, "disable-wp")) +diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c +index 40e9d8e..e41fb74 100644 +--- a/drivers/mmc/host/dw_mmc.c ++++ b/drivers/mmc/host/dw_mmc.c +@@ -99,6 +99,9 @@ struct idmac_desc { + + __le32 des3; /* buffer 2 physical address */ + }; ++ ++/* Each descriptor can transfer up to 4KB of data in chained mode */ ++#define DW_MCI_DESC_DATA_LENGTH 0x1000 + #endif /* CONFIG_MMC_DW_IDMAC */ + + static bool dw_mci_reset(struct dw_mci *host); +@@ -462,66 +465,96 @@ static void dw_mci_idmac_complete_dma(struct dw_mci *host) + static void dw_mci_translate_sglist(struct dw_mci *host, struct mmc_data *data, + unsigned int sg_len) + { ++ unsigned 
int desc_len; + int i; + if (host->dma_64bit_address == 1) { +- struct idmac_desc_64addr *desc = host->sg_cpu; ++ struct idmac_desc_64addr *desc_first, *desc_last, *desc; ++ ++ desc_first = desc_last = desc = host->sg_cpu; + +- for (i = 0; i < sg_len; i++, desc++) { ++ for (i = 0; i < sg_len; i++) { + unsigned int length = sg_dma_len(&data->sg[i]); + u64 mem_addr = sg_dma_address(&data->sg[i]); + +- /* +- * Set the OWN bit and disable interrupts for this +- * descriptor +- */ +- desc->des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC | +- IDMAC_DES0_CH; +- /* Buffer length */ +- IDMAC_64ADDR_SET_BUFFER1_SIZE(desc, length); +- +- /* Physical address to DMA to/from */ +- desc->des4 = mem_addr & 0xffffffff; +- desc->des5 = mem_addr >> 32; ++ for ( ; length ; desc++) { ++ desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ? ++ length : DW_MCI_DESC_DATA_LENGTH; ++ ++ length -= desc_len; ++ ++ /* ++ * Set the OWN bit and disable interrupts ++ * for this descriptor ++ */ ++ desc->des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC | ++ IDMAC_DES0_CH; ++ ++ /* Buffer length */ ++ IDMAC_64ADDR_SET_BUFFER1_SIZE(desc, desc_len); ++ ++ /* Physical address to DMA to/from */ ++ desc->des4 = mem_addr & 0xffffffff; ++ desc->des5 = mem_addr >> 32; ++ ++ /* Update physical address for the next desc */ ++ mem_addr += desc_len; ++ ++ /* Save pointer to the last descriptor */ ++ desc_last = desc; ++ } + } + + /* Set first descriptor */ +- desc = host->sg_cpu; +- desc->des0 |= IDMAC_DES0_FD; ++ desc_first->des0 |= IDMAC_DES0_FD; + + /* Set last descriptor */ +- desc = host->sg_cpu + (i - 1) * +- sizeof(struct idmac_desc_64addr); +- desc->des0 &= ~(IDMAC_DES0_CH | IDMAC_DES0_DIC); +- desc->des0 |= IDMAC_DES0_LD; ++ desc_last->des0 &= ~(IDMAC_DES0_CH | IDMAC_DES0_DIC); ++ desc_last->des0 |= IDMAC_DES0_LD; + + } else { +- struct idmac_desc *desc = host->sg_cpu; ++ struct idmac_desc *desc_first, *desc_last, *desc; ++ ++ desc_first = desc_last = desc = host->sg_cpu; + +- for (i = 0; i < sg_len; i++, desc++) { ++ for (i = 0; i < sg_len; i++) { + unsigned int length = sg_dma_len(&data->sg[i]); + u32 mem_addr = sg_dma_address(&data->sg[i]); + +- /* +- * Set the OWN bit and disable interrupts for this +- * descriptor +- */ +- desc->des0 = cpu_to_le32(IDMAC_DES0_OWN | +- IDMAC_DES0_DIC | IDMAC_DES0_CH); +- /* Buffer length */ +- IDMAC_SET_BUFFER1_SIZE(desc, length); ++ for ( ; length ; desc++) { ++ desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ? 
++ length : DW_MCI_DESC_DATA_LENGTH; ++ ++ length -= desc_len; ++ ++ /* ++ * Set the OWN bit and disable interrupts ++ * for this descriptor ++ */ ++ desc->des0 = cpu_to_le32(IDMAC_DES0_OWN | ++ IDMAC_DES0_DIC | ++ IDMAC_DES0_CH); ++ ++ /* Buffer length */ ++ IDMAC_SET_BUFFER1_SIZE(desc, desc_len); + +- /* Physical address to DMA to/from */ +- desc->des2 = cpu_to_le32(mem_addr); ++ /* Physical address to DMA to/from */ ++ desc->des2 = cpu_to_le32(mem_addr); ++ ++ /* Update physical address for the next desc */ ++ mem_addr += desc_len; ++ ++ /* Save pointer to the last descriptor */ ++ desc_last = desc; ++ } + } + + /* Set first descriptor */ +- desc = host->sg_cpu; +- desc->des0 |= cpu_to_le32(IDMAC_DES0_FD); ++ desc_first->des0 |= cpu_to_le32(IDMAC_DES0_FD); + + /* Set last descriptor */ +- desc = host->sg_cpu + (i - 1) * sizeof(struct idmac_desc); +- desc->des0 &= cpu_to_le32(~(IDMAC_DES0_CH | IDMAC_DES0_DIC)); +- desc->des0 |= cpu_to_le32(IDMAC_DES0_LD); ++ desc_last->des0 &= cpu_to_le32(~(IDMAC_DES0_CH | ++ IDMAC_DES0_DIC)); ++ desc_last->des0 |= cpu_to_le32(IDMAC_DES0_LD); + } + + wmb(); +@@ -2394,7 +2427,7 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id) + #ifdef CONFIG_MMC_DW_IDMAC + mmc->max_segs = host->ring_size; + mmc->max_blk_size = 65536; +- mmc->max_seg_size = 0x1000; ++ mmc->max_seg_size = DW_MCI_DESC_DATA_LENGTH; + mmc->max_req_size = mmc->max_seg_size * host->ring_size; + mmc->max_blk_count = mmc->max_req_size / 512; + #else +diff --git a/drivers/mmc/host/sdhci-pxav3.c b/drivers/mmc/host/sdhci-pxav3.c +index 946d37f..f5edf9d 100644 +--- a/drivers/mmc/host/sdhci-pxav3.c ++++ b/drivers/mmc/host/sdhci-pxav3.c +@@ -135,6 +135,7 @@ static int armada_38x_quirks(struct platform_device *pdev, + struct sdhci_pxa *pxa = pltfm_host->priv; + struct resource *res; + ++ host->quirks &= ~SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN; + host->quirks |= SDHCI_QUIRK_MISSING_CAPS; + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, + "conf-sdio3"); +@@ -290,6 +291,9 @@ static void pxav3_set_uhs_signaling(struct sdhci_host *host, unsigned int uhs) + uhs == MMC_TIMING_UHS_DDR50) { + reg_val &= ~SDIO3_CONF_CLK_INV; + reg_val |= SDIO3_CONF_SD_FB_CLK; ++ } else if (uhs == MMC_TIMING_MMC_HS) { ++ reg_val &= ~SDIO3_CONF_CLK_INV; ++ reg_val &= ~SDIO3_CONF_SD_FB_CLK; + } else { + reg_val |= SDIO3_CONF_CLK_INV; + reg_val &= ~SDIO3_CONF_SD_FB_CLK; +@@ -398,7 +402,7 @@ static int sdhci_pxav3_probe(struct platform_device *pdev) + if (of_device_is_compatible(np, "marvell,armada-380-sdhci")) { + ret = armada_38x_quirks(pdev, host); + if (ret < 0) +- goto err_clk_get; ++ goto err_mbus_win; + ret = mv_conf_mbus_windows(pdev, mv_mbus_dram_info()); + if (ret < 0) + goto err_mbus_win; +diff --git a/drivers/mtd/nand/pxa3xx_nand.c b/drivers/mtd/nand/pxa3xx_nand.c +index 1259cc5..5465fa4 100644 +--- a/drivers/mtd/nand/pxa3xx_nand.c ++++ b/drivers/mtd/nand/pxa3xx_nand.c +@@ -1473,6 +1473,9 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd) + if (pdata->keep_config && !pxa3xx_nand_detect_config(info)) + goto KEEP_CONFIG; + ++ /* Set a default chunk size */ ++ info->chunk_size = 512; ++ + ret = pxa3xx_nand_sensing(info); + if (ret) { + dev_info(&info->pdev->dev, "There is no chip on cs %d!\n", +diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c +index 6f93b29..499b8e43 100644 +--- a/drivers/mtd/nand/sunxi_nand.c ++++ b/drivers/mtd/nand/sunxi_nand.c +@@ -138,6 +138,10 @@ + #define NFC_ECC_MODE GENMASK(15, 12) + #define NFC_RANDOM_SEED GENMASK(30, 16) + ++/* NFC_USER_DATA helper 
macros */ ++#define NFC_BUF_TO_USER_DATA(buf) ((buf)[0] | ((buf)[1] << 8) | \ ++ ((buf)[2] << 16) | ((buf)[3] << 24)) ++ + #define NFC_DEFAULT_TIMEOUT_MS 1000 + + #define NFC_SRAM_SIZE 1024 +@@ -632,15 +636,9 @@ static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd, + offset = layout->eccpos[i * ecc->bytes] - 4 + mtd->writesize; + + /* Fill OOB data in */ +- if (oob_required) { +- tmp = 0xffffffff; +- memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE, &tmp, +- 4); +- } else { +- memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE, +- chip->oob_poi + offset - mtd->writesize, +- 4); +- } ++ writel(NFC_BUF_TO_USER_DATA(chip->oob_poi + ++ layout->oobfree[i].offset), ++ nfc->regs + NFC_REG_USER_DATA_BASE); + + chip->cmdfunc(mtd, NAND_CMD_RNDIN, offset, -1); + +@@ -770,14 +768,8 @@ static int sunxi_nfc_hw_syndrome_ecc_write_page(struct mtd_info *mtd, + offset += ecc->size; + + /* Fill OOB data in */ +- if (oob_required) { +- tmp = 0xffffffff; +- memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE, &tmp, +- 4); +- } else { +- memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE, oob, +- 4); +- } ++ writel(NFC_BUF_TO_USER_DATA(oob), ++ nfc->regs + NFC_REG_USER_DATA_BASE); + + tmp = NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | NFC_ACCESS_DIR | + (1 << 30); +@@ -1312,6 +1304,7 @@ static void sunxi_nand_chips_cleanup(struct sunxi_nfc *nfc) + node); + nand_release(&chip->mtd); + sunxi_nand_ecc_cleanup(&chip->nand.ecc); ++ list_del(&chip->node); + } + } + +diff --git a/drivers/mtd/ubi/io.c b/drivers/mtd/ubi/io.c +index 5bbd1f0..1fc23e4 100644 +--- a/drivers/mtd/ubi/io.c ++++ b/drivers/mtd/ubi/io.c +@@ -926,6 +926,11 @@ static int validate_vid_hdr(const struct ubi_device *ubi, + goto bad; + } + ++ if (data_size > ubi->leb_size) { ++ ubi_err(ubi, "bad data_size"); ++ goto bad; ++ } ++ + if (vol_type == UBI_VID_STATIC) { + /* + * Although from high-level point of view static volumes may +diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c +index 80bdd5b..d85c197 100644 +--- a/drivers/mtd/ubi/vtbl.c ++++ b/drivers/mtd/ubi/vtbl.c +@@ -649,6 +649,7 @@ static int init_volumes(struct ubi_device *ubi, + if (ubi->corr_peb_count) + ubi_err(ubi, "%d PEBs are corrupted and not used", + ubi->corr_peb_count); ++ return -ENOSPC; + } + ubi->rsvd_pebs += reserved_pebs; + ubi->avail_pebs -= reserved_pebs; +diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c +index 275d9fb..eb4489f9 100644 +--- a/drivers/mtd/ubi/wl.c ++++ b/drivers/mtd/ubi/wl.c +@@ -1601,6 +1601,7 @@ int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai) + if (ubi->corr_peb_count) + ubi_err(ubi, "%d PEBs are corrupted and not used", + ubi->corr_peb_count); ++ err = -ENOSPC; + goto out_free; + } + ubi->avail_pebs -= reserved_pebs; +diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c +index 89d788d..adfe1de 100644 +--- a/drivers/net/ethernet/intel/e1000e/netdev.c ++++ b/drivers/net/ethernet/intel/e1000e/netdev.c +@@ -4280,18 +4280,29 @@ static cycle_t e1000e_cyclecounter_read(const struct cyclecounter *cc) + struct e1000_adapter *adapter = container_of(cc, struct e1000_adapter, + cc); + struct e1000_hw *hw = &adapter->hw; ++ u32 systimel_1, systimel_2, systimeh; + cycle_t systim, systim_next; +- /* SYSTIMH latching upon SYSTIML read does not work well. To fix that +- * we don't want to allow overflow of SYSTIML and a change to SYSTIMH +- * to occur between reads, so if we read a vale close to overflow, we +- * wait for overflow to occur and read both registers when its safe. 
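The e1000e hunk continuing below replaces the old workaround (spinning until SYSTIML is far from wraparound) with a low/high/low read sequence: if the second SYSTIML read is larger than the first, no carry into SYSTIMH happened in between and the pair is consistent; otherwise SYSTIMH is re-read and combined with the second low word. A self-contained sketch of that pattern against a simulated split counter (read_lo/read_hi and the starting value are invented for the demo, they are not the driver's register accessors):

#include <stdio.h>
#include <stdint.h>

static uint64_t hw_counter = 0xfffffffeULL;	/* one tick from carrying into the high word */

/* the simulated counter advances on every low-word read */
static uint32_t read_lo(void) { return (uint32_t)hw_counter++; }
static uint32_t read_hi(void) { return (uint32_t)(hw_counter >> 32); }

static uint64_t read_split_counter(void)
{
	uint32_t lo1 = read_lo();
	uint32_t hi = read_hi();
	uint32_t lo2 = read_lo();

	if (lo1 < lo2)	/* no wrap between the two low reads: lo1 and hi match */
		return ((uint64_t)hi << 32) | lo1;

	/* the low word wrapped, so hi may be stale: re-read and pair with lo2 */
	hi = read_hi();
	return ((uint64_t)hi << 32) | lo2;
}

int main(void)
{
	printf("counter = 0x%016llx\n", (unsigned long long)read_split_counter());
	return 0;
}

Either branch yields a value the counter actually passed through, which is what removes the huge non-linear jumps the new comment describes.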
++ /* SYSTIMH latching upon SYSTIML read does not work well. ++ * This means that if SYSTIML overflows after we read it but before ++ * we read SYSTIMH, the value of SYSTIMH has been incremented and we ++ * will experience a huge non linear increment in the systime value ++ * to fix that we test for overflow and if true, we re-read systime. + */ +- u32 systim_overflow_latch_fix = 0x3FFFFFFF; +- +- do { +- systim = (cycle_t)er32(SYSTIML); +- } while (systim > systim_overflow_latch_fix); +- systim |= (cycle_t)er32(SYSTIMH) << 32; ++ systimel_1 = er32(SYSTIML); ++ systimeh = er32(SYSTIMH); ++ systimel_2 = er32(SYSTIML); ++ /* Check for overflow. If there was no overflow, use the values */ ++ if (systimel_1 < systimel_2) { ++ systim = (cycle_t)systimel_1; ++ systim |= (cycle_t)systimeh << 32; ++ } else { ++ /* There was an overflow, read again SYSTIMH, and use ++ * systimel_2 ++ */ ++ systimeh = er32(SYSTIMH); ++ systim = (cycle_t)systimel_2; ++ systim |= (cycle_t)systimeh << 32; ++ } + + if ((hw->mac.type == e1000_82574) || (hw->mac.type == e1000_82583)) { + u64 incvalue, time_delta, rem, temp; +diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c +index 8d7b596..5bc9fca 100644 +--- a/drivers/net/ethernet/intel/igb/igb_main.c ++++ b/drivers/net/ethernet/intel/igb/igb_main.c +@@ -2851,7 +2851,7 @@ static void igb_probe_vfs(struct igb_adapter *adapter) + return; + + pci_sriov_set_totalvfs(pdev, 7); +- igb_pci_enable_sriov(pdev, max_vfs); ++ igb_enable_sriov(pdev, max_vfs); + + #endif /* CONFIG_PCI_IOV */ + } +diff --git a/drivers/net/ethernet/via/Kconfig b/drivers/net/ethernet/via/Kconfig +index 2f1264b..d3d0947 100644 +--- a/drivers/net/ethernet/via/Kconfig ++++ b/drivers/net/ethernet/via/Kconfig +@@ -17,7 +17,7 @@ if NET_VENDOR_VIA + + config VIA_RHINE + tristate "VIA Rhine support" +- depends on (PCI || OF_IRQ) ++ depends on PCI || (OF_IRQ && GENERIC_PCI_IOMAP) + depends on HAS_DMA + select CRC32 + select MII +diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c +index 85bfa2a..32d9ff1 100644 +--- a/drivers/net/wireless/ath/ath10k/htc.c ++++ b/drivers/net/wireless/ath/ath10k/htc.c +@@ -145,8 +145,10 @@ int ath10k_htc_send(struct ath10k_htc *htc, + skb_cb->eid = eid; + skb_cb->paddr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE); + ret = dma_mapping_error(dev, skb_cb->paddr); +- if (ret) ++ if (ret) { ++ ret = -EIO; + goto err_credits; ++ } + + sg_item.transfer_id = ep->eid; + sg_item.transfer_context = skb; +diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c +index a60ef7d..7be3ce6 100644 +--- a/drivers/net/wireless/ath/ath10k/htt_tx.c ++++ b/drivers/net/wireless/ath/ath10k/htt_tx.c +@@ -371,8 +371,10 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu) + skb_cb->paddr = dma_map_single(dev, msdu->data, msdu->len, + DMA_TO_DEVICE); + res = dma_mapping_error(dev, skb_cb->paddr); +- if (res) ++ if (res) { ++ res = -EIO; + goto err_free_txdesc; ++ } + + skb_put(txdesc, len); + cmd = (struct htt_cmd *)txdesc->data; +@@ -456,8 +458,10 @@ int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *msdu) + skb_cb->paddr = dma_map_single(dev, msdu->data, msdu->len, + DMA_TO_DEVICE); + res = dma_mapping_error(dev, skb_cb->paddr); +- if (res) ++ if (res) { ++ res = -EIO; + goto err_free_txbuf; ++ } + + switch (skb_cb->txmode) { + case ATH10K_HW_TXRX_RAW: +diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c 
+index 218b6af..0d3c474 100644 +--- a/drivers/net/wireless/ath/ath10k/mac.c ++++ b/drivers/net/wireless/ath/ath10k/mac.c +@@ -591,11 +591,19 @@ ath10k_mac_get_any_chandef_iter(struct ieee80211_hw *hw, + static int ath10k_peer_create(struct ath10k *ar, u32 vdev_id, const u8 *addr, + enum wmi_peer_type peer_type) + { ++ struct ath10k_vif *arvif; ++ int num_peers = 0; + int ret; + + lockdep_assert_held(&ar->conf_mutex); + +- if (ar->num_peers >= ar->max_num_peers) ++ num_peers = ar->num_peers; ++ ++ /* Each vdev consumes a peer entry as well */ ++ list_for_each_entry(arvif, &ar->arvifs, list) ++ num_peers++; ++ ++ if (num_peers >= ar->max_num_peers) + return -ENOBUFS; + + ret = ath10k_wmi_peer_create(ar, vdev_id, addr, peer_type); +@@ -2995,6 +3003,8 @@ void ath10k_mac_tx_unlock(struct ath10k *ar, int reason) + IEEE80211_IFACE_ITER_RESUME_ALL, + ath10k_mac_tx_unlock_iter, + ar); ++ ++ ieee80211_wake_queue(ar->hw, ar->hw->offchannel_tx_hw_queue); + } + + void ath10k_mac_vif_tx_lock(struct ath10k_vif *arvif, int reason) +@@ -3034,38 +3044,16 @@ static void ath10k_mac_vif_handle_tx_pause(struct ath10k_vif *arvif, + + lockdep_assert_held(&ar->htt.tx_lock); + +- switch (pause_id) { +- case WMI_TLV_TX_PAUSE_ID_MCC: +- case WMI_TLV_TX_PAUSE_ID_P2P_CLI_NOA: +- case WMI_TLV_TX_PAUSE_ID_P2P_GO_PS: +- case WMI_TLV_TX_PAUSE_ID_AP_PS: +- case WMI_TLV_TX_PAUSE_ID_IBSS_PS: +- switch (action) { +- case WMI_TLV_TX_PAUSE_ACTION_STOP: +- ath10k_mac_vif_tx_lock(arvif, pause_id); +- break; +- case WMI_TLV_TX_PAUSE_ACTION_WAKE: +- ath10k_mac_vif_tx_unlock(arvif, pause_id); +- break; +- default: +- ath10k_warn(ar, "received unknown tx pause action %d on vdev %i, ignoring\n", +- action, arvif->vdev_id); +- break; +- } ++ switch (action) { ++ case WMI_TLV_TX_PAUSE_ACTION_STOP: ++ ath10k_mac_vif_tx_lock(arvif, pause_id); ++ break; ++ case WMI_TLV_TX_PAUSE_ACTION_WAKE: ++ ath10k_mac_vif_tx_unlock(arvif, pause_id); + break; +- case WMI_TLV_TX_PAUSE_ID_AP_PEER_PS: +- case WMI_TLV_TX_PAUSE_ID_AP_PEER_UAPSD: +- case WMI_TLV_TX_PAUSE_ID_STA_ADD_BA: +- case WMI_TLV_TX_PAUSE_ID_HOST: + default: +- /* FIXME: Some pause_ids aren't vdev specific. Instead they +- * target peer_id and tid. Implementing these could improve +- * traffic scheduling fairness across multiple connected +- * stations in AP/IBSS modes. 
+- */ +- ath10k_dbg(ar, ATH10K_DBG_MAC, +- "mac ignoring unsupported tx pause vdev %i id %d\n", +- arvif->vdev_id, pause_id); ++ ath10k_warn(ar, "received unknown tx pause action %d on vdev %i, ignoring\n", ++ action, arvif->vdev_id); + break; + } + } +@@ -3082,12 +3070,15 @@ static void ath10k_mac_handle_tx_pause_iter(void *data, u8 *mac, + struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif); + struct ath10k_mac_tx_pause *arg = data; + ++ if (arvif->vdev_id != arg->vdev_id) ++ return; ++ + ath10k_mac_vif_handle_tx_pause(arvif, arg->pause_id, arg->action); + } + +-void ath10k_mac_handle_tx_pause(struct ath10k *ar, u32 vdev_id, +- enum wmi_tlv_tx_pause_id pause_id, +- enum wmi_tlv_tx_pause_action action) ++void ath10k_mac_handle_tx_pause_vdev(struct ath10k *ar, u32 vdev_id, ++ enum wmi_tlv_tx_pause_id pause_id, ++ enum wmi_tlv_tx_pause_action action) + { + struct ath10k_mac_tx_pause arg = { + .vdev_id = vdev_id, +@@ -4080,6 +4071,11 @@ static int ath10k_add_interface(struct ieee80211_hw *hw, + sizeof(arvif->bitrate_mask.control[i].vht_mcs)); + } + ++ if (ar->num_peers >= ar->max_num_peers) { ++ ath10k_warn(ar, "refusing vdev creation due to insufficient peer entry resources in firmware\n"); ++ return -ENOBUFS; ++ } ++ + if (ar->free_vdev_map == 0) { + ath10k_warn(ar, "Free vdev map is empty, no more interfaces allowed.\n"); + ret = -EBUSY; +@@ -4287,6 +4283,11 @@ static int ath10k_add_interface(struct ieee80211_hw *hw, + } + } + ++ spin_lock_bh(&ar->htt.tx_lock); ++ if (!ar->tx_paused) ++ ieee80211_wake_queue(ar->hw, arvif->vdev_id); ++ spin_unlock_bh(&ar->htt.tx_lock); ++ + mutex_unlock(&ar->conf_mutex); + return 0; + +@@ -5561,6 +5562,21 @@ static int ath10k_set_rts_threshold(struct ieee80211_hw *hw, u32 value) + return ret; + } + ++static int ath10k_mac_op_set_frag_threshold(struct ieee80211_hw *hw, u32 value) ++{ ++ /* Even though there's a WMI enum for fragmentation threshold no known ++ * firmware actually implements it. Moreover it is not possible to rely ++ * frame fragmentation to mac80211 because firmware clears the "more ++ * fragments" bit in frame control making it impossible for remote ++ * devices to reassemble frames. ++ * ++ * Hence implement a dummy callback just to say fragmentation isn't ++ * supported. This effectively prevents mac80211 from doing frame ++ * fragmentation in software. 
++ */
++ return -EOPNOTSUPP;
++}
++
+ static void ath10k_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ u32 queues, bool drop)
+ {
+@@ -6395,6 +6411,7 @@ static const struct ieee80211_ops ath10k_ops = {
+ .remain_on_channel = ath10k_remain_on_channel,
+ .cancel_remain_on_channel = ath10k_cancel_remain_on_channel,
+ .set_rts_threshold = ath10k_set_rts_threshold,
++ .set_frag_threshold = ath10k_mac_op_set_frag_threshold,
+ .flush = ath10k_flush,
+ .tx_last_beacon = ath10k_tx_last_beacon,
+ .set_antenna = ath10k_set_antenna,
+diff --git a/drivers/net/wireless/ath/ath10k/mac.h b/drivers/net/wireless/ath/ath10k/mac.h
+index b291f06..e3cefe4 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.h
++++ b/drivers/net/wireless/ath/ath10k/mac.h
+@@ -61,9 +61,9 @@ int ath10k_mac_vif_chan(struct ieee80211_vif *vif,
+ 
+ void ath10k_mac_handle_beacon(struct ath10k *ar, struct sk_buff *skb);
+ void ath10k_mac_handle_beacon_miss(struct ath10k *ar, u32 vdev_id);
+-void ath10k_mac_handle_tx_pause(struct ath10k *ar, u32 vdev_id,
+- enum wmi_tlv_tx_pause_id pause_id,
+- enum wmi_tlv_tx_pause_action action);
++void ath10k_mac_handle_tx_pause_vdev(struct ath10k *ar, u32 vdev_id,
++ enum wmi_tlv_tx_pause_id pause_id,
++ enum wmi_tlv_tx_pause_action action);
+ 
+ u8 ath10k_mac_hw_rate_to_idx(const struct ieee80211_supported_band *sband,
+ u8 hw_rate);
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index ea656e0..8c5cc1f 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -1546,8 +1546,10 @@ static int ath10k_pci_hif_exchange_bmi_msg(struct ath10k *ar,
+ 
+ req_paddr = dma_map_single(ar->dev, treq, req_len, DMA_TO_DEVICE);
+ ret = dma_mapping_error(ar->dev, req_paddr);
+- if (ret)
++ if (ret) {
++ ret = -EIO;
+ goto err_dma;
++ }
+ 
+ if (resp && resp_len) {
+ tresp = kzalloc(*resp_len, GFP_KERNEL);
+@@ -1559,8 +1561,10 @@ static int ath10k_pci_hif_exchange_bmi_msg(struct ath10k *ar,
+ resp_paddr = dma_map_single(ar->dev, tresp, *resp_len,
+ DMA_FROM_DEVICE);
+ ret = dma_mapping_error(ar->dev, resp_paddr);
+- if (ret)
++ if (ret) {
++ ret = -EIO;
+ goto err_req;
++ }
+ 
+ xfer.wait_for_resp = true;
+ xfer.resp_len = 0;
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 8fdba386..6f477e8 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -377,12 +377,34 @@ static int ath10k_wmi_tlv_event_tx_pause(struct ath10k *ar,
+ "wmi tlv tx pause pause_id %u action %u vdev_map 0x%08x peer_id %u tid_map 0x%08x\n",
+ pause_id, action, vdev_map, peer_id, tid_map);
+ 
+- for (vdev_id = 0; vdev_map; vdev_id++) {
+- if (!(vdev_map & BIT(vdev_id)))
+- continue;
+-
+- vdev_map &= ~BIT(vdev_id);
+- ath10k_mac_handle_tx_pause(ar, vdev_id, pause_id, action);
++ switch (pause_id) {
++ case WMI_TLV_TX_PAUSE_ID_MCC:
++ case WMI_TLV_TX_PAUSE_ID_P2P_CLI_NOA:
++ case WMI_TLV_TX_PAUSE_ID_P2P_GO_PS:
++ case WMI_TLV_TX_PAUSE_ID_AP_PS:
++ case WMI_TLV_TX_PAUSE_ID_IBSS_PS:
++ for (vdev_id = 0; vdev_map; vdev_id++) {
++ if (!(vdev_map & BIT(vdev_id)))
++ continue;
++
++ vdev_map &= ~BIT(vdev_id);
++ ath10k_mac_handle_tx_pause_vdev(ar, vdev_id, pause_id,
++ action);
++ }
++ break;
++ case WMI_TLV_TX_PAUSE_ID_AP_PEER_PS:
++ case WMI_TLV_TX_PAUSE_ID_AP_PEER_UAPSD:
++ case WMI_TLV_TX_PAUSE_ID_STA_ADD_BA:
++ case WMI_TLV_TX_PAUSE_ID_HOST:
++ ath10k_dbg(ar, ATH10K_DBG_MAC,
++ "mac ignoring unsupported tx pause id %d\n",
++ pause_id);
++ break;
++ 
default: ++ ath10k_dbg(ar, ATH10K_DBG_MAC, ++ "mac ignoring unknown tx pause vdev %d\n", ++ pause_id); ++ break; + } + + kfree(tb); +diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c +index 6c046c2..8dd84c1 100644 +--- a/drivers/net/wireless/ath/ath10k/wmi.c ++++ b/drivers/net/wireless/ath/ath10k/wmi.c +@@ -2391,6 +2391,7 @@ void ath10k_wmi_event_host_swba(struct ath10k *ar, struct sk_buff *skb) + ath10k_warn(ar, "failed to map beacon: %d\n", + ret); + dev_kfree_skb_any(bcn); ++ ret = -EIO; + goto skip; + } + +diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c b/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c +index 1c6788a..40d7231 100644 +--- a/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c ++++ b/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c +@@ -203,8 +203,10 @@ static int rsi_load_ta_instructions(struct rsi_common *common) + + /* Copy firmware into DMA-accessible memory */ + fw = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL); +- if (!fw) +- return -ENOMEM; ++ if (!fw) { ++ status = -ENOMEM; ++ goto out; ++ } + len = fw_entry->size; + + if (len % 4) +@@ -217,6 +219,8 @@ static int rsi_load_ta_instructions(struct rsi_common *common) + + status = rsi_copy_to_card(common, fw, len, num_blocks); + kfree(fw); ++ ++out: + release_firmware(fw_entry); + return status; + } +diff --git a/drivers/net/wireless/rsi/rsi_91x_usb_ops.c b/drivers/net/wireless/rsi/rsi_91x_usb_ops.c +index 30c2cf7..de49008 100644 +--- a/drivers/net/wireless/rsi/rsi_91x_usb_ops.c ++++ b/drivers/net/wireless/rsi/rsi_91x_usb_ops.c +@@ -148,8 +148,10 @@ static int rsi_load_ta_instructions(struct rsi_common *common) + + /* Copy firmware into DMA-accessible memory */ + fw = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL); +- if (!fw) +- return -ENOMEM; ++ if (!fw) { ++ status = -ENOMEM; ++ goto out; ++ } + len = fw_entry->size; + + if (len % 4) +@@ -162,6 +164,8 @@ static int rsi_load_ta_instructions(struct rsi_common *common) + + status = rsi_copy_to_card(common, fw, len, num_blocks); + kfree(fw); ++ ++out: + release_firmware(fw_entry); + return status; + } +diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c +index f948c46..5ff0cfd 100644 +--- a/drivers/net/xen-netfront.c ++++ b/drivers/net/xen-netfront.c +@@ -1348,7 +1348,8 @@ static void xennet_disconnect_backend(struct netfront_info *info) + queue->tx_evtchn = queue->rx_evtchn = 0; + queue->tx_irq = queue->rx_irq = 0; + +- napi_synchronize(&queue->napi); ++ if (netif_running(info->netdev)) ++ napi_synchronize(&queue->napi); + + xennet_release_tx_bufs(queue); + xennet_release_rx_bufs(queue); +diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c +index ade9eb9..b796d1b 100644 +--- a/drivers/nvdimm/pmem.c ++++ b/drivers/nvdimm/pmem.c +@@ -86,6 +86,8 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector, + struct pmem_device *pmem = bdev->bd_disk->private_data; + + pmem_do_bvec(pmem, page, PAGE_CACHE_SIZE, 0, rw, sector); ++ if (rw & WRITE) ++ wmb_pmem(); + page_endio(page, rw & WRITE, 0); + + return 0; +diff --git a/drivers/pci/access.c b/drivers/pci/access.c +index b965c12..502a82c 100644 +--- a/drivers/pci/access.c ++++ b/drivers/pci/access.c +@@ -442,7 +442,8 @@ static const struct pci_vpd_ops pci_vpd_pci22_ops = { + static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count, + void *arg) + { +- struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn)); ++ struct pci_dev *tdev = pci_get_slot(dev->bus, ++ PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); + ssize_t ret; + + 
if (!tdev) +@@ -456,7 +457,8 @@ static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count, + static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count, + const void *arg) + { +- struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn)); ++ struct pci_dev *tdev = pci_get_slot(dev->bus, ++ PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); + ssize_t ret; + + if (!tdev) +@@ -473,22 +475,6 @@ static const struct pci_vpd_ops pci_vpd_f0_ops = { + .release = pci_vpd_pci22_release, + }; + +-static int pci_vpd_f0_dev_check(struct pci_dev *dev) +-{ +- struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn)); +- int ret = 0; +- +- if (!tdev) +- return -ENODEV; +- if (!tdev->vpd || !tdev->multifunction || +- dev->class != tdev->class || dev->vendor != tdev->vendor || +- dev->device != tdev->device) +- ret = -ENODEV; +- +- pci_dev_put(tdev); +- return ret; +-} +- + int pci_vpd_pci22_init(struct pci_dev *dev) + { + struct pci_vpd_pci22 *vpd; +@@ -497,12 +483,7 @@ int pci_vpd_pci22_init(struct pci_dev *dev) + cap = pci_find_capability(dev, PCI_CAP_ID_VPD); + if (!cap) + return -ENODEV; +- if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) { +- int ret = pci_vpd_f0_dev_check(dev); + +- if (ret) +- return ret; +- } + vpd = kzalloc(sizeof(*vpd), GFP_ATOMIC); + if (!vpd) + return -ENOMEM; +diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c +index 6fbd3f2..d3346d2 100644 +--- a/drivers/pci/bus.c ++++ b/drivers/pci/bus.c +@@ -256,6 +256,8 @@ bool pci_bus_clip_resource(struct pci_dev *dev, int idx) + + res->start = start; + res->end = end; ++ res->flags &= ~IORESOURCE_UNSET; ++ orig_res.flags &= ~IORESOURCE_UNSET; + dev_printk(KERN_DEBUG, &dev->dev, "%pR clipped to %pR\n", + &orig_res, res); + +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c +index dbd1385..6b1c6a9 100644 +--- a/drivers/pci/quirks.c ++++ b/drivers/pci/quirks.c +@@ -1906,11 +1906,27 @@ static void quirk_netmos(struct pci_dev *dev) + DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_NETMOS, PCI_ANY_ID, + PCI_CLASS_COMMUNICATION_SERIAL, 8, quirk_netmos); + ++/* ++ * Quirk non-zero PCI functions to route VPD access through function 0 for ++ * devices that share VPD resources between functions. The functions are ++ * expected to be identical devices. 
++ */ + static void quirk_f0_vpd_link(struct pci_dev *dev) + { +- if (!dev->multifunction || !PCI_FUNC(dev->devfn)) ++ struct pci_dev *f0; ++ ++ if (!PCI_FUNC(dev->devfn)) + return; +- dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0; ++ ++ f0 = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); ++ if (!f0) ++ return; ++ ++ if (f0->vpd && dev->class == f0->class && ++ dev->vendor == f0->vendor && dev->device == f0->device) ++ dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0; ++ ++ pci_dev_put(f0); + } + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, + PCI_CLASS_NETWORK_ETHERNET, 8, quirk_f0_vpd_link); +diff --git a/drivers/pcmcia/sa1100_generic.c b/drivers/pcmcia/sa1100_generic.c +index 8039452..42861cc 100644 +--- a/drivers/pcmcia/sa1100_generic.c ++++ b/drivers/pcmcia/sa1100_generic.c +@@ -93,7 +93,6 @@ static int sa11x0_drv_pcmcia_remove(struct platform_device *dev) + for (i = 0; i < sinfo->nskt; i++) + soc_pcmcia_remove_one(&sinfo->skt[i]); + +- clk_put(sinfo->clk); + kfree(sinfo); + return 0; + } +diff --git a/drivers/pcmcia/sa11xx_base.c b/drivers/pcmcia/sa11xx_base.c +index cf6de2c..553d70a 100644 +--- a/drivers/pcmcia/sa11xx_base.c ++++ b/drivers/pcmcia/sa11xx_base.c +@@ -222,7 +222,7 @@ int sa11xx_drv_pcmcia_probe(struct device *dev, struct pcmcia_low_level *ops, + int i, ret = 0; + struct clk *clk; + +- clk = clk_get(dev, NULL); ++ clk = devm_clk_get(dev, NULL); + if (IS_ERR(clk)) + return PTR_ERR(clk); + +@@ -251,7 +251,6 @@ int sa11xx_drv_pcmcia_probe(struct device *dev, struct pcmcia_low_level *ops, + if (ret) { + while (--i >= 0) + soc_pcmcia_remove_one(&sinfo->skt[i]); +- clk_put(clk); + kfree(sinfo); + } else { + dev_set_drvdata(dev, sinfo); +diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c +index 3ad7b1f..6f4f310 100644 +--- a/drivers/platform/x86/toshiba_acpi.c ++++ b/drivers/platform/x86/toshiba_acpi.c +@@ -2408,11 +2408,9 @@ static int toshiba_acpi_setup_keyboard(struct toshiba_acpi_dev *dev) + if (error) + return error; + +- error = toshiba_hotkey_event_type_get(dev, &events_type); +- if (error) { +- pr_err("Unable to query Hotkey Event Type\n"); +- return error; +- } ++ if (toshiba_hotkey_event_type_get(dev, &events_type)) ++ pr_notice("Unable to query Hotkey Event Type\n"); ++ + dev->hotkey_event_type = events_type; + + dev->hotkey_dev = input_allocate_device(); +diff --git a/drivers/power/avs/Kconfig b/drivers/power/avs/Kconfig +index 7f3d389..a67eeac 100644 +--- a/drivers/power/avs/Kconfig ++++ b/drivers/power/avs/Kconfig +@@ -13,7 +13,7 @@ menuconfig POWER_AVS + + config ROCKCHIP_IODOMAIN + tristate "Rockchip IO domain support" +- depends on ARCH_ROCKCHIP && OF ++ depends on POWER_AVS && ARCH_ROCKCHIP && OF + help + Say y here to enable support io domains on Rockchip SoCs. 
It is + necessary for the io domain setting of the SoC to match the +diff --git a/drivers/regulator/axp20x-regulator.c b/drivers/regulator/axp20x-regulator.c +index 6468291..1dea0e8 100644 +--- a/drivers/regulator/axp20x-regulator.c ++++ b/drivers/regulator/axp20x-regulator.c +@@ -192,9 +192,9 @@ static const struct regulator_desc axp22x_regulators[] = { + AXP_DESC(AXP22X, DCDC3, "dcdc3", "vin3", 600, 1860, 20, + AXP22X_DCDC3_V_OUT, 0x3f, AXP22X_PWR_OUT_CTRL1, BIT(3)), + AXP_DESC(AXP22X, DCDC4, "dcdc4", "vin4", 600, 1540, 20, +- AXP22X_DCDC4_V_OUT, 0x3f, AXP22X_PWR_OUT_CTRL1, BIT(3)), ++ AXP22X_DCDC4_V_OUT, 0x3f, AXP22X_PWR_OUT_CTRL1, BIT(4)), + AXP_DESC(AXP22X, DCDC5, "dcdc5", "vin5", 1000, 2550, 50, +- AXP22X_DCDC5_V_OUT, 0x1f, AXP22X_PWR_OUT_CTRL1, BIT(4)), ++ AXP22X_DCDC5_V_OUT, 0x1f, AXP22X_PWR_OUT_CTRL1, BIT(5)), + /* secondary switchable output of DCDC1 */ + AXP_DESC_SW(AXP22X, DC1SW, "dc1sw", "dcdc1", 1600, 3400, 100, + AXP22X_DCDC1_V_OUT, 0x1f, AXP22X_PWR_OUT_CTRL2, BIT(7)), +diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c +index 78387a6..5081533 100644 +--- a/drivers/regulator/core.c ++++ b/drivers/regulator/core.c +@@ -1376,15 +1376,19 @@ static int regulator_resolve_supply(struct regulator_dev *rdev) + return 0; + + r = regulator_dev_lookup(dev, rdev->supply_name, &ret); +- if (ret == -ENODEV) { +- /* +- * No supply was specified for this regulator and +- * there will never be one. +- */ +- return 0; +- } +- + if (!r) { ++ if (ret == -ENODEV) { ++ /* ++ * No supply was specified for this regulator and ++ * there will never be one. ++ */ ++ return 0; ++ } ++ ++ /* Did the lookup explicitly defer for us? */ ++ if (ret == -EPROBE_DEFER) ++ return ret; ++ + if (have_full_constraints()) { + r = dummy_regulator_rdev; + } else { +diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c +index add419d..a56a7b2 100644 +--- a/drivers/scsi/3w-9xxx.c ++++ b/drivers/scsi/3w-9xxx.c +@@ -212,6 +212,17 @@ static const struct file_operations twa_fops = { + .llseek = noop_llseek, + }; + ++/* ++ * The controllers use an inline buffer instead of a mapped SGL for small, ++ * single entry buffers. Note that we treat a zero-length transfer like ++ * a mapped SGL. 
++ */ ++static bool twa_command_mapped(struct scsi_cmnd *cmd) ++{ ++ return scsi_sg_count(cmd) != 1 || ++ scsi_bufflen(cmd) >= TW_MIN_SGL_LENGTH; ++} ++ + /* This function will complete an aen request from the isr */ + static int twa_aen_complete(TW_Device_Extension *tw_dev, int request_id) + { +@@ -1339,7 +1350,8 @@ static irqreturn_t twa_interrupt(int irq, void *dev_instance) + } + + /* Now complete the io */ +- scsi_dma_unmap(cmd); ++ if (twa_command_mapped(cmd)) ++ scsi_dma_unmap(cmd); + cmd->scsi_done(cmd); + tw_dev->state[request_id] = TW_S_COMPLETED; + twa_free_request_id(tw_dev, request_id); +@@ -1582,7 +1594,8 @@ static int twa_reset_device_extension(TW_Device_Extension *tw_dev) + struct scsi_cmnd *cmd = tw_dev->srb[i]; + + cmd->result = (DID_RESET << 16); +- scsi_dma_unmap(cmd); ++ if (twa_command_mapped(cmd)) ++ scsi_dma_unmap(cmd); + cmd->scsi_done(cmd); + } + } +@@ -1765,12 +1778,14 @@ static int twa_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_ + retval = twa_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL); + switch (retval) { + case SCSI_MLQUEUE_HOST_BUSY: +- scsi_dma_unmap(SCpnt); ++ if (twa_command_mapped(SCpnt)) ++ scsi_dma_unmap(SCpnt); + twa_free_request_id(tw_dev, request_id); + break; + case 1: + SCpnt->result = (DID_ERROR << 16); +- scsi_dma_unmap(SCpnt); ++ if (twa_command_mapped(SCpnt)) ++ scsi_dma_unmap(SCpnt); + done(SCpnt); + tw_dev->state[request_id] = TW_S_COMPLETED; + twa_free_request_id(tw_dev, request_id); +@@ -1831,8 +1846,7 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, + /* Map sglist from scsi layer to cmd packet */ + + if (scsi_sg_count(srb)) { +- if ((scsi_sg_count(srb) == 1) && +- (scsi_bufflen(srb) < TW_MIN_SGL_LENGTH)) { ++ if (!twa_command_mapped(srb)) { + if (srb->sc_data_direction == DMA_TO_DEVICE || + srb->sc_data_direction == DMA_BIDIRECTIONAL) + scsi_sg_copy_to_buffer(srb, +@@ -1905,7 +1919,7 @@ static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int re + { + struct scsi_cmnd *cmd = tw_dev->srb[request_id]; + +- if (scsi_bufflen(cmd) < TW_MIN_SGL_LENGTH && ++ if (!twa_command_mapped(cmd) && + (cmd->sc_data_direction == DMA_FROM_DEVICE || + cmd->sc_data_direction == DMA_BIDIRECTIONAL)) { + if (scsi_sg_count(cmd) == 1) { +diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c +index 1dafeb4..cab4e98 100644 +--- a/drivers/scsi/hpsa.c ++++ b/drivers/scsi/hpsa.c +@@ -5104,7 +5104,7 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd) + int rc; + struct ctlr_info *h; + struct hpsa_scsi_dev_t *dev; +- char msg[40]; ++ char msg[48]; + + /* find the controller to which the command to be aborted was sent */ + h = sdev_to_hba(scsicmd->device); +@@ -5122,16 +5122,18 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd) + + /* if controller locked up, we can guarantee command won't complete */ + if (lockup_detected(h)) { +- sprintf(msg, "cmd %d RESET FAILED, lockup detected", +- hpsa_get_cmd_index(scsicmd)); ++ snprintf(msg, sizeof(msg), ++ "cmd %d RESET FAILED, lockup detected", ++ hpsa_get_cmd_index(scsicmd)); + hpsa_show_dev_msg(KERN_WARNING, h, dev, msg); + return FAILED; + } + + /* this reset request might be the result of a lockup; check */ + if (detect_controller_lockup(h)) { +- sprintf(msg, "cmd %d RESET FAILED, new lockup detected", +- hpsa_get_cmd_index(scsicmd)); ++ snprintf(msg, sizeof(msg), ++ "cmd %d RESET FAILED, new lockup detected", ++ hpsa_get_cmd_index(scsicmd)); + hpsa_show_dev_msg(KERN_WARNING, h, dev, msg); + return 
FAILED; + } +@@ -5145,7 +5147,8 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd) + /* send a reset to the SCSI LUN which the command was sent to */ + rc = hpsa_do_reset(h, dev, dev->scsi3addr, HPSA_RESET_TYPE_LUN, + DEFAULT_REPLY_QUEUE); +- sprintf(msg, "reset %s", rc == 0 ? "completed successfully" : "failed"); ++ snprintf(msg, sizeof(msg), "reset %s", ++ rc == 0 ? "completed successfully" : "failed"); + hpsa_show_dev_msg(KERN_WARNING, h, dev, msg); + return rc == 0 ? SUCCESS : FAILED; + } +diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c +index a9aa389..cccab61 100644 +--- a/drivers/scsi/ipr.c ++++ b/drivers/scsi/ipr.c +@@ -4554,7 +4554,7 @@ static ssize_t ipr_store_raw_mode(struct device *dev, + spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); + res = (struct ipr_resource_entry *)sdev->hostdata; + if (res) { +- if (ioa_cfg->sis64 && ipr_is_af_dasd_device(res)) { ++ if (ipr_is_af_dasd_device(res)) { + res->raw_mode = simple_strtoul(buf, NULL, 10); + len = strlen(buf); + if (res->sdev) +diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c +index 6457a8a..bf3d801 100644 +--- a/drivers/scsi/scsi_error.c ++++ b/drivers/scsi/scsi_error.c +@@ -2169,8 +2169,17 @@ int scsi_error_handler(void *data) + * We never actually get interrupted because kthread_run + * disables signal delivery for the created thread. + */ +- while (!kthread_should_stop()) { ++ while (true) { ++ /* ++ * The sequence in kthread_stop() sets the stop flag first ++ * then wakes the process. To avoid missed wakeups, the task ++ * should always be in a non running state before the stop ++ * flag is checked ++ */ + set_current_state(TASK_INTERRUPTIBLE); ++ if (kthread_should_stop()) ++ break; ++ + if ((shost->host_failed == 0 && shost->host_eh_scheduled == 0) || + shost->host_failed != atomic_read(&shost->host_busy)) { + SCSI_LOG_ERROR_RECOVERY(1, +diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c +index c9357bb..7445964 100644 +--- a/drivers/spi/spi-bcm2835.c ++++ b/drivers/spi/spi-bcm2835.c +@@ -386,14 +386,14 @@ static bool bcm2835_spi_can_dma(struct spi_master *master, + /* otherwise we only allow transfers within the same page + * to avoid wasting time on dma_mapping when it is not practical + */ +- if (((size_t)tfr->tx_buf & PAGE_MASK) + tfr->len > PAGE_SIZE) { ++ if (((size_t)tfr->tx_buf & (PAGE_SIZE - 1)) + tfr->len > PAGE_SIZE) { + dev_warn_once(&spi->dev, + "Unaligned spi tx-transfer bridging page\n"); + return false; + } +- if (((size_t)tfr->rx_buf & PAGE_MASK) + tfr->len > PAGE_SIZE) { ++ if (((size_t)tfr->rx_buf & (PAGE_SIZE - 1)) + tfr->len > PAGE_SIZE) { + dev_warn_once(&spi->dev, +- "Unaligned spi tx-transfer bridging page\n"); ++ "Unaligned spi rx-transfer bridging page\n"); + return false; + } + +diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c +index 7293d6d..8e4b1a7 100644 +--- a/drivers/spi/spi-pxa2xx.c ++++ b/drivers/spi/spi-pxa2xx.c +@@ -643,6 +643,10 @@ static irqreturn_t ssp_int(int irq, void *dev_id) + if (!(sccr1_reg & SSCR1_TIE)) + mask &= ~SSSR_TFS; + ++ /* Ignore RX timeout interrupt if it is disabled */ ++ if (!(sccr1_reg & SSCR1_TINTE)) ++ mask &= ~SSSR_TINT; ++ + if (!(status & mask)) + return IRQ_NONE; + +diff --git a/drivers/spi/spi-xtensa-xtfpga.c b/drivers/spi/spi-xtensa-xtfpga.c +index 2e32ea2..be6155c 100644 +--- a/drivers/spi/spi-xtensa-xtfpga.c ++++ b/drivers/spi/spi-xtensa-xtfpga.c +@@ -34,13 +34,13 @@ struct xtfpga_spi { + static inline void xtfpga_spi_write32(const struct xtfpga_spi *spi, + unsigned addr, 
u32 val) + { +- iowrite32(val, spi->regs + addr); ++ __raw_writel(val, spi->regs + addr); + } + + static inline unsigned int xtfpga_spi_read32(const struct xtfpga_spi *spi, + unsigned addr) + { +- return ioread32(spi->regs + addr); ++ return __raw_readl(spi->regs + addr); + } + + static inline void xtfpga_spi_wait_busy(struct xtfpga_spi *xspi) +diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c +index cf8b91b..9ce2f15 100644 +--- a/drivers/spi/spi.c ++++ b/drivers/spi/spi.c +@@ -1437,8 +1437,7 @@ static struct class spi_master_class = { + * + * The caller is responsible for assigning the bus number and initializing + * the master's methods before calling spi_register_master(); and (after errors +- * adding the device) calling spi_master_put() and kfree() to prevent a memory +- * leak. ++ * adding the device) calling spi_master_put() to prevent a memory leak. + */ + struct spi_master *spi_alloc_master(struct device *dev, unsigned size) + { +diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c +index c7de641..97aad8f 100644 +--- a/drivers/spi/spidev.c ++++ b/drivers/spi/spidev.c +@@ -651,7 +651,8 @@ static int spidev_release(struct inode *inode, struct file *filp) + kfree(spidev->rx_buffer); + spidev->rx_buffer = NULL; + +- spidev->speed_hz = spidev->spi->max_speed_hz; ++ if (spidev->spi) ++ spidev->speed_hz = spidev->spi->max_speed_hz; + + /* ... after we unbound from the underlying device? */ + spin_lock_irq(&spidev->spi_lock); +diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c +index 6f48112..b71b1f2 100644 +--- a/drivers/staging/android/ion/ion.c ++++ b/drivers/staging/android/ion/ion.c +@@ -1179,13 +1179,13 @@ struct ion_handle *ion_import_dma_buf(struct ion_client *client, int fd) + mutex_unlock(&client->lock); + goto end; + } +- mutex_unlock(&client->lock); + + handle = ion_handle_create(client, buffer); +- if (IS_ERR(handle)) ++ if (IS_ERR(handle)) { ++ mutex_unlock(&client->lock); + goto end; ++ } + +- mutex_lock(&client->lock); + ret = ion_handle_add(client, handle); + mutex_unlock(&client->lock); + if (ret) { +diff --git a/drivers/staging/speakup/fakekey.c b/drivers/staging/speakup/fakekey.c +index 4299cf4..5e1f16c 100644 +--- a/drivers/staging/speakup/fakekey.c ++++ b/drivers/staging/speakup/fakekey.c +@@ -81,6 +81,7 @@ void speakup_fake_down_arrow(void) + __this_cpu_write(reporting_keystroke, true); + input_report_key(virt_keyboard, KEY_DOWN, PRESSED); + input_report_key(virt_keyboard, KEY_DOWN, RELEASED); ++ input_sync(virt_keyboard); + __this_cpu_write(reporting_keystroke, false); + + /* reenable preemption */ +diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c +index fd09290..56cf199 100644 +--- a/drivers/target/iscsi/iscsi_target.c ++++ b/drivers/target/iscsi/iscsi_target.c +@@ -341,7 +341,6 @@ static struct iscsi_np *iscsit_get_np( + + struct iscsi_np *iscsit_add_np( + struct __kernel_sockaddr_storage *sockaddr, +- char *ip_str, + int network_transport) + { + struct sockaddr_in *sock_in; +@@ -370,11 +369,9 @@ struct iscsi_np *iscsit_add_np( + np->np_flags |= NPF_IP_NETWORK; + if (sockaddr->ss_family == AF_INET6) { + sock_in6 = (struct sockaddr_in6 *)sockaddr; +- snprintf(np->np_ip, IPV6_ADDRESS_SPACE, "%s", ip_str); + np->np_port = ntohs(sock_in6->sin6_port); + } else { + sock_in = (struct sockaddr_in *)sockaddr; +- sprintf(np->np_ip, "%s", ip_str); + np->np_port = ntohs(sock_in->sin_port); + } + +@@ -411,8 +408,8 @@ struct iscsi_np *iscsit_add_np( + list_add_tail(&np->np_list, &g_np_list); + 
mutex_unlock(&np_lock); + +- pr_debug("CORE[0] - Added Network Portal: %s:%hu on %s\n", +- np->np_ip, np->np_port, np->np_transport->name); ++ pr_debug("CORE[0] - Added Network Portal: %pISc:%hu on %s\n", ++ &np->np_sockaddr, np->np_port, np->np_transport->name); + + return np; + } +@@ -481,8 +478,8 @@ int iscsit_del_np(struct iscsi_np *np) + list_del(&np->np_list); + mutex_unlock(&np_lock); + +- pr_debug("CORE[0] - Removed Network Portal: %s:%hu on %s\n", +- np->np_ip, np->np_port, np->np_transport->name); ++ pr_debug("CORE[0] - Removed Network Portal: %pISc:%hu on %s\n", ++ &np->np_sockaddr, np->np_port, np->np_transport->name); + + iscsit_put_transport(np->np_transport); + kfree(np); +@@ -3464,7 +3461,6 @@ iscsit_build_sendtargets_response(struct iscsi_cmd *cmd, + tpg_np_list) { + struct iscsi_np *np = tpg_np->tpg_np; + bool inaddr_any = iscsit_check_inaddr_any(np); +- char *fmt_str; + + if (np->np_network_transport != network_transport) + continue; +@@ -3492,15 +3488,18 @@ iscsit_build_sendtargets_response(struct iscsi_cmd *cmd, + } + } + +- if (np->np_sockaddr.ss_family == AF_INET6) +- fmt_str = "TargetAddress=[%s]:%hu,%hu"; +- else +- fmt_str = "TargetAddress=%s:%hu,%hu"; +- +- len = sprintf(buf, fmt_str, +- inaddr_any ? conn->local_ip : np->np_ip, +- np->np_port, +- tpg->tpgt); ++ if (inaddr_any) { ++ len = sprintf(buf, "TargetAddress=" ++ "%s:%hu,%hu", ++ conn->local_ip, ++ np->np_port, ++ tpg->tpgt); ++ } else { ++ len = sprintf(buf, "TargetAddress=" ++ "%pISpc,%hu", ++ &np->np_sockaddr, ++ tpg->tpgt); ++ } + len += 1; + + if ((len + payload_len) > buffer_len) { +diff --git a/drivers/target/iscsi/iscsi_target.h b/drivers/target/iscsi/iscsi_target.h +index 7d0f9c0..d294f03 100644 +--- a/drivers/target/iscsi/iscsi_target.h ++++ b/drivers/target/iscsi/iscsi_target.h +@@ -13,7 +13,7 @@ extern int iscsit_deaccess_np(struct iscsi_np *, struct iscsi_portal_group *, + extern bool iscsit_check_np_match(struct __kernel_sockaddr_storage *, + struct iscsi_np *, int); + extern struct iscsi_np *iscsit_add_np(struct __kernel_sockaddr_storage *, +- char *, int); ++ int); + extern int iscsit_reset_np_thread(struct iscsi_np *, struct iscsi_tpg_np *, + struct iscsi_portal_group *, bool); + extern int iscsit_del_np(struct iscsi_np *); +diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c +index c1898c8..db3b9b9 100644 +--- a/drivers/target/iscsi/iscsi_target_configfs.c ++++ b/drivers/target/iscsi/iscsi_target_configfs.c +@@ -99,7 +99,7 @@ static ssize_t lio_target_np_store_sctp( + * Use existing np->np_sockaddr for SCTP network portal reference + */ + tpg_np_sctp = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr, +- np->np_ip, tpg_np, ISCSI_SCTP_TCP); ++ tpg_np, ISCSI_SCTP_TCP); + if (!tpg_np_sctp || IS_ERR(tpg_np_sctp)) + goto out; + } else { +@@ -177,7 +177,7 @@ static ssize_t lio_target_np_store_iser( + } + + tpg_np_iser = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr, +- np->np_ip, tpg_np, ISCSI_INFINIBAND); ++ tpg_np, ISCSI_INFINIBAND); + if (IS_ERR(tpg_np_iser)) { + rc = PTR_ERR(tpg_np_iser); + goto out; +@@ -248,8 +248,8 @@ static struct se_tpg_np *lio_target_call_addnptotpg( + return ERR_PTR(-EINVAL); + } + str++; /* Skip over leading "[" */ +- *str2 = '\0'; /* Terminate the IPv6 address */ +- str2++; /* Skip over the "]" */ ++ *str2 = '\0'; /* Terminate the unbracketed IPv6 address */ ++ str2++; /* Skip over the \0 */ + port_str = strstr(str2, ":"); + if (!port_str) { + pr_err("Unable to locate \":port\"" +@@ -316,7 +316,7 @@ 
static struct se_tpg_np *lio_target_call_addnptotpg( + * sys/kernel/config/iscsi/$IQN/$TPG/np/$IP:$PORT/ + * + */ +- tpg_np = iscsit_tpg_add_network_portal(tpg, &sockaddr, str, NULL, ++ tpg_np = iscsit_tpg_add_network_portal(tpg, &sockaddr, NULL, + ISCSI_TCP); + if (IS_ERR(tpg_np)) { + iscsit_put_tpg(tpg); +@@ -344,8 +344,8 @@ static void lio_target_call_delnpfromtpg( + + se_tpg = &tpg->tpg_se_tpg; + pr_debug("LIO_Target_ConfigFS: DEREGISTER -> %s TPGT: %hu" +- " PORTAL: %s:%hu\n", config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item), +- tpg->tpgt, tpg_np->tpg_np->np_ip, tpg_np->tpg_np->np_port); ++ " PORTAL: %pISc:%hu\n", config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item), ++ tpg->tpgt, &tpg_np->tpg_np->np_sockaddr, tpg_np->tpg_np->np_port); + + ret = iscsit_tpg_del_network_portal(tpg, tpg_np); + if (ret < 0) +diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c +index 7e8f65e..666c073 100644 +--- a/drivers/target/iscsi/iscsi_target_login.c ++++ b/drivers/target/iscsi/iscsi_target_login.c +@@ -823,8 +823,8 @@ static void iscsi_handle_login_thread_timeout(unsigned long data) + struct iscsi_np *np = (struct iscsi_np *) data; + + spin_lock_bh(&np->np_thread_lock); +- pr_err("iSCSI Login timeout on Network Portal %s:%hu\n", +- np->np_ip, np->np_port); ++ pr_err("iSCSI Login timeout on Network Portal %pISc:%hu\n", ++ &np->np_sockaddr, np->np_port); + + if (np->np_login_timer_flags & ISCSI_TF_STOP) { + spin_unlock_bh(&np->np_thread_lock); +@@ -1302,8 +1302,8 @@ static int __iscsi_target_login_thread(struct iscsi_np *np) + spin_lock_bh(&np->np_thread_lock); + if (np->np_thread_state != ISCSI_NP_THREAD_ACTIVE) { + spin_unlock_bh(&np->np_thread_lock); +- pr_err("iSCSI Network Portal on %s:%hu currently not" +- " active.\n", np->np_ip, np->np_port); ++ pr_err("iSCSI Network Portal on %pISc:%hu currently not" ++ " active.\n", &np->np_sockaddr, np->np_port); + iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, + ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE); + goto new_sess_out; +diff --git a/drivers/target/iscsi/iscsi_target_parameters.c b/drivers/target/iscsi/iscsi_target_parameters.c +index e8a52f7..51d1734 100644 +--- a/drivers/target/iscsi/iscsi_target_parameters.c ++++ b/drivers/target/iscsi/iscsi_target_parameters.c +@@ -407,6 +407,7 @@ int iscsi_create_default_params(struct iscsi_param_list **param_list_ptr) + TYPERANGE_UTF8, USE_INITIAL_ONLY); + if (!param) + goto out; ++ + /* + * Extra parameters for ISER from RFC-5046 + */ +@@ -496,9 +497,9 @@ int iscsi_set_keys_to_negotiate( + } else if (!strcmp(param->name, SESSIONTYPE)) { + SET_PSTATE_NEGOTIATE(param); + } else if (!strcmp(param->name, IFMARKER)) { +- SET_PSTATE_NEGOTIATE(param); ++ SET_PSTATE_REJECT(param); + } else if (!strcmp(param->name, OFMARKER)) { +- SET_PSTATE_NEGOTIATE(param); ++ SET_PSTATE_REJECT(param); + } else if (!strcmp(param->name, IFMARKINT)) { + SET_PSTATE_REJECT(param); + } else if (!strcmp(param->name, OFMARKINT)) { +diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c +index 968068f..de26bee 100644 +--- a/drivers/target/iscsi/iscsi_target_tpg.c ++++ b/drivers/target/iscsi/iscsi_target_tpg.c +@@ -460,7 +460,6 @@ static bool iscsit_tpg_check_network_portal( + struct iscsi_tpg_np *iscsit_tpg_add_network_portal( + struct iscsi_portal_group *tpg, + struct __kernel_sockaddr_storage *sockaddr, +- char *ip_str, + struct iscsi_tpg_np *tpg_np_parent, + int network_transport) + { +@@ -470,8 +469,8 @@ struct iscsi_tpg_np 
*iscsit_tpg_add_network_portal( + if (!tpg_np_parent) { + if (iscsit_tpg_check_network_portal(tpg->tpg_tiqn, sockaddr, + network_transport)) { +- pr_err("Network Portal: %s already exists on a" +- " different TPG on %s\n", ip_str, ++ pr_err("Network Portal: %pISc already exists on a" ++ " different TPG on %s\n", sockaddr, + tpg->tpg_tiqn->tiqn); + return ERR_PTR(-EEXIST); + } +@@ -484,7 +483,7 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal( + return ERR_PTR(-ENOMEM); + } + +- np = iscsit_add_np(sockaddr, ip_str, network_transport); ++ np = iscsit_add_np(sockaddr, network_transport); + if (IS_ERR(np)) { + kfree(tpg_np); + return ERR_CAST(np); +@@ -514,8 +513,8 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal( + spin_unlock(&tpg_np_parent->tpg_np_parent_lock); + } + +- pr_debug("CORE[%s] - Added Network Portal: %s:%hu,%hu on %s\n", +- tpg->tpg_tiqn->tiqn, np->np_ip, np->np_port, tpg->tpgt, ++ pr_debug("CORE[%s] - Added Network Portal: %pISc:%hu,%hu on %s\n", ++ tpg->tpg_tiqn->tiqn, &np->np_sockaddr, np->np_port, tpg->tpgt, + np->np_transport->name); + + return tpg_np; +@@ -528,8 +527,8 @@ static int iscsit_tpg_release_np( + { + iscsit_clear_tpg_np_login_thread(tpg_np, tpg, true); + +- pr_debug("CORE[%s] - Removed Network Portal: %s:%hu,%hu on %s\n", +- tpg->tpg_tiqn->tiqn, np->np_ip, np->np_port, tpg->tpgt, ++ pr_debug("CORE[%s] - Removed Network Portal: %pISc:%hu,%hu on %s\n", ++ tpg->tpg_tiqn->tiqn, &np->np_sockaddr, np->np_port, tpg->tpgt, + np->np_transport->name); + + tpg_np->tpg_np = NULL; +diff --git a/drivers/target/iscsi/iscsi_target_tpg.h b/drivers/target/iscsi/iscsi_target_tpg.h +index 95ff5bd..28abda8 100644 +--- a/drivers/target/iscsi/iscsi_target_tpg.h ++++ b/drivers/target/iscsi/iscsi_target_tpg.h +@@ -22,7 +22,7 @@ extern struct iscsi_node_attrib *iscsit_tpg_get_node_attrib(struct iscsi_session + extern void iscsit_tpg_del_external_nps(struct iscsi_tpg_np *); + extern struct iscsi_tpg_np *iscsit_tpg_locate_child_np(struct iscsi_tpg_np *, int); + extern struct iscsi_tpg_np *iscsit_tpg_add_network_portal(struct iscsi_portal_group *, +- struct __kernel_sockaddr_storage *, char *, struct iscsi_tpg_np *, ++ struct __kernel_sockaddr_storage *, struct iscsi_tpg_np *, + int); + extern int iscsit_tpg_del_network_portal(struct iscsi_portal_group *, + struct iscsi_tpg_np *); +diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c +index 09e682b..8f1cd19 100644 +--- a/drivers/target/target_core_device.c ++++ b/drivers/target/target_core_device.c +@@ -427,8 +427,6 @@ void core_disable_device_list_for_node( + + hlist_del_rcu(&orig->link); + clear_bit(DEF_PR_REG_ACTIVE, &orig->deve_flags); +- rcu_assign_pointer(orig->se_lun, NULL); +- rcu_assign_pointer(orig->se_lun_acl, NULL); + orig->lun_flags = 0; + orig->creation_time = 0; + orig->attach_count--; +@@ -439,6 +437,9 @@ void core_disable_device_list_for_node( + kref_put(&orig->pr_kref, target_pr_kref_release); + wait_for_completion(&orig->pr_comp); + ++ rcu_assign_pointer(orig->se_lun, NULL); ++ rcu_assign_pointer(orig->se_lun_acl, NULL); ++ + kfree_rcu(orig, rcu_head); + + core_scsi3_free_pr_reg_from_nacl(dev, nacl); +diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c +index 5ab7100..e793311 100644 +--- a/drivers/target/target_core_pr.c ++++ b/drivers/target/target_core_pr.c +@@ -618,7 +618,7 @@ static struct t10_pr_registration *__core_scsi3_do_alloc_registration( + struct se_device *dev, + struct se_node_acl *nacl, + struct se_lun *lun, +- struct se_dev_entry *deve, 
++ struct se_dev_entry *dest_deve, + u64 mapped_lun, + unsigned char *isid, + u64 sa_res_key, +@@ -640,7 +640,29 @@ static struct t10_pr_registration *__core_scsi3_do_alloc_registration( + INIT_LIST_HEAD(&pr_reg->pr_reg_atp_mem_list); + atomic_set(&pr_reg->pr_res_holders, 0); + pr_reg->pr_reg_nacl = nacl; +- pr_reg->pr_reg_deve = deve; ++ /* ++ * For destination registrations for ALL_TG_PT=1 and SPEC_I_PT=1, ++ * the se_dev_entry->pr_ref will have been already obtained by ++ * core_get_se_deve_from_rtpi() or __core_scsi3_alloc_registration(). ++ * ++ * Otherwise, locate se_dev_entry now and obtain a reference until ++ * registration completes in __core_scsi3_add_registration(). ++ */ ++ if (dest_deve) { ++ pr_reg->pr_reg_deve = dest_deve; ++ } else { ++ rcu_read_lock(); ++ pr_reg->pr_reg_deve = target_nacl_find_deve(nacl, mapped_lun); ++ if (!pr_reg->pr_reg_deve) { ++ rcu_read_unlock(); ++ pr_err("Unable to locate PR deve %s mapped_lun: %llu\n", ++ nacl->initiatorname, mapped_lun); ++ kmem_cache_free(t10_pr_reg_cache, pr_reg); ++ return NULL; ++ } ++ kref_get(&pr_reg->pr_reg_deve->pr_kref); ++ rcu_read_unlock(); ++ } + pr_reg->pr_res_mapped_lun = mapped_lun; + pr_reg->pr_aptpl_target_lun = lun->unpacked_lun; + pr_reg->tg_pt_sep_rtpi = lun->lun_rtpi; +@@ -936,17 +958,29 @@ static int __core_scsi3_check_aptpl_registration( + !(strcmp(pr_reg->pr_tport, t_port)) && + (pr_reg->pr_reg_tpgt == tpgt) && + (pr_reg->pr_aptpl_target_lun == target_lun)) { ++ /* ++ * Obtain the ->pr_reg_deve pointer + reference, that ++ * is released by __core_scsi3_add_registration() below. ++ */ ++ rcu_read_lock(); ++ pr_reg->pr_reg_deve = target_nacl_find_deve(nacl, mapped_lun); ++ if (!pr_reg->pr_reg_deve) { ++ pr_err("Unable to locate PR APTPL %s mapped_lun:" ++ " %llu\n", nacl->initiatorname, mapped_lun); ++ rcu_read_unlock(); ++ continue; ++ } ++ kref_get(&pr_reg->pr_reg_deve->pr_kref); ++ rcu_read_unlock(); + + pr_reg->pr_reg_nacl = nacl; + pr_reg->tg_pt_sep_rtpi = lun->lun_rtpi; +- + list_del(&pr_reg->pr_reg_aptpl_list); + spin_unlock(&pr_tmpl->aptpl_reg_lock); + /* + * At this point all of the pointers in *pr_reg will + * be setup, so go ahead and add the registration. + */ +- + __core_scsi3_add_registration(dev, nacl, pr_reg, 0, 0); + /* + * If this registration is the reservation holder, +@@ -1044,18 +1078,11 @@ static void __core_scsi3_add_registration( + + __core_scsi3_dump_registration(tfo, dev, nacl, pr_reg, register_type); + spin_unlock(&pr_tmpl->registration_lock); +- +- rcu_read_lock(); +- deve = pr_reg->pr_reg_deve; +- if (deve) +- set_bit(DEF_PR_REG_ACTIVE, &deve->deve_flags); +- rcu_read_unlock(); +- + /* + * Skip extra processing for ALL_TG_PT=0 or REGISTER_AND_MOVE. + */ + if (!pr_reg->pr_reg_all_tg_pt || register_move) +- return; ++ goto out; + /* + * Walk pr_reg->pr_reg_atp_list and add registrations for ALL_TG_PT=1 + * allocated in __core_scsi3_alloc_registration() +@@ -1075,19 +1102,31 @@ static void __core_scsi3_add_registration( + __core_scsi3_dump_registration(tfo, dev, nacl_tmp, pr_reg_tmp, + register_type); + spin_unlock(&pr_tmpl->registration_lock); +- ++ /* ++ * Drop configfs group dependency reference and deve->pr_kref ++ * obtained from __core_scsi3_alloc_registration() code. 
++ */ + rcu_read_lock(); + deve = pr_reg_tmp->pr_reg_deve; +- if (deve) ++ if (deve) { + set_bit(DEF_PR_REG_ACTIVE, &deve->deve_flags); ++ core_scsi3_lunacl_undepend_item(deve); ++ pr_reg_tmp->pr_reg_deve = NULL; ++ } + rcu_read_unlock(); +- +- /* +- * Drop configfs group dependency reference from +- * __core_scsi3_alloc_registration() +- */ +- core_scsi3_lunacl_undepend_item(pr_reg_tmp->pr_reg_deve); + } ++out: ++ /* ++ * Drop deve->pr_kref obtained in __core_scsi3_do_alloc_registration() ++ */ ++ rcu_read_lock(); ++ deve = pr_reg->pr_reg_deve; ++ if (deve) { ++ set_bit(DEF_PR_REG_ACTIVE, &deve->deve_flags); ++ kref_put(&deve->pr_kref, target_pr_kref_release); ++ pr_reg->pr_reg_deve = NULL; ++ } ++ rcu_read_unlock(); + } + + static int core_scsi3_alloc_registration( +@@ -1785,9 +1824,11 @@ core_scsi3_decode_spec_i_port( + dest_node_acl->initiatorname, i_buf, (dest_se_deve) ? + dest_se_deve->mapped_lun : 0); + +- if (!dest_se_deve) ++ if (!dest_se_deve) { ++ kref_put(&local_pr_reg->pr_reg_deve->pr_kref, ++ target_pr_kref_release); + continue; +- ++ } + core_scsi3_lunacl_undepend_item(dest_se_deve); + core_scsi3_nodeacl_undepend_item(dest_node_acl); + core_scsi3_tpg_undepend_item(dest_tpg); +@@ -1823,9 +1864,11 @@ out: + + kmem_cache_free(t10_pr_reg_cache, dest_pr_reg); + +- if (!dest_se_deve) ++ if (!dest_se_deve) { ++ kref_put(&local_pr_reg->pr_reg_deve->pr_kref, ++ target_pr_kref_release); + continue; +- ++ } + core_scsi3_lunacl_undepend_item(dest_se_deve); + core_scsi3_nodeacl_undepend_item(dest_node_acl); + core_scsi3_tpg_undepend_item(dest_tpg); +diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c +index 4515f52..47fe94e 100644 +--- a/drivers/target/target_core_xcopy.c ++++ b/drivers/target/target_core_xcopy.c +@@ -450,6 +450,8 @@ int target_xcopy_setup_pt(void) + memset(&xcopy_pt_sess, 0, sizeof(struct se_session)); + INIT_LIST_HEAD(&xcopy_pt_sess.sess_list); + INIT_LIST_HEAD(&xcopy_pt_sess.sess_acl_list); ++ INIT_LIST_HEAD(&xcopy_pt_sess.sess_cmd_list); ++ spin_lock_init(&xcopy_pt_sess.sess_cmd_lock); + + xcopy_pt_nacl.se_tpg = &xcopy_pt_tpg; + xcopy_pt_nacl.nacl_sess = &xcopy_pt_sess; +@@ -644,7 +646,7 @@ static int target_xcopy_read_source( + pr_debug("XCOPY: Built READ_16: LBA: %llu Sectors: %u Length: %u\n", + (unsigned long long)src_lba, src_sectors, length); + +- transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length, ++ transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, &xcopy_pt_sess, length, + DMA_FROM_DEVICE, 0, &xpt_cmd->sense_buffer[0]); + xop->src_pt_cmd = xpt_cmd; + +@@ -704,7 +706,7 @@ static int target_xcopy_write_destination( + pr_debug("XCOPY: Built WRITE_16: LBA: %llu Sectors: %u Length: %u\n", + (unsigned long long)dst_lba, dst_sectors, length); + +- transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length, ++ transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, &xcopy_pt_sess, length, + DMA_TO_DEVICE, 0, &xpt_cmd->sense_buffer[0]); + xop->dst_pt_cmd = xpt_cmd; + +diff --git a/drivers/thermal/cpu_cooling.c b/drivers/thermal/cpu_cooling.c +index 620dcd4..42c6f71 100644 +--- a/drivers/thermal/cpu_cooling.c ++++ b/drivers/thermal/cpu_cooling.c +@@ -262,7 +262,9 @@ static int cpufreq_thermal_notifier(struct notifier_block *nb, + * efficiently. Power is stored in mW, frequency in KHz. The + * resulting table is in ascending order. + * +- * Return: 0 on success, -E* on error. 
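The build_dyn_power_table() hunks continuing below move the allocation out of the RCU read-side section and make the fill loop defensive: the number of OPPs is counted first, the table is sized from that count, and if the OPP list grows or shrinks between the count and the walk the function frees the table and reports -EAGAIN (or a lookup error) instead of writing past the allocation. A compact standalone sketch of that count-allocate-fill-recheck flow (fake_opp_count/fake_opp_next are invented stand-ins, not the dev_pm_opp_* API):

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

static int fake_opp_count(void) { return 3; }

/* pretend a fourth entry appeared after the count was taken */
static int fake_opp_next(int i, unsigned int *freq)
{
	static const unsigned int f[] = { 400, 800, 1200, 1600 };

	if (i >= 4)
		return 0;
	*freq = f[i];
	return 1;
}

static int build_table(unsigned int **table_out, int *n_out)
{
	int num = fake_opp_count();
	unsigned int *table = calloc(num, sizeof(*table));
	unsigned int freq;
	int i = 0;

	if (!table)
		return -ENOMEM;
	while (fake_opp_next(i, &freq)) {
		if (i >= num) {		/* list grew under us: drop the table, retry later */
			free(table);
			return -EAGAIN;
		}
		table[i++] = freq;
	}
	if (i != num) {			/* list shrank under us */
		free(table);
		return -EAGAIN;
	}
	*table_out = table;
	*n_out = i;
	return 0;
}

int main(void)
{
	unsigned int *table;
	int n;

	printf("build_table: %d\n", build_table(&table, &n));	/* -EAGAIN here */
	return 0;
}

The bound in the loop is what the added "if (i >= num_opps)" test enforces in the driver, and freeing on the error paths mirrors the free_power_table label the patch introduces.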
++ * Return: 0 on success, -EINVAL if there are no OPPs for any CPUs, ++ * -ENOMEM if we run out of memory or -EAGAIN if an OPP was ++ * added/enabled while the function was executing. + */ + static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device, + u32 capacitance) +@@ -273,8 +275,6 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device, + int num_opps = 0, cpu, i, ret = 0; + unsigned long freq; + +- rcu_read_lock(); +- + for_each_cpu(cpu, &cpufreq_device->allowed_cpus) { + dev = get_cpu_device(cpu); + if (!dev) { +@@ -284,24 +284,20 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device, + } + + num_opps = dev_pm_opp_get_opp_count(dev); +- if (num_opps > 0) { ++ if (num_opps > 0) + break; +- } else if (num_opps < 0) { +- ret = num_opps; +- goto unlock; +- } ++ else if (num_opps < 0) ++ return num_opps; + } + +- if (num_opps == 0) { +- ret = -EINVAL; +- goto unlock; +- } ++ if (num_opps == 0) ++ return -EINVAL; + + power_table = kcalloc(num_opps, sizeof(*power_table), GFP_KERNEL); +- if (!power_table) { +- ret = -ENOMEM; +- goto unlock; +- } ++ if (!power_table) ++ return -ENOMEM; ++ ++ rcu_read_lock(); + + for (freq = 0, i = 0; + opp = dev_pm_opp_find_freq_ceil(dev, &freq), !IS_ERR(opp); +@@ -309,6 +305,12 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device, + u32 freq_mhz, voltage_mv; + u64 power; + ++ if (i >= num_opps) { ++ rcu_read_unlock(); ++ ret = -EAGAIN; ++ goto free_power_table; ++ } ++ + freq_mhz = freq / 1000000; + voltage_mv = dev_pm_opp_get_voltage(opp) / 1000; + +@@ -326,17 +328,22 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device, + power_table[i].power = power; + } + +- if (i == 0) { ++ rcu_read_unlock(); ++ ++ if (i != num_opps) { + ret = PTR_ERR(opp); +- goto unlock; ++ goto free_power_table; + } + + cpufreq_device->cpu_dev = dev; + cpufreq_device->dyn_power_table = power_table; + cpufreq_device->dyn_power_table_entries = i; + +-unlock: +- rcu_read_unlock(); ++ return 0; ++ ++free_power_table: ++ kfree(power_table); ++ + return ret; + } + +@@ -847,7 +854,7 @@ __cpufreq_cooling_register(struct device_node *np, + ret = get_idr(&cpufreq_idr, &cpufreq_dev->id); + if (ret) { + cool_dev = ERR_PTR(ret); +- goto free_table; ++ goto free_power_table; + } + + snprintf(dev_name, sizeof(dev_name), "thermal-cpufreq-%d", +@@ -889,6 +896,8 @@ __cpufreq_cooling_register(struct device_node *np, + + remove_idr: + release_idr(&cpufreq_idr, cpufreq_dev->id); ++free_power_table: ++ kfree(cpufreq_dev->dyn_power_table); + free_table: + kfree(cpufreq_dev->freq_table); + free_time_in_idle_timestamp: +@@ -1039,6 +1048,7 @@ void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev) + + thermal_cooling_device_unregister(cpufreq_dev->cool_dev); + release_idr(&cpufreq_idr, cpufreq_dev->id); ++ kfree(cpufreq_dev->dyn_power_table); + kfree(cpufreq_dev->time_in_idle_timestamp); + kfree(cpufreq_dev->time_in_idle); + kfree(cpufreq_dev->freq_table); +diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c +index ee8bfac..afc1879 100644 +--- a/drivers/tty/n_tty.c ++++ b/drivers/tty/n_tty.c +@@ -343,8 +343,7 @@ static void n_tty_packet_mode_flush(struct tty_struct *tty) + spin_lock_irqsave(&tty->ctrl_lock, flags); + tty->ctrl_status |= TIOCPKT_FLUSHREAD; + spin_unlock_irqrestore(&tty->ctrl_lock, flags); +- if (waitqueue_active(&tty->link->read_wait)) +- wake_up_interruptible(&tty->link->read_wait); ++ wake_up_interruptible(&tty->link->read_wait); + } + } + +@@ 
-1382,8 +1381,7 @@ handle_newline: + put_tty_queue(c, ldata); + smp_store_release(&ldata->canon_head, ldata->read_head); + kill_fasync(&tty->fasync, SIGIO, POLL_IN); +- if (waitqueue_active(&tty->read_wait)) +- wake_up_interruptible_poll(&tty->read_wait, POLLIN); ++ wake_up_interruptible_poll(&tty->read_wait, POLLIN); + return 0; + } + } +@@ -1667,8 +1665,7 @@ static void __receive_buf(struct tty_struct *tty, const unsigned char *cp, + + if ((read_cnt(ldata) >= ldata->minimum_to_wake) || L_EXTPROC(tty)) { + kill_fasync(&tty->fasync, SIGIO, POLL_IN); +- if (waitqueue_active(&tty->read_wait)) +- wake_up_interruptible_poll(&tty->read_wait, POLLIN); ++ wake_up_interruptible_poll(&tty->read_wait, POLLIN); + } + } + +@@ -1887,10 +1884,8 @@ static void n_tty_set_termios(struct tty_struct *tty, struct ktermios *old) + } + + /* The termios change make the tty ready for I/O */ +- if (waitqueue_active(&tty->write_wait)) +- wake_up_interruptible(&tty->write_wait); +- if (waitqueue_active(&tty->read_wait)) +- wake_up_interruptible(&tty->read_wait); ++ wake_up_interruptible(&tty->write_wait); ++ wake_up_interruptible(&tty->read_wait); + } + + /** +diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c +index 37fff12..c35d96e 100644 +--- a/drivers/tty/serial/8250/8250_core.c ++++ b/drivers/tty/serial/8250/8250_core.c +@@ -326,6 +326,14 @@ configured less than Maximum supported fifo bytes */ + UART_FCR7_64BYTE, + .flags = UART_CAP_FIFO, + }, ++ [PORT_RT2880] = { ++ .name = "Palmchip BK-3103", ++ .fifo_size = 16, ++ .tx_loadsz = 16, ++ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10, ++ .rxtrig_bytes = {1, 4, 8, 14}, ++ .flags = UART_CAP_FIFO, ++ }, + }; + + /* Uart divisor latch read */ +diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c +index 2a8f528..40326b3 100644 +--- a/drivers/tty/serial/atmel_serial.c ++++ b/drivers/tty/serial/atmel_serial.c +@@ -2641,7 +2641,7 @@ static int atmel_serial_probe(struct platform_device *pdev) + ret = atmel_init_gpios(port, &pdev->dev); + if (ret < 0) { + dev_err(&pdev->dev, "Failed to initialize GPIOs."); +- goto err; ++ goto err_clear_bit; + } + + ret = atmel_init_port(port, pdev); +diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c +index 57fc6ee..774df35 100644 +--- a/drivers/tty/tty_io.c ++++ b/drivers/tty/tty_io.c +@@ -2136,8 +2136,24 @@ retry_open: + if (!noctty && + current->signal->leader && + !current->signal->tty && +- tty->session == NULL) +- __proc_set_tty(tty); ++ tty->session == NULL) { ++ /* ++ * Don't let a process that only has write access to the tty ++ * obtain the privileges associated with having a tty as ++ * controlling terminal (being able to reopen it with full ++ * access through /dev/tty, being able to perform pushback). ++ * Many distributions set the group of all ttys to "tty" and ++ * grant write-only access to all terminals for setgid tty ++ * binaries, which should not imply full privileges on all ttys. ++ * ++ * This could theoretically break old code that performs open() ++ * on a write-only file descriptor. In that case, it might be ++ * necessary to also permit this if ++ * inode_permission(inode, MAY_READ) == 0. 
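
The rationale above is easy to exercise from userspace. A minimal demo of the explicit-acquisition path — point it at a terminal you can write to but not read, such as a group-writable tty; the default path below is only a placeholder. With this patch applied, TIOCSCTTY on the write-only descriptor should fail with EPERM unless the caller has CAP_SYS_ADMIN:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            const char *path = argc > 1 ? argv[1] : "/dev/tty1"; /* placeholder */
            int fd;

            if (fork() > 0) {               /* parent: just reap the child */
                    wait(NULL);
                    return 0;
            }
            setsid();                       /* child: new session, no ctty yet */

            fd = open(path, O_WRONLY | O_NOCTTY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (ioctl(fd, TIOCSCTTY, 0) < 0) /* expect EPERM on write-only fd */
                    printf("TIOCSCTTY: %s\n", strerror(errno));
            else
                    printf("unexpectedly acquired a controlling tty\n");
            return 0;
    }
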
++ */ ++ if (filp->f_mode & FMODE_READ) ++ __proc_set_tty(tty); ++ } + spin_unlock_irq(¤t->sighand->siglock); + read_unlock(&tasklist_lock); + tty_unlock(tty); +@@ -2426,7 +2442,7 @@ static int fionbio(struct file *file, int __user *p) + * Takes ->siglock() when updating signal->tty + */ + +-static int tiocsctty(struct tty_struct *tty, int arg) ++static int tiocsctty(struct tty_struct *tty, struct file *file, int arg) + { + int ret = 0; + +@@ -2460,6 +2476,13 @@ static int tiocsctty(struct tty_struct *tty, int arg) + goto unlock; + } + } ++ ++ /* See the comment in tty_open(). */ ++ if ((file->f_mode & FMODE_READ) == 0 && !capable(CAP_SYS_ADMIN)) { ++ ret = -EPERM; ++ goto unlock; ++ } ++ + proc_set_tty(tty); + unlock: + read_unlock(&tasklist_lock); +@@ -2852,7 +2875,7 @@ long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + no_tty(); + return 0; + case TIOCSCTTY: +- return tiocsctty(tty, arg); ++ return tiocsctty(tty, file, arg); + case TIOCGPGRP: + return tiocgpgrp(tty, real_tty, p); + case TIOCSPGRP: +diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c +index 389f0e0..fa77432 100644 +--- a/drivers/usb/chipidea/ci_hdrc_imx.c ++++ b/drivers/usb/chipidea/ci_hdrc_imx.c +@@ -56,7 +56,7 @@ static const struct of_device_id ci_hdrc_imx_dt_ids[] = { + { .compatible = "fsl,imx27-usb", .data = &imx27_usb_data}, + { .compatible = "fsl,imx6q-usb", .data = &imx6q_usb_data}, + { .compatible = "fsl,imx6sl-usb", .data = &imx6sl_usb_data}, +- { .compatible = "fsl,imx6sx-usb", .data = &imx6sl_usb_data}, ++ { .compatible = "fsl,imx6sx-usb", .data = &imx6sx_usb_data}, + { /* sentinel */ } + }; + MODULE_DEVICE_TABLE(of, ci_hdrc_imx_dt_ids); +diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c +index 764f668..6e53c24 100644 +--- a/drivers/usb/chipidea/udc.c ++++ b/drivers/usb/chipidea/udc.c +@@ -656,6 +656,44 @@ __acquires(hwep->lock) + return 0; + } + ++static int _ep_set_halt(struct usb_ep *ep, int value, bool check_transfer) ++{ ++ struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep); ++ int direction, retval = 0; ++ unsigned long flags; ++ ++ if (ep == NULL || hwep->ep.desc == NULL) ++ return -EINVAL; ++ ++ if (usb_endpoint_xfer_isoc(hwep->ep.desc)) ++ return -EOPNOTSUPP; ++ ++ spin_lock_irqsave(hwep->lock, flags); ++ ++ if (value && hwep->dir == TX && check_transfer && ++ !list_empty(&hwep->qh.queue) && ++ !usb_endpoint_xfer_control(hwep->ep.desc)) { ++ spin_unlock_irqrestore(hwep->lock, flags); ++ return -EAGAIN; ++ } ++ ++ direction = hwep->dir; ++ do { ++ retval |= hw_ep_set_halt(hwep->ci, hwep->num, hwep->dir, value); ++ ++ if (!value) ++ hwep->wedge = 0; ++ ++ if (hwep->type == USB_ENDPOINT_XFER_CONTROL) ++ hwep->dir = (hwep->dir == TX) ? 
RX : TX; ++ ++ } while (hwep->dir != direction); ++ ++ spin_unlock_irqrestore(hwep->lock, flags); ++ return retval; ++} ++ ++ + /** + * _gadget_stop_activity: stops all USB activity, flushes & disables all endpts + * @gadget: gadget +@@ -1051,7 +1089,7 @@ __acquires(ci->lock) + num += ci->hw_ep_max / 2; + + spin_unlock(&ci->lock); +- err = usb_ep_set_halt(&ci->ci_hw_ep[num].ep); ++ err = _ep_set_halt(&ci->ci_hw_ep[num].ep, 1, false); + spin_lock(&ci->lock); + if (!err) + isr_setup_status_phase(ci); +@@ -1110,8 +1148,8 @@ delegate: + + if (err < 0) { + spin_unlock(&ci->lock); +- if (usb_ep_set_halt(&hwep->ep)) +- dev_err(ci->dev, "error: ep_set_halt\n"); ++ if (_ep_set_halt(&hwep->ep, 1, false)) ++ dev_err(ci->dev, "error: _ep_set_halt\n"); + spin_lock(&ci->lock); + } + } +@@ -1142,9 +1180,9 @@ __acquires(ci->lock) + err = isr_setup_status_phase(ci); + if (err < 0) { + spin_unlock(&ci->lock); +- if (usb_ep_set_halt(&hwep->ep)) ++ if (_ep_set_halt(&hwep->ep, 1, false)) + dev_err(ci->dev, +- "error: ep_set_halt\n"); ++ "error: _ep_set_halt\n"); + spin_lock(&ci->lock); + } + } +@@ -1390,41 +1428,7 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req) + */ + static int ep_set_halt(struct usb_ep *ep, int value) + { +- struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep); +- int direction, retval = 0; +- unsigned long flags; +- +- if (ep == NULL || hwep->ep.desc == NULL) +- return -EINVAL; +- +- if (usb_endpoint_xfer_isoc(hwep->ep.desc)) +- return -EOPNOTSUPP; +- +- spin_lock_irqsave(hwep->lock, flags); +- +-#ifndef STALL_IN +- /* g_file_storage MS compliant but g_zero fails chapter 9 compliance */ +- if (value && hwep->type == USB_ENDPOINT_XFER_BULK && hwep->dir == TX && +- !list_empty(&hwep->qh.queue)) { +- spin_unlock_irqrestore(hwep->lock, flags); +- return -EAGAIN; +- } +-#endif +- +- direction = hwep->dir; +- do { +- retval |= hw_ep_set_halt(hwep->ci, hwep->num, hwep->dir, value); +- +- if (!value) +- hwep->wedge = 0; +- +- if (hwep->type == USB_ENDPOINT_XFER_CONTROL) +- hwep->dir = (hwep->dir == TX) ? 
RX : TX; +- +- } while (hwep->dir != direction); +- +- spin_unlock_irqrestore(hwep->lock, flags); +- return retval; ++ return _ep_set_halt(ep, value, true); + } + + /** +diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c +index b2a540b..b9ddf0c 100644 +--- a/drivers/usb/core/config.c ++++ b/drivers/usb/core/config.c +@@ -112,7 +112,7 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno, + cfgno, inum, asnum, ep->desc.bEndpointAddress); + ep->ss_ep_comp.bmAttributes = 16; + } else if (usb_endpoint_xfer_isoc(&ep->desc) && +- desc->bmAttributes > 2) { ++ USB_SS_MULT(desc->bmAttributes) > 3) { + dev_warn(ddev, "Isoc endpoint has Mult of %d in " + "config %d interface %d altsetting %d ep %d: " + "setting to 3\n", desc->bmAttributes + 1, +@@ -121,7 +121,8 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno, + } + + if (usb_endpoint_xfer_isoc(&ep->desc)) +- max_tx = (desc->bMaxBurst + 1) * (desc->bmAttributes + 1) * ++ max_tx = (desc->bMaxBurst + 1) * ++ (USB_SS_MULT(desc->bmAttributes)) * + usb_endpoint_maxp(&ep->desc); + else if (usb_endpoint_xfer_int(&ep->desc)) + max_tx = usb_endpoint_maxp(&ep->desc) * +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c +index d85abfe..f5a3819 100644 +--- a/drivers/usb/core/quirks.c ++++ b/drivers/usb/core/quirks.c +@@ -54,6 +54,13 @@ static const struct usb_device_id usb_quirk_list[] = { + { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT }, + { USB_DEVICE(0x046d, 0x0843), .driver_info = USB_QUIRK_DELAY_INIT }, + ++ /* Logitech ConferenceCam CC3000e */ ++ { USB_DEVICE(0x046d, 0x0847), .driver_info = USB_QUIRK_DELAY_INIT }, ++ { USB_DEVICE(0x046d, 0x0848), .driver_info = USB_QUIRK_DELAY_INIT }, ++ ++ /* Logitech PTZ Pro Camera */ ++ { USB_DEVICE(0x046d, 0x0853), .driver_info = USB_QUIRK_DELAY_INIT }, ++ + /* Logitech Quickcam Fusion */ + { USB_DEVICE(0x046d, 0x08c1), .driver_info = USB_QUIRK_RESET_RESUME }, + +@@ -78,6 +85,12 @@ static const struct usb_device_id usb_quirk_list[] = { + /* Philips PSC805 audio device */ + { USB_DEVICE(0x0471, 0x0155), .driver_info = USB_QUIRK_RESET_RESUME }, + ++ /* Plantronic Audio 655 DSP */ ++ { USB_DEVICE(0x047f, 0xc008), .driver_info = USB_QUIRK_RESET_RESUME }, ++ ++ /* Plantronic Audio 648 USB */ ++ { USB_DEVICE(0x047f, 0xc013), .driver_info = USB_QUIRK_RESET_RESUME }, ++ + /* Artisman Watchdog Dongle */ + { USB_DEVICE(0x04b4, 0x0526), .driver_info = + USB_QUIRK_CONFIG_INTF_STRINGS }, +diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c +index 9a8c936..41f841f 100644 +--- a/drivers/usb/host/xhci-mem.c ++++ b/drivers/usb/host/xhci-mem.c +@@ -1498,10 +1498,10 @@ int xhci_endpoint_init(struct xhci_hcd *xhci, + * use Event Data TRBs, and we don't chain in a link TRB on short + * transfers, we're basically dividing by 1. + * +- * xHCI 1.0 specification indicates that the Average TRB Length should +- * be set to 8 for control endpoints. ++ * xHCI 1.0 and 1.1 specification indicates that the Average TRB Length ++ * should be set to 8 for control endpoints. 
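
One detail makes the relaxed comparison below safe: hci_version comes from the controller's HCIVERSION capability register, which is BCD-encoded — 0x0096 for 0.96, 0x0100 for 1.0, 0x0110 for 1.1 — so a plain numeric test selects every 1.x controller. A one-line restatement:

    /* HCIVERSION is BCD: 0x0096 = 0.96, 0x0100 = 1.0, 0x0110 = 1.1 */
    static inline int xhci_is_version_1_or_later(unsigned int hci_version)
    {
            return hci_version >= 0x100;
    }
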
+ */ +- if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version == 0x100) ++ if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version >= 0x100) + ep_ctx->tx_info |= cpu_to_le32(AVG_TRB_LENGTH_FOR_EP(8)); + else + ep_ctx->tx_info |= +@@ -1792,8 +1792,7 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci) + int size; + int i, j, num_ports; + +- if (timer_pending(&xhci->cmd_timer)) +- del_timer_sync(&xhci->cmd_timer); ++ del_timer_sync(&xhci->cmd_timer); + + /* Free the Event Ring Segment Table and the actual Event Ring */ + size = sizeof(struct xhci_erst_entry)*(xhci->erst.num_entries); +@@ -2321,6 +2320,10 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags) + + INIT_LIST_HEAD(&xhci->cmd_list); + ++ /* init command timeout timer */ ++ setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout, ++ (unsigned long)xhci); ++ + page_size = readl(&xhci->op_regs->page_size); + xhci_dbg_trace(xhci, trace_xhci_dbg_init, + "Supported page size register = 0x%x", page_size); +@@ -2505,10 +2508,6 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags) + "Wrote ERST address to ir_set 0."); + xhci_print_ir_set(xhci, 0); + +- /* init command timeout timer */ +- setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout, +- (unsigned long)xhci); +- + /* + * XXX: Might need to set the Interrupter Moderation Register to + * something other than the default (~1ms minimum between interrupts). +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c +index 5590eac..c79d336 100644 +--- a/drivers/usb/host/xhci-pci.c ++++ b/drivers/usb/host/xhci-pci.c +@@ -180,51 +180,6 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) + "QUIRK: Resetting on resume"); + } + +-/* +- * In some Intel xHCI controllers, in order to get D3 working, +- * through a vendor specific SSIC CONFIG register at offset 0x883c, +- * SSIC PORT need to be marked as "unused" before putting xHCI +- * into D3. After D3 exit, the SSIC port need to be marked as "used". +- * Without this change, xHCI might not enter D3 state. 
+- * Make sure PME works on some Intel xHCI controllers by writing 1 to clear +- * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4 +- */ +-static void xhci_pme_quirk(struct usb_hcd *hcd, bool suspend) +-{ +- struct xhci_hcd *xhci = hcd_to_xhci(hcd); +- struct pci_dev *pdev = to_pci_dev(hcd->self.controller); +- u32 val; +- void __iomem *reg; +- +- if (pdev->vendor == PCI_VENDOR_ID_INTEL && +- pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) { +- +- reg = (void __iomem *) xhci->cap_regs + PORT2_SSIC_CONFIG_REG2; +- +- /* Notify SSIC that SSIC profile programming is not done */ +- val = readl(reg) & ~PROG_DONE; +- writel(val, reg); +- +- /* Mark SSIC port as unused(suspend) or used(resume) */ +- val = readl(reg); +- if (suspend) +- val |= SSIC_PORT_UNUSED; +- else +- val &= ~SSIC_PORT_UNUSED; +- writel(val, reg); +- +- /* Notify SSIC that SSIC profile programming is done */ +- val = readl(reg) | PROG_DONE; +- writel(val, reg); +- readl(reg); +- } +- +- reg = (void __iomem *) xhci->cap_regs + 0x80a4; +- val = readl(reg); +- writel(val | BIT(28), reg); +- readl(reg); +-} +- + #ifdef CONFIG_ACPI + static void xhci_pme_acpi_rtd3_enable(struct pci_dev *dev) + { +@@ -345,6 +300,51 @@ static void xhci_pci_remove(struct pci_dev *dev) + } + + #ifdef CONFIG_PM ++/* ++ * In some Intel xHCI controllers, in order to get D3 working, ++ * through a vendor specific SSIC CONFIG register at offset 0x883c, ++ * SSIC PORT need to be marked as "unused" before putting xHCI ++ * into D3. After D3 exit, the SSIC port need to be marked as "used". ++ * Without this change, xHCI might not enter D3 state. ++ * Make sure PME works on some Intel xHCI controllers by writing 1 to clear ++ * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4 ++ */ ++static void xhci_pme_quirk(struct usb_hcd *hcd, bool suspend) ++{ ++ struct xhci_hcd *xhci = hcd_to_xhci(hcd); ++ struct pci_dev *pdev = to_pci_dev(hcd->self.controller); ++ u32 val; ++ void __iomem *reg; ++ ++ if (pdev->vendor == PCI_VENDOR_ID_INTEL && ++ pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) { ++ ++ reg = (void __iomem *) xhci->cap_regs + PORT2_SSIC_CONFIG_REG2; ++ ++ /* Notify SSIC that SSIC profile programming is not done */ ++ val = readl(reg) & ~PROG_DONE; ++ writel(val, reg); ++ ++ /* Mark SSIC port as unused(suspend) or used(resume) */ ++ val = readl(reg); ++ if (suspend) ++ val |= SSIC_PORT_UNUSED; ++ else ++ val &= ~SSIC_PORT_UNUSED; ++ writel(val, reg); ++ ++ /* Notify SSIC that SSIC profile programming is done */ ++ val = readl(reg) | PROG_DONE; ++ writel(val, reg); ++ readl(reg); ++ } ++ ++ reg = (void __iomem *) xhci->cap_regs + 0x80a4; ++ val = readl(reg); ++ writel(val | BIT(28), reg); ++ readl(reg); ++} ++ + static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup) + { + struct xhci_hcd *xhci = hcd_to_xhci(hcd); +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c +index 32f4d56..8aadf3d 100644 +--- a/drivers/usb/host/xhci-ring.c ++++ b/drivers/usb/host/xhci-ring.c +@@ -302,6 +302,15 @@ static int xhci_abort_cmd_ring(struct xhci_hcd *xhci) + ret = xhci_handshake(&xhci->op_regs->cmd_ring, + CMD_RING_RUNNING, 0, 5 * 1000 * 1000); + if (ret < 0) { ++ /* we are about to kill xhci, give it one more chance */ ++ xhci_write_64(xhci, temp_64 | CMD_RING_ABORT, ++ &xhci->op_regs->cmd_ring); ++ udelay(1000); ++ ret = xhci_handshake(&xhci->op_regs->cmd_ring, ++ CMD_RING_RUNNING, 0, 3 * 1000 * 1000); ++ if (ret == 0) ++ return 0; ++ + xhci_err(xhci, "Stopped the command 
ring failed, " + "maybe the host is dead\n"); + xhci->xhc_state |= XHCI_STATE_DYING; +@@ -3041,9 +3050,11 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + struct xhci_td *td; + struct scatterlist *sg; + int num_sgs; +- int trb_buff_len, this_sg_len, running_total; ++ int trb_buff_len, this_sg_len, running_total, ret; + unsigned int total_packet_count; ++ bool zero_length_needed; + bool first_trb; ++ int last_trb_num; + u64 addr; + bool more_trbs_coming; + +@@ -3059,13 +3070,27 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + total_packet_count = DIV_ROUND_UP(urb->transfer_buffer_length, + usb_endpoint_maxp(&urb->ep->desc)); + +- trb_buff_len = prepare_transfer(xhci, xhci->devs[slot_id], ++ ret = prepare_transfer(xhci, xhci->devs[slot_id], + ep_index, urb->stream_id, + num_trbs, urb, 0, mem_flags); +- if (trb_buff_len < 0) +- return trb_buff_len; ++ if (ret < 0) ++ return ret; + + urb_priv = urb->hcpriv; ++ ++ /* Deal with URB_ZERO_PACKET - need one more td/trb */ ++ zero_length_needed = urb->transfer_flags & URB_ZERO_PACKET && ++ urb_priv->length == 2; ++ if (zero_length_needed) { ++ num_trbs++; ++ xhci_dbg(xhci, "Creating zero length td.\n"); ++ ret = prepare_transfer(xhci, xhci->devs[slot_id], ++ ep_index, urb->stream_id, ++ 1, urb, 1, mem_flags); ++ if (ret < 0) ++ return ret; ++ } ++ + td = urb_priv->td[0]; + + /* +@@ -3095,6 +3120,7 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + trb_buff_len = urb->transfer_buffer_length; + + first_trb = true; ++ last_trb_num = zero_length_needed ? 2 : 1; + /* Queue the first TRB, even if it's zero-length */ + do { + u32 field = 0; +@@ -3112,12 +3138,15 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + /* Chain all the TRBs together; clear the chain bit in the last + * TRB to indicate it's the last TRB in the chain. 
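
Both bulk submission paths gain the same extra zero-length TD, so the rule is worth stating once on its own. A standalone restatement — the URB_ZERO_PACKET value matches include/linux/usb.h, the rest is illustrative:

    #include <stdbool.h>

    #define URB_ZERO_PACKET 0x0040  /* as in include/linux/usb.h */

    /* A bulk OUT transfer whose length is a non-zero multiple of the
     * endpoint's max packet size needs a trailing zero-length packet so
     * the device can tell where the transfer ends. */
    bool needs_zero_length_td(unsigned int transfer_flags,
                              unsigned int length, unsigned int maxp)
    {
            return (transfer_flags & URB_ZERO_PACKET) &&
                   length > 0 && (length % maxp) == 0;
    }

    /* needs_zero_length_td(URB_ZERO_PACKET, 1024, 512) -> true  (ZLP needed)
     * needs_zero_length_td(URB_ZERO_PACKET, 1000, 512) -> false (the final
     * short packet already terminates the transfer) */
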
+ */ +- if (num_trbs > 1) { ++ if (num_trbs > last_trb_num) { + field |= TRB_CHAIN; +- } else { +- /* FIXME - add check for ZERO_PACKET flag before this */ ++ } else if (num_trbs == last_trb_num) { + td->last_trb = ep_ring->enqueue; + field |= TRB_IOC; ++ } else if (zero_length_needed && num_trbs == 1) { ++ trb_buff_len = 0; ++ urb_priv->td[1]->last_trb = ep_ring->enqueue; ++ field |= TRB_IOC; + } + + /* Only set interrupt on short packet for IN endpoints */ +@@ -3179,7 +3208,7 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + if (running_total + trb_buff_len > urb->transfer_buffer_length) + trb_buff_len = + urb->transfer_buffer_length - running_total; +- } while (running_total < urb->transfer_buffer_length); ++ } while (num_trbs > 0); + + check_trb_math(urb, num_trbs, running_total); + giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id, +@@ -3197,7 +3226,9 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + int num_trbs; + struct xhci_generic_trb *start_trb; + bool first_trb; ++ int last_trb_num; + bool more_trbs_coming; ++ bool zero_length_needed; + int start_cycle; + u32 field, length_field; + +@@ -3228,7 +3259,6 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + num_trbs++; + running_total += TRB_MAX_BUFF_SIZE; + } +- /* FIXME: this doesn't deal with URB_ZERO_PACKET - need one more */ + + ret = prepare_transfer(xhci, xhci->devs[slot_id], + ep_index, urb->stream_id, +@@ -3237,6 +3267,20 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + return ret; + + urb_priv = urb->hcpriv; ++ ++ /* Deal with URB_ZERO_PACKET - need one more td/trb */ ++ zero_length_needed = urb->transfer_flags & URB_ZERO_PACKET && ++ urb_priv->length == 2; ++ if (zero_length_needed) { ++ num_trbs++; ++ xhci_dbg(xhci, "Creating zero length td.\n"); ++ ret = prepare_transfer(xhci, xhci->devs[slot_id], ++ ep_index, urb->stream_id, ++ 1, urb, 1, mem_flags); ++ if (ret < 0) ++ return ret; ++ } ++ + td = urb_priv->td[0]; + + /* +@@ -3258,7 +3302,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + trb_buff_len = urb->transfer_buffer_length; + + first_trb = true; +- ++ last_trb_num = zero_length_needed ? 2 : 1; + /* Queue the first TRB, even if it's zero-length */ + do { + u32 remainder = 0; +@@ -3275,12 +3319,15 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + /* Chain all the TRBs together; clear the chain bit in the last + * TRB to indicate it's the last TRB in the chain. 
+ */ +- if (num_trbs > 1) { ++ if (num_trbs > last_trb_num) { + field |= TRB_CHAIN; +- } else { +- /* FIXME - add check for ZERO_PACKET flag before this */ ++ } else if (num_trbs == last_trb_num) { + td->last_trb = ep_ring->enqueue; + field |= TRB_IOC; ++ } else if (zero_length_needed && num_trbs == 1) { ++ trb_buff_len = 0; ++ urb_priv->td[1]->last_trb = ep_ring->enqueue; ++ field |= TRB_IOC; + } + + /* Only set interrupt on short packet for IN endpoints */ +@@ -3318,7 +3365,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + trb_buff_len = urb->transfer_buffer_length - running_total; + if (trb_buff_len > TRB_MAX_BUFF_SIZE) + trb_buff_len = TRB_MAX_BUFF_SIZE; +- } while (running_total < urb->transfer_buffer_length); ++ } while (num_trbs > 0); + + check_trb_math(urb, num_trbs, running_total); + giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id, +@@ -3385,8 +3432,8 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags, + if (start_cycle == 0) + field |= 0x1; + +- /* xHCI 1.0 6.4.1.2.1: Transfer Type field */ +- if (xhci->hci_version == 0x100) { ++ /* xHCI 1.0/1.1 6.4.1.2.1: Transfer Type field */ ++ if (xhci->hci_version >= 0x100) { + if (urb->transfer_buffer_length > 0) { + if (setup->bRequestType & USB_DIR_IN) + field |= TRB_TX_TYPE(TRB_DATA_IN); +diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c +index 526ebc0..d7b9f484 100644 +--- a/drivers/usb/host/xhci.c ++++ b/drivers/usb/host/xhci.c +@@ -146,7 +146,8 @@ static int xhci_start(struct xhci_hcd *xhci) + "waited %u microseconds.\n", + XHCI_MAX_HALT_USEC); + if (!ret) +- xhci->xhc_state &= ~XHCI_STATE_HALTED; ++ xhci->xhc_state &= ~(XHCI_STATE_HALTED | XHCI_STATE_DYING); ++ + return ret; + } + +@@ -654,15 +655,6 @@ int xhci_run(struct usb_hcd *hcd) + } + EXPORT_SYMBOL_GPL(xhci_run); + +-static void xhci_only_stop_hcd(struct usb_hcd *hcd) +-{ +- struct xhci_hcd *xhci = hcd_to_xhci(hcd); +- +- spin_lock_irq(&xhci->lock); +- xhci_halt(xhci); +- spin_unlock_irq(&xhci->lock); +-} +- + /* + * Stop xHCI driver. + * +@@ -677,12 +669,14 @@ void xhci_stop(struct usb_hcd *hcd) + u32 temp; + struct xhci_hcd *xhci = hcd_to_xhci(hcd); + +- if (!usb_hcd_is_primary_hcd(hcd)) { +- xhci_only_stop_hcd(xhci->shared_hcd); ++ if (xhci->xhc_state & XHCI_STATE_HALTED) + return; +- } + ++ mutex_lock(&xhci->mutex); + spin_lock_irq(&xhci->lock); ++ xhci->xhc_state |= XHCI_STATE_HALTED; ++ xhci->cmd_ring_state = CMD_RING_STATE_STOPPED; ++ + /* Make sure the xHC is halted for a USB3 roothub + * (xhci_stop() could be called as part of failed init). 
+ */ +@@ -717,6 +711,7 @@ void xhci_stop(struct usb_hcd *hcd) + xhci_dbg_trace(xhci, trace_xhci_dbg_init, + "xhci_stop completed - status = %x", + readl(&xhci->op_regs->status)); ++ mutex_unlock(&xhci->mutex); + } + + /* +@@ -1340,6 +1335,11 @@ int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags) + + if (usb_endpoint_xfer_isoc(&urb->ep->desc)) + size = urb->number_of_packets; ++ else if (usb_endpoint_is_bulk_out(&urb->ep->desc) && ++ urb->transfer_buffer_length > 0 && ++ urb->transfer_flags & URB_ZERO_PACKET && ++ !(urb->transfer_buffer_length % usb_endpoint_maxp(&urb->ep->desc))) ++ size = 2; + else + size = 1; + +@@ -3788,6 +3788,9 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev, + + mutex_lock(&xhci->mutex); + ++ if (xhci->xhc_state) /* dying or halted */ ++ goto out; ++ + if (!udev->slot_id) { + xhci_dbg_trace(xhci, trace_xhci_dbg_address, + "Bad Slot ID %d", udev->slot_id); +diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c +index 3ad5d19..23c7948 100644 +--- a/drivers/usb/misc/chaoskey.c ++++ b/drivers/usb/misc/chaoskey.c +@@ -472,7 +472,7 @@ static int chaoskey_rng_read(struct hwrng *rng, void *data, + if (this_time > max) + this_time = max; + +- memcpy(data, dev->buf, this_time); ++ memcpy(data, dev->buf + dev->used, this_time); + + dev->used += this_time; + +diff --git a/drivers/usb/musb/musb_cppi41.c b/drivers/usb/musb/musb_cppi41.c +index 4d1b44c..d07cafb 100644 +--- a/drivers/usb/musb/musb_cppi41.c ++++ b/drivers/usb/musb/musb_cppi41.c +@@ -614,7 +614,7 @@ static int cppi41_dma_controller_start(struct cppi41_dma_controller *controller) + { + struct musb *musb = controller->musb; + struct device *dev = musb->controller; +- struct device_node *np = dev->of_node; ++ struct device_node *np = dev->parent->of_node; + struct cppi41_dma_channel *cppi41_channel; + int count; + int i; +@@ -664,7 +664,7 @@ static int cppi41_dma_controller_start(struct cppi41_dma_controller *controller) + musb_dma->status = MUSB_DMA_STATUS_FREE; + musb_dma->max_len = SZ_4M; + +- dc = dma_request_slave_channel(dev, str); ++ dc = dma_request_slave_channel(dev->parent, str); + if (!dc) { + dev_err(dev, "Failed to request %s.\n", str); + ret = -EPROBE_DEFER; +@@ -695,7 +695,7 @@ cppi41_dma_controller_create(struct musb *musb, void __iomem *base) + struct cppi41_dma_controller *controller; + int ret = 0; + +- if (!musb->controller->of_node) { ++ if (!musb->controller->parent->of_node) { + dev_err(musb->controller, "Need DT for the DMA engine.\n"); + return NULL; + } +diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c +index 1334a3d..67325ec 100644 +--- a/drivers/usb/musb/musb_dsps.c ++++ b/drivers/usb/musb/musb_dsps.c +@@ -225,8 +225,11 @@ static void dsps_musb_enable(struct musb *musb) + + dsps_writel(reg_base, wrp->epintr_set, epmask); + dsps_writel(reg_base, wrp->coreintr_set, coremask); +- /* start polling for ID change. 
*/ +- mod_timer(&glue->timer, jiffies + msecs_to_jiffies(wrp->poll_timeout)); ++ /* start polling for ID change in dual-role idle mode */ ++ if (musb->xceiv->otg->state == OTG_STATE_B_IDLE && ++ musb->port_mode == MUSB_PORT_MODE_DUAL_ROLE) ++ mod_timer(&glue->timer, jiffies + ++ msecs_to_jiffies(wrp->poll_timeout)); + dsps_musb_try_idle(musb, 0); + } + +diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c +index deee68e..0cd85f2 100644 +--- a/drivers/usb/phy/phy-generic.c ++++ b/drivers/usb/phy/phy-generic.c +@@ -230,7 +230,8 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop, + clk_rate = pdata->clk_rate; + needs_vcc = pdata->needs_vcc; + if (gpio_is_valid(pdata->gpio_reset)) { +- err = devm_gpio_request_one(dev, pdata->gpio_reset, 0, ++ err = devm_gpio_request_one(dev, pdata->gpio_reset, ++ GPIOF_ACTIVE_LOW, + dev_name(dev)); + if (!err) + nop->gpiod_reset = +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index 876423b..7c8eb4c 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -278,6 +278,10 @@ static void option_instat_callback(struct urb *urb); + #define ZTE_PRODUCT_MF622 0x0001 + #define ZTE_PRODUCT_MF628 0x0015 + #define ZTE_PRODUCT_MF626 0x0031 ++#define ZTE_PRODUCT_ZM8620_X 0x0396 ++#define ZTE_PRODUCT_ME3620_MBIM 0x0426 ++#define ZTE_PRODUCT_ME3620_X 0x1432 ++#define ZTE_PRODUCT_ME3620_L 0x1433 + #define ZTE_PRODUCT_AC2726 0xfff1 + #define ZTE_PRODUCT_MG880 0xfffd + #define ZTE_PRODUCT_CDMA_TECH 0xfffe +@@ -544,6 +548,18 @@ static const struct option_blacklist_info zte_mc2716_z_blacklist = { + .sendsetup = BIT(1) | BIT(2) | BIT(3), + }; + ++static const struct option_blacklist_info zte_me3620_mbim_blacklist = { ++ .reserved = BIT(2) | BIT(3) | BIT(4), ++}; ++ ++static const struct option_blacklist_info zte_me3620_xl_blacklist = { ++ .reserved = BIT(3) | BIT(4) | BIT(5), ++}; ++ ++static const struct option_blacklist_info zte_zm8620_x_blacklist = { ++ .reserved = BIT(3) | BIT(4) | BIT(5), ++}; ++ + static const struct option_blacklist_info huawei_cdc12_blacklist = { + .reserved = BIT(1) | BIT(2), + }; +@@ -1591,6 +1607,14 @@ static const struct usb_device_id option_ids[] = { + .driver_info = (kernel_ulong_t)&zte_ad3812_z_blacklist }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2716, 0xff, 0xff, 0xff), + .driver_info = (kernel_ulong_t)&zte_mc2716_z_blacklist }, ++ { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_L), ++ .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist }, ++ { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_MBIM), ++ .driver_info = (kernel_ulong_t)&zte_me3620_mbim_blacklist }, ++ { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_X), ++ .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist }, ++ { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ZM8620_X), ++ .driver_info = (kernel_ulong_t)&zte_zm8620_x_blacklist }, + { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) }, + { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) }, + { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) }, +diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c +index 6c3734d..d3ea90b 100644 +--- a/drivers/usb/serial/whiteheat.c ++++ b/drivers/usb/serial/whiteheat.c +@@ -80,6 +80,8 @@ static int whiteheat_firmware_download(struct usb_serial *serial, + static int whiteheat_firmware_attach(struct usb_serial *serial); + + /* function prototypes for the Connect Tech WhiteHEAT serial converter */ ++static int 
whiteheat_probe(struct usb_serial *serial, ++ const struct usb_device_id *id); + static int whiteheat_attach(struct usb_serial *serial); + static void whiteheat_release(struct usb_serial *serial); + static int whiteheat_port_probe(struct usb_serial_port *port); +@@ -116,6 +118,7 @@ static struct usb_serial_driver whiteheat_device = { + .description = "Connect Tech - WhiteHEAT", + .id_table = id_table_std, + .num_ports = 4, ++ .probe = whiteheat_probe, + .attach = whiteheat_attach, + .release = whiteheat_release, + .port_probe = whiteheat_port_probe, +@@ -217,6 +220,34 @@ static int whiteheat_firmware_attach(struct usb_serial *serial) + /***************************************************************************** + * Connect Tech's White Heat serial driver functions + *****************************************************************************/ ++ ++static int whiteheat_probe(struct usb_serial *serial, ++ const struct usb_device_id *id) ++{ ++ struct usb_host_interface *iface_desc; ++ struct usb_endpoint_descriptor *endpoint; ++ size_t num_bulk_in = 0; ++ size_t num_bulk_out = 0; ++ size_t min_num_bulk; ++ unsigned int i; ++ ++ iface_desc = serial->interface->cur_altsetting; ++ ++ for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) { ++ endpoint = &iface_desc->endpoint[i].desc; ++ if (usb_endpoint_is_bulk_in(endpoint)) ++ ++num_bulk_in; ++ if (usb_endpoint_is_bulk_out(endpoint)) ++ ++num_bulk_out; ++ } ++ ++ min_num_bulk = COMMAND_PORT + 1; ++ if (num_bulk_in < min_num_bulk || num_bulk_out < min_num_bulk) ++ return -ENODEV; ++ ++ return 0; ++} ++ + static int whiteheat_attach(struct usb_serial *serial) + { + struct usb_serial_port *command_port; +diff --git a/drivers/watchdog/imgpdc_wdt.c b/drivers/watchdog/imgpdc_wdt.c +index 0f73621..15ab072 100644 +--- a/drivers/watchdog/imgpdc_wdt.c ++++ b/drivers/watchdog/imgpdc_wdt.c +@@ -316,6 +316,7 @@ static int pdc_wdt_remove(struct platform_device *pdev) + { + struct pdc_wdt_dev *pdc_wdt = platform_get_drvdata(pdev); + ++ unregister_restart_handler(&pdc_wdt->restart_handler); + pdc_wdt_stop(&pdc_wdt->wdt_dev); + watchdog_unregister_device(&pdc_wdt->wdt_dev); + clk_disable_unprepare(pdc_wdt->wdt_clk); +diff --git a/drivers/watchdog/sunxi_wdt.c b/drivers/watchdog/sunxi_wdt.c +index a29afb3..47bd8a1 100644 +--- a/drivers/watchdog/sunxi_wdt.c ++++ b/drivers/watchdog/sunxi_wdt.c +@@ -184,7 +184,7 @@ static int sunxi_wdt_start(struct watchdog_device *wdt_dev) + /* Set system reset function */ + reg = readl(wdt_base + regs->wdt_cfg); + reg &= ~(regs->wdt_reset_mask); +- reg |= ~(regs->wdt_reset_val); ++ reg |= regs->wdt_reset_val; + writel(reg, wdt_base + regs->wdt_cfg); + + /* Enable watchdog */ +diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c +index a1800c1..08cb419 100644 +--- a/drivers/xen/preempt.c ++++ b/drivers/xen/preempt.c +@@ -31,7 +31,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall); + asmlinkage __visible void xen_maybe_preempt_hcall(void) + { + if (unlikely(__this_cpu_read(xen_in_preemptible_hcall) +- && should_resched())) { ++ && need_resched())) { + /* + * Clear flag as we may be rescheduled on a different + * cpu. +diff --git a/fs/block_dev.c b/fs/block_dev.c +index 1982437..1170f8c 100644 +--- a/fs/block_dev.c ++++ b/fs/block_dev.c +@@ -1241,6 +1241,13 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part) + goto out_clear; + } + bd_set_size(bdev, (loff_t)bdev->bd_part->nr_sects << 9); ++ /* ++ * If the partition is not aligned on a page ++ * boundary, we can't do dax I/O to it. 
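
The check this comment introduces is plain sector arithmetic. A standalone restatement, assuming 4 KiB pages so that one page spans eight 512-byte sectors:

    #include <stdbool.h>

    #define PAGE_SIZE        4096ULL
    #define SECTORS_PER_PAGE (PAGE_SIZE / 512)  /* 8 with 4 KiB pages */

    /* DAX stays enabled only if the partition both starts and ends on a
     * page boundary. */
    bool partition_dax_aligned(unsigned long long start_sect,
                               unsigned long long nr_sects)
    {
            return (start_sect % SECTORS_PER_PAGE) == 0 &&
                   (nr_sects % SECTORS_PER_PAGE) == 0;
    }

    /* partition_dax_aligned(34, 1048576)   -> false (GPT's first usable LBA)
     * partition_dax_aligned(2048, 1048576) -> true  (1 MiB-aligned layout) */
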
++ */ ++ if ((bdev->bd_part->start_sect % (PAGE_SIZE / 512)) || ++ (bdev->bd_part->nr_sects % (PAGE_SIZE / 512))) ++ bdev->bd_inode->i_flags &= ~S_DAX; + } + } else { + if (bdev->bd_contains == bdev) { +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index 02d0581..3fc4fec 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -2798,7 +2798,8 @@ static int submit_extent_page(int rw, struct extent_io_tree *tree, + bio_end_io_t end_io_func, + int mirror_num, + unsigned long prev_bio_flags, +- unsigned long bio_flags) ++ unsigned long bio_flags, ++ bool force_bio_submit) + { + int ret = 0; + struct bio *bio; +@@ -2816,6 +2817,7 @@ static int submit_extent_page(int rw, struct extent_io_tree *tree, + contig = bio_end_sector(bio) == sector; + + if (prev_bio_flags != bio_flags || !contig || ++ force_bio_submit || + merge_bio(rw, tree, page, offset, page_size, bio, bio_flags) || + bio_add_page(bio, page, page_size, offset) < page_size) { + ret = submit_one_bio(rw, bio, mirror_num, +@@ -2909,7 +2911,8 @@ static int __do_readpage(struct extent_io_tree *tree, + get_extent_t *get_extent, + struct extent_map **em_cached, + struct bio **bio, int mirror_num, +- unsigned long *bio_flags, int rw) ++ unsigned long *bio_flags, int rw, ++ u64 *prev_em_start) + { + struct inode *inode = page->mapping->host; + u64 start = page_offset(page); +@@ -2957,6 +2960,7 @@ static int __do_readpage(struct extent_io_tree *tree, + } + while (cur <= end) { + unsigned long pnr = (last_byte >> PAGE_CACHE_SHIFT) + 1; ++ bool force_bio_submit = false; + + if (cur >= last_byte) { + char *userpage; +@@ -3007,6 +3011,49 @@ static int __do_readpage(struct extent_io_tree *tree, + block_start = em->block_start; + if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) + block_start = EXTENT_MAP_HOLE; ++ ++ /* ++ * If we have a file range that points to a compressed extent ++ * and it's followed by a consecutive file range that points to ++ * to the same compressed extent (possibly with a different ++ * offset and/or length, so it either points to the whole extent ++ * or only part of it), we must make sure we do not submit a ++ * single bio to populate the pages for the 2 ranges because ++ * this makes the compressed extent read zero out the pages ++ * belonging to the 2nd range. Imagine the following scenario: ++ * ++ * File layout ++ * [0 - 8K] [8K - 24K] ++ * | | ++ * | | ++ * points to extent X, points to extent X, ++ * offset 4K, length of 8K offset 0, length 16K ++ * ++ * [extent X, compressed length = 4K uncompressed length = 16K] ++ * ++ * If the bio to read the compressed extent covers both ranges, ++ * it will decompress extent X into the pages belonging to the ++ * first range and then it will stop, zeroing out the remaining ++ * pages that belong to the other range that points to extent X. ++ * So here we make sure we submit 2 bios, one for the first ++ * range and another one for the third range. Both will target ++ * the same physical extent from disk, but we can't currently ++ * make the compressed bio endio callback populate the pages ++ * for both ranges because each compressed bio is tightly ++ * coupled with a single extent map, and each range can have ++ * an extent map with a different offset value relative to the ++ * uncompressed data of our extent and different lengths. This ++ * is a corner case so we prioritize correctness over ++ * non-optimal behavior (submitting 2 bios for the same extent). 
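
Stripped of the btrfs types, the guard this scenario motivates reduces to one remembered value per readahead pass and a single comparison — a condensed sketch of the hunk that follows, not the kernel code itself:

    typedef unsigned long long u64;

    /* The caller seeds prev_em_start with (u64)-1, meaning "no extent map
     * seen yet".  (The kernel also permits a NULL state pointer for
     * single-page reads; that case is omitted here.) */
    int must_force_bio_submit(int extent_compressed,
                              u64 *prev_em_start, u64 em_orig_start)
    {
            int force = extent_compressed &&
                        *prev_em_start != (u64)-1 &&
                        *prev_em_start != em_orig_start;

            *prev_em_start = em_orig_start; /* remember for the next page */
            return force;
    }
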
++ */ ++ if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) && ++ prev_em_start && *prev_em_start != (u64)-1 && ++ *prev_em_start != em->orig_start) ++ force_bio_submit = true; ++ ++ if (prev_em_start) ++ *prev_em_start = em->orig_start; ++ + free_extent_map(em); + em = NULL; + +@@ -3056,7 +3103,8 @@ static int __do_readpage(struct extent_io_tree *tree, + bdev, bio, pnr, + end_bio_extent_readpage, mirror_num, + *bio_flags, +- this_bio_flag); ++ this_bio_flag, ++ force_bio_submit); + if (!ret) { + nr++; + *bio_flags = this_bio_flag; +@@ -3083,7 +3131,8 @@ static inline void __do_contiguous_readpages(struct extent_io_tree *tree, + get_extent_t *get_extent, + struct extent_map **em_cached, + struct bio **bio, int mirror_num, +- unsigned long *bio_flags, int rw) ++ unsigned long *bio_flags, int rw, ++ u64 *prev_em_start) + { + struct inode *inode; + struct btrfs_ordered_extent *ordered; +@@ -3103,7 +3152,7 @@ static inline void __do_contiguous_readpages(struct extent_io_tree *tree, + + for (index = 0; index < nr_pages; index++) { + __do_readpage(tree, pages[index], get_extent, em_cached, bio, +- mirror_num, bio_flags, rw); ++ mirror_num, bio_flags, rw, prev_em_start); + page_cache_release(pages[index]); + } + } +@@ -3113,7 +3162,8 @@ static void __extent_readpages(struct extent_io_tree *tree, + int nr_pages, get_extent_t *get_extent, + struct extent_map **em_cached, + struct bio **bio, int mirror_num, +- unsigned long *bio_flags, int rw) ++ unsigned long *bio_flags, int rw, ++ u64 *prev_em_start) + { + u64 start = 0; + u64 end = 0; +@@ -3134,7 +3184,7 @@ static void __extent_readpages(struct extent_io_tree *tree, + index - first_index, start, + end, get_extent, em_cached, + bio, mirror_num, bio_flags, +- rw); ++ rw, prev_em_start); + start = page_start; + end = start + PAGE_CACHE_SIZE - 1; + first_index = index; +@@ -3145,7 +3195,8 @@ static void __extent_readpages(struct extent_io_tree *tree, + __do_contiguous_readpages(tree, &pages[first_index], + index - first_index, start, + end, get_extent, em_cached, bio, +- mirror_num, bio_flags, rw); ++ mirror_num, bio_flags, rw, ++ prev_em_start); + } + + static int __extent_read_full_page(struct extent_io_tree *tree, +@@ -3171,7 +3222,7 @@ static int __extent_read_full_page(struct extent_io_tree *tree, + } + + ret = __do_readpage(tree, page, get_extent, NULL, bio, mirror_num, +- bio_flags, rw); ++ bio_flags, rw, NULL); + return ret; + } + +@@ -3197,7 +3248,7 @@ int extent_read_full_page_nolock(struct extent_io_tree *tree, struct page *page, + int ret; + + ret = __do_readpage(tree, page, get_extent, NULL, &bio, mirror_num, +- &bio_flags, READ); ++ &bio_flags, READ, NULL); + if (bio) + ret = submit_one_bio(READ, bio, mirror_num, bio_flags); + return ret; +@@ -3450,7 +3501,7 @@ static noinline_for_stack int __extent_writepage_io(struct inode *inode, + sector, iosize, pg_offset, + bdev, &epd->bio, max_nr, + end_bio_extent_writepage, +- 0, 0, 0); ++ 0, 0, 0, false); + if (ret) + SetPageError(page); + } +@@ -3752,7 +3803,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb, + ret = submit_extent_page(rw, tree, p, offset >> 9, + PAGE_CACHE_SIZE, 0, bdev, &epd->bio, + -1, end_bio_extent_buffer_writepage, +- 0, epd->bio_flags, bio_flags); ++ 0, epd->bio_flags, bio_flags, false); + epd->bio_flags = bio_flags; + if (ret) { + set_btree_ioerr(p); +@@ -4156,6 +4207,7 @@ int extent_readpages(struct extent_io_tree *tree, + struct page *page; + struct extent_map *em_cached = NULL; + int nr = 0; ++ u64 prev_em_start = (u64)-1; + + for (page_idx = 0; 
page_idx < nr_pages; page_idx++) { + page = list_entry(pages->prev, struct page, lru); +@@ -4172,12 +4224,12 @@ int extent_readpages(struct extent_io_tree *tree, + if (nr < ARRAY_SIZE(pagepool)) + continue; + __extent_readpages(tree, pagepool, nr, get_extent, &em_cached, +- &bio, 0, &bio_flags, READ); ++ &bio, 0, &bio_flags, READ, &prev_em_start); + nr = 0; + } + if (nr) + __extent_readpages(tree, pagepool, nr, get_extent, &em_cached, +- &bio, 0, &bio_flags, READ); ++ &bio, 0, &bio_flags, READ, &prev_em_start); + + if (em_cached) + free_extent_map(em_cached); +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index e33dff3..b54e630 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -5051,7 +5051,8 @@ void btrfs_evict_inode(struct inode *inode) + goto no_delete; + } + /* do we really want it for ->i_nlink > 0 and zero btrfs_root_refs? */ +- btrfs_wait_ordered_range(inode, 0, (u64)-1); ++ if (!special_file(inode->i_mode)) ++ btrfs_wait_ordered_range(inode, 0, (u64)-1); + + btrfs_free_io_failure_record(inode, 0, (u64)-1); + +diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c +index aa0dc25..afa09fc 100644 +--- a/fs/cifs/cifsencrypt.c ++++ b/fs/cifs/cifsencrypt.c +@@ -444,6 +444,48 @@ find_domain_name(struct cifs_ses *ses, const struct nls_table *nls_cp) + return 0; + } + ++/* Server has provided av pairs/target info in the type 2 challenge ++ * packet and we have plucked it and stored within smb session. ++ * We parse that blob here to find the server given timestamp ++ * as part of ntlmv2 authentication (or local current time as ++ * default in case of failure) ++ */ ++static __le64 ++find_timestamp(struct cifs_ses *ses) ++{ ++ unsigned int attrsize; ++ unsigned int type; ++ unsigned int onesize = sizeof(struct ntlmssp2_name); ++ unsigned char *blobptr; ++ unsigned char *blobend; ++ struct ntlmssp2_name *attrptr; ++ ++ if (!ses->auth_key.len || !ses->auth_key.response) ++ return 0; ++ ++ blobptr = ses->auth_key.response; ++ blobend = blobptr + ses->auth_key.len; ++ ++ while (blobptr + onesize < blobend) { ++ attrptr = (struct ntlmssp2_name *) blobptr; ++ type = le16_to_cpu(attrptr->type); ++ if (type == NTLMSSP_AV_EOL) ++ break; ++ blobptr += 2; /* advance attr type */ ++ attrsize = le16_to_cpu(attrptr->length); ++ blobptr += 2; /* advance attr size */ ++ if (blobptr + attrsize > blobend) ++ break; ++ if (type == NTLMSSP_AV_TIMESTAMP) { ++ if (attrsize == sizeof(u64)) ++ return *((__le64 *)blobptr); ++ } ++ blobptr += attrsize; /* advance attr value */ ++ } ++ ++ return cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME)); ++} ++ + static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash, + const struct nls_table *nls_cp) + { +@@ -641,6 +683,7 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp) + struct ntlmv2_resp *ntlmv2; + char ntlmv2_hash[16]; + unsigned char *tiblob = NULL; /* target info blob */ ++ __le64 rsp_timestamp; + + if (ses->server->negflavor == CIFS_NEGFLAVOR_EXTENDED) { + if (!ses->domainName) { +@@ -659,6 +702,12 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp) + } + } + ++ /* Must be within 5 minutes of the server (or in range +/-2h ++ * in case of Mac OS X), so simply carry over server timestamp ++ * (as Windows 7 does) ++ */ ++ rsp_timestamp = find_timestamp(ses); ++ + baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp); + tilen = ses->auth_key.len; + tiblob = ses->auth_key.response; +@@ -675,8 +724,8 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp) + (ses->auth_key.response + 
CIFS_SESS_KEY_SIZE); + ntlmv2->blob_signature = cpu_to_le32(0x00000101); + ntlmv2->reserved = 0; +- /* Must be within 5 minutes of the server */ +- ntlmv2->time = cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME)); ++ ntlmv2->time = rsp_timestamp; ++ + get_random_bytes(&ntlmv2->client_chal, sizeof(ntlmv2->client_chal)); + ntlmv2->reserved2 = 0; + +diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c +index f621b44..6b66dd5 100644 +--- a/fs/cifs/inode.c ++++ b/fs/cifs/inode.c +@@ -2034,7 +2034,6 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs, + struct tcon_link *tlink = NULL; + struct cifs_tcon *tcon = NULL; + struct TCP_Server_Info *server; +- struct cifs_io_parms io_parms; + + /* + * To avoid spurious oplock breaks from server, in the case of +@@ -2056,18 +2055,6 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs, + rc = -ENOSYS; + cifsFileInfo_put(open_file); + cifs_dbg(FYI, "SetFSize for attrs rc = %d\n", rc); +- if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) { +- unsigned int bytes_written; +- +- io_parms.netfid = open_file->fid.netfid; +- io_parms.pid = open_file->pid; +- io_parms.tcon = tcon; +- io_parms.offset = 0; +- io_parms.length = attrs->ia_size; +- rc = CIFSSMBWrite(xid, &io_parms, &bytes_written, +- NULL, NULL, 1); +- cifs_dbg(FYI, "Wrt seteof rc %d\n", rc); +- } + } else + rc = -EINVAL; + +@@ -2093,28 +2080,7 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs, + else + rc = -ENOSYS; + cifs_dbg(FYI, "SetEOF by path (setattrs) rc = %d\n", rc); +- if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) { +- __u16 netfid; +- int oplock = 0; + +- rc = SMBLegacyOpen(xid, tcon, full_path, FILE_OPEN, +- GENERIC_WRITE, CREATE_NOT_DIR, &netfid, +- &oplock, NULL, cifs_sb->local_nls, +- cifs_remap(cifs_sb)); +- if (rc == 0) { +- unsigned int bytes_written; +- +- io_parms.netfid = netfid; +- io_parms.pid = current->tgid; +- io_parms.tcon = tcon; +- io_parms.offset = 0; +- io_parms.length = attrs->ia_size; +- rc = CIFSSMBWrite(xid, &io_parms, &bytes_written, NULL, +- NULL, 1); +- cifs_dbg(FYI, "wrt seteof rc %d\n", rc); +- CIFSSMBClose(xid, tcon, netfid); +- } +- } + if (tlink) + cifs_put_tlink(tlink); + +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c +index df91bcf..18da19f 100644 +--- a/fs/cifs/smb2ops.c ++++ b/fs/cifs/smb2ops.c +@@ -50,9 +50,13 @@ change_conf(struct TCP_Server_Info *server) + break; + default: + server->echoes = true; +- server->oplocks = true; ++ if (enable_oplocks) { ++ server->oplocks = true; ++ server->oplock_credits = 1; ++ } else ++ server->oplocks = false; ++ + server->echo_credits = 1; +- server->oplock_credits = 1; + } + server->credits -= server->echo_credits + server->oplock_credits; + return 0; +diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c +index b8b4f08..60dd831 100644 +--- a/fs/cifs/smb2pdu.c ++++ b/fs/cifs/smb2pdu.c +@@ -46,6 +46,7 @@ + #include "smb2status.h" + #include "smb2glob.h" + #include "cifspdu.h" ++#include "cifs_spnego.h" + + /* + * The following table defines the expected "StructureSize" of SMB2 requests +@@ -486,19 +487,15 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses) + cifs_dbg(FYI, "missing security blob on negprot\n"); + + rc = cifs_enable_signing(server, ses->sign); +-#ifdef CONFIG_SMB2_ASN1 /* BB REMOVEME when updated asn1.c ready */ + if (rc) + goto neg_exit; +- if (blob_length) ++ if (blob_length) { + rc = decode_negTokenInit(security_blob, blob_length, server); +- if (rc == 1) +- rc = 0; +- else if (rc == 0) { +- rc = -EIO; +- goto neg_exit; ++ if (rc == 1) ++ rc = 0; ++ else if (rc == 0) ++ 
rc = -EIO; + } +-#endif +- + neg_exit: + free_rsp_buf(resp_buftype, rsp); + return rc; +@@ -592,7 +589,8 @@ SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses, + __le32 phase = NtLmNegotiate; /* NTLMSSP, if needed, is multistage */ + struct TCP_Server_Info *server = ses->server; + u16 blob_length = 0; +- char *security_blob; ++ struct key *spnego_key = NULL; ++ char *security_blob = NULL; + char *ntlmssp_blob = NULL; + bool use_spnego = false; /* else use raw ntlmssp */ + +@@ -620,7 +618,8 @@ SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses, + ses->ntlmssp->sesskey_per_smbsess = true; + + /* FIXME: allow for other auth types besides NTLMSSP (e.g. krb5) */ +- ses->sectype = RawNTLMSSP; ++ if (ses->sectype != Kerberos && ses->sectype != RawNTLMSSP) ++ ses->sectype = RawNTLMSSP; + + ssetup_ntlmssp_authenticate: + if (phase == NtLmChallenge) +@@ -649,7 +648,48 @@ ssetup_ntlmssp_authenticate: + iov[0].iov_base = (char *)req; + /* 4 for rfc1002 length field and 1 for pad */ + iov[0].iov_len = get_rfc1002_length(req) + 4 - 1; +- if (phase == NtLmNegotiate) { ++ ++ if (ses->sectype == Kerberos) { ++#ifdef CONFIG_CIFS_UPCALL ++ struct cifs_spnego_msg *msg; ++ ++ spnego_key = cifs_get_spnego_key(ses); ++ if (IS_ERR(spnego_key)) { ++ rc = PTR_ERR(spnego_key); ++ spnego_key = NULL; ++ goto ssetup_exit; ++ } ++ ++ msg = spnego_key->payload.data; ++ /* ++ * check version field to make sure that cifs.upcall is ++ * sending us a response in an expected form ++ */ ++ if (msg->version != CIFS_SPNEGO_UPCALL_VERSION) { ++ cifs_dbg(VFS, ++ "bad cifs.upcall version. Expected %d got %d", ++ CIFS_SPNEGO_UPCALL_VERSION, msg->version); ++ rc = -EKEYREJECTED; ++ goto ssetup_exit; ++ } ++ ses->auth_key.response = kmemdup(msg->data, msg->sesskey_len, ++ GFP_KERNEL); ++ if (!ses->auth_key.response) { ++ cifs_dbg(VFS, ++ "Kerberos can't allocate (%u bytes) memory", ++ msg->sesskey_len); ++ rc = -ENOMEM; ++ goto ssetup_exit; ++ } ++ ses->auth_key.len = msg->sesskey_len; ++ blob_length = msg->secblob_len; ++ iov[1].iov_base = msg->data + msg->sesskey_len; ++ iov[1].iov_len = blob_length; ++#else ++ rc = -EOPNOTSUPP; ++ goto ssetup_exit; ++#endif /* CONFIG_CIFS_UPCALL */ ++ } else if (phase == NtLmNegotiate) { /* if not krb5 must be ntlmssp */ + ntlmssp_blob = kmalloc(sizeof(struct _NEGOTIATE_MESSAGE), + GFP_KERNEL); + if (ntlmssp_blob == NULL) { +@@ -672,6 +712,8 @@ ssetup_ntlmssp_authenticate: + /* with raw NTLMSSP we don't encapsulate in SPNEGO */ + security_blob = ntlmssp_blob; + } ++ iov[1].iov_base = security_blob; ++ iov[1].iov_len = blob_length; + } else if (phase == NtLmAuthenticate) { + req->hdr.SessionId = ses->Suid; + ntlmssp_blob = kzalloc(sizeof(struct _NEGOTIATE_MESSAGE) + 500, +@@ -699,6 +741,8 @@ ssetup_ntlmssp_authenticate: + } else { + security_blob = ntlmssp_blob; + } ++ iov[1].iov_base = security_blob; ++ iov[1].iov_len = blob_length; + } else { + cifs_dbg(VFS, "illegal ntlmssp phase\n"); + rc = -EIO; +@@ -710,8 +754,6 @@ ssetup_ntlmssp_authenticate: + cpu_to_le16(sizeof(struct smb2_sess_setup_req) - + 1 /* pad */ - 4 /* rfc1001 len */); + req->SecurityBufferLength = cpu_to_le16(blob_length); +- iov[1].iov_base = security_blob; +- iov[1].iov_len = blob_length; + + inc_rfc1001_len(req, blob_length - 1 /* pad */); + +@@ -722,6 +764,7 @@ ssetup_ntlmssp_authenticate: + + kfree(security_blob); + rsp = (struct smb2_sess_setup_rsp *)iov[0].iov_base; ++ ses->Suid = rsp->hdr.SessionId; + if (resp_buftype != CIFS_NO_BUFFER && + rsp->hdr.Status == STATUS_MORE_PROCESSING_REQUIRED) { + if 
(phase != NtLmNegotiate) { +@@ -739,7 +782,6 @@ ssetup_ntlmssp_authenticate: + /* NTLMSSP Negotiate sent now processing challenge (response) */ + phase = NtLmChallenge; /* process ntlmssp challenge */ + rc = 0; /* MORE_PROCESSING is not an error here but expected */ +- ses->Suid = rsp->hdr.SessionId; + rc = decode_ntlmssp_challenge(rsp->Buffer, + le16_to_cpu(rsp->SecurityBufferLength), ses); + } +@@ -796,6 +838,10 @@ keygen_exit: + kfree(ses->auth_key.response); + ses->auth_key.response = NULL; + } ++ if (spnego_key) { ++ key_invalidate(spnego_key); ++ key_put(spnego_key); ++ } + kfree(ses->ntlmssp); + + return rc; +diff --git a/fs/dax.c b/fs/dax.c +index a7f77e1..ef35a20 100644 +--- a/fs/dax.c ++++ b/fs/dax.c +@@ -116,7 +116,8 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter, + unsigned len; + if (pos == max) { + unsigned blkbits = inode->i_blkbits; +- sector_t block = pos >> blkbits; ++ long page = pos >> PAGE_SHIFT; ++ sector_t block = page << (PAGE_SHIFT - blkbits); + unsigned first = pos - (block << blkbits); + long size; + +diff --git a/fs/dcache.c b/fs/dcache.c +index 9b5fe50..e3b44ca 100644 +--- a/fs/dcache.c ++++ b/fs/dcache.c +@@ -2926,6 +2926,13 @@ restart: + + if (dentry == vfsmnt->mnt_root || IS_ROOT(dentry)) { + struct mount *parent = ACCESS_ONCE(mnt->mnt_parent); ++ /* Escaped? */ ++ if (dentry != vfsmnt->mnt_root) { ++ bptr = *buffer; ++ blen = *buflen; ++ error = 3; ++ break; ++ } + /* Global root? */ + if (mnt != parent) { + dentry = ACCESS_ONCE(mnt->mnt_mountpoint); +diff --git a/fs/namei.c b/fs/namei.c +index 1c2105e..36df481 100644 +--- a/fs/namei.c ++++ b/fs/namei.c +@@ -560,6 +560,24 @@ static int __nd_alloc_stack(struct nameidata *nd) + return 0; + } + ++/** ++ * path_connected - Verify that a path->dentry is below path->mnt.mnt_root ++ * @path: nameidate to verify ++ * ++ * Rename can sometimes move a file or directory outside of a bind ++ * mount, path_connected allows those cases to be detected. ++ */ ++static bool path_connected(const struct path *path) ++{ ++ struct vfsmount *mnt = path->mnt; ++ ++ /* Only bind mounts can have disconnected paths */ ++ if (mnt->mnt_root == mnt->mnt_sb->s_root) ++ return true; ++ ++ return is_subdir(path->dentry, mnt->mnt_root); ++} ++ + static inline int nd_alloc_stack(struct nameidata *nd) + { + if (likely(nd->depth != EMBEDDED_LEVELS)) +@@ -1296,6 +1314,8 @@ static int follow_dotdot_rcu(struct nameidata *nd) + return -ECHILD; + nd->path.dentry = parent; + nd->seq = seq; ++ if (unlikely(!path_connected(&nd->path))) ++ return -ENOENT; + break; + } else { + struct mount *mnt = real_mount(nd->path.mnt); +@@ -1396,7 +1416,7 @@ static void follow_mount(struct path *path) + } + } + +-static void follow_dotdot(struct nameidata *nd) ++static int follow_dotdot(struct nameidata *nd) + { + if (!nd->root.mnt) + set_root(nd); +@@ -1412,6 +1432,8 @@ static void follow_dotdot(struct nameidata *nd) + /* rare case of legitimate dget_parent()... 
*/ + nd->path.dentry = dget_parent(nd->path.dentry); + dput(old); ++ if (unlikely(!path_connected(&nd->path))) ++ return -ENOENT; + break; + } + if (!follow_up(&nd->path)) +@@ -1419,6 +1441,7 @@ static void follow_dotdot(struct nameidata *nd) + } + follow_mount(&nd->path); + nd->inode = nd->path.dentry->d_inode; ++ return 0; + } + + /* +@@ -1535,8 +1558,6 @@ static int lookup_fast(struct nameidata *nd, + negative = d_is_negative(dentry); + if (read_seqcount_retry(&dentry->d_seq, seq)) + return -ECHILD; +- if (negative) +- return -ENOENT; + + /* + * This sequence count validates that the parent had no +@@ -1557,6 +1578,12 @@ static int lookup_fast(struct nameidata *nd, + goto unlazy; + } + } ++ /* ++ * Note: do negative dentry check after revalidation in ++ * case that drops it. ++ */ ++ if (negative) ++ return -ENOENT; + path->mnt = mnt; + path->dentry = dentry; + if (likely(__follow_mount_rcu(nd, path, inode, seqp))) +@@ -1634,7 +1661,7 @@ static inline int handle_dots(struct nameidata *nd, int type) + if (nd->flags & LOOKUP_RCU) { + return follow_dotdot_rcu(nd); + } else +- follow_dotdot(nd); ++ return follow_dotdot(nd); + } + return 0; + } +diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c +index 029d688..c568868 100644 +--- a/fs/nfs/delegation.c ++++ b/fs/nfs/delegation.c +@@ -113,7 +113,8 @@ out: + return status; + } + +-static int nfs_delegation_claim_opens(struct inode *inode, const nfs4_stateid *stateid) ++static int nfs_delegation_claim_opens(struct inode *inode, ++ const nfs4_stateid *stateid, fmode_t type) + { + struct nfs_inode *nfsi = NFS_I(inode); + struct nfs_open_context *ctx; +@@ -140,7 +141,7 @@ again: + /* Block nfs4_proc_unlck */ + mutex_lock(&sp->so_delegreturn_mutex); + seq = raw_seqcount_begin(&sp->so_reclaim_seqcount); +- err = nfs4_open_delegation_recall(ctx, state, stateid); ++ err = nfs4_open_delegation_recall(ctx, state, stateid, type); + if (!err) + err = nfs_delegation_claim_locks(ctx, state, stateid); + if (!err && read_seqcount_retry(&sp->so_reclaim_seqcount, seq)) +@@ -411,7 +412,8 @@ static int nfs_end_delegation_return(struct inode *inode, struct nfs_delegation + do { + if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags)) + break; +- err = nfs_delegation_claim_opens(inode, &delegation->stateid); ++ err = nfs_delegation_claim_opens(inode, &delegation->stateid, ++ delegation->type); + if (!issync || err != -EAGAIN) + break; + /* +diff --git a/fs/nfs/delegation.h b/fs/nfs/delegation.h +index e3c20a3..785c852 100644 +--- a/fs/nfs/delegation.h ++++ b/fs/nfs/delegation.h +@@ -54,7 +54,7 @@ void nfs_delegation_reap_unclaimed(struct nfs_client *clp); + + /* NFSv4 delegation-related procedures */ + int nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid, int issync); +-int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state *state, const nfs4_stateid *stateid); ++int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state *state, const nfs4_stateid *stateid, fmode_t type); + int nfs4_lock_delegation_recall(struct file_lock *fl, struct nfs4_state *state, const nfs4_stateid *stateid); + bool nfs4_copy_delegation_stateid(nfs4_stateid *dst, struct inode *inode, fmode_t flags); + +diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c +index b34f2e2..02ec079 100644 +--- a/fs/nfs/filelayout/filelayout.c ++++ b/fs/nfs/filelayout/filelayout.c +@@ -629,23 +629,18 @@ out_put: + goto out; + } + +-static void filelayout_free_fh_array(struct 
nfs4_filelayout_segment *fl) ++static void _filelayout_free_lseg(struct nfs4_filelayout_segment *fl) + { + int i; + +- for (i = 0; i < fl->num_fh; i++) { +- if (!fl->fh_array[i]) +- break; +- kfree(fl->fh_array[i]); ++ if (fl->fh_array) { ++ for (i = 0; i < fl->num_fh; i++) { ++ if (!fl->fh_array[i]) ++ break; ++ kfree(fl->fh_array[i]); ++ } ++ kfree(fl->fh_array); + } +- kfree(fl->fh_array); +- fl->fh_array = NULL; +-} +- +-static void +-_filelayout_free_lseg(struct nfs4_filelayout_segment *fl) +-{ +- filelayout_free_fh_array(fl); + kfree(fl); + } + +@@ -716,21 +711,21 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo, + /* Do we want to use a mempool here? */ + fl->fh_array[i] = kmalloc(sizeof(struct nfs_fh), gfp_flags); + if (!fl->fh_array[i]) +- goto out_err_free; ++ goto out_err; + + p = xdr_inline_decode(&stream, 4); + if (unlikely(!p)) +- goto out_err_free; ++ goto out_err; + fl->fh_array[i]->size = be32_to_cpup(p++); + if (sizeof(struct nfs_fh) < fl->fh_array[i]->size) { + printk(KERN_ERR "NFS: Too big fh %d received %d\n", + i, fl->fh_array[i]->size); +- goto out_err_free; ++ goto out_err; + } + + p = xdr_inline_decode(&stream, fl->fh_array[i]->size); + if (unlikely(!p)) +- goto out_err_free; ++ goto out_err; + memcpy(fl->fh_array[i]->data, p, fl->fh_array[i]->size); + dprintk("DEBUG: %s: fh len %d\n", __func__, + fl->fh_array[i]->size); +@@ -739,8 +734,6 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo, + __free_page(scratch); + return 0; + +-out_err_free: +- filelayout_free_fh_array(fl); + out_err: + __free_page(scratch); + return -EIO; +diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c +index d731bbf..0f020e4 100644 +--- a/fs/nfs/nfs42proc.c ++++ b/fs/nfs/nfs42proc.c +@@ -175,10 +175,12 @@ loff_t nfs42_proc_llseek(struct file *filep, loff_t offset, int whence) + { + struct nfs_server *server = NFS_SERVER(file_inode(filep)); + struct nfs4_exception exception = { }; +- int err; ++ loff_t err; + + do { + err = _nfs42_proc_llseek(filep, offset, whence); ++ if (err >= 0) ++ break; + if (err == -ENOTSUPP) + return -EOPNOTSUPP; + err = nfs4_handle_exception(server, err, &exception); +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 73c8204..d2daaca 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -1127,6 +1127,21 @@ static int nfs4_wait_for_completion_rpc_task(struct rpc_task *task) + return ret; + } + ++static bool nfs4_mode_match_open_stateid(struct nfs4_state *state, ++ fmode_t fmode) ++{ ++ switch(fmode & (FMODE_READ|FMODE_WRITE)) { ++ case FMODE_READ|FMODE_WRITE: ++ return state->n_rdwr != 0; ++ case FMODE_WRITE: ++ return state->n_wronly != 0; ++ case FMODE_READ: ++ return state->n_rdonly != 0; ++ } ++ WARN_ON_ONCE(1); ++ return false; ++} ++ + static int can_open_cached(struct nfs4_state *state, fmode_t mode, int open_mode) + { + int ret = 0; +@@ -1561,17 +1576,13 @@ static struct nfs4_opendata *nfs4_open_recoverdata_alloc(struct nfs_open_context + return opendata; + } + +-static int nfs4_open_recover_helper(struct nfs4_opendata *opendata, fmode_t fmode, struct nfs4_state **res) ++static int nfs4_open_recover_helper(struct nfs4_opendata *opendata, ++ fmode_t fmode) + { + struct nfs4_state *newstate; + int ret; + +- if ((opendata->o_arg.claim == NFS4_OPEN_CLAIM_DELEGATE_CUR || +- opendata->o_arg.claim == NFS4_OPEN_CLAIM_DELEG_CUR_FH) && +- (opendata->o_arg.u.delegation_type & fmode) != fmode) +- /* This mode can't have been delegated, so we must have +- * a valid open_stateid to cover it - not need to reclaim. 
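The nfs42_proc_llseek change above is a classic width bug: the SEEK_HOLE/SEEK_DATA result is a loff_t, and funneling it through an int both truncates offsets beyond 2 GiB and can turn a valid large offset into what looks like a negative error code, which is also why the fix adds the "err >= 0, stop retrying" exit. A minimal standalone demonstration of the truncation (plain C99, nothing kernel-specific):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t offset = 5LL * 1024 * 1024 * 1024;  /* 5 GiB, a valid file offset */
        int truncated = (int)offset;                /* the bug: loff_t stored in int */

        printf("64-bit offset:        %lld\n", (long long)offset);
        printf("after int truncation: %d\n", truncated);
        /* 5 GiB truncates to 1 GiB here; other offsets wrap to negative
         * values, which a retry loop would then mistake for an errno. */
        return 0;
    }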
+- */ ++ if (!nfs4_mode_match_open_stateid(opendata->state, fmode)) + return 0; + opendata->o_arg.open_flags = 0; + opendata->o_arg.fmode = fmode; +@@ -1587,14 +1598,14 @@ static int nfs4_open_recover_helper(struct nfs4_opendata *opendata, fmode_t fmod + newstate = nfs4_opendata_to_nfs4_state(opendata); + if (IS_ERR(newstate)) + return PTR_ERR(newstate); ++ if (newstate != opendata->state) ++ ret = -ESTALE; + nfs4_close_state(newstate, fmode); +- *res = newstate; +- return 0; ++ return ret; + } + + static int nfs4_open_recover(struct nfs4_opendata *opendata, struct nfs4_state *state) + { +- struct nfs4_state *newstate; + int ret; + + /* Don't trigger recovery in nfs_test_and_clear_all_open_stateid */ +@@ -1605,27 +1616,15 @@ static int nfs4_open_recover(struct nfs4_opendata *opendata, struct nfs4_state * + clear_bit(NFS_DELEGATED_STATE, &state->flags); + clear_bit(NFS_OPEN_STATE, &state->flags); + smp_rmb(); +- if (state->n_rdwr != 0) { +- ret = nfs4_open_recover_helper(opendata, FMODE_READ|FMODE_WRITE, &newstate); +- if (ret != 0) +- return ret; +- if (newstate != state) +- return -ESTALE; +- } +- if (state->n_wronly != 0) { +- ret = nfs4_open_recover_helper(opendata, FMODE_WRITE, &newstate); +- if (ret != 0) +- return ret; +- if (newstate != state) +- return -ESTALE; +- } +- if (state->n_rdonly != 0) { +- ret = nfs4_open_recover_helper(opendata, FMODE_READ, &newstate); +- if (ret != 0) +- return ret; +- if (newstate != state) +- return -ESTALE; +- } ++ ret = nfs4_open_recover_helper(opendata, FMODE_READ|FMODE_WRITE); ++ if (ret != 0) ++ return ret; ++ ret = nfs4_open_recover_helper(opendata, FMODE_WRITE); ++ if (ret != 0) ++ return ret; ++ ret = nfs4_open_recover_helper(opendata, FMODE_READ); ++ if (ret != 0) ++ return ret; + /* + * We may have performed cached opens for all three recoveries. + * Check if we need to update the current stateid. +@@ -1749,18 +1748,32 @@ static int nfs4_handle_delegation_recall_error(struct nfs_server *server, struct + return err; + } + +-int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state *state, const nfs4_stateid *stateid) ++int nfs4_open_delegation_recall(struct nfs_open_context *ctx, ++ struct nfs4_state *state, const nfs4_stateid *stateid, ++ fmode_t type) + { + struct nfs_server *server = NFS_SERVER(state->inode); + struct nfs4_opendata *opendata; +- int err; ++ int err = 0; + + opendata = nfs4_open_recoverdata_alloc(ctx, state, + NFS4_OPEN_CLAIM_DELEG_CUR_FH); + if (IS_ERR(opendata)) + return PTR_ERR(opendata); + nfs4_stateid_copy(&opendata->o_arg.u.delegation, stateid); +- err = nfs4_open_recover(opendata, state); ++ clear_bit(NFS_DELEGATED_STATE, &state->flags); ++ switch (type & (FMODE_READ|FMODE_WRITE)) { ++ case FMODE_READ|FMODE_WRITE: ++ case FMODE_WRITE: ++ err = nfs4_open_recover_helper(opendata, FMODE_READ|FMODE_WRITE); ++ if (err) ++ break; ++ err = nfs4_open_recover_helper(opendata, FMODE_WRITE); ++ if (err) ++ break; ++ case FMODE_READ: ++ err = nfs4_open_recover_helper(opendata, FMODE_READ); ++ } + nfs4_opendata_put(opendata); + return nfs4_handle_delegation_recall_error(server, state, stateid, err); + } +diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c +index 7c5718b..fe3ddd2 100644 +--- a/fs/nfs/pagelist.c ++++ b/fs/nfs/pagelist.c +@@ -508,7 +508,7 @@ size_t nfs_generic_pg_test(struct nfs_pageio_descriptor *desc, + * for it without upsetting the slab allocator. 
+ */ + if (((mirror->pg_count + req->wb_bytes) >> PAGE_SHIFT) * +- sizeof(struct page) > PAGE_SIZE) ++ sizeof(struct page *) > PAGE_SIZE) + return 0; + + return min(mirror->pg_bsize - mirror->pg_count, (size_t)req->wb_bytes); +diff --git a/fs/nfs/read.c b/fs/nfs/read.c +index ae0ff7a..01b8cc8 100644 +--- a/fs/nfs/read.c ++++ b/fs/nfs/read.c +@@ -72,6 +72,9 @@ void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio) + { + struct nfs_pgio_mirror *mirror; + ++ if (pgio->pg_ops && pgio->pg_ops->pg_cleanup) ++ pgio->pg_ops->pg_cleanup(pgio); ++ + pgio->pg_ops = &nfs_pgio_rw_ops; + + /* read path should never have more than one mirror */ +diff --git a/fs/nfs/write.c b/fs/nfs/write.c +index fdee927..b45b465 100644 +--- a/fs/nfs/write.c ++++ b/fs/nfs/write.c +@@ -1223,7 +1223,7 @@ static int nfs_can_extend_write(struct file *file, struct page *page, struct ino + return 1; + if (!flctx || (list_empty_careful(&flctx->flc_flock) && + list_empty_careful(&flctx->flc_posix))) +- return 0; ++ return 1; + + /* Check to see if there are whole file write locks */ + ret = 0; +@@ -1351,6 +1351,9 @@ void nfs_pageio_reset_write_mds(struct nfs_pageio_descriptor *pgio) + { + struct nfs_pgio_mirror *mirror; + ++ if (pgio->pg_ops && pgio->pg_ops->pg_cleanup) ++ pgio->pg_ops->pg_cleanup(pgio); ++ + pgio->pg_ops = &nfs_pgio_rw_ops; + + nfs_pageio_stop_mirroring(pgio); +diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c +index fdf4b41..482cfd3 100644 +--- a/fs/ocfs2/dlm/dlmmaster.c ++++ b/fs/ocfs2/dlm/dlmmaster.c +@@ -1439,6 +1439,7 @@ int dlm_master_request_handler(struct o2net_msg *msg, u32 len, void *data, + int found, ret; + int set_maybe; + int dispatch_assert = 0; ++ int dispatched = 0; + + if (!dlm_grab(dlm)) + return DLM_MASTER_RESP_NO; +@@ -1658,15 +1659,18 @@ send_response: + mlog(ML_ERROR, "failed to dispatch assert master work\n"); + response = DLM_MASTER_RESP_ERROR; + dlm_lockres_put(res); +- } else ++ } else { ++ dispatched = 1; + __dlm_lockres_grab_inflight_worker(dlm, res); ++ } + spin_unlock(&res->spinlock); + } else { + if (res) + dlm_lockres_put(res); + } + +- dlm_put(dlm); ++ if (!dispatched) ++ dlm_put(dlm); + return response; + } + +@@ -2090,7 +2094,6 @@ int dlm_dispatch_assert_master(struct dlm_ctxt *dlm, + + + /* queue up work for dlm_assert_master_worker */ +- dlm_grab(dlm); /* get an extra ref for the work item */ + dlm_init_work_item(dlm, item, dlm_assert_master_worker, NULL); + item->u.am.lockres = res; /* already have a ref */ + /* can optionally ignore node numbers higher than this node */ +diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c +index ce12e0b..3d90ad7 100644 +--- a/fs/ocfs2/dlm/dlmrecovery.c ++++ b/fs/ocfs2/dlm/dlmrecovery.c +@@ -1694,6 +1694,7 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data, + unsigned int hash; + int master = DLM_LOCK_RES_OWNER_UNKNOWN; + u32 flags = DLM_ASSERT_MASTER_REQUERY; ++ int dispatched = 0; + + if (!dlm_grab(dlm)) { + /* since the domain has gone away on this +@@ -1719,8 +1720,10 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data, + dlm_put(dlm); + /* sender will take care of this and retry */ + return ret; +- } else ++ } else { ++ dispatched = 1; + __dlm_lockres_grab_inflight_worker(dlm, res); ++ } + spin_unlock(&res->spinlock); + } else { + /* put.. 
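The one-character nfs_generic_pg_test fix just above deserves a second look: the expression sizes an array of page *pointers*, but multiplied by sizeof(struct page), which is roughly an order of magnitude larger than sizeof(struct page *) on 64-bit, so the coalescing limit kicked in far too early. A standalone illustration of the gap (the struct below is a stand-in, its fields and size are not the kernel's):

    #include <stdio.h>

    struct page_like {
        unsigned long flags;
        void *mapping;
        unsigned long index;
        int refcount;
    };

    int main(void)
    {
        printf("sizeof(struct page_like)   = %zu\n", sizeof(struct page_like));
        printf("sizeof(struct page_like *) = %zu\n", sizeof(struct page_like *));
        /* An array of N pointers needs N * sizeof(pointer) bytes; using the
         * pointee's size instead overestimates the array several times over,
         * so the "fits in one page" test rejected requests it should allow. */
        return 0;
    }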
incase we are not the master */ +@@ -1730,7 +1733,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data, + } + spin_unlock(&dlm->spinlock); + +- dlm_put(dlm); ++ if (!dispatched) ++ dlm_put(dlm); + return master; + } + +diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c +index 96f3448..fd65b3f 100644 +--- a/fs/ubifs/xattr.c ++++ b/fs/ubifs/xattr.c +@@ -652,11 +652,8 @@ int ubifs_init_security(struct inode *dentry, struct inode *inode, + { + int err; + +- mutex_lock(&inode->i_mutex); + err = security_inode_init_security(inode, dentry, qstr, + &init_xattrs, 0); +- mutex_unlock(&inode->i_mutex); +- + if (err) { + struct ubifs_info *c = dentry->i_sb->s_fs_info; + ubifs_err(c, "cannot initialize security for inode %lu, error %d", +diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h +index d0a7a47..0bec580 100644 +--- a/include/asm-generic/preempt.h ++++ b/include/asm-generic/preempt.h +@@ -71,9 +71,10 @@ static __always_inline bool __preempt_count_dec_and_test(void) + /* + * Returns true when we need to resched and can (barring IRQ state). + */ +-static __always_inline bool should_resched(void) ++static __always_inline bool should_resched(int preempt_offset) + { +- return unlikely(!preempt_count() && tif_need_resched()); ++ return unlikely(preempt_count() == preempt_offset && ++ tif_need_resched()); + } + + #ifdef CONFIG_PREEMPT +diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h +index 83bfb87..e2aadbc 100644 +--- a/include/asm-generic/qspinlock.h ++++ b/include/asm-generic/qspinlock.h +@@ -111,8 +111,8 @@ static inline void queued_spin_unlock_wait(struct qspinlock *lock) + cpu_relax(); + } + +-#ifndef virt_queued_spin_lock +-static __always_inline bool virt_queued_spin_lock(struct qspinlock *lock) ++#ifndef virt_spin_lock ++static __always_inline bool virt_spin_lock(struct qspinlock *lock) + { + return false; + } +diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h +index 93755a6..430c876 100644 +--- a/include/linux/cgroup-defs.h ++++ b/include/linux/cgroup-defs.h +@@ -463,31 +463,8 @@ struct cgroup_subsys { + unsigned int depends_on; + }; + +-extern struct percpu_rw_semaphore cgroup_threadgroup_rwsem; +- +-/** +- * cgroup_threadgroup_change_begin - threadgroup exclusion for cgroups +- * @tsk: target task +- * +- * Called from threadgroup_change_begin() and allows cgroup operations to +- * synchronize against threadgroup changes using a percpu_rw_semaphore. +- */ +-static inline void cgroup_threadgroup_change_begin(struct task_struct *tsk) +-{ +- percpu_down_read(&cgroup_threadgroup_rwsem); +-} +- +-/** +- * cgroup_threadgroup_change_end - threadgroup exclusion for cgroups +- * @tsk: target task +- * +- * Called from threadgroup_change_end(). Counterpart of +- * cgroup_threadcgroup_change_begin(). 
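The asm-generic should_resched() change above is the keystone for the cond_resched_lock()/cond_resched_softirq() fixes that appear later in this patch (kernel/sched/core.c): the old check demanded preempt_count() == 0, which can never hold while a spinlock or softirq offset is pinned, so those helpers silently never rescheduled. Passing the offset the caller legitimately holds restores the check. A userspace sketch of the idea, with all names ours rather than the kernel's:

    #include <stdbool.h>
    #include <stdio.h>

    static int preempt_count;           /* stand-in for the per-thread counter */
    static bool need_resched_flag = true;

    static bool should_resched(int expected_offset)
    {
        /* resched is possible when the nesting count is exactly what the
         * caller expects to hold, not only when it is zero */
        return preempt_count == expected_offset && need_resched_flag;
    }

    int main(void)
    {
        preempt_count = 1;              /* e.g. one spinlock held */
        printf("old check (offset 0): %d\n", should_resched(0));  /* never fires */
        printf("new check (offset 1): %d\n", should_resched(1));  /* fires */
        return 0;
    }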
+- */ +-static inline void cgroup_threadgroup_change_end(struct task_struct *tsk) +-{ +- percpu_up_read(&cgroup_threadgroup_rwsem); +-} ++void cgroup_threadgroup_change_begin(struct task_struct *tsk); ++void cgroup_threadgroup_change_end(struct task_struct *tsk); + + #else /* CONFIG_CGROUPS */ + +diff --git a/include/linux/init_task.h b/include/linux/init_task.h +index e8493fe..bb9b075 100644 +--- a/include/linux/init_task.h ++++ b/include/linux/init_task.h +@@ -25,6 +25,13 @@ + extern struct files_struct init_files; + extern struct fs_struct init_fs; + ++#ifdef CONFIG_CGROUPS ++#define INIT_GROUP_RWSEM(sig) \ ++ .group_rwsem = __RWSEM_INITIALIZER(sig.group_rwsem), ++#else ++#define INIT_GROUP_RWSEM(sig) ++#endif ++ + #ifdef CONFIG_CPUSETS + #define INIT_CPUSET_SEQ(tsk) \ + .mems_allowed_seq = SEQCNT_ZERO(tsk.mems_allowed_seq), +@@ -48,6 +55,7 @@ extern struct fs_struct init_fs; + }, \ + .cred_guard_mutex = \ + __MUTEX_INITIALIZER(sig.cred_guard_mutex), \ ++ INIT_GROUP_RWSEM(sig) \ + } + + extern struct nsproxy init_nsproxy; +diff --git a/include/linux/mm.h b/include/linux/mm.h +index bf6f117..2b05068 100644 +--- a/include/linux/mm.h ++++ b/include/linux/mm.h +@@ -916,6 +916,27 @@ static inline void set_page_links(struct page *page, enum zone_type zone, + #endif + } + ++#ifdef CONFIG_MEMCG ++static inline struct mem_cgroup *page_memcg(struct page *page) ++{ ++ return page->mem_cgroup; ++} ++ ++static inline void set_page_memcg(struct page *page, struct mem_cgroup *memcg) ++{ ++ page->mem_cgroup = memcg; ++} ++#else ++static inline struct mem_cgroup *page_memcg(struct page *page) ++{ ++ return NULL; ++} ++ ++static inline void set_page_memcg(struct page *page, struct mem_cgroup *memcg) ++{ ++} ++#endif ++ + /* + * Some inline functions in vmstat.h depend on page_zone() + */ +diff --git a/include/linux/preempt.h b/include/linux/preempt.h +index 84991f1..bea8dd8 100644 +--- a/include/linux/preempt.h ++++ b/include/linux/preempt.h +@@ -84,13 +84,21 @@ + */ + #define in_nmi() (preempt_count() & NMI_MASK) + ++/* ++ * The preempt_count offset after preempt_disable(); ++ */ + #if defined(CONFIG_PREEMPT_COUNT) +-# define PREEMPT_DISABLE_OFFSET 1 ++# define PREEMPT_DISABLE_OFFSET PREEMPT_OFFSET + #else +-# define PREEMPT_DISABLE_OFFSET 0 ++# define PREEMPT_DISABLE_OFFSET 0 + #endif + + /* ++ * The preempt_count offset after spin_lock() ++ */ ++#define PREEMPT_LOCK_OFFSET PREEMPT_DISABLE_OFFSET ++ ++/* + * The preempt_count offset needed for things like: + * + * spin_lock_bh() +@@ -103,7 +111,7 @@ + * + * Work as expected. + */ +-#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_DISABLE_OFFSET) ++#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_LOCK_OFFSET) + + /* + * Are we running in atomic context? 
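The page_memcg()/set_page_memcg() accessors added to include/linux/mm.h above exist so that common code, the mm/migrate.c hunk later in this patch in particular, can transfer memcg dirty-page accounting without scattering #ifdef CONFIG_MEMCG. The pattern in miniature (CONFIG_MEMCG_LIKE and the struct layout are illustrative, not the kernel's):

    struct mem_cgroup;

    struct page_like {
        unsigned long flags;                 /* placeholder for the rest */
    #ifdef CONFIG_MEMCG_LIKE
        struct mem_cgroup *mem_cgroup;
    #endif
    };

    #ifdef CONFIG_MEMCG_LIKE
    static inline struct mem_cgroup *page_memcg(struct page_like *page)
    {
        return page->mem_cgroup;
    }
    static inline void set_page_memcg(struct page_like *page, struct mem_cgroup *memcg)
    {
        page->mem_cgroup = memcg;
    }
    #else
    /* stubs: callers compile unchanged when the feature is configured out */
    static inline struct mem_cgroup *page_memcg(struct page_like *page)
    {
        (void)page;
        return 0;
    }
    static inline void set_page_memcg(struct page_like *page, struct mem_cgroup *memcg)
    {
        (void)page;
        (void)memcg;
    }
    #endif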
WARNING: this macro cannot +@@ -124,7 +132,8 @@ + #if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_PREEMPT_TRACER) + extern void preempt_count_add(int val); + extern void preempt_count_sub(int val); +-#define preempt_count_dec_and_test() ({ preempt_count_sub(1); should_resched(); }) ++#define preempt_count_dec_and_test() \ ++ ({ preempt_count_sub(1); should_resched(0); }) + #else + #define preempt_count_add(val) __preempt_count_add(val) + #define preempt_count_sub(val) __preempt_count_sub(val) +@@ -184,7 +193,7 @@ do { \ + + #define preempt_check_resched() \ + do { \ +- if (should_resched()) \ ++ if (should_resched(0)) \ + __preempt_schedule(); \ + } while (0) + +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 04b5ada..bfca8aa 100644 +--- a/include/linux/sched.h ++++ b/include/linux/sched.h +@@ -754,6 +754,18 @@ struct signal_struct { + unsigned audit_tty_log_passwd; + struct tty_audit_buf *tty_audit_buf; + #endif ++#ifdef CONFIG_CGROUPS ++ /* ++ * group_rwsem prevents new tasks from entering the threadgroup and ++ * member tasks from exiting,a more specifically, setting of ++ * PF_EXITING. fork and exit paths are protected with this rwsem ++ * using threadgroup_change_begin/end(). Users which require ++ * threadgroup to remain stable should use threadgroup_[un]lock() ++ * which also takes care of exec path. Currently, cgroup is the ++ * only user. ++ */ ++ struct rw_semaphore group_rwsem; ++#endif + + oom_flags_t oom_flags; + short oom_score_adj; /* OOM kill score adjustment */ +@@ -2897,12 +2909,6 @@ extern int _cond_resched(void); + + extern int __cond_resched_lock(spinlock_t *lock); + +-#ifdef CONFIG_PREEMPT_COUNT +-#define PREEMPT_LOCK_OFFSET PREEMPT_OFFSET +-#else +-#define PREEMPT_LOCK_OFFSET 0 +-#endif +- + #define cond_resched_lock(lock) ({ \ + ___might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);\ + __cond_resched_lock(lock); \ +diff --git a/include/linux/security.h b/include/linux/security.h +index 79d85dd..2f4c1f7 100644 +--- a/include/linux/security.h ++++ b/include/linux/security.h +@@ -946,7 +946,7 @@ static inline int security_task_prctl(int option, unsigned long arg2, + unsigned long arg4, + unsigned long arg5) + { +- return cap_task_prctl(option, arg2, arg3, arg3, arg5); ++ return cap_task_prctl(option, arg2, arg3, arg4, arg5); + } + + static inline void security_task_to_inode(struct task_struct *p, struct inode *inode) +diff --git a/include/net/netfilter/br_netfilter.h b/include/net/netfilter/br_netfilter.h +index bab824b..d4c6b5f 100644 +--- a/include/net/netfilter/br_netfilter.h ++++ b/include/net/netfilter/br_netfilter.h +@@ -59,7 +59,7 @@ static inline unsigned int + br_nf_pre_routing_ipv6(const struct nf_hook_ops *ops, struct sk_buff *skb, + const struct nf_hook_state *state) + { +- return NF_DROP; ++ return NF_ACCEPT; + } + #endif + +diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h +index 37cd391..4023c4c 100644 +--- a/include/net/netfilter/nf_conntrack.h ++++ b/include/net/netfilter/nf_conntrack.h +@@ -292,6 +292,7 @@ extern unsigned int nf_conntrack_hash_rnd; + void init_nf_conntrack_hash_rnd(void); + + struct nf_conn *nf_ct_tmpl_alloc(struct net *net, u16 zone, gfp_t flags); ++void nf_ct_tmpl_free(struct nf_conn *tmpl); + + #define NF_CT_STAT_INC(net, count) __this_cpu_inc((net)->ct.stat->count) + #define NF_CT_STAT_INC_ATOMIC(net, count) this_cpu_inc((net)->ct.stat->count) +diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h +index 2a24668..aa8bee7 100644 +--- 
a/include/net/netfilter/nf_tables.h ++++ b/include/net/netfilter/nf_tables.h +@@ -125,7 +125,7 @@ static inline enum nft_data_types nft_dreg_to_type(enum nft_registers reg) + + static inline enum nft_registers nft_type_to_reg(enum nft_data_types type) + { +- return type == NFT_DATA_VERDICT ? NFT_REG_VERDICT : NFT_REG_1; ++ return type == NFT_DATA_VERDICT ? NFT_REG_VERDICT : NFT_REG_1 * NFT_REG_SIZE / NFT_REG32_SIZE; + } + + unsigned int nft_parse_register(const struct nlattr *attr); +diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h +index 0aedbb2..7e7f887 100644 +--- a/include/target/iscsi/iscsi_target_core.h ++++ b/include/target/iscsi/iscsi_target_core.h +@@ -776,7 +776,6 @@ struct iscsi_np { + enum iscsi_timer_flags_table np_login_timer_flags; + u32 np_exports; + enum np_flags_table np_flags; +- unsigned char np_ip[IPV6_ADDRESS_SPACE]; + u16 np_port; + spinlock_t np_thread_lock; + struct completion np_restart_comp; +diff --git a/include/xen/interface/sched.h b/include/xen/interface/sched.h +index 9ce0839..f184909 100644 +--- a/include/xen/interface/sched.h ++++ b/include/xen/interface/sched.h +@@ -107,5 +107,13 @@ struct sched_watchdog { + #define SHUTDOWN_suspend 2 /* Clean up, save suspend info, kill. */ + #define SHUTDOWN_crash 3 /* Tell controller we've crashed. */ + #define SHUTDOWN_watchdog 4 /* Restart because watchdog time expired. */ ++/* ++ * Domain asked to perform 'soft reset' for it. The expected behavior is to ++ * reset internal Xen state for the domain returning it to the point where it ++ * was created but leaving the domain's memory contents and vCPU contexts ++ * intact. This will allow the domain to start over and set up all Xen specific ++ * interfaces again. ++ */ ++#define SHUTDOWN_soft_reset 5 + + #endif /* __XEN_PUBLIC_SCHED_H__ */ +diff --git a/ipc/msg.c b/ipc/msg.c +index 66c4f56..1471db9 100644 +--- a/ipc/msg.c ++++ b/ipc/msg.c +@@ -137,13 +137,6 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params) + return retval; + } + +- /* ipc_addid() locks msq upon success. */ +- id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni); +- if (id < 0) { +- ipc_rcu_putref(msq, msg_rcu_free); +- return id; +- } +- + msq->q_stime = msq->q_rtime = 0; + msq->q_ctime = get_seconds(); + msq->q_cbytes = msq->q_qnum = 0; +@@ -153,6 +146,13 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params) + INIT_LIST_HEAD(&msq->q_receivers); + INIT_LIST_HEAD(&msq->q_senders); + ++ /* ipc_addid() locks msq upon success. 
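The security_task_prctl fixup a little further up (arg3 passed twice, arg4 dropped) is a pure argument-forwarding typo, a bug class that thin wrappers invite because the compiler cannot object: the call is type-correct either way. A minimal reduction (names and the backend's behaviour are invented for illustration):

    #include <stdio.h>

    static long backend(long option, long arg2, long arg3, long arg4, long arg5)
    {
        (void)option; (void)arg2; (void)arg3; (void)arg5;
        return arg4;                        /* pretend the backend uses arg4 */
    }

    static long wrapper_buggy(long option, long arg2, long arg3, long arg4, long arg5)
    {
        (void)arg4;
        return backend(option, arg2, arg3, arg3, arg5);   /* arg3 twice: bug */
    }

    static long wrapper_fixed(long option, long arg2, long arg3, long arg4, long arg5)
    {
        return backend(option, arg2, arg3, arg4, arg5);
    }

    int main(void)
    {
        printf("buggy: %ld, fixed: %ld\n",
               wrapper_buggy(1, 2, 3, 4, 5), wrapper_fixed(1, 2, 3, 4, 5));
        return 0;
    }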
*/ ++ id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni); ++ if (id < 0) { ++ ipc_rcu_putref(msq, msg_rcu_free); ++ return id; ++ } ++ + ipc_unlock_object(&msq->q_perm); + rcu_read_unlock(); + +diff --git a/ipc/shm.c b/ipc/shm.c +index 4aef24d..0e61fd4 100644 +--- a/ipc/shm.c ++++ b/ipc/shm.c +@@ -551,12 +551,6 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params) + if (IS_ERR(file)) + goto no_file; + +- id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni); +- if (id < 0) { +- error = id; +- goto no_id; +- } +- + shp->shm_cprid = task_tgid_vnr(current); + shp->shm_lprid = 0; + shp->shm_atim = shp->shm_dtim = 0; +@@ -565,6 +559,13 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params) + shp->shm_nattch = 0; + shp->shm_file = file; + shp->shm_creator = current; ++ ++ id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni); ++ if (id < 0) { ++ error = id; ++ goto no_id; ++ } ++ + list_add(&shp->shm_clist, ¤t->sysvshm.shm_clist); + + /* +diff --git a/ipc/util.c b/ipc/util.c +index be42300..0f401d9 100644 +--- a/ipc/util.c ++++ b/ipc/util.c +@@ -237,6 +237,10 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size) + rcu_read_lock(); + spin_lock(&new->lock); + ++ current_euid_egid(&euid, &egid); ++ new->cuid = new->uid = euid; ++ new->gid = new->cgid = egid; ++ + id = idr_alloc(&ids->ipcs_idr, new, + (next_id < 0) ? 0 : ipcid_to_idx(next_id), 0, + GFP_NOWAIT); +@@ -249,10 +253,6 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size) + + ids->in_use++; + +- current_euid_egid(&euid, &egid); +- new->cuid = new->uid = euid; +- new->gid = new->cgid = egid; +- + if (next_id < 0) { + new->seq = ids->seq++; + if (ids->seq > IPCID_SEQ_MAX) +diff --git a/kernel/cgroup.c b/kernel/cgroup.c +index c6c4240..fe6f855 100644 +--- a/kernel/cgroup.c ++++ b/kernel/cgroup.c +@@ -46,7 +46,6 @@ + #include <linux/slab.h> + #include <linux/spinlock.h> + #include <linux/rwsem.h> +-#include <linux/percpu-rwsem.h> + #include <linux/string.h> + #include <linux/sort.h> + #include <linux/kmod.h> +@@ -104,8 +103,6 @@ static DEFINE_SPINLOCK(cgroup_idr_lock); + */ + static DEFINE_SPINLOCK(release_agent_path_lock); + +-struct percpu_rw_semaphore cgroup_threadgroup_rwsem; +- + #define cgroup_assert_mutex_or_rcu_locked() \ + rcu_lockdep_assert(rcu_read_lock_held() || \ + lockdep_is_held(&cgroup_mutex), \ +@@ -870,6 +867,48 @@ static struct css_set *find_css_set(struct css_set *old_cset, + return cset; + } + ++void cgroup_threadgroup_change_begin(struct task_struct *tsk) ++{ ++ down_read(&tsk->signal->group_rwsem); ++} ++ ++void cgroup_threadgroup_change_end(struct task_struct *tsk) ++{ ++ up_read(&tsk->signal->group_rwsem); ++} ++ ++/** ++ * threadgroup_lock - lock threadgroup ++ * @tsk: member task of the threadgroup to lock ++ * ++ * Lock the threadgroup @tsk belongs to. No new task is allowed to enter ++ * and member tasks aren't allowed to exit (as indicated by PF_EXITING) or ++ * change ->group_leader/pid. This is useful for cases where the threadgroup ++ * needs to stay stable across blockable operations. ++ * ++ * fork and exit explicitly call threadgroup_change_{begin|end}() for ++ * synchronization. While held, no new task will be added to threadgroup ++ * and no existing live task will have its PF_EXITING set. ++ * ++ * de_thread() does threadgroup_change_{begin|end}() when a non-leader ++ * sub-thread becomes a new leader. 
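The three ipc/ hunks above all make the same move: ipc_addid() inserts the new object into the idr, where a concurrent lookup can already find and lock it, so every field (including the uid/gid credentials that ipc/util.c now sets before idr_alloc()) must be fully initialized before that publication, never after. A C11 analogue of "initialize, then publish" (the registry pointer plays the role of the idr slot; all names are illustrative):

    #include <stdatomic.h>
    #include <stddef.h>

    struct queue {
        long ctime;
        long qnum;
    };

    /* once this pointer is non-NULL, other threads may find and use the
     * object, exactly like an id becoming visible in the ipc idr */
    static _Atomic(struct queue *) registry = NULL;

    static void publish_wrong(struct queue *q)
    {
        atomic_store(&registry, q);     /* published... */
        q->ctime = 42;                  /* ...while still half-initialized */
        q->qnum = 0;
    }

    static void publish_right(struct queue *q)
    {
        q->ctime = 42;                  /* fully initialize first */
        q->qnum = 0;
        atomic_store(&registry, q);     /* only then make it findable */
    }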
++ */ ++static void threadgroup_lock(struct task_struct *tsk) ++{ ++ down_write(&tsk->signal->group_rwsem); ++} ++ ++/** ++ * threadgroup_unlock - unlock threadgroup ++ * @tsk: member task of the threadgroup to unlock ++ * ++ * Reverse threadgroup_lock(). ++ */ ++static inline void threadgroup_unlock(struct task_struct *tsk) ++{ ++ up_write(&tsk->signal->group_rwsem); ++} ++ + static struct cgroup_root *cgroup_root_from_kf(struct kernfs_root *kf_root) + { + struct cgroup *root_cgrp = kf_root->kn->priv; +@@ -2066,9 +2105,9 @@ static void cgroup_task_migrate(struct cgroup *old_cgrp, + lockdep_assert_held(&css_set_rwsem); + + /* +- * We are synchronized through cgroup_threadgroup_rwsem against +- * PF_EXITING setting such that we can't race against cgroup_exit() +- * changing the css_set to init_css_set and dropping the old one. ++ * We are synchronized through threadgroup_lock() against PF_EXITING ++ * setting such that we can't race against cgroup_exit() changing the ++ * css_set to init_css_set and dropping the old one. + */ + WARN_ON_ONCE(tsk->flags & PF_EXITING); + old_cset = task_css_set(tsk); +@@ -2125,11 +2164,10 @@ static void cgroup_migrate_finish(struct list_head *preloaded_csets) + * @src_cset and add it to @preloaded_csets, which should later be cleaned + * up by cgroup_migrate_finish(). + * +- * This function may be called without holding cgroup_threadgroup_rwsem +- * even if the target is a process. Threads may be created and destroyed +- * but as long as cgroup_mutex is not dropped, no new css_set can be put +- * into play and the preloaded css_sets are guaranteed to cover all +- * migrations. ++ * This function may be called without holding threadgroup_lock even if the ++ * target is a process. Threads may be created and destroyed but as long ++ * as cgroup_mutex is not dropped, no new css_set can be put into play and ++ * the preloaded css_sets are guaranteed to cover all migrations. + */ + static void cgroup_migrate_add_src(struct css_set *src_cset, + struct cgroup *dst_cgrp, +@@ -2232,7 +2270,7 @@ err: + * @threadgroup: whether @leader points to the whole process or a single task + * + * Migrate a process or task denoted by @leader to @cgrp. If migrating a +- * process, the caller must be holding cgroup_threadgroup_rwsem. The ++ * process, the caller must be holding threadgroup_lock of @leader. The + * caller is also responsible for invoking cgroup_migrate_add_src() and + * cgroup_migrate_prepare_dst() on the targets before invoking this + * function and following up with cgroup_migrate_finish(). +@@ -2360,7 +2398,7 @@ out_release_tset: + * @leader: the task or the leader of the threadgroup to be attached + * @threadgroup: attach the whole threadgroup? + * +- * Call holding cgroup_mutex and cgroup_threadgroup_rwsem. ++ * Call holding cgroup_mutex and threadgroup_lock of @leader. 
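This cgroup hunk reverts the global percpu rwsem back to a per-signal_struct rw_semaphore: the frequent paths (fork and exit) take it shared through cgroup_threadgroup_change_begin/end(), while the rare whole-process cgroup attach takes it exclusive so that no thread can join or leave the group mid-migration. The locking shape, as a userspace analogue (pthread rwlock standing in for the kernel rwsem):

    #include <pthread.h>

    /* one lock per thread group; frequent operations take it shared,
     * the rare whole-group operation takes it exclusive */
    static pthread_rwlock_t group_rwsem = PTHREAD_RWLOCK_INITIALIZER;

    static void threadgroup_change_begin(void)  /* fork/exit side */
    {
        pthread_rwlock_rdlock(&group_rwsem);
    }

    static void threadgroup_change_end(void)
    {
        pthread_rwlock_unlock(&group_rwsem);
    }

    static void threadgroup_lock(void)          /* cgroup attach side */
    {
        pthread_rwlock_wrlock(&group_rwsem);
    }

    static void threadgroup_unlock(void)
    {
        pthread_rwlock_unlock(&group_rwsem);
    }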
+ */ + static int cgroup_attach_task(struct cgroup *dst_cgrp, + struct task_struct *leader, bool threadgroup) +@@ -2452,13 +2490,14 @@ static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf, + if (!cgrp) + return -ENODEV; + +- percpu_down_write(&cgroup_threadgroup_rwsem); ++retry_find_task: + rcu_read_lock(); + if (pid) { + tsk = find_task_by_vpid(pid); + if (!tsk) { ++ rcu_read_unlock(); + ret = -ESRCH; +- goto out_unlock_rcu; ++ goto out_unlock_cgroup; + } + } else { + tsk = current; +@@ -2474,23 +2513,37 @@ static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf, + */ + if (tsk == kthreadd_task || (tsk->flags & PF_NO_SETAFFINITY)) { + ret = -EINVAL; +- goto out_unlock_rcu; ++ rcu_read_unlock(); ++ goto out_unlock_cgroup; + } + + get_task_struct(tsk); + rcu_read_unlock(); + ++ threadgroup_lock(tsk); ++ if (threadgroup) { ++ if (!thread_group_leader(tsk)) { ++ /* ++ * a race with de_thread from another thread's exec() ++ * may strip us of our leadership, if this happens, ++ * there is no choice but to throw this task away and ++ * try again; this is ++ * "double-double-toil-and-trouble-check locking". ++ */ ++ threadgroup_unlock(tsk); ++ put_task_struct(tsk); ++ goto retry_find_task; ++ } ++ } ++ + ret = cgroup_procs_write_permission(tsk, cgrp, of); + if (!ret) + ret = cgroup_attach_task(cgrp, tsk, threadgroup); + +- put_task_struct(tsk); +- goto out_unlock_threadgroup; ++ threadgroup_unlock(tsk); + +-out_unlock_rcu: +- rcu_read_unlock(); +-out_unlock_threadgroup: +- percpu_up_write(&cgroup_threadgroup_rwsem); ++ put_task_struct(tsk); ++out_unlock_cgroup: + cgroup_kn_unlock(of->kn); + return ret ?: nbytes; + } +@@ -2635,8 +2688,6 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp) + + lockdep_assert_held(&cgroup_mutex); + +- percpu_down_write(&cgroup_threadgroup_rwsem); +- + /* look up all csses currently attached to @cgrp's subtree */ + down_read(&css_set_rwsem); + css_for_each_descendant_pre(css, cgroup_css(cgrp, NULL)) { +@@ -2692,8 +2743,17 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp) + goto out_finish; + last_task = task; + ++ threadgroup_lock(task); ++ /* raced against de_thread() from another thread? 
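The retry_find_task loop restored in __cgroup_procs_write above is the "unlocked lookup, lock, revalidate, else retry" pattern (the source's own "double-double-toil-and-trouble-check locking"): the task is found without the threadgroup lock, so after taking it the code must re-check that the task is still the group leader; a racing exec()/de_thread() may have stripped that status. The shape of the loop, with stub declarations that are illustrative only, not kernel API:

    #include <stddef.h>

    struct task;
    struct task *lookup_task(int pid);          /* unlocked lookup */
    int  is_group_leader(struct task *tsk);
    void threadgroup_lock(struct task *tsk);
    void threadgroup_unlock(struct task *tsk);
    void put_task(struct task *tsk);

    static struct task *find_locked_leader(int pid)
    {
        struct task *tsk;

        for (;;) {
            tsk = lookup_task(pid);             /* done without the lock */
            if (!tsk)
                return NULL;
            threadgroup_lock(tsk);
            if (is_group_leader(tsk))           /* still true under the lock? */
                return tsk;                     /* yes: caller holds the lock */
            threadgroup_unlock(tsk);            /* no: lost the race, retry */
            put_task(tsk);
        }
    }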
*/ ++ if (!thread_group_leader(task)) { ++ threadgroup_unlock(task); ++ put_task_struct(task); ++ continue; ++ } ++ + ret = cgroup_migrate(src_cset->dfl_cgrp, task, true); + ++ threadgroup_unlock(task); + put_task_struct(task); + + if (WARN(ret, "cgroup: failed to update controllers for the default hierarchy (%d), further operations may crash or hang\n", ret)) +@@ -2703,7 +2763,6 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp) + + out_finish: + cgroup_migrate_finish(&preloaded_csets); +- percpu_up_write(&cgroup_threadgroup_rwsem); + return ret; + } + +@@ -5013,7 +5072,6 @@ int __init cgroup_init(void) + unsigned long key; + int ssid, err; + +- BUG_ON(percpu_init_rwsem(&cgroup_threadgroup_rwsem)); + BUG_ON(cgroup_init_cftypes(NULL, cgroup_dfl_base_files)); + BUG_ON(cgroup_init_cftypes(NULL, cgroup_legacy_base_files)); + +diff --git a/kernel/fork.c b/kernel/fork.c +index 26a70dc..e769c8c 100644 +--- a/kernel/fork.c ++++ b/kernel/fork.c +@@ -1146,6 +1146,10 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk) + tty_audit_fork(sig); + sched_autogroup_fork(sig); + ++#ifdef CONFIG_CGROUPS ++ init_rwsem(&sig->group_rwsem); ++#endif ++ + sig->oom_score_adj = current->signal->oom_score_adj; + sig->oom_score_adj_min = current->signal->oom_score_adj_min; + +diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c +index 0e97c14..4e6267a 100644 +--- a/kernel/irq/proc.c ++++ b/kernel/irq/proc.c +@@ -12,6 +12,7 @@ + #include <linux/seq_file.h> + #include <linux/interrupt.h> + #include <linux/kernel_stat.h> ++#include <linux/mutex.h> + + #include "internals.h" + +@@ -323,18 +324,29 @@ void register_handler_proc(unsigned int irq, struct irqaction *action) + + void register_irq_proc(unsigned int irq, struct irq_desc *desc) + { ++ static DEFINE_MUTEX(register_lock); + char name [MAX_NAMELEN]; + +- if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip) || desc->dir) ++ if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip)) + return; + ++ /* ++ * irq directories are registered only when a handler is ++ * added, not when the descriptor is created, so multiple ++ * tasks might try to register at the same time. ++ */ ++ mutex_lock(®ister_lock); ++ ++ if (desc->dir) ++ goto out_unlock; ++ + memset(name, 0, MAX_NAMELEN); + sprintf(name, "%d", irq); + + /* create /proc/irq/1234 */ + desc->dir = proc_mkdir(name, root_irq_dir); + if (!desc->dir) +- return; ++ goto out_unlock; + + #ifdef CONFIG_SMP + /* create /proc/irq/<irq>/smp_affinity */ +@@ -355,6 +367,9 @@ void register_irq_proc(unsigned int irq, struct irq_desc *desc) + + proc_create_data("spurious", 0444, desc->dir, + &irq_spurious_proc_fops, (void *)(long)irq); ++ ++out_unlock: ++ mutex_unlock(®ister_lock); + } + + void unregister_irq_proc(unsigned int irq, struct irq_desc *desc) +diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c +index 38c4920..8ed0161 100644 +--- a/kernel/locking/qspinlock.c ++++ b/kernel/locking/qspinlock.c +@@ -289,7 +289,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val) + if (pv_enabled()) + goto queue; + +- if (virt_queued_spin_lock(lock)) ++ if (virt_spin_lock(lock)) + return; + + /* +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index e967343..6776631 100644 +--- a/kernel/sched/core.c ++++ b/kernel/sched/core.c +@@ -2461,11 +2461,11 @@ static struct rq *finish_task_switch(struct task_struct *prev) + * If a task dies, then it sets TASK_DEAD in tsk->state and calls + * schedule one last time. 
The schedule call will never return, and + * the scheduled task must drop that reference. +- * The test for TASK_DEAD must occur while the runqueue locks are +- * still held, otherwise prev could be scheduled on another cpu, die +- * there before we look at prev->state, and then the reference would +- * be dropped twice. +- * Manfred Spraul <manfred@colorfullife.com> ++ * ++ * We must observe prev->state before clearing prev->on_cpu (in ++ * finish_lock_switch), otherwise a concurrent wakeup can get prev ++ * running on another CPU and we could rave with its RUNNING -> DEAD ++ * transition, resulting in a double drop. + */ + prev_state = prev->state; + vtime_task_switch(prev); +@@ -2614,13 +2614,20 @@ unsigned long nr_running(void) + + /* + * Check if only the current task is running on the cpu. ++ * ++ * Caution: this function does not check that the caller has disabled ++ * preemption, thus the result might have a time-of-check-to-time-of-use ++ * race. The caller is responsible to use it correctly, for example: ++ * ++ * - from a non-preemptable section (of course) ++ * ++ * - from a thread that is bound to a single CPU ++ * ++ * - in a loop with very short iterations (e.g. a polling loop) + */ + bool single_task_running(void) + { +- if (cpu_rq(smp_processor_id())->nr_running == 1) +- return true; +- else +- return false; ++ return raw_rq()->nr_running == 1; + } + EXPORT_SYMBOL(single_task_running); + +@@ -4492,7 +4499,7 @@ SYSCALL_DEFINE0(sched_yield) + + int __sched _cond_resched(void) + { +- if (should_resched()) { ++ if (should_resched(0)) { + preempt_schedule_common(); + return 1; + } +@@ -4510,7 +4517,7 @@ EXPORT_SYMBOL(_cond_resched); + */ + int __cond_resched_lock(spinlock_t *lock) + { +- int resched = should_resched(); ++ int resched = should_resched(PREEMPT_LOCK_OFFSET); + int ret = 0; + + lockdep_assert_held(lock); +@@ -4532,7 +4539,7 @@ int __sched __cond_resched_softirq(void) + { + BUG_ON(!in_softirq()); + +- if (should_resched()) { ++ if (should_resched(SOFTIRQ_DISABLE_OFFSET)) { + local_bh_enable(); + preempt_schedule_common(); + local_bh_disable(); +diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h +index 84d4879..08ab96b 100644 +--- a/kernel/sched/sched.h ++++ b/kernel/sched/sched.h +@@ -1091,9 +1091,10 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev) + * After ->on_cpu is cleared, the task can be moved to a different CPU. + * We must ensure this doesn't happen until the switch is completely + * finished. ++ * ++ * Pairs with the control dependency and rmb in try_to_wake_up(). + */ +- smp_wmb(); +- prev->on_cpu = 0; ++ smp_store_release(&prev->on_cpu, 0); + #endif + #ifdef CONFIG_DEBUG_SPINLOCK + /* this is a valid case when another task releases the spinlock */ +diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c +index 841b72f..3a38775 100644 +--- a/kernel/time/clocksource.c ++++ b/kernel/time/clocksource.c +@@ -217,7 +217,7 @@ static void clocksource_watchdog(unsigned long data) + continue; + + /* Check the deviation from the watchdog clocksource. 
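The two scheduler hunks above pair up: finish_lock_switch() now clears prev->on_cpu with smp_store_release() so that everything the outgoing CPU wrote, the final prev->state in particular, is visible to any CPU that observes on_cpu == 0 with acquire semantics in try_to_wake_up(). With only the old smp_wmb(), a waker could see on_cpu == 0 yet a stale state and race the RUNNING -> DEAD transition into a double reference drop. A C11 analogue of the handover (illustrative types, not the kernel's):

    #include <stdatomic.h>
    #include <stdbool.h>

    struct task_like {
        int state;                 /* plain field, published via on_cpu */
        _Atomic int on_cpu;
    };

    static void finish_switch(struct task_like *prev, int final_state)
    {
        prev->state = final_state;
        /* release: the state store above is ordered before on_cpu = 0 */
        atomic_store_explicit(&prev->on_cpu, 0, memory_order_release);
    }

    static bool waker_may_proceed(struct task_like *p)
    {
        /* acquire: seeing on_cpu == 0 guarantees the final state is seen too */
        return atomic_load_explicit(&p->on_cpu, memory_order_acquire) == 0;
    }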
*/ +- if ((abs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD)) { ++ if (abs64(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD) { + pr_warn("timekeeping watchdog: Marking clocksource '%s' as unstable because the skew is too large:\n", + cs->name); + pr_warn(" '%s' wd_now: %llx wd_last: %llx mask: %llx\n", +diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c +index bca3667..a20d411 100644 +--- a/kernel/time/timekeeping.c ++++ b/kernel/time/timekeeping.c +@@ -1607,7 +1607,7 @@ static __always_inline void timekeeping_freqadjust(struct timekeeper *tk, + negative = (tick_error < 0); + + /* Sort out the magnitude of the correction */ +- tick_error = abs(tick_error); ++ tick_error = abs64(tick_error); + for (adj = 0; tick_error > interval; adj++) + tick_error >>= 1; + +diff --git a/lib/iommu-common.c b/lib/iommu-common.c +index ff19f66..b1c93e9 100644 +--- a/lib/iommu-common.c ++++ b/lib/iommu-common.c +@@ -21,8 +21,7 @@ static DEFINE_PER_CPU(unsigned int, iommu_hash_common); + + static inline bool need_flush(struct iommu_map_table *iommu) + { +- return (iommu->lazy_flush != NULL && +- (iommu->flags & IOMMU_NEED_FLUSH) != 0); ++ return ((iommu->flags & IOMMU_NEED_FLUSH) != 0); + } + + static inline void set_flush(struct iommu_map_table *iommu) +@@ -211,7 +210,8 @@ unsigned long iommu_tbl_range_alloc(struct device *dev, + goto bail; + } + } +- if (n < pool->hint || need_flush(iommu)) { ++ if (iommu->lazy_flush && ++ (n < pool->hint || need_flush(iommu))) { + clear_flush(iommu); + iommu->lazy_flush(iommu); + } +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index a8c3087..62c1ec5 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -2974,6 +2974,14 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma, + continue; + + /* ++ * Shared VMAs have their own reserves and do not affect ++ * MAP_PRIVATE accounting but it is possible that a shared ++ * VMA is using the same page so check and skip such VMAs. ++ */ ++ if (iter_vma->vm_flags & VM_MAYSHARE) ++ continue; ++ ++ /* + * Unmap the page from other VMAs without their own reserves. + * They get marked to be SIGKILLed if they fault in these + * areas. This is because a future no-page fault on this VMA +diff --git a/mm/memcontrol.c b/mm/memcontrol.c +index acb93c5..237d468 100644 +--- a/mm/memcontrol.c ++++ b/mm/memcontrol.c +@@ -806,12 +806,14 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz) + } + + /* ++ * Return page count for single (non recursive) @memcg. ++ * + * Implementation Note: reading percpu statistics for memcg. + * + * Both of vmstat[] and percpu_counter has threshold and do periodic + * synchronization to implement "quick" read. There are trade-off between + * reading cost and precision of value. Then, we may have a chance to implement +- * a periodic synchronizion of counter in memcg's counter. ++ * a periodic synchronization of counter in memcg's counter. + * + * But this _read() function is used for user interface now. The user accounts + * memory usage by memory cgroup and he _always_ requires exact value because +@@ -821,17 +823,24 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz) + * + * If there are kernel internal actions which can make use of some not-exact + * value, and reading all cpu value can be performance bottleneck in some +- * common workload, threashold and synchonization as vmstat[] should be ++ * common workload, threshold and synchronization as vmstat[] should be + * implemented. 
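Both the clocksource watchdog and timekeeping hunks above swap abs() for abs64() for the same reason: abs() takes and returns int, so a 64-bit nanosecond skew is silently truncated, and a deviation large enough to mark a clocksource unstable can wrap to a tiny value that passes the threshold check. A standalone demonstration (llabs() being the userspace counterpart of the kernel's abs64()):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int main(void)
    {
        /* a skew of ~4.3 seconds in nanoseconds does not fit in 32 bits */
        int64_t delta = 4300000000LL;

        printf("abs()   sees: %d ns\n", abs((int)delta));   /* ~5 ms: wraps */
        printf("llabs() sees: %lld ns\n", llabs(delta));    /* full value */
        return 0;
    }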
+ */ +-static long mem_cgroup_read_stat(struct mem_cgroup *memcg, +- enum mem_cgroup_stat_index idx) ++static unsigned long ++mem_cgroup_read_stat(struct mem_cgroup *memcg, enum mem_cgroup_stat_index idx) + { + long val = 0; + int cpu; + ++ /* Per-cpu values can be negative, use a signed accumulator */ + for_each_possible_cpu(cpu) + val += per_cpu(memcg->stat->count[idx], cpu); ++ /* ++ * Summing races with updates, so val may be negative. Avoid exposing ++ * transient negative values. ++ */ ++ if (val < 0) ++ val = 0; + return val; + } + +@@ -1498,7 +1507,7 @@ void mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p) + for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) { + if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account) + continue; +- pr_cont(" %s:%ldKB", mem_cgroup_stat_names[i], ++ pr_cont(" %s:%luKB", mem_cgroup_stat_names[i], + K(mem_cgroup_read_stat(iter, i))); + } + +@@ -3119,14 +3128,11 @@ static unsigned long tree_stat(struct mem_cgroup *memcg, + enum mem_cgroup_stat_index idx) + { + struct mem_cgroup *iter; +- long val = 0; ++ unsigned long val = 0; + +- /* Per-cpu values can be negative, use a signed accumulator */ + for_each_mem_cgroup_tree(iter, memcg) + val += mem_cgroup_read_stat(iter, idx); + +- if (val < 0) /* race ? */ +- val = 0; + return val; + } + +@@ -3469,7 +3475,7 @@ static int memcg_stat_show(struct seq_file *m, void *v) + for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) { + if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account) + continue; +- seq_printf(m, "%s %ld\n", mem_cgroup_stat_names[i], ++ seq_printf(m, "%s %lu\n", mem_cgroup_stat_names[i], + mem_cgroup_read_stat(memcg, i) * PAGE_SIZE); + } + +@@ -3494,13 +3500,13 @@ static int memcg_stat_show(struct seq_file *m, void *v) + (u64)memsw * PAGE_SIZE); + + for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) { +- long long val = 0; ++ unsigned long long val = 0; + + if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account) + continue; + for_each_mem_cgroup_tree(mi, memcg) + val += mem_cgroup_read_stat(mi, i) * PAGE_SIZE; +- seq_printf(m, "total_%s %lld\n", mem_cgroup_stat_names[i], val); ++ seq_printf(m, "total_%s %llu\n", mem_cgroup_stat_names[i], val); + } + + for (i = 0; i < MEM_CGROUP_EVENTS_NSTATS; i++) { +diff --git a/mm/migrate.c b/mm/migrate.c +index eb42671..fcb6204 100644 +--- a/mm/migrate.c ++++ b/mm/migrate.c +@@ -734,6 +734,15 @@ static int move_to_new_page(struct page *newpage, struct page *page, + if (PageSwapBacked(page)) + SetPageSwapBacked(newpage); + ++ /* ++ * Indirectly called below, migrate_page_copy() copies PG_dirty and thus ++ * needs newpage's memcg set to transfer memcg dirty page accounting. ++ * So perform memcg migration in two steps: ++ * 1. set newpage->mem_cgroup (here) ++ * 2. 
clear page->mem_cgroup (below) ++ */ ++ set_page_memcg(newpage, page_memcg(page)); ++ + mapping = page_mapping(page); + if (!mapping) + rc = migrate_page(mapping, newpage, page, mode); +@@ -750,9 +759,10 @@ static int move_to_new_page(struct page *newpage, struct page *page, + rc = fallback_migrate_page(mapping, newpage, page, mode); + + if (rc != MIGRATEPAGE_SUCCESS) { ++ set_page_memcg(newpage, NULL); + newpage->mapping = NULL; + } else { +- mem_cgroup_migrate(page, newpage, false); ++ set_page_memcg(page, NULL); + if (page_was_mapped) + remove_migration_ptes(page, newpage); + page->mapping = NULL; +@@ -1068,7 +1078,7 @@ out: + if (rc != MIGRATEPAGE_SUCCESS && put_new_page) + put_new_page(new_hpage, private); + else +- put_page(new_hpage); ++ putback_active_hugepage(new_hpage); + + if (result) { + if (rc) +diff --git a/mm/slab.c b/mm/slab.c +index bbd0b47..ae36028 100644 +--- a/mm/slab.c ++++ b/mm/slab.c +@@ -2190,9 +2190,16 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags) + size += BYTES_PER_WORD; + } + #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC) +- if (size >= kmalloc_size(INDEX_NODE + 1) +- && cachep->object_size > cache_line_size() +- && ALIGN(size, cachep->align) < PAGE_SIZE) { ++ /* ++ * To activate debug pagealloc, off-slab management is necessary ++ * requirement. In early phase of initialization, small sized slab ++ * doesn't get initialized so it would not be possible. So, we need ++ * to check size >= 256. It guarantees that all necessary small ++ * sized slab is initialized in current slab initialization sequence. ++ */ ++ if (!slab_early_init && size >= kmalloc_size(INDEX_NODE) && ++ size >= 256 && cachep->object_size > cache_line_size() && ++ ALIGN(size, cachep->align) < PAGE_SIZE) { + cachep->obj_offset += PAGE_SIZE - ALIGN(size, cachep->align); + size = PAGE_SIZE; + } +diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c +index 6d0b471..cc7d87d 100644 +--- a/net/batman-adv/distributed-arp-table.c ++++ b/net/batman-adv/distributed-arp-table.c +@@ -19,6 +19,7 @@ + #include "main.h" + + #include <linux/atomic.h> ++#include <linux/bitops.h> + #include <linux/byteorder/generic.h> + #include <linux/errno.h> + #include <linux/etherdevice.h> +@@ -453,7 +454,7 @@ static bool batadv_is_orig_node_eligible(struct batadv_dat_candidate *res, + int j; + + /* check if orig node candidate is running DAT */ +- if (!(candidate->capabilities & BATADV_ORIG_CAPA_HAS_DAT)) ++ if (!test_bit(BATADV_ORIG_CAPA_HAS_DAT, &candidate->capabilities)) + goto out; + + /* Check if this node has already been selected... 
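The batman-adv conversion starting above (and continued through the following multicast, network-coding, and translation-table hunks) replaces plain |= and &= updates of orig->capabilities with set_bit()/clear_bit(): the TVLV handlers can run concurrently, and a plain read-modify-write on a shared flags word can lose a concurrent update to a neighbouring bit. A C11 sketch of the difference (bit names invented for illustration):

    #include <stdatomic.h>

    enum { CAPA_HAS_DAT, CAPA_HAS_NC };          /* illustrative bit numbers */

    static unsigned long         capa_plain;     /* racy under concurrency */
    static _Atomic unsigned long capa_atomic;

    static void set_flag_racy(int bit)
    {
        capa_plain |= 1UL << bit;    /* load/modify/store: two threads doing
                                      * this on different bits can lose one */
    }

    static void set_flag_safe(int bit)
    {
        atomic_fetch_or(&capa_atomic, 1UL << bit);     /* indivisible RMW */
    }

    static void clear_flag_safe(int bit)
    {
        atomic_fetch_and(&capa_atomic, ~(1UL << bit));
    }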
*/ +@@ -713,9 +714,9 @@ static void batadv_dat_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv, + uint16_t tvlv_value_len) + { + if (flags & BATADV_TVLV_HANDLER_OGM_CIFNOTFND) +- orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_DAT; ++ clear_bit(BATADV_ORIG_CAPA_HAS_DAT, &orig->capabilities); + else +- orig->capabilities |= BATADV_ORIG_CAPA_HAS_DAT; ++ set_bit(BATADV_ORIG_CAPA_HAS_DAT, &orig->capabilities); + } + + /** +diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c +index 7aa480b..68a9554 100644 +--- a/net/batman-adv/multicast.c ++++ b/net/batman-adv/multicast.c +@@ -19,6 +19,8 @@ + #include "main.h" + + #include <linux/atomic.h> ++#include <linux/bitops.h> ++#include <linux/bug.h> + #include <linux/byteorder/generic.h> + #include <linux/errno.h> + #include <linux/etherdevice.h> +@@ -588,19 +590,26 @@ batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb, + * + * If the BATADV_MCAST_WANT_ALL_UNSNOOPABLES flag of this originator, + * orig, has toggled then this method updates counter and list accordingly. ++ * ++ * Caller needs to hold orig->mcast_handler_lock. + */ + static void batadv_mcast_want_unsnoop_update(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig, + uint8_t mcast_flags) + { ++ struct hlist_node *node = &orig->mcast_want_all_unsnoopables_node; ++ struct hlist_head *head = &bat_priv->mcast.want_all_unsnoopables_list; ++ + /* switched from flag unset to set */ + if (mcast_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES && + !(orig->mcast_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES)) { + atomic_inc(&bat_priv->mcast.num_want_all_unsnoopables); + + spin_lock_bh(&bat_priv->mcast.want_lists_lock); +- hlist_add_head_rcu(&orig->mcast_want_all_unsnoopables_node, +- &bat_priv->mcast.want_all_unsnoopables_list); ++ /* flag checks above + mcast_handler_lock prevents this */ ++ WARN_ON(!hlist_unhashed(node)); ++ ++ hlist_add_head_rcu(node, head); + spin_unlock_bh(&bat_priv->mcast.want_lists_lock); + /* switched from flag set to unset */ + } else if (!(mcast_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES) && +@@ -608,7 +617,10 @@ static void batadv_mcast_want_unsnoop_update(struct batadv_priv *bat_priv, + atomic_dec(&bat_priv->mcast.num_want_all_unsnoopables); + + spin_lock_bh(&bat_priv->mcast.want_lists_lock); +- hlist_del_rcu(&orig->mcast_want_all_unsnoopables_node); ++ /* flag checks above + mcast_handler_lock prevents this */ ++ WARN_ON(hlist_unhashed(node)); ++ ++ hlist_del_init_rcu(node); + spin_unlock_bh(&bat_priv->mcast.want_lists_lock); + } + } +@@ -621,19 +633,26 @@ static void batadv_mcast_want_unsnoop_update(struct batadv_priv *bat_priv, + * + * If the BATADV_MCAST_WANT_ALL_IPV4 flag of this originator, orig, has + * toggled then this method updates counter and list accordingly. ++ * ++ * Caller needs to hold orig->mcast_handler_lock. 
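The WARN_ON(hlist_unhashed(...)) sanity checks introduced in the multicast hunks that follow only work because deletion switches from hlist_del_rcu() to hlist_del_init_rcu(): the latter re-initializes the node's pprev after unlinking, so "is this node currently on a list" stays a meaningful question and a double add or delete under mcast_handler_lock is caught instead of corrupting the list. A simplified, non-RCU rendition of the convention (the real RCU variant keeps ->next valid for concurrent readers):

    #include <stddef.h>

    /* a node that is on a list has a valid pprev; "unhashed" means NULL */
    struct hnode {
        struct hnode *next;
        struct hnode **pprev;
    };

    static int unhashed(const struct hnode *n)
    {
        return n->pprev == NULL;     /* same test as hlist_unhashed() */
    }

    static void del_init(struct hnode *n)
    {
        if (unhashed(n))
            return;                  /* already off-list: nothing to unlink */
        *n->pprev = n->next;
        if (n->next)
            n->next->pprev = n->pprev;
        n->pprev = NULL;             /* re-init: unhashed() is true again */
        n->next = NULL;
    }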
+ */ + static void batadv_mcast_want_ipv4_update(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig, + uint8_t mcast_flags) + { ++ struct hlist_node *node = &orig->mcast_want_all_ipv4_node; ++ struct hlist_head *head = &bat_priv->mcast.want_all_ipv4_list; ++ + /* switched from flag unset to set */ + if (mcast_flags & BATADV_MCAST_WANT_ALL_IPV4 && + !(orig->mcast_flags & BATADV_MCAST_WANT_ALL_IPV4)) { + atomic_inc(&bat_priv->mcast.num_want_all_ipv4); + + spin_lock_bh(&bat_priv->mcast.want_lists_lock); +- hlist_add_head_rcu(&orig->mcast_want_all_ipv4_node, +- &bat_priv->mcast.want_all_ipv4_list); ++ /* flag checks above + mcast_handler_lock prevents this */ ++ WARN_ON(!hlist_unhashed(node)); ++ ++ hlist_add_head_rcu(node, head); + spin_unlock_bh(&bat_priv->mcast.want_lists_lock); + /* switched from flag set to unset */ + } else if (!(mcast_flags & BATADV_MCAST_WANT_ALL_IPV4) && +@@ -641,7 +660,10 @@ static void batadv_mcast_want_ipv4_update(struct batadv_priv *bat_priv, + atomic_dec(&bat_priv->mcast.num_want_all_ipv4); + + spin_lock_bh(&bat_priv->mcast.want_lists_lock); +- hlist_del_rcu(&orig->mcast_want_all_ipv4_node); ++ /* flag checks above + mcast_handler_lock prevents this */ ++ WARN_ON(hlist_unhashed(node)); ++ ++ hlist_del_init_rcu(node); + spin_unlock_bh(&bat_priv->mcast.want_lists_lock); + } + } +@@ -654,19 +676,26 @@ static void batadv_mcast_want_ipv4_update(struct batadv_priv *bat_priv, + * + * If the BATADV_MCAST_WANT_ALL_IPV6 flag of this originator, orig, has + * toggled then this method updates counter and list accordingly. ++ * ++ * Caller needs to hold orig->mcast_handler_lock. + */ + static void batadv_mcast_want_ipv6_update(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig, + uint8_t mcast_flags) + { ++ struct hlist_node *node = &orig->mcast_want_all_ipv6_node; ++ struct hlist_head *head = &bat_priv->mcast.want_all_ipv6_list; ++ + /* switched from flag unset to set */ + if (mcast_flags & BATADV_MCAST_WANT_ALL_IPV6 && + !(orig->mcast_flags & BATADV_MCAST_WANT_ALL_IPV6)) { + atomic_inc(&bat_priv->mcast.num_want_all_ipv6); + + spin_lock_bh(&bat_priv->mcast.want_lists_lock); +- hlist_add_head_rcu(&orig->mcast_want_all_ipv6_node, +- &bat_priv->mcast.want_all_ipv6_list); ++ /* flag checks above + mcast_handler_lock prevents this */ ++ WARN_ON(!hlist_unhashed(node)); ++ ++ hlist_add_head_rcu(node, head); + spin_unlock_bh(&bat_priv->mcast.want_lists_lock); + /* switched from flag set to unset */ + } else if (!(mcast_flags & BATADV_MCAST_WANT_ALL_IPV6) && +@@ -674,7 +703,10 @@ static void batadv_mcast_want_ipv6_update(struct batadv_priv *bat_priv, + atomic_dec(&bat_priv->mcast.num_want_all_ipv6); + + spin_lock_bh(&bat_priv->mcast.want_lists_lock); +- hlist_del_rcu(&orig->mcast_want_all_ipv6_node); ++ /* flag checks above + mcast_handler_lock prevents this */ ++ WARN_ON(hlist_unhashed(node)); ++ ++ hlist_del_init_rcu(node); + spin_unlock_bh(&bat_priv->mcast.want_lists_lock); + } + } +@@ -697,39 +729,42 @@ static void batadv_mcast_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv, + uint8_t mcast_flags = BATADV_NO_FLAGS; + bool orig_initialized; + +- orig_initialized = orig->capa_initialized & BATADV_ORIG_CAPA_HAS_MCAST; ++ if (orig_mcast_enabled && tvlv_value && ++ (tvlv_value_len >= sizeof(mcast_flags))) ++ mcast_flags = *(uint8_t *)tvlv_value; ++ ++ spin_lock_bh(&orig->mcast_handler_lock); ++ orig_initialized = test_bit(BATADV_ORIG_CAPA_HAS_MCAST, ++ &orig->capa_initialized); + + /* If mcast support is turned on decrease the disabled mcast node + * counter 
only if we had increased it for this node before. If this + * is a completely new orig_node no need to decrease the counter. + */ + if (orig_mcast_enabled && +- !(orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST)) { ++ !test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities)) { + if (orig_initialized) + atomic_dec(&bat_priv->mcast.num_disabled); +- orig->capabilities |= BATADV_ORIG_CAPA_HAS_MCAST; ++ set_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities); + /* If mcast support is being switched off or if this is an initial + * OGM without mcast support then increase the disabled mcast + * node counter. + */ + } else if (!orig_mcast_enabled && +- (orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST || ++ (test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities) || + !orig_initialized)) { + atomic_inc(&bat_priv->mcast.num_disabled); +- orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_MCAST; ++ clear_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities); + } + +- orig->capa_initialized |= BATADV_ORIG_CAPA_HAS_MCAST; +- +- if (orig_mcast_enabled && tvlv_value && +- (tvlv_value_len >= sizeof(mcast_flags))) +- mcast_flags = *(uint8_t *)tvlv_value; ++ set_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capa_initialized); + + batadv_mcast_want_unsnoop_update(bat_priv, orig, mcast_flags); + batadv_mcast_want_ipv4_update(bat_priv, orig, mcast_flags); + batadv_mcast_want_ipv6_update(bat_priv, orig, mcast_flags); + + orig->mcast_flags = mcast_flags; ++ spin_unlock_bh(&orig->mcast_handler_lock); + } + + /** +@@ -763,11 +798,15 @@ void batadv_mcast_purge_orig(struct batadv_orig_node *orig) + { + struct batadv_priv *bat_priv = orig->bat_priv; + +- if (!(orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST) && +- orig->capa_initialized & BATADV_ORIG_CAPA_HAS_MCAST) ++ spin_lock_bh(&orig->mcast_handler_lock); ++ ++ if (!test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities) && ++ test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capa_initialized)) + atomic_dec(&bat_priv->mcast.num_disabled); + + batadv_mcast_want_unsnoop_update(bat_priv, orig, BATADV_NO_FLAGS); + batadv_mcast_want_ipv4_update(bat_priv, orig, BATADV_NO_FLAGS); + batadv_mcast_want_ipv6_update(bat_priv, orig, BATADV_NO_FLAGS); ++ ++ spin_unlock_bh(&orig->mcast_handler_lock); + } +diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c +index f0a50f3..4660401 100644 +--- a/net/batman-adv/network-coding.c ++++ b/net/batman-adv/network-coding.c +@@ -19,6 +19,7 @@ + #include "main.h" + + #include <linux/atomic.h> ++#include <linux/bitops.h> + #include <linux/byteorder/generic.h> + #include <linux/compiler.h> + #include <linux/debugfs.h> +@@ -134,9 +135,9 @@ static void batadv_nc_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv, + uint16_t tvlv_value_len) + { + if (flags & BATADV_TVLV_HANDLER_OGM_CIFNOTFND) +- orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_NC; ++ clear_bit(BATADV_ORIG_CAPA_HAS_NC, &orig->capabilities); + else +- orig->capabilities |= BATADV_ORIG_CAPA_HAS_NC; ++ set_bit(BATADV_ORIG_CAPA_HAS_NC, &orig->capabilities); + } + + /** +@@ -894,7 +895,7 @@ void batadv_nc_update_nc_node(struct batadv_priv *bat_priv, + goto out; + + /* check if orig node is network coding enabled */ +- if (!(orig_node->capabilities & BATADV_ORIG_CAPA_HAS_NC)) ++ if (!test_bit(BATADV_ORIG_CAPA_HAS_NC, &orig_node->capabilities)) + goto out; + + /* accept ogms from 'good' neighbors and single hop neighbors */ +diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c +index 018b749..32a0fcf 100644 +--- a/net/batman-adv/originator.c ++++ 
b/net/batman-adv/originator.c +@@ -696,8 +696,13 @@ struct batadv_orig_node *batadv_orig_node_new(struct batadv_priv *bat_priv, + orig_node->last_seen = jiffies; + reset_time = jiffies - 1 - msecs_to_jiffies(BATADV_RESET_PROTECTION_MS); + orig_node->bcast_seqno_reset = reset_time; ++ + #ifdef CONFIG_BATMAN_ADV_MCAST + orig_node->mcast_flags = BATADV_NO_FLAGS; ++ INIT_HLIST_NODE(&orig_node->mcast_want_all_unsnoopables_node); ++ INIT_HLIST_NODE(&orig_node->mcast_want_all_ipv4_node); ++ INIT_HLIST_NODE(&orig_node->mcast_want_all_ipv6_node); ++ spin_lock_init(&orig_node->mcast_handler_lock); + #endif + + /* create a vlan object for the "untagged" LAN */ +diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c +index a2fc843..51cda3a 100644 +--- a/net/batman-adv/soft-interface.c ++++ b/net/batman-adv/soft-interface.c +@@ -202,6 +202,7 @@ static int batadv_interface_tx(struct sk_buff *skb, + int gw_mode; + enum batadv_forw_mode forw_mode; + struct batadv_orig_node *mcast_single_orig = NULL; ++ int network_offset = ETH_HLEN; + + if (atomic_read(&bat_priv->mesh_state) != BATADV_MESH_ACTIVE) + goto dropped; +@@ -214,14 +215,18 @@ static int batadv_interface_tx(struct sk_buff *skb, + case ETH_P_8021Q: + vhdr = vlan_eth_hdr(skb); + +- if (vhdr->h_vlan_encapsulated_proto != ethertype) ++ if (vhdr->h_vlan_encapsulated_proto != ethertype) { ++ network_offset += VLAN_HLEN; + break; ++ } + + /* fall through */ + case ETH_P_BATMAN: + goto dropped; + } + ++ skb_set_network_header(skb, network_offset); ++ + if (batadv_bla_tx(bat_priv, skb, vid)) + goto dropped; + +diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c +index 5809b39..c9b2629 100644 +--- a/net/batman-adv/translation-table.c ++++ b/net/batman-adv/translation-table.c +@@ -19,6 +19,7 @@ + #include "main.h" + + #include <linux/atomic.h> ++#include <linux/bitops.h> + #include <linux/bug.h> + #include <linux/byteorder/generic.h> + #include <linux/compiler.h> +@@ -1882,7 +1883,7 @@ void batadv_tt_global_del_orig(struct batadv_priv *bat_priv, + } + spin_unlock_bh(list_lock); + } +- orig_node->capa_initialized &= ~BATADV_ORIG_CAPA_HAS_TT; ++ clear_bit(BATADV_ORIG_CAPA_HAS_TT, &orig_node->capa_initialized); + } + + static bool batadv_tt_global_to_purge(struct batadv_tt_global_entry *tt_global, +@@ -2841,7 +2842,7 @@ static void _batadv_tt_update_changes(struct batadv_priv *bat_priv, + return; + } + } +- orig_node->capa_initialized |= BATADV_ORIG_CAPA_HAS_TT; ++ set_bit(BATADV_ORIG_CAPA_HAS_TT, &orig_node->capa_initialized); + } + + static void batadv_tt_fill_gtable(struct batadv_priv *bat_priv, +@@ -3343,7 +3344,8 @@ static void batadv_tt_update_orig(struct batadv_priv *bat_priv, + bool has_tt_init; + + tt_vlan = (struct batadv_tvlv_tt_vlan_data *)tt_buff; +- has_tt_init = orig_node->capa_initialized & BATADV_ORIG_CAPA_HAS_TT; ++ has_tt_init = test_bit(BATADV_ORIG_CAPA_HAS_TT, ++ &orig_node->capa_initialized); + + /* orig table not initialised AND first diff is in the OGM OR the ttvn + * increased by one -> we can apply the attached changes +diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h +index 67d6348..55610a8 100644 +--- a/net/batman-adv/types.h ++++ b/net/batman-adv/types.h +@@ -221,6 +221,7 @@ struct batadv_orig_bat_iv { + * @batadv_dat_addr_t: address of the orig node in the distributed hash + * @last_seen: time when last packet from this node was received + * @bcast_seqno_reset: time when the broadcast seqno window was reset ++ * @mcast_handler_lock: synchronizes mcast-capability 
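/*
 * A side note on the batman-adv hunks above: they pair INIT_HLIST_NODE()
 * in batadv_orig_node_new() with hlist_del_init_rcu() in the want_all_*
 * updates, so that hlist_unhashed() reliably reports list membership and
 * the added WARN_ON()s can catch a double add or delete. Below is a
 * minimal userspace model of that invariant; it is a simplification of
 * <linux/list.h> with invented names, not the kernel implementation:
 */
#include <assert.h>
#include <stddef.h>

struct hnode { struct hnode *next, **pprev; };

static void init_node(struct hnode *n)
{
        n->next = NULL;
        n->pprev = NULL;                /* NULL pprev means "unhashed" */
}

static int unhashed(const struct hnode *n)
{
        return n->pprev == NULL;
}

static void add_head(struct hnode *n, struct hnode **head)
{
        n->next = *head;
        if (*head)
                (*head)->pprev = &n->next;
        *head = n;
        n->pprev = head;
}

/* shape of hlist_del_init_rcu(): unlink, then re-arm the unhashed marker */
static void del_init(struct hnode *n)
{
        *n->pprev = n->next;
        if (n->next)
                n->next->pprev = n->pprev;
        init_node(n);
}

int main(void)
{
        struct hnode *head = NULL;
        struct hnode n;

        init_node(&n);
        assert(unhashed(&n));   /* what WARN_ON(!hlist_unhashed(node)) checks */
        add_head(&n, &head);
        assert(!unhashed(&n));
        del_init(&n);
        assert(unhashed(&n));   /* the node can now be purged again safely */
        return 0;
}
/*
 * With plain hlist_del_rcu(), pprev would keep pointing into the list after
 * removal, so a second purge of the same orig_node could corrupt it; that
 * appears to be what the switch to the _init variant guards against.
 */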
and -flag changes + * @mcast_flags: multicast flags announced by the orig node + * @mcast_want_all_unsnoop_node: a list node for the + * mcast.want_all_unsnoopables list +@@ -268,13 +269,15 @@ struct batadv_orig_node { + unsigned long last_seen; + unsigned long bcast_seqno_reset; + #ifdef CONFIG_BATMAN_ADV_MCAST ++ /* synchronizes mcast tvlv specific orig changes */ ++ spinlock_t mcast_handler_lock; + uint8_t mcast_flags; + struct hlist_node mcast_want_all_unsnoopables_node; + struct hlist_node mcast_want_all_ipv4_node; + struct hlist_node mcast_want_all_ipv6_node; + #endif +- uint8_t capabilities; +- uint8_t capa_initialized; ++ unsigned long capabilities; ++ unsigned long capa_initialized; + atomic_t last_ttvn; + unsigned char *tt_buff; + int16_t tt_buff_len; +@@ -313,10 +316,10 @@ struct batadv_orig_node { + * (= orig node announces a tvlv of type BATADV_TVLV_MCAST) + */ + enum batadv_orig_capabilities { +- BATADV_ORIG_CAPA_HAS_DAT = BIT(0), +- BATADV_ORIG_CAPA_HAS_NC = BIT(1), +- BATADV_ORIG_CAPA_HAS_TT = BIT(2), +- BATADV_ORIG_CAPA_HAS_MCAST = BIT(3), ++ BATADV_ORIG_CAPA_HAS_DAT, ++ BATADV_ORIG_CAPA_HAS_NC, ++ BATADV_ORIG_CAPA_HAS_TT, ++ BATADV_ORIG_CAPA_HAS_MCAST, + }; + + /** +diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c +index ad82324..0510a57 100644 +--- a/net/bluetooth/smp.c ++++ b/net/bluetooth/smp.c +@@ -2311,12 +2311,6 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level) + if (!conn) + return 1; + +- chan = conn->smp; +- if (!chan) { +- BT_ERR("SMP security requested but not available"); +- return 1; +- } +- + if (!hci_dev_test_flag(hcon->hdev, HCI_LE_ENABLED)) + return 1; + +@@ -2330,6 +2324,12 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level) + if (smp_ltk_encrypt(conn, hcon->pending_sec_level)) + return 0; + ++ chan = conn->smp; ++ if (!chan) { ++ BT_ERR("SMP security requested but not available"); ++ return 1; ++ } ++ + l2cap_chan_lock(chan); + + /* If SMP is already in progress ignore this request */ +diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h +index afe905c..691b54f 100644 +--- a/net/netfilter/ipset/ip_set_hash_gen.h ++++ b/net/netfilter/ipset/ip_set_hash_gen.h +@@ -152,9 +152,13 @@ htable_bits(u32 hashsize) + #define SET_HOST_MASK(family) (family == AF_INET ? 
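/*
 * On the types.h change just above: batadv_orig_capabilities loses its
 * BIT(n) values because capabilities/capa_initialized became unsigned long
 * words driven by set_bit()/clear_bit()/test_bit(), and those helpers take
 * a bit number, not a mask. A stand-alone illustration follows (userspace
 * C with invented names; the kernel helpers are additionally atomic,
 * which a plain |= on the old u8 fields was not):
 */
#include <stdio.h>

enum { CAP_MCAST_MASK = 1 << 3 };       /* old style: a mask, value 8     */
enum { CAP_MCAST_BIT  = 3 };            /* new style: a position, value 3 */

int main(void)
{
        unsigned long caps = 0;

        /* set_bit(nr, addr) semantics: nr is a position; feeding it the
         * old BIT(3) mask (value 8) would flip bit 8 instead of bit 3. */
        caps |= 1UL << CAP_MCAST_BIT;

        printf("bit %d set: %d\n", CAP_MCAST_BIT,
               !!(caps & (1UL << CAP_MCAST_BIT)));      /* prints 1 */
        printf("old mask value: %d\n", CAP_MCAST_MASK); /* prints 8 */
        return 0;
}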
32 : 128) + + #ifdef IP_SET_HASH_WITH_NET0 ++/* cidr from 0 to SET_HOST_MASK() value and c = cidr + 1 */ + #define NLEN(family) (SET_HOST_MASK(family) + 1) ++#define CIDR_POS(c) ((c) - 1) + #else ++/* cidr from 1 to SET_HOST_MASK() value and c = cidr + 1 */ + #define NLEN(family) SET_HOST_MASK(family) ++#define CIDR_POS(c) ((c) - 2) + #endif + + #else +@@ -305,7 +309,7 @@ mtype_add_cidr(struct htype *h, u8 cidr, u8 nets_length, u8 n) + } else if (h->nets[i].cidr[n] < cidr) { + j = i; + } else if (h->nets[i].cidr[n] == cidr) { +- h->nets[cidr - 1].nets[n]++; ++ h->nets[CIDR_POS(cidr)].nets[n]++; + return; + } + } +@@ -314,7 +318,7 @@ mtype_add_cidr(struct htype *h, u8 cidr, u8 nets_length, u8 n) + h->nets[i].cidr[n] = h->nets[i - 1].cidr[n]; + } + h->nets[i].cidr[n] = cidr; +- h->nets[cidr - 1].nets[n] = 1; ++ h->nets[CIDR_POS(cidr)].nets[n] = 1; + } + + static void +@@ -325,8 +329,8 @@ mtype_del_cidr(struct htype *h, u8 cidr, u8 nets_length, u8 n) + for (i = 0; i < nets_length; i++) { + if (h->nets[i].cidr[n] != cidr) + continue; +- h->nets[cidr - 1].nets[n]--; +- if (h->nets[cidr - 1].nets[n] > 0) ++ h->nets[CIDR_POS(cidr)].nets[n]--; ++ if (h->nets[CIDR_POS(cidr)].nets[n] > 0) + return; + for (j = i; j < net_end && h->nets[j].cidr[n]; j++) + h->nets[j].cidr[n] = h->nets[j + 1].cidr[n]; +diff --git a/net/netfilter/ipset/ip_set_hash_netnet.c b/net/netfilter/ipset/ip_set_hash_netnet.c +index 3c862c0..a93dfeb 100644 +--- a/net/netfilter/ipset/ip_set_hash_netnet.c ++++ b/net/netfilter/ipset/ip_set_hash_netnet.c +@@ -131,6 +131,13 @@ hash_netnet4_data_next(struct hash_netnet4_elem *next, + #define HOST_MASK 32 + #include "ip_set_hash_gen.h" + ++static void ++hash_netnet4_init(struct hash_netnet4_elem *e) ++{ ++ e->cidr[0] = HOST_MASK; ++ e->cidr[1] = HOST_MASK; ++} ++ + static int + hash_netnet4_kadt(struct ip_set *set, const struct sk_buff *skb, + const struct xt_action_param *par, +@@ -160,7 +167,7 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[], + { + const struct hash_netnet *h = set->data; + ipset_adtfn adtfn = set->variant->adt[adt]; +- struct hash_netnet4_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, }; ++ struct hash_netnet4_elem e = { }; + struct ip_set_ext ext = IP_SET_INIT_UEXT(set); + u32 ip = 0, ip_to = 0, last; + u32 ip2 = 0, ip2_from = 0, ip2_to = 0, last2; +@@ -169,6 +176,7 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[], + if (tb[IPSET_ATTR_LINENO]) + *lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]); + ++ hash_netnet4_init(&e); + if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] || + !ip_set_optattr_netorder(tb, IPSET_ATTR_CADT_FLAGS))) + return -IPSET_ERR_PROTOCOL; +@@ -357,6 +365,13 @@ hash_netnet6_data_next(struct hash_netnet4_elem *next, + #define IP_SET_EMIT_CREATE + #include "ip_set_hash_gen.h" + ++static void ++hash_netnet6_init(struct hash_netnet6_elem *e) ++{ ++ e->cidr[0] = HOST_MASK; ++ e->cidr[1] = HOST_MASK; ++} ++ + static int + hash_netnet6_kadt(struct ip_set *set, const struct sk_buff *skb, + const struct xt_action_param *par, +@@ -385,13 +400,14 @@ hash_netnet6_uadt(struct ip_set *set, struct nlattr *tb[], + enum ipset_adt adt, u32 *lineno, u32 flags, bool retried) + { + ipset_adtfn adtfn = set->variant->adt[adt]; +- struct hash_netnet6_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, }; ++ struct hash_netnet6_elem e = { }; + struct ip_set_ext ext = IP_SET_INIT_UEXT(set); + int ret; + + if (tb[IPSET_ATTR_LINENO]) + *lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]); + ++ hash_netnet6_init(&e); + if (unlikely(!tb[IPSET_ATTR_IP] || 
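/*
 * The CIDR_POS() macro introduced above spells out the slot arithmetic
 * that mtype_add_cidr()/mtype_del_cidr() previously open-coded as
 * "cidr - 1": per the new comments, the value c handed in is cidr + 1,
 * and valid cidrs start at 0 only when IP_SET_HASH_WITH_NET0 is defined.
 * A tiny sanity check of that mapping (plain C; the macro bodies are
 * copied from the hunk, everything else is invented for the example):
 */
#include <stdio.h>

#define CIDR_POS_NET0(c)  ((c) - 1)  /* cidr 0..mask -> slots 0..mask   */
#define CIDR_POS_PLAIN(c) ((c) - 2)  /* cidr 1..mask -> slots 0..mask-1 */

int main(void)
{
        /* c = cidr + 1, as stated in the ip_set_hash_gen.h comments */
        printf("net0,  cidr  0 -> slot %d\n", CIDR_POS_NET0(0 + 1));   /* 0  */
        printf("plain, cidr  1 -> slot %d\n", CIDR_POS_PLAIN(1 + 1));  /* 0  */
        printf("plain, cidr 32 -> slot %d\n", CIDR_POS_PLAIN(32 + 1)); /* 31 */
        return 0;
}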
!tb[IPSET_ATTR_IP2] || + !ip_set_optattr_netorder(tb, IPSET_ATTR_CADT_FLAGS))) + return -IPSET_ERR_PROTOCOL; +diff --git a/net/netfilter/ipset/ip_set_hash_netportnet.c b/net/netfilter/ipset/ip_set_hash_netportnet.c +index 0c68734..9a14c23 100644 +--- a/net/netfilter/ipset/ip_set_hash_netportnet.c ++++ b/net/netfilter/ipset/ip_set_hash_netportnet.c +@@ -142,6 +142,13 @@ hash_netportnet4_data_next(struct hash_netportnet4_elem *next, + #define HOST_MASK 32 + #include "ip_set_hash_gen.h" + ++static void ++hash_netportnet4_init(struct hash_netportnet4_elem *e) ++{ ++ e->cidr[0] = HOST_MASK; ++ e->cidr[1] = HOST_MASK; ++} ++ + static int + hash_netportnet4_kadt(struct ip_set *set, const struct sk_buff *skb, + const struct xt_action_param *par, +@@ -175,7 +182,7 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[], + { + const struct hash_netportnet *h = set->data; + ipset_adtfn adtfn = set->variant->adt[adt]; +- struct hash_netportnet4_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, }; ++ struct hash_netportnet4_elem e = { }; + struct ip_set_ext ext = IP_SET_INIT_UEXT(set); + u32 ip = 0, ip_to = 0, ip_last, p = 0, port, port_to; + u32 ip2_from = 0, ip2_to = 0, ip2_last, ip2; +@@ -185,6 +192,7 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[], + if (tb[IPSET_ATTR_LINENO]) + *lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]); + ++ hash_netportnet4_init(&e); + if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] || + !ip_set_attr_netorder(tb, IPSET_ATTR_PORT) || + !ip_set_optattr_netorder(tb, IPSET_ATTR_PORT_TO) || +@@ -412,6 +420,13 @@ hash_netportnet6_data_next(struct hash_netportnet4_elem *next, + #define IP_SET_EMIT_CREATE + #include "ip_set_hash_gen.h" + ++static void ++hash_netportnet6_init(struct hash_netportnet6_elem *e) ++{ ++ e->cidr[0] = HOST_MASK; ++ e->cidr[1] = HOST_MASK; ++} ++ + static int + hash_netportnet6_kadt(struct ip_set *set, const struct sk_buff *skb, + const struct xt_action_param *par, +@@ -445,7 +460,7 @@ hash_netportnet6_uadt(struct ip_set *set, struct nlattr *tb[], + { + const struct hash_netportnet *h = set->data; + ipset_adtfn adtfn = set->variant->adt[adt]; +- struct hash_netportnet6_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, }; ++ struct hash_netportnet6_elem e = { }; + struct ip_set_ext ext = IP_SET_INIT_UEXT(set); + u32 port, port_to; + bool with_ports = false; +@@ -454,6 +469,7 @@ hash_netportnet6_uadt(struct ip_set *set, struct nlattr *tb[], + if (tb[IPSET_ATTR_LINENO]) + *lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]); + ++ hash_netportnet6_init(&e); + if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] || + !ip_set_attr_netorder(tb, IPSET_ATTR_PORT) || + !ip_set_optattr_netorder(tb, IPSET_ATTR_PORT_TO) || +diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c +index 3c20d02..0625a42 100644 +--- a/net/netfilter/nf_conntrack_core.c ++++ b/net/netfilter/nf_conntrack_core.c +@@ -320,12 +320,13 @@ out_free: + } + EXPORT_SYMBOL_GPL(nf_ct_tmpl_alloc); + +-static void nf_ct_tmpl_free(struct nf_conn *tmpl) ++void nf_ct_tmpl_free(struct nf_conn *tmpl) + { + nf_ct_ext_destroy(tmpl); + nf_ct_ext_free(tmpl); + kfree(tmpl); + } ++EXPORT_SYMBOL_GPL(nf_ct_tmpl_free); + + static void + destroy_conntrack(struct nf_conntrack *nfct) +diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c +index 675d12c..a5d41df 100644 +--- a/net/netfilter/nf_log.c ++++ b/net/netfilter/nf_log.c +@@ -107,12 +107,17 @@ EXPORT_SYMBOL(nf_log_register); + + void nf_log_unregister(struct nf_logger *logger) + { ++ const struct nf_logger *log; + 
int i; + + mutex_lock(&nf_log_mutex); +- for (i = 0; i < NFPROTO_NUMPROTO; i++) +- RCU_INIT_POINTER(loggers[i][logger->type], NULL); ++ for (i = 0; i < NFPROTO_NUMPROTO; i++) { ++ log = nft_log_dereference(loggers[i][logger->type]); ++ if (log == logger) ++ RCU_INIT_POINTER(loggers[i][logger->type], NULL); ++ } + mutex_unlock(&nf_log_mutex); ++ synchronize_rcu(); + } + EXPORT_SYMBOL(nf_log_unregister); + +diff --git a/net/netfilter/nf_synproxy_core.c b/net/netfilter/nf_synproxy_core.c +index d7f1685..d6ee8f8 100644 +--- a/net/netfilter/nf_synproxy_core.c ++++ b/net/netfilter/nf_synproxy_core.c +@@ -378,7 +378,7 @@ static int __net_init synproxy_net_init(struct net *net) + err3: + free_percpu(snet->stats); + err2: +- nf_conntrack_free(ct); ++ nf_ct_tmpl_free(ct); + err1: + return err; + } +diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c +index 0c0e8ec..70277b1 100644 +--- a/net/netfilter/nfnetlink.c ++++ b/net/netfilter/nfnetlink.c +@@ -444,6 +444,7 @@ done: + static void nfnetlink_rcv(struct sk_buff *skb) + { + struct nlmsghdr *nlh = nlmsg_hdr(skb); ++ u_int16_t res_id; + int msglen; + + if (nlh->nlmsg_len < NLMSG_HDRLEN || +@@ -468,7 +469,12 @@ static void nfnetlink_rcv(struct sk_buff *skb) + + nfgenmsg = nlmsg_data(nlh); + skb_pull(skb, msglen); +- nfnetlink_rcv_batch(skb, nlh, nfgenmsg->res_id); ++ /* Work around old nft using host byte order */ ++ if (nfgenmsg->res_id == NFNL_SUBSYS_NFTABLES) ++ res_id = NFNL_SUBSYS_NFTABLES; ++ else ++ res_id = ntohs(nfgenmsg->res_id); ++ nfnetlink_rcv_batch(skb, nlh, res_id); + } else { + netlink_rcv_skb(skb, &nfnetlink_rcv_msg); + } +diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c +index 66def31..9c8fab0 100644 +--- a/net/netfilter/nft_compat.c ++++ b/net/netfilter/nft_compat.c +@@ -619,6 +619,13 @@ struct nft_xt { + + static struct nft_expr_type nft_match_type; + ++static bool nft_match_cmp(const struct xt_match *match, ++ const char *name, u32 rev, u32 family) ++{ ++ return strcmp(match->name, name) == 0 && match->revision == rev && ++ (match->family == NFPROTO_UNSPEC || match->family == family); ++} ++ + static const struct nft_expr_ops * + nft_match_select_ops(const struct nft_ctx *ctx, + const struct nlattr * const tb[]) +@@ -626,7 +633,7 @@ nft_match_select_ops(const struct nft_ctx *ctx, + struct nft_xt *nft_match; + struct xt_match *match; + char *mt_name; +- __u32 rev, family; ++ u32 rev, family; + + if (tb[NFTA_MATCH_NAME] == NULL || + tb[NFTA_MATCH_REV] == NULL || +@@ -641,8 +648,7 @@ nft_match_select_ops(const struct nft_ctx *ctx, + list_for_each_entry(nft_match, &nft_match_list, head) { + struct xt_match *match = nft_match->ops.data; + +- if (strcmp(match->name, mt_name) == 0 && +- match->revision == rev && match->family == family) { ++ if (nft_match_cmp(match, mt_name, rev, family)) { + if (!try_module_get(match->me)) + return ERR_PTR(-ENOENT); + +@@ -693,6 +699,13 @@ static LIST_HEAD(nft_target_list); + + static struct nft_expr_type nft_target_type; + ++static bool nft_target_cmp(const struct xt_target *tg, ++ const char *name, u32 rev, u32 family) ++{ ++ return strcmp(tg->name, name) == 0 && tg->revision == rev && ++ (tg->family == NFPROTO_UNSPEC || tg->family == family); ++} ++ + static const struct nft_expr_ops * + nft_target_select_ops(const struct nft_ctx *ctx, + const struct nlattr * const tb[]) +@@ -700,7 +713,7 @@ nft_target_select_ops(const struct nft_ctx *ctx, + struct nft_xt *nft_target; + struct xt_target *target; + char *tg_name; +- __u32 rev, family; ++ u32 rev, family; + + if 
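/*
 * The nf_log.c hunk above is the classic RCU unpublish sequence: under the
 * mutex, clear only those slots that still point at the logger being
 * removed, then synchronize_rcu() so no reader can hold a stale pointer
 * once the caller (typically a module exit path) frees it. A kernel-style
 * sketch of that pattern follows; it is not a standalone program, and
 * NSLOTS, struct my_logger and example_unregister() are invented names:
 */
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>

#define NSLOTS 16

struct my_logger { const char *name; };

static DEFINE_MUTEX(slot_mutex);
static struct my_logger __rcu *slots[NSLOTS];

void example_unregister(struct my_logger *logger)
{
        int i;

        mutex_lock(&slot_mutex);
        for (i = 0; i < NSLOTS; i++) {
                /* clear only slots that really hold this logger */
                if (rcu_dereference_protected(slots[i],
                                lockdep_is_held(&slot_mutex)) == logger)
                        RCU_INIT_POINTER(slots[i], NULL);
        }
        mutex_unlock(&slot_mutex);

        synchronize_rcu();      /* readers that saw the pointer are done */
}
/*
 * The pre-patch code cleared loggers[i][type] unconditionally and returned
 * without waiting, so unregistering one logger could wipe a different
 * logger of the same type and still leave RCU readers racing with unload.
 */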
(tb[NFTA_TARGET_NAME] == NULL || + tb[NFTA_TARGET_REV] == NULL || +@@ -715,8 +728,7 @@ nft_target_select_ops(const struct nft_ctx *ctx, + list_for_each_entry(nft_target, &nft_target_list, head) { + struct xt_target *target = nft_target->ops.data; + +- if (strcmp(target->name, tg_name) == 0 && +- target->revision == rev && target->family == family) { ++ if (nft_target_cmp(target, tg_name, rev, family)) { + if (!try_module_get(target->me)) + return ERR_PTR(-ENOENT); + +diff --git a/net/netfilter/xt_CT.c b/net/netfilter/xt_CT.c +index 43ddeee..f3377ce 100644 +--- a/net/netfilter/xt_CT.c ++++ b/net/netfilter/xt_CT.c +@@ -233,7 +233,7 @@ out: + return 0; + + err3: +- nf_conntrack_free(ct); ++ nf_ct_tmpl_free(ct); + err2: + nf_ct_l3proto_module_put(par->family); + err1: +diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c +index d25cd43..95412ab 100644 +--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c ++++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c +@@ -384,6 +384,7 @@ static int send_reply(struct svcxprt_rdma *rdma, + int byte_count) + { + struct ib_send_wr send_wr; ++ u32 xdr_off; + int sge_no; + int sge_bytes; + int page_no; +@@ -418,8 +419,8 @@ static int send_reply(struct svcxprt_rdma *rdma, + ctxt->direction = DMA_TO_DEVICE; + + /* Map the payload indicated by 'byte_count' */ ++ xdr_off = 0; + for (sge_no = 1; byte_count && sge_no < vec->count; sge_no++) { +- int xdr_off = 0; + sge_bytes = min_t(size_t, vec->sge[sge_no].iov_len, byte_count); + byte_count -= sge_bytes; + ctxt->sge[sge_no].addr = +@@ -457,6 +458,13 @@ static int send_reply(struct svcxprt_rdma *rdma, + } + rqstp->rq_next_page = rqstp->rq_respages + 1; + ++ /* The loop above bumps sc_dma_used for each sge. The ++ * xdr_buf.tail gets a separate sge, but resides in the ++ * same page as xdr_buf.head. Don't count it twice. ++ */ ++ if (sge_no > ctxt->count) ++ atomic_dec(&rdma->sc_dma_used); ++ + if (sge_no > rdma->sc_max_sge) { + pr_err("svcrdma: Too many sges (%d)\n", sge_no); + goto err; +diff --git a/sound/arm/Kconfig b/sound/arm/Kconfig +index 885683a..e040621 100644 +--- a/sound/arm/Kconfig ++++ b/sound/arm/Kconfig +@@ -9,6 +9,14 @@ menuconfig SND_ARM + Drivers that are implemented on ASoC can be found in + "ALSA for SoC audio support" section. 
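/*
 * One detail worth calling out in the svc_rdma_sendto.c hunk further up:
 * moving xdr_off out of the for loop is the whole fix. Declared inside the
 * loop, it was re-initialized to 0 on every pass, so each sge was mapped
 * from the start of the payload rather than from the running offset that
 * the loop body (not visible in this hunk) presumably advances after each
 * mapping. The pitfall, reduced to a runnable toy:
 */
#include <stdio.h>

int main(void)
{
        int i;
        int off = 0;    /* correct: initialized once, outside the loop */

        for (i = 0; i < 3; i++) {
                /* an "int off = 0;" here would restart at 0 each pass */
                printf("sge %d maps from offset %d\n", i, off);
                off += 100;     /* advance past the chunk just mapped */
        }
        return 0;
}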
+ ++config SND_PXA2XX_LIB ++ tristate ++ select SND_AC97_CODEC if SND_PXA2XX_LIB_AC97 ++ select SND_DMAENGINE_PCM ++ ++config SND_PXA2XX_LIB_AC97 ++ bool ++ + if SND_ARM + + config SND_ARMAACI +@@ -21,13 +29,6 @@ config SND_PXA2XX_PCM + tristate + select SND_PCM + +-config SND_PXA2XX_LIB +- tristate +- select SND_AC97_CODEC if SND_PXA2XX_LIB_AC97 +- +-config SND_PXA2XX_LIB_AC97 +- bool +- + config SND_PXA2XX_AC97 + tristate "AC97 driver for the Intel PXA2xx chip" + depends on ARCH_PXA +diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c +index 477742c..58c0aad 100644 +--- a/sound/pci/hda/hda_tegra.c ++++ b/sound/pci/hda/hda_tegra.c +@@ -73,6 +73,7 @@ struct hda_tegra { + struct clk *hda2codec_2x_clk; + struct clk *hda2hdmi_clk; + void __iomem *regs; ++ struct work_struct probe_work; + }; + + #ifdef CONFIG_PM +@@ -294,7 +295,9 @@ static int hda_tegra_dev_disconnect(struct snd_device *device) + static int hda_tegra_dev_free(struct snd_device *device) + { + struct azx *chip = device->device_data; ++ struct hda_tegra *hda = container_of(chip, struct hda_tegra, chip); + ++ cancel_work_sync(&hda->probe_work); + if (azx_bus(chip)->chip_init) { + azx_stop_all_streams(chip); + azx_stop_chip(chip); +@@ -426,6 +429,9 @@ static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev) + /* + * constructor + */ ++ ++static void hda_tegra_probe_work(struct work_struct *work); ++ + static int hda_tegra_create(struct snd_card *card, + unsigned int driver_caps, + struct hda_tegra *hda) +@@ -452,6 +458,8 @@ static int hda_tegra_create(struct snd_card *card, + chip->single_cmd = false; + chip->snoop = true; + ++ INIT_WORK(&hda->probe_work, hda_tegra_probe_work); ++ + err = azx_bus_init(chip, NULL, &hda_tegra_io_ops); + if (err < 0) + return err; +@@ -499,6 +507,21 @@ static int hda_tegra_probe(struct platform_device *pdev) + card->private_data = chip; + + dev_set_drvdata(&pdev->dev, card); ++ schedule_work(&hda->probe_work); ++ ++ return 0; ++ ++out_free: ++ snd_card_free(card); ++ return err; ++} ++ ++static void hda_tegra_probe_work(struct work_struct *work) ++{ ++ struct hda_tegra *hda = container_of(work, struct hda_tegra, probe_work); ++ struct azx *chip = &hda->chip; ++ struct platform_device *pdev = to_platform_device(hda->dev); ++ int err; + + err = hda_tegra_first_init(chip, pdev); + if (err < 0) +@@ -520,11 +543,8 @@ static int hda_tegra_probe(struct platform_device *pdev) + chip->running = 1; + snd_hda_set_power_save(&chip->bus, power_save * 1000); + +- return 0; +- +-out_free: +- snd_card_free(card); +- return err; ++ out_free: ++ return; /* no error return from async probe */ + } + + static int hda_tegra_remove(struct platform_device *pdev) +diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c +index 584a034..85813de 100644 +--- a/sound/pci/hda/patch_cirrus.c ++++ b/sound/pci/hda/patch_cirrus.c +@@ -633,6 +633,7 @@ static const struct snd_pci_quirk cs4208_mac_fixup_tbl[] = { + SND_PCI_QUIRK(0x106b, 0x5e00, "MacBookPro 11,2", CS4208_MBP11), + SND_PCI_QUIRK(0x106b, 0x7100, "MacBookAir 6,1", CS4208_MBA6), + SND_PCI_QUIRK(0x106b, 0x7200, "MacBookAir 6,2", CS4208_MBA6), ++ SND_PCI_QUIRK(0x106b, 0x7b00, "MacBookPro 12,1", CS4208_MBP11), + {} /* terminator */ + }; + +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index c8f01cc..6a66139 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -4188,6 +4188,24 @@ static void alc_fixup_disable_aamix(struct hda_codec *codec, + } + } + ++/* fixup for 
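/*
 * The hda_tegra hunks above move the slow part of probing into a
 * work_struct: INIT_WORK() at create time, schedule_work() from probe so
 * probe returns quickly, and cancel_work_sync() in the free path so the
 * worker cannot run against a device being torn down. A kernel-style
 * sketch of that trio with invented names (a pattern outline, not the
 * driver's actual code):
 */
#include <linux/workqueue.h>

struct my_dev {
        struct work_struct probe_work;
        /* ... device state ... */
};

static void my_probe_work(struct work_struct *work)
{
        struct my_dev *dev = container_of(work, struct my_dev, probe_work);

        /* slow initialization runs here, off the probe path */
        (void)dev;
}

static int my_probe(struct my_dev *dev)
{
        INIT_WORK(&dev->probe_work, my_probe_work);
        schedule_work(&dev->probe_work);        /* return to caller quickly */
        return 0;
}

static void my_remove(struct my_dev *dev)
{
        cancel_work_sync(&dev->probe_work);     /* wait before teardown */
}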
Thinkpad docks: add dock pins, avoid HP parser fixup */ ++static void alc_fixup_tpt440_dock(struct hda_codec *codec, ++ const struct hda_fixup *fix, int action) ++{ ++ static const struct hda_pintbl pincfgs[] = { ++ { 0x16, 0x21211010 }, /* dock headphone */ ++ { 0x19, 0x21a11010 }, /* dock mic */ ++ { } ++ }; ++ struct alc_spec *spec = codec->spec; ++ ++ if (action == HDA_FIXUP_ACT_PRE_PROBE) { ++ spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP; ++ codec->power_save_node = 0; /* avoid click noises */ ++ snd_hda_apply_pincfgs(codec, pincfgs); ++ } ++} ++ + static void alc_shutup_dell_xps13(struct hda_codec *codec) + { + struct alc_spec *spec = codec->spec; +@@ -4562,7 +4580,6 @@ enum { + ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC, + ALC293_FIXUP_DELL1_MIC_NO_PRESENCE, + ALC292_FIXUP_TPT440_DOCK, +- ALC292_FIXUP_TPT440_DOCK2, + ALC283_FIXUP_BXBT2807_MIC, + ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED, + ALC282_FIXUP_ASPIRE_V5_PINS, +@@ -5029,17 +5046,7 @@ static const struct hda_fixup alc269_fixups[] = { + }, + [ALC292_FIXUP_TPT440_DOCK] = { + .type = HDA_FIXUP_FUNC, +- .v.func = alc269_fixup_pincfg_no_hp_to_lineout, +- .chained = true, +- .chain_id = ALC292_FIXUP_TPT440_DOCK2 +- }, +- [ALC292_FIXUP_TPT440_DOCK2] = { +- .type = HDA_FIXUP_PINS, +- .v.pins = (const struct hda_pintbl[]) { +- { 0x16, 0x21211010 }, /* dock headphone */ +- { 0x19, 0x21a11010 }, /* dock mic */ +- { } +- }, ++ .v.func = alc_fixup_tpt440_dock, + .chained = true, + .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST + }, +@@ -5299,6 +5306,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad T440", ALC292_FIXUP_TPT440_DOCK), + SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad X240", ALC292_FIXUP_TPT440_DOCK), + SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), ++ SND_PCI_QUIRK(0x17aa, 0x2223, "ThinkPad T550", ALC292_FIXUP_TPT440_DOCK), + SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK), + SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), + SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP), +diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c +index 9d947ae..def5cc8 100644 +--- a/sound/pci/hda/patch_sigmatel.c ++++ b/sound/pci/hda/patch_sigmatel.c +@@ -4520,7 +4520,11 @@ static int patch_stac92hd73xx(struct hda_codec *codec) + return err; + + spec = codec->spec; +- codec->power_save_node = 1; ++ /* enable power_save_node only for new 92HD89xx chips, as it causes ++ * click noises on old 92HD73xx chips. 
++ */ ++ if ((codec->core.vendor_id & 0xfffffff0) != 0x111d7670) ++ codec->power_save_node = 1; + spec->linear_tone_beep = 0; + spec->gen.mixer_nid = 0x1d; + spec->have_spdif_mux = 1; +diff --git a/sound/soc/au1x/db1200.c b/sound/soc/au1x/db1200.c +index 58c3164..8c907eb 100644 +--- a/sound/soc/au1x/db1200.c ++++ b/sound/soc/au1x/db1200.c +@@ -129,6 +129,8 @@ static struct snd_soc_dai_link db1300_i2s_dai = { + .cpu_dai_name = "au1xpsc_i2s.2", + .platform_name = "au1xpsc-pcm.2", + .codec_name = "wm8731.0-001b", ++ .dai_fmt = SND_SOC_DAIFMT_LEFT_J | SND_SOC_DAIFMT_NB_NF | ++ SND_SOC_DAIFMT_CBM_CFM, + .ops = &db1200_i2s_wm8731_ops, + }; + +@@ -146,6 +148,8 @@ static struct snd_soc_dai_link db1550_i2s_dai = { + .cpu_dai_name = "au1xpsc_i2s.3", + .platform_name = "au1xpsc-pcm.3", + .codec_name = "wm8731.0-001b", ++ .dai_fmt = SND_SOC_DAIFMT_LEFT_J | SND_SOC_DAIFMT_NB_NF | ++ SND_SOC_DAIFMT_CBM_CFM, + .ops = &db1200_i2s_wm8731_ops, + }; + +diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c +index e673f6c..7c41129 100644 +--- a/sound/soc/codecs/sgtl5000.c ++++ b/sound/soc/codecs/sgtl5000.c +@@ -1377,8 +1377,8 @@ static int sgtl5000_probe(struct snd_soc_codec *codec) + sgtl5000->micbias_resistor << SGTL5000_BIAS_R_SHIFT); + + snd_soc_update_bits(codec, SGTL5000_CHIP_MIC_CTRL, +- SGTL5000_BIAS_R_MASK, +- sgtl5000->micbias_voltage << SGTL5000_BIAS_R_SHIFT); ++ SGTL5000_BIAS_VOLT_MASK, ++ sgtl5000->micbias_voltage << SGTL5000_BIAS_VOLT_SHIFT); + /* + * disable DAP + * TODO: +diff --git a/sound/soc/codecs/tas2552.c b/sound/soc/codecs/tas2552.c +index 4f25a7d..b3e5685 100644 +--- a/sound/soc/codecs/tas2552.c ++++ b/sound/soc/codecs/tas2552.c +@@ -551,7 +551,7 @@ static struct snd_soc_dai_driver tas2552_dai[] = { + /* + * DAC digital volumes. 
From -7 to 24 dB in 1 dB steps + */ +-static DECLARE_TLV_DB_SCALE(dac_tlv, -7, 100, 0); ++static DECLARE_TLV_DB_SCALE(dac_tlv, -700, 100, 0); + + static const char * const tas2552_din_source_select[] = { + "Muted", +diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c +index a3e97b4..0d28e3b 100644 +--- a/sound/soc/dwc/designware_i2s.c ++++ b/sound/soc/dwc/designware_i2s.c +@@ -131,10 +131,10 @@ static inline void i2s_clear_irqs(struct dw_i2s_dev *dev, u32 stream) + + if (stream == SNDRV_PCM_STREAM_PLAYBACK) { + for (i = 0; i < 4; i++) +- i2s_write_reg(dev->i2s_base, TOR(i), 0); ++ i2s_read_reg(dev->i2s_base, TOR(i)); + } else { + for (i = 0; i < 4; i++) +- i2s_write_reg(dev->i2s_base, ROR(i), 0); ++ i2s_read_reg(dev->i2s_base, ROR(i)); + } + } + +diff --git a/sound/soc/pxa/Kconfig b/sound/soc/pxa/Kconfig +index 39cea80..f2bf866 100644 +--- a/sound/soc/pxa/Kconfig ++++ b/sound/soc/pxa/Kconfig +@@ -1,7 +1,6 @@ + config SND_PXA2XX_SOC + tristate "SoC Audio for the Intel PXA2xx chip" + depends on ARCH_PXA +- select SND_ARM + select SND_PXA2XX_LIB + help + Say Y or M if you want to add support for codecs attached to +@@ -25,7 +24,6 @@ config SND_PXA2XX_AC97 + config SND_PXA2XX_SOC_AC97 + tristate + select AC97_BUS +- select SND_ARM + select SND_PXA2XX_LIB_AC97 + select SND_SOC_AC97_BUS + +diff --git a/sound/soc/pxa/pxa2xx-ac97.c b/sound/soc/pxa/pxa2xx-ac97.c +index 1f60546..9e4b04e 100644 +--- a/sound/soc/pxa/pxa2xx-ac97.c ++++ b/sound/soc/pxa/pxa2xx-ac97.c +@@ -49,7 +49,7 @@ static struct snd_ac97_bus_ops pxa2xx_ac97_ops = { + .reset = pxa2xx_ac97_cold_reset, + }; + +-static unsigned long pxa2xx_ac97_pcm_stereo_in_req = 12; ++static unsigned long pxa2xx_ac97_pcm_stereo_in_req = 11; + static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_in = { + .addr = __PREG(PCDR), + .addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES, +@@ -57,7 +57,7 @@ static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_in = { + .filter_data = &pxa2xx_ac97_pcm_stereo_in_req, + }; + +-static unsigned long pxa2xx_ac97_pcm_stereo_out_req = 11; ++static unsigned long pxa2xx_ac97_pcm_stereo_out_req = 12; + static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_out = { + .addr = __PREG(PCDR), + .addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES, +diff --git a/sound/synth/emux/emux_oss.c b/sound/synth/emux/emux_oss.c +index 82e350e..ac75816 100644 +--- a/sound/synth/emux/emux_oss.c ++++ b/sound/synth/emux/emux_oss.c +@@ -69,7 +69,8 @@ snd_emux_init_seq_oss(struct snd_emux *emu) + struct snd_seq_oss_reg *arg; + struct snd_seq_device *dev; + +- if (snd_seq_device_new(emu->card, 0, SNDRV_SEQ_DEV_ID_OSS, ++ /* using device#1 here for avoiding conflicts with OPL3 */ ++ if (snd_seq_device_new(emu->card, 1, SNDRV_SEQ_DEV_ID_OSS, + sizeof(struct snd_seq_oss_reg), &dev) < 0) + return; + +diff --git a/tools/lguest/lguest.c b/tools/lguest/lguest.c +index e440524..80159e6 100644 +--- a/tools/lguest/lguest.c ++++ b/tools/lguest/lguest.c +@@ -125,7 +125,11 @@ struct device_list { + /* The list of Guest devices, based on command line arguments. */ + static struct device_list devices; + +-struct virtio_pci_cfg_cap { ++/* ++ * Just like struct virtio_pci_cfg_cap in uapi/linux/virtio_pci.h, ++ * but uses a u32 explicitly for the data. ++ */ ++struct virtio_pci_cfg_cap_u32 { + struct virtio_pci_cap cap; + u32 pci_cfg_data; /* Data for BAR access. 
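/*
 * The tas2552 one-liner above (-7 becomes -700) is a units fix:
 * DECLARE_TLV_DB_SCALE() takes its minimum and step in 0.01 dB units, so
 * the "-7 to 24 dB in 1 dB steps" range from the comment is (-700, 100).
 * With the old value, TLV-aware userspace would have read the floor as
 * -0.07 dB. Driver-style usage, with an invented identifier (a fragment,
 * not a standalone program):
 */
#include <sound/tlv.h>

/* -700 => -7.00 dB floor, 100 => 1.00 dB per step, 0 => no mute entry */
static DECLARE_TLV_DB_SCALE(example_dac_tlv, -700, 100, 0);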
*/ + }; +@@ -157,7 +161,7 @@ struct pci_config { + struct virtio_pci_notify_cap notify; + struct virtio_pci_cap isr; + struct virtio_pci_cap device; +- struct virtio_pci_cfg_cap cfg_access; ++ struct virtio_pci_cfg_cap_u32 cfg_access; + }; + + /* The device structure describes a single device. */ +@@ -1291,7 +1295,7 @@ static struct device *dev_and_reg(u32 *reg) + * only fault if they try to write with some invalid bar/offset/length. + */ + static bool valid_bar_access(struct device *d, +- struct virtio_pci_cfg_cap *cfg_access) ++ struct virtio_pci_cfg_cap_u32 *cfg_access) + { + /* We only have 1 bar (BAR0) */ + if (cfg_access->cap.bar != 0) +diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c +index cc25f05..a843bee 100644 +--- a/tools/lib/traceevent/event-parse.c ++++ b/tools/lib/traceevent/event-parse.c +@@ -3721,7 +3721,7 @@ static void print_str_arg(struct trace_seq *s, void *data, int size, + struct format_field *field; + struct printk_map *printk; + long long val, fval; +- unsigned long addr; ++ unsigned long long addr; + char *str; + unsigned char *hex; + int print; +@@ -3754,13 +3754,30 @@ static void print_str_arg(struct trace_seq *s, void *data, int size, + */ + if (!(field->flags & FIELD_IS_ARRAY) && + field->size == pevent->long_size) { +- addr = *(unsigned long *)(data + field->offset); ++ ++ /* Handle heterogeneous recording and processing ++ * architectures ++ * ++ * CASE I: ++ * Traces recorded on 32-bit devices (32-bit ++ * addressing) and processed on 64-bit devices: ++ * In this case, only 32 bits should be read. ++ * ++ * CASE II: ++ * Traces recorded on 64 bit devices and processed ++ * on 32-bit devices: ++ * In this case, 64 bits must be read. ++ */ ++ addr = (pevent->long_size == 8) ? ++ *(unsigned long long *)(data + field->offset) : ++ (unsigned long long)*(unsigned int *)(data + field->offset); ++ + /* Check if it matches a print format */ + printk = find_printk(pevent, addr); + if (printk) + trace_seq_puts(s, printk->printk); + else +- trace_seq_printf(s, "%lx", addr); ++ trace_seq_printf(s, "%llx", addr); + break; + } + str = malloc(len + 1); +diff --git a/tools/perf/arch/alpha/Build b/tools/perf/arch/alpha/Build +new file mode 100644 +index 0000000..1bb8bf6 +--- /dev/null ++++ b/tools/perf/arch/alpha/Build +@@ -0,0 +1 @@ ++# empty +diff --git a/tools/perf/arch/mips/Build b/tools/perf/arch/mips/Build +new file mode 100644 +index 0000000..1bb8bf6 +--- /dev/null ++++ b/tools/perf/arch/mips/Build +@@ -0,0 +1 @@ ++# empty +diff --git a/tools/perf/arch/parisc/Build b/tools/perf/arch/parisc/Build +new file mode 100644 +index 0000000..1bb8bf6 +--- /dev/null ++++ b/tools/perf/arch/parisc/Build +@@ -0,0 +1 @@ ++# empty +diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c +index d99d850..ef355fc 100644 +--- a/tools/perf/builtin-stat.c ++++ b/tools/perf/builtin-stat.c +@@ -694,7 +694,7 @@ static void abs_printout(int id, int nr, struct perf_evsel *evsel, double avg) + static void print_aggr(char *prefix) + { + struct perf_evsel *counter; +- int cpu, cpu2, s, s2, id, nr; ++ int cpu, s, s2, id, nr; + double uval; + u64 ena, run, val; + +@@ -707,8 +707,7 @@ static void print_aggr(char *prefix) + val = ena = run = 0; + nr = 0; + for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) { +- cpu2 = perf_evsel__cpus(counter)->map[cpu]; +- s2 = aggr_get_id(evsel_list->cpus, cpu2); ++ s2 = aggr_get_id(perf_evsel__cpus(counter), cpu); + if (s2 != id) + continue; + val += perf_counts(counter->counts, cpu, 0)->val; +diff --git 
a/tools/perf/util/header.c b/tools/perf/util/header.c +index 03ace57..4215cc1 100644 +--- a/tools/perf/util/header.c ++++ b/tools/perf/util/header.c +@@ -1442,7 +1442,7 @@ static int process_nrcpus(struct perf_file_section *section __maybe_unused, + if (ph->needs_swap) + nr = bswap_32(nr); + +- ph->env.nr_cpus_online = nr; ++ ph->env.nr_cpus_avail = nr; + + ret = readn(fd, &nr, sizeof(nr)); + if (ret != sizeof(nr)) +@@ -1451,7 +1451,7 @@ static int process_nrcpus(struct perf_file_section *section __maybe_unused, + if (ph->needs_swap) + nr = bswap_32(nr); + +- ph->env.nr_cpus_avail = nr; ++ ph->env.nr_cpus_online = nr; + return 0; + } + +diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c +index 6f28d53..f298c69 100644 +--- a/tools/perf/util/hist.c ++++ b/tools/perf/util/hist.c +@@ -151,6 +151,9 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h) + hists__new_col_len(hists, HISTC_LOCAL_WEIGHT, 12); + hists__new_col_len(hists, HISTC_GLOBAL_WEIGHT, 12); + ++ if (h->srcline) ++ hists__new_col_len(hists, HISTC_SRCLINE, strlen(h->srcline)); ++ + if (h->transaction) + hists__new_col_len(hists, HISTC_TRANSACTION, + hist_entry__transaction_len()); +diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y +index 591905a..9cd7081 100644 +--- a/tools/perf/util/parse-events.y ++++ b/tools/perf/util/parse-events.y +@@ -255,7 +255,7 @@ PE_PMU_EVENT_PRE '-' PE_PMU_EVENT_SUF sep_dc + list_add_tail(&term->list, head); + + ALLOC_LIST(list); +- ABORT_ON(parse_events_add_pmu(list, &data->idx, "cpu", head)); ++ ABORT_ON(parse_events_add_pmu(data, list, "cpu", head)); + parse_events__free_terms(head); + $$ = list; + } +diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c +index 381f23a..ae6351d 100644 +--- a/tools/perf/util/probe-event.c ++++ b/tools/perf/util/probe-event.c +@@ -274,12 +274,13 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso) + int ret = 0; + + if (module) { +- list_for_each_entry(dso, &host_machine->dsos.head, node) { +- if (!dso->kernel) +- continue; +- if (strncmp(dso->short_name + 1, module, +- dso->short_name_len - 2) == 0) +- goto found; ++ char module_name[128]; ++ ++ snprintf(module_name, sizeof(module_name), "[%s]", module); ++ map = map_groups__find_by_name(&host_machine->kmaps, MAP__FUNCTION, module_name); ++ if (map) { ++ dso = map->dso; ++ goto found; + } + pr_debug("Failed to find module %s.\n", module); + return -ENOENT; +diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h +index 31db6ee..cd55c6d 100644 +--- a/tools/perf/util/probe-event.h ++++ b/tools/perf/util/probe-event.h +@@ -106,6 +106,8 @@ struct variable_list { + struct strlist *vars; /* Available variables */ + }; + ++struct map; ++ + /* Command string to events */ + extern int parse_perf_probe_command(const char *cmd, + struct perf_probe_event *pev); +diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c +index 65f7e38..3338588 100644 +--- a/tools/perf/util/symbol-elf.c ++++ b/tools/perf/util/symbol-elf.c +@@ -1260,8 +1260,6 @@ out_close: + static int kcore__init(struct kcore *kcore, char *filename, int elfclass, + bool temp) + { +- GElf_Ehdr *ehdr; +- + kcore->elfclass = elfclass; + + if (temp) +@@ -1278,9 +1276,7 @@ static int kcore__init(struct kcore *kcore, char *filename, int elfclass, + if (!gelf_newehdr(kcore->elf, elfclass)) + goto out_end; + +- ehdr = gelf_getehdr(kcore->elf, &kcore->ehdr); +- if (!ehdr) +- goto out_end; ++ memset(&kcore->ehdr, 0, sizeof(GElf_Ehdr)); + + return 
0; + +@@ -1337,23 +1333,18 @@ static int kcore__copy_hdr(struct kcore *from, struct kcore *to, size_t count) + static int kcore__add_phdr(struct kcore *kcore, int idx, off_t offset, + u64 addr, u64 len) + { +- GElf_Phdr gphdr; +- GElf_Phdr *phdr; +- +- phdr = gelf_getphdr(kcore->elf, idx, &gphdr); +- if (!phdr) +- return -1; +- +- phdr->p_type = PT_LOAD; +- phdr->p_flags = PF_R | PF_W | PF_X; +- phdr->p_offset = offset; +- phdr->p_vaddr = addr; +- phdr->p_paddr = 0; +- phdr->p_filesz = len; +- phdr->p_memsz = len; +- phdr->p_align = page_size; +- +- if (!gelf_update_phdr(kcore->elf, idx, phdr)) ++ GElf_Phdr phdr = { ++ .p_type = PT_LOAD, ++ .p_flags = PF_R | PF_W | PF_X, ++ .p_offset = offset, ++ .p_vaddr = addr, ++ .p_paddr = 0, ++ .p_filesz = len, ++ .p_memsz = len, ++ .p_align = page_size, ++ }; ++ ++ if (!gelf_update_phdr(kcore->elf, idx, &phdr)) + return -1; + + return 0; +diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c +index 9ff4193..79db453 100644 +--- a/virt/kvm/eventfd.c ++++ b/virt/kvm/eventfd.c +@@ -771,40 +771,14 @@ static enum kvm_bus ioeventfd_bus_from_flags(__u32 flags) + return KVM_MMIO_BUS; + } + +-static int +-kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args) ++static int kvm_assign_ioeventfd_idx(struct kvm *kvm, ++ enum kvm_bus bus_idx, ++ struct kvm_ioeventfd *args) + { +- enum kvm_bus bus_idx; +- struct _ioeventfd *p; +- struct eventfd_ctx *eventfd; +- int ret; +- +- bus_idx = ioeventfd_bus_from_flags(args->flags); +- /* must be natural-word sized, or 0 to ignore length */ +- switch (args->len) { +- case 0: +- case 1: +- case 2: +- case 4: +- case 8: +- break; +- default: +- return -EINVAL; +- } +- +- /* check for range overflow */ +- if (args->addr + args->len < args->addr) +- return -EINVAL; + +- /* check for extra flags that we don't understand */ +- if (args->flags & ~KVM_IOEVENTFD_VALID_FLAG_MASK) +- return -EINVAL; +- +- /* ioeventfd with no length can't be combined with DATAMATCH */ +- if (!args->len && +- args->flags & (KVM_IOEVENTFD_FLAG_PIO | +- KVM_IOEVENTFD_FLAG_DATAMATCH)) +- return -EINVAL; ++ struct eventfd_ctx *eventfd; ++ struct _ioeventfd *p; ++ int ret; + + eventfd = eventfd_ctx_fdget(args->fd); + if (IS_ERR(eventfd)) +@@ -843,16 +817,6 @@ kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args) + if (ret < 0) + goto unlock_fail; + +- /* When length is ignored, MMIO is also put on a separate bus, for +- * faster lookups. 
+- */ +- if (!args->len && !(args->flags & KVM_IOEVENTFD_FLAG_PIO)) { +- ret = kvm_io_bus_register_dev(kvm, KVM_FAST_MMIO_BUS, +- p->addr, 0, &p->dev); +- if (ret < 0) +- goto register_fail; +- } +- + kvm->buses[bus_idx]->ioeventfd_count++; + list_add_tail(&p->list, &kvm->ioeventfds); + +@@ -860,8 +824,6 @@ kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args) + + return 0; + +-register_fail: +- kvm_io_bus_unregister_dev(kvm, bus_idx, &p->dev); + unlock_fail: + mutex_unlock(&kvm->slots_lock); + +@@ -873,14 +835,13 @@ fail: + } + + static int +-kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args) ++kvm_deassign_ioeventfd_idx(struct kvm *kvm, enum kvm_bus bus_idx, ++ struct kvm_ioeventfd *args) + { +- enum kvm_bus bus_idx; + struct _ioeventfd *p, *tmp; + struct eventfd_ctx *eventfd; + int ret = -ENOENT; + +- bus_idx = ioeventfd_bus_from_flags(args->flags); + eventfd = eventfd_ctx_fdget(args->fd); + if (IS_ERR(eventfd)) + return PTR_ERR(eventfd); +@@ -901,10 +862,6 @@ kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args) + continue; + + kvm_io_bus_unregister_dev(kvm, bus_idx, &p->dev); +- if (!p->length) { +- kvm_io_bus_unregister_dev(kvm, KVM_FAST_MMIO_BUS, +- &p->dev); +- } + kvm->buses[bus_idx]->ioeventfd_count--; + ioeventfd_release(p); + ret = 0; +@@ -918,6 +875,71 @@ kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args) + return ret; + } + ++static int kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args) ++{ ++ enum kvm_bus bus_idx = ioeventfd_bus_from_flags(args->flags); ++ int ret = kvm_deassign_ioeventfd_idx(kvm, bus_idx, args); ++ ++ if (!args->len && bus_idx == KVM_MMIO_BUS) ++ kvm_deassign_ioeventfd_idx(kvm, KVM_FAST_MMIO_BUS, args); ++ ++ return ret; ++} ++ ++static int ++kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args) ++{ ++ enum kvm_bus bus_idx; ++ int ret; ++ ++ bus_idx = ioeventfd_bus_from_flags(args->flags); ++ /* must be natural-word sized, or 0 to ignore length */ ++ switch (args->len) { ++ case 0: ++ case 1: ++ case 2: ++ case 4: ++ case 8: ++ break; ++ default: ++ return -EINVAL; ++ } ++ ++ /* check for range overflow */ ++ if (args->addr + args->len < args->addr) ++ return -EINVAL; ++ ++ /* check for extra flags that we don't understand */ ++ if (args->flags & ~KVM_IOEVENTFD_VALID_FLAG_MASK) ++ return -EINVAL; ++ ++ /* ioeventfd with no length can't be combined with DATAMATCH */ ++ if (!args->len && ++ args->flags & (KVM_IOEVENTFD_FLAG_PIO | ++ KVM_IOEVENTFD_FLAG_DATAMATCH)) ++ return -EINVAL; ++ ++ ret = kvm_assign_ioeventfd_idx(kvm, bus_idx, args); ++ if (ret) ++ goto fail; ++ ++ /* When length is ignored, MMIO is also put on a separate bus, for ++ * faster lookups. 
++ */ ++ if (!args->len && bus_idx == KVM_MMIO_BUS) { ++ ret = kvm_assign_ioeventfd_idx(kvm, KVM_FAST_MMIO_BUS, args); ++ if (ret < 0) ++ goto fast_fail; ++ } ++ ++ return 0; ++ ++fast_fail: ++ kvm_deassign_ioeventfd_idx(kvm, bus_idx, args); ++fail: ++ return ret; ++} ++ + int + kvm_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args) + { +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c +index 8b8a444..5a2a78a 100644 +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -3080,10 +3080,25 @@ static void kvm_io_bus_destroy(struct kvm_io_bus *bus) + static inline int kvm_io_bus_cmp(const struct kvm_io_range *r1, + const struct kvm_io_range *r2) + { +- if (r1->addr < r2->addr) ++ gpa_t addr1 = r1->addr; ++ gpa_t addr2 = r2->addr; ++ ++ if (addr1 < addr2) + return -1; +- if (r1->addr + r1->len > r2->addr + r2->len) ++ ++ /* If r2->len == 0, match the exact address. If r2->len != 0, ++ * accept any overlapping write. Any order is acceptable for ++ * overlapping ranges, because kvm_io_bus_get_first_dev ensures ++ * we process all of them. ++ */ ++ if (r2->len) { ++ addr1 += r1->len; ++ addr2 += r2->len; ++ } ++ ++ if (addr1 > addr2) + return 1; ++ + return 0; + } + diff --git a/4.2.3/4420_grsecurity-3.1-4.2.3-201510202025.patch b/4.2.4/4420_grsecurity-3.1-4.2.4-201510222059.patch index 87c4cb1..c3d3682 100644 --- a/4.2.3/4420_grsecurity-3.1-4.2.3-201510202025.patch +++ b/4.2.4/4420_grsecurity-3.1-4.2.4-201510222059.patch @@ -406,7 +406,7 @@ index 6fccb69..60c7c7a 100644 A toggle value indicating if modules are allowed to be loaded diff --git a/Makefile b/Makefile -index a6edbb1..5ac7686 100644 +index a952801..9da1dcb 100644 --- a/Makefile +++ b/Makefile @@ -298,7 +298,9 @@ CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \ @@ -3187,7 +3187,7 @@ index 36c18b7..0d78292 100644 cpu_arch = CPU_ARCH_ARMv6; else diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c -index 423663e..bfeb0ff 100644 +index 586eef2..61aabd4 100644 --- a/arch/arm/kernel/signal.c +++ b/arch/arm/kernel/signal.c @@ -24,8 +24,6 @@ @@ -3199,7 +3199,7 @@ index 423663e..bfeb0ff 100644 #ifdef CONFIG_CRUNCH static int preserve_crunch_context(struct crunch_sigframe __user *frame) { -@@ -385,8 +383,7 @@ setup_return(struct pt_regs *regs, struct ksignal *ksig, +@@ -390,8 +388,7 @@ setup_return(struct pt_regs *regs, struct ksignal *ksig, * except when the MPU has protected the vectors * page from PL0 */ @@ -3209,7 +3209,7 @@ index 423663e..bfeb0ff 100644 } else #endif { -@@ -592,33 +589,3 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall) +@@ -597,33 +594,3 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall) } while (thread_flags & _TIF_WORK_MASK); return 0; } @@ -5104,20 +5104,6 @@ index 07e1ba44..ec8cbbb 100644 #define access_ok(type, addr, size) __range_ok(addr, size) #define user_addr_max get_fs -diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c -index e8ca6ea..13671a9 100644 ---- a/arch/arm64/kernel/efi.c -+++ b/arch/arm64/kernel/efi.c -@@ -258,7 +258,8 @@ static bool __init efi_virtmap_init(void) - */ - if (!is_normal_ram(md)) - prot = __pgprot(PROT_DEVICE_nGnRE); -- else if (md->type == EFI_RUNTIME_SERVICES_CODE) -+ else if (md->type == EFI_RUNTIME_SERVICES_CODE || -+ !PAGE_ALIGNED(md->phys_addr)) - prot = PAGE_KERNEL_EXEC; - else - prot = PAGE_KERNEL; diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c index d16a1ce..a5acc60 100644 --- a/arch/arm64/mm/dma-mapping.c @@ -7287,7 +7273,7 @@ index 
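/*
 * Back in the kvm_main.c hunk above, the io-bus comparator changes so that
 * a zero-length search key matches only on the exact start address (zero
 * length is how the fast-MMIO eventfds reinstated earlier in this patch
 * are registered), while non-zero keys keep length-aware ordering; per the
 * new comment, any ordering of overlapping ranges is fine because
 * kvm_io_bus_get_first_dev() walks all candidates. The decision logic,
 * lifted into a stand-alone harness with simplified types:
 */
#include <stdio.h>

typedef unsigned long long gpa_t;
struct io_range { gpa_t addr; int len; };

/* same comparison as the patched kvm_io_bus_cmp(); r2 is the search key */
static int range_cmp(const struct io_range *r1, const struct io_range *r2)
{
        gpa_t addr1 = r1->addr;
        gpa_t addr2 = r2->addr;

        if (addr1 < addr2)
                return -1;

        if (r2->len) {          /* len == 0: exact-address match only */
                addr1 += r1->len;
                addr2 += r2->len;
        }

        if (addr1 > addr2)
                return 1;
        return 0;
}

int main(void)
{
        struct io_range dev  = { 0x1000, 4 };   /* registered region  */
        struct io_range hit  = { 0x1000, 0 };   /* zero-length lookup */
        struct io_range miss = { 0x2000, 0 };

        printf("%d\n", range_cmp(&dev, &hit));  /*  0: exact match    */
        printf("%d\n", range_cmp(&dev, &miss)); /* -1: sorts before   */
        return 0;
}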
5c81fdd..db158d3 100644 { return pfn_valid(PFN_DOWN(virt_to_phys(kaddr))); diff --git a/arch/mips/net/bpf_jit_asm.S b/arch/mips/net/bpf_jit_asm.S -index e927260..552e6ea 100644 +index dabf417..0be1d6d 100644 --- a/arch/mips/net/bpf_jit_asm.S +++ b/arch/mips/net/bpf_jit_asm.S @@ -62,7 +62,9 @@ sk_load_word_positive: @@ -7298,9 +7284,9 @@ index e927260..552e6ea 100644 lw $r_A, 0(t1) + .set noreorder #ifdef CONFIG_CPU_LITTLE_ENDIAN + # if defined(__mips_isa_rev) && (__mips_isa_rev >= 2) wsbh t0, $r_A - rotr $r_A, t0, 16 -@@ -78,7 +80,9 @@ sk_load_half_positive: +@@ -90,7 +92,9 @@ sk_load_half_positive: is_offset_in_header(2, half) /* Offset within header boundaries */ PTR_ADDU t1, $r_skb_data, offset @@ -7308,8 +7294,8 @@ index e927260..552e6ea 100644 lh $r_A, 0(t1) + .set noreorder #ifdef CONFIG_CPU_LITTLE_ENDIAN + # if defined(__mips_isa_rev) && (__mips_isa_rev >= 2) wsbh t0, $r_A - seh $r_A, t0 diff --git a/arch/mips/sgi-ip27/ip27-nmi.c b/arch/mips/sgi-ip27/ip27-nmi.c index a2358b4..7cead4f 100644 --- a/arch/mips/sgi-ip27/ip27-nmi.c @@ -15813,7 +15799,7 @@ index 21dc60a..844def1 100644 +ENDPROC(async_page_fault) #endif diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S -index 8cb3e43..a497278 100644 +index d330840..4f1925e 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -37,6 +37,8 @@ @@ -16711,7 +16697,7 @@ index 8cb3e43..a497278 100644 /* Runs on exception stack */ ENTRY(nmi) -@@ -1258,6 +1754,8 @@ ENTRY(nmi) +@@ -1269,6 +1765,8 @@ ENTRY(nmi) * other IST entries. */ @@ -16720,7 +16706,7 @@ index 8cb3e43..a497278 100644 /* Use %rdx as our temp variable throughout */ pushq %rdx -@@ -1298,6 +1796,12 @@ ENTRY(nmi) +@@ -1312,6 +1810,12 @@ ENTRY(nmi) pushq %r14 /* pt_regs->r14 */ pushq %r15 /* pt_regs->r15 */ @@ -16733,7 +16719,7 @@ index 8cb3e43..a497278 100644 /* * At this point we no longer need to worry about stack damage * due to nesting -- we're on the normal thread stack and we're -@@ -1308,12 +1812,19 @@ ENTRY(nmi) +@@ -1322,12 +1826,19 @@ ENTRY(nmi) movq $-1, %rsi call do_nmi @@ -16753,7 +16739,7 @@ index 8cb3e43..a497278 100644 jmp restore_c_regs_and_iret .Lnmi_from_kernel: -@@ -1435,6 +1946,7 @@ nested_nmi_out: +@@ -1449,6 +1960,7 @@ nested_nmi_out: popq %rdx /* We are returning to kernel mode, so this cannot result in a fault. */ @@ -16761,7 +16747,7 @@ index 8cb3e43..a497278 100644 INTERRUPT_RETURN first_nmi: -@@ -1508,20 +2020,22 @@ end_repeat_nmi: +@@ -1522,20 +2034,22 @@ end_repeat_nmi: ALLOC_PT_GPREGS_ON_STACK /* @@ -16787,7 +16773,7 @@ index 8cb3e43..a497278 100644 jnz nmi_restore nmi_swapgs: SWAPGS_UNSAFE_STACK -@@ -1532,6 +2046,8 @@ nmi_restore: +@@ -1546,6 +2060,8 @@ nmi_restore: /* Point RSP at the "iret" frame. */ REMOVE_PT_GPREGS_FROM_STACK 6*8 @@ -16796,7 +16782,7 @@ index 8cb3e43..a497278 100644 /* * Clear "NMI executing". Set DF first so that we can easily * distinguish the remaining code between here and IRET from -@@ -1549,9 +2065,9 @@ nmi_restore: +@@ -1563,9 +2079,9 @@ nmi_restore: * mode, so this cannot result in a fault. 
*/ INTERRUPT_RETURN @@ -20926,7 +20912,7 @@ index 13f310b..f0ef42e 100644 #define pgprot_writecombine pgprot_writecombine extern pgprot_t pgprot_writecombine(pgprot_t prot); diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h -index dca71714..919d4e1 100644 +index b12f810..aedcc13 100644 --- a/arch/x86/include/asm/preempt.h +++ b/arch/x86/include/asm/preempt.h @@ -84,7 +84,7 @@ static __always_inline void __preempt_count_sub(int val) @@ -23010,7 +22996,7 @@ index 0c26b1b..a766e85 100644 bogus_magic: jmp bogus_magic diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c -index c42827e..c2fd50b 100644 +index 25f9093..21d2827 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -20,6 +20,7 @@ @@ -23045,7 +23031,7 @@ index c42827e..c2fd50b 100644 o_dspl = *(s32 *)(insnbuf + 1); /* next_rip of the replacement JMP */ -@@ -359,6 +369,7 @@ void __init_or_module apply_alternatives(struct alt_instr *start, +@@ -364,6 +374,7 @@ void __init_or_module apply_alternatives(struct alt_instr *start, { struct alt_instr *a; u8 *instr, *replacement; @@ -23053,7 +23039,7 @@ index c42827e..c2fd50b 100644 u8 insnbuf[MAX_PATCH_LEN]; DPRINTK("alt table %p -> %p", start, end); -@@ -374,46 +385,71 @@ void __init_or_module apply_alternatives(struct alt_instr *start, +@@ -379,46 +390,71 @@ void __init_or_module apply_alternatives(struct alt_instr *start, for (a = start; a < end; a++) { int insnbuf_sz = 0; @@ -23139,7 +23125,7 @@ index c42827e..c2fd50b 100644 text_poke_early(instr, insnbuf, insnbuf_sz); } -@@ -429,10 +465,16 @@ static void alternatives_smp_lock(const s32 *start, const s32 *end, +@@ -434,10 +470,16 @@ static void alternatives_smp_lock(const s32 *start, const s32 *end, for (poff = start; poff < end; poff++) { u8 *ptr = (u8 *)poff + *poff; @@ -23157,7 +23143,7 @@ index c42827e..c2fd50b 100644 text_poke(ptr, ((unsigned char []){0xf0}), 1); } mutex_unlock(&text_mutex); -@@ -447,10 +489,16 @@ static void alternatives_smp_unlock(const s32 *start, const s32 *end, +@@ -452,10 +494,16 @@ static void alternatives_smp_unlock(const s32 *start, const s32 *end, for (poff = start; poff < end; poff++) { u8 *ptr = (u8 *)poff + *poff; @@ -23175,7 +23161,7 @@ index c42827e..c2fd50b 100644 text_poke(ptr, ((unsigned char []){0x3E}), 1); } mutex_unlock(&text_mutex); -@@ -587,7 +635,7 @@ void __init_or_module apply_paravirt(struct paravirt_patch_site *start, +@@ -592,7 +640,7 @@ void __init_or_module apply_paravirt(struct paravirt_patch_site *start, BUG_ON(p->len > MAX_PATCH_LEN); /* prep the buffer with the original instructions */ @@ -23184,7 +23170,7 @@ index c42827e..c2fd50b 100644 used = pv_init_ops.patch(p->instrtype, p->clobbers, insnbuf, (unsigned long)p->instr, p->len); -@@ -634,7 +682,7 @@ void __init alternative_instructions(void) +@@ -639,7 +687,7 @@ void __init alternative_instructions(void) if (!uniproc_patched || num_possible_cpus() == 1) free_init_pages("SMP alternatives", (unsigned long)__smp_locks, @@ -23193,7 +23179,7 @@ index c42827e..c2fd50b 100644 #endif apply_paravirt(__parainstructions, __parainstructions_end); -@@ -655,13 +703,17 @@ void __init alternative_instructions(void) +@@ -660,13 +708,17 @@ void __init alternative_instructions(void) * instructions. And on the local CPU you need to be protected again NMI or MCE * handlers seeing an inconsistent instruction while you patch. 
*/ @@ -23213,7 +23199,7 @@ index c42827e..c2fd50b 100644 local_irq_restore(flags); /* Could also do a CLFLUSH here to speed up CPU recovery; but that causes hangs on some VIA CPUs. */ -@@ -683,36 +735,22 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode, +@@ -688,36 +740,22 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode, */ void *text_poke(void *addr, const void *opcode, size_t len) { @@ -23258,7 +23244,7 @@ index c42827e..c2fd50b 100644 return addr; } -@@ -766,7 +804,7 @@ int poke_int3_handler(struct pt_regs *regs) +@@ -771,7 +809,7 @@ int poke_int3_handler(struct pt_regs *regs) */ void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler) { @@ -23268,7 +23254,7 @@ index c42827e..c2fd50b 100644 bp_int3_handler = handler; bp_int3_addr = (u8 *)addr + sizeof(int3); diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c -index cde732c..6365ac2 100644 +index 307a498..783e96a 100644 --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -171,7 +171,7 @@ int first_system_vector = FIRST_SYSTEM_VECTOR; @@ -23280,7 +23266,7 @@ index cde732c..6365ac2 100644 int pic_mode; -@@ -1857,7 +1857,7 @@ static inline void __smp_error_interrupt(struct pt_regs *regs) +@@ -1864,7 +1864,7 @@ static inline void __smp_error_interrupt(struct pt_regs *regs) apic_write(APIC_ESR, 0); v = apic_read(APIC_ESR); ack_APIC_irq(); @@ -23338,7 +23324,7 @@ index c4a8d63..fe893ac 100644 .name = "bigsmp", .probe = probe_bigsmp, diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c -index 206052e..621dfb4 100644 +index 5880b48..5085f3e 100644 --- a/arch/x86/kernel/apic/io_apic.c +++ b/arch/x86/kernel/apic/io_apic.c @@ -1682,7 +1682,7 @@ static unsigned int startup_ioapic_irq(struct irq_data *data) @@ -24303,10 +24289,10 @@ index 97242a9..cf9c30e 100644 while (amd_iommu_v2_event_descs[i].attr.attr.name) diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c -index 6326ae2..f092747 100644 +index 1b09c42..521004d 100644 --- a/arch/x86/kernel/cpu/perf_event_intel.c +++ b/arch/x86/kernel/cpu/perf_event_intel.c -@@ -3016,10 +3016,10 @@ __init int intel_pmu_init(void) +@@ -3019,10 +3019,10 @@ __init int intel_pmu_init(void) x86_pmu.num_counters_fixed = max((int)edx.split.num_counters_fixed, 3); if (boot_cpu_has(X86_FEATURE_PDCM)) { @@ -24508,32 +24494,6 @@ index 83741a7..bd3507d 100644 { .notifier_call = cpuid_class_cpu_callback, }; -diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c -index e068d66..74ca2fe 100644 ---- a/arch/x86/kernel/crash.c -+++ b/arch/x86/kernel/crash.c -@@ -185,10 +185,9 @@ void native_machine_crash_shutdown(struct pt_regs *regs) - } - - #ifdef CONFIG_KEXEC_FILE --static int get_nr_ram_ranges_callback(unsigned long start_pfn, -- unsigned long nr_pfn, void *arg) -+static int get_nr_ram_ranges_callback(u64 start, u64 end, void *arg) - { -- int *nr_ranges = arg; -+ unsigned int *nr_ranges = arg; - - (*nr_ranges)++; - return 0; -@@ -214,7 +213,7 @@ static void fill_up_crash_elf_data(struct crash_elf_data *ced, - - ced->image = image; - -- walk_system_ram_range(0, -1, &nr_ranges, -+ walk_system_ram_res(0, -1, &nr_ranges, - get_nr_ram_ranges_callback); - - ced->max_nr_ranges = nr_ranges; diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c index afa64ad..dce67dd 100644 --- a/arch/x86/kernel/crash_dump_64.c @@ -27536,10 +27496,10 @@ index 33ee3e0..da3519a 100644 #ifdef CONFIG_QUEUED_SPINLOCKS .queued_spin_lock_slowpath = 
native_queued_spin_lock_slowpath, diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c -index 58bcfb6..0adb7d7 100644 +index ebb5657..dde2f45 100644 --- a/arch/x86/kernel/paravirt.c +++ b/arch/x86/kernel/paravirt.c -@@ -56,6 +56,9 @@ u64 _paravirt_ident_64(u64 x) +@@ -64,6 +64,9 @@ u64 _paravirt_ident_64(u64 x) { return x; } @@ -27549,7 +27509,7 @@ index 58bcfb6..0adb7d7 100644 void __init default_banner(void) { -@@ -142,16 +145,20 @@ unsigned paravirt_patch_default(u8 type, u16 clobbers, void *insnbuf, +@@ -150,16 +153,20 @@ unsigned paravirt_patch_default(u8 type, u16 clobbers, void *insnbuf, if (opfunc == NULL) /* If there's no function, patch it with a ud2a (BUG) */ @@ -27574,7 +27534,7 @@ index 58bcfb6..0adb7d7 100644 else if (type == PARAVIRT_PATCH(pv_cpu_ops.iret) || #ifdef CONFIG_X86_32 -@@ -178,7 +185,7 @@ unsigned paravirt_patch_insns(void *insnbuf, unsigned len, +@@ -186,7 +193,7 @@ unsigned paravirt_patch_insns(void *insnbuf, unsigned len, if (insn_len > len || start == NULL) insn_len = len; else @@ -27583,7 +27543,7 @@ index 58bcfb6..0adb7d7 100644 return insn_len; } -@@ -302,7 +309,7 @@ enum paravirt_lazy_mode paravirt_get_lazy_mode(void) +@@ -310,7 +317,7 @@ enum paravirt_lazy_mode paravirt_get_lazy_mode(void) return this_cpu_read(paravirt_lazy_mode); } @@ -27592,7 +27552,7 @@ index 58bcfb6..0adb7d7 100644 .name = "bare hardware", .paravirt_enabled = 0, .kernel_rpl = 0, -@@ -313,16 +320,16 @@ struct pv_info pv_info = { +@@ -321,16 +328,16 @@ struct pv_info pv_info = { #endif }; @@ -27612,7 +27572,7 @@ index 58bcfb6..0adb7d7 100644 .save_fl = __PV_IS_CALLEE_SAVE(native_save_fl), .restore_fl = __PV_IS_CALLEE_SAVE(native_restore_fl), .irq_disable = __PV_IS_CALLEE_SAVE(native_irq_disable), -@@ -334,7 +341,7 @@ __visible struct pv_irq_ops pv_irq_ops = { +@@ -342,7 +349,7 @@ __visible struct pv_irq_ops pv_irq_ops = { #endif }; @@ -27621,7 +27581,7 @@ index 58bcfb6..0adb7d7 100644 .cpuid = native_cpuid, .get_debugreg = native_get_debugreg, .set_debugreg = native_set_debugreg, -@@ -397,21 +404,26 @@ NOKPROBE_SYMBOL(native_get_debugreg); +@@ -405,21 +412,26 @@ NOKPROBE_SYMBOL(native_get_debugreg); NOKPROBE_SYMBOL(native_set_debugreg); NOKPROBE_SYMBOL(native_load_idt); @@ -27651,7 +27611,7 @@ index 58bcfb6..0adb7d7 100644 .read_cr2 = native_read_cr2, .write_cr2 = native_write_cr2, -@@ -461,6 +473,7 @@ struct pv_mmu_ops pv_mmu_ops = { +@@ -469,6 +481,7 @@ struct pv_mmu_ops pv_mmu_ops = { .make_pud = PTE_IDENT, .set_pgd = native_set_pgd, @@ -27659,7 +27619,7 @@ index 58bcfb6..0adb7d7 100644 #endif #endif /* CONFIG_PGTABLE_LEVELS >= 3 */ -@@ -481,6 +494,12 @@ struct pv_mmu_ops pv_mmu_ops = { +@@ -489,6 +502,12 @@ struct pv_mmu_ops pv_mmu_ops = { }, .set_fixmap = native_set_fixmap, @@ -27997,7 +27957,7 @@ index f73c962..6589332 100644 } - diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c -index f6b9163..1ab8c96 100644 +index a90ac95..ebac33e 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -157,9 +157,10 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp, @@ -28058,21 +28018,26 @@ index f6b9163..1ab8c96 100644 /* * Now maybe reload the debug registers and handle I/O bitmaps */ -@@ -506,12 +516,11 @@ unsigned long get_wchan(struct task_struct *p) +@@ -510,7 +520,6 @@ unsigned long get_wchan(struct task_struct *p) + if (!p || p == current || p->state == TASK_RUNNING) return 0; - stack = (unsigned long)task_stack_page(p); -- if (p->thread.sp < stack || p->thread.sp >= stack+THREAD_SIZE) -+ if (p->thread.sp 
< stack || p->thread.sp > stack+THREAD_SIZE-16-sizeof(u64)) +- + start = (unsigned long)task_stack_page(p); + if (!start) return 0; - fp = *(u64 *)(p->thread.sp); - do { -- if (fp < (unsigned long)stack || -- fp >= (unsigned long)stack+THREAD_SIZE) -+ if (fp < stack || fp > stack+THREAD_SIZE-16-sizeof(u64)) - return 0; - ip = *(u64 *)(fp+8); - if (!in_sched_functions(ip)) +@@ -535,7 +544,10 @@ unsigned long get_wchan(struct task_struct *p) + */ + top = start + THREAD_SIZE - TOP_OF_KERNEL_STACK_PADDING; + top -= 2 * sizeof(unsigned long); +- bottom = start + sizeof(struct thread_info); ++ /* not adding sizeof(thread_info) since it's not located on the stack ++ with PaX patched in ++ */ ++ bottom = start; + + sp = READ_ONCE(p->thread.sp); + if (sp < bottom || sp > top) diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c index 9be72bc..f4329c5 100644 --- a/arch/x86/kernel/ptrace.c @@ -29298,10 +29263,10 @@ index f579192..aed90b8 100644 memmove(&new_stack->regs.ip, (void *)s->regs.sp, 5*8); diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c -index 7437b41..45f6250 100644 +index dc9af7a..1bc625e 100644 --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c -@@ -150,7 +150,7 @@ static void cyc2ns_write_end(int cpu, struct cyc2ns_data *data) +@@ -151,7 +151,7 @@ static void cyc2ns_write_end(int cpu, struct cyc2ns_data *data) */ smp_wmb(); @@ -29816,10 +29781,10 @@ index 0f67d7e..4b9fa11 100644 goto error; walker->ptep_user[walker->level - 1] = ptep_user; diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c -index 8e0c084..bdb9c3b 100644 +index 2d32b67..2cd298b 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c -@@ -3688,7 +3688,11 @@ static void reload_tss(struct kvm_vcpu *vcpu) +@@ -3586,7 +3586,11 @@ static void reload_tss(struct kvm_vcpu *vcpu) int cpu = raw_smp_processor_id(); struct svm_cpu_data *sd = per_cpu(svm_data, cpu); @@ -29831,7 +29796,7 @@ index 8e0c084..bdb9c3b 100644 load_TR_desc(); } -@@ -4084,6 +4088,10 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu) +@@ -3982,6 +3986,10 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu) #endif #endif @@ -29843,7 +29808,7 @@ index 8e0c084..bdb9c3b 100644 local_irq_disable(); diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c -index 83b7b5c..26d8b1b 100644 +index aa9e8229..ab09cc4 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -1440,12 +1440,12 @@ static void vmcs_write64(unsigned long field, u64 value) @@ -29957,7 +29922,7 @@ index 83b7b5c..26d8b1b 100644 vmx_disable_intercept_for_msr(MSR_FS_BASE, false); vmx_disable_intercept_for_msr(MSR_GS_BASE, false); -@@ -6172,10 +6191,12 @@ static __init int hardware_setup(void) +@@ -6174,10 +6193,12 @@ static __init int hardware_setup(void) enable_pml = 0; if (!enable_pml) { @@ -29974,7 +29939,7 @@ index 83b7b5c..26d8b1b 100644 } return alloc_kvm_area(); -@@ -8378,6 +8399,12 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) +@@ -8380,6 +8401,12 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) "jmp 2f \n\t" "1: " __ex(ASM_VMX_VMRESUME) "\n\t" "2: " @@ -29987,7 +29952,7 @@ index 83b7b5c..26d8b1b 100644 /* Save guest registers, load host registers, keep flags */ "mov %0, %c[wordsize](%%" _ASM_SP ") \n\t" "pop %0 \n\t" -@@ -8430,6 +8457,11 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) +@@ -8432,6 +8459,11 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) #endif [cr2]"i"(offsetof(struct vcpu_vmx, vcpu.arch.cr2)), [wordsize]"i"(sizeof(ulong)) @@ -29999,7 +29964,7 @@ index 83b7b5c..26d8b1b 100644 : "cc", "memory" 
#ifdef CONFIG_X86_64 , "rax", "rbx", "rdi", "rsi" -@@ -8443,7 +8475,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) +@@ -8445,7 +8477,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) if (debugctlmsr) update_debugctlmsr(debugctlmsr); @@ -30008,7 +29973,7 @@ index 83b7b5c..26d8b1b 100644 /* * The sysexit path does not restore ds/es, so we must set them to * a reasonable value ourselves. -@@ -8452,8 +8484,18 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) +@@ -8454,8 +8486,18 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) * may be executed in interrupt context, which saves and restore segments * around it, nullifying its effect. */ @@ -30030,7 +29995,7 @@ index 83b7b5c..26d8b1b 100644 vcpu->arch.regs_avail = ~((1 << VCPU_REGS_RIP) | (1 << VCPU_REGS_RSP) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c -index 8f0f6ec..9cee69e 100644 +index 32c6e6a..d6c5bc2 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1842,8 +1842,8 @@ static int xen_hvm_config(struct kvm_vcpu *vcpu, u64 data) @@ -30044,7 +30009,7 @@ index 8f0f6ec..9cee69e 100644 u8 blob_size = lm ? kvm->arch.xen_hvm_config.blob_size_64 : kvm->arch.xen_hvm_config.blob_size_32; u32 page_num = data & ~PAGE_MASK; -@@ -2731,6 +2731,8 @@ long kvm_arch_dev_ioctl(struct file *filp, +@@ -2733,6 +2733,8 @@ long kvm_arch_dev_ioctl(struct file *filp, if (n < msr_list.nmsrs) goto out; r = -EFAULT; @@ -30053,7 +30018,7 @@ index 8f0f6ec..9cee69e 100644 if (copy_to_user(user_msr_list->indices, &msrs_to_save, num_msrs_to_save * sizeof(u32))) goto out; -@@ -3091,7 +3093,7 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu, +@@ -3093,7 +3095,7 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu, static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu) { @@ -30062,7 +30027,7 @@ index 8f0f6ec..9cee69e 100644 u64 xstate_bv = xsave->header.xfeatures; u64 valid; -@@ -3127,7 +3129,7 @@ static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu) +@@ -3129,7 +3131,7 @@ static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu) static void load_xsave(struct kvm_vcpu *vcpu, u8 *src) { @@ -30071,7 +30036,7 @@ index 8f0f6ec..9cee69e 100644 u64 xstate_bv = *(u64 *)(src + XSAVE_HDR_OFFSET); u64 valid; -@@ -3171,7 +3173,7 @@ static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu, +@@ -3173,7 +3175,7 @@ static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu, fill_xsave((u8 *) guest_xsave->region, vcpu); } else { memcpy(guest_xsave->region, @@ -30080,7 +30045,7 @@ index 8f0f6ec..9cee69e 100644 sizeof(struct fxregs_state)); *(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)] = XSTATE_FPSSE; -@@ -3196,7 +3198,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu, +@@ -3198,7 +3200,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu, } else { if (xstate_bv & ~XSTATE_FPSSE) return -EINVAL; @@ -30089,7 +30054,7 @@ index 8f0f6ec..9cee69e 100644 guest_xsave->region, sizeof(struct fxregs_state)); } return 0; -@@ -5786,7 +5788,7 @@ static struct notifier_block pvclock_gtod_notifier = { +@@ -5788,7 +5790,7 @@ static struct notifier_block pvclock_gtod_notifier = { }; #endif @@ -30098,7 +30063,7 @@ index 8f0f6ec..9cee69e 100644 { int r; struct kvm_x86_ops *ops = opaque; -@@ -7210,7 +7212,7 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu, +@@ -7212,7 +7214,7 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu, int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) { struct fxregs_state *fxsave = @@ 
-30107,7 +30072,7 @@ index 8f0f6ec..9cee69e 100644 memcpy(fpu->fpr, fxsave->st_space, 128); fpu->fcw = fxsave->cwd; -@@ -7227,7 +7229,7 @@ int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) +@@ -7229,7 +7231,7 @@ int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) { struct fxregs_state *fxsave = @@ -30116,7 +30081,7 @@ index 8f0f6ec..9cee69e 100644 memcpy(fxsave->st_space, fpu->fpr, 128); fxsave->cwd = fpu->fcw; -@@ -7243,9 +7245,9 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) +@@ -7245,9 +7247,9 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) static void fx_init(struct kvm_vcpu *vcpu) { @@ -30128,7 +30093,7 @@ index 8f0f6ec..9cee69e 100644 host_xcr0 | XSTATE_COMPACTION_ENABLED; /* -@@ -7269,7 +7271,7 @@ void kvm_load_guest_fpu(struct kvm_vcpu *vcpu) +@@ -7271,7 +7273,7 @@ void kvm_load_guest_fpu(struct kvm_vcpu *vcpu) kvm_put_guest_xcr0(vcpu); vcpu->guest_fpu_loaded = 1; __kernel_fpu_begin(); @@ -30137,7 +30102,7 @@ index 8f0f6ec..9cee69e 100644 trace_kvm_fpu(1); } -@@ -7547,6 +7549,8 @@ bool kvm_vcpu_compatible(struct kvm_vcpu *vcpu) +@@ -7549,6 +7551,8 @@ bool kvm_vcpu_compatible(struct kvm_vcpu *vcpu) struct static_key kvm_no_apic_vcpu __read_mostly; @@ -30146,7 +30111,7 @@ index 8f0f6ec..9cee69e 100644 int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) { struct page *page; -@@ -7563,11 +7567,14 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) +@@ -7565,11 +7569,14 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) else vcpu->arch.mp_state = KVM_MP_STATE_UNINITIALIZED; @@ -30165,7 +30130,7 @@ index 8f0f6ec..9cee69e 100644 vcpu->arch.pio_data = page_address(page); kvm_set_tsc_khz(vcpu, max_tsc_khz); -@@ -7621,6 +7628,9 @@ fail_mmu_destroy: +@@ -7623,6 +7630,9 @@ fail_mmu_destroy: kvm_mmu_destroy(vcpu); fail_free_pio_data: free_page((unsigned long)vcpu->arch.pio_data); @@ -30175,7 +30140,7 @@ index 8f0f6ec..9cee69e 100644 fail: return r; } -@@ -7638,6 +7648,8 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) +@@ -7640,6 +7650,8 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) free_page((unsigned long)vcpu->arch.pio_data); if (!irqchip_in_kernel(vcpu->kvm)) static_key_slow_dec(&kvm_no_apic_vcpu); @@ -34200,7 +34165,7 @@ index 68aec42..95ad5d3 100644 printk(KERN_INFO "Write protecting the kernel text: %luk\n", size >> 10); diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c -index 3fba623..5ee9802 100644 +index f9977a7..21a5082 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -136,7 +136,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page, @@ -35567,10 +35532,10 @@ index 71e8a67..6a313bb 100644 struct op_counter_config; diff --git a/arch/x86/pci/intel_mid_pci.c b/arch/x86/pci/intel_mid_pci.c -index 2706230..74b4d9f 100644 +index 7553921..d631bd4 100644 --- a/arch/x86/pci/intel_mid_pci.c +++ b/arch/x86/pci/intel_mid_pci.c -@@ -258,7 +258,7 @@ int __init intel_mid_pci_init(void) +@@ -278,7 +278,7 @@ int __init intel_mid_pci_init(void) pci_mmcfg_late_init(); pcibios_enable_irq = intel_mid_pci_irq_enable; pcibios_disable_irq = intel_mid_pci_irq_disable; @@ -35921,91 +35886,6 @@ index 9b83b90..2c256c5 100644 return !(ret & 0xff00); } EXPORT_SYMBOL(pcibios_set_irq_routing); -diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c -index e4308fe..c6835bf 100644 ---- a/arch/x86/platform/efi/efi.c -+++ b/arch/x86/platform/efi/efi.c -@@ -705,6 
+705,70 @@ out: - } - - /* -+ * Iterate the EFI memory map in reverse order because the regions -+ * will be mapped top-down. The end result is the same as if we had -+ * mapped things forward, but doesn't require us to change the -+ * existing implementation of efi_map_region(). -+ */ -+static inline void *efi_map_next_entry_reverse(void *entry) -+{ -+ /* Initial call */ -+ if (!entry) -+ return memmap.map_end - memmap.desc_size; -+ -+ entry -= memmap.desc_size; -+ if (entry < memmap.map) -+ return NULL; -+ -+ return entry; -+} -+ -+/* -+ * efi_map_next_entry - Return the next EFI memory map descriptor -+ * @entry: Previous EFI memory map descriptor -+ * -+ * This is a helper function to iterate over the EFI memory map, which -+ * we do in different orders depending on the current configuration. -+ * -+ * To begin traversing the memory map @entry must be %NULL. -+ * -+ * Returns %NULL when we reach the end of the memory map. -+ */ -+static void *efi_map_next_entry(void *entry) -+{ -+ if (!efi_enabled(EFI_OLD_MEMMAP) && efi_enabled(EFI_64BIT)) { -+ /* -+ * Starting in UEFI v2.5 the EFI_PROPERTIES_TABLE -+ * config table feature requires us to map all entries -+ * in the same order as they appear in the EFI memory -+ * map. That is to say, entry N must have a lower -+ * virtual address than entry N+1. This is because the -+ * firmware toolchain leaves relative references in -+ * the code/data sections, which are split and become -+ * separate EFI memory regions. Mapping things -+ * out-of-order leads to the firmware accessing -+ * unmapped addresses. -+ * -+ * Since we need to map things this way whether or not -+ * the kernel actually makes use of -+ * EFI_PROPERTIES_TABLE, let's just switch to this -+ * scheme by default for 64-bit. -+ */ -+ return efi_map_next_entry_reverse(entry); -+ } -+ -+ /* Initial call */ -+ if (!entry) -+ return memmap.map; -+ -+ entry += memmap.desc_size; -+ if (entry >= memmap.map_end) -+ return NULL; -+ -+ return entry; -+} -+ -+/* - * Map the efi memory ranges of the runtime services and update new_mmap with - * virtual addresses. - */ -@@ -714,7 +778,8 @@ static void * __init efi_map_regions(int *count, int *pg_shift) - unsigned long left = 0; - efi_memory_desc_t *md; - -- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) { -+ p = NULL; -+ while ((p = efi_map_next_entry(p))) { - md = p; - if (!(md->attribute & EFI_MEMORY_RUNTIME)) { - #ifdef CONFIG_X86_64 diff --git a/arch/x86/platform/efi/efi_32.c b/arch/x86/platform/efi/efi_32.c index ed5b673..24d2d53 100644 --- a/arch/x86/platform/efi/efi_32.c @@ -36810,10 +36690,10 @@ index 4841453..d59a203 100644 This is the Linux Xen port. 
Enabling this will allow the kernel to boot in a paravirtualized environment under the diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c -index 11d6fb4..c581662 100644 +index 777ad2f..fa43e03 100644 --- a/arch/x86/xen/enlighten.c +++ b/arch/x86/xen/enlighten.c -@@ -125,8 +125,6 @@ EXPORT_SYMBOL_GPL(xen_start_info); +@@ -129,8 +129,6 @@ EXPORT_SYMBOL_GPL(xen_start_info); struct shared_info xen_dummy_shared_info; @@ -36822,7 +36702,7 @@ index 11d6fb4..c581662 100644 RESERVE_BRK(shared_info_page_brk, PAGE_SIZE); __read_mostly int xen_have_vector_callback; EXPORT_SYMBOL_GPL(xen_have_vector_callback); -@@ -584,8 +582,7 @@ static void xen_load_gdt(const struct desc_ptr *dtr) +@@ -588,8 +586,7 @@ static void xen_load_gdt(const struct desc_ptr *dtr) { unsigned long va = dtr->address; unsigned int size = dtr->size + 1; @@ -36832,7 +36712,7 @@ index 11d6fb4..c581662 100644 int f; /* -@@ -633,8 +630,7 @@ static void __init xen_load_gdt_boot(const struct desc_ptr *dtr) +@@ -637,8 +634,7 @@ static void __init xen_load_gdt_boot(const struct desc_ptr *dtr) { unsigned long va = dtr->address; unsigned int size = dtr->size + 1; @@ -36842,7 +36722,7 @@ index 11d6fb4..c581662 100644 int f; /* -@@ -642,7 +638,7 @@ static void __init xen_load_gdt_boot(const struct desc_ptr *dtr) +@@ -646,7 +642,7 @@ static void __init xen_load_gdt_boot(const struct desc_ptr *dtr) * 8-byte entries, or 16 4k pages.. */ @@ -36851,7 +36731,7 @@ index 11d6fb4..c581662 100644 BUG_ON(va & ~PAGE_MASK); for (f = 0; va < dtr->address + size; va += PAGE_SIZE, f++) { -@@ -1264,30 +1260,30 @@ static const struct pv_apic_ops xen_apic_ops __initconst = { +@@ -1268,30 +1264,30 @@ static const struct pv_apic_ops xen_apic_ops __initconst = { #endif }; @@ -36889,7 +36769,7 @@ index 11d6fb4..c581662 100644 { if (pm_power_off) pm_power_off(); -@@ -1440,8 +1436,11 @@ static void __ref xen_setup_gdt(int cpu) +@@ -1444,8 +1440,11 @@ static void __ref xen_setup_gdt(int cpu) pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot; pv_cpu_ops.load_gdt = xen_load_gdt_boot; @@ -36903,7 +36783,7 @@ index 11d6fb4..c581662 100644 pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry; pv_cpu_ops.load_gdt = xen_load_gdt; -@@ -1557,7 +1556,17 @@ asmlinkage __visible void __init xen_start_kernel(void) +@@ -1561,7 +1560,17 @@ asmlinkage __visible void __init xen_start_kernel(void) __userpte_alloc_gfp &= ~__GFP_HIGHMEM; /* Work out if we support NX */ @@ -36922,7 +36802,7 @@ index 11d6fb4..c581662 100644 /* Get mfn list */ xen_build_dynamic_phys_to_machine(); -@@ -1585,13 +1594,6 @@ asmlinkage __visible void __init xen_start_kernel(void) +@@ -1589,13 +1598,6 @@ asmlinkage __visible void __init xen_start_kernel(void) machine_ops = xen_machine_ops; @@ -37153,20 +37033,6 @@ index d6e5ba3..2bb142c 100644 return ERR_PTR(-EINVAL); nr_pages += end - start; -diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c -index d6283b3..9cc48d1d 100644 ---- a/block/blk-cgroup.c -+++ b/block/blk-cgroup.c -@@ -387,6 +387,9 @@ static void blkg_destroy_all(struct request_queue *q) - blkg_destroy(blkg); - spin_unlock(&blkcg->lock); - } -+ -+ q->root_blkg = NULL; -+ q->root_rl.blkg = NULL; - } - - /* diff --git a/block/blk-iopoll.c b/block/blk-iopoll.c index 0736729..2ec3b48 100644 --- a/block/blk-iopoll.c @@ -39021,23 +38887,19 @@ index 51f15bc..892a668 100644 split_counters(&cnt, &inpr); diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c -index 5799a0b..f7c7a7e 100644 +index c8941f3..f7c7a7e 100644 --- a/drivers/base/regmap/regmap-debugfs.c 
+++ b/drivers/base/regmap/regmap-debugfs.c -@@ -30,10 +30,9 @@ static LIST_HEAD(regmap_debugfs_early_list); +@@ -30,7 +30,7 @@ static LIST_HEAD(regmap_debugfs_early_list); static DEFINE_MUTEX(regmap_debugfs_early_lock); /* Calculate the length of a fixed format */ -static size_t regmap_calc_reg_len(int max_val, char *buf, size_t buf_size) +static size_t regmap_calc_reg_len(int max_val) { -- snprintf(buf, buf_size, "%x", max_val); -- return strlen(buf); -+ return snprintf(NULL, 0, "%x", max_val); + return snprintf(NULL, 0, "%x", max_val); } - - static ssize_t regmap_name_read_file(struct file *file, -@@ -174,8 +173,7 @@ static inline void regmap_calc_tot_len(struct regmap *map, +@@ -173,8 +173,7 @@ static inline void regmap_calc_tot_len(struct regmap *map, { /* Calculate the length of a fixed format */ if (!map->debugfs_tot_len) { @@ -39047,7 +38909,7 @@ index 5799a0b..f7c7a7e 100644 map->debugfs_val_len = 2 * map->format.val_bytes; map->debugfs_tot_len = map->debugfs_reg_len + map->debugfs_val_len + 3; /* : \n */ -@@ -405,7 +403,7 @@ static ssize_t regmap_access_read_file(struct file *file, +@@ -404,7 +403,7 @@ static ssize_t regmap_access_read_file(struct file *file, char __user *user_buf, size_t count, loff_t *ppos) { @@ -39056,7 +38918,7 @@ index 5799a0b..f7c7a7e 100644 size_t buf_pos = 0; loff_t p = 0; ssize_t ret; -@@ -421,7 +419,7 @@ static ssize_t regmap_access_read_file(struct file *file, +@@ -420,7 +419,7 @@ static ssize_t regmap_access_read_file(struct file *file, return -ENOMEM; /* Calculate the length of a fixed format */ @@ -39065,15 +38927,6 @@ index 5799a0b..f7c7a7e 100644 tot_len = reg_len + 10; /* ': R W V P\n' */ for (i = 0; i <= map->max_register; i += map->reg_stride) { -@@ -432,7 +430,7 @@ static ssize_t regmap_access_read_file(struct file *file, - /* If we're in the region the user is trying to read */ - if (p >= *ppos) { - /* ...but not beyond it */ -- if (buf_pos >= count - 1 - tot_len) -+ if (buf_pos + tot_len + 1 >= count) - break; - - /* Format the register */ diff --git a/drivers/base/syscore.c b/drivers/base/syscore.c index 8d98a32..61d3165 100644 --- a/drivers/base/syscore.c @@ -40561,10 +40414,10 @@ index 8f26b52..29f2a3a 100644 clk = clk_register(NULL, &pll_clk->hw.hw); if (WARN_ON(IS_ERR(clk))) { diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c -index 0136dfc..4cc55cb 100644 +index 7c2a738..0b84bd6 100644 --- a/drivers/cpufreq/acpi-cpufreq.c +++ b/drivers/cpufreq/acpi-cpufreq.c -@@ -675,8 +675,11 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy) +@@ -678,8 +678,11 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy) data->acpi_data = per_cpu_ptr(acpi_perf_data, cpu); per_cpu(acfreq_data, cpu) = data; @@ -40578,7 +40431,7 @@ index 0136dfc..4cc55cb 100644 result = acpi_processor_register_performance(data->acpi_data, cpu); if (result) -@@ -810,7 +813,9 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy) +@@ -813,7 +816,9 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy) policy->cur = acpi_cpufreq_guess_freq(data, policy->cpu); break; case ACPI_ADR_SPACE_FIXED_HARDWARE: @@ -40589,7 +40442,7 @@ index 0136dfc..4cc55cb 100644 break; default: break; -@@ -904,8 +909,10 @@ static void __init acpi_cpufreq_boost_init(void) +@@ -907,8 +912,10 @@ static void __init acpi_cpufreq_boost_init(void) if (!msrs) return; @@ -40603,10 +40456,10 @@ index 0136dfc..4cc55cb 100644 cpu_notifier_register_begin(); diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c -index 
528a82bf..78dc025 100644 +index 99a4065..f97236c 100644 --- a/drivers/cpufreq/cpufreq-dt.c +++ b/drivers/cpufreq/cpufreq-dt.c -@@ -392,7 +392,9 @@ static int dt_cpufreq_probe(struct platform_device *pdev) +@@ -393,7 +393,9 @@ static int dt_cpufreq_probe(struct platform_device *pdev) if (!IS_ERR(cpu_reg)) regulator_put(cpu_reg); @@ -41466,130 +41319,6 @@ index 756eca8..2336d08 100644 int error; /* new_var */ -diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c -index e29560e..950c87f 100644 ---- a/drivers/firmware/efi/libstub/arm-stub.c -+++ b/drivers/firmware/efi/libstub/arm-stub.c -@@ -13,6 +13,7 @@ - */ - - #include <linux/efi.h> -+#include <linux/sort.h> - #include <asm/efi.h> - - #include "efistub.h" -@@ -305,6 +306,44 @@ fail: - */ - #define EFI_RT_VIRTUAL_BASE 0x40000000 - -+static int cmp_mem_desc(const void *l, const void *r) -+{ -+ const efi_memory_desc_t *left = l, *right = r; -+ -+ return (left->phys_addr > right->phys_addr) ? 1 : -1; -+} -+ -+/* -+ * Returns whether region @left ends exactly where region @right starts, -+ * or false if either argument is NULL. -+ */ -+static bool regions_are_adjacent(efi_memory_desc_t *left, -+ efi_memory_desc_t *right) -+{ -+ u64 left_end; -+ -+ if (left == NULL || right == NULL) -+ return false; -+ -+ left_end = left->phys_addr + left->num_pages * EFI_PAGE_SIZE; -+ -+ return left_end == right->phys_addr; -+} -+ -+/* -+ * Returns whether region @left and region @right have compatible memory type -+ * mapping attributes, and are both EFI_MEMORY_RUNTIME regions. -+ */ -+static bool regions_have_compatible_memory_type_attrs(efi_memory_desc_t *left, -+ efi_memory_desc_t *right) -+{ -+ static const u64 mem_type_mask = EFI_MEMORY_WB | EFI_MEMORY_WT | -+ EFI_MEMORY_WC | EFI_MEMORY_UC | -+ EFI_MEMORY_RUNTIME; -+ -+ return ((left->attribute ^ right->attribute) & mem_type_mask) == 0; -+} -+ - /* - * efi_get_virtmap() - create a virtual mapping for the EFI memory map - * -@@ -317,33 +356,52 @@ void efi_get_virtmap(efi_memory_desc_t *memory_map, unsigned long map_size, - int *count) - { - u64 efi_virt_base = EFI_RT_VIRTUAL_BASE; -- efi_memory_desc_t *out = runtime_map; -+ efi_memory_desc_t *in, *prev = NULL, *out = runtime_map; - int l; - -- for (l = 0; l < map_size; l += desc_size) { -- efi_memory_desc_t *in = (void *)memory_map + l; -+ /* -+ * To work around potential issues with the Properties Table feature -+ * introduced in UEFI 2.5, which may split PE/COFF executable images -+ * in memory into several RuntimeServicesCode and RuntimeServicesData -+ * regions, we need to preserve the relative offsets between adjacent -+ * EFI_MEMORY_RUNTIME regions with the same memory type attributes. -+ * The easiest way to find adjacent regions is to sort the memory map -+ * before traversing it. -+ */ -+ sort(memory_map, map_size / desc_size, desc_size, cmp_mem_desc, NULL); -+ -+ for (l = 0; l < map_size; l += desc_size, prev = in) { - u64 paddr, size; - -+ in = (void *)memory_map + l; - if (!(in->attribute & EFI_MEMORY_RUNTIME)) - continue; - -+ paddr = in->phys_addr; -+ size = in->num_pages * EFI_PAGE_SIZE; -+ - /* - * Make the mapping compatible with 64k pages: this allows - * a 4k page size kernel to kexec a 64k page size kernel and - * vice versa. 
- */ -- paddr = round_down(in->phys_addr, SZ_64K); -- size = round_up(in->num_pages * EFI_PAGE_SIZE + -- in->phys_addr - paddr, SZ_64K); -+ if (!regions_are_adjacent(prev, in) || -+ !regions_have_compatible_memory_type_attrs(prev, in)) { - -- /* -- * Avoid wasting memory on PTEs by choosing a virtual base that -- * is compatible with section mappings if this region has the -- * appropriate size and physical alignment. (Sections are 2 MB -- * on 4k granule kernels) -- */ -- if (IS_ALIGNED(in->phys_addr, SZ_2M) && size >= SZ_2M) -- efi_virt_base = round_up(efi_virt_base, SZ_2M); -+ paddr = round_down(in->phys_addr, SZ_64K); -+ size += in->phys_addr - paddr; -+ -+ /* -+ * Avoid wasting memory on PTEs by choosing a virtual -+ * base that is compatible with section mappings if this -+ * region has the appropriate size and physical -+ * alignment. (Sections are 2 MB on 4k granule kernels) -+ */ -+ if (IS_ALIGNED(in->phys_addr, SZ_2M) && size >= SZ_2M) -+ efi_virt_base = round_up(efi_virt_base, SZ_2M); -+ else -+ efi_virt_base = round_up(efi_virt_base, SZ_64K); -+ } - - in->virt_addr = efi_virt_base + in->phys_addr - paddr; - efi_virt_base += size; diff --git a/drivers/firmware/efi/runtime-map.c b/drivers/firmware/efi/runtime-map.c index 5c55227..97f4978 100644 --- a/drivers/firmware/efi/runtime-map.c @@ -42662,16 +42391,19 @@ index b1d303f..c59012c 100644 int retcode = -EINVAL; char stack_kdata[128]; diff --git a/drivers/gpu/drm/drm_lock.c b/drivers/gpu/drm/drm_lock.c -index f861361..b61d4c7 100644 +index 4924d381..fd3b5ee 100644 --- a/drivers/gpu/drm/drm_lock.c +++ b/drivers/gpu/drm/drm_lock.c -@@ -61,9 +61,12 @@ int drm_legacy_lock(struct drm_device *dev, void *data, +@@ -61,12 +61,15 @@ int drm_legacy_lock(struct drm_device *dev, void *data, struct drm_master *master = file_priv->master; int ret = 0; + if (!drm_core_check_feature(dev, DRIVER_KMS_LEGACY_CONTEXT)) + return -EINVAL; + + if (drm_core_check_feature(dev, DRIVER_MODESET)) + return -EINVAL; + ++file_priv->lock_count; - if (lock->context == DRM_KERNEL_CONTEXT) { @@ -42679,17 +42411,17 @@ index f861361..b61d4c7 100644 DRM_ERROR("Process %d using kernel context %d\n", task_pid_nr(current), lock->context); return -EINVAL; -@@ -153,12 +156,23 @@ int drm_legacy_unlock(struct drm_device *dev, void *data, struct drm_file *file_ +@@ -156,6 +159,9 @@ int drm_legacy_unlock(struct drm_device *dev, void *data, struct drm_file *file_ struct drm_lock *lock = data; struct drm_master *master = file_priv->master; -- if (lock->context == DRM_KERNEL_CONTEXT) { + if (!drm_core_check_feature(dev, DRIVER_KMS_LEGACY_CONTEXT)) + return -EINVAL; + -+ if (_DRM_LOCKING_CONTEXT(lock->context) == DRM_KERNEL_CONTEXT) { - DRM_ERROR("Process %d using kernel context %d\n", - task_pid_nr(current), lock->context); + if (drm_core_check_feature(dev, DRIVER_MODESET)) + return -EINVAL; + +@@ -165,6 +171,14 @@ int drm_legacy_unlock(struct drm_device *dev, void *data, struct drm_file *file_ return -EINVAL; } @@ -44653,10 +44385,10 @@ index 37f0170..414ec2c 100644 int i, j, count; diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c -index bd1c99d..2fa55ad 100644 +index 2aaedbe..e944f14 100644 --- a/drivers/hwmon/nct6775.c +++ b/drivers/hwmon/nct6775.c -@@ -953,10 +953,10 @@ static struct attribute_group * +@@ -957,10 +957,10 @@ static struct attribute_group * nct6775_create_attr_group(struct device *dev, struct sensor_template_group *tg, int repeat) { @@ -47284,7 +47016,7 @@ index 79a6d63..47acff6 100644 cl->fn = fn; cl->wq = wq; diff --git a/drivers/md/bitmap.c 
b/drivers/md/bitmap.c -index e51de52..c52ff17 100644 +index 48b5890..b0af0ca 100644 --- a/drivers/md/bitmap.c +++ b/drivers/md/bitmap.c @@ -1933,7 +1933,7 @@ void bitmap_status(struct seq_file *seq, struct bitmap *bitmap) @@ -47496,7 +47228,7 @@ index 6ba47cf..a870ba2 100644 pmd->bl_info.value_type.inc = data_block_inc; pmd->bl_info.value_type.dec = data_block_dec; diff --git a/drivers/md/dm.c b/drivers/md/dm.c -index 0d7ab20..350d006 100644 +index 3e32f4e..01e0a7f 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -194,9 +194,9 @@ struct mapped_device { @@ -47531,7 +47263,7 @@ index 0d7ab20..350d006 100644 wake_up(&md->eventq); } -@@ -3481,18 +3481,18 @@ int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action, +@@ -3479,18 +3479,18 @@ int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action, uint32_t dm_next_uevent_seq(struct mapped_device *md) { @@ -48249,6 +47981,18 @@ index c9388c4..ce71ece 100644 .release = mxr_vp_layer_release, .buffer_set = mxr_vp_buffer_set, .stream_set = mxr_vp_stream_set, +diff --git a/drivers/media/platform/vivid/vivid-osd.c b/drivers/media/platform/vivid/vivid-osd.c +index 084d346..e15eef6 100644 +--- a/drivers/media/platform/vivid/vivid-osd.c ++++ b/drivers/media/platform/vivid/vivid-osd.c +@@ -85,6 +85,7 @@ static int vivid_fb_ioctl(struct fb_info *info, unsigned cmd, unsigned long arg) + case FBIOGET_VBLANK: { + struct fb_vblank vblank; + ++ memset(&vblank, 0, sizeof(vblank)); + vblank.flags = FB_VBLANK_HAVE_COUNT | FB_VBLANK_HAVE_VCOUNT | + FB_VBLANK_HAVE_VSYNC; + vblank.count = 0; diff --git a/drivers/media/radio/radio-cadet.c b/drivers/media/radio/radio-cadet.c index 82affae..42833ec 100644 --- a/drivers/media/radio/radio-cadet.c @@ -51948,10 +51692,10 @@ index e508c65..fb0dbae 100644 const struct ce_attr *attr) { diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c -index 85bfa2a..3f6e72c 100644 +index 32d9ff1..0952b33 100644 --- a/drivers/net/wireless/ath/ath10k/htc.c +++ b/drivers/net/wireless/ath/ath10k/htc.c -@@ -839,7 +839,10 @@ int ath10k_htc_start(struct ath10k_htc *htc) +@@ -841,7 +841,10 @@ int ath10k_htc_start(struct ath10k_htc *htc) /* registered target arrival callback from the HIF layer */ int ath10k_htc_init(struct ath10k *ar) { @@ -51963,7 +51707,7 @@ index 85bfa2a..3f6e72c 100644 struct ath10k_htc_ep *ep = NULL; struct ath10k_htc *htc = &ar->htc; -@@ -848,8 +851,6 @@ int ath10k_htc_init(struct ath10k *ar) +@@ -850,8 +853,6 @@ int ath10k_htc_init(struct ath10k *ar) ath10k_htc_reset_endpoint_states(htc); /* setup HIF layer callbacks */ @@ -53912,10 +53656,10 @@ index 302e626..12579af 100644 da->attr.name = info->pin_config[i].name; da->attr.mode = 0644; diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c -index 78387a6..faffdc7 100644 +index 5081533..794deb2 100644 --- a/drivers/regulator/core.c +++ b/drivers/regulator/core.c -@@ -3646,7 +3646,7 @@ regulator_register(const struct regulator_desc *regulator_desc, +@@ -3650,7 +3650,7 @@ regulator_register(const struct regulator_desc *regulator_desc, const struct regulation_constraints *constraints = NULL; const struct regulator_init_data *init_data; struct regulator_config *config = NULL; @@ -53924,7 +53668,7 @@ index 78387a6..faffdc7 100644 struct regulator_dev *rdev; struct device *dev; int ret, i; -@@ -3729,7 +3729,7 @@ regulator_register(const struct regulator_desc *regulator_desc, +@@ -3733,7 +3733,7 @@ regulator_register(const struct regulator_desc *regulator_desc, rdev->dev.class = ®ulator_class; 
rdev->dev.parent = dev; dev_set_name(&rdev->dev, "regulator.%lu", @@ -54318,7 +54062,7 @@ index 8bb173e..20236b4 100644 /* These three are default values which can be overridden */ diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c -index 1dafeb4..3da5095 100644 +index cab4e98..31323f6 100644 --- a/drivers/scsi/hpsa.c +++ b/drivers/scsi/hpsa.c @@ -793,10 +793,10 @@ static inline u32 next_command(struct ctlr_info *h, u8 q) @@ -54343,7 +54087,7 @@ index 1dafeb4..3da5095 100644 } } -@@ -6340,17 +6340,17 @@ static void __iomem *remap_pci_mem(ulong base, ulong size) +@@ -6343,17 +6343,17 @@ static void __iomem *remap_pci_mem(ulong base, ulong size) static inline unsigned long get_next_completion(struct ctlr_info *h, u8 q) { @@ -54364,7 +54108,7 @@ index 1dafeb4..3da5095 100644 (h->interrupts_enabled == 0); } -@@ -7288,7 +7288,7 @@ static int hpsa_pci_init(struct ctlr_info *h) +@@ -7291,7 +7291,7 @@ static int hpsa_pci_init(struct ctlr_info *h) if (prod_index < 0) return prod_index; h->product_name = products[prod_index].product_name; @@ -54373,7 +54117,7 @@ index 1dafeb4..3da5095 100644 h->needs_abort_tags_swizzled = ctlr_needs_abort_tags_swizzled(h->board_id); -@@ -7687,7 +7687,7 @@ static void controller_lockup_detected(struct ctlr_info *h) +@@ -7690,7 +7690,7 @@ static void controller_lockup_detected(struct ctlr_info *h) unsigned long flags; u32 lockup_detected; @@ -54382,7 +54126,7 @@ index 1dafeb4..3da5095 100644 spin_lock_irqsave(&h->lock, flags); lockup_detected = readl(h->vaddr + SA5_SCRATCHPAD_OFFSET); if (!lockup_detected) { -@@ -7970,7 +7970,7 @@ reinit_after_soft_reset: +@@ -7973,7 +7973,7 @@ reinit_after_soft_reset: } /* make sure the board interrupts are off */ @@ -54391,7 +54135,7 @@ index 1dafeb4..3da5095 100644 rc = hpsa_request_irqs(h, do_hpsa_intr_msi, do_hpsa_intr_intx); if (rc) -@@ -8029,7 +8029,7 @@ reinit_after_soft_reset: +@@ -8032,7 +8032,7 @@ reinit_after_soft_reset: * fake ones to scoop up any residual completions. 
*/ spin_lock_irqsave(&h->lock, flags); @@ -54400,7 +54144,7 @@ index 1dafeb4..3da5095 100644 spin_unlock_irqrestore(&h->lock, flags); hpsa_free_irqs(h); rc = hpsa_request_irqs(h, hpsa_msix_discard_completions, -@@ -8059,9 +8059,9 @@ reinit_after_soft_reset: +@@ -8062,9 +8062,9 @@ reinit_after_soft_reset: dev_info(&h->pdev->dev, "Board READY.\n"); dev_info(&h->pdev->dev, "Waiting for stale completions to drain.\n"); @@ -54412,7 +54156,7 @@ index 1dafeb4..3da5095 100644 rc = controller_reset_failed(h->cfgtable); if (rc) -@@ -8086,7 +8086,7 @@ reinit_after_soft_reset: +@@ -8089,7 +8089,7 @@ reinit_after_soft_reset: /* Turn the interrupts on so we can service requests */ @@ -54421,7 +54165,7 @@ index 1dafeb4..3da5095 100644 hpsa_hba_inquiry(h); -@@ -8104,7 +8104,7 @@ clean9: /* wq, sh, perf, sg, cmd, irq, shost, pci, lu, aer/h */ +@@ -8107,7 +8107,7 @@ clean9: /* wq, sh, perf, sg, cmd, irq, shost, pci, lu, aer/h */ kfree(h->hba_inquiry_data); clean7: /* perf, sg, cmd, irq, shost, pci, lu, aer/h */ hpsa_free_performant_mode(h); @@ -54430,7 +54174,7 @@ index 1dafeb4..3da5095 100644 clean6: /* sg, cmd, irq, pci, lockup, wq/aer/h */ hpsa_free_sg_chain_blocks(h); clean5: /* cmd, irq, shost, pci, lu, aer/h */ -@@ -8174,7 +8174,7 @@ static void hpsa_shutdown(struct pci_dev *pdev) +@@ -8177,7 +8177,7 @@ static void hpsa_shutdown(struct pci_dev *pdev) * To write all data in the battery backed cache to disks */ hpsa_flush_cache(h); @@ -54439,7 +54183,7 @@ index 1dafeb4..3da5095 100644 hpsa_free_irqs(h); /* init_one 4 */ hpsa_disable_interrupt_mode(h); /* pci_init 2 */ } -@@ -8306,7 +8306,7 @@ static int hpsa_enter_performant_mode(struct ctlr_info *h, u32 trans_support) +@@ -8309,7 +8309,7 @@ static int hpsa_enter_performant_mode(struct ctlr_info *h, u32 trans_support) CFGTBL_Trans_enable_directed_msix | (trans_support & (CFGTBL_Trans_io_accel1 | CFGTBL_Trans_io_accel2)); @@ -54448,7 +54192,7 @@ index 1dafeb4..3da5095 100644 /* This is a bit complicated. There are 8 registers on * the controller which we write to to tell it 8 different -@@ -8348,7 +8348,7 @@ static int hpsa_enter_performant_mode(struct ctlr_info *h, u32 trans_support) +@@ -8351,7 +8351,7 @@ static int hpsa_enter_performant_mode(struct ctlr_info *h, u32 trans_support) * perform the superfluous readl() after each command submission. */ if (trans_support & (CFGTBL_Trans_io_accel1 | CFGTBL_Trans_io_accel2)) @@ -54457,7 +54201,7 @@ index 1dafeb4..3da5095 100644 /* Controller spec: zero out this buffer. 
*/ for (i = 0; i < h->nreply_queues; i++) -@@ -8378,12 +8378,12 @@ static int hpsa_enter_performant_mode(struct ctlr_info *h, u32 trans_support) +@@ -8381,12 +8381,12 @@ static int hpsa_enter_performant_mode(struct ctlr_info *h, u32 trans_support) * enable outbound interrupt coalescing in accelerator mode; */ if (trans_support & CFGTBL_Trans_io_accel1) { @@ -55393,10 +55137,10 @@ index c0d660f..24a5854 100644 .read = fuse_read, }; diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c -index cf8b91b..a13d434 100644 +index 9ce2f15..1ff9b36 100644 --- a/drivers/spi/spi.c +++ b/drivers/spi/spi.c -@@ -2216,7 +2216,7 @@ int spi_bus_unlock(struct spi_master *master) +@@ -2215,7 +2215,7 @@ int spi_bus_unlock(struct spi_master *master) EXPORT_SYMBOL_GPL(spi_bus_unlock); /* portable code must never pass more than 32 bytes */ @@ -55469,6 +55213,18 @@ index 985d94b..49c59fb 100644 return 0; } +diff --git a/drivers/staging/dgnc/dgnc_mgmt.c b/drivers/staging/dgnc/dgnc_mgmt.c +index b13318a..883e2a8 100644 +--- a/drivers/staging/dgnc/dgnc_mgmt.c ++++ b/drivers/staging/dgnc/dgnc_mgmt.c +@@ -115,6 +115,7 @@ long dgnc_mgmt_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + + spin_lock_irqsave(&dgnc_global_lock, flags); + ++ memset(&ddi, 0, sizeof(ddi)); + ddi.dinfo_nboards = dgnc_NumBoards; + sprintf(ddi.dinfo_version, "%s", DG_PART); + diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c index 9cc8141..ffd5039 100644 --- a/drivers/staging/fbtft/fbtft-core.c @@ -55878,10 +55634,10 @@ index 0edf320..49afe95 100644 login->tgt_agt = sbp_target_agent_register(login); if (IS_ERR(login->tgt_agt)) { diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c -index 09e682b..1980042 100644 +index 8f1cd19..ba7a8f1 100644 --- a/drivers/target/target_core_device.c +++ b/drivers/target/target_core_device.c -@@ -771,7 +771,7 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name) +@@ -772,7 +772,7 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name) spin_lock_init(&dev->se_tmr_lock); spin_lock_init(&dev->qf_cmd_lock); sema_init(&dev->caw_sem, 1); @@ -55904,10 +55660,10 @@ index ce8574b..98d6199 100644 cmd->se_ordered_id, cmd->sam_task_attr, dev->transport->name); diff --git a/drivers/thermal/cpu_cooling.c b/drivers/thermal/cpu_cooling.c -index 620dcd4..b91b5e0 100644 +index 42c6f71..1c64309 100644 --- a/drivers/thermal/cpu_cooling.c +++ b/drivers/thermal/cpu_cooling.c -@@ -831,10 +831,11 @@ __cpufreq_cooling_register(struct device_node *np, +@@ -838,10 +838,11 @@ __cpufreq_cooling_register(struct device_node *np, cpumask_copy(&cpufreq_dev->allowed_cpus, clip_cpus); if (capacitance) { @@ -56448,53 +56204,10 @@ index 382d3fc..b16d625 100644 dlci->modem_rx = 0; diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c -index ee8bfac..b605d4b 100644 +index afc1879..b605d4b 100644 --- a/drivers/tty/n_tty.c +++ b/drivers/tty/n_tty.c -@@ -343,8 +343,7 @@ static void n_tty_packet_mode_flush(struct tty_struct *tty) - spin_lock_irqsave(&tty->ctrl_lock, flags); - tty->ctrl_status |= TIOCPKT_FLUSHREAD; - spin_unlock_irqrestore(&tty->ctrl_lock, flags); -- if (waitqueue_active(&tty->link->read_wait)) -- wake_up_interruptible(&tty->link->read_wait); -+ wake_up_interruptible(&tty->link->read_wait); - } - } - -@@ -1382,8 +1381,7 @@ handle_newline: - put_tty_queue(c, ldata); - smp_store_release(&ldata->canon_head, ldata->read_head); - kill_fasync(&tty->fasync, SIGIO, POLL_IN); -- if (waitqueue_active(&tty->read_wait)) -- 
wake_up_interruptible_poll(&tty->read_wait, POLLIN); -+ wake_up_interruptible_poll(&tty->read_wait, POLLIN); - return 0; - } - } -@@ -1667,8 +1665,7 @@ static void __receive_buf(struct tty_struct *tty, const unsigned char *cp, - - if ((read_cnt(ldata) >= ldata->minimum_to_wake) || L_EXTPROC(tty)) { - kill_fasync(&tty->fasync, SIGIO, POLL_IN); -- if (waitqueue_active(&tty->read_wait)) -- wake_up_interruptible_poll(&tty->read_wait, POLLIN); -+ wake_up_interruptible_poll(&tty->read_wait, POLLIN); - } - } - -@@ -1887,10 +1884,8 @@ static void n_tty_set_termios(struct tty_struct *tty, struct ktermios *old) - } - - /* The termios change make the tty ready for I/O */ -- if (waitqueue_active(&tty->write_wait)) -- wake_up_interruptible(&tty->write_wait); -- if (waitqueue_active(&tty->read_wait)) -- wake_up_interruptible(&tty->read_wait); -+ wake_up_interruptible(&tty->write_wait); -+ wake_up_interruptible(&tty->read_wait); - } - - /** -@@ -2579,6 +2574,7 @@ void n_tty_inherit_ops(struct tty_ldisc_ops *ops) +@@ -2574,6 +2574,7 @@ void n_tty_inherit_ops(struct tty_ldisc_ops *ops) { *ops = tty_ldisc_N_TTY; ops->owner = NULL; @@ -56551,10 +56264,10 @@ index c8dd8dc..dca6cfd 100644 clear_bit((info->aiop * 8) + info->chan, (void *) &xmit_flags[info->board]); spin_unlock_irqrestore(&info->port.lock, flags); diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c -index 37fff12..1a88ae1 100644 +index c35d96e..f05d689 100644 --- a/drivers/tty/serial/8250/8250_core.c +++ b/drivers/tty/serial/8250/8250_core.c -@@ -3229,9 +3229,9 @@ static void univ8250_release_port(struct uart_port *port) +@@ -3237,9 +3237,9 @@ static void univ8250_release_port(struct uart_port *port) static void univ8250_rsa_support(struct uart_ops *ops) { @@ -56567,7 +56280,7 @@ index 37fff12..1a88ae1 100644 } #else -@@ -3274,8 +3274,10 @@ static void __init serial8250_isa_init_ports(void) +@@ -3282,8 +3282,10 @@ static void __init serial8250_isa_init_ports(void) } /* chain base port ops to support Remote Supervisor Adapter */ @@ -57334,69 +57047,10 @@ index 4cf263d..fd011fa 100644 if (next == NULL) { check_other_closed(tty); diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c -index 57fc6ee..62fa290 100644 +index 774df35..62fa290 100644 --- a/drivers/tty/tty_io.c +++ b/drivers/tty/tty_io.c -@@ -2136,8 +2136,24 @@ retry_open: - if (!noctty && - current->signal->leader && - !current->signal->tty && -- tty->session == NULL) -- __proc_set_tty(tty); -+ tty->session == NULL) { -+ /* -+ * Don't let a process that only has write access to the tty -+ * obtain the privileges associated with having a tty as -+ * controlling terminal (being able to reopen it with full -+ * access through /dev/tty, being able to perform pushback). -+ * Many distributions set the group of all ttys to "tty" and -+ * grant write-only access to all terminals for setgid tty -+ * binaries, which should not imply full privileges on all ttys. -+ * -+ * This could theoretically break old code that performs open() -+ * on a write-only file descriptor. In that case, it might be -+ * necessary to also permit this if -+ * inode_permission(inode, MAY_READ) == 0. 
-+ */ -+ if (filp->f_mode & FMODE_READ) -+ __proc_set_tty(tty); -+ } - spin_unlock_irq(¤t->sighand->siglock); - read_unlock(&tasklist_lock); - tty_unlock(tty); -@@ -2426,7 +2442,7 @@ static int fionbio(struct file *file, int __user *p) - * Takes ->siglock() when updating signal->tty - */ - --static int tiocsctty(struct tty_struct *tty, int arg) -+static int tiocsctty(struct tty_struct *tty, struct file *file, int arg) - { - int ret = 0; - -@@ -2460,6 +2476,13 @@ static int tiocsctty(struct tty_struct *tty, int arg) - goto unlock; - } - } -+ -+ /* See the comment in tty_open(). */ -+ if ((file->f_mode & FMODE_READ) == 0 && !capable(CAP_SYS_ADMIN)) { -+ ret = -EPERM; -+ goto unlock; -+ } -+ - proc_set_tty(tty); - unlock: - read_unlock(&tasklist_lock); -@@ -2852,7 +2875,7 @@ long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg) - no_tty(); - return 0; - case TIOCSCTTY: -- return tiocsctty(tty, arg); -+ return tiocsctty(tty, file, arg); - case TIOCGPGRP: - return tiocgpgrp(tty, real_tty, p); - case TIOCSPGRP: -@@ -3501,7 +3524,7 @@ EXPORT_SYMBOL(tty_devnum); +@@ -3524,7 +3524,7 @@ EXPORT_SYMBOL(tty_devnum); void tty_default_fops(struct file_operations *fops) { @@ -58220,7 +57874,7 @@ index a7de8e8..e1ef134 100644 spin_lock_init(&uhci->lock); setup_timer(&uhci->fsbr_timer, uhci_fsbr_timeout, diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c -index 5590eac..16d71c5 100644 +index c79d336..8fe41af 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -30,7 +30,7 @@ @@ -58233,10 +57887,10 @@ index 5590eac..16d71c5 100644 /* Device for a quirk */ #define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73 diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c -index 526ebc0..fa8f325 100644 +index d7b9f484..8208965 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c -@@ -4834,7 +4834,7 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks) +@@ -4837,7 +4837,7 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks) int retval; /* Accept arbitrarily long scatter-gather lists */ @@ -75569,7 +75223,7 @@ index d3634bf..10fc244 100644 for (i = 0; i < numnote; i++) sz += notesize(notes + i); diff --git a/fs/block_dev.c b/fs/block_dev.c -index 1982437..dc80c28 100644 +index 1170f8c..2a8acc1 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -738,7 +738,7 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole, @@ -76111,66 +75765,6 @@ index 3f50cee..7741620 100644 scanned = true; } server = cifs_sb_master_tcon(cifs_sb)->ses->server; -diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c -index f621b44..6b66dd5 100644 ---- a/fs/cifs/inode.c -+++ b/fs/cifs/inode.c -@@ -2034,7 +2034,6 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs, - struct tcon_link *tlink = NULL; - struct cifs_tcon *tcon = NULL; - struct TCP_Server_Info *server; -- struct cifs_io_parms io_parms; - - /* - * To avoid spurious oplock breaks from server, in the case of -@@ -2056,18 +2055,6 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs, - rc = -ENOSYS; - cifsFileInfo_put(open_file); - cifs_dbg(FYI, "SetFSize for attrs rc = %d\n", rc); -- if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) { -- unsigned int bytes_written; -- -- io_parms.netfid = open_file->fid.netfid; -- io_parms.pid = open_file->pid; -- io_parms.tcon = tcon; -- io_parms.offset = 0; -- io_parms.length = attrs->ia_size; -- rc = CIFSSMBWrite(xid, &io_parms, &bytes_written, -- NULL, NULL, 1); -- cifs_dbg(FYI, "Wrt seteof rc %d\n", rc); -- } 
- } else - rc = -EINVAL; - -@@ -2093,28 +2080,7 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs, - else - rc = -ENOSYS; - cifs_dbg(FYI, "SetEOF by path (setattrs) rc = %d\n", rc); -- if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) { -- __u16 netfid; -- int oplock = 0; - -- rc = SMBLegacyOpen(xid, tcon, full_path, FILE_OPEN, -- GENERIC_WRITE, CREATE_NOT_DIR, &netfid, -- &oplock, NULL, cifs_sb->local_nls, -- cifs_remap(cifs_sb)); -- if (rc == 0) { -- unsigned int bytes_written; -- -- io_parms.netfid = netfid; -- io_parms.pid = current->tgid; -- io_parms.tcon = tcon; -- io_parms.offset = 0; -- io_parms.length = attrs->ia_size; -- rc = CIFSSMBWrite(xid, &io_parms, &bytes_written, NULL, -- NULL, 1); -- cifs_dbg(FYI, "wrt seteof rc %d\n", rc); -- CIFSSMBClose(xid, tcon, netfid); -- } -- } - if (tlink) - cifs_put_tlink(tlink); - diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c index 8442b8b..ea6986f 100644 --- a/fs/cifs/misc.c @@ -76303,10 +75897,10 @@ index fc537c2..47d654c 100644 } diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c -index df91bcf..c499de7 100644 +index 18da19f..38a3a79 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c -@@ -418,8 +418,8 @@ smb2_clear_stats(struct cifs_tcon *tcon) +@@ -422,8 +422,8 @@ smb2_clear_stats(struct cifs_tcon *tcon) #ifdef CONFIG_CIFS_STATS int i; for (i = 0; i < NUMBER_OF_SMB2_COMMANDS; i++) { @@ -76317,7 +75911,7 @@ index df91bcf..c499de7 100644 } #endif } -@@ -459,65 +459,65 @@ static void +@@ -463,65 +463,65 @@ static void smb2_print_stats(struct seq_file *m, struct cifs_tcon *tcon) { #ifdef CONFIG_CIFS_STATS @@ -76424,10 +76018,10 @@ index df91bcf..c499de7 100644 } diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c -index b8b4f08..6e84a23 100644 +index 60dd831..42f911c 100644 --- a/fs/cifs/smb2pdu.c +++ b/fs/cifs/smb2pdu.c -@@ -2206,8 +2206,7 @@ SMB2_query_directory(const unsigned int xid, struct cifs_tcon *tcon, +@@ -2252,8 +2252,7 @@ SMB2_query_directory(const unsigned int xid, struct cifs_tcon *tcon, default: cifs_dbg(VFS, "info level %u isn't supported\n", srch_inf->info_level); @@ -76794,7 +76388,7 @@ index a8f7564..3dde349 100644 return 0; while (nr) { diff --git a/fs/dcache.c b/fs/dcache.c -index 9b5fe50..8e7901e 100644 +index e3b44ca..e0d94f1 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -545,7 +545,7 @@ static void __dentry_kill(struct dentry *dentry) @@ -76987,7 +76581,7 @@ index 9b5fe50..8e7901e 100644 if (!spin_trylock(&inode->i_lock)) { spin_unlock(&dentry->d_lock); cpu_relax(); -@@ -3337,7 +3340,7 @@ static enum d_walk_ret d_genocide_kill(void *data, struct dentry *dentry) +@@ -3344,7 +3347,7 @@ static enum d_walk_ret d_genocide_kill(void *data, struct dentry *dentry) if (!(dentry->d_flags & DCACHE_GENOCIDE)) { dentry->d_flags |= DCACHE_GENOCIDE; @@ -76996,7 +76590,7 @@ index 9b5fe50..8e7901e 100644 } } return D_WALK_CONTINUE; -@@ -3445,7 +3448,8 @@ void __init vfs_caches_init_early(void) +@@ -3452,7 +3455,8 @@ void __init vfs_caches_init_early(void) void __init vfs_caches_init(void) { names_cachep = kmem_cache_create("names_cache", PATH_MAX, 0, @@ -80430,7 +80024,7 @@ index 14db05d..687f6d8 100644 #define MNT_NS_INTERNAL ERR_PTR(-EINVAL) /* distinct from any mnt_namespace */ diff --git a/fs/namei.c b/fs/namei.c -index 1c2105e..e54c8ab 100644 +index 36df481..c3045fd 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -336,17 +336,32 @@ int generic_permission(struct inode *inode, int mask) @@ -80535,7 +80129,7 @@ index 1c2105e..e54c8ab 100644 } static int __nd_alloc_stack(struct nameidata *nd) -@@ -557,11 +593,36 @@ static int 
__nd_alloc_stack(struct nameidata *nd) +@@ -557,9 +593,29 @@ static int __nd_alloc_stack(struct nameidata *nd) } memcpy(p, nd->internal, sizeof(nd->internal)); nd->stack = p; @@ -80562,6 +80156,11 @@ index 1c2105e..e54c8ab 100644 +} +#endif + + /** + * path_connected - Verify that a path->dentry is below path->mnt.mnt_root + * @path: nameidate to verify +@@ -580,6 +636,11 @@ static bool path_connected(const struct path *path) + static inline int nd_alloc_stack(struct nameidata *nd) { +#ifdef CONFIG_GRKERNSEC_SYMLINKOWN @@ -80572,7 +80171,7 @@ index 1c2105e..e54c8ab 100644 if (likely(nd->depth != EMBEDDED_LEVELS)) return 0; if (likely(nd->stack != nd->internal)) -@@ -590,6 +651,14 @@ static void terminate_walk(struct nameidata *nd) +@@ -608,6 +669,14 @@ static void terminate_walk(struct nameidata *nd) path_put(&nd->path); for (i = 0; i < nd->depth; i++) path_put(&nd->stack[i].link); @@ -80587,7 +80186,7 @@ index 1c2105e..e54c8ab 100644 if (nd->root.mnt && !(nd->flags & LOOKUP_ROOT)) { path_put(&nd->root); nd->root.mnt = NULL; -@@ -986,6 +1055,9 @@ const char *get_link(struct nameidata *nd) +@@ -1004,6 +1073,9 @@ const char *get_link(struct nameidata *nd) if (unlikely(error)) return ERR_PTR(error); @@ -80597,29 +80196,7 @@ index 1c2105e..e54c8ab 100644 nd->last_type = LAST_BIND; res = inode->i_link; if (!res) { -@@ -1535,8 +1607,6 @@ static int lookup_fast(struct nameidata *nd, - negative = d_is_negative(dentry); - if (read_seqcount_retry(&dentry->d_seq, seq)) - return -ECHILD; -- if (negative) -- return -ENOENT; - - /* - * This sequence count validates that the parent had no -@@ -1557,6 +1627,12 @@ static int lookup_fast(struct nameidata *nd, - goto unlazy; - } - } -+ /* -+ * Note: do negative dentry check after revalidation in -+ * case that drops it. 
-+ */ -+ if (negative) -+ return -ENOENT; - path->mnt = mnt; - path->dentry = dentry; - if (likely(__follow_mount_rcu(nd, path, inode, seqp))) -@@ -1665,6 +1741,23 @@ static int pick_link(struct nameidata *nd, struct path *link, +@@ -1692,6 +1764,23 @@ static int pick_link(struct nameidata *nd, struct path *link, } } @@ -80643,7 +80220,7 @@ index 1c2105e..e54c8ab 100644 last = nd->stack + nd->depth++; last->link = *link; last->cookie = NULL; -@@ -1804,7 +1897,7 @@ EXPORT_SYMBOL(full_name_hash); +@@ -1831,7 +1920,7 @@ EXPORT_SYMBOL(full_name_hash); static inline u64 hash_name(const char *name) { unsigned long a, b, adata, bdata, mask, hash, len; @@ -80652,7 +80229,7 @@ index 1c2105e..e54c8ab 100644 hash = a = 0; len = -sizeof(unsigned long); -@@ -1973,6 +2066,9 @@ static const char *path_init(struct nameidata *nd, unsigned flags) +@@ -2000,6 +2089,9 @@ static const char *path_init(struct nameidata *nd, unsigned flags) nd->flags = flags | LOOKUP_JUMPED | LOOKUP_PARENT; nd->depth = 0; nd->total_link_count = 0; @@ -80662,7 +80239,7 @@ index 1c2105e..e54c8ab 100644 if (flags & LOOKUP_ROOT) { struct dentry *root = nd->root.dentry; struct inode *inode = root->d_inode; -@@ -2110,6 +2206,11 @@ static int path_lookupat(struct nameidata *nd, unsigned flags, struct path *path +@@ -2137,6 +2229,11 @@ static int path_lookupat(struct nameidata *nd, unsigned flags, struct path *path if (!err) err = complete_walk(nd); @@ -80674,7 +80251,7 @@ index 1c2105e..e54c8ab 100644 if (!err && nd->flags & LOOKUP_DIRECTORY) if (!d_can_lookup(nd->path.dentry)) err = -ENOTDIR; -@@ -2158,6 +2259,10 @@ static int path_parentat(struct nameidata *nd, unsigned flags, +@@ -2185,6 +2282,10 @@ static int path_parentat(struct nameidata *nd, unsigned flags, err = link_path_walk(s, nd); if (!err) err = complete_walk(nd); @@ -80685,7 +80262,7 @@ index 1c2105e..e54c8ab 100644 if (!err) { *parent = nd->path; nd->path.mnt = NULL; -@@ -2689,6 +2794,13 @@ static int may_open(struct path *path, int acc_mode, int flag) +@@ -2716,6 +2817,13 @@ static int may_open(struct path *path, int acc_mode, int flag) if (flag & O_NOATIME && !inode_owner_or_capable(inode)) return -EPERM; @@ -80699,7 +80276,7 @@ index 1c2105e..e54c8ab 100644 return 0; } -@@ -2955,6 +3067,18 @@ static int lookup_open(struct nameidata *nd, struct path *path, +@@ -2982,6 +3090,18 @@ static int lookup_open(struct nameidata *nd, struct path *path, /* Negative dentry, just create the file */ if (!dentry->d_inode && (op->open_flag & O_CREAT)) { umode_t mode = op->mode; @@ -80718,7 +80295,7 @@ index 1c2105e..e54c8ab 100644 if (!IS_POSIXACL(dir->d_inode)) mode &= ~current_umask(); /* -@@ -2976,6 +3100,8 @@ static int lookup_open(struct nameidata *nd, struct path *path, +@@ -3003,6 +3123,8 @@ static int lookup_open(struct nameidata *nd, struct path *path, nd->flags & LOOKUP_EXCL); if (error) goto out_dput; @@ -80727,7 +80304,7 @@ index 1c2105e..e54c8ab 100644 } out_no_open: path->dentry = dentry; -@@ -3039,6 +3165,9 @@ static int do_last(struct nameidata *nd, +@@ -3066,6 +3188,9 @@ static int do_last(struct nameidata *nd, if (error) return error; @@ -80737,7 +80314,7 @@ index 1c2105e..e54c8ab 100644 audit_inode(nd->name, dir, LOOKUP_PARENT); /* trailing slashes? 
*/ if (unlikely(nd->last.name[nd->last.len])) -@@ -3081,11 +3210,24 @@ retry_lookup: +@@ -3108,11 +3233,24 @@ retry_lookup: goto finish_open_created; } @@ -80763,7 +80340,7 @@ index 1c2105e..e54c8ab 100644 /* * If atomic_open() acquired write access it is dropped now due to -@@ -3121,6 +3263,11 @@ finish_lookup: +@@ -3148,6 +3286,11 @@ finish_lookup: if (unlikely(error)) return error; @@ -80775,7 +80352,7 @@ index 1c2105e..e54c8ab 100644 if (unlikely(d_is_symlink(path.dentry)) && !(open_flag & O_PATH)) { path_to_nameidata(&path, nd); return -ELOOP; -@@ -3143,6 +3290,12 @@ finish_open: +@@ -3170,6 +3313,12 @@ finish_open: path_put(&save_parent); return error; } @@ -80788,7 +80365,7 @@ index 1c2105e..e54c8ab 100644 audit_inode(nd->name, nd->path.dentry, 0); error = -EISDIR; if ((open_flag & O_CREAT) && d_is_dir(nd->path.dentry)) -@@ -3409,9 +3562,11 @@ static struct dentry *filename_create(int dfd, struct filename *name, +@@ -3436,9 +3585,11 @@ static struct dentry *filename_create(int dfd, struct filename *name, goto unlock; error = -EEXIST; @@ -80802,7 +80379,7 @@ index 1c2105e..e54c8ab 100644 /* * Special case - lookup gave negative, but... we had foo/bar/ * From the vfs_mknod() POV we just have a negative dentry - -@@ -3465,6 +3620,20 @@ inline struct dentry *user_path_create(int dfd, const char __user *pathname, +@@ -3492,6 +3643,20 @@ inline struct dentry *user_path_create(int dfd, const char __user *pathname, } EXPORT_SYMBOL(user_path_create); @@ -80823,7 +80400,7 @@ index 1c2105e..e54c8ab 100644 int vfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode, dev_t dev) { int error = may_create(dir, dentry); -@@ -3528,6 +3697,17 @@ retry: +@@ -3555,6 +3720,17 @@ retry: if (!IS_POSIXACL(path.dentry->d_inode)) mode &= ~current_umask(); @@ -80841,7 +80418,7 @@ index 1c2105e..e54c8ab 100644 error = security_path_mknod(&path, dentry, mode, dev); if (error) goto out; -@@ -3543,6 +3723,8 @@ retry: +@@ -3570,6 +3746,8 @@ retry: error = vfs_mknod(path.dentry->d_inode,dentry,mode,0); break; } @@ -80850,7 +80427,7 @@ index 1c2105e..e54c8ab 100644 out: done_path_create(&path, dentry); if (retry_estale(error, lookup_flags)) { -@@ -3597,9 +3779,16 @@ retry: +@@ -3624,9 +3802,16 @@ retry: if (!IS_POSIXACL(path.dentry->d_inode)) mode &= ~current_umask(); @@ -80867,7 +80444,7 @@ index 1c2105e..e54c8ab 100644 done_path_create(&path, dentry); if (retry_estale(error, lookup_flags)) { lookup_flags |= LOOKUP_REVAL; -@@ -3632,7 +3821,7 @@ void dentry_unhash(struct dentry *dentry) +@@ -3659,7 +3844,7 @@ void dentry_unhash(struct dentry *dentry) { shrink_dcache_parent(dentry); spin_lock(&dentry->d_lock); @@ -80876,7 +80453,7 @@ index 1c2105e..e54c8ab 100644 __d_drop(dentry); spin_unlock(&dentry->d_lock); } -@@ -3685,6 +3874,8 @@ static long do_rmdir(int dfd, const char __user *pathname) +@@ -3712,6 +3897,8 @@ static long do_rmdir(int dfd, const char __user *pathname) struct path path; struct qstr last; int type; @@ -80885,7 +80462,7 @@ index 1c2105e..e54c8ab 100644 unsigned int lookup_flags = 0; retry: name = user_path_parent(dfd, pathname, -@@ -3717,10 +3908,20 @@ retry: +@@ -3744,10 +3931,20 @@ retry: error = -ENOENT; goto exit3; } @@ -80906,7 +80483,7 @@ index 1c2105e..e54c8ab 100644 exit3: dput(dentry); exit2: -@@ -3815,6 +4016,8 @@ static long do_unlinkat(int dfd, const char __user *pathname) +@@ -3842,6 +4039,8 @@ static long do_unlinkat(int dfd, const char __user *pathname) int type; struct inode *inode = NULL; struct inode *delegated_inode = NULL; @@ -80915,7 +80492,7 @@ index 1c2105e..e54c8ab 
100644 unsigned int lookup_flags = 0; retry: name = user_path_parent(dfd, pathname, -@@ -3841,10 +4044,21 @@ retry_deleg: +@@ -3868,10 +4067,21 @@ retry_deleg: if (d_is_negative(dentry)) goto slashes; ihold(inode); @@ -80937,7 +80514,7 @@ index 1c2105e..e54c8ab 100644 exit2: dput(dentry); } -@@ -3933,9 +4147,17 @@ retry: +@@ -3960,9 +4170,17 @@ retry: if (IS_ERR(dentry)) goto out_putname; @@ -80955,7 +80532,7 @@ index 1c2105e..e54c8ab 100644 done_path_create(&path, dentry); if (retry_estale(error, lookup_flags)) { lookup_flags |= LOOKUP_REVAL; -@@ -4039,6 +4261,7 @@ SYSCALL_DEFINE5(linkat, int, olddfd, const char __user *, oldname, +@@ -4066,6 +4284,7 @@ SYSCALL_DEFINE5(linkat, int, olddfd, const char __user *, oldname, struct dentry *new_dentry; struct path old_path, new_path; struct inode *delegated_inode = NULL; @@ -80963,7 +80540,7 @@ index 1c2105e..e54c8ab 100644 int how = 0; int error; -@@ -4062,7 +4285,7 @@ retry: +@@ -4089,7 +4308,7 @@ retry: if (error) return error; @@ -80972,7 +80549,7 @@ index 1c2105e..e54c8ab 100644 (how & LOOKUP_REVAL)); error = PTR_ERR(new_dentry); if (IS_ERR(new_dentry)) -@@ -4074,11 +4297,26 @@ retry: +@@ -4101,11 +4320,26 @@ retry: error = may_linkat(&old_path); if (unlikely(error)) goto out_dput; @@ -80999,7 +80576,7 @@ index 1c2105e..e54c8ab 100644 done_path_create(&new_path, new_dentry); if (delegated_inode) { error = break_deleg_wait(&delegated_inode); -@@ -4393,6 +4631,20 @@ retry_deleg: +@@ -4420,6 +4654,20 @@ retry_deleg: if (new_dentry == trap) goto exit5; @@ -81020,7 +80597,7 @@ index 1c2105e..e54c8ab 100644 error = security_path_rename(&old_path, old_dentry, &new_path, new_dentry, flags); if (error) -@@ -4400,6 +4652,9 @@ retry_deleg: +@@ -4427,6 +4675,9 @@ retry_deleg: error = vfs_rename(old_path.dentry->d_inode, old_dentry, new_path.dentry->d_inode, new_dentry, &delegated_inode, flags); @@ -81030,7 +80607,7 @@ index 1c2105e..e54c8ab 100644 exit5: dput(new_dentry); exit4: -@@ -4456,14 +4711,24 @@ EXPORT_SYMBOL(vfs_whiteout); +@@ -4483,14 +4734,24 @@ EXPORT_SYMBOL(vfs_whiteout); int readlink_copy(char __user *buffer, int buflen, const char *link) { @@ -99537,10 +99114,10 @@ index b449f37..61005b3 100644 #define __meminitconst __constsection(.meminit.rodata) #define __memexit __section(.memexit.text) __exitused __cold notrace diff --git a/include/linux/init_task.h b/include/linux/init_task.h -index e8493fe..8684844 100644 +index bb9b075..ecac42c 100644 --- a/include/linux/init_task.h +++ b/include/linux/init_task.h -@@ -149,6 +149,12 @@ extern struct task_group root_task_group; +@@ -157,6 +157,12 @@ extern struct task_group root_task_group; #define INIT_TASK_COMM "swapper" @@ -99553,7 +99130,7 @@ index e8493fe..8684844 100644 #ifdef CONFIG_RT_MUTEXES # define INIT_RT_MUTEXES(tsk) \ .pi_waiters = RB_ROOT, \ -@@ -215,6 +221,7 @@ extern struct task_group root_task_group; +@@ -223,6 +229,7 @@ extern struct task_group root_task_group; RCU_POINTER_INITIALIZER(cred, &init_cred), \ .comm = INIT_TASK_COMM, \ .thread = INIT_THREAD, \ @@ -100156,7 +99733,7 @@ index 3d385c8..deacb6a 100644 static inline int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst) diff --git a/include/linux/mm.h b/include/linux/mm.h -index bf6f117..c8abe91 100644 +index 2b05068..c58989c 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -136,6 +136,11 @@ extern unsigned int kobjsize(const void *objp); @@ -100190,7 +99767,7 @@ index bf6f117..c8abe91 100644 struct mmu_gather; struct inode; -@@ -1160,8 +1166,8 @@ int follow_pfn(struct vm_area_struct *vma, 
unsigned long address, +@@ -1181,8 +1187,8 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address, unsigned long *pfn); int follow_phys(struct vm_area_struct *vma, unsigned long address, unsigned int flags, unsigned long *prot, resource_size_t *phys); @@ -100201,7 +99778,7 @@ index bf6f117..c8abe91 100644 static inline void unmap_shared_mapping_range(struct address_space *mapping, loff_t const holebegin, loff_t const holelen) -@@ -1201,9 +1207,9 @@ static inline int fixup_user_fault(struct task_struct *tsk, +@@ -1222,9 +1228,9 @@ static inline int fixup_user_fault(struct task_struct *tsk, } #endif @@ -100214,7 +99791,7 @@ index bf6f117..c8abe91 100644 long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm, unsigned long start, unsigned long nr_pages, -@@ -1251,34 +1257,6 @@ int clear_page_dirty_for_io(struct page *page); +@@ -1272,34 +1278,6 @@ int clear_page_dirty_for_io(struct page *page); int get_cmdline(struct task_struct *task, char *buffer, int buflen); @@ -100249,7 +99826,7 @@ index bf6f117..c8abe91 100644 extern struct task_struct *task_of_stack(struct task_struct *task, struct vm_area_struct *vma, bool in_group); -@@ -1401,8 +1379,15 @@ static inline int __pud_alloc(struct mm_struct *mm, pgd_t *pgd, +@@ -1422,8 +1400,15 @@ static inline int __pud_alloc(struct mm_struct *mm, pgd_t *pgd, { return 0; } @@ -100265,7 +99842,7 @@ index bf6f117..c8abe91 100644 #endif #if defined(__PAGETABLE_PMD_FOLDED) || !defined(CONFIG_MMU) -@@ -1412,6 +1397,12 @@ static inline int __pmd_alloc(struct mm_struct *mm, pud_t *pud, +@@ -1433,6 +1418,12 @@ static inline int __pmd_alloc(struct mm_struct *mm, pud_t *pud, return 0; } @@ -100278,7 +99855,7 @@ index bf6f117..c8abe91 100644 static inline void mm_nr_pmds_init(struct mm_struct *mm) {} static inline unsigned long mm_nr_pmds(struct mm_struct *mm) -@@ -1424,6 +1415,7 @@ static inline void mm_dec_nr_pmds(struct mm_struct *mm) {} +@@ -1445,6 +1436,7 @@ static inline void mm_dec_nr_pmds(struct mm_struct *mm) {} #else int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address); @@ -100286,7 +99863,7 @@ index bf6f117..c8abe91 100644 static inline void mm_nr_pmds_init(struct mm_struct *mm) { -@@ -1461,11 +1453,23 @@ static inline pud_t *pud_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long a +@@ -1482,11 +1474,23 @@ static inline pud_t *pud_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long a NULL: pud_offset(pgd, address); } @@ -100310,7 +99887,7 @@ index bf6f117..c8abe91 100644 #endif /* CONFIG_MMU && !__ARCH_HAS_4LEVEL_HACK */ #if USE_SPLIT_PTE_PTLOCKS -@@ -1846,12 +1850,23 @@ extern struct vm_area_struct *copy_vma(struct vm_area_struct **, +@@ -1867,12 +1871,23 @@ extern struct vm_area_struct *copy_vma(struct vm_area_struct **, bool *need_rmap_locks); extern void exit_mmap(struct mm_struct *); @@ -100334,7 +99911,7 @@ index bf6f117..c8abe91 100644 if (rlim < RLIM_INFINITY) { if (((new - start) + (end_data - start_data)) > rlim) return -ENOSPC; -@@ -1884,6 +1899,7 @@ extern unsigned long do_mmap_pgoff(struct file *file, unsigned long addr, +@@ -1905,6 +1920,7 @@ extern unsigned long do_mmap_pgoff(struct file *file, unsigned long addr, unsigned long len, unsigned long prot, unsigned long flags, unsigned long pgoff, unsigned long *populate); extern int do_munmap(struct mm_struct *, unsigned long, size_t); @@ -100342,7 +99919,7 @@ index bf6f117..c8abe91 100644 #ifdef CONFIG_MMU extern int __mm_populate(unsigned long addr, unsigned long len, -@@ -1912,10 +1928,11 @@ struct vm_unmapped_area_info { +@@ -1933,10 
+1949,11 @@ struct vm_unmapped_area_info { unsigned long high_limit; unsigned long align_mask; unsigned long align_offset; @@ -100356,7 +99933,7 @@ index bf6f117..c8abe91 100644 /* * Search for an unmapped address range. -@@ -1927,7 +1944,7 @@ extern unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info); +@@ -1948,7 +1965,7 @@ extern unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info); * - satisfies (begin_addr & align_mask) == (align_offset & align_mask) */ static inline unsigned long @@ -100365,7 +99942,7 @@ index bf6f117..c8abe91 100644 { if (info->flags & VM_UNMAPPED_AREA_TOPDOWN) return unmapped_area_topdown(info); -@@ -1989,6 +2006,10 @@ extern struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long add +@@ -2010,6 +2027,10 @@ extern struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long add extern struct vm_area_struct * find_vma_prev(struct mm_struct * mm, unsigned long addr, struct vm_area_struct **pprev); @@ -100376,7 +99953,7 @@ index bf6f117..c8abe91 100644 /* Look up the first VMA which intersects the interval start_addr..end_addr-1, NULL if none. Assume start_addr < end_addr. */ static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * mm, unsigned long start_addr, unsigned long end_addr) -@@ -2018,10 +2039,10 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm, +@@ -2039,10 +2060,10 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm, } #ifdef CONFIG_MMU @@ -100389,7 +99966,7 @@ index bf6f117..c8abe91 100644 { return __pgprot(0); } -@@ -2083,6 +2104,11 @@ void vm_stat_account(struct mm_struct *, unsigned long, struct file *, long); +@@ -2104,6 +2125,11 @@ void vm_stat_account(struct mm_struct *, unsigned long, struct file *, long); static inline void vm_stat_account(struct mm_struct *mm, unsigned long flags, struct file *file, long pages) { @@ -100401,7 +99978,7 @@ index bf6f117..c8abe91 100644 mm->total_vm += pages; } #endif /* CONFIG_PROC_FS */ -@@ -2186,7 +2212,7 @@ extern int get_hwpoison_page(struct page *page); +@@ -2207,7 +2233,7 @@ extern int get_hwpoison_page(struct page *page); extern int sysctl_memory_failure_early_kill; extern int sysctl_memory_failure_recovery; extern void shake_page(struct page *p, int access); @@ -100410,7 +99987,7 @@ index bf6f117..c8abe91 100644 extern int soft_offline_page(struct page *page, int flags); -@@ -2271,5 +2297,11 @@ void __init setup_nr_node_ids(void); +@@ -2292,5 +2318,11 @@ void __init setup_nr_node_ids(void); static inline void setup_nr_node_ids(void) {} #endif @@ -101182,10 +100759,10 @@ index 4ea1d37..80f4b33 100644 /* * The return value from decompress routine is the length of the diff --git a/include/linux/preempt.h b/include/linux/preempt.h -index 84991f1..6f23603 100644 +index bea8dd8..534a23d 100644 --- a/include/linux/preempt.h +++ b/include/linux/preempt.h -@@ -131,11 +131,16 @@ extern void preempt_count_sub(int val); +@@ -140,11 +140,16 @@ extern void preempt_count_sub(int val); #define preempt_count_dec_and_test() __preempt_count_dec_and_test() #endif @@ -101202,7 +100779,7 @@ index 84991f1..6f23603 100644 #define preempt_active_enter() \ do { \ -@@ -157,6 +162,12 @@ do { \ +@@ -166,6 +171,12 @@ do { \ barrier(); \ } while (0) @@ -101215,7 +100792,7 @@ index 84991f1..6f23603 100644 #define sched_preempt_enable_no_resched() \ do { \ barrier(); \ -@@ -165,6 +176,12 @@ do { \ +@@ -174,6 +185,12 @@ do { \ #define preempt_enable_no_resched() sched_preempt_enable_no_resched() @@ -101228,7 
+100805,7 @@ index 84991f1..6f23603 100644 #define preemptible() (preempt_count() == 0 && !irqs_disabled()) #ifdef CONFIG_PREEMPT -@@ -225,8 +242,10 @@ do { \ +@@ -234,8 +251,10 @@ do { \ * region. */ #define preempt_disable() barrier() @@ -101239,7 +100816,7 @@ index 84991f1..6f23603 100644 #define preempt_enable() barrier() #define preempt_check_resched() do { } while (0) -@@ -241,11 +260,13 @@ do { \ +@@ -250,11 +269,13 @@ do { \ /* * Modules have no business playing preemption tricks. */ @@ -101594,7 +101171,7 @@ index 9b1ef0c..9fa3feb 100644 /* diff --git a/include/linux/sched.h b/include/linux/sched.h -index 04b5ada..9861651 100644 +index bfca8aa..c8b327c 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -7,7 +7,7 @@ @@ -101652,7 +101229,7 @@ index 04b5ada..9861651 100644 #ifdef CONFIG_AUDIT unsigned audit_tty; unsigned audit_tty_log_passwd; -@@ -763,7 +788,7 @@ struct signal_struct { +@@ -775,7 +800,7 @@ struct signal_struct { struct mutex cred_guard_mutex; /* guard against foreign influences on * credential calculations * (notably. ptrace) */ @@ -101661,7 +101238,7 @@ index 04b5ada..9861651 100644 /* * Bits in flags field of signal_struct. -@@ -816,6 +841,14 @@ struct user_struct { +@@ -828,6 +853,14 @@ struct user_struct { struct key *session_keyring; /* UID's default session keyring */ #endif @@ -101676,7 +101253,7 @@ index 04b5ada..9861651 100644 /* Hash table maintenance information */ struct hlist_node uidhash_node; kuid_t uid; -@@ -823,7 +856,7 @@ struct user_struct { +@@ -835,7 +868,7 @@ struct user_struct { #ifdef CONFIG_PERF_EVENTS atomic_long_t locked_vm; #endif @@ -101685,7 +101262,7 @@ index 04b5ada..9861651 100644 extern int uids_sysfs_init(void); -@@ -1344,6 +1377,9 @@ enum perf_event_task_context { +@@ -1356,6 +1389,9 @@ enum perf_event_task_context { struct task_struct { volatile long state; /* -1 unrunnable, 0 runnable, >0 stopped */ void *stack; @@ -101695,7 +101272,7 @@ index 04b5ada..9861651 100644 atomic_t usage; unsigned int flags; /* per process flags, defined below */ unsigned int ptrace; -@@ -1476,8 +1512,8 @@ struct task_struct { +@@ -1488,8 +1524,8 @@ struct task_struct { struct list_head thread_node; struct completion *vfork_done; /* for vfork() */ @@ -101706,7 +101283,7 @@ index 04b5ada..9861651 100644 cputime_t utime, stime, utimescaled, stimescaled; cputime_t gtime; -@@ -1502,11 +1538,6 @@ struct task_struct { +@@ -1514,11 +1550,6 @@ struct task_struct { struct task_cputime cputime_expires; struct list_head cpu_timers[3]; @@ -101718,7 +101295,7 @@ index 04b5ada..9861651 100644 char comm[TASK_COMM_LEN]; /* executable name excluding path - access with [gs]et_task_comm (which lock it with task_lock()) -@@ -1598,6 +1629,10 @@ struct task_struct { +@@ -1610,6 +1641,10 @@ struct task_struct { gfp_t lockdep_reclaim_gfp; #endif @@ -101729,7 +101306,7 @@ index 04b5ada..9861651 100644 /* journalling filesystem info */ void *journal_info; -@@ -1636,6 +1671,10 @@ struct task_struct { +@@ -1648,6 +1683,10 @@ struct task_struct { /* cg_list protected by css_set_lock and tsk->alloc_lock */ struct list_head cg_list; #endif @@ -101740,7 +101317,7 @@ index 04b5ada..9861651 100644 #ifdef CONFIG_FUTEX struct robust_list_head __user *robust_list; #ifdef CONFIG_COMPAT -@@ -1747,7 +1786,7 @@ struct task_struct { +@@ -1759,7 +1798,7 @@ struct task_struct { * Number of functions that haven't been traced * because of depth overrun. 
*/ @@ -101749,7 +101326,7 @@ index 04b5ada..9861651 100644 /* Pause for the tracing */ atomic_t tracing_graph_pause; #endif -@@ -1776,22 +1815,91 @@ struct task_struct { +@@ -1788,22 +1827,91 @@ struct task_struct { unsigned long task_state_change; #endif int pagefault_disabled; @@ -101849,7 +101426,7 @@ index 04b5ada..9861651 100644 /* Future-safe accessor for struct task_struct's cpus_allowed. */ #define tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed) -@@ -1873,7 +1981,7 @@ struct pid_namespace; +@@ -1885,7 +1993,7 @@ struct pid_namespace; pid_t __task_pid_nr_ns(struct task_struct *task, enum pid_type type, struct pid_namespace *ns); @@ -101858,7 +101435,7 @@ index 04b5ada..9861651 100644 { return tsk->pid; } -@@ -2241,6 +2349,25 @@ extern u64 sched_clock_cpu(int cpu); +@@ -2253,6 +2361,25 @@ extern u64 sched_clock_cpu(int cpu); extern void sched_clock_init(void); @@ -101884,7 +101461,7 @@ index 04b5ada..9861651 100644 #ifndef CONFIG_HAVE_UNSTABLE_SCHED_CLOCK static inline void sched_clock_tick(void) { -@@ -2369,7 +2496,9 @@ extern void set_curr_task(int cpu, struct task_struct *p); +@@ -2381,7 +2508,9 @@ extern void set_curr_task(int cpu, struct task_struct *p); void yield(void); union thread_union { @@ -101894,7 +101471,7 @@ index 04b5ada..9861651 100644 unsigned long stack[THREAD_SIZE/sizeof(long)]; }; -@@ -2402,6 +2531,7 @@ extern struct pid_namespace init_pid_ns; +@@ -2414,6 +2543,7 @@ extern struct pid_namespace init_pid_ns; */ extern struct task_struct *find_task_by_vpid(pid_t nr); @@ -101902,7 +101479,7 @@ index 04b5ada..9861651 100644 extern struct task_struct *find_task_by_pid_ns(pid_t nr, struct pid_namespace *ns); -@@ -2579,7 +2709,7 @@ extern void __cleanup_sighand(struct sighand_struct *); +@@ -2591,7 +2721,7 @@ extern void __cleanup_sighand(struct sighand_struct *); extern void exit_itimers(struct signal_struct *); extern void flush_itimer_signals(void); @@ -101911,7 +101488,7 @@ index 04b5ada..9861651 100644 extern int do_execve(struct filename *, const char __user * const __user *, -@@ -2784,9 +2914,9 @@ static inline unsigned long *end_of_stack(struct task_struct *p) +@@ -2796,9 +2926,9 @@ static inline unsigned long *end_of_stack(struct task_struct *p) #define task_stack_end_corrupted(task) \ (*(end_of_stack(task)) != STACK_END_MAGIC) @@ -101936,7 +101513,7 @@ index c9e4731..c716293 100644 extern unsigned int sysctl_sched_latency; extern unsigned int sysctl_sched_min_granularity; diff --git a/include/linux/security.h b/include/linux/security.h -index 79d85dd..5bc05d7 100644 +index 2f4c1f7..5bc05d7 100644 --- a/include/linux/security.h +++ b/include/linux/security.h @@ -28,6 +28,7 @@ @@ -101947,15 +101524,6 @@ index 79d85dd..5bc05d7 100644 struct linux_binprm; struct cred; -@@ -946,7 +947,7 @@ static inline int security_task_prctl(int option, unsigned long arg2, - unsigned long arg4, - unsigned long arg5) - { -- return cap_task_prctl(option, arg2, arg3, arg3, arg5); -+ return cap_task_prctl(option, arg2, arg3, arg4, arg5); - } - - static inline void security_task_to_inode(struct task_struct *p, struct inode *inode) diff --git a/include/linux/semaphore.h b/include/linux/semaphore.h index dc368b8..e895209 100644 --- a/include/linux/semaphore.h @@ -103567,18 +103135,6 @@ index e951453..0685f5b 100644 } #endif /* __NET_NET_NAMESPACE_H */ -diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h -index 37cd391..4023c4c 100644 ---- a/include/net/netfilter/nf_conntrack.h -+++ b/include/net/netfilter/nf_conntrack.h -@@ -292,6 +292,7 @@ extern 
unsigned int nf_conntrack_hash_rnd; - void init_nf_conntrack_hash_rnd(void); - - struct nf_conn *nf_ct_tmpl_alloc(struct net *net, u16 zone, gfp_t flags); -+void nf_ct_tmpl_free(struct nf_conn *tmpl); - - #define NF_CT_STAT_INC(net, count) __this_cpu_inc((net)->ct.stat->count) - #define NF_CT_STAT_INC_ATOMIC(net, count) this_cpu_inc((net)->ct.stat->count) diff --git a/include/net/netlink.h b/include/net/netlink.h index 2a5dbcc..8243656 100644 --- a/include/net/netlink.h @@ -105073,38 +104629,6 @@ index 161a180..be31d93 100644 spin_lock(&mq_lock); if (u->mq_bytes + mq_bytes < u->mq_bytes || u->mq_bytes + mq_bytes > rlimit(RLIMIT_MSGQUEUE)) { -diff --git a/ipc/msg.c b/ipc/msg.c -index 66c4f56..1471db9 100644 ---- a/ipc/msg.c -+++ b/ipc/msg.c -@@ -137,13 +137,6 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params) - return retval; - } - -- /* ipc_addid() locks msq upon success. */ -- id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni); -- if (id < 0) { -- ipc_rcu_putref(msq, msg_rcu_free); -- return id; -- } -- - msq->q_stime = msq->q_rtime = 0; - msq->q_ctime = get_seconds(); - msq->q_cbytes = msq->q_qnum = 0; -@@ -153,6 +146,13 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params) - INIT_LIST_HEAD(&msq->q_receivers); - INIT_LIST_HEAD(&msq->q_senders); - -+ /* ipc_addid() locks msq upon success. */ -+ id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni); -+ if (id < 0) { -+ ipc_rcu_putref(msq, msg_rcu_free); -+ return id; -+ } -+ - ipc_unlock_object(&msq->q_perm); - rcu_read_unlock(); - diff --git a/ipc/sem.c b/ipc/sem.c index b471e5a..89aef1d 100644 --- a/ipc/sem.c @@ -105128,7 +104652,7 @@ index b471e5a..89aef1d 100644 return sys_semtimedop(semid, tsops, nsops, NULL); } diff --git a/ipc/shm.c b/ipc/shm.c -index 4aef24d..c545631 100644 +index 0e61fd4..c545631 100644 --- a/ipc/shm.c +++ b/ipc/shm.c @@ -72,6 +72,14 @@ static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp); @@ -105146,17 +104670,7 @@ index 4aef24d..c545631 100644 void shm_init_ns(struct ipc_namespace *ns) { ns->shm_ctlmax = SHMMAX; -@@ -551,20 +559,24 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params) - if (IS_ERR(file)) - goto no_file; - -- id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni); -- if (id < 0) { -- error = id; -- goto no_id; -- } -- - shp->shm_cprid = task_tgid_vnr(current); +@@ -555,6 +563,9 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params) shp->shm_lprid = 0; shp->shm_atim = shp->shm_dtim = 0; shp->shm_ctim = get_seconds(); @@ -105166,18 +104680,7 @@ index 4aef24d..c545631 100644 shp->shm_segsz = size; shp->shm_nattch = 0; shp->shm_file = file; - shp->shm_creator = current; -+ -+ id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni); -+ if (id < 0) { -+ error = id; -+ goto no_id; -+ } -+ - list_add(&shp->shm_clist, ¤t->sysvshm.shm_clist); - - /* -@@ -1097,6 +1109,12 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr, +@@ -1098,6 +1109,12 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr, f_mode = FMODE_READ | FMODE_WRITE; } if (shmflg & SHM_EXEC) { @@ -105190,7 +104693,7 @@ index 4aef24d..c545631 100644 prot |= PROT_EXEC; acc_mode |= S_IXUGO; } -@@ -1121,6 +1139,15 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr, +@@ -1122,6 +1139,15 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr, if (err) goto out_unlock; @@ -105206,7 +104709,7 @@ index 4aef24d..c545631 100644 
ipc_lock_object(&shp->shm_perm); /* check if shm_destroy() is tearing down shp */ -@@ -1133,6 +1160,9 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr, +@@ -1134,6 +1160,9 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr, path = shp->shm_file->f_path; path_get(&path); shp->shm_nattch++; @@ -105217,7 +104720,7 @@ index 4aef24d..c545631 100644 ipc_unlock_object(&shp->shm_perm); rcu_read_unlock(); diff --git a/ipc/util.c b/ipc/util.c -index be42300..049b0ff 100644 +index 0f401d9..049b0ff 100644 --- a/ipc/util.c +++ b/ipc/util.c @@ -71,6 +71,8 @@ struct ipc_proc_iface { @@ -105229,28 +104732,6 @@ index be42300..049b0ff 100644 /** * ipc_init - initialise ipc subsystem * -@@ -237,6 +239,10 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size) - rcu_read_lock(); - spin_lock(&new->lock); - -+ current_euid_egid(&euid, &egid); -+ new->cuid = new->uid = euid; -+ new->gid = new->cgid = egid; -+ - id = idr_alloc(&ids->ipcs_idr, new, - (next_id < 0) ? 0 : ipcid_to_idx(next_id), 0, - GFP_NOWAIT); -@@ -249,10 +255,6 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size) - - ids->in_use++; - -- current_euid_egid(&euid, &egid); -- new->cuid = new->uid = euid; -- new->gid = new->cgid = egid; -- - if (next_id < 0) { - new->seq = ids->seq++; - if (ids->seq > IPCID_SEQ_MAX) @@ -494,6 +496,10 @@ int ipcperms(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp, short flag) granted_mode >>= 6; else if (in_group_p(ipcp->cgid) || in_group_p(ipcp->gid)) @@ -105486,10 +104967,10 @@ index 45432b5..988f1e4 100644 +} +EXPORT_SYMBOL(capable_wrt_inode_uidgid_nolog); diff --git a/kernel/cgroup.c b/kernel/cgroup.c -index c6c4240..8af0064 100644 +index fe6f855..7dba913 100644 --- a/kernel/cgroup.c +++ b/kernel/cgroup.c -@@ -5367,6 +5367,9 @@ static void cgroup_release_agent(struct work_struct *work) +@@ -5425,6 +5425,9 @@ static void cgroup_release_agent(struct work_struct *work) if (!pathbuf || !agentbuf) goto out; @@ -105499,7 +104980,7 @@ index c6c4240..8af0064 100644 path = cgroup_path(cgrp, pathbuf, PATH_MAX); if (!path) goto out; -@@ -5552,7 +5555,7 @@ static int cgroup_css_links_read(struct seq_file *seq, void *v) +@@ -5610,7 +5613,7 @@ static int cgroup_css_links_read(struct seq_file *seq, void *v) struct task_struct *task; int count = 0; @@ -106148,7 +105629,7 @@ index 031325e..c6342c4 100644 { struct signal_struct *sig = current->signal; diff --git a/kernel/fork.c b/kernel/fork.c -index 26a70dc..74efe33 100644 +index e769c8c..9fa1de5 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -188,12 +188,54 @@ static void free_thread_info(struct thread_info *ti) @@ -106508,7 +105989,7 @@ index 26a70dc..74efe33 100644 return 0; } -@@ -1234,7 +1337,7 @@ init_task_pid(struct task_struct *task, enum pid_type type, struct pid *pid) +@@ -1238,7 +1341,7 @@ init_task_pid(struct task_struct *task, enum pid_type type, struct pid *pid) * parts of the process environment (as per the clone * flags). The actual kick-off is left to the caller. 
*/ @@ -106517,7 +105998,7 @@ index 26a70dc..74efe33 100644 unsigned long stack_start, unsigned long stack_size, int __user *child_tidptr, -@@ -1306,6 +1409,9 @@ static struct task_struct *copy_process(unsigned long clone_flags, +@@ -1310,6 +1413,9 @@ static struct task_struct *copy_process(unsigned long clone_flags, DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled); #endif retval = -EAGAIN; @@ -106527,7 +106008,7 @@ index 26a70dc..74efe33 100644 if (atomic_read(&p->real_cred->user->processes) >= task_rlimit(p, RLIMIT_NPROC)) { if (p->real_cred->user != INIT_USER && -@@ -1556,6 +1662,11 @@ static struct task_struct *copy_process(unsigned long clone_flags, +@@ -1560,6 +1666,11 @@ static struct task_struct *copy_process(unsigned long clone_flags, goto bad_fork_free_pid; } @@ -106539,7 +106020,7 @@ index 26a70dc..74efe33 100644 if (likely(p->pid)) { ptrace_init_task(p, (clone_flags & CLONE_PTRACE) || trace); -@@ -1645,6 +1756,8 @@ bad_fork_cleanup_count: +@@ -1649,6 +1760,8 @@ bad_fork_cleanup_count: bad_fork_free: free_task(p); fork_out: @@ -106548,7 +106029,7 @@ index 26a70dc..74efe33 100644 return ERR_PTR(retval); } -@@ -1707,6 +1820,7 @@ long _do_fork(unsigned long clone_flags, +@@ -1711,6 +1824,7 @@ long _do_fork(unsigned long clone_flags, p = copy_process(clone_flags, stack_start, stack_size, child_tidptr, NULL, trace, tls); @@ -106556,7 +106037,7 @@ index 26a70dc..74efe33 100644 /* * Do this prior waking up the new thread - the thread pointer * might get invalid after that point, if the thread exits quickly. -@@ -1723,6 +1837,8 @@ long _do_fork(unsigned long clone_flags, +@@ -1727,6 +1841,8 @@ long _do_fork(unsigned long clone_flags, if (clone_flags & CLONE_PARENT_SETTID) put_user(nr, parent_tidptr); @@ -106565,7 +106046,7 @@ index 26a70dc..74efe33 100644 if (clone_flags & CLONE_VFORK) { p->vfork_done = &vfork; init_completion(&vfork); -@@ -1855,7 +1971,7 @@ void __init proc_caches_init(void) +@@ -1859,7 +1975,7 @@ void __init proc_caches_init(void) mm_cachep = kmem_cache_create("mm_struct", sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN, SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL); @@ -106574,7 +106055,7 @@ index 26a70dc..74efe33 100644 mmap_init(); nsproxy_cache_init(); } -@@ -1903,7 +2019,7 @@ static int unshare_fs(unsigned long unshare_flags, struct fs_struct **new_fsp) +@@ -1907,7 +2023,7 @@ static int unshare_fs(unsigned long unshare_flags, struct fs_struct **new_fsp) return 0; /* don't need lock here; in the worst case we'll do useless copy */ @@ -106583,7 +106064,7 @@ index 26a70dc..74efe33 100644 return 0; *new_fsp = copy_fs_struct(fs); -@@ -2015,7 +2131,8 @@ SYSCALL_DEFINE1(unshare, unsigned long, unshare_flags) +@@ -2019,7 +2135,8 @@ SYSCALL_DEFINE1(unshare, unsigned long, unshare_flags) fs = current->fs; spin_lock(&fs->lock); current->fs = new_fs; @@ -106593,7 +106074,7 @@ index 26a70dc..74efe33 100644 new_fs = NULL; else new_fs = fs; -@@ -2079,7 +2196,7 @@ int unshare_files(struct files_struct **displaced) +@@ -2083,7 +2200,7 @@ int unshare_files(struct files_struct **displaced) int sysctl_max_threads(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { @@ -109546,7 +109027,7 @@ index 750ed60..eb01466 100644 #ifdef CONFIG_RT_GROUP_SCHED /* diff --git a/kernel/sched/core.c b/kernel/sched/core.c -index e967343..5064e2f 100644 +index 6776631..45eb6ee 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -2080,7 +2080,7 @@ void set_numabalancing_state(bool enabled) @@ -109570,7 +109051,7 @@ index e967343..5064e2f 100644 if (!prev->mm) 
{ prev->active_mm = NULL; -@@ -3386,6 +3388,8 @@ int can_nice(const struct task_struct *p, const int nice) +@@ -3393,6 +3395,8 @@ int can_nice(const struct task_struct *p, const int nice) /* convert nice value [19,-20] to rlimit style value [1,40] */ int nice_rlim = nice_to_rlimit(nice); @@ -109579,7 +109060,7 @@ index e967343..5064e2f 100644 return (nice_rlim <= task_rlimit(p, RLIMIT_NICE) || capable(CAP_SYS_NICE)); } -@@ -3412,7 +3416,8 @@ SYSCALL_DEFINE1(nice, int, increment) +@@ -3419,7 +3423,8 @@ SYSCALL_DEFINE1(nice, int, increment) nice = task_nice(current) + increment; nice = clamp_val(nice, MIN_NICE, MAX_NICE); @@ -109589,7 +109070,7 @@ index e967343..5064e2f 100644 return -EPERM; retval = security_task_setnice(current, nice); -@@ -3724,6 +3729,7 @@ recheck: +@@ -3731,6 +3736,7 @@ recheck: if (policy != p->policy && !rlim_rtprio) return -EPERM; @@ -109597,7 +109078,7 @@ index e967343..5064e2f 100644 /* can't increase priority */ if (attr->sched_priority > p->rt_priority && attr->sched_priority > rlim_rtprio) -@@ -5048,6 +5054,7 @@ void idle_task_exit(void) +@@ -5055,6 +5061,7 @@ void idle_task_exit(void) if (mm != &init_mm) { switch_mm(mm, &init_mm, current); @@ -109605,7 +109086,7 @@ index e967343..5064e2f 100644 finish_arch_post_lock_switch(); } mmdrop(mm); -@@ -5150,7 +5157,7 @@ static void migrate_tasks(struct rq *dead_rq) +@@ -5157,7 +5164,7 @@ static void migrate_tasks(struct rq *dead_rq) #if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL) @@ -109614,7 +109095,7 @@ index e967343..5064e2f 100644 { .procname = "sched_domain", .mode = 0555, -@@ -5167,17 +5174,17 @@ static struct ctl_table sd_ctl_root[] = { +@@ -5174,17 +5181,17 @@ static struct ctl_table sd_ctl_root[] = { {} }; @@ -109636,7 +109117,7 @@ index e967343..5064e2f 100644 /* * In the intermediate directories, both the child directory and -@@ -5185,22 +5192,25 @@ static void sd_free_ctl_entry(struct ctl_table **tablep) +@@ -5192,22 +5199,25 @@ static void sd_free_ctl_entry(struct ctl_table **tablep) * will always be set. In the lowest directory the names are * static strings and all have proc handlers. */ @@ -109668,7 +109149,7 @@ index e967343..5064e2f 100644 const char *procname, void *data, int maxlen, umode_t mode, proc_handler *proc_handler, bool load_idx) -@@ -5220,7 +5230,7 @@ set_table_entry(struct ctl_table *entry, +@@ -5227,7 +5237,7 @@ set_table_entry(struct ctl_table *entry, static struct ctl_table * sd_alloc_ctl_domain_table(struct sched_domain *sd) { @@ -109677,7 +109158,7 @@ index e967343..5064e2f 100644 if (table == NULL) return NULL; -@@ -5258,9 +5268,9 @@ sd_alloc_ctl_domain_table(struct sched_domain *sd) +@@ -5265,9 +5275,9 @@ sd_alloc_ctl_domain_table(struct sched_domain *sd) return table; } @@ -109689,7 +109170,7 @@ index e967343..5064e2f 100644 struct sched_domain *sd; int domain_num = 0, i; char buf[32]; -@@ -5287,11 +5297,13 @@ static struct ctl_table_header *sd_sysctl_header; +@@ -5294,11 +5304,13 @@ static struct ctl_table_header *sd_sysctl_header; static void register_sched_domain_sysctl(void) { int i, cpu_num = num_possible_cpus(); @@ -109704,7 +109185,7 @@ index e967343..5064e2f 100644 if (entry == NULL) return; -@@ -5314,8 +5326,12 @@ static void unregister_sched_domain_sysctl(void) +@@ -5321,8 +5333,12 @@ static void unregister_sched_domain_sysctl(void) if (sd_sysctl_header) unregister_sysctl_table(sd_sysctl_header); sd_sysctl_header = NULL; @@ -109733,10 +109214,10 @@ index d113c3b..91a6fcc 100644 struct rq *this_rq = this_rq(); enum cpu_idle_type idle = this_rq->idle_balance ? 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h -index 84d4879..cf3ed33 100644 +index 08ab96b..82ab34c 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h -@@ -1241,7 +1241,7 @@ struct sched_class { +@@ -1242,7 +1242,7 @@ struct sched_class { #ifdef CONFIG_FAIR_GROUP_SCHED void (*task_move_group) (struct task_struct *p, int on_rq); #endif @@ -110765,7 +110246,7 @@ index 85d5bb1..aeca463 100644 update_vsyscall_tz(); if (firsttime) { diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c -index bca3667..2745765 100644 +index a20d411..255b10a 100644 --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -15,6 +15,7 @@ @@ -112807,7 +112288,7 @@ index 123bcd3..c2c85db 100644 pkmap_count[last_pkmap_nr] = 1; set_page_address(page, (void *)vaddr); diff --git a/mm/hugetlb.c b/mm/hugetlb.c -index a8c3087..ec431dc 100644 +index 62c1ec5..ec431dc 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -2442,6 +2442,7 @@ static int hugetlb_sysctl_handler_common(bool obey_mempolicy, @@ -112854,22 +112335,7 @@ index a8c3087..ec431dc 100644 if (ret) goto out; -@@ -2974,6 +2978,14 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma, - continue; - - /* -+ * Shared VMAs have their own reserves and do not affect -+ * MAP_PRIVATE accounting but it is possible that a shared -+ * VMA is using the same page so check and skip such VMAs. -+ */ -+ if (iter_vma->vm_flags & VM_MAYSHARE) -+ continue; -+ -+ /* - * Unmap the page from other VMAs without their own reserves. - * They get marked to be SIGKILLed if they fault in these - * areas. This is because a future no-page fault on this VMA -@@ -2987,6 +2999,27 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma, +@@ -2995,6 +2999,27 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma, i_mmap_unlock_write(mapping); } @@ -112897,7 +112363,7 @@ index a8c3087..ec431dc 100644 /* * Hugetlb_cow() should be called with page lock of the original hugepage held. 
* Called with hugetlb_instantiation_mutex held and pte_page locked so we -@@ -3100,6 +3133,11 @@ retry_avoidcopy: +@@ -3108,6 +3133,11 @@ retry_avoidcopy: make_huge_pte(vma, new_page, 1)); page_remove_rmap(old_page); hugepage_add_new_anon_rmap(new_page, vma, address); @@ -112909,7 +112375,7 @@ index a8c3087..ec431dc 100644 /* Make the old page be freed below */ new_page = old_page; } -@@ -3261,6 +3299,10 @@ retry: +@@ -3269,6 +3299,10 @@ retry: && (vma->vm_flags & VM_SHARED))); set_huge_pte_at(mm, address, ptep, new_pte); @@ -112920,7 +112386,7 @@ index a8c3087..ec431dc 100644 if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) { /* Optimization, do the COW without a second fault */ ret = hugetlb_cow(mm, vma, address, ptep, new_pte, page, ptl); -@@ -3328,6 +3370,10 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, +@@ -3336,6 +3370,10 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, struct address_space *mapping; int need_wait_lock = 0; @@ -112931,7 +112397,7 @@ index a8c3087..ec431dc 100644 address &= huge_page_mask(h); ptep = huge_pte_offset(mm, address); -@@ -3341,6 +3387,26 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, +@@ -3349,6 +3387,26 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, VM_FAULT_SET_HINDEX(hstate_index(h)); } @@ -113088,104 +112554,6 @@ index 64bb8a2..68e4be5 100644 error = 0; if (end == start) return error; -diff --git a/mm/memcontrol.c b/mm/memcontrol.c -index acb93c5..237d468 100644 ---- a/mm/memcontrol.c -+++ b/mm/memcontrol.c -@@ -806,12 +806,14 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz) - } - - /* -+ * Return page count for single (non recursive) @memcg. -+ * - * Implementation Note: reading percpu statistics for memcg. - * - * Both of vmstat[] and percpu_counter has threshold and do periodic - * synchronization to implement "quick" read. There are trade-off between - * reading cost and precision of value. Then, we may have a chance to implement -- * a periodic synchronizion of counter in memcg's counter. -+ * a periodic synchronization of counter in memcg's counter. - * - * But this _read() function is used for user interface now. The user accounts - * memory usage by memory cgroup and he _always_ requires exact value because -@@ -821,17 +823,24 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz) - * - * If there are kernel internal actions which can make use of some not-exact - * value, and reading all cpu value can be performance bottleneck in some -- * common workload, threashold and synchonization as vmstat[] should be -+ * common workload, threshold and synchronization as vmstat[] should be - * implemented. - */ --static long mem_cgroup_read_stat(struct mem_cgroup *memcg, -- enum mem_cgroup_stat_index idx) -+static unsigned long -+mem_cgroup_read_stat(struct mem_cgroup *memcg, enum mem_cgroup_stat_index idx) - { - long val = 0; - int cpu; - -+ /* Per-cpu values can be negative, use a signed accumulator */ - for_each_possible_cpu(cpu) - val += per_cpu(memcg->stat->count[idx], cpu); -+ /* -+ * Summing races with updates, so val may be negative. Avoid exposing -+ * transient negative values. 
-+ */ -+ if (val < 0) -+ val = 0; - return val; - } - -@@ -1498,7 +1507,7 @@ void mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p) - for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) { - if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account) - continue; -- pr_cont(" %s:%ldKB", mem_cgroup_stat_names[i], -+ pr_cont(" %s:%luKB", mem_cgroup_stat_names[i], - K(mem_cgroup_read_stat(iter, i))); - } - -@@ -3119,14 +3128,11 @@ static unsigned long tree_stat(struct mem_cgroup *memcg, - enum mem_cgroup_stat_index idx) - { - struct mem_cgroup *iter; -- long val = 0; -+ unsigned long val = 0; - -- /* Per-cpu values can be negative, use a signed accumulator */ - for_each_mem_cgroup_tree(iter, memcg) - val += mem_cgroup_read_stat(iter, idx); - -- if (val < 0) /* race ? */ -- val = 0; - return val; - } - -@@ -3469,7 +3475,7 @@ static int memcg_stat_show(struct seq_file *m, void *v) - for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) { - if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account) - continue; -- seq_printf(m, "%s %ld\n", mem_cgroup_stat_names[i], -+ seq_printf(m, "%s %lu\n", mem_cgroup_stat_names[i], - mem_cgroup_read_stat(memcg, i) * PAGE_SIZE); - } - -@@ -3494,13 +3500,13 @@ static int memcg_stat_show(struct seq_file *m, void *v) - (u64)memsw * PAGE_SIZE); - - for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) { -- long long val = 0; -+ unsigned long long val = 0; - - if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account) - continue; - for_each_mem_cgroup_tree(mi, memcg) - val += mem_cgroup_read_stat(mi, i) * PAGE_SIZE; -- seq_printf(m, "total_%s %lld\n", mem_cgroup_stat_names[i], val); -+ seq_printf(m, "total_%s %llu\n", mem_cgroup_stat_names[i], val); - } - - for (i = 0; i < MEM_CGROUP_EVENTS_NSTATS; i++) { diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 1f4446a..47abb4e 100644 --- a/mm/memory-failure.c @@ -114083,10 +113451,10 @@ index 99d4c1d..a577817 100644 capable(CAP_SYS_NICE) ? MPOL_MF_MOVE_ALL : MPOL_MF_MOVE); diff --git a/mm/migrate.c b/mm/migrate.c -index eb42671..9f2f3ea 100644 +index fcb6204..b3f1a44 100644 --- a/mm/migrate.c +++ b/mm/migrate.c -@@ -1491,8 +1491,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages, +@@ -1501,8 +1501,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages, */ tcred = __task_cred(task); if (!uid_eq(cred->euid, tcred->suid) && !uid_eq(cred->euid, tcred->uid) && @@ -116229,7 +115597,7 @@ index dbe0c1e..22c16c7 100644 return -ENOMEM; diff --git a/mm/slab.c b/mm/slab.c -index bbd0b47..eb6af9e 100644 +index ae36028..eb6af9e 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -116,6 +116,7 @@ @@ -116293,27 +115661,7 @@ index bbd0b47..eb6af9e 100644 /* * Adjust the object sizes so that we clear -@@ -2190,9 +2195,16 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags) - size += BYTES_PER_WORD; - } - #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC) -- if (size >= kmalloc_size(INDEX_NODE + 1) -- && cachep->object_size > cache_line_size() -- && ALIGN(size, cachep->align) < PAGE_SIZE) { -+ /* -+ * To activate debug pagealloc, off-slab management is necessary -+ * requirement. In early phase of initialization, small sized slab -+ * doesn't get initialized so it would not be possible. So, we need -+ * to check size >= 256. It guarantees that all necessary small -+ * sized slab is initialized in current slab initialization sequence. 
-+ */ -+ if (!slab_early_init && size >= kmalloc_size(INDEX_NODE) && -+ size >= 256 && cachep->object_size > cache_line_size() && -+ ALIGN(size, cachep->align) < PAGE_SIZE) { - cachep->obj_offset += PAGE_SIZE - ALIGN(size, cachep->align); - size = PAGE_SIZE; - } -@@ -3372,6 +3384,20 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp, +@@ -3379,6 +3384,20 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp, struct array_cache *ac = cpu_cache_get(cachep); check_irq_off(); @@ -116334,7 +115682,7 @@ index bbd0b47..eb6af9e 100644 kmemleak_free_recursive(objp, cachep->flags); objp = cache_free_debugcheck(cachep, objp, caller); -@@ -3484,7 +3510,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller) +@@ -3491,7 +3510,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller) return kmem_cache_alloc_node_trace(cachep, flags, node, size); } @@ -116343,7 +115691,7 @@ index bbd0b47..eb6af9e 100644 { return __do_kmalloc_node(size, flags, node, _RET_IP_); } -@@ -3504,7 +3530,7 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller); +@@ -3511,7 +3530,7 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller); * @flags: the type of memory to allocate (see kmalloc). * @caller: function caller for debug tracking of the caller */ @@ -116352,7 +115700,7 @@ index bbd0b47..eb6af9e 100644 unsigned long caller) { struct kmem_cache *cachep; -@@ -3577,6 +3603,7 @@ void kfree(const void *objp) +@@ -3584,6 +3603,7 @@ void kfree(const void *objp) if (unlikely(ZERO_OR_NULL_PTR(objp))) return; @@ -116360,7 +115708,7 @@ index bbd0b47..eb6af9e 100644 local_irq_save(flags); kfree_debugcheck(objp); c = virt_to_cache(objp); -@@ -3996,14 +4023,22 @@ void slabinfo_show_stats(struct seq_file *m, struct kmem_cache *cachep) +@@ -4003,14 +4023,22 @@ void slabinfo_show_stats(struct seq_file *m, struct kmem_cache *cachep) } /* cpu stats */ { @@ -116387,7 +115735,7 @@ index bbd0b47..eb6af9e 100644 #endif } -@@ -4211,13 +4246,80 @@ static const struct file_operations proc_slabstats_operations = { +@@ -4218,13 +4246,80 @@ static const struct file_operations proc_slabstats_operations = { static int __init slab_proc_init(void) { #ifdef CONFIG_DEBUG_SLAB_LEAK @@ -118353,10 +117701,10 @@ index c0f0d01..725928a 100644 frag_header.no = 0; frag_header.total_size = htons(skb->len); diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c -index a2fc843..0f8059e 100644 +index 51cda3a..a5db59e 100644 --- a/net/batman-adv/soft-interface.c +++ b/net/batman-adv/soft-interface.c -@@ -325,7 +325,7 @@ send: +@@ -330,7 +330,7 @@ send: primary_if->net_dev->dev_addr); /* set broadcast sequence number */ @@ -118365,7 +117713,7 @@ index a2fc843..0f8059e 100644 bcast_packet->seqno = htonl(seqno); batadv_add_bcast_packet_to_list(bat_priv, skb, brd_delay); -@@ -793,7 +793,7 @@ static int batadv_softif_init_late(struct net_device *dev) +@@ -798,7 +798,7 @@ static int batadv_softif_init_late(struct net_device *dev) atomic_set(&bat_priv->batman_queue_left, BATADV_BATMAN_QUEUE_LEN); atomic_set(&bat_priv->mesh_state, BATADV_MESH_INACTIVE); @@ -118374,7 +117722,7 @@ index a2fc843..0f8059e 100644 atomic_set(&bat_priv->tt.vn, 0); atomic_set(&bat_priv->tt.local_changes, 0); atomic_set(&bat_priv->tt.ogm_append_cnt, 0); -@@ -807,7 +807,7 @@ static int batadv_softif_init_late(struct net_device *dev) +@@ -812,7 +812,7 @@ static int batadv_softif_init_late(struct net_device *dev) /* randomize initial seqno to avoid collision */ get_random_bytes(&random_seqno, 
sizeof(random_seqno)); @@ -118383,7 +117731,7 @@ index a2fc843..0f8059e 100644 bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; -@@ -1015,7 +1015,7 @@ int batadv_softif_is_valid(const struct net_device *net_dev) +@@ -1020,7 +1020,7 @@ int batadv_softif_is_valid(const struct net_device *net_dev) return 0; } @@ -118393,7 +117741,7 @@ index a2fc843..0f8059e 100644 .priv_size = sizeof(struct batadv_priv), .setup = batadv_softif_init_early, diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h -index 67d6348..4358755 100644 +index 55610a8..aba2ae8 100644 --- a/net/batman-adv/types.h +++ b/net/batman-adv/types.h @@ -81,7 +81,7 @@ enum batadv_dhcp_recipient { @@ -118405,7 +117753,7 @@ index 67d6348..4358755 100644 }; /** -@@ -783,7 +783,7 @@ struct batadv_priv { +@@ -786,7 +786,7 @@ struct batadv_priv { atomic_t bonding; atomic_t fragmentation; atomic_t packet_size_max; @@ -118414,7 +117762,7 @@ index 67d6348..4358755 100644 #ifdef CONFIG_BATMAN_ADV_BLA atomic_t bridge_loop_avoidance; #endif -@@ -802,7 +802,7 @@ struct batadv_priv { +@@ -805,7 +805,7 @@ struct batadv_priv { #endif uint32_t isolation_mark; uint32_t isolation_mark_mask; @@ -121831,6 +121179,19 @@ index 683346d..cb0e12d 100644 seq_printf(m, "Max data size: %d\n", self->max_data_size); seq_printf(m, "Max header size: %d\n", self->max_header_size); +diff --git a/net/irda/irlmp.c b/net/irda/irlmp.c +index a26c401..4396459 100644 +--- a/net/irda/irlmp.c ++++ b/net/irda/irlmp.c +@@ -1839,7 +1839,7 @@ static void *irlmp_seq_hb_idx(struct irlmp_iter_state *iter, loff_t *off) + for (element = hashbin_get_first(iter->hashbin); + element != NULL; + element = hashbin_get_next(iter->hashbin)) { +- if (!off || *off-- == 0) { ++ if (!off || (*off)-- == 0) { + /* NB: hashbin left locked */ + return element; + } diff --git a/net/irda/irproc.c b/net/irda/irproc.c index b9ac598..f88cc56 100644 --- a/net/irda/irproc.c @@ -122325,137 +121686,6 @@ index 338b404..839dcb0 100644 .pf = PF_INET, .get_optmin = SO_IP_SET, .get_optmax = SO_IP_SET + 1, -diff --git a/net/netfilter/ipset/ip_set_hash_netnet.c b/net/netfilter/ipset/ip_set_hash_netnet.c -index 3c862c0..a93dfeb 100644 ---- a/net/netfilter/ipset/ip_set_hash_netnet.c -+++ b/net/netfilter/ipset/ip_set_hash_netnet.c -@@ -131,6 +131,13 @@ hash_netnet4_data_next(struct hash_netnet4_elem *next, - #define HOST_MASK 32 - #include "ip_set_hash_gen.h" - -+static void -+hash_netnet4_init(struct hash_netnet4_elem *e) -+{ -+ e->cidr[0] = HOST_MASK; -+ e->cidr[1] = HOST_MASK; -+} -+ - static int - hash_netnet4_kadt(struct ip_set *set, const struct sk_buff *skb, - const struct xt_action_param *par, -@@ -160,7 +167,7 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[], - { - const struct hash_netnet *h = set->data; - ipset_adtfn adtfn = set->variant->adt[adt]; -- struct hash_netnet4_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, }; -+ struct hash_netnet4_elem e = { }; - struct ip_set_ext ext = IP_SET_INIT_UEXT(set); - u32 ip = 0, ip_to = 0, last; - u32 ip2 = 0, ip2_from = 0, ip2_to = 0, last2; -@@ -169,6 +176,7 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[], - if (tb[IPSET_ATTR_LINENO]) - *lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]); - -+ hash_netnet4_init(&e); - if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] || - !ip_set_optattr_netorder(tb, IPSET_ATTR_CADT_FLAGS))) - return -IPSET_ERR_PROTOCOL; -@@ -357,6 +365,13 @@ hash_netnet6_data_next(struct hash_netnet4_elem *next, - #define IP_SET_EMIT_CREATE - #include "ip_set_hash_gen.h" - -+static void -+hash_netnet6_init(struct 
hash_netnet6_elem *e) -+{ -+ e->cidr[0] = HOST_MASK; -+ e->cidr[1] = HOST_MASK; -+} -+ - static int - hash_netnet6_kadt(struct ip_set *set, const struct sk_buff *skb, - const struct xt_action_param *par, -@@ -385,13 +400,14 @@ hash_netnet6_uadt(struct ip_set *set, struct nlattr *tb[], - enum ipset_adt adt, u32 *lineno, u32 flags, bool retried) - { - ipset_adtfn adtfn = set->variant->adt[adt]; -- struct hash_netnet6_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, }; -+ struct hash_netnet6_elem e = { }; - struct ip_set_ext ext = IP_SET_INIT_UEXT(set); - int ret; - - if (tb[IPSET_ATTR_LINENO]) - *lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]); - -+ hash_netnet6_init(&e); - if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] || - !ip_set_optattr_netorder(tb, IPSET_ATTR_CADT_FLAGS))) - return -IPSET_ERR_PROTOCOL; -diff --git a/net/netfilter/ipset/ip_set_hash_netportnet.c b/net/netfilter/ipset/ip_set_hash_netportnet.c -index 0c68734..9a14c23 100644 ---- a/net/netfilter/ipset/ip_set_hash_netportnet.c -+++ b/net/netfilter/ipset/ip_set_hash_netportnet.c -@@ -142,6 +142,13 @@ hash_netportnet4_data_next(struct hash_netportnet4_elem *next, - #define HOST_MASK 32 - #include "ip_set_hash_gen.h" - -+static void -+hash_netportnet4_init(struct hash_netportnet4_elem *e) -+{ -+ e->cidr[0] = HOST_MASK; -+ e->cidr[1] = HOST_MASK; -+} -+ - static int - hash_netportnet4_kadt(struct ip_set *set, const struct sk_buff *skb, - const struct xt_action_param *par, -@@ -175,7 +182,7 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[], - { - const struct hash_netportnet *h = set->data; - ipset_adtfn adtfn = set->variant->adt[adt]; -- struct hash_netportnet4_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, }; -+ struct hash_netportnet4_elem e = { }; - struct ip_set_ext ext = IP_SET_INIT_UEXT(set); - u32 ip = 0, ip_to = 0, ip_last, p = 0, port, port_to; - u32 ip2_from = 0, ip2_to = 0, ip2_last, ip2; -@@ -185,6 +192,7 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[], - if (tb[IPSET_ATTR_LINENO]) - *lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]); - -+ hash_netportnet4_init(&e); - if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] || - !ip_set_attr_netorder(tb, IPSET_ATTR_PORT) || - !ip_set_optattr_netorder(tb, IPSET_ATTR_PORT_TO) || -@@ -412,6 +420,13 @@ hash_netportnet6_data_next(struct hash_netportnet4_elem *next, - #define IP_SET_EMIT_CREATE - #include "ip_set_hash_gen.h" - -+static void -+hash_netportnet6_init(struct hash_netportnet6_elem *e) -+{ -+ e->cidr[0] = HOST_MASK; -+ e->cidr[1] = HOST_MASK; -+} -+ - static int - hash_netportnet6_kadt(struct ip_set *set, const struct sk_buff *skb, - const struct xt_action_param *par, -@@ -445,7 +460,7 @@ hash_netportnet6_uadt(struct ip_set *set, struct nlattr *tb[], - { - const struct hash_netportnet *h = set->data; - ipset_adtfn adtfn = set->variant->adt[adt]; -- struct hash_netportnet6_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, }; -+ struct hash_netportnet6_elem e = { }; - struct ip_set_ext ext = IP_SET_INIT_UEXT(set); - u32 port, port_to; - bool with_ports = false; -@@ -454,6 +469,7 @@ hash_netportnet6_uadt(struct ip_set *set, struct nlattr *tb[], - if (tb[IPSET_ATTR_LINENO]) - *lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]); - -+ hash_netportnet6_init(&e); - if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] || - !ip_set_attr_netorder(tb, IPSET_ATTR_PORT) || - !ip_set_optattr_netorder(tb, IPSET_ATTR_PORT_TO) || diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c index b0f7b62..0541842 100644 --- 
a/net/netfilter/ipvs/ip_vs_conn.c @@ -122669,25 +121899,10 @@ index 45da11a..ef3e5dc 100644 table = kmemdup(acct_sysctl_table, sizeof(acct_sysctl_table), GFP_KERNEL); diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c -index 3c20d02..b2c15f4 100644 +index 0625a42..b2c15f4 100644 --- a/net/netfilter/nf_conntrack_core.c +++ b/net/netfilter/nf_conntrack_core.c -@@ -320,12 +320,13 @@ out_free: - } - EXPORT_SYMBOL_GPL(nf_ct_tmpl_alloc); - --static void nf_ct_tmpl_free(struct nf_conn *tmpl) -+void nf_ct_tmpl_free(struct nf_conn *tmpl) - { - nf_ct_ext_destroy(tmpl); - nf_ct_ext_free(tmpl); - kfree(tmpl); - } -+EXPORT_SYMBOL_GPL(nf_ct_tmpl_free); - - static void - destroy_conntrack(struct nf_conntrack *nfct) -@@ -1753,6 +1754,10 @@ void nf_conntrack_init_end(void) +@@ -1754,6 +1754,10 @@ void nf_conntrack_init_end(void) #define DYING_NULLS_VAL ((1<<30)+1) #define TEMPLATE_NULLS_VAL ((1<<30)+2) @@ -122698,7 +121913,7 @@ index 3c20d02..b2c15f4 100644 int nf_conntrack_init_net(struct net *net) { int ret = -ENOMEM; -@@ -1777,7 +1782,11 @@ int nf_conntrack_init_net(struct net *net) +@@ -1778,7 +1782,11 @@ int nf_conntrack_init_net(struct net *net) if (!net->ct.stat) goto err_pcpu_lists; @@ -122776,10 +121991,10 @@ index 7a394df..bd91a8a 100644 table = kmemdup(tstamp_sysctl_table, sizeof(tstamp_sysctl_table), GFP_KERNEL); diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c -index 675d12c..b36e825 100644 +index a5d41df..1ff49be 100644 --- a/net/netfilter/nf_log.c +++ b/net/netfilter/nf_log.c -@@ -386,7 +386,7 @@ static const struct file_operations nflog_file_ops = { +@@ -391,7 +391,7 @@ static const struct file_operations nflog_file_ops = { #ifdef CONFIG_SYSCTL static char nf_log_sysctl_fnames[NFPROTO_NUMPROTO-NFPROTO_UNSPEC][3]; @@ -122788,7 +122003,7 @@ index 675d12c..b36e825 100644 static int nf_log_proc_dostring(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) -@@ -417,13 +417,15 @@ static int nf_log_proc_dostring(struct ctl_table *table, int write, +@@ -422,13 +422,15 @@ static int nf_log_proc_dostring(struct ctl_table *table, int write, rcu_assign_pointer(net->nf.nf_loggers[tindex], logger); mutex_unlock(&nf_log_mutex); } else { @@ -122829,19 +122044,6 @@ index c68c1e5..8b5d670 100644 mutex_unlock(&nf_sockopt_mutex); } EXPORT_SYMBOL(nf_unregister_sockopt); -diff --git a/net/netfilter/nf_synproxy_core.c b/net/netfilter/nf_synproxy_core.c -index d7f1685..d6ee8f8 100644 ---- a/net/netfilter/nf_synproxy_core.c -+++ b/net/netfilter/nf_synproxy_core.c -@@ -378,7 +378,7 @@ static int __net_init synproxy_net_init(struct net *net) - err3: - free_percpu(snet->stats); - err2: -- nf_conntrack_free(ct); -+ nf_ct_tmpl_free(ct); - err1: - return err; - } diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c index 4670821..a6c3c47d 100644 --- a/net/netfilter/nfnetlink_log.c @@ -122865,7 +122067,7 @@ index 4670821..a6c3c47d 100644 if (data_len) { diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c -index 66def31..d64a66d 100644 +index 9c8fab0..5080c7c 100644 --- a/net/netfilter/nft_compat.c +++ b/net/netfilter/nft_compat.c @@ -322,14 +322,7 @@ static void nft_match_eval(const struct nft_expr *expr, @@ -122884,19 +122086,6 @@ index 66def31..d64a66d 100644 } static const struct nla_policy nft_match_policy[NFTA_MATCH_MAX + 1] = { -diff --git a/net/netfilter/xt_CT.c b/net/netfilter/xt_CT.c -index 43ddeee..f3377ce 100644 ---- a/net/netfilter/xt_CT.c -+++ b/net/netfilter/xt_CT.c -@@ -233,7 +233,7 @@ out: - 
return 0; - - err3: -- nf_conntrack_free(ct); -+ nf_ct_tmpl_free(ct); - err2: - nf_ct_l3proto_module_put(par->family); - err1: diff --git a/net/netfilter/xt_gradm.c b/net/netfilter/xt_gradm.c new file mode 100644 index 0000000..c566332 @@ -124413,7 +123602,7 @@ index 2e1348b..2d3b463 100644 /* Build up the XDR from the receive buffers. */ rdma_build_arg_xdr(rqstp, ctxt, ctxt->byte_len); diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c -index d25cd43..1f5cb46 100644 +index 95412ab..29e8f37 100644 --- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c +++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c @@ -218,7 +218,7 @@ static int send_write(struct svcxprt_rdma *xprt, struct svc_rqst *rqstp, @@ -127078,6 +126267,20 @@ index aee2ec5..c276071 100644 /* record the root user tracking */ rb_link_node(&root_key_user.node, +diff --git a/security/keys/request_key.c b/security/keys/request_key.c +index 486ef6f..0d62531 100644 +--- a/security/keys/request_key.c ++++ b/security/keys/request_key.c +@@ -440,6 +440,9 @@ static struct key *construct_key_and_link(struct keyring_search_context *ctx, + + kenter(""); + ++ if (ctx->index_key.type == &key_type_keyring) ++ return ERR_PTR(-EPERM); ++ + user = key_user_lookup(current_fsuid()); + if (!user) + return ERR_PTR(-ENOMEM); diff --git a/security/min_addr.c b/security/min_addr.c index f728728..6457a0c 100644 --- a/security/min_addr.c @@ -172894,7 +172097,7 @@ index 0a578fe..b81f62d 100644 }) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c -index 8b8a444..4ac8a9a 100644 +index 5a2a78a..4f322d3 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -81,12 +81,17 @@ LIST_HEAD(vm_list); @@ -172995,7 +172198,7 @@ index 8b8a444..4ac8a9a 100644 hardware_disable_all_nolock(); r = -EBUSY; } -@@ -3421,7 +3434,7 @@ static void kvm_sched_out(struct preempt_notifier *pn, +@@ -3436,7 +3449,7 @@ static void kvm_sched_out(struct preempt_notifier *pn, kvm_arch_vcpu_put(vcpu); } @@ -173004,7 +172207,7 @@ index 8b8a444..4ac8a9a 100644 struct module *module) { int r; -@@ -3468,7 +3481,7 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, +@@ -3483,7 +3496,7 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, if (!vcpu_align) vcpu_align = __alignof__(struct kvm_vcpu); kvm_vcpu_cache = kmem_cache_create("kvm_vcpu", vcpu_size, vcpu_align, @@ -173013,7 +172216,7 @@ index 8b8a444..4ac8a9a 100644 if (!kvm_vcpu_cache) { r = -ENOMEM; goto out_free_3; -@@ -3478,9 +3491,11 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, +@@ -3493,9 +3506,11 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, if (r) goto out_free; @@ -173025,7 +172228,7 @@ index 8b8a444..4ac8a9a 100644 r = misc_register(&kvm_dev); if (r) { -@@ -3490,9 +3505,6 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, +@@ -3505,9 +3520,6 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, register_syscore_ops(&kvm_syscore_ops); diff --git a/4.2.3/4425_grsec_remove_EI_PAX.patch b/4.2.4/4425_grsec_remove_EI_PAX.patch index 2a1aa6c..2a1aa6c 100644 --- a/4.2.3/4425_grsec_remove_EI_PAX.patch +++ b/4.2.4/4425_grsec_remove_EI_PAX.patch diff --git a/4.2.3/4427_force_XATTR_PAX_tmpfs.patch b/4.2.4/4427_force_XATTR_PAX_tmpfs.patch index 9157231..9157231 100644 --- a/4.2.3/4427_force_XATTR_PAX_tmpfs.patch +++ b/4.2.4/4427_force_XATTR_PAX_tmpfs.patch diff --git a/4.2.3/4430_grsec-remove-localversion-grsec.patch b/4.2.4/4430_grsec-remove-localversion-grsec.patch index 31cf878..31cf878 
100644
--- a/4.2.3/4430_grsec-remove-localversion-grsec.patch
+++ b/4.2.4/4430_grsec-remove-localversion-grsec.patch
diff --git a/4.2.3/4435_grsec-mute-warnings.patch b/4.2.4/4435_grsec-mute-warnings.patch
index b7564e4..b7564e4 100644
--- a/4.2.3/4435_grsec-mute-warnings.patch
+++ b/4.2.4/4435_grsec-mute-warnings.patch
diff --git a/4.2.3/4440_grsec-remove-protected-paths.patch b/4.2.4/4440_grsec-remove-protected-paths.patch
index 741546d..741546d 100644
--- a/4.2.3/4440_grsec-remove-protected-paths.patch
+++ b/4.2.4/4440_grsec-remove-protected-paths.patch
diff --git a/4.2.3/4450_grsec-kconfig-default-gids.patch b/4.2.4/4450_grsec-kconfig-default-gids.patch
index 9524b1f..9524b1f 100644
--- a/4.2.3/4450_grsec-kconfig-default-gids.patch
+++ b/4.2.4/4450_grsec-kconfig-default-gids.patch
diff --git a/4.2.3/4465_selinux-avc_audit-log-curr_ip.patch b/4.2.4/4465_selinux-avc_audit-log-curr_ip.patch
index ba89596..ba89596 100644
--- a/4.2.3/4465_selinux-avc_audit-log-curr_ip.patch
+++ b/4.2.4/4465_selinux-avc_audit-log-curr_ip.patch
diff --git a/4.2.3/4470_disable-compat_vdso.patch b/4.2.4/4470_disable-compat_vdso.patch
index 7f84a27..7f84a27 100644
--- a/4.2.3/4470_disable-compat_vdso.patch
+++ b/4.2.4/4470_disable-compat_vdso.patch
diff --git a/4.2.3/4475_emutramp_default_on.patch b/4.2.4/4475_emutramp_default_on.patch
index afd6019..afd6019 100644
--- a/4.2.3/4475_emutramp_default_on.patch
+++ b/4.2.4/4475_emutramp_default_on.patch