KVM: x86/mmu: Assert that correct locks are held for page write-tracking
Sean Christopherson [Sat, 29 Jul 2023 01:35:31 +0000 (18:35 -0700)]
KVM: x86/mmu: Assert that correct locks are held for page write-tracking

When adding/removing gfns to/from write-tracking, assert that mmu_lock
is held for write, and that either slots_lock or kvm->srcu is held.
mmu_lock must be held for write to protect gfn_write_track's refcount,
and SRCU or slots_lock must be held to protect the memslot itself.
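
As a rough sketch (not necessarily the literal patch), the assertions could
look like the following, using standard lockdep helpers; the wrapper name is
illustrative:

  static void assert_write_track_locks_held(struct kvm *kvm)
  {
          /* gfn_write_track's refcounts are protected by mmu_lock (write). */
          lockdep_assert_held_write(&kvm->mmu_lock);

          /* The memslot itself is protected by slots_lock or kvm->srcu. */
          lockdep_assert_once(lockdep_is_held(&kvm->slots_lock) ||
                              srcu_read_lock_held(&kvm->srcu));
  }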

Tested-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-26-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Rename page-track APIs to reflect the new reality
Sean Christopherson [Sat, 29 Jul 2023 01:35:30 +0000 (18:35 -0700)]
KVM: x86/mmu: Rename page-track APIs to reflect the new reality

Rename the page-track APIs to capture that they're all about tracking
writes, now that the facade of supporting multiple modes is gone.

Opportunistically replace "slot" with "gfn" in anticipation of removing
the @slot param from the external APIs.

No functional change intended.

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-25-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Drop infrastructure for multiple page-track modes
Sean Christopherson [Sat, 29 Jul 2023 01:35:29 +0000 (18:35 -0700)]
KVM: x86/mmu: Drop infrastructure for multiple page-track modes

Drop "support" for multiple page-track modes, as there is no evidence
that array-based and refcounted metadata is the optimal solution for
other modes, nor is there any evidence that other use cases, e.g. for
access-tracking, will be a good fit for the page-track machinery in
general.

E.g. one potential use case of access-tracking would be to prevent guest
access to poisoned memory (from the guest's perspective).  In that case,
the number of poisoned pages is likely to be a very small percentage of
the guest memory, and there is no need to reference count the number of
access-tracking users, i.e. expanding gfn_track[] for a new mode would be
grossly inefficient.  And for poisoned memory, host userspace would also
likely want to trap accesses, e.g. to inject #MC into the guest, and that
isn't currently supported by the page-track framework.

A better alternative for that poisoned page use case is likely a
variation of the proposed per-gfn attributes overlay (linked), which
would allow efficiently tracking the sparse set of poisoned pages, and by
default would exit to userspace on access.
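
Concretely, the per-memslot metadata collapses from one refcount array per
mode into a single write-tracking array; a sketch of the before/after in
struct kvm_arch_memory_slot (the array bound's name is approximate):

  /* Before: one refcount array per page-track mode. */
  unsigned short *gfn_track[KVM_PAGE_TRACK_MAX];

  /* After: write-tracking is the only mode, so a single array suffices. */
  unsigned short *gfn_write_track;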

Link: https://lore.kernel.org/all/Y2WB48kD0J4VGynX@google.com
Cc: Ben Gardon <bgardon@google.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-24-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Use page-track notifiers iff there are external users
Sean Christopherson [Sat, 29 Jul 2023 01:35:28 +0000 (18:35 -0700)]
KVM: x86/mmu: Use page-track notifiers iff there are external users

Disable the page-track notifier code at compile time if there are no
external users, i.e. if CONFIG_KVM_EXTERNAL_WRITE_TRACKING=n.  KVM itself
now hooks emulated writes directly instead of relying on the page-track
mechanism.

Provide a stub for "struct kvm_page_track_notifier_node" so that including
headers directly from the command line, e.g. for testing include guards,
doesn't fail due to a struct having an incomplete type.
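
A sketch of the resulting compile-time split in the public header (only the
write hook is shown; the stub comment mirrors the rationale above):

  #ifdef CONFIG_KVM_EXTERNAL_WRITE_TRACKING
  struct kvm_page_track_notifier_node {
          struct hlist_node node;

          void (*track_write)(gfn_t gfn, const u8 *new, int bytes,
                              struct kvm_page_track_notifier_node *node);
  };
  #else
  /*
   * Stub so that including the header standalone, e.g. to test include
   * guards, doesn't fail on an incomplete type.
   */
  struct kvm_page_track_notifier_node {
  };
  #endif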

Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-23-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Move KVM-only page-track declarations to internal header
Sean Christopherson [Sat, 29 Jul 2023 01:35:27 +0000 (18:35 -0700)]
KVM: x86/mmu: Move KVM-only page-track declarations to internal header

Bury the declaration of the page-track helpers that are intended only for
internal KVM use in a "private" header.  In addition to guarding against
unwanted usage of the internal-only helpers, dropping their definitions
avoids exposing other structures that should be KVM-internal, e.g. for
memslots.  This is a baby step toward making kvm_host.h a KVM-internal
header in the very distant future.

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-22-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86: Remove the unused page-track hook track_flush_slot()
Yan Zhao [Sat, 29 Jul 2023 01:35:26 +0000 (18:35 -0700)]
KVM: x86: Remove the unused page-track hook track_flush_slot()

Remove ->track_flush_slot(); there are no longer any users, and it's
unlikely a "flush" hook will ever be the correct API to provide to an
external page-track user.

Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-21-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: switch from ->track_flush_slot() to ->track_remove_region()
Yan Zhao [Sat, 29 Jul 2023 01:35:25 +0000 (18:35 -0700)]
drm/i915/gvt: switch from ->track_flush_slot() to ->track_remove_region()

Switch from the poorly named and flawed ->track_flush_slot() to the newly
introduced ->track_remove_region().  From KVMGT's perspective, the two
hooks are functionally equivalent, the only difference being that
->track_remove_region() is called only when KVM is 100% certain the
memory region will be removed, i.e. is invoked slightly later in KVM's
memslot modification flow.

Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
[sean: handle name change, massage changelog, rebase]
Tested-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-20-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86: Add a new page-track hook to handle memslot deletion
Yan Zhao [Sat, 29 Jul 2023 01:35:24 +0000 (18:35 -0700)]
KVM: x86: Add a new page-track hook to handle memslot deletion

Add a new page-track hook, track_remove_region(), that is called when a
memslot DELETE operation is about to be committed.  The "remove" hook
will be used by KVMGT and will effectively replace the existing
track_flush_slot() altogether now that KVM itself doesn't rely on the
"flush" hook either.

The "flush" hook is flawed as it's invoked before the memslot operation
is guaranteed to succeed, i.e. KVM might ultimately keep the existing
memslot without notifying external page track users, a.k.a. KVMGT.  In
practice, this can't currently happen on x86, but there are no guarantees
that won't change in the future, not to mention that "flush" does a very
poor job of describing what is happening.

Pass in the gfn+nr_pages instead of the slot itself so external users,
i.e. KVMGT, don't need to be exposed to KVM internals (memslots).  This will
help set the stage for additional cleanups to the page-track APIs.

Opportunistically align the existing srcu_read_lock_held() usage so that
the new case doesn't stand out like a sore thumb (and not aligning the
new code makes bots unhappy).
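
A sketch of how KVM invokes the new hook when the DELETE is committed,
passing gfn+nr_pages instead of the slot (names approximate the actual
helpers; the SRCU-protected notifier list is pre-existing infrastructure):

  void kvm_page_track_delete_slot(struct kvm *kvm, struct kvm_memory_slot *slot)
  {
          struct kvm_page_track_notifier_head *head = &kvm->arch.track_notifier_head;
          struct kvm_page_track_notifier_node *n;
          int idx;

          if (hlist_empty(&head->track_notifier_list))
                  return;

          idx = srcu_read_lock(&head->track_srcu);
          hlist_for_each_entry_srcu(n, &head->track_notifier_list, node,
                                    srcu_read_lock_held(&head->track_srcu))
                  if (n->track_remove_region)
                          n->track_remove_region(slot->base_gfn, slot->npages, n);
          srcu_read_unlock(&head->track_srcu, idx);
  }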

Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-19-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Don't bother removing write-protection on to-be-deleted slot
Sean Christopherson [Sat, 29 Jul 2023 01:35:23 +0000 (18:35 -0700)]
drm/i915/gvt: Don't bother removing write-protection on to-be-deleted slot

When handling a slot "flush", don't call back into KVM to drop write
protection for gfns in the slot.  Now that KVM rejects attempts to move
memory slots while KVMGT is attached, the only time a slot is "flushed"
is when it's being removed, i.e. the memslot and all its write-tracking
metadata is about to be deleted.

Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-18-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86: Reject memslot MOVE operations if KVMGT is attached
Sean Christopherson [Sat, 29 Jul 2023 01:35:22 +0000 (18:35 -0700)]
KVM: x86: Reject memslot MOVE operations if KVMGT is attached

Disallow moving memslots if the VM has external page-track users, i.e. if
KVMGT is being used to expose a virtual GPU to the guest, as KVMGT doesn't
correctly handle moving memory regions.

Note, this is potential ABI breakage!  E.g. userspace could move regions
that aren't shadowed by KVMGT without harming the guest.  However, the
only known user of KVMGT is QEMU, and QEMU doesn't move generic memory
regions.  KVM's own support for moving memory regions was also broken for
multiple years (albeit for an edge case, but arguably moving RAM is
itself an edge case), e.g. see commit edd4fa37baa6 ("KVM: x86: Allocate
new rmap and large page tracking when moving memslot").
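
A sketch of the guard in the x86 memslot preparation path, assuming a small
helper that reports whether an external page-track notifier is registered
(the helper name is an assumption):

  /* In kvm_arch_prepare_memory_region(): KVMGT can't handle moving memslots. */
  if (change == KVM_MR_MOVE && kvm_page_track_has_external_user(kvm))
          return -EINVAL;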

Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-17-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: drm/i915/gvt: Drop @vcpu from KVM's ->track_write() hook
Sean Christopherson [Sat, 29 Jul 2023 01:35:21 +0000 (18:35 -0700)]
KVM: drm/i915/gvt: Drop @vcpu from KVM's ->track_write() hook

Drop @vcpu from KVM's ->track_write() hook provided for external users of
the page-track APIs now that KVM itself doesn't use the page-track
mechanism.

Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-16-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Don't bounce through page-track mechanism for guest PTEs
Sean Christopherson [Sat, 29 Jul 2023 01:35:20 +0000 (18:35 -0700)]
KVM: x86/mmu: Don't bounce through page-track mechanism for guest PTEs

Don't use the generic page-track mechanism to handle writes to guest PTEs
in KVM's MMU.  KVM's MMU needs access to information that should not be
exposed to external page-track users, e.g. KVM needs (for some definitions
of "need") the vCPU to query the current paging mode, whereas external
users, i.e. KVMGT, have no ties to the current vCPU and so should never
need the vCPU.

Moving away from the page-track mechanism will allow dropping use of the
page-track mechanism for KVM's own MMU, and will also allow simplifying
and cleaning up the page-track APIs.

Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-15-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Don't rely on page-track mechanism to flush on memslot change
Sean Christopherson [Sat, 29 Jul 2023 01:35:19 +0000 (18:35 -0700)]
KVM: x86/mmu: Don't rely on page-track mechanism to flush on memslot change

Call kvm_mmu_zap_all_fast() directly when flushing a memslot instead of
bouncing through the page-track mechanism.  KVM (unfortunately) needs to
zap and flush all page tables on memslot DELETE/MOVE irrespective of
whether KVM is shadowing guest page tables.
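
The net effect is that the arch hook zaps directly instead of bouncing
through the notifier; roughly:

  void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
                                     struct kvm_memory_slot *slot)
  {
          /* Zap and flush everything on memslot DELETE/MOVE, shadowing or not. */
          kvm_mmu_zap_all_fast(kvm);
  }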

This will allow changing KVM to register a page-track notifier on the
first shadow root allocation, and will also allow deleting the misguided
kvm_page_track_flush_slot() hook itself once KVM-GT also moves to a
different method for reacting to memslot changes.

No functional change intended.

Cc: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20221110014821.1548347-2-seanjc@google.com
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-14-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Move kvm_arch_flush_shadow_{all,memslot}() to mmu.c
Sean Christopherson [Sat, 29 Jul 2023 01:35:18 +0000 (18:35 -0700)]
KVM: x86/mmu: Move kvm_arch_flush_shadow_{all,memslot}() to mmu.c

Move x86's implementation of kvm_arch_flush_shadow_{all,memslot}() into
mmu.c, and make kvm_mmu_zap_all() static as it was globally visible only
for kvm_arch_flush_shadow_all().  This will allow refactoring
kvm_arch_flush_shadow_memslot() to call kvm_mmu_zap_all_fast() directly
without having to expose that helper outside of mmu.c.  Keeping
everything in mmu.c will also likely simplify supporting TDX, which
intends to zap only the relevant SPTEs on memslot updates.

No functional change intended.

Suggested-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-13-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Protect gfn hash table with vgpu_lock
Sean Christopherson [Sat, 29 Jul 2023 01:35:17 +0000 (18:35 -0700)]
drm/i915/gvt: Protect gfn hash table with vgpu_lock

Use vgpu_lock instead of KVM's mmu_lock to protect accesses to the hash
table used to track which gfns are write-protected when shadowing the
guest's GTT, and hoist the acquisition of vgpu_lock from
intel_vgpu_page_track_handler() out to its sole caller,
kvmgt_page_track_write().

This fixes a bug where kvmgt_page_track_write(), which doesn't hold
kvm->mmu_lock, could race with intel_gvt_page_track_remove() and trigger
a use-after-free.

Fixing kvmgt_page_track_write() by taking kvm->mmu_lock is not an option
as mmu_lock is a r/w spinlock, and intel_vgpu_page_track_handler() might
sleep when acquiring vgpu->cache_lock deep down the callstack:

  intel_vgpu_page_track_handler()
  |
  |->  page_track->handler / ppgtt_write_protection_handler()
       |
       |-> ppgtt_handle_guest_write_page_table_bytes()
           |
           |->  ppgtt_handle_guest_write_page_table()
                |
                |-> ppgtt_handle_guest_entry_removal()
                    |
                    |-> ppgtt_invalidate_pte()
                        |
                        |-> intel_gvt_dma_unmap_guest_page()
                            |
                            |-> mutex_lock(&vgpu->cache_lock);

Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-12-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Drop unused helper intel_vgpu_reset_gtt()
Sean Christopherson [Sat, 29 Jul 2023 01:35:16 +0000 (18:35 -0700)]
drm/i915/gvt: Drop unused helper intel_vgpu_reset_gtt()

Drop intel_vgpu_reset_gtt() as it no longer has any callers.  In addition
to eliminating dead code, this eliminates the last possible scenario where
__kvmgt_protect_table_find() can be reached without holding vgpu_lock.
Requiring vgpu_lock to be held when calling __kvmgt_protect_table_find()
will allow protecting the gfn hash with vgpu_lock without too much fuss.

No functional change intended.

Fixes: ba25d977571e ("drm/i915/gvt: Do not destroy ppgtt_mm during vGPU D3->D0.")
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-11-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Use an "unsigned long" to iterate over memslot gfns
Sean Christopherson [Sat, 29 Jul 2023 01:35:15 +0000 (18:35 -0700)]
drm/i915/gvt: Use an "unsigned long" to iterate over memslot gfns

Use an "unsigned long" instead of an "int" when iterating over the gfns
in a memslot.  The number of pages in the memslot is tracked as an
"unsigned long", e.g. KVMGT could theoretically break if a KVM memslot
larger than 16TiB were deleted (2^32 * 4KiB).
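
Illustrative sketch of the fix (the per-gfn call stands in for KVMGT's actual
removal helper):

  /* slot->npages is an "unsigned long"; an "int" iterator could wrap. */
  unsigned long i;

  for (i = 0; i < slot->npages; i++)
          intel_gvt_page_track_remove(info, slot->base_gfn + i);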

Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-10-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Don't rely on KVM's gfn_to_pfn() to query possible 2M GTT
Sean Christopherson [Sat, 29 Jul 2023 01:35:14 +0000 (18:35 -0700)]
drm/i915/gvt: Don't rely on KVM's gfn_to_pfn() to query possible 2M GTT

Now that gvt_pin_guest_page() explicitly verifies the pinned PFN is a
transparent hugepage, don't use KVM's gfn_to_pfn() to pre-check if a
2MiB GTT entry is possible and instead just try to map the GFN with a 2MiB
entry.  Using KVM to query a pfn that is ultimately managed through VFIO is
odd, and KVM's gfn_to_pfn() is not intended for non-KVM consumption; it's
exported only because of KVM vendor modules (x86 and PPC).

Open code the check on 2MiB support instead of keeping
is_2MB_gtt_possible() around for a single line of code.

Move the call to intel_gvt_dma_map_guest_page() for a 4KiB entry into its
case statement, i.e. fork the common path into the 4KiB and 2MiB "direct"
shadow paths.  Keeping the call in the "common" path is arguably more in
the spirit of "one change per patch", but retaining the local "page_size"
variable is silly, i.e. the call site will be changed either way, and
jumping around the no-longer-common code is more subtle and rather odd,
i.e. would just need to be immediately cleaned up.

Drop the error message from gvt_pin_guest_page() when KVMGT attempts to
shadow a 2MiB guest page that isn't backed by a compatible hugepage in the
host.  Dropping the pre-check on a THP makes it much more likely that the
"error" will be encountered in normal operation.

Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-9-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Error out on an attempt to shadowing an unknown GTT entry type
Sean Christopherson [Sat, 29 Jul 2023 01:35:13 +0000 (18:35 -0700)]
drm/i915/gvt: Error out on an attempt to shadowing an unknown GTT entry type

Bail from ppgtt_populate_shadow_entry() if an unexpected GTT entry type
is encountered instead of subtly falling through to the common "direct
shadow" path.  Eliminating the default/error path's reliance on the common
handling will allow hoisting intel_gvt_dma_map_guest_page() into the case
statements so that the 2MiB case can try intel_gvt_dma_map_guest_page()
and fall back to splitting the entry on failure.

Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-8-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Explicitly check that vGPU is attached before shadowing
Sean Christopherson [Thu, 3 Aug 2023 22:32:06 +0000 (15:32 -0700)]
drm/i915/gvt: Explicitly check that vGPU is attached before shadowing

Move the check that a vGPU is attached from is_2MB_gtt_possible() all the
way up to shadow_ppgtt_mm() to avoid unnecessary work, and to make it more
obvious that a future cleanup of is_2MB_gtt_possible() isn't introducing a
bug.

is_2MB_gtt_possible() has only one caller, ppgtt_populate_shadow_entry(),
and all paths in ppgtt_populate_shadow_entry() eventually check for
attachment by way of intel_gvt_dma_map_guest_page().

And of the paths that lead to ppgtt_populate_shadow_entry(),
shadow_ppgtt_mm() is the only one that doesn't already check for
INTEL_VGPU_STATUS_ACTIVE or INTEL_VGPU_STATUS_ATTACHED.

  workload_thread() <= pick_next_workload() => INTEL_VGPU_STATUS_ACTIVE
  |
  -> dispatch_workload()
     |
     |-> prepare_workload()
         |
         -> intel_vgpu_sync_oos_pages()
         |  |
         |  |-> ppgtt_set_guest_page_sync()
         |      |
         |      |-> sync_oos_page()
         |          |
         |          |-> ppgtt_populate_shadow_entry()
         |
         |-> intel_vgpu_flush_post_shadow()
             |
  1:         |-> ppgtt_handle_guest_write_page_table()
                 |
                 |-> ppgtt_handle_guest_entry_add()
                     |
  2:                 | -> ppgtt_populate_spt_by_guest_entry()
                     |    |
                     |    |-> ppgtt_populate_spt()
                     |        |
                     |        |-> ppgtt_populate_shadow_entry()
                     |            |
                     |            |-> ppgtt_populate_spt_by_guest_entry() [see 2]
                     |
                     |-> ppgtt_populate_shadow_entry()

  kvmgt_page_track_write()  <= KVM callback => INTEL_VGPU_STATUS_ATTACHED
  |
  |-> intel_vgpu_page_track_handler()
      |
      |-> ppgtt_write_protection_handler()
          |
          |-> ppgtt_handle_guest_write_page_table_bytes()
              |
              |-> ppgtt_handle_guest_write_page_table() [see 1]

Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yan Zhao <yan.y.zhao@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Put the page reference obtained by KVM's gfn_to_pfn()
Sean Christopherson [Sat, 29 Jul 2023 01:35:11 +0000 (18:35 -0700)]
drm/i915/gvt: Put the page reference obtained by KVM's gfn_to_pfn()

Put the struct page reference acquired by gfn_to_pfn(); KVM's API is that
the caller is ultimately responsible for dropping any reference.

Note, kvm_release_pfn_clean() ensures the pfn is actually a refcounted
struct page before trying to put any references.
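
The resulting flow in is_2MB_gtt_possible() looks roughly like this
(simplified here to take a gfn directly; the real helper works on a GTT
entry):

  static int is_2MB_gtt_possible(struct intel_vgpu *vgpu, gfn_t gfn)
  {
          kvm_pfn_t pfn = gfn_to_pfn(vgpu->vfio_device.kvm, gfn);
          int ret = 0;

          if (is_error_noslot_pfn(pfn))
                  return -EINVAL;

          if (pfn_valid(pfn))
                  ret = PageTransHuge(pfn_to_page(pfn));

          /*
           * Drop the reference taken by gfn_to_pfn(); the release is a nop
           * for pfns that aren't backed by a refcounted struct page.
           */
          kvm_release_pfn_clean(pfn);
          return ret;
  }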

Fixes: b901b252b6cf ("drm/i915/gvt: Add 2M huge gtt support")
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Don't try to unpin an empty page range
Yan Zhao [Sat, 29 Jul 2023 01:35:10 +0000 (18:35 -0700)]
drm/i915/gvt: Don't try to unpin an empty page range

Attempt to unpin pages in the error path of gvt_pin_guest_page() if and
only if at least one page was successfully pinned.  Unpinning doesn't
cause functional problems, but vfio_device_container_unpin_pages()
rightfully warns about being asked to unpin zero pages.
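
The fix is a one-line guard in the error path (sketch; gvt_unpin_guest_page()
is KVMGT's existing unpin helper):

  err:
          /* Only unpin if at least one page was actually pinned. */
          if (npage)
                  gvt_unpin_guest_page(vgpu, gfn, npage * PAGE_SIZE);
          return ret;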

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
[sean: write changelog]
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Verify hugepages are contiguous in physical address space
Sean Christopherson [Sat, 29 Jul 2023 01:35:09 +0000 (18:35 -0700)]
drm/i915/gvt: Verify hugepages are contiguous in physical address space

When shadowing a GTT entry with a 2M page, verify that the pfns are
contiguous, not just that the struct page pointers are contiguous.  The
memory map is virtually contiguous if "CONFIG_FLATMEM=y ||
CONFIG_SPARSEMEM_VMEMMAP=y", but not for "CONFIG_SPARSEMEM=y &&
CONFIG_SPARSEMEM_VMEMMAP=n", so theoretically KVMGT could encounter struct
pages that are virtually contiguous, but not physically contiguous.

In practice, this flaw is likely a non-issue as it would cause functional
problems iff a section isn't 2M aligned _and_ is directly adjacent to
another section with discontiguous pfns.
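
Sketch of the tightened check while walking the tail pages of the candidate
2M range (variable names are illustrative):

  /*
   * Contiguous "struct page *" pointers only prove the memmap entries are
   * adjacent; compare pfns to prove the physical pages are contiguous.
   */
  if (page_to_pfn(cur_page) != page_to_pfn(base_page) + i)
          return -EINVAL;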

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: remove interface intel_gvt_is_valid_gfn
Yan Zhao [Sat, 29 Jul 2023 01:35:08 +0000 (18:35 -0700)]
drm/i915/gvt: remove interface intel_gvt_is_valid_gfn

Currently intel_gvt_is_valid_gfn() is called in two places:
(1) shadowing guest GGTT entry
(2) shadowing guest PPGTT leaf entry,
which was introduced in commit cc753fbe1ac4
("drm/i915/gvt: validate gfn before set shadow page entry").

However, now it's not necessary to call this interface any more, because
a. GGTT partial write issue has been fixed by
   commit bc0686ff5fad
   ("drm/i915/gvt: support inconsecutive partial gtt entry write")
   commit 510fe10b6180
   ("drm/i915/gvt: fix a bug of partially write ggtt enties")
b. PPGTT resides in normal guest RAM and we only treat 8-byte writes
   as valid page table writes. Any invalid GPA found is regarded as
   an error, either due to guest misbehavior/attack or a bug in the host
   shadow code.
   So, rather than do GFN pre-checking and silently replace invalid GFNs
   with a scratch GFN, just remove the pre-checking and abort PPGTT
   shadowing when an error is detected.
c. GFN validity check is still performed in
   intel_gvt_dma_map_guest_page() --> gvt_pin_guest_page().
   It's more desirable to call the VFIO interface to do both the validity
   check and the mapping.
   Calling intel_gvt_is_valid_gfn() to do a GFN validity check on the KVM
   side while later mapping the GFN through the VFIO interface is
   unnecessarily fragile and confusing for unaware readers.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
[sean: remove now-unused local variables]
Acked-by: Zhi Wang <zhi.a.wang@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

drm/i915/gvt: Verify pfn is "valid" before dereferencing "struct page"
Sean Christopherson [Sat, 29 Jul 2023 01:35:07 +0000 (18:35 -0700)]
drm/i915/gvt: Verify pfn is "valid" before dereferencing "struct page"

Check that the pfn found by gfn_to_pfn() is actually backed by "struct
page" memory prior to retrieving and dereferencing the page.  KVM
supports backing guest memory with VM_PFNMAP, VM_IO, etc., and so
there is no guarantee the pfn returned by gfn_to_pfn() has an associated
"struct page".

Fixes: b901b252b6cf ("drm/i915/gvt: Add 2M huge gtt support")
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://lore.kernel.org/r/20230729013535.1070024-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: BUG() in rmap helpers iff CONFIG_BUG_ON_DATA_CORRUPTION=y
Sean Christopherson [Sat, 29 Jul 2023 00:47:22 +0000 (17:47 -0700)]
KVM: x86/mmu: BUG() in rmap helpers iff CONFIG_BUG_ON_DATA_CORRUPTION=y

Introduce KVM_BUG_ON_DATA_CORRUPTION() and use it in the low-level rmap
helpers to convert the existing BUG()s to WARN_ON_ONCE() when the kernel
is built with CONFIG_BUG_ON_DATA_CORRUPTION=n, i.e. does NOT want to BUG()
on corruption of host kernel data structures.  Environments that don't
have infrastructure to automatically capture crash dumps, i.e. aren't
likely to enable CONFIG_BUG_ON_DATA_CORRUPTION=y, are typically better
served overall by WARN-and-continue behavior (for the kernel, the VM is
dead regardless), as a BUG() while holding mmu_lock all but guarantees
the _best_ case scenario is a panic().

Make the BUG()s conditional instead of removing/replacing them entirely as
there's a non-zero chance (though by no means a guarantee) that the damage
isn't contained to the target VM, e.g. if no rmap is found for a SPTE then
KVM may be double-zapping the SPTE, i.e. has already freed the memory the
SPTE pointed at and thus KVM is reading/writing memory that KVM no longer
owns.
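
A sketch of what such a macro can look like (the exact upstream definition
may differ in details):

  #define KVM_BUG_ON_DATA_CORRUPTION(cond, kvm)                          \
  ({                                                                     \
          bool __ret = !!(cond);                                         \
                                                                         \
          if (IS_ENABLED(CONFIG_BUG_ON_DATA_CORRUPTION))                 \
                  BUG_ON(__ret);                                         \
          else                                                           \
                  KVM_BUG(__ret, kvm, "KVM data corruption detected");   \
          unlikely(__ret);                                               \
  })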

Link: https://lore.kernel.org/all/20221129191237.31447-1-mizhang@google.com
Suggested-by: Mingwei Zhang <mizhang@google.com>
Cc: David Matlack <dmatlack@google.com>
Cc: Jim Mattson <jmattson@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Link: https://lore.kernel.org/r/20230729004722.1056172-13-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Plumb "struct kvm" all the way to pte_list_remove()
Mingwei Zhang [Sat, 29 Jul 2023 00:47:21 +0000 (17:47 -0700)]
KVM: x86/mmu: Plumb "struct kvm" all the way to pte_list_remove()

Plumb "struct kvm" all the way to pte_list_remove() to allow the usage of
KVM_BUG() and/or KVM_BUG_ON().  This will allow killing only the offending
VM instead of doing BUG() if the kernel is built with
CONFIG_BUG_ON_DATA_CORRUPTION=n, i.e. does NOT want to BUG() if KVM's data
structures (rmaps) appear to be corrupted.

Signed-off-by: Mingwei Zhang <mizhang@google.com>
[sean: tweak changelog]
Link: https://lore.kernel.org/r/20230729004722.1056172-12-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Use BUILD_BUG_ON_INVALID() for KVM_MMU_WARN_ON() stub
Sean Christopherson [Sat, 29 Jul 2023 00:47:20 +0000 (17:47 -0700)]
KVM: x86/mmu: Use BUILD_BUG_ON_INVALID() for KVM_MMU_WARN_ON() stub

Use BUILD_BUG_ON_INVALID() instead of an empty do-while loop to stub out
KVM_MMU_WARN_ON() when CONFIG_KVM_PROVE_MMU=n, that way _some_ build
issues with the usage of KVM_MMU_WARN_ON() will be detected even if the
kernel is using the stubs, e.g. basic syntax errors will be detected.
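
The resulting pattern is roughly:

  #ifdef CONFIG_KVM_PROVE_MMU
  #define KVM_MMU_WARN_ON(x) WARN_ON_ONCE(x)
  #else
  /*
   * BUILD_BUG_ON_INVALID() type-checks and syntax-checks "x" without
   * generating any code, so basic breakage is caught even when the
   * assertions are compiled out.
   */
  #define KVM_MMU_WARN_ON(x) BUILD_BUG_ON_INVALID(x)
  #endif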

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20230729004722.1056172-11-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Replace MMU_DEBUG with proper KVM_PROVE_MMU Kconfig
Sean Christopherson [Sat, 29 Jul 2023 00:47:19 +0000 (17:47 -0700)]
KVM: x86/mmu: Replace MMU_DEBUG with proper KVM_PROVE_MMU Kconfig

Replace MMU_DEBUG, which requires manually modifying KVM to enable the
macro, with a proper Kconfig, KVM_PROVE_MMU.  Now that pgprintk() and
rmap_printk() are gone, i.e. the macro guards only KVM_MMU_WARN_ON() and
won't flood the kernel logs, enabling the option for debug kernels is both
desirable and feasible.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20230729004722.1056172-10-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Bug the VM if a vCPU ends up in long mode without PAE enabled
Sean Christopherson [Sat, 29 Jul 2023 00:47:18 +0000 (17:47 -0700)]
KVM: x86/mmu: Bug the VM if a vCPU ends up in long mode without PAE enabled

Promote the ASSERT(), which is quite dead code in KVM, into a KVM_BUG_ON()
for KVM's sanity check that CR4.PAE=1 if the vCPU is in long mode when
performing a walk of guest page tables.  The sanity check is quite cheap since
neither EFER nor CR4.PAE requires a VMREAD, especially relative to the
cost of walking the guest page tables.

More importantly, the sanity check would have prevented the true badness
fixed by commit 112e66017bff ("KVM: nVMX: add missing consistency checks
for CR0 and CR4").  The missed consistency check resulted in some versions
of KVM corrupting the on-stack guest_walker structure due to KVM thinking
there are 4/5 levels of page tables, but wiring up the MMU hooks to point
at the paging32 implementation, which only allocates space for two levels
of page tables in "struct guest_walker32".

Queue a page fault for injection if the assertion fails, as both callers,
FNAME(gva_to_gpa) and FNAME(walk_addr_generic), assume that walker.fault
contains sane info on a walk failure.  E.g. not populating the fault info
could result in KVM consuming and/or exposing uninitialized stack data
before the vCPU is kicked out to userspace, which doesn't happen until
KVM checks for KVM_REQ_VM_DEAD on the next enter.

Move the check below the initialization of "pte_access" so that the
aforementioned to-be-injected page fault doesn't consume uninitialized
stack data.  The information _shouldn't_ reach the guest or userspace,
but there's zero downside to being paranoid in this case.
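
A sketch of the promoted check in the guest-walker code (error-path details
omitted; per the above, the real code also fills in walker->fault before
bailing):

  /* Long mode with CR4.PAE=0 should be impossible; bug the VM if it happens. */
  if (KVM_BUG_ON(is_long_mode(vcpu) && !is_pae(vcpu), vcpu->kvm))
          goto error;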

Link: https://lore.kernel.org/r/20230729004722.1056172-9-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Convert "runtime" WARN_ON() assertions to WARN_ON_ONCE()
Sean Christopherson [Sat, 29 Jul 2023 00:47:17 +0000 (17:47 -0700)]
KVM: x86/mmu: Convert "runtime" WARN_ON() assertions to WARN_ON_ONCE()

Convert all "runtime" assertions, i.e. assertions that can be triggered
while running vCPUs, from WARN_ON() to WARN_ON_ONCE().  Every WARN in the
MMU that is tied to running vCPUs, i.e. not contained to loading and
initializing KVM, is likely to fire _a lot_ when it does trigger.  E.g. if
KVM ends up with a bug that causes a root to be invalidated before the
page fault handler is invoked, pretty much _every_ page fault VM-Exit
triggers the WARN.

If a WARN is triggered frequently, the resulting spam usually causes a lot
of damage of its own, e.g. consumes resources to log the WARN and pollutes
the kernel log, often to the point where other useful information can be
lost.  In many cases, the damage caused by the spam is actually worse than
the bug itself, e.g. KVM can almost always recover from an unexpectedly
invalid root.

On the flip side, warning every time is rarely helpful for debug and
triage, i.e. a single splat is usually sufficient to point a debugger in
the right direction, and automated testing, e.g. syzkaller, typically runs
with panic_on_warn=1, i.e. will never get past the first WARN anyways.

Lastly, when an assertion fails multiple times, the stack traces in KVM
are almost always identical, i.e. the full splat only needs to be captured
once.  And _if_ there is value in capturing information about the failed
assert, a ratelimited printk() is sufficient and less likely to rack up a
large amount of collateral damage.

Link: https://lore.kernel.org/r/20230729004722.1056172-8-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Rename MMU_WARN_ON() to KVM_MMU_WARN_ON()
Sean Christopherson [Sat, 29 Jul 2023 00:47:16 +0000 (17:47 -0700)]
KVM: x86/mmu: Rename MMU_WARN_ON() to KVM_MMU_WARN_ON()

Rename MMU_WARN_ON() to make it super obvious that the assertions are
all about KVM's MMU, not the primary MMU.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20230729004722.1056172-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Cleanup sanity check of SPTEs at SP free
Sean Christopherson [Sat, 29 Jul 2023 00:47:15 +0000 (17:47 -0700)]
KVM: x86/mmu: Cleanup sanity check of SPTEs at SP free

Massage the error message for the sanity check on SPTEs when freeing a
shadow page to be more verbose, and to print out all shadow-present SPTEs,
not just the first SPTE encountered.  Printing all SPTEs can be quite
valuable for debug, e.g. highlights whether the leak is a one-off or
widespread, or possibly the result of memory corruption (something else
in the kernel stomping on KVM's SPTEs).

Opportunistically move the MMU_WARN_ON() into the helper itself, which
will allow a future cleanup to use BUILD_BUG_ON_INVALID() as the stub for
MMU_WARN_ON().  BUILD_BUG_ON_INVALID() works as intended and results in
the compiler complaining about is_empty_shadow_page() not being declared.
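
A sketch of the reworked helper as it ends up after the later rename and
Kconfig patches above (the helper name and message format are illustrative):

  static void kvm_mmu_check_sptes_at_free(struct kvm_mmu_page *sp)
  {
  #ifdef CONFIG_KVM_PROVE_MMU
          int i;

          for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
                  if (KVM_MMU_WARN_ON(is_shadow_present_pte(sp->spt[i])))
                          pr_err_ratelimited("SPTE %llx at index %d is shadow-present at free",
                                             sp->spt[i], i);
          }
  #endif
  }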

Link: https://lore.kernel.org/r/20230729004722.1056172-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Avoid pointer arithmetic when iterating over SPTEs
Sean Christopherson [Sat, 29 Jul 2023 00:47:14 +0000 (17:47 -0700)]
KVM: x86/mmu: Avoid pointer arithmetic when iterating over SPTEs

Replace the pointer arithmetic used to iterate over SPTEs in
is_empty_shadow_page() with more standard integer-based iteration.

No functional change intended.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20230729004722.1056172-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Delete the "dbg" module param
Sean Christopherson [Sat, 29 Jul 2023 00:47:13 +0000 (17:47 -0700)]
KVM: x86/mmu: Delete the "dbg" module param

Delete KVM's "dbg" module param now that its usage in KVM is gone (it
used to guard pgprintk() and rmap_printk()).

Link: https://lore.kernel.org/r/20230729004722.1056172-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Delete rmap_printk() and all its usage
Sean Christopherson [Sat, 29 Jul 2023 00:47:12 +0000 (17:47 -0700)]
KVM: x86/mmu: Delete rmap_printk() and all its usage

Delete rmap_printk() so that MMU_WARN_ON() and MMU_DEBUG can be morphed
into something that can be regularly enabled for debug kernels.  The
information provided by rmap_printk() isn't all that useful now that the
rmap and unsync code is mature, as the prints are simultaneously too
verbose (_lots_ of messages) and yet not verbose enough to be helpful for
debug (most instances print just the SPTE pointer/value, which is rarely
sufficient to root cause anything but trivial bugs).

Alternatively, rmap_printk() could be reworked into tracepoints, but
it's not clear there is a real need as rmap bugs rarely escape initial
development, and when bugs do escape to production, they are often edge
cases and/or reside in code that isn't directly related to the rmaps.
In other words, the problems with rmap_printk() being unhelpful also apply
to tracepoints.  And deleting rmap_printk() doesn't preclude adding
tracepoints in the future.

Link: https://lore.kernel.org/r/20230729004722.1056172-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Delete pgprintk() and all its usage
Sean Christopherson [Sat, 29 Jul 2023 00:47:11 +0000 (17:47 -0700)]
KVM: x86/mmu: Delete pgprintk() and all its usage

Delete KVM's pgprintk() and all its usage, as the code is very prone
to bitrot due to being buried behind MMU_DEBUG, and the functionality has
been rendered almost entirely obsolete by the tracepoints KVM has gained
over the years.  And for the situations where the information provided by
KVM's tracepoints is insufficient, pgprintk() rarely fills in the gaps,
and is almost always far too noisy, i.e. developers end up implementing
custom prints anyways.

Link: https://lore.kernel.org/r/20230729004722.1056172-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Guard against collision with KVM-defined PFERR_IMPLICIT_ACCESS
Sean Christopherson [Fri, 21 Jul 2023 22:37:11 +0000 (15:37 -0700)]
KVM: x86/mmu: Guard against collision with KVM-defined PFERR_IMPLICIT_ACCESS

Add an assertion in kvm_mmu_page_fault() to ensure the error code provided
by hardware doesn't conflict with KVM's software-defined IMPLICIT_ACCESS
flag.  In the unlikely scenario that future hardware starts using bit 48
for a hardware-defined flag, preserving the bit could result in KVM
incorrectly interpreting the unknown flag as KVM's IMPLICIT_ACCESS flag.

WARN so that any such conflict can be surfaced to KVM developers and
resolved, but otherwise ignore the bit as KVM can't possibly rely on a
flag it knows nothing about.
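
Sketch of the guard at the top of kvm_mmu_page_fault():

  /*
   * PFERR_IMPLICIT_ACCESS is a KVM-defined software flag (bit 48); hardware
   * should never set it.  WARN and clear the bit so an unknown hardware flag
   * isn't misinterpreted as an implicit access.
   */
  if (WARN_ON_ONCE(error_code & PFERR_IMPLICIT_ACCESS))
          error_code &= ~PFERR_IMPLICIT_ACCESS;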

Fixes: 4f4aa80e3b88 ("KVM: X86: Handle implicit supervisor access with SMAP")
Acked-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lore.kernel.org/r/20230721223711.2334426-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

KVM: x86/mmu: Move the lockdep_assert of mmu_lock to inside clear_dirty_pt_masked()
Like Xu [Tue, 27 Jun 2023 04:26:39 +0000 (12:26 +0800)]
KVM: x86/mmu: Move the lockdep_assert of mmu_lock to inside clear_dirty_pt_masked()

Move the lockdep_assert_held_write(&kvm->mmu_lock) from the only one caller
kvm_tdp_mmu_clear_dirty_pt_masked() to inside clear_dirty_pt_masked().

This change makes it more obvious why it's safe for clear_dirty_pt_masked()
to use the non-atomic (for non-volatile SPTEs) tdp_mmu_clear_spte_bits()
helper. for_each_tdp_mmu_root() does its own lockdep, so the only "loss"
in lockdep coverage is if the list is completely empty.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20230627042639.12636-1-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Merge tag 'kvm-x86-misc-6.6' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Thu, 31 Aug 2023 17:36:33 +0000 (13:36 -0400)]
Merge tag 'kvm-x86-misc-6.6' of https://github.com/kvm-x86/linux into HEAD

KVM x86 changes for 6.6:

 - Misc cleanups

 - Retry APIC optimized recalculation if a vCPU is added/enabled

 - Overhaul emergency reboot code to bring SVM up to par with VMX, tie the
   "emergency disabling" behavior to KVM actually being loaded, and move all of
   the logic within KVM

 - Fix user triggerable WARNs in SVM where KVM incorrectly assumes the TSC
   ratio MSR can diverge from the default iff TSC scaling is enabled, and clean
   up related code

 - Add a framework to allow "caching" feature flags so that KVM can check if
   the guest can use a feature without needing to search guest CPUID

Merge tag 'kvm-x86-svm-6.6' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Thu, 31 Aug 2023 17:32:40 +0000 (13:32 -0400)]
Merge tag 'kvm-x86-svm-6.6' of https://github.com/kvm-x86/linux into HEAD

KVM: x86: SVM changes for 6.6:

 - Add support for SEV-ES DebugSwap, i.e. allow SEV-ES guests to use debug
   registers and generate/handle #DBs

 - Clean up LBR virtualization code

 - Fix a bug where KVM fails to set the target pCPU during an IRTE update

 - Fix fatal bugs in SEV-ES intrahost migration

 - Fix a bug where the recent (architecturally correct) change to reinject
   #BP and skip INT3 broke SEV guests (can't decode INT3 to skip it)

Merge tag 'kvm-x86-vmx-6.6' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Thu, 31 Aug 2023 17:32:06 +0000 (13:32 -0400)]
Merge tag 'kvm-x86-vmx-6.6' of https://github.com/kvm-x86/linux into HEAD

KVM: x86: VMX changes for 6.6:

 - Misc cleanups

 - Fix a bug where KVM reads a stale vmcs.IDT_VECTORING_INFO_FIELD when trying
   to handle NMI VM-Exits

Merge tag 'kvm-x86-pmu-6.6' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Thu, 31 Aug 2023 17:31:32 +0000 (13:31 -0400)]
Merge tag 'kvm-x86-pmu-6.6' of https://github.com/kvm-x86/linux into HEAD

KVM x86 PMU changes for 6.6:

 - Clean up KVM's handling of Intel architectural events

Merge tag 'kvm-riscv-6.6-1' of https://github.com/kvm-riscv/linux into HEAD
Paolo Bonzini [Thu, 31 Aug 2023 17:25:55 +0000 (13:25 -0400)]
Merge tag 'kvm-riscv-6.6-1' of https://github.com/kvm-riscv/linux into HEAD

KVM/riscv changes for 6.6

- Zba, Zbs, Zicntr, Zicsr, Zifencei, and Zihpm support for Guest/VM
- Added ONE_REG interface for SATP mode
- Added ONE_REG interface to enable/disable multiple ISA extensions
- Improved error codes returned by ONE_REG interfaces
- Added KVM_GET_REG_LIST ioctl() implementation for KVM RISC-V
- Added get-reg-list selftest for KVM RISC-V

Merge tag 'kvm-s390-next-6.6-1' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD
Paolo Bonzini [Thu, 31 Aug 2023 17:21:27 +0000 (13:21 -0400)]
Merge tag 'kvm-s390-next-6.6-1' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

- PV crypto passthrough enablement (Tony, Steffen, Viktor, Janosch)
  Allows a PV guest to use crypto cards. Card access is governed by
  the firmware, and once a crypto queue is "bound" to a PV VM, every
  other entity (PV or not) loses access until it is unbound again.
  Enablement is done via flags when creating the PV VM.

- Guest debug fixes (Ilya)

Merge tag 'kvm-x86-selftests-6.6' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Thu, 31 Aug 2023 17:20:45 +0000 (13:20 -0400)]
Merge tag 'kvm-x86-selftests-6.6' of https://github.com/kvm-x86/linux into HEAD

KVM: x86: Selftests changes for 6.6:

 - Add testcases to x86's sync_regs_test for detecting KVM TOCTOU bugs

 - Add support for printf() in guest code and convert all guest asserts to use
   printf-based reporting

 - Clean up the PMU event filter test and add new testcases

 - Include x86 selftests in the KVM x86 MAINTAINERS entry

Merge tag 'kvm-x86-generic-6.6' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Thu, 31 Aug 2023 17:19:55 +0000 (13:19 -0400)]
Merge tag 'kvm-x86-generic-6.6' of https://github.com/kvm-x86/linux into HEAD

Common KVM changes for 6.6:

 - Wrap kvm_{gfn,hva}_range.pte in a union to allow mmu_notifier events to pass
   action specific data without needing to constantly update the main handlers.

 - Drop unused function declarations

Merge tag 'kvmarm-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
Paolo Bonzini [Thu, 31 Aug 2023 17:18:53 +0000 (13:18 -0400)]
Merge tag 'kvmarm-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm64 updates for Linux 6.6

- Add support for TLB range invalidation of Stage-2 page tables,
  avoiding unnecessary invalidations. Systems that do not implement
  range invalidation still rely on a full invalidation when dealing
  with large ranges.

- Add infrastructure for forwarding traps taken from a L2 guest to
  the L1 guest, with L0 acting as the dispatcher, another baby step
  towards the full nested support.

- Simplify the way we deal with the (long deprecated) 'CPU target',
  resulting in a much needed cleanup.

- Fix another set of PMU bugs, both on the guest and host sides,
  as we seem to never have any shortage of those...

- Relax the alignment requirements of EL2 VA allocations for
  non-stack allocations, as we were otherwise wasting a lot of that
  precious VA space.

- The usual set of non-functional cleanups, although I note the lack
  of spelling fixes...

KVM: VMX: Refresh available regs and IDT vectoring info before NMI handling
Sean Christopherson [Fri, 25 Aug 2023 01:45:32 +0000 (18:45 -0700)]
KVM: VMX: Refresh available regs and IDT vectoring info before NMI handling

Reset the mask of available "registers" and refresh the IDT vectoring
info snapshot in vmx_vcpu_enter_exit(), before KVM potentially handles
an NMI VM-Exit.  One of the "registers" that KVM VMX lazily loads is the
vmcs.VM_EXIT_INTR_INFO field, which holds the vector+type on "exception
or NMI" VM-Exits, i.e. is needed to identify NMIs.  Clearing the available
registers bitmask after handling NMIs results in KVM querying info from
the last VM-Exit that read vmcs.VM_EXIT_INTR_INFO, and leads to both
missed NMIs and spurious NMIs in the host.

Opportunistically grab vmcs.IDT_VECTORING_INFO_FIELD early in the VM-Exit
path too, e.g. to guard against similar consumption of stale data.  The
field is read on every "normal" VM-Exit, and there's no point in delaying
the inevitable.

Reported-by: Like Xu <like.xu.linux@gmail.com>
Fixes: 11df586d774f ("KVM: VMX: Handle NMI VM-Exits in noinstr region")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230825014532.2846714-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

KVM: s390: pv: Allow AP-instructions for pv-guests
Steffen Eiden [Tue, 15 Aug 2023 15:14:15 +0000 (17:14 +0200)]
KVM: s390: pv: Allow AP-instructions for pv-guests

Introduces new feature bits and enablement flags for AP and AP IRQ
support.

Signed-off-by: Steffen Eiden <seiden@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Michael Mueller <mimu@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20230815151415.379760-5-seiden@linux.ibm.com
Message-Id: <20230815151415.379760-5-seiden@linux.ibm.com>

KVM: s390: Add UV feature negotiation
Steffen Eiden [Tue, 15 Aug 2023 15:14:14 +0000 (17:14 +0200)]
KVM: s390: Add UV feature negotiation

Add a uv_feature list for pv-guests to the KVM cpu-model.
The feature bits 'AP-interpretation for secure guests' and
'AP-interrupt for secure guests' are available.

Signed-off-by: Steffen Eiden <seiden@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Michael Mueller <mimu@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20230815151415.379760-4-seiden@linux.ibm.com
Message-Id: <20230815151415.379760-4-seiden@linux.ibm.com>

s390/uv: UV feature check utility
Steffen Eiden [Tue, 15 Aug 2023 15:14:13 +0000 (17:14 +0200)]
s390/uv: UV feature check utility

Introduces a function to check the existence of a UV feature.
Refactor feature bit checks to use the new function.

Signed-off-by: Steffen Eiden <seiden@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Michael Mueller <mimu@linux.ibm.com>
Link: https://lore.kernel.org/r/20230815151415.379760-3-seiden@linux.ibm.com
Message-Id: <20230815151415.379760-3-seiden@linux.ibm.com>

KVM: s390: pv: relax WARN_ONCE condition for destroy fast
Viktor Mihajlovski [Tue, 15 Aug 2023 15:14:12 +0000 (17:14 +0200)]
KVM: s390: pv: relax WARN_ONCE condition for destroy fast

Destroy configuration fast may return with RC 0x104 if there
are still bound APQNs in the configuration. The final cleanup
will occur with the standard destroy configuration UVC as
at this point in time all APQNs have been reset and thus
unbound. Therefore, don't warn if RC 0x104 is reported.

Signed-off-by: Viktor Mihajlovski <mihajlov@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Steffen Eiden <seiden@linux.ibm.com>
Reviewed-by: Michael Mueller <mimu@linux.ibm.com>
Link: https://lore.kernel.org/r/20230815151415.379760-2-seiden@linux.ibm.com
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Message-ID: <20230815151415.379760-2-seiden@linux.ibm.com>

Merge remote-tracking branch 'vfio-ap' into next
Janosch Frank [Mon, 28 Aug 2023 09:26:35 +0000 (09:26 +0000)]
Merge remote-tracking branch 'vfio-ap' into next

The Secure Execution AP support makes it possible for SE VMs to
securely use APQNs without a third party being able to snoop IO. VMs
first bind to an APQN to securely attach it, granting protected key
crypto function access. Afterwards they can associate the APQN, which
grants them clear key crypto function access. Once bound, the APQNs are
not accessible to the host until a reset is performed.

The vfio-ap patches being merged here provide the base hypervisor
Secure Execution / Protected Virtualization AP support. This includes
proper handling of APQNs that are securely attached to a SE/PV guest
especially regarding resets.

KVM: s390: selftests: Add selftest for single-stepping
Ilya Leoshkevich [Tue, 25 Jul 2023 14:37:21 +0000 (16:37 +0200)]
KVM: s390: selftests: Add selftest for single-stepping

Test different variations of single-stepping into interrupts:

- SVC and PGM interrupts;
- Interrupts generated by ISKE;
- Interrupts generated by instructions emulated by KVM;
- Interrupts generated by instructions emulated by userspace.

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20230725143857.228626-7-iii@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
[frankja@de.ibm.com: s/ASSERT_EQ/TEST_ASSERT_EQ/ because function was
renamed in the selftest printf series]
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

KVM: s390: interrupt: Fix single-stepping keyless mode exits
Ilya Leoshkevich [Tue, 25 Jul 2023 14:37:20 +0000 (16:37 +0200)]
KVM: s390: interrupt: Fix single-stepping keyless mode exits

kvm_s390_skey_check_enable() does not emulate any instructions; rather,
it clears CPUSTAT_KSS and arranges for the instruction that caused the exit
(e.g., ISKE, SSKE, RRBE or LPSWE with a keyed PSW) to run again.

Therefore, skip the PER check and let the instruction execution happen.
Otherwise, a debugger will see two single-step events on the same
instruction.

Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20230725143857.228626-6-iii@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

KVM: s390: interrupt: Fix single-stepping userspace-emulated instructions
Ilya Leoshkevich [Tue, 25 Jul 2023 14:37:19 +0000 (16:37 +0200)]
KVM: s390: interrupt: Fix single-stepping userspace-emulated instructions

Single-stepping a userspace-emulated instruction that generates an
interrupt causes GDB to land on the instruction following it instead of
the respective interrupt handler.

The reason is that after arranging a KVM_EXIT_S390_SIEIC exit,
kvm_handle_sie_intercept() calls kvm_s390_handle_per_ifetch_icpt(),
which sets KVM_GUESTDBG_EXIT_PENDING. This bit, however, is not
processed immediately, but rather persists until the next ioctl(),
causing a spurious single-step exit.

Fix by clearing this bit in ioctl().

Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20230725143857.228626-5-iii@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

KVM: s390: interrupt: Fix single-stepping kernel-emulated instructions
Ilya Leoshkevich [Tue, 25 Jul 2023 14:37:18 +0000 (16:37 +0200)]
KVM: s390: interrupt: Fix single-stepping kernel-emulated instructions

Single-stepping a kernel-emulated instruction that generates an
interrupt causes GDB to land on the instruction following it instead of
the respective interrupt handler.

The reason is that kvm_handle_sie_intercept(), after injecting the
interrupt, also processes the PER event and arranges a KVM_SINGLESTEP
exit. The interrupt is not yet delivered, however, so userspace sees
the next instruction.

Fix by avoiding the KVM_SINGLESTEP exit when there is a pending
interrupt. The next __vcpu_run() loop iteration will arrange a
KVM_SINGLESTEP exit after delivering the interrupt.

Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20230725143857.228626-4-iii@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
9 months agoKVM: s390: interrupt: Fix single-stepping into program interrupt handlers
Ilya Leoshkevich [Tue, 25 Jul 2023 14:37:17 +0000 (16:37 +0200)]
KVM: s390: interrupt: Fix single-stepping into program interrupt handlers

Currently, after single-stepping an instruction that generates a
specification exception, GDB ends up on the instruction immediately
following it.

The reason is that vcpu_post_run() injects the interrupt and sets
KVM_GUESTDBG_EXIT_PENDING, causing a KVM_SINGLESTEP exit. The
interrupt is not delivered, however, therefore userspace sees the
address of the next instruction.

Fix by letting the __vcpu_run() loop go into the next iteration,
where vcpu_pre_run() delivers the interrupt and sets
KVM_GUESTDBG_EXIT_PENDING.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-ID: <20230725143857.228626-3-iii@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
9 months agoKVM: s390: interrupt: Fix single-stepping into interrupt handlers
Ilya Leoshkevich [Tue, 25 Jul 2023 14:37:16 +0000 (16:37 +0200)]
KVM: s390: interrupt: Fix single-stepping into interrupt handlers

After single-stepping an instruction that generates an interrupt, GDB
ends up on the second instruction of the respective interrupt handler.

The reason is that vcpu_pre_run() manually delivers the interrupt, and
then __vcpu_run() runs the first handler instruction using the
CPUSTAT_P flag. This causes a KVM_SINGLESTEP exit on the second handler
instruction.

Fix by delaying the KVM_SINGLESTEP exit until after the manual
interrupt delivery.

Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20230725143857.228626-2-iii@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
9 months agoMerge tag 'kvm-x86-selftests-immutable-6.6' into next
Janosch Frank [Mon, 28 Aug 2023 09:23:36 +0000 (09:23 +0000)]
Merge tag 'kvm-x86-selftests-immutable-6.6' into next

Provide an immutable point in kvm-x86/selftests so that the guest printf()
support can be merged into other architectures' trees.

9 months agoMerge branch kvm-arm64/6.6/misc into kvmarm-master/next
Marc Zyngier [Mon, 28 Aug 2023 08:30:32 +0000 (09:30 +0100)]
Merge branch kvm-arm64/6.6/misc into kvmarm-master/next

* kvm-arm64/6.6/misc:
  : .
  : Misc KVM/arm64 updates for 6.6:
  :
  : - Don't unnecessarily align non-stack allocations in the EL2 VA space
  :
  : - Drop HCR_VIRT_EXCP_MASK, which was never used...
  :
  : - Don't use smp_processor_id() in kvm_arch_vcpu_load(),
  :   but the cpu parameter instead
  :
  : - Drop redundant call to kvm_set_pfn_accessed() in user_mem_abort()
  :
  : - Remove prototypes without implementations
  : .
  KVM: arm64: Remove size-order align in the nVHE hyp private VA range
  KVM: arm64: Remove unused declarations
  KVM: arm64: Remove redundant kvm_set_pfn_accessed() from user_mem_abort()
  KVM: arm64: Drop HCR_VIRT_EXCP_MASK
  KVM: arm64: Use the known cpu id instead of smp_processor_id()

Signed-off-by: Marc Zyngier <maz@kernel.org>
9 months agoMerge branch kvm-arm64/6.6/pmu-fixes into kvmarm-master/next
Marc Zyngier [Mon, 28 Aug 2023 08:29:11 +0000 (09:29 +0100)]
Merge branch kvm-arm64/6.6/pmu-fixes into kvmarm-master/next

* kvm-arm64/6.6/pmu-fixes:
  : .
  : Another set of PMU fixes, courtesy of Reiji Watanabe.
  : From the cover letter:
  :
  : "This series fixes a couple of PMUver related handling of
  : vPMU support.
  :
  : On systems where the PMUVer is not uniform across all PEs,
  : KVM currently does not advertise PMUv3 to the guest,
  : even if userspace successfully runs KVM_ARM_VCPU_INIT with
  : KVM_ARM_VCPU_PMU_V3."
  :
  : Additionally, a fix for an obscure counter oversubscription
  : issue happening when the host profiles the guest's EL0.
  : .
  KVM: arm64: pmu: Guard PMU emulation definitions with CONFIG_KVM
  KVM: arm64: pmu: Resync EL0 state on counter rotation
  KVM: arm64: PMU: Don't advertise STALL_SLOT_{FRONTEND,BACKEND}
  KVM: arm64: PMU: Don't advertise the STALL_SLOT event
  KVM: arm64: PMU: Avoid inappropriate use of host's PMUVer
  KVM: arm64: PMU: Disallow vPMU on non-uniform PMUVer

Signed-off-by: Marc Zyngier <maz@kernel.org>
9 months agoMerge branch kvm-arm64/tlbi-range into kvmarm-master/next
Marc Zyngier [Mon, 28 Aug 2023 08:29:02 +0000 (09:29 +0100)]
Merge branch kvm-arm64/tlbi-range into kvmarm-master/next

* kvm-arm64/tlbi-range:
  : .
  : FEAT_TLBIRANGE support, courtesy of Raghavendra Rao Ananta.
  : From the cover letter:
  :
  : "In certain code paths, KVM/ARM currently invalidates the entire VM's
  : page-tables instead of just invalidating a necessary range. For example,
  : when collapsing a table PTE to a block PTE, instead of iterating over
  : each PTE and flushing them, KVM uses 'vmalls12e1is' TLBI operation to
  : flush all the entries. This is inefficient since the guest would have
  : to refill the TLBs again, even for the addresses that aren't covered
  : by the table entry. The performance impact would scale poorly if many
  : addresses in the VM are going through this remapping.
  :
  : For architectures that implement FEAT_TLBIRANGE, KVM can replace such
  : inefficient paths by performing the invalidations only on the range of
  : addresses that are in scope. This series tries to achieve the same in
  : the areas of stage-2 map, unmap and write-protecting the pages."
  : .
  KVM: arm64: Use TLBI range-based instructions for unmap
  KVM: arm64: Invalidate the table entries upon a range
  KVM: arm64: Flush only the memslot after write-protect
  KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()
  KVM: arm64: Define kvm_tlb_flush_vmid_range()
  KVM: arm64: Implement __kvm_tlb_flush_vmid_range()
  arm64: tlb: Implement __flush_s2_tlb_range_op()
  arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
  KVM: Move kvm_arch_flush_remote_tlbs_memslot() to common code
  KVM: Allow range-based TLB invalidation from common code
  KVM: Remove CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL
  KVM: arm64: Use kvm_arch_flush_remote_tlbs()
  KVM: Declare kvm_arch_flush_remote_tlbs() globally
  KVM: Rename kvm_arch_flush_remote_tlb() to kvm_arch_flush_remote_tlbs()

Signed-off-by: Marc Zyngier <maz@kernel.org>
9 months agoMerge branch kvm-arm64/nv-trap-forwarding into kvmarm-master/next
Marc Zyngier [Mon, 28 Aug 2023 08:28:53 +0000 (09:28 +0100)]
Merge branch kvm-arm64/nv-trap-forwarding into kvmarm-master/next

* kvm-arm64/nv-trap-forwarding: (30 commits)
  : .
  : This implements the so-called "trap forwarding" infrastructure, which
  : gets used when we take a trap from an L2 guest and the L1 guest
  : wants to see the trap for itself.
  : .
  KVM: arm64: nv: Add trap description for SPSR_EL2 and ELR_EL2
  KVM: arm64: nv: Select XARRAY_MULTI to fix build error
  KVM: arm64: nv: Add support for HCRX_EL2
  KVM: arm64: Move HCRX_EL2 switch to load/put on VHE systems
  KVM: arm64: nv: Expose FGT to nested guests
  KVM: arm64: nv: Add switching support for HFGxTR/HDFGxTR
  KVM: arm64: nv: Expand ERET trap forwarding to handle FGT
  KVM: arm64: nv: Add SVC trap forwarding
  KVM: arm64: nv: Add trap forwarding for HDFGxTR_EL2
  KVM: arm64: nv: Add trap forwarding for HFGITR_EL2
  KVM: arm64: nv: Add trap forwarding for HFGxTR_EL2
  KVM: arm64: nv: Add fine grained trap forwarding infrastructure
  KVM: arm64: nv: Add trap forwarding for CNTHCTL_EL2
  KVM: arm64: nv: Add trap forwarding for MDCR_EL2
  KVM: arm64: nv: Expose FEAT_EVT to nested guests
  KVM: arm64: nv: Add trap forwarding for HCR_EL2
  KVM: arm64: nv: Add trap forwarding infrastructure
  KVM: arm64: Restructure FGT register switching
  KVM: arm64: nv: Add FGT registers
  KVM: arm64: Add missing HCR_EL2 trap bits
  ...

Signed-off-by: Marc Zyngier <maz@kernel.org>
9 months agoLinux 6.5 v6.5
Linus Torvalds [Sun, 27 Aug 2023 21:49:51 +0000 (14:49 -0700)]
Linux 6.5

9 months agoMerge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds [Sun, 27 Aug 2023 14:33:54 +0000 (07:33 -0700)]
Merge tag 'scsi-fixes' of git://git./linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "Three small driver fixes and one larger unused function set removal in
  the raid class (so no external impact)"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: snic: Fix double free in snic_tgt_create()
  scsi: core: raid_class: Remove raid_component_add()
  scsi: ufs: ufs-qcom: Clear qunipro_g4_sel for HW major version > 5
  scsi: ufs: mcq: Fix the search/wrap around logic

9 months agoMerge tag 'x86-urgent-2023-08-26' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sat, 26 Aug 2023 17:57:29 +0000 (10:57 -0700)]
Merge tag 'x86-urgent-2023-08-26' of git://git./linux/kernel/git/tip/tip

Pull x86 fixes from Ingo Molnar:
 "Fix an FPU invalidation bug on exec(), and fix a performance
  regression due to a missing setting of X86_FEATURE_OSXSAVE"

* tag 'x86-urgent-2023-08-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/fpu: Set X86_FEATURE_OSXSAVE feature after enabling OSXSAVE in CR4
  x86/fpu: Invalidate FPU state correctly on exec()

9 months agoMerge tag 'irq-urgent-2023-08-26' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sat, 26 Aug 2023 17:34:29 +0000 (10:34 -0700)]
Merge tag 'irq-urgent-2023-08-26' of git://git./linux/kernel/git/tip/tip

Pull irq fix from Thomas Gleixner:
 "A last minute fix for a regression introduced in the v6.5 merge
  window.

  The conversion of the software-based interrupt resend mechanism to
  hlist missed adding a check for whether the descriptor is already
  enqueued, and dropped the interrupt descriptor lookup for nested
  interrupts.

  The missing check for whether the descriptor is already queued causes
  hlist corruption and can be observed in the wild. The dropped parent
  descriptor lookup has not yet caused problems, but it would result in
  a stale interrupt line in the worst case.

  Add the missing enqueued check and bring the descriptor lookup back to
  cure this"

* tag 'irq-urgent-2023-08-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq: Fix software resend lockup and nested resend

9 months agoMerge tag 'loongarch-fixes-6.5-2' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sat, 26 Aug 2023 17:28:52 +0000 (10:28 -0700)]
Merge tag 'loongarch-fixes-6.5-2' of git://git./linux/kernel/git/chenhuacai/linux-loongson

Pull LoongArch fixes from Huacai Chen:
 "Fix a ptrace bug, a hw_breakpoint bug, some build errors/warnings and
  some trivial cleanups"

* tag 'loongarch-fixes-6.5-2' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
  LoongArch: Fix hw_breakpoint_control() for watchpoints
  LoongArch: Ensure FP/SIMD registers in the core dump file is up to date
  LoongArch: Put the body of play_dead() into arch_cpu_idle_dead()
  LoongArch: Add identifier names to arguments of die() declaration
  LoongArch: Return earlier in die() if notify_die() returns NOTIFY_STOP
  LoongArch: Do not kill the task in die() if notify_die() returns NOTIFY_STOP
  LoongArch: Remove <asm/export.h>
  LoongArch: Replace #include <asm/export.h> with #include <linux/export.h>
  LoongArch: Remove unneeded #include <asm/export.h>
  LoongArch: Replace -ffreestanding with finer-grained -fno-builtin's
  LoongArch: Remove redundant "source drivers/firmware/Kconfig"

9 months agogenirq: Fix software resend lockup and nested resend
Johan Hovold [Sat, 26 Aug 2023 15:40:04 +0000 (17:40 +0200)]
genirq: Fix software resend lockup and nested resend

The switch to using hlist for managing software resend of interrupts
broke resend in at least two ways:

First, unconditionally adding interrupt descriptors to the resend list can
corrupt the list when the descriptor in question has already been
added. This causes the resend tasklet to loop indefinitely with interrupts
disabled as was recently reported with the Lenovo ThinkPad X13s after
threaded NAPI was disabled in the ath11k WiFi driver.

This bug is easily fixed by restoring the old semantics of irq_sw_resend()
so that it can be called also for descriptors that have already been marked
for resend.

Second, the offending commit also broke software resend of nested
interrupts by simply discarding the code that made sure that such
interrupts are retriggered using the parent interrupt.

Add back the corresponding code that adds the parent descriptor to the
resend list.
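
A rough sketch of the resulting logic (illustrative only, not the exact
patch; the helpers used here exist in the genirq code):

  static void irq_sw_resend(struct irq_desc *desc)
  {
          /* Nested interrupts are retriggered via their parent interrupt. */
          if (irq_settings_is_nested_thread(desc)) {
                  desc = irq_to_desc(desc->parent_irq);
                  if (!desc)
                          return;
          }

          /* Only enqueue if not already queued, so a second resend request
           * cannot corrupt the hlist. */
          if (hlist_unhashed(&desc->resend_node))
                  hlist_add_head(&desc->resend_node, &irq_resend_list);
  }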

Fixes: bc06a9e08742 ("genirq: Use hlist for managing resend handlers")
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/lkml/20230809073432.4193-1-johan+linaro@kernel.org/
Link: https://lore.kernel.org/r/20230826154004.1417-1-johan+linaro@kernel.org
9 months agoLoongArch: Fix hw_breakpoint_control() for watchpoints
Huacai Chen [Sat, 26 Aug 2023 14:21:57 +0000 (22:21 +0800)]
LoongArch: Fix hw_breakpoint_control() for watchpoints

In hw_breakpoint_control(), encode_ctrl_reg() has already encoded the
MWPnCFG3_LoadEn/MWPnCFG3_StoreEn bits in info->ctrl. We don't need to
add (1 << MWPnCFG3_LoadEn | 1 << MWPnCFG3_StoreEn) unconditionally.

Otherwise we can't set read watchpoint and write watchpoint separately.

Cc: stable@vger.kernel.org
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoLoongArch: Ensure FP/SIMD registers in the core dump file is up to date
Huacai Chen [Sat, 26 Aug 2023 14:21:57 +0000 (22:21 +0800)]
LoongArch: Ensure FP/SIMD registers in the core dump file is up to date

This is a port of commit 379eb01c21795edb4c ("riscv: Ensure the value
of FP registers in the core dump file is up to date").

The values of FP/SIMD registers in the core dump file come from the
thread.fpu. However, the kernel saves the FP/SIMD registers only before
scheduling out the process. If no process switch happens during the
exception handling, the kernel will not have a chance to save the latest
values of FP/SIMD registers, so their values in the core dump file may
be incorrect. To solve this problem, force fpr_get()/simd_get()
to save the FP/SIMD registers into the thread.fpu if the target task
equals the current task.
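
The shape of the fix, as a sketch (regset boilerplate trimmed;
save_fpu_regs() is a placeholder name for whatever arch helper actually
flushes the live FP/SIMD state, and the thread.fpu layout is assumed):

  static int fpr_get(struct task_struct *target,
                     const struct user_regset *regset,
                     struct membuf to)
  {
          /* When dumping the current task, the live FP/SIMD registers may
           * be newer than what was last saved into thread.fpu, so flush
           * them into thread.fpu first. */
          if (target == current)
                  save_fpu_regs();

          return membuf_write(&to, &target->thread.fpu,
                              sizeof(target->thread.fpu));
  }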

Cc: stable@vger.kernel.org
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoKVM: arm64: Remove size-order align in the nVHE hyp private VA range
Vincent Donnefort [Fri, 11 Aug 2023 11:20:37 +0000 (12:20 +0100)]
KVM: arm64: Remove size-order align in the nVHE hyp private VA range

commit f922c13e778d ("KVM: arm64: Introduce
pkvm_alloc_private_va_range()") and commit 92abe0f81e13 ("KVM: arm64:
Introduce hyp_alloc_private_va_range()") added an alignment for the
start address of any allocation into the nVHE hypervisor private VA
range.

This alignment (to the order of the allocation size) is intended to
enable efficient stack verification (if the PAGE_SHIFT bit is zero, the
stack pointer is on the guard page and a stack overflow occurred).

But this is only necessary for stack allocations and can waste a lot of
VA space. So instead add stack-specific functions that handle the guard
page requirements, while other users (e.g. fixmap) only get page
alignment.
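
For reference, the check that the alignment enables looks roughly like
this (a sketch of the idea described above, not the actual hyp code):

  /* The stack page and its guard page are allocated together, aligned to
   * 2 * PAGE_SIZE with the guard page below the stack.  An overflow into
   * the guard page therefore clears bit PAGE_SHIFT of the stack pointer. */
  static inline bool hyp_stack_overflowed(unsigned long sp)
  {
          return !(sp & PAGE_SIZE);
  }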

Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230811112037.1147863-1-vdonnefort@google.com
9 months agoMerge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sat, 26 Aug 2023 00:49:03 +0000 (17:49 -0700)]
Merge tag 'clk-fixes-for-linus' of git://git./linux/kernel/git/clk/linux

Pull clk fixes from Stephen Boyd:
 "One clk driver fix and two clk framework fixes:

   - Fix an OOB access when devm_get_clk_from_child() is used and
     devm_clk_release() casts the void pointer to the wrong type

   - Move clk_rate_exclusive_{get,put}() within the correct ifdefs in
     clk.h so that the stubs are used when CONFIG_COMMON_CLK=n

   - Register the proper clk provider function depending on the value of
     #clock-cells in the TI keystone driver"

* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
  clk: Fix slab-out-of-bounds error in devm_clk_release()
  clk: Fix undefined reference to `clk_rate_exclusive_{get,put}'
  clk: keystone: syscon-clk: Fix audio refclk

9 months agolib/clz_ctz.c: Fix __clzdi2() and __ctzdi2() for 32-bit kernels
Helge Deller [Fri, 25 Aug 2023 19:50:33 +0000 (21:50 +0200)]
lib/clz_ctz.c: Fix __clzdi2() and __ctzdi2() for 32-bit kernels

The gcc compiler translates on some architectures the 64-bit
__builtin_clzll() function to a call to the libgcc function __clzdi2(),
which should take a 64-bit parameter on 32- and 64-bit platforms.

But in the current kernel code, the built-in __clzdi2() function is
defined to operate (wrongly) on 32-bit parameters if BITS_PER_LONG ==
32, thus the return values on 32-bit kernels are in the range from
[0..31] instead of the expected [0..63] range.

This patch fixes the in-kernel functions __clzdi2() and __ctzdi2() to
take a 64-bit parameter on 32-bit kernels as well, thus it makes the
functions identical for 32- and 64-bit kernels.
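
A sketch of what the corrected helpers boil down to (close to, but not
necessarily identical to, the in-tree fix):

  #include <linux/bitops.h>
  #include <linux/types.h>

  /* Take a full 64-bit argument even when BITS_PER_LONG == 32, so the
   * result is in the expected [0..63] range (undefined for val == 0,
   * matching the __builtin_clzll()/__builtin_ctzll() contract). */
  int __weak __clzdi2(u64 val)
  {
          return 64 - fls64(val);
  }

  int __weak __ctzdi2(u64 val)
  {
          return __ffs64(val);
  }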

This bug went unnoticed since kernel 3.11 for over 10 years, and here
are some possible reasons for that:

 a) Some architectures have assembly instructions to count the bits,
    which are used instead of calling __clzdi2(), e.g. the bsr
    instruction on x86 and cntlz on ppc. On such architectures the
    wrong __clzdi2() implementation isn't used and as such the bug has
    no effect and won't be noticed.

 b) Some architectures link to libgcc.a, and the in-kernel weak
    functions get replaced by the correct 64-bit variants from libgcc.a.

 c) __builtin_clzll() and __clzdi2() don't seem to be used in many
    places in the kernel, and most likely only in uncritical functions,
    e.g. when printing hex values via seq_put_hex_ll(). The wrong return
    value will still print the correct number, just with wrong
    formatting (e.g. with too many leading zeroes).

 d) 32-bit kernels aren't used that much any longer, so they are less
    tested.

A trivial way to verify whether the currently running 32-bit kernel is
affected by the bug is to look at the output of /proc/self/maps:

Here the kernel uses a correct implementation of __clzdi2():

  root@debian:~# cat /proc/self/maps
  00010000-00019000 r-xp 00000000 08:05 787324     /usr/bin/cat
  00019000-0001a000 rwxp 00009000 08:05 787324     /usr/bin/cat
  0001a000-0003b000 rwxp 00000000 00:00 0          [heap]
  f7551000-f770d000 r-xp 00000000 08:05 794765     /usr/lib/hppa-linux-gnu/libc.so.6
  ...

and this kernel uses the broken implementation of __clzdi2():

  root@debian:~# cat /proc/self/maps
  0000000010000-0000000019000 r-xp 00000000 000000008:000000005 787324  /usr/bin/cat
  0000000019000-000000001a000 rwxp 000000009000 000000008:000000005 787324  /usr/bin/cat
  000000001a000-000000003b000 rwxp 00000000 00:00 0  [heap]
  00000000f73d1000-00000000f758d000 r-xp 00000000 000000008:000000005 794765  /usr/lib/hppa-linux-gnu/libc.so.6
  ...

Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: 4df87bb7b6a22 ("lib: add weak clz/ctz functions")
Cc: Chanho Min <chanho.min@lge.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: stable@vger.kernel.org # v3.11+
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
9 months agoMerge tag 'mm-hotfixes-stable-2023-08-25-11-07' of git://git.kernel.org/pub/scm/linux...
Linus Torvalds [Fri, 25 Aug 2023 18:44:43 +0000 (11:44 -0700)]
Merge tag 'mm-hotfixes-stable-2023-08-25-11-07' of git://git./linux/kernel/git/akpm/mm

Pull misc fixes from Andrew Morton:
 "18 hotfixes. 13 are cc:stable and the remainder pertain to post-6.4
  issues or aren't considered suitable for a -stable backport"

* tag 'mm-hotfixes-stable-2023-08-25-11-07' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  shmem: fix smaps BUG sleeping while atomic
  selftests: cachestat: catch failing fsync test on tmpfs
  selftests: cachestat: test for cachestat availability
  maple_tree: disable mas_wr_append() when other readers are possible
  madvise:madvise_free_pte_range(): don't use mapcount() against large folio for sharing check
  madvise:madvise_free_huge_pmd(): don't use mapcount() against large folio for sharing check
  madvise:madvise_cold_or_pageout_pte_range(): don't use mapcount() against large folio for sharing check
  mm: multi-gen LRU: don't spin during memcg release
  mm: memory-failure: fix unexpected return value in soft_offline_page()
  radix tree: remove unused variable
  mm: add a call to flush_cache_vmap() in vmap_pfn()
  selftests/mm: FOLL_LONGTERM need to be updated to 0x100
  nilfs2: fix general protection fault in nilfs_lookup_dirty_data_buffers()
  mm/gup: handle cont-PTE hugetlb pages correctly in gup_must_unshare() via GUP-fast
  selftests: cgroup: fix test_kmem_basic less than error
  mm: enable page walking API to lock vmas during the walk
  smaps: use vm_normal_page_pmd() instead of follow_trans_huge_pmd()
  mm/gup: reintroduce FOLL_NUMA as FOLL_HONOR_NUMA_FAULT

9 months agoMerge tag 'riscv-for-linus-6.5-rc8' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Fri, 25 Aug 2023 16:29:47 +0000 (09:29 -0700)]
Merge tag 'riscv-for-linus-6.5-rc8' of git://git./linux/kernel/git/riscv/linux

Pull RISC-V fixes from Palmer Dabbelt:
 "This is obviously not ideal, particularly for something this late in
  the cycle.

  Unfortunately we found some uABI issues in the vector support while
  reviewing the GDB port, which has triggered a revert -- probably a
  good sign we should have reviewed GDB before merging this, I guess I
  just dropped the ball because I was so worried about the context
  extension and libc stuff I forgot. Hence the late revert.

  There's some risk here as we're still exposing the vector context for
  signal handlers, but changing that would have meant reverting all of
  the vector support. The issues we've found so far have been fixed
  already and they weren't absolute showstoppers, so we're essentially
  just playing it safe by holding ptrace support for another release (or
  until we get through a proper userspace code review).

  Summary:

   - The vector ucontext extension has been extended with vlenb

   - The vector registers ELF core dump note type has been changed to
     avoid aliasing with the CSR type used in embedded systems

   - Support for accessing vector registers via ptrace() has been
     reverted

   - Another build fix for the ISA spec changes around Zifencei/Zicsr
     that manifests on some systems built with binutils-2.37 and
     gcc-11.2"

* tag 'riscv-for-linus-6.5-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  riscv: Fix build errors using binutils2.37 toolchains
  RISC-V: vector: export VLENB csr in __sc_riscv_v_state
  RISC-V: Remove ptrace support for vectors

9 months agoMerge tag 'gpio-fixes-for-v6.5' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Fri, 25 Aug 2023 16:18:22 +0000 (09:18 -0700)]
Merge tag 'gpio-fixes-for-v6.5' of git://git./linux/kernel/git/brgl/linux

Pull gpio fixes from Bartosz Golaszewski:

 - fix an irq mapping leak in gpio-sim

 - associate the GPIO device's software node with the irq domain in
   gpio-sim

* tag 'gpio-fixes-for-v6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux:
  gpio: sim: pass the GPIO device's software node to irq domain
  gpio: sim: dispose of irq mappings before destroying the irq_sim domain

9 months agoMerge tag 'pinctrl-v6.5-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw...
Linus Torvalds [Fri, 25 Aug 2023 16:10:16 +0000 (09:10 -0700)]
Merge tag 'pinctrl-v6.5-4' of git://git./linux/kernel/git/linusw/linux-pinctrl

Pull pin control fixes from Linus Walleij:
 "Here are some Renesas and AMD driver fixes, the AMD fix affects
  important laptops in the wild so this one is pretty important. It
  seems a bit tough to get this right.

   - Fix DT parsing and related locking in the Renesas driver.

   - Fix wakeup IRQs in the AMD driver once again. Really tricky this
     one"

* tag 'pinctrl-v6.5-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
  pinctrl: amd: Mask wake bits on probe again
  pinctrl: renesas: rza2: Add lock around pinctrl_generic{{add,remove}_group,{add,remove}_function}
  pinctrl: renesas: rzv2m: Fix NULL pointer dereference in rzv2m_dt_subnode_to_map()
  pinctrl: renesas: rzg2l: Fix NULL pointer dereference in rzg2l_dt_subnode_to_map()

9 months agoKVM: VMX: Delete ancient pr_warn() about KVM_SET_TSS_ADDR not being set
Sean Christopherson [Tue, 15 Aug 2023 17:42:15 +0000 (10:42 -0700)]
KVM: VMX: Delete ancient pr_warn() about KVM_SET_TSS_ADDR not being set

Delete KVM's printk about KVM_SET_TSS_ADDR not being called.  When the
printk was added by commit 776e58ea3d37 ("KVM: unbreak userspace that does
not sets tss address"), KVM also stuffed a "hopefully safe" value, i.e.
the message wasn't purely informational.  For reasons unknown, ostensibly
to try and help people running outdated qemu-kvm versions, the message got
left behind when KVM's stuffing was removed by commit 4918c6ca6838
("KVM: VMX: Require KVM_SET_TSS_ADDR being called prior to running a VCPU").

Today, the message is completely nonsensical, as it has been over a decade
since KVM supported userspace running a Real Mode guest, on a CPU without
unrestricted guest support, without doing KVM_SET_TSS_ADDR before KVM_RUN.
I.e. KVM's ABI has required KVM_SET_TSS_ADDR for 10+ years.

To make matters worse, the message is prone to false positives as it
triggers when simply *creating* a vCPU due to RESET putting vCPUs into
Real Mode, even when the user has no intention of ever *running* the vCPU
in a Real Mode.  E.g. KVM selftests stuff 64-bit mode and never touch Real
Mode, but trigger the message even though they run just fine without
doing KVM_SET_TSS_ADDR.  Creating "dummy" vCPUs, e.g. to probe features,
can also trigger the message.  In both scenarios, the message confuses
users and falsely implies that they've done something wrong.

Reported-by: Thorsten Glaser <t.glaser@tarent.de>
Closes: https://lkml.kernel.org/r/f1afa6c0-cde2-ab8b-ea71-bfa62a45b956%40tarent.de
Link: https://lore.kernel.org/r/20230815174215.433222-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
9 months agoKVM: x86: Update MAINTAINERS to include selftests
Sean Christopherson [Thu, 17 Aug 2023 23:41:14 +0000 (16:41 -0700)]
KVM: x86: Update MAINTAINERS to include selftests

Give KVM x86 the same treatment as all other KVM architectures, and
officially take ownership of x86 specific KVM selftests (changes have
been routed through kvm and/or kvm-x86 for quite some time).

Cc: kvm@vger.kernel.org
Link: https://lore.kernel.org/r/20230817234114.1420092-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
9 months agoKVM: selftests: Explicitly set #UD when *potentially* injecting exception
Sean Christopherson [Thu, 17 Aug 2023 23:34:30 +0000 (16:34 -0700)]
KVM: selftests: Explicitly set #UD when *potentially* injecting exception

Explicitly set the exception vector to #UD when potentially injecting an
exception in sync_regs_test's subtests that try to detect TOCTOU bugs
in KVM's handling of exceptions injected by userspace.  A side effect of
the original KVM bug was that KVM would clear the vector, but relying on
KVM to clear the vector (i.e. make it #DE) makes it less likely that the
test would ever find *new* KVM bugs, e.g. because only the first iteration
would run with a legal vector to start.

Explicitly inject #UD for race_events_inj_pen() as well, e.g. so that it
doesn't inherit the illegal 255 vector from race_events_exc(), which
currently runs first.
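
"Explicitly set #UD" amounts to something like the following sketch
(queue_ud_exception() is a hypothetical helper name; the synced-regs
fields come from the KVM uAPI and UD_VECTOR is vector 6):

  #include <linux/kvm.h>

  #define UD_VECTOR 6

  static void queue_ud_exception(struct kvm_run *run)
  {
          /* Sync events on the next KVM_RUN and start from a well-defined,
           * legal vector rather than whatever the previous subtest or KVM
           * itself left behind. */
          run->kvm_valid_regs |= KVM_SYNC_X86_EVENTS;
          run->s.regs.events.exception.nr = UD_VECTOR;
          run->s.regs.events.exception.pending = 1;
  }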

Link: https://lore.kernel.org/r/20230817233430.1416463-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
9 months agoKVM: selftests: Reload "good" vCPU state if vCPU hits shutdown
Sean Christopherson [Thu, 17 Aug 2023 23:34:29 +0000 (16:34 -0700)]
KVM: selftests: Reload "good" vCPU state if vCPU hits shutdown

Reload known good vCPU state if the vCPU triple faults in any of the
race_sync_regs() subtests, e.g. if KVM successfully injects an exception
(the vCPU isn't configured to handle exceptions).  On Intel, the VMCS
is preserved even after shutdown, but AMD's APM states that the VMCB is
undefined after a shutdown and so KVM synthesizes an INIT to sanitize
vCPU/VMCB state, e.g. to guard against running with a garbage VMCB.

The synthetic INIT results in the vCPU never exiting to userspace, as it
gets put into Real Mode at the reset vector, which is full of zeros (as is
GPA 0 and beyond), and so executes ADD for a very, very long time.
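
In selftest terms the recovery pattern looks roughly like this (a sketch
built on the existing vcpu_save_state()/vcpu_load_state() helpers;
race_one_subtest() and the loop structure are assumptions):

  static void run_race_subtests(struct kvm_vcpu *vcpu, int iterations)
  {
          /* Snapshot known-good state once, then restore it whenever a
           * subtest drives the vCPU into shutdown (triple fault), so later
           * iterations don't start from the synthesized-INIT state. */
          struct kvm_x86_state *good = vcpu_save_state(vcpu);

          for (int i = 0; i < iterations; i++) {
                  race_one_subtest(vcpu);

                  if (vcpu->run->exit_reason == KVM_EXIT_SHUTDOWN)
                          vcpu_load_state(vcpu, good);
          }

          kvm_x86_state_cleanup(good);
  }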

Fixes: 60c4063b4752 ("KVM: selftests: Extend x86's sync_regs_test to check for event vector races")
Cc: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230817233430.1416463-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
9 months agoKVM: SVM: Require nrips support for SEV guests (and beyond)
Sean Christopherson [Fri, 25 Aug 2023 01:36:19 +0000 (18:36 -0700)]
KVM: SVM: Require nrips support for SEV guests (and beyond)

Disallow SEV (and beyond) if nrips is disabled via module param, as KVM
can't read guest memory to partially emulate and skip an instruction.  All
CPUs that support SEV support NRIPS, i.e. this is purely stopping the user
from shooting themselves in the foot.

Cc: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/r/20230825013621.2845700-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
9 months agoKVM: SVM: Don't inject #UD if KVM attempts to skip SEV guest insn
Sean Christopherson [Fri, 25 Aug 2023 01:36:18 +0000 (18:36 -0700)]
KVM: SVM: Don't inject #UD if KVM attempts to skip SEV guest insn

Don't inject a #UD if KVM attempts to "emulate" to skip an instruction
for an SEV guest, and instead resume the guest and hope that it can make
forward progress.  When commit 04c40f344def ("KVM: SVM: Inject #UD on
attempted emulation for SEV guest w/o insn buffer") added the completely
arbitrary #UD behavior, there were no known scenarios where a well-behaved
guest would induce a VM-Exit that triggered emulation, i.e. it was thought
that injecting #UD would be helpful.

However, now that KVM (correctly) attempts to re-inject INT3/INTO, e.g. if
a #NPF is encountered when attempting to deliver the INT3/INTO, an SEV
guest can trigger emulation without a buffer, through no fault of its own.
Resuming the guest and retrying the INT3/INTO is architecturally wrong,
e.g. the vCPU will incorrectly re-hit code #DBs, but for SEV guests there
is literally no other option that has a chance of making forward progress.

Drop the #UD injection for all "skip" emulation, not just those related to
INT3/INTO, even though that means that the guest will likely end up in an
infinite loop instead of getting a #UD (the vCPU may also crash, e.g. if
KVM emulated everything about an instruction except for advancing RIP).
There's no evidence that suggests that an unexpected #UD is actually
better than hanging the vCPU, e.g. a soft-hung vCPU can still respond to
IRQs and NMIs to generate a backtrace.

Reported-by: Wu Zongyo <wuzongyo@mail.ustc.edu.cn>
Closes: https://lore.kernel.org/all/8eb933fd-2cf3-d7a9-32fe-2a1d82eac42a@mail.ustc.edu.cn
Fixes: 6ef88d6e36c2 ("KVM: SVM: Re-inject INT3/INTO instead of retrying the instruction")
Cc: stable@vger.kernel.org
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/r/20230825013621.2845700-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
9 months agoKVM: SVM: Skip VMSA init in sev_es_init_vmcb() if pointer is NULL
Sean Christopherson [Fri, 25 Aug 2023 02:23:57 +0000 (19:23 -0700)]
KVM: SVM: Skip VMSA init in sev_es_init_vmcb() if pointer is NULL

Skip initializing the VMSA physical address in the VMCB if the VMSA is
NULL, which occurs during intrahost migration as KVM initializes the VMCB
before copying over state from the source to the destination (including
the VMSA and its physical address).

In normal builds, __pa() is just math, so the bug isn't fatal, but with
CONFIG_DEBUG_VIRTUAL=y, the validity of the virtual address is verified
and passing in NULL will make the kernel unhappy.
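
The guard is essentially a one-liner; a sketch (field names assumed from
the SEV-ES code, not a verbatim copy of the patch):

  /* During intrahost migration the VMSA hasn't been copied over yet, so
   * don't feed a NULL pointer to __pa(); the address is filled in later
   * from the source VM's state. */
  if (svm->sev_es.vmsa)
          svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa);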

Fixes: 6defa24d3b12 ("KVM: SEV: Init target VMCBs in sev_migrate_from")
Cc: stable@vger.kernel.org
Cc: Peter Gonda <pgonda@google.com>
Reviewed-by: Peter Gonda <pgonda@google.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Link: https://lore.kernel.org/r/20230825022357.2852133-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
9 months agoKVM: SVM: Get source vCPUs from source VM for SEV-ES intrahost migration
Sean Christopherson [Fri, 25 Aug 2023 02:23:56 +0000 (19:23 -0700)]
KVM: SVM: Get source vCPUs from source VM for SEV-ES intrahost migration

Fix a goof where KVM tries to grab source vCPUs from the destination VM
when doing intrahost migration.  Grabbing the wrong vCPU not only hoses
the guest, it also crashes the host due to the VMSA pointer being left
NULL.

  BUG: unable to handle page fault for address: ffffe38687000000
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 0 P4D 0
  Oops: 0000 [#1] SMP NOPTI
  CPU: 39 PID: 17143 Comm: sev_migrate_tes Tainted: GO       6.5.0-smp--fff2e47e6c3b-next #151
  Hardware name: Google, Inc. Arcadia_IT_80/Arcadia_IT_80, BIOS 34.28.0 07/10/2023
  RIP: 0010:__free_pages+0x15/0xd0
  RSP: 0018:ffff923fcf6e3c78 EFLAGS: 00010246
  RAX: 0000000000000000 RBX: ffffe38687000000 RCX: 0000000000000100
  RDX: 0000000000000100 RSI: 0000000000000000 RDI: ffffe38687000000
  RBP: ffff923fcf6e3c88 R08: ffff923fcafb0000 R09: 0000000000000000
  R10: 0000000000000000 R11: ffffffff83619b90 R12: ffff923fa9540000
  R13: 0000000000080007 R14: ffff923f6d35d000 R15: 0000000000000000
  FS:  0000000000000000(0000) GS:ffff929d0d7c0000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: ffffe38687000000 CR3: 0000005224c34005 CR4: 0000000000770ee0
  PKRU: 55555554
  Call Trace:
   <TASK>
   sev_free_vcpu+0xcb/0x110 [kvm_amd]
   svm_vcpu_free+0x75/0xf0 [kvm_amd]
   kvm_arch_vcpu_destroy+0x36/0x140 [kvm]
   kvm_destroy_vcpus+0x67/0x100 [kvm]
   kvm_arch_destroy_vm+0x161/0x1d0 [kvm]
   kvm_put_kvm+0x276/0x560 [kvm]
   kvm_vm_release+0x25/0x30 [kvm]
   __fput+0x106/0x280
   ____fput+0x12/0x20
   task_work_run+0x86/0xb0
   do_exit+0x2e3/0x9c0
   do_group_exit+0xb1/0xc0
   __x64_sys_exit_group+0x1b/0x20
   do_syscall_64+0x41/0x90
   entry_SYSCALL_64_after_hwframe+0x63/0xcd
   </TASK>
  CR2: ffffe38687000000

Fixes: 6defa24d3b12 ("KVM: SEV: Init target VMCBs in sev_migrate_from")
Cc: stable@vger.kernel.org
Cc: Peter Gonda <pgonda@google.com>
Reviewed-by: Peter Gonda <pgonda@google.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Link: https://lore.kernel.org/r/20230825022357.2852133-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
9 months agoMerge tag 'sound-6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Linus Torvalds [Fri, 25 Aug 2023 15:48:14 +0000 (08:48 -0700)]
Merge tag 'sound-6.5' of git://git./linux/kernel/git/tiwai/sound

Pull sound fixes from Takashi Iwai:
 "Hopefully the last bits for 6.5. It's slightly higher LOCs than
  wished, but it doesn't look scary.

  The biggest change is MAINTAINERS update for TI; it's good to have the
  update before the final release, so that people can contact the
  right persons for bug reports (which shouldn't happen of course!)

  The rest are all device-specific fixes and quirks, most for various
  ASoC platforms"

* tag 'sound-6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
  ASoC: amd: yc: Fix a non-functional mic on Lenovo 82SJ
  ALSA: ymfpci: Fix the missing snd_card_free() call at probe error
  ASoC: cs35l41: Correct amp_gain_tlv values
  ASoC: amd: yc: Add VivoBook Pro 15 to quirks list for acp6x
  ASoC: tas2781: fixed register access error when switching to other chips
  ASoC: cs35l56: Add an ACPI match table
  ASoC: cs35l56: Read firmware uuid from a device property instead of _SUB
  ASoC: SOF: ipc4-pcm: fix possible null pointer deference
  MAINTAINERS: Add entries for TEXAS INSTRUMENTS ASoC DRIVERS

9 months agoLoongArch: Put the body of play_dead() into arch_cpu_idle_dead()
Tiezhu Yang [Fri, 25 Aug 2023 15:40:38 +0000 (23:40 +0800)]
LoongArch: Put the body of play_dead() into arch_cpu_idle_dead()

The initial aim is to silence the following objtool warning:

arch/loongarch/kernel/process.o: warning: objtool: arch_cpu_idle_dead() falls through to next function start_thread()

According to tools/objtool/Documentation/objtool.txt, this is because
the last instruction of arch_cpu_idle_dead() is a call to a noreturn
function play_dead(). One simple way to silence the warning is to add
the noreturn function play_dead() to objtool's hard-coded
global_noreturns array, i.e. to put "NORETURN(play_dead)" into
tools/objtool/noreturns.h, and that works well.

But play_dead() is defined only once and is only called by
arch_cpu_idle_dead(), so an alternative is to put the body of
play_dead() into its caller arch_cpu_idle_dead() and remove the
noreturn function play_dead() altogether, which also avoids the
overhead of the function call.

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoLoongArch: Add identifier names to arguments of die() declaration
Tiezhu Yang [Fri, 25 Aug 2023 15:40:26 +0000 (23:40 +0800)]
LoongArch: Add identifier names to arguments of die() declaration

Add identifier names to arguments of die() declaration in ptrace.h
to fix the following checkpatch warnings:

  WARNING: function definition argument 'const char *' should also have an identifier name
  WARNING: function definition argument 'struct pt_regs *' should also have an identifier name
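
I.e. the change amounts to something like (illustrative):

  /* before */
  extern void die(const char *, struct pt_regs *);
  /* after */
  extern void die(const char *str, struct pt_regs *regs);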

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoLoongArch: Return earlier in die() if notify_die() returns NOTIFY_STOP
Tiezhu Yang [Fri, 25 Aug 2023 15:40:26 +0000 (23:40 +0800)]
LoongArch: Return earlier in die() if notify_die() returns NOTIFY_STOP

After the call to oops_exit(), it should not panic or execute
the crash kernel if the oops is to be suppressed.

Suggested-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoLoongArch: Do not kill the task in die() if notify_die() returns NOTIFY_STOP
Tiezhu Yang [Fri, 25 Aug 2023 15:40:26 +0000 (23:40 +0800)]
LoongArch: Do not kill the task in die() if notify_die() returns NOTIFY_STOP

If notify_die() returns NOTIFY_STOP, honor the return value from the
handler chain invocation in die() and return without killing the task
as, through a debugger, the fault may have been fixed. It makes sense
even if ignoring the event will make the system unstable: by allowing
access through a debugger it has been compromised already anyway. It
makes our port consistent with x86, arm64, riscv and csky.
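
The resulting flow is roughly (a sketch, not the literal patch; the
notify_die() trap/signal arguments are placeholders):

  void die(const char *str, struct pt_regs *regs)
  {
          /* ... oops reporting ... */

          /* Honor the handler chain: a debugger may have fixed up the
           * fault, so return instead of killing the task. */
          if (notify_die(DIE_OOPS, str, regs, 0, 0, SIGSEGV) == NOTIFY_STOP)
                  return;

          /* otherwise fall through to killing the task / panicking */
  }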

Commit 20c0d2d44029 ("[PATCH] i386: pass proper trap numbers to die
chain handlers") may be the earliest of similar changes.

Link: https://lore.kernel.org/r/43DDF02E.76F0.0078.0@novell.com/
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoLoongArch: Remove <asm/export.h>
Masahiro Yamada [Fri, 25 Aug 2023 15:40:26 +0000 (23:40 +0800)]
LoongArch: Remove <asm/export.h>

All *.S files under arch/loongarch/ have been converted to include
<linux/export.h> instead of <asm/export.h>.

Remove <asm/export.h>.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoLoongArch: Replace #include <asm/export.h> with #include <linux/export.h>
Masahiro Yamada [Fri, 25 Aug 2023 15:40:26 +0000 (23:40 +0800)]
LoongArch: Replace #include <asm/export.h> with #include <linux/export.h>

Commit ddb5cdbafaaad ("kbuild: generate KSYMTAB entries by modpost")
deprecated <asm/export.h>, which is now a wrapper of <linux/export.h>.

Replace #include <asm/export.h> with #include <linux/export.h>.

After all the <asm/export.h> lines are converted, <asm/export.h> and
<asm-generic/export.h> will be removed.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoLoongArch: Remove unneeded #include <asm/export.h>
Masahiro Yamada [Fri, 25 Aug 2023 15:40:26 +0000 (23:40 +0800)]
LoongArch: Remove unneeded #include <asm/export.h>

There is no EXPORT_SYMBOL() line there, hence #include <asm/export.h>
is unneeded.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoLoongArch: Replace -ffreestanding with finer-grained -fno-builtin's
WANG Xuerui [Fri, 25 Aug 2023 15:40:26 +0000 (23:40 +0800)]
LoongArch: Replace -ffreestanding with finer-grained -fno-builtin's

As explained by Nick in the original issue: the kernel usually does a
good job of providing library helpers that have similar semantics as
their ordinary userspace libc equivalents, but -ffreestanding disables
such libcall optimization and other related features in the compiler,
which can lead to unexpected things such as CONFIG_FORTIFY_SOURCE not
working (!).

However, due to the desire for better control over unaligned accesses
with respect to CONFIG_ARCH_STRICT_ALIGN, and also for avoiding the
GCC bug https://gcc.gnu.org/PR109465, we do want to still disable
optimizations for the memory libcalls (memcpy, memmove and memset for
now). Use finer-grained -fno-builtin-* toggles to achieve this without
losing source fortification and other libcall optimizations.

Closes: https://github.com/ClangBuiltLinux/linux/issues/1897
Reported-by: Nathan Chancellor <nathan@kernel.org>
Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoLoongArch: Remove redundant "source drivers/firmware/Kconfig"
Xi Ruoyao [Fri, 25 Aug 2023 15:40:26 +0000 (23:40 +0800)]
LoongArch: Remove redundant "source drivers/firmware/Kconfig"

In drivers/Kconfig, drivers/firmware/Kconfig is sourced for all ports,
so there is no need to source it in the port-specific Kconfig file.
Sourcing it here also caused the "Firmware Drivers" menu to appear
twice: once in the "Device Drivers" menu and once in the toplevel menu.
This is really puzzling, so remove it.

Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
9 months agoMerge tag 'drm-fixes-2023-08-25' of git://anongit.freedesktop.org/drm/drm
Linus Torvalds [Fri, 25 Aug 2023 15:38:40 +0000 (08:38 -0700)]
Merge tag 'drm-fixes-2023-08-25' of git://anongit.freedesktop.org/drm/drm

Pull drm fixes from Dave Airlie:
 "A bit bigger than I'd care for, but it's mostly a single vmwgfx fix
  and a fix for i915 hotplug probing. Otherwise misc i915, bridge,
  panfrost and dma-buf fixes.

  core:
   - add a HPD poll helper

  i915:
   - fix regression in i915 polling
   - fix docs build warning
   - fix DG2 idle power consumption

  bridge:
   - samsung-dsim: init fix

  panfrost:
   - fix speed binning issue

  dma-buf:
   - fix recursive lock in fence signal

  vmwgfx:
   - fix shader stage validation
   - fix NULL ptr derefs in gem put"

* tag 'drm-fixes-2023-08-25' of git://anongit.freedesktop.org/drm/drm:
  drm/i915: Fix HPD polling, reenabling the output poll work as needed
  drm: Add an HPD poll helper to reschedule the poll work
  drm/vmwgfx: Fix possible invalid drm gem put calls
  drm/vmwgfx: Fix shader stage validation
  dma-buf/sw_sync: Avoid recursive lock during fence signal
  drm/i915: fix Sphinx indentation warning
  drm/i915/dgfx: Enable d3cold at s2idle
  drm/display/dp: Fix the DP DSC Receiver cap size
  drm/panfrost: Skip speed binning on EOPNOTSUPP
  drm: bridge: samsung-dsim: Fix init during host transfer

9 months agoMerge tag 'asoc-fix-v6.5-rc7-2' of https://git.kernel.org/pub/scm/linux/kernel/git...
Takashi Iwai [Fri, 25 Aug 2023 07:43:49 +0000 (09:43 +0200)]
Merge tag 'asoc-fix-v6.5-rc7-2' of https://git./linux/kernel/git/broonie/sound into for-linus

ASoC: Quirk for v6.5

One additional fix for v6.5, an additional quirk.  As with the other
fixes this could wait for the merge window.