KVM: x86/mmu: Expand on the comment in kvm_vcpu_ad_need_write_protect()
authorSean Christopherson <seanjc@google.com>
Sat, 13 Feb 2021 00:50:08 +0000 (16:50 -0800)
committerPaolo Bonzini <pbonzini@redhat.com>
Fri, 19 Feb 2021 08:08:33 +0000 (03:08 -0500)
Expand the comment about the need to use write-protection for nested
EPT when PML is enabled to clarify that the tagging is a nop when PML
is _not_ enabled.  Without the clarification, omitting the PML check
looks wrong at first^Wfifth glance.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/mmu/mmu_internal.h

index 0b55aa561ec8fc286a9cb31fe744e21f2e47bedf..72b0928f2b2d96c54cb2d75bbff84494c150b627 100644 (file)
@@ -84,7 +84,10 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
         * When using the EPT page-modification log, the GPAs in the log
         * would come from L2 rather than L1.  Therefore, we need to rely
         * on write protection to record dirty pages.  This also bypasses
-        * PML, since writes now result in a vmexit.
+        * PML, since writes now result in a vmexit.  Note, this helper will
+        * tag SPTEs as needing write-protection even if PML is disabled or
+        * unsupported, but that's ok because the tag is consumed if and only
+        * if PML is enabled.  Omit the PML check to save a few uops.
         */
        return vcpu->arch.mmu == &vcpu->arch.guest_mmu;
 }
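
A minimal sketch of the set/consume pattern the comment describes, using
hypothetical names (SPTE_TAG_WRPROT_ONLY, make_spte_tagged(),
need_write_protect_for_dirty_log()) rather than the actual KVM code: the
tag is applied unconditionally whenever the nested (guest) MMU is active,
and the only consumer acts on it when PML-based dirty logging is in use,
so skipping a PML check on the tagging side is harmless.

	/* Illustrative sketch only; hypothetical names, not KVM internals. */
	#include <stdbool.h>
	#include <stdint.h>

	#define SPTE_TAG_WRPROT_ONLY	(1ull << 60)	/* hypothetical tag bit */

	static bool nested_ept_active;	/* stands in for mmu == &guest_mmu */
	static bool pml_enabled;	/* stands in for PML being in use */

	/* Tagging side: mirrors the "no PML check" choice in the helper. */
	static uint64_t make_spte_tagged(uint64_t spte)
	{
		if (nested_ept_active)
			spte |= SPTE_TAG_WRPROT_ONLY;	/* harmless if PML is off */
		return spte;
	}

	/* Consuming side: the tag only matters on the PML dirty-log path. */
	static bool need_write_protect_for_dirty_log(uint64_t spte)
	{
		if (!pml_enabled)
			return true;	/* no PML: dirty logging write-protects anyway */
		return spte & SPTE_TAG_WRPROT_ONLY;
	}

In this model, dropping a pml_enabled check from make_spte_tagged() can
never change behavior, it only avoids a branch on the tagging path, which
is the "save a few uops" rationale in the added comment.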