samples/bpf: Reduce syscall overhead in map_perf_test.
author		Alexei Starovoitov <ast@kernel.org>
		Fri, 2 Sep 2022 21:10:46 +0000 (14:10 -0700)
committer	Daniel Borkmann <daniel@iogearbox.net>
		Mon, 5 Sep 2022 13:33:05 +0000 (15:33 +0200)
commit		89dc8d0c38e0df27e580876a1681a55c686a51ff
tree		bc6607523f6ca1f49178bc1a7d5af506a53b1047
parent		37521bffdd2d1efcb1dbdfd3ee89584c8943421c
samples/bpf: Reduce syscall overhead in map_perf_test.

Make map_perf_test for preallocated and non-preallocated hash maps
spend more time inside the bpf program, so that performance analysis
focuses on the speed of the update/lookup/delete operations performed
by the bpf program rather than on syscall overhead.
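
The idea, shown as a minimal sketch below, is to run the update/lookup/delete
loop inside the bpf program itself, so that a single triggering syscall
amortizes its entry/exit cost over many map operations. The map name, loop
bound and attach point here are illustrative only, not the actual
samples/bpf code:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* Illustrative hash map; the real map_perf_test_kern.c defines several
   * preallocated and non-preallocated variants.
   */
  struct {
  	__uint(type, BPF_MAP_TYPE_HASH);
  	__uint(max_entries, 1000);
  	__type(key, __u32);
  	__type(value, long);
  } demo_hash SEC(".maps");

  SEC("kprobe/demo_trigger")	/* attach point is hypothetical */
  int demo_inner_loop(void *ctx)
  {
  	long init_val = 1;
  	long *value;
  	__u32 key;

  	/* Do many map operations per program invocation instead of
  	 * paying the syscall entry/exit cost for each one.
  	 */
  	for (key = 0; key < 64; key++) {
  		bpf_map_update_elem(&demo_hash, &key, &init_val, BPF_ANY);
  		value = bpf_map_lookup_elem(&demo_hash, &key);
  		if (value)
  			bpf_map_delete_elem(&demo_hash, &key);
  	}
  	return 0;
  }

  char _license[] SEC("license") = "GPL";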

It makes 'perf report' of bpf_mem_alloc look like:
 11.76%  map_perf_test    [k] _raw_spin_lock_irqsave
 11.26%  map_perf_test    [k] htab_map_update_elem
  9.70%  map_perf_test    [k] _raw_spin_lock
  9.47%  map_perf_test    [k] htab_map_delete_elem
  8.57%  map_perf_test    [k] memcpy_erms
  5.58%  map_perf_test    [k] alloc_htab_elem
  4.09%  map_perf_test    [k] __htab_map_lookup_elem
  3.44%  map_perf_test    [k] syscall_exit_to_user_mode
  3.13%  map_perf_test    [k] lookup_nulls_elem_raw
  3.05%  map_perf_test    [k] migrate_enable
  3.04%  map_perf_test    [k] memcmp
  2.67%  map_perf_test    [k] unit_free
  2.39%  map_perf_test    [k] lookup_elem_raw

Also reduce the default iteration count so that 'map_perf_test' finishes
quickly even on debug kernels.
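
A sketch of the user-side knob, with a hypothetical name and default value
(the actual count lives in map_perf_test_user.c): keep the default small,
but allow the command line to raise it again for longer runs.

  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical default; small enough that the benchmark also finishes
   * quickly on debug kernels.
   */
  #define DEFAULT_MAX_CNT 10000

  int main(int argc, char **argv)
  {
  	unsigned long max_cnt = DEFAULT_MAX_CNT;

  	/* optional override from the command line for longer runs */
  	if (argc > 1)
  		max_cnt = strtoul(argv[1], NULL, 10);

  	printf("running %lu iterations per map test\n", max_cnt);
  	return 0;
  }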

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-5-alexei.starovoitov@gmail.com
samples/bpf/map_perf_test_kern.c
samples/bpf/map_perf_test_user.c