Memcheck/Addrcheck may issue a warning just before this happens, but they
might not if the jump happens to land in addressable memory.
------------------------------------------------------------------
-
-3.4. My program dies like this:
-
- error: /lib/librt.so.1: symbol __pthread_clock_settime, version
- GLIBC_PRIVATE not defined in file libpthread.so.0 with link time
- reference
-
-This is a total swamp. Nevertheless there is a way out. It's a problem
-which is not easy to fix. Really the problem is that /lib/librt.so.1
-refers to some symbols __pthread_clock_settime and
-__pthread_clock_gettime in /lib/libpthread.so which are not intended to
-be exported, ie they are private.
-
-Best solution is to ensure your program does not use /lib/librt.so.1.
-
-However .. since you're probably not using it directly, or even
-knowingly, that's hard to do. You might instead be able to fix it by
-playing around with coregrind/vg_libpthread.vs. Things to try:
-
-Remove this
-
- GLIBC_PRIVATE {
- __pthread_clock_gettime;
- __pthread_clock_settime;
- };
-
-or maybe remove this
-
- GLIBC_2.2.3 {
- __pthread_clock_gettime;
- __pthread_clock_settime;
- } GLIBC_2.2;
-
-or maybe add this
-
- GLIBC_2.2.4 {
- __pthread_clock_gettime;
- __pthread_clock_settime;
- } GLIBC_2.2;
-
- GLIBC_2.2.5 {
- __pthread_clock_gettime;
- __pthread_clock_settime;
- } GLIBC_2.2;
-
-or some combination of the above. After each change you need to delete
-coregrind/libpthread.so and do make && make install.
-
-I just don't know if any of the above will work. If you can find a
-solution which works, I would be interested to hear it.
-
-To which someone replied:
-
- I deleted this:
-
- GLIBC_2.2.3 {
- __pthread_clock_gettime;
- __pthread_clock_settime;
- } GLIBC_2.2;
-
- and it worked.
-
-----------------------------------------------------------------
4. Valgrind behaves unexpectedly
arrays. We'd like to, but it's just not possible to do in a reasonable
way that fits with how Memcheck works. Sorry.
------------------------------------------------------------------
-
-5.3. My program dies with a segmentation fault, but Memcheck doesn't give
- any error messages before it, or none that look related.
-
-One possibility is that your program accesses to memory with
-inappropriate permissions set, such as writing to read-only memory.
-Maybe your program is writing to a static string like this:
-
- char* s = "hello";
- s[0] = 'j';
-
-or something similar. Writing to read-only memory can also apparently
-make LinuxThreads behave strangely.
-
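The failure mode this (now removed) FAQ entry describes can be shown and avoided in a few lines of C. This is an illustrative sketch, not Valgrind code; `fix_first_char` and `safe` are made-up names, and the commented-out pointer form is the one that typically segfaults:

```c
#include <assert.h>
#include <string.h>

/* A string literal usually lives in a read-only segment, so writing
   through a char* that points at one triggers the SIGSEGV the FAQ
   entry describes.  A char array is a writable copy of the literal. */
static char safe[] = "hello";        /* writable: the literal is copied */

static const char *fix_first_char(void)
{
    /* char *bad = "hello"; bad[0] = 'j';   <- would likely segfault */
    safe[0] = 'j';                   /* fine: array storage is writable */
    return safe;
}
```

Declaring the buffer as an array rather than a pointer-to-literal is the usual fix.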
-----------------------------------------------------------------
6. Miscellaneous
## include must be first for tool.h
## addrcheck must come after memcheck, for mac_*.o
-SUBDIRS = include coregrind . docs tests auxprogs \
- memcheck \
- addrcheck \
- cachegrind \
- corecheck \
- helgrind \
- massif \
- lackey \
- none
+#TOOLS = memcheck \
+# addrcheck \
+# cachegrind \
+# corecheck \
+# massif \
+# lackey \
+# none
+TOOLS = none
+
+SUBDIRS = include coregrind . docs tests auxprogs $(TOOLS)
+DIST_SUBDIRS = $(SUBDIRS) helgrind
SUPP_FILES = \
glibc-2.1.supp glibc-2.2.supp glibc-2.3.supp \
## Prepend @PERL@ because tests/vg_regtest isn't executable
regtest: check
- @PERL@ tests/vg_regtest --all
+ @PERL@ tests/vg_regtest $(TOOLS)
EXTRA_DIST = \
FAQ.txt \
README_DEVELOPERS \
README_PACKAGERS \
README_MISSING_SYSCALL_OR_IOCTL TODO \
- valgrind.spec.in valgrind.pc.in \
+ valgrind.spec valgrind.spec.in valgrind.pc.in \
Makefile.all.am Makefile.tool.am Makefile.core-AM_CPPFLAGS.am \
Makefile.tool-inplace.am
install-exec-hook:
$(mkinstalldirs) $(DESTDIR)$(valdir)
- rm -f $(DESTDIR)$(valdir)/libpthread.so.0
- $(LN_S) libpthread.so $(DESTDIR)$(valdir)/libpthread.so.0
all-local:
mkdir -p $(inplacedir)
add_includes = -I$(top_builddir)/coregrind -I$(top_srcdir)/coregrind \
+ -I$(top_srcdir) \
-I$(top_srcdir)/coregrind/$(VG_ARCH) \
+ -I$(top_builddir)/coregrind/$(VG_ARCH) \
-I$(top_srcdir)/coregrind/$(VG_OS) \
-I$(top_srcdir)/coregrind/$(VG_PLATFORM) \
-I$(top_builddir)/include -I$(top_srcdir)/include \
-I@VEX_DIR@/pub
AM_CPPFLAGS = $(add_includes)
-AM_CCASFLAGS = $(add_includes) @ARCH_CORE_AM_CCASFLAGS@
+AM_CCASFLAGS = $(add_includes) @ARCH_CORE_AM_CCASFLAGS@ -Wa,-gstabs
-I@VEX_DIR@/pub
AM_CPPFLAGS = $(add_includes)
-AM_CFLAGS = $(WERROR) -Winline -Wall -Wshadow -O -g @ARCH_TOOL_AM_CFLAGS@
+AM_CFLAGS = $(WERROR) -Wmissing-prototypes -Winline -Wall -Wshadow -O -g @ARCH_TOOL_AM_CFLAGS@
AM_CCASFLAGS = $(add_includes)
Sockets: move the socketcall marshaller from vg_syscalls.c into
x86-linux/syscalls.c; it is in the wrong place.
-Tra La La
from valgrind.
--- Try and ensure that the /usr/include/asm/unistd.h file on the
- build machine contains an entry for all the system calls that
- the kernels on the target machines can actually support. On my
- Red Hat 7.2 (kernel 2.4.9) box the highest-numbered entry is
- #define __NR_fcntl64 221
- but I have heard of 2.2 boxes where it stops at 179 or so.
-
- Reason for this is that at build time, support for syscalls
- is compiled in -- or not -- depending on which of these __NR_*
- symbols is defined. Problems arise when /usr/include/asm/unistd.h
- fails to give an entry for a system call which is actually
- available in the target kernel. In that case, valgrind will
- abort if asked to handle such a syscall. This is despite the
- fact that (usually) valgrind's sources actually contain the
- code to deal with the syscall.
-
- Several people have reported having this problem. So, please
- be aware of it. If it's useful, the syscall wrappers are
- all done in vg_syscall_mem.c; you might want to have a little
- look in there.
-
-
-- Please test the final installation works by running it on
something huge. I suggest checking that it can start and
exit successfully both Mozilla-1.0 and OpenOffice.org 1.0.
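The syscall gating the removed note describes (a wrapper is compiled in only when the build box's /usr/include/asm/unistd.h defines the corresponding __NR_* symbol) can be sketched like this; the numbers and `handler_for` are illustrative stand-ins, not Valgrind's real dispatch table:

```c
#include <string.h>

/* Pretend fragment of the build machine's <asm/unistd.h>.  If a
   __NR_* symbol is missing here, the matching case simply vanishes
   from the switch, and at run time the syscall falls through to the
   "unhandled" path, which is why Valgrind aborts on such calls. */
#define __NR_open    5
#define __NR_fcntl64 221
/* __NR_some_newer_call deliberately left undefined */

static const char *handler_for(int nr)
{
    switch (nr) {
#ifdef __NR_open
    case __NR_open:    return "open wrapper";
#endif
#ifdef __NR_fcntl64
    case __NR_fcntl64: return "fcntl64 wrapper";
#endif
    default:           return "unhandled: abort";
    }
}
```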
Desirable
~~~~~~~~~
-Stack: make return address into NoAccess ?
+Stack: make return address into NoAccess ?
Future
typedef
struct {
- UChar abits[8192];
+ UChar abits[SECONDARY_SIZE / 8];
}
AcSecMap;
-static AcSecMap* primary_map[ /*65536*/ 262144 ];
-static AcSecMap distinguished_secondary_map;
+static AcSecMap* primary_map[ /*PRIMARY_SIZE*/ PRIMARY_SIZE*4 ];
+static const AcSecMap distinguished_secondary_maps[2] = {
+ [ VGM_BIT_INVALID ] = { { [0 ... (SECONDARY_SIZE/8) - 1] = VGM_BYTE_INVALID } },
+ [ VGM_BIT_VALID ] = { { [0 ... (SECONDARY_SIZE/8) - 1] = VGM_BYTE_VALID } },
+};
+#define N_SECONDARY_MAPS (sizeof(distinguished_secondary_maps)/sizeof(*distinguished_secondary_maps))
+
+#define DSM_IDX(a) ((a) & 1)
+
+#define DSM(a) ((AcSecMap *)&distinguished_secondary_maps[DSM_IDX(a)])
+
+#define DSM_NOTADDR DSM(VGM_BIT_INVALID)
+#define DSM_ADDR DSM(VGM_BIT_VALID)
static void init_shadow_memory ( void )
{
- Int i;
+ Int i, a;
- for (i = 0; i < 8192; i++) /* Invalid address */
- distinguished_secondary_map.abits[i] = VGM_BYTE_INVALID;
+ /* check construction of the distinguished secondaries */
+ sk_assert(VGM_BIT_INVALID == 1);
+ sk_assert(VGM_BIT_VALID == 0);
+
+ for(a = 0; a <= 1; a++)
+ sk_assert(distinguished_secondary_maps[DSM_IDX(a)].abits[0] == BIT_EXPAND(a));
/* These entries gradually get overwritten as the used address
space expands. */
- for (i = 0; i < 65536; i++)
- primary_map[i] = &distinguished_secondary_map;
+ for (i = 0; i < PRIMARY_SIZE; i++)
+ primary_map[i] = DSM_NOTADDR;
/* These ones should never change; it's a bug in Valgrind if they do. */
- for (i = 65536; i < 262144; i++)
- primary_map[i] = &distinguished_secondary_map;
+ for (i = PRIMARY_SIZE; i < PRIMARY_SIZE*4; i++)
+ primary_map[i] = DSM_NOTADDR;
}
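A standalone sketch of the scheme this hunk introduces: a primary array of secondary-map pointers that all start out aimed at one shared "distinguished" secondary, with a private copy allocated only when a region's state diverges. The sizes and names below are assumptions for a 32-bit address space, not the file's actual PRIMARY_SIZE/SECONDARY_SIZE definitions:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define SECONDARY_BITS 16
#define SECONDARY_SIZE (1 << SECONDARY_BITS)         /* bytes per secondary */
#define PRIMARY_SIZE   (1 << (32 - SECONDARY_BITS))  /* 65536 entries */
#define PM_IDX(a)      ((a) >> SECONDARY_BITS)

typedef struct { unsigned char abits[SECONDARY_SIZE / 8]; } SecMap;

static SecMap  dsm_noaccess;              /* shared: all bits 1 = unaddressable */
static SecMap *primary[PRIMARY_SIZE];

static void init(void)
{
    memset(dsm_noaccess.abits, 0xFF, sizeof dsm_noaccess.abits);
    for (int i = 0; i < PRIMARY_SIZE; i++)
        primary[i] = &dsm_noaccess;       /* everything starts unaddressable */
}

/* Copy-on-write: allocate a real secondary only when a range needs
   state that differs from the distinguished map it currently shares. */
static SecMap *writable_secondary(unsigned a)
{
    if (primary[PM_IDX(a)] == &dsm_noaccess) {
        SecMap *sm = malloc(sizeof *sm);
        memcpy(sm, &dsm_noaccess, sizeof *sm);  /* clone the prototype */
        primary[PM_IDX(a)] = sm;
    }
    return primary[PM_IDX(a)];
}
```

This is also why `alloc_secondary_map` gains a `prototype` argument above: the fresh map is memcpy'd from whichever distinguished map the primary entry pointed at.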
/*------------------------------------------------------------*/
/* Allocate and initialise a secondary map. */
static AcSecMap* alloc_secondary_map ( __attribute__ ((unused))
- Char* caller )
+ Char* caller,
+ const AcSecMap *prototype)
{
AcSecMap* map;
- UInt i;
PROF_EVENT(10);
- /* Mark all bytes as invalid access and invalid value. */
map = (AcSecMap *)VG_(shadow_alloc)(sizeof(AcSecMap));
- for (i = 0; i < 8192; i++)
- map->abits[i] = VGM_BYTE_INVALID; /* Invalid address */
+ VG_(memcpy)(map, prototype, sizeof(*map));
/* VG_(printf)("ALLOC_2MAP(%s)\n", caller ); */
return map;
static __inline__ UChar get_abit ( Addr a )
{
- AcSecMap* sm = primary_map[a >> 16];
- UInt sm_off = a & 0xFFFF;
+ AcSecMap* sm = primary_map[PM_IDX(a)];
+ UInt sm_off = SM_OFF(a);
PROF_EVENT(20);
# if 0
if (IS_DISTINGUISHED_SM(sm))
UInt sm_off;
PROF_EVENT(22);
ENSURE_MAPPABLE(a, "set_abit");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
if (abit)
BITARR_SET(sm->abits, sm_off);
else
# ifdef VG_DEBUG_MEMORY
tl_assert(IS_4_ALIGNED(a));
# endif
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
abits8 = sm->abits[sm_off >> 3];
abits8 >>= (a & 4 /* 100b */); /* a & 4 is either 0 or 4 */
abits8 &= 0x0F;
/* In order that we can charge through the address space at 8
bytes/main-loop iteration, make up some perms. */
- abyte8 = (example_a_bit << 7)
- | (example_a_bit << 6)
- | (example_a_bit << 5)
- | (example_a_bit << 4)
- | (example_a_bit << 3)
- | (example_a_bit << 2)
- | (example_a_bit << 1)
- | (example_a_bit << 0);
+ abyte8 = BIT_EXPAND(example_a_bit);
# ifdef VG_DEBUG_MEMORY
/* Do it ... */
}
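BIT_EXPAND is not defined in this hunk; presumably it smears a 0-or-1 a-bit into every bit position of a byte, which is exactly what the eight explicit shifts it replaces computed. A guessed equivalent, checked against the old shift chain:

```c
/* Assumed definition: replicate a 0/1 bit across all eight bit positions. */
#define BIT_EXPAND(b) ((unsigned char)((b) ? 0xFF : 0x00))

/* The shift chain this macro replaces, kept for comparison. */
static unsigned char old_shift_form(unsigned b)
{
    return (unsigned char)((b << 7) | (b << 6) | (b << 5) | (b << 4) |
                           (b << 3) | (b << 2) | (b << 1) | (b << 0));
}
```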
tl_assert((a % 8) == 0 && len > 0);
- /* Once aligned, go fast. */
- while (True) {
+ /* Once aligned, go fast up to primary boundary. */
+ for (; (a & SECONDARY_MASK) && len >= 8; a += 8, len -= 8) {
PROF_EVENT(32);
- if (len < 8) break;
+
+ /* If the primary is already pointing to a distinguished map
+ with the same properties as we're trying to set, then leave
+ it that way. */
+ if (primary_map[PM_IDX(a)] == DSM(example_a_bit))
+ continue;
ENSURE_MAPPABLE(a, "set_address_range_perms(fast)");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
sm->abits[sm_off >> 3] = abyte8;
- a += 8;
- len -= 8;
}
- if (len == 0) {
- VGP_POPCC(VgpSetMem);
- return;
+ /* Now set whole secondary maps to the right distinguished value.
+
+ Note that if the primary already points to a non-distinguished
+ secondary, then don't replace the reference. That would just
+ leak memory.
+ */
+ for(; len >= SECONDARY_SIZE; a += SECONDARY_SIZE, len -= SECONDARY_SIZE) {
+ sm = primary_map[PM_IDX(a)];
+
+ if (IS_DISTINGUISHED_SM(sm))
+ primary_map[PM_IDX(a)] = DSM(example_a_bit);
+ else
+ VG_(memset)(sm->abits, abyte8, sizeof(sm->abits));
+ }
+
+ /* Now finish the remainder. */
+ for (; len >= 8; a += 8, len -= 8) {
+ PROF_EVENT(32);
+
+ /* If the primary is already pointing to a distinguished map
+ with the same properties as we're trying to set, then leave
+ it that way. */
+ if (primary_map[PM_IDX(a)] == DSM(example_a_bit))
+ continue;
+ ENSURE_MAPPABLE(a, "set_address_range_perms(fast)");
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
+ sm->abits[sm_off >> 3] = abyte8;
}
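The three loops above split a range into a head (8-byte steps up to the next secondary boundary), a run of whole secondaries, and a tail. A minimal model of just that splitting arithmetic, assuming a 64 KB SECONDARY_SIZE; `split_range` is a made-up helper for illustration:

```c
#define SECONDARY_SIZE 0x10000u
#define SECONDARY_MASK (SECONDARY_SIZE - 1)

/* Count how [a, a+len) is carved up by the loop structure above. */
static void split_range(unsigned a, unsigned len,
                        unsigned *head8, unsigned *whole, unsigned *tail8)
{
    *head8 = *whole = *tail8 = 0;
    /* head: 8 bytes at a time until aligned to a secondary boundary */
    for (; (a & SECONDARY_MASK) && len >= 8; a += 8, len -= 8) (*head8)++;
    /* middle: whole secondaries, handled by swapping the primary pointer */
    for (; len >= SECONDARY_SIZE; a += SECONDARY_SIZE, len -= SECONDARY_SIZE)
        (*whole)++;
    /* tail: remaining 8-byte chunks */
    for (; len >= 8; a += 8, len -= 8) (*tail8)++;
}
```

The payoff is the middle loop: a whole 64 KB secondary can be "set" by pointing the primary entry at a distinguished map instead of writing 8 KB of a-bits.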
- tl_assert((a % 8) == 0 && len > 0 && len < 8);
+
/* Finish the upper fragment. */
while (True) {
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_word_noaccess");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
mask = 0x0F;
mask <<= (a & 4 /* 100b */); /* a & 4 is either 0 or 4 */
/* mask now contains 1s where we wish to make address bits invalid (1s). */
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_word_accessible");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
mask = 0x0F;
mask <<= (a & 4 /* 100b */); /* a & 4 is either 0 or 4 */
/* mask now contains 1s where we wish to make address bits
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_doubleword_accessible");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
sm->abits[sm_off >> 3] = VGM_BYTE_VALID;
VGP_POPCC(VgpESPAdj);
}
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_doubleword_noaccess");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
sm->abits[sm_off >> 3] = VGM_BYTE_INVALID;
VGP_POPCC(VgpESPAdj);
}
}
static
-void ac_set_perms (Addr a, SizeT len, Bool rr, Bool ww, Bool xx)
+void ac_new_mem_mmap (Addr a, SizeT len, Bool rr, Bool ww, Bool xx)
{
DEBUG("ac_set_perms(%p, %u, rr=%u ww=%u, xx=%u)\n",
a, len, rr, ww, xx);
- if (rr || ww || xx) {
- ac_make_accessible(a, len);
- } else {
- ac_make_noaccess(a, len);
- }
+ ac_make_accessible(a, len);
}
static
# else
UInt sec_no = rotateRight16(a) & 0x3FFFF;
AcSecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
UChar abits = sm->abits[a_off];
abits >>= (a & 4);
abits &= 15;
# else
UInt sec_no = rotateRight16(a) & 0x1FFFF;
AcSecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
PROF_EVENT(67);
if (sm->abits[a_off] == VGM_BYTE_VALID) {
/* Handle common case quickly. */
# else
UInt sec_no = shiftRight16(a);
AcSecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
PROF_EVENT(68);
if (sm->abits[a_off] == VGM_BYTE_VALID) {
/* Handle common case quickly. */
if (!MAC_(clo_partial_loads_ok)
|| ((a & 3) != 0)
|| (!a0ok && !a1ok && !a2ok && !a3ok)) {
- MAC_(record_address_error)( VG_(get_current_tid)(), a, 4, isWrite );
+ MAC_(record_address_error)( VG_(get_VCPU_tid)(), a, 4, isWrite );
return;
}
/* If an address error has happened, report it. */
if (aerr) {
- MAC_(record_address_error)( VG_(get_current_tid)(), a, 2, isWrite );
+ MAC_(record_address_error)( VG_(get_VCPU_tid)(), a, 2, isWrite );
}
}
/* If an address error has happened, report it. */
if (aerr) {
- MAC_(record_address_error)( VG_(get_current_tid)(), a, 1, isWrite );
+ MAC_(record_address_error)( VG_(get_VCPU_tid)(), a, 1, isWrite );
}
}
if (!IS_4_ALIGNED(addr)) goto slow4;
PROF_EVENT(91);
/* Properly aligned. */
- sm = primary_map[addr >> 16];
- sm_off = addr & 0xFFFF;
+ sm = primary_map[PM_IDX(addr)];
+ sm_off = SM_OFF(addr);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow4;
/* Properly aligned and addressible. */
/* Properly aligned. Do it in two halves. */
addr4 = addr + 4;
/* First half. */
- sm = primary_map[addr >> 16];
- sm_off = addr & 0xFFFF;
+ sm = primary_map[PM_IDX(addr)];
+ sm_off = SM_OFF(addr);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow8;
/* First half properly aligned and addressible. */
/* Second half. */
- sm = primary_map[addr4 >> 16];
- sm_off = addr4 & 0xFFFF;
+ sm = primary_map[PM_IDX(addr4)];
+ sm_off = SM_OFF(addr4);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow8;
/* Second half properly aligned and addressible. */
}
if (aerr) {
- MAC_(record_address_error)( VG_(get_current_tid)(), addr, size, isWrite );
+ MAC_(record_address_error)( VG_(get_VCPU_tid)(), addr, size, isWrite );
}
}
static
Bool ac_is_valid_64k_chunk ( UInt chunk_number )
{
- tl_assert(chunk_number >= 0 && chunk_number < 65536);
- if (IS_DISTINGUISHED_SM(primary_map[chunk_number])) {
+ tl_assert(chunk_number >= 0 && chunk_number < PRIMARY_SIZE);
+ if (primary_map[chunk_number] == DSM_NOTADDR) {
/* Definitely not in use. */
return False;
} else {
/* Leak detector for this tool. We don't actually do anything, merely
run the generic leak detector with suitable parameters for this
tool. */
-static void ac_detect_memory_leaks ( ThreadId tid )
+static void ac_detect_memory_leaks ( LeakCheckMode mode )
{
- MAC_(do_detect_memory_leaks) (
- tid, ac_is_valid_64k_chunk, ac_is_valid_address );
+ MAC_(do_detect_memory_leaks) ( mode, ac_is_valid_64k_chunk, ac_is_valid_address );
}
{
Int i;
+#if 0
/* Make sure nobody changed the distinguished secondary. */
for (i = 0; i < 8192; i++)
if (distinguished_secondary_map.abits[i] != VGM_BYTE_INVALID)
return False;
+#endif
/* Make sure that the upper 3/4 of the primary map hasn't
been messed with. */
- for (i = 65536; i < 262144; i++)
- if (primary_map[i] != & distinguished_secondary_map)
+ for (i = PRIMARY_SIZE; i < PRIMARY_SIZE*4; i++)
+ if (primary_map[i] != DSM_NOTADDR)
return False;
return True;
switch (arg[0]) {
case VG_USERREQ__DO_LEAK_CHECK:
- ac_detect_memory_leaks(tid);
+ ac_detect_memory_leaks(arg[1] ? LC_Summary : LC_Full);
*ret = 0; /* return value is meaningless */
break;
VG_(init_new_mem_startup) ( & ac_new_mem_startup );
VG_(init_new_mem_stack_signal) ( & ac_make_accessible );
VG_(init_new_mem_brk) ( & ac_make_accessible );
- VG_(init_new_mem_mmap) ( & ac_set_perms );
+ VG_(init_new_mem_mmap) ( & ac_new_mem_mmap );
VG_(init_copy_mem_remap) ( & ac_copy_address_range_state );
- VG_(init_change_mem_mprotect) ( & ac_set_perms );
VG_(init_die_mem_stack_signal) ( & ac_make_noaccess );
VG_(init_die_mem_brk) ( & ac_make_noaccess );
+
noinst_SCRIPTS = filter_stderr
EXTRA_DIST = $(noinst_SCRIPTS) \
+ addressable.vgtest addressable.stderr.exp addressable.stdout.exp \
badrw.stderr.exp badrw.vgtest \
fprw.stderr.exp fprw.vgtest \
overlap.stderr.exp overlap.stdout.exp overlap.vgtest \
/*--- Cache configuration ---*/
/*------------------------------------------------------------*/
-#define UNDEFINED_CACHE ((cache_t) { -1, -1, -1 })
+#define UNDEFINED_CACHE { -1, -1, -1 }
static cache_t clo_I1_cache = UNDEFINED_CACHE;
static cache_t clo_D1_cache = UNDEFINED_CACHE;
return 0;
}
-static jmp_buf cpuid_jmpbuf;
-
-static
-void cpuid_SIGILL_handler(int signum)
-{
- __builtin_longjmp(cpuid_jmpbuf, 1);
-}
-
static
Int get_caches_from_CPUID(cache_t* I1c, cache_t* D1c, cache_t* L2c)
{
- Int level, res, ret;
+ Int level, ret;
Char vendor_id[13];
- struct vki_sigaction sigill_new, sigill_saved;
-
- /* Install own SIGILL handler */
- sigill_new.ksa_handler = cpuid_SIGILL_handler;
- sigill_new.sa_flags = 0;
- sigill_new.sa_restorer = NULL;
- res = VG_(sigemptyset)( &sigill_new.sa_mask );
-   tl_assert(res == 0);
-   res = VG_(sigaction)( VKI_SIGILL, &sigill_new, &sigill_saved );
-   tl_assert(res == 0);
- /* Trap for illegal instruction, in case it's a really old processor that
- * doesn't support CPUID. */
- if (__builtin_setjmp(cpuid_jmpbuf) == 0) {
- VG_(cpuid)(0, &level, (int*)&vendor_id[0],
- (int*)&vendor_id[8], (int*)&vendor_id[4]);
- vendor_id[12] = '\0';
-
- /* Restore old SIGILL handler */
- res = VG_(sigaction)( VKI_SIGILL, &sigill_saved, NULL );
- tl_assert(res == 0);
-
- } else {
+ if (!VG_(has_cpuid)()) {
VG_(message)(Vg_DebugMsg, "CPUID instruction not supported");
-
- /* Restore old SIGILL handler */
- res = VG_(sigaction)( VKI_SIGILL, &sigill_saved, NULL );
- tl_assert(res == 0);
return -1;
}
+ VG_(cpuid)(0, &level, (int*)&vendor_id[0],
+ (int*)&vendor_id[8], (int*)&vendor_id[4]);
+ vendor_id[12] = '\0';
if (0 == level) {
VG_(message)(Vg_DebugMsg, "CPUID level is 0, early Pentium?\n");
AC_SUBST(VEX_DIR)
# Checks for programs.
-CFLAGS=""
+CFLAGS="-Wno-long-long"
AC_PROG_LN_S
AC_PROG_CC
AC_MSG_RESULT([ok (${host_cpu})])
VG_ARCH="x86"
KICKSTART_BASE="0xb0000000"
- ARCH_CORE_AM_CFLAGS="-fomit-frame-pointer @PREFERRED_STACK_BOUNDARY@ -DELFSZ=32"
- ARCH_TOOL_AM_CFLAGS="-fomit-frame-pointer @PREFERRED_STACK_BOUNDARY@"
+ ARCH_CORE_AM_CFLAGS="@PREFERRED_STACK_BOUNDARY@ -DELFSZ=32"
+ ARCH_TOOL_AM_CFLAGS="@PREFERRED_STACK_BOUNDARY@"
ARCH_CORE_AM_CCASFLAGS=""
;;
;;
esac
-# APIs introduced in recent glibc versions
-
-AC_MSG_CHECKING([whether sched_param has a sched_priority member])
-AC_CACHE_VAL(vg_have_sched_priority,
-[
-AC_TRY_COMPILE([#include <pthread.h>],[
-struct sched_param p; p.sched_priority=1;],
-vg_have_sched_priority=yes,
-vg_have_sched_priority=no)
-])
-AC_MSG_RESULT($vg_have_sched_priority)
-if test "$vg_have_sched_priority" = yes; then
-AC_DEFINE([HAVE_SCHED_PRIORITY], 1, [pthread / sched_priority exists])
-fi
-
# We don't know how to detect the X client library version
# (detecting the server version is easy, but no help). So we
# just use a hack: always include the suppressions for both
AC_SUBST(PREFERRED_STACK_BOUNDARY)
+# does this compiler support -Wno-pointer-sign ?
+AC_MSG_CHECKING([if gcc accepts -Wno-pointer-sign ])
+
+safe_CFLAGS=$CFLAGS
+CFLAGS="-Wno-pointer-sign"
+
+AC_TRY_COMPILE(, [
+int main () { return 0 ; }
+],
+[
+no_pointer_sign=yes
+AC_MSG_RESULT([yes])
+], [
+no_pointer_sign=no
+AC_MSG_RESULT([no])
+])
+CFLAGS=$safe_CFLAGS
+
+if test x$no_pointer_sign = xyes; then
+ CFLAGS="$CFLAGS -Wno-pointer-sign"
+fi
+
+# Check for TLS support in the compiler and linker
+AC_CACHE_CHECK([for TLS support], vg_cv_tls,
+ [AC_ARG_ENABLE(tls, [ --enable-tls platform supports TLS],
+ [vg_cv_tls=$enableval],
+ [AC_RUN_IFELSE([AC_LANG_PROGRAM([[static __thread int foo;]],
+ [[return foo;]])],
+ [vg_cv_tls=yes],
+ [vg_cv_tls=no])])])
+
+if test "$vg_cv_tls" = yes; then
+AC_DEFINE([HAVE_TLS], 1, [can use __thread to define thread-local variables])
+fi
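The probe body that AC_RUN_IFELSE compiles above boils down to declaring and using a `__thread` variable, which requires compiler, linker, and libc support. A tiny standalone version (the `bump` helper is illustrative only):

```c
/* One instance of this variable per thread when HAVE_TLS holds;
   this is essentially what the configure probe compiles and runs. */
static __thread int foo;

static int bump(void) { return ++foo; }
```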
# Check for PIE support in the compiler and linker
AC_CACHE_CHECK([for PIE support], vg_cv_pie,
- [safe_CFLAGS=$CFLAGS
- CFLAGS="$CFLAGS -fpie"
- safe_LDFLAGS=$LDFLAGS
- LDFLAGS="$LDFLAGS -pie"
- AC_TRY_LINK([int foo;],
- [],
- [vg_cv_pie=yes],
- [vg_cv_pie=no])
- CFLAGS=$safe_CFLAGS
- LDFLAGS=$safe_LDFLAGS])
+ [AC_ARG_ENABLE(pie, [ --enable-pie platform supports PIE linking],
+ [vg_cv_pie=$enableval],
+ [safe_CFLAGS=$CFLAGS
+ CFLAGS="$CFLAGS -fpie"
+ safe_LDFLAGS=$LDFLAGS
+ LDFLAGS="$LDFLAGS -pie"
+ AC_TRY_LINK([int foo;],
+ [],
+ [vg_cv_pie=yes],
+ [vg_cv_pie=no])
+ CFLAGS=$safe_CFLAGS
+ LDFLAGS=$safe_LDFLAGS])])
if test "$vg_cv_pie" = yes; then
AC_DEFINE([HAVE_PIE], 1, [can create position-independent executables])
fi
AC_TYPE_OFF_T
AC_TYPE_SIZE_T
AC_HEADER_TIME
-AC_CHECK_TYPES(__pthread_unwind_buf_t,,,[#include <pthread.h>])
# Checks for library functions.
AC_FUNC_MEMCMP
pth_mutexspeed.stdout.exp pth_mutexspeed.vgtest \
pth_once.stderr.exp pth_once.stdout.exp pth_once.vgtest \
pth_rwlock.stderr.exp pth_rwlock.vgtest \
- sigkill.stderr.exp sigkill.vgtest \
+ sigkill.stderr.exp sigkill.stderr.exp2 sigkill.vgtest \
res_search.stderr.exp res_search.stdout.exp res_search.vgtest \
vgprintf.stderr.exp vgprintf.stdout.exp vgprintf.vgtest
1
-Warning: client syscall mmap2 tried to modify addresses 0x........-0x........
-mmap @ 0x........
2
3
4
1
-Warning: client syscall old_mmap tried to modify addresses 0x........-0x........
-mmap @ 0x........
2
3
4
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
+#include <errno.h>
char filea[24];
char fileb[24];
exit(1);
}
+ again:
if((size = recvmsg(s, &msg, 0)) == -1) {
+ if (errno == EINTR)
+ goto again; /* SIGCHLD from server exiting could interrupt */
perror("recvmsg");
exit(1);
}
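The `goto again` fix above is the standard retry-on-EINTR pattern. A self-contained model of it, with a stub standing in for recvmsg so the behaviour is testable (all names here are made up):

```c
#include <errno.h>

/* Stub standing in for recvmsg(): fails with EINTR twice (as a
   SIGCHLD from the exiting server might cause), then succeeds. */
static int tries;
static long fake_recvmsg(void)
{
    if (tries++ < 2) { errno = EINTR; return -1; }
    return 42;
}

/* Retry a call for as long as it reports interruption by a signal. */
static long retry_eintr(long (*call)(void))
{
    long n;
    do { n = call(); } while (n == -1 && errno == EINTR);
    return n;
}
```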
FILE DESCRIPTORS: 7 open at exit.
Open AF_UNIX socket .: /tmp/sock
at 0x........: accept (in /...libc...)
- by 0x........: main (fdleak_cmsg.c:170)
+ by 0x........: main (fdleak_cmsg.c:174)
Open AF_UNIX socket .: /tmp/sock
at 0x........: socket (in /...libc...)
- by 0x........: main (fdleak_cmsg.c:170)
+ by 0x........: main (fdleak_cmsg.c:174)
Open file descriptor .: /tmp/data2
at 0x........: open (in /...libc...)
- by 0x........: main (fdleak_cmsg.c:170)
+ by 0x........: main (fdleak_cmsg.c:174)
Open file descriptor .: /tmp/data1
at 0x........: open (in /...libc...)
- by 0x........: main (fdleak_cmsg.c:170)
+ by 0x........: main (fdleak_cmsg.c:174)
Open file descriptor .: .
<inherited from parent>
FILE DESCRIPTORS: 6 open at exit.
Open file descriptor .: /tmp/data2
at 0x........: recvmsg (in /...libc...)
- by 0x........: main (fdleak_cmsg.c:174)
+ by 0x........: main (fdleak_cmsg.c:178)
Open file descriptor .: /tmp/data1
at 0x........: recvmsg (in /...libc...)
- by 0x........: main (fdleak_cmsg.c:174)
+ by 0x........: main (fdleak_cmsg.c:178)
Open AF_UNIX socket .: <unknown>
at 0x........: socket (in /...libc...)
- by 0x........: main (fdleak_cmsg.c:174)
+ by 0x........: main (fdleak_cmsg.c:178)
Open file descriptor .: .
<inherited from parent>
Open file descriptor .: /tmp/file
at 0x........: creat (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .: .
FILE DESCRIPTORS: 5 open at exit.
Open file descriptor .: /dev/null
at 0x........: dup (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .: /dev/null
at 0x........: open (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .: .
FILE DESCRIPTORS: 6 open at exit.
Open file descriptor .: /dev/null
at 0x........: dup2 (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .: /dev/null
at 0x........: dup2 (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .: /dev/null
at 0x........: open (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .: .
Open file descriptor .: /dev/null
at 0x........: open (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .: .
FILE DESCRIPTORS: 4 open at exit.
Open file descriptor .: /dev/null
at 0x........: open (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .: .
FILE DESCRIPTORS: 5 open at exit.
Open file descriptor .:
at 0x........: pipe (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .:
at 0x........: pipe (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .: .
FILE DESCRIPTORS: 5 open at exit.
Open AF_UNIX socket .: <unknown>
at 0x........: socketpair (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open AF_UNIX socket .: <unknown>
at 0x........: socketpair (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Open file descriptor .: .
$dir/../../tests/filter_test_paths |
-# Anonymise paths like "(in /foo/bar/libc-baz.so)"
-sed "s/(in \/.*libc.*)$/(in \/...libc...)/" |
-
-# Anonymise paths like "xxx (../sysdeps/unix/sysv/linux/quux.c:129)"
-sed "s/(\.\.\/sysdeps\/unix\/sysv\/linux\/.*\.c:[0-9]*)$/(in \/...libc...)/" |
-
-# Anonymise paths like "__libc_start_main (../foo/bar/libc-quux.c:129)"
-sed "s/__libc_\(.*\) (.*)$/__libc_\1 (...libc...)/" |
-
sed s/"^Open AF_UNIX socket [0-9]*: <unknown>/Open AF_UNIX socket .: <unknown>/" |
sed s/"^Open \(AF_UNIX socket\|file descriptor\) [0-9]*: \/dev\/null/Open \\1 .: \/dev\/null/" |
sed s/"^Open \(AF_UNIX socket\|file descriptor\) [0-9]*: \/tmp\/\(sock\|data1\|data2\|file\)\.[0-9]*/Open \\1 .: \/tmp\/\\2/" |
-warning: Valgrind's pthread_cond_destroy is incomplete
- (it doesn't check if the cond is waited on)
- your program may misbehave as a result
-warning: Valgrind's pthread_cond_destroy is incomplete
- (it doesn't check if the cond is waited on)
- your program may misbehave as a result
-warning: Valgrind's pthread_cond_destroy is incomplete
- (it doesn't check if the cond is waited on)
- your program may misbehave as a result
ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
int i;
int rc;
for (i = 1; i <= 65; i++) {
- // skip signals 32 and 33: some systems say "warning, ignored attempt
+ // skip signals 63 and 64: some systems say "warning, ignored attempt
// to catch 32 because it's used internally by Valgrind", others say
// "invalid argument".
- if (i == 32 || i == 33) {
+ if (i == 63 || i == 64) {
continue;
} // different systems
sa.sa_flags = 0;
setting signal 31: Success
getting signal 31: Success
+setting signal 32: Success
+getting signal 32: Success
+
+setting signal 33: Success
+getting signal 33: Success
+
setting signal 34: Success
getting signal 34: Success
setting signal 62: Success
getting signal 62: Success
-setting signal 63: Success
-getting signal 63: Success
-
-setting signal 64: Success
-getting signal 64: Success
-
setting signal 65: Warning: bad signal number 65 in sigaction()
Invalid argument
getting signal 65: Warning: bad signal number 65 in sigaction()
AM_CPPFLAGS += -DVG_LIBDIR="\"$(valdir)"\" -I$(srcdir)/demangle \
-DKICKSTART_BASE=@KICKSTART_BASE@ \
-DVG_PLATFORM="\"$(VG_PLATFORM)"\"
-AM_CFLAGS = $(WERROR) -Winline -Wall -Wshadow -O -g @ARCH_CORE_AM_CFLAGS@
+AM_CFLAGS = $(WERROR) -Wmissing-prototypes -Winline -Wall -Wshadow -O -g @ARCH_CORE_AM_CFLAGS@
AM_CFLAGS += -fno-omit-frame-pointer
default.supp: $(SUPP_FILES)
val_PROGRAMS = \
stage2 \
- libpthread.so \
vg_inject.so
noinst_LIBRARIES = lib_replace_malloc.a
vg_toolint.h
EXTRA_DIST = \
- vg_libpthread.vs valgrind.vs \
+ valgrind.vs \
gen_toolint.pl toolfuncs.def \
- gen_intercepts.pl vg_replace_malloc.c.base vg_intercept.c.base
+ gen_intercepts.pl vg_replace_malloc.c.base
BUILT_SOURCES = vg_toolint.c vg_toolint.h
-CLEANFILES = vg_toolint.c vg_toolint.h vg_replace_malloc.c vg_intercept.c
+CLEANFILES = vg_toolint.c vg_toolint.h vg_replace_malloc.c
valgrind_SOURCES = \
ume.c \
vg_mylibc.c \
vg_needs.c \
vg_procselfmaps.c \
- vg_proxylwp.c \
vg_dummy_profile.c \
vg_signals.c \
vg_symtab2.c \
+ vg_threadmodel.c \
+ vg_pthreadmodel.c \
+ vg_redir.c \
vg_dwarf.c \
vg_stabs.c \
vg_skiplist.c \
stage2_LDADD= $(stage2_extra) -ldl
-vg_intercept.c: $(srcdir)/gen_intercepts.pl $(srcdir)/vg_intercept.c.base
- rm -f $@
- $(PERL) $(srcdir)/gen_intercepts.pl < $(srcdir)/vg_intercept.c.base > $@
-
vg_replace_malloc.c: $(srcdir)/gen_intercepts.pl $(srcdir)/vg_replace_malloc.c.base
rm -f $@
$(PERL) $(srcdir)/gen_intercepts.pl < $(srcdir)/vg_replace_malloc.c.base > $@
$(PERL) $(srcdir)/gen_toolint.pl proto < $(srcdir)/toolfuncs.def > $@ || rm -f $@
$(PERL) $(srcdir)/gen_toolint.pl struct < $(srcdir)/toolfuncs.def >> $@ || rm -f $@
-libpthread_so_SOURCES = \
- vg_libpthread.c \
- vg_libpthread_unimp.c \
- ${VG_ARCH}/libpthread.c \
- ${VG_PLATFORM}/syscall.S
-libpthread_so_DEPENDENCIES = $(srcdir)/vg_libpthread.vs
-libpthread_so_CFLAGS = $(AM_CFLAGS) -fpic -fno-omit-frame-pointer
-libpthread_so_LDFLAGS = -Werror -fno-omit-frame-pointer -UVG_LIBDIR \
- -shared -ldl \
- -Wl,-version-script $(srcdir)/vg_libpthread.vs \
- -Wl,-z,nodelete \
- -Wl,--soname=libpthread.so.0
-
-vg_inject_so_SOURCES = \
- vg_intercept.c
+vg_inject_so_SOURCES = vg_intercept.c
vg_inject_so_CFLAGS = $(AM_CFLAGS) -fpic
+vg_inject_so_LDADD = -ldl
vg_inject_so_LDFLAGS = \
-shared \
-Wl,--soname,vg_inject.so \
lib_replace_malloc_a_SOURCES = vg_replace_malloc.c
lib_replace_malloc_a_CFLAGS = $(AM_CFLAGS) -fpic -fno-omit-frame-pointer
-MANUAL_DEPS = $(noinst_HEADERS) $(include_HEADERS) $(inplacedir)/libpthread.so.0
+MANUAL_DEPS = $(noinst_HEADERS) $(include_HEADERS)
all-local:
mkdir -p $(inplacedir)
for i in $(val_PROGRAMS); do \
- to=$(inplacedir)/$$(echo $$i | sed 's,libpthread.so,libpthread.so.0,'); \
+ to=$(inplacedir)/$$i; \
rm -f $$to; \
ln -sf ../$(subdir)/$$i $$to; \
done
// Ugly: this is needed by linux/core_os.h
typedef struct _ThreadState ThreadState;
-#include "core_os.h" // OS-specific stuff, eg. linux/core_os.h
#include "core_platform.h" // platform-specific stuff,
// eg. x86-linux/core_platform.h
+#include "core_os.h" // OS-specific stuff, eg. linux/core_os.h
#include "valgrind.h"
#define VG_SCHEDULING_QUANTUM 50000
/* Number of file descriptors that Valgrind tries to reserve for
- it's own use - two per thread plues a small number of extras. */
-#define VG_N_RESERVED_FDS (VG_N_THREADS*2 + 4)
-
-/* Stack size for a thread. We try and check that they do not go
- beyond it. */
-#define VG_PTHREAD_STACK_SIZE (1 << 20)
-
-/* Number of entries in each thread's cleanup stack. */
-#define VG_N_CLEANUPSTACK 16
-
-/* Number of entries in each thread's fork-handler stack. */
-#define VG_N_FORKHANDLERSTACK 4
-
-/* Max number of callers for context in a suppression. */
-#define VG_N_SUPP_CALLERS 4
-
-/* Numer of entries in each thread's signal queue. */
-#define VG_N_SIGNALQUEUE 8
+   its own use - just a small constant. */
+#define VG_N_RESERVED_FDS (10)
/* Useful macros */
/* a - alignment - must be a power of 2 */
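The alignment comment above describes the usual power-of-two rounding trick. A standalone sketch of what such macros typically look like (the names here are illustrative, not necessarily the ones this header defines):

```c
#include <assert.h>

/* Round 'a' down/up to a multiple of 'al', which must be a power of 2.
   The trick: al-1 is then a mask of the low bits, so clearing those
   bits rounds down, and adding al-1 first makes the clear round up. */
#define ROUNDDN(a, al)  ((a) & ~((al) - 1))
#define ROUNDUP(a, al)  ROUNDDN((a) + (al) - 1, (al))
```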
VgLogTo_Socket
} VgLogTo;
-/* pid of main process */
-extern Int VG_(main_pid);
-
-/* pgrp of process (global to all threads) */
-extern Int VG_(main_pgrp);
-
/* Application-visible file descriptor limits */
extern Int VG_(fd_soft_limit);
extern Int VG_(fd_hard_limit);
extern Bool VG_(clo_trace_signals);
/* DEBUG: print symtab details? default: NO */
extern Bool VG_(clo_trace_symtab);
+/* DEBUG: print redirection details? default: NO */
+extern Bool VG_(clo_trace_redir);
/* DEBUG: print thread scheduling events? default: NO */
extern Bool VG_(clo_trace_sched);
-/* DEBUG: print pthread (mutex etc) events? default: 0 (none), 1
- (some), 2 (all) */
-extern Int VG_(clo_trace_pthread_level);
+/* DEBUG: print pthreads calls? default: NO */
+extern Bool VG_(clo_trace_pthreads);
/* Display gory details for the k'th most popular error. default:
Infinity. */
extern Int VG_(clo_dump_error);
extern Int VG_(clo_backtrace_size);
/* Engage miscellaneous weird hacks needed for some progs. */
extern Char* VG_(clo_weird_hacks);
-/* How often we should poll for signals, assuming we need to poll for
- signals. */
-extern Int VG_(clo_signal_polltime);
-
-/* Low latency syscalls and signals */
-extern Bool VG_(clo_lowlat_syscalls);
-extern Bool VG_(clo_lowlat_signals);
/* Track open file descriptors? */
extern Bool VG_(clo_track_fds);
/* Test each client pointer dereference to check it's within the
client address space bounds */
extern Bool VG_(clo_pointercheck);
+/* Model the pthread library */
+extern Bool VG_(clo_model_pthreads);
/* HACK: Use hacked version of clone for Quadrics Elan3 drivers */
extern Bool VG_(clo_support_elan3);
/* Set up the libc freeres wrapper */
-extern void VG_(intercept_libc_freeres_wrapper)(Addr);
+extern void VGA_(intercept_libc_freeres_wrapper)(Addr);
+
+// Clean up the client before the final reports are printed
+extern void VGA_(final_tidyup)(ThreadId tid);
+// Arch-specific client requests
+extern Bool VGA_(client_requests)(ThreadId tid, UWord *args);
/* ---------------------------------------------------------------------
Profiling stuff
#define VG_USERREQ__MALLOC 0x2001
#define VG_USERREQ__FREE 0x2002
-/* (Fn, Arg): Create a new thread and run Fn applied to Arg in it. Fn
- MUST NOT return -- ever. Eventually it will do either __QUIT or
- __WAIT_JOINER. */
+/* Obsolete pthread-related requests */
#define VG_USERREQ__APPLY_IN_NEW_THREAD 0x3001
-
-/* ( no-args ): calling thread disappears from the system forever.
- Reclaim resources. */
#define VG_USERREQ__QUIT 0x3002
-
-/* ( void* ): calling thread waits for joiner and returns the void* to
- it. */
#define VG_USERREQ__WAIT_JOINER 0x3003
-
-/* ( ThreadId, void** ): wait to join a thread. */
#define VG_USERREQ__PTHREAD_JOIN 0x3004
-
-/* Set cancellation state and type for this thread. */
#define VG_USERREQ__SET_CANCELSTATE 0x3005
#define VG_USERREQ__SET_CANCELTYPE 0x3006
-
-/* ( no-args ): Test if we are at a cancellation point. */
#define VG_USERREQ__TESTCANCEL 0x3007
-
-/* ( ThreadId, &thread_exit_wrapper is the only allowable arg ): call
- with this arg to indicate that a cancel is now pending for the
- specified thread. */
#define VG_USERREQ__SET_CANCELPEND 0x3008
-
-/* Set/get detach state for this thread. */
#define VG_USERREQ__SET_OR_GET_DETACH 0x3009
-
#define VG_USERREQ__PTHREAD_GET_THREADID 0x300A
#define VG_USERREQ__PTHREAD_MUTEX_LOCK 0x300B
#define VG_USERREQ__PTHREAD_MUTEX_TIMEDLOCK 0x300C
#define VG_USERREQ__PTHREAD_KEY_DELETE 0x3014
#define VG_USERREQ__PTHREAD_SETSPECIFIC_PTR 0x3015
#define VG_USERREQ__PTHREAD_GETSPECIFIC_PTR 0x3016
-#define VG_USERREQ__READ_MILLISECOND_TIMER 0x3017
#define VG_USERREQ__PTHREAD_SIGMASK 0x3018
-#define VG_USERREQ__SIGWAIT 0x3019 /* unused */
+#define VG_USERREQ__SIGWAIT 0x3019
#define VG_USERREQ__PTHREAD_KILL 0x301A
#define VG_USERREQ__PTHREAD_YIELD 0x301B
#define VG_USERREQ__PTHREAD_KEY_VALIDATE 0x301C
-
#define VG_USERREQ__CLEANUP_PUSH 0x3020
#define VG_USERREQ__CLEANUP_POP 0x3021
#define VG_USERREQ__GET_KEY_D_AND_S 0x3022
-
#define VG_USERREQ__NUKE_OTHER_THREADS 0x3023
-
-/* Ask how many signal handler returns have happened to this
- thread. */
-#define VG_USERREQ__GET_N_SIGS_RETURNED 0x3024 /* unused */
-
-/* Get/set entries for a thread's pthread_atfork stack. */
+#define VG_USERREQ__GET_N_SIGS_RETURNED 0x3024
#define VG_USERREQ__SET_FHSTACK_USED 0x3025
#define VG_USERREQ__GET_FHSTACK_USED 0x3026
#define VG_USERREQ__SET_FHSTACK_ENTRY 0x3027
#define VG_USERREQ__GET_FHSTACK_ENTRY 0x3028
-
-/* Denote the finish of __libc_freeres_wrapper(). */
-#define VG_USERREQ__LIBC_FREERES_DONE 0x3029
-
-/* Allocate RT signals */
#define VG_USERREQ__GET_SIGRT_MIN 0x302B
#define VG_USERREQ__GET_SIGRT_MAX 0x302C
#define VG_USERREQ__ALLOC_RTSIG 0x302D
-
-/* Hook for replace_malloc.o to get malloc functions */
#define VG_USERREQ__GET_MALLOCFUNCS 0x3030
-
-/* Get stack information for a thread. */
#define VG_USERREQ__GET_STACK_INFO 0x3033
-
-/* Cosmetic ... */
#define VG_USERREQ__GET_PTHREAD_TRACE_LEVEL 0x3101
-/* Log a pthread error from client-space. Cosmetic. */
#define VG_USERREQ__PTHREAD_ERROR 0x3102
+
+
+#define VG_USERREQ__READ_MILLISECOND_TIMER 0x3017
+
/* Internal equivalent of VALGRIND_PRINTF . */
#define VG_USERREQ__INTERNAL_PRINTF 0x3103
/* Internal equivalent of VALGRIND_PRINTF_BACKTRACE . */
#define VG_USERREQ__INTERNAL_PRINTF_BACKTRACE 0x3104
-/*
-In core_asm.h:
-#define VG_USERREQ__SIGNAL_RETURNS 0x4001
-*/
+/* Denote the finish of __libc_freeres_wrapper().
+ A synonym for exit. */
+#define VG_USERREQ__LIBC_FREERES_DONE 0x3029
#define VG_INTERCEPT_PREFIX "_vgi__"
#define VG_INTERCEPT_PREFIX_LEN 6
extern Bool VG_(tl_malloc_called_by_scheduler);
-/* ---------------------------------------------------------------------
- Exports of vg_libpthread.c
- ------------------------------------------------------------------ */
-
-/* Replacements for pthread types, shared between vg_libpthread.c and
- vg_scheduler.c. See comment in vg_libpthread.c above the other
- vg_pthread_*_t types for a description of how these are used. */
-
-struct _vg_pthread_fastlock
-{
- long int __vg_status; /* "Free" or "taken" or head of waiting list */
- int __vg_spinlock; /* Used by compare_and_swap emulation. Also,
- adaptive SMP lock stores spin count here. */
-};
-
-typedef struct
-{
- int __vg_m_reserved; /* Reserved for future use */
- int __vg_m_count; /* Depth of recursive locking */
- /*_pthread_descr*/ void* __vg_m_owner; /* Owner thread (if recursive or errcheck) */
- int __vg_m_kind; /* Mutex kind: fast, recursive or errcheck */
- struct _vg_pthread_fastlock __vg_m_lock; /* Underlying fast lock */
-} vg_pthread_mutex_t;
-
-typedef struct
-{
- struct _vg_pthread_fastlock __vg_c_lock; /* Protect against concurrent access */
- /*_pthread_descr*/ void* __vg_c_waiting; /* Threads waiting on this condition */
-
- // Nb: the following padding removed because it was missing from an
- // earlier glibc, so the size test in the CONVERT macro was failing.
- // --njn
-
- // Padding ensures the size is 48 bytes
- /*char __vg_padding[48 - sizeof(struct _vg_pthread_fastlock)
- - sizeof(void*) - sizeof(long long)];
- long long __vg_align;*/
-} vg_pthread_cond_t;
-
/* ---------------------------------------------------------------------
Exports of vg_scheduler.c
------------------------------------------------------------------ */
+/*
+ Thread state machine:
+
+      Empty -> Init -> Runnable <=> WaitSys/Yielding
+        ^                  |
+         \----- Zombie ----/
+ */
typedef
enum ThreadStatus {
VgTs_Empty, /* this slot is not in use */
- VgTs_Runnable, /* waiting to be scheduled */
- VgTs_WaitJoiner, /* waiting for someone to do join on me */
- VgTs_WaitJoinee, /* waiting for the thread I did join on */
- VgTs_WaitMX, /* waiting on a mutex */
- VgTs_WaitCV, /* waiting on a condition variable */
+ VgTs_Init, /* just allocated */
+ VgTs_Runnable, /* ready to run */
VgTs_WaitSys, /* waiting for a syscall to complete */
- VgTs_Sleeping, /* sleeping for a while */
+ VgTs_Yielding, /* temporarily yielding the CPU */
+ VgTs_Zombie, /* transient state just before exiting */
}
ThreadStatus;
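The state machine in the comment above can be restated as an explicit transition check. This is a self-contained sketch (the enum and checker below are illustrative, not Valgrind code): Empty -> Init -> Runnable, Runnable oscillates with WaitSys/Yielding, and exit goes Runnable -> Zombie -> Empty.

```c
#include <assert.h>

/* Toy restatement of the scheduler's thread state machine. */
typedef enum { Empty, Init, Runnable, WaitSys, Yielding, Zombie } St;

/* Return nonzero iff 'from' -> 'to' is a legal transition per the
   diagram: Empty -> Init -> Runnable <=> WaitSys/Yielding, and
   Runnable -> Zombie -> Empty on exit. */
static int transition_ok(St from, St to)
{
   switch (from) {
      case Empty:    return to == Init;
      case Init:     return to == Runnable;
      case Runnable: return to == WaitSys || to == Yielding || to == Zombie;
      case WaitSys:  return to == Runnable;
      case Yielding: return to == Runnable;
      case Zombie:   return to == Empty;
   }
   return 0;
}
```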
+/* Return codes from the scheduler. */
typedef
- enum CleanupType {
- VgCt_None, /* this cleanup entry is not initialised */
- VgCt_Function, /* an old-style function pointer cleanup */
- VgCt_Longjmp /* a new-style longjmp based cleanup */
- }
- CleanupType;
-
-/* Information on a thread's stack. */
-typedef
- struct {
- Addr base;
- UInt size;
- UInt guardsize;
- }
- StackInfo;
-
-/* An entry in a threads's cleanup stack. */
-typedef
- struct {
- CleanupType type;
- union {
- struct {
- void (*fn)(void*);
- void* arg;
- } function;
- struct {
- void *ub;
- int ctype;
- } longjmp;
- } data;
- }
- CleanupEntry;
-
-/* An entry in a thread's fork-handler stack. */
-typedef
- struct {
- void (*prepare)(void);
- void (*parent)(void);
- void (*child)(void);
+ enum {
+ VgSrc_None, /* not exiting yet */
+ VgSrc_ExitSyscall, /* client called exit(). This is the normal
+ route out. */
+ VgSrc_FatalSig /* Killed by the default action of a fatal
+ signal */
}
- ForkHandlerEntry;
-
-typedef struct ProxyLWP ProxyLWP;
+ VgSchedReturnCode;
-//typedef
- struct _ThreadState {
+struct _ThreadState {
/* ThreadId == 0 (and hence vg_threads[0]) is NEVER USED.
The thread identity is simply the index in vg_threads[].
ThreadId == 1 is the root thread and has the special property
ALWAYS == the index in vg_threads[]. */
ThreadId tid;
- /* Current scheduling status.
-
- Complications: whenever this is set to VgTs_WaitMX, you
- should also set .m_edx to whatever the required return value
- is for pthread_mutex_lock / pthread_cond_timedwait for when
- the mutex finally gets unblocked. */
+ /* Current scheduling status. */
ThreadStatus status;
- /* When .status == WaitMX, points to the mutex I am waiting for.
- When .status == WaitCV, points to the mutex associated with
- the condition variable indicated by the .associated_cv field.
- In all other cases, should be NULL. */
- vg_pthread_mutex_t* associated_mx;
-
- /* When .status == WaitCV, points to the condition variable I am
- waiting for. In all other cases, should be NULL. */
- void* /*pthread_cond_t* */ associated_cv;
-
- /* If VgTs_Sleeping, this is when we should wake up, measured in
- milliseconds as supplied by VG_(read_millisecond_timer).
+ /* This is set if the thread is in the process of exiting for any
+ reason. The precise details of the exit are in the OS-specific
+ state. */
+ VgSchedReturnCode exitreason;
- If VgTs_WaitCV, this indicates the time at which
- pthread_cond_timedwait should wake up. If == 0xFFFFFFFF,
- this means infinitely far in the future, viz,
- pthread_cond_wait. */
- UInt awaken_at;
-
- /* If VgTs_WaitJoiner, return value, as generated by joinees. */
- void* joinee_retval;
-
- /* If VgTs_WaitJoinee, place to copy the return value to, and
- the identity of the thread we're waiting for. */
- void** joiner_thread_return;
- ThreadId joiner_jee_tid;
-
- /* If VgTs_WaitSys, this is the syscall we're currently running */
- Int syscallno;
-
- /* If VgTs_WaitSys, this is the syscall flags */
- UInt sys_flags;
-
- /* Details about this thread's proxy LWP */
- ProxyLWP *proxy;
-
- /* Whether or not detached. */
- Bool detached;
-
- /* Cancelability state and type. */
- Bool cancel_st; /* False==PTH_CANCEL_DISABLE; True==.._ENABLE */
- Bool cancel_ty; /* False==PTH_CANC_ASYNCH; True==..._DEFERRED */
-
- /* Pointer to fn to call to do cancellation. Indicates whether
- or not cancellation is pending. If NULL, not pending. Else
- should be &thread_exit_wrapper(), indicating that
- cancallation is pending. */
- void (*cancel_pend)(void*);
-
- /* The cleanup stack. */
- Int custack_used;
- CleanupEntry custack[VG_N_CLEANUPSTACK];
-
- /* A pointer to the thread's-specific-data. This is handled almost
- entirely from vg_libpthread.c. We just provide hooks to get and
- set this ptr. This is either NULL, indicating the thread has
- read/written none of its specifics so far, OR points to a
- void*[VG_N_THREAD_KEYS], allocated and deallocated in
- vg_libpthread.c. */
- void** specifics_ptr;
+ /* Architecture-specific thread state. */
+ ThreadArchState arch;
/* This thread's blocked-signals mask. Semantics is that for a
signal to be delivered to this thread, the signal must not be
blocked by this signal mask. If more than one thread accepts a
signal, then it will be delivered to one at random. If all
threads block the signal, it will remain pending until either a
- thread unblocks it or someone uses sigwaitsig/sigtimedwait.
-
- sig_mask reflects what the client told us its signal mask should
- be, but isn't necessarily the current signal mask of the proxy
- LWP: it may have more signals blocked because of signal
- handling, or it may be different because of sigsuspend.
- */
+ thread unblocks it or someone uses sigwaitsig/sigtimedwait. */
vki_sigset_t sig_mask;
- /* Effective signal mask. This is the mask which currently
- applies; it may be different from sig_mask while a signal
- handler is running.
- */
- vki_sigset_t eff_sig_mask;
-
- /* Signal queue. This is used when the kernel doesn't route
- signals properly in order to remember the signal information
- while we are routing the signal. It is a circular queue with
- insertions performed at the head and removals at the tail.
- */
- vki_siginfo_t sigqueue[VG_N_SIGNALQUEUE];
- Int sigqueue_head;
- Int sigqueue_tail;
+ /* tmp_sig_mask is usually the same as sig_mask, and is kept in
+ sync whenever sig_mask is changed. The only time they have
+ different values is during the execution of a sigsuspend, where
+ tmp_sig_mask is the temporary mask which sigsuspend installs.
+ It is only consulted to compute the signal mask applied to a
+ signal handler. */
+ vki_sigset_t tmp_sig_mask;
+
+ /* A little signal queue for signals we can't get the kernel to
+ queue for us. This is only allocated as needed, since it should
+ be rare. */
+ struct SigQueue *sig_queue;
+
+ /* Syscall the Thread is currently running; -1 if none. Should only
+ be set while Thread is in VgTs_WaitSys. */
+ Int syscallno;
+
+ /* A value the Tool wants to pass from its pre-syscall to its
+ post-syscall function. */
+ void *tool_pre_syscall_value;
/* Stacks. When a thread slot is freed, we don't deallocate its
stack; we just leave it lying around for the next use of the
*/
Addr stack_base;
- /* The allocated size of this thread's stack's guard area (permanently
- zero if this is ThreadId == 0, since we didn't allocate its stack) */
- UInt stack_guard_size;
-
/* Address of the highest legitimate word in this stack. This is
used for error messages only -- not critical for execution
      correctness.  It is set for all stacks, specifically including
/* Alternate signal stack */
vki_stack_t altstack;
- /* Architecture-specific thread state */
- ThreadArchState arch;
+ /* OS-specific thread state */
+ os_thread_t os_state;
/* Used in the syscall handlers. Set to True to indicate that the
PRE routine for a syscall has set the syscall result already and
so the syscall does not need to be handed to the kernel. */
Bool syscall_result_set;
+
+ /* Per-thread jmp_buf to resume scheduler after a signal */
+ Bool sched_jmpbuf_valid;
+ jmp_buf sched_jmpbuf;
+
+ /* Info about the signal we just got */
+ vki_siginfo_t siginfo;
};
//ThreadState;
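The sig_mask/tmp_sig_mask split described inside the struct above can be sketched in isolation: the two masks stay in sync except while a sigsuspend is in flight, and only tmp_sig_mask feeds the handler-time mask. A toy model (plain unsigned ints stand in for vki_sigset_t; not Valgrind code):

```c
#include <assert.h>

typedef unsigned sigset_toy;   /* toy stand-in for vki_sigset_t */

static sigset_toy sig_mask, tmp_sig_mask;

/* Ordinary mask changes keep the two masks in sync. */
static void set_mask(sigset_toy m) { sig_mask = tmp_sig_mask = m; }

/* sigsuspend installs its temporary mask only in tmp_sig_mask, so the
   suspend mask is honoured when computing the mask applied to a signal
   handler, without losing the client's nominal sig_mask. */
static void enter_sigsuspend(sigset_toy m) { tmp_sig_mask = m; }
static void leave_sigsuspend(void)         { tmp_sig_mask = sig_mask; }
```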
-
/* The thread table. */
extern ThreadState VG_(threads)[VG_N_THREADS];
+/* Allocate a new ThreadState */
+extern ThreadId VG_(alloc_ThreadState)(void);
+
+/* A thread exits. tid must currently be running. */
+extern void VG_(exit_thread)(ThreadId tid);
+
+/* Kill a thread. This interrupts whatever a thread is doing, and
+ makes it exit ASAP. This does not set the exitreason or
+ exitcode. */
+extern void VG_(kill_thread)(ThreadId tid);
+
/* Check that tid is in range and denotes a non-Empty thread. */
extern Bool VG_(is_valid_tid) ( ThreadId tid );
/* Get the ThreadState for a particular thread */
extern ThreadState *VG_(get_ThreadState)(ThreadId tid);
+/* Given an LWP id (ie, real kernel thread id), find the corresponding
+ ThreadId */
+extern ThreadId VG_(get_lwp_tid)(Int lwpid);
+
+/* Returns true if a thread is currently running (ie, has the CPU lock) */
+extern Bool VG_(is_running_thread)(ThreadId tid);
+
+/* Returns true if the thread is in the process of exiting */
+extern Bool VG_(is_exiting)(ThreadId tid);
+
+/* Return the number of non-dead Threads */
+extern Int VG_(count_living_threads)(void);
+
/* Nuke all threads except tid. */
-extern void VG_(nuke_all_threads_except) ( ThreadId me );
+extern void VG_(nuke_all_threads_except) ( ThreadId me, VgSchedReturnCode reason );
-/* Give a hint to the scheduler that it may be a good time to find a
- new runnable thread. If prefer_sched != VG_INVALID_THREADID, then
- try to schedule that thread.
-*/
-extern void VG_(need_resched) ( ThreadId prefer_sched );
+/* Make a thread the running thread.  The thread must previously have
+   been sleeping and must not be holding the CPU semaphore.  This will
+   set the thread state to VgTs_Runnable, and the thread will attempt
+   to take the CPU semaphore.  By the time it returns, tid will be the
+   running thread. */
+extern void VG_(set_running) ( ThreadId tid );
-/* Return codes from the scheduler. */
-typedef
- enum {
- VgSrc_Deadlock, /* no runnable threads and no prospect of any
- even if we wait for a long time */
- VgSrc_ExitSyscall, /* client called exit(). This is the normal
- route out. */
- VgSrc_FatalSig /* Killed by the default action of a fatal
- signal */
- }
- VgSchedReturnCode;
+/* Set a thread into a sleeping state. Before the call, the thread
+ must be runnable, and holding the CPU semaphore. When this call
+ returns, the thread will be set to the specified sleeping state,
+ and will not be holding the CPU semaphore. Note that another
+ thread could be running by the time this call returns, so the
+ caller must be careful not to touch any shared state. It is also
+ the caller's responsibility to actually block until the thread is
+ ready to run again. */
+extern void VG_(set_sleeping) ( ThreadId tid, ThreadStatus state );
+
+/* Yield the CPU for a while */
+extern void VG_(vg_yield)(void);
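The set_running/set_sleeping contract above amounts to a hand-off of a single CPU lock. A toy single-threaded model of the invariants (a sketch only; the real scheduler uses a semaphore and real kernel threads, and these names are illustrative):

```c
#include <assert.h>

typedef enum { Runnable, WaitSys, Yielding } ToySt;

static int   cpu_lock_held = 1;   /* the running thread starts holding it */
static ToySt state         = Runnable;

/* Caller must be runnable and hold the lock; this drops the lock and
   records the new sleeping state, after which another thread may run. */
static void toy_set_sleeping(ToySt sleepstate)
{
   assert(cpu_lock_held && state == Runnable);
   state = sleepstate;
   cpu_lock_held = 0;
}

/* Caller must not hold the lock; this (re)acquires it and becomes the
   running thread (the real version blocks on the semaphore here). */
static void toy_set_running(void)
{
   assert(!cpu_lock_held);
   cpu_lock_held = 1;
   state = Runnable;
}
```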
+// The scheduler.
+extern VgSchedReturnCode VG_(scheduler) ( ThreadId tid );
-// The scheduler. 'fatal_sigNo' is only set if VgSrc_FatalSig is returned.
-extern VgSchedReturnCode VG_(scheduler)
- ( Int* exit_code, ThreadId* last_run_thread, Int* fatal_sigNo );
+// Do everything which needs doing before the process finally ends,
+// like printing reports, etc
+extern void VG_(shutdown_actions)(ThreadId tid);
extern void VG_(scheduler_init) ( void );
extern void VG_(pp_sched_status) ( void );
// Longjmp back to the scheduler and thus enter the sighandler immediately.
-extern void VG_(resume_scheduler) ( Int sigNo, vki_siginfo_t *info );
+extern void VG_(resume_scheduler) ( ThreadId tid );
-// Longjmp, ending the scheduler, when a fatal signal occurs in the client.
-extern void VG_(scheduler_handle_fatal_signal)( Int sigNo );
+/* If true, a fault is Valgrind-internal (ie, a bug) */
+extern Bool VG_(my_fault);
/* The red-zone size which we put at the bottom (highest address) of
thread stacks, for paranoia reasons. This can be arbitrary, and
SET_THREAD_REG(zztid, zzval, PTHREQ_RET, post_reg_write, \
Vg_CorePThread, zztid, O_PTHREQ_RET, sizeof(UWord))
-
/* ---------------------------------------------------------------------
Exports of vg_signals.c
------------------------------------------------------------------ */
-extern Bool VG_(do_signal_routing); /* whether scheduler LWP has to route signals */
+/* Set the standard set of blocked signals, used whenever we're not
+ running a client syscall. */
+extern void VG_(block_signals)(ThreadId tid);
-/* RT signal allocation */
-extern Int VG_(sig_rtmin);
-extern Int VG_(sig_rtmax);
-extern Int VG_(sig_alloc_rtsig) ( Int high );
+/* Highest signal the kernel will let us use */
+extern Int VG_(max_signal);
extern void VG_(sigstartup_actions) ( void );
-extern void VG_(deliver_signal) ( ThreadId tid, const vki_siginfo_t *, Bool async );
-extern void VG_(unblock_host_signal) ( Int sigNo );
+/* Modify a thread's state so that when it next runs it will be
+ running in the signal handler (or doing the default action if there
+ is none). */
+extern void VG_(deliver_signal) ( ThreadId tid, const vki_siginfo_t * );
extern Bool VG_(is_sig_ign) ( Int sigNo );
-/* Route pending signals from the scheduler LWP to the appropriate
- thread LWP. */
-extern void VG_(route_signals) ( void );
+/* Poll a thread's set of pending signals, and update the thread's
+   context to deliver one. */
+extern void VG_(poll_signals) ( ThreadId );
/* Fake system calls for signal handling. */
extern void VG_(do_sys_sigaltstack) ( ThreadId tid );
-extern void VG_(do_sys_sigaction) ( ThreadId tid );
+extern Int VG_(do_sys_sigaction) ( Int signo,
+ const struct vki_sigaction *new_act,
+ struct vki_sigaction *old_act );
extern void VG_(do_sys_sigprocmask) ( ThreadId tid, Int how,
vki_sigset_t* set,
vki_sigset_t* oldset );
vki_sigset_t* set,
vki_sigset_t* oldset );
-/* Modify the current thread's state once we have detected it is
- returning from a signal handler. */
-extern Bool VG_(signal_returns) ( ThreadId tid );
-
/* Handy utilities to block/restore all host signals. */
extern void VG_(block_all_host_signals)
( /* OUT */ vki_sigset_t* saved_mask );
extern void VG_(get_sigstack_bounds)( Addr* low, Addr* high );
+/* Extend the stack to cover addr, if possible */
+extern Bool VG_(extend_stack)(Addr addr, UInt maxsize);
+
+/* Returns True if the signal is OK for the client to use */
+extern Bool VG_(client_signal_OK)(Int sigNo);
+
+/* Forces the client's signal handler to SIG_DFL - generally just
+ before using that signal to kill the process. */
+extern void VG_(set_default_handler)(Int sig);
+
+/* Adjust a client's signal mask to match our internal requirements */
+extern void VG_(sanitize_client_sigmask)(ThreadId tid, vki_sigset_t *mask);
+
+/* Wait until a thread-related predicate is true */
+extern void VG_(wait_for_threadstate)(Bool (*pred)(void *), void *arg);
/* ---------------------------------------------------------------------
Exports of vg_mylibc.c
extern void VG_(env_unsetenv) ( Char **env, const Char *varname );
extern void VG_(env_remove_valgrind_env_stuff) ( Char** env );
-
+extern void VG_(nanosleep)(struct vki_timespec *);
/* ---------------------------------------------------------------------
Exports of vg_message.c
------------------------------------------------------------------ */
extern void VG_(send_bytes_to_logging_sink) ( Char* msg, Int nbytes );
// Functions for printing from code within Valgrind, but which runs on the
-// sim'd CPU. Defined here because needed for vg_libpthread.c,
-// vg_replace_malloc.c, plus the rest of the core. The weak attribute
-// ensures the multiple definitions are not a problem. They must be functions
-// rather than macros so that va_list can be used.
+// sim'd CPU. Defined here because needed for vg_replace_malloc.c. The
+// weak attribute ensures the multiple definitions are not a problem. They
+// must be functions rather than macros so that va_list can be used.
__attribute__((weak))
int
extern void VG_(demangle) ( Char* orig, Char* result, Int result_size );
+extern void VG_(reloc_abs_jump) ( UChar *jmp );
/* ---------------------------------------------------------------------
Exports of vg_translate.c
Exports of vg_errcontext.c.
------------------------------------------------------------------ */
-extern void VG_(load_suppressions) ( void );
+typedef
+ enum {
+ ThreadErr = -1, // Thread error
+ MutexErr = -2, // Mutex error
+ }
+ CoreErrorKind;
-extern void VG_(record_pthread_error) ( ThreadId tid, Char* msg );
+extern void VG_(load_suppressions) ( void );
extern void VG_(show_all_errors) ( void );
Exports of vg_procselfmaps.c
------------------------------------------------------------------ */
-/* Reads /proc/self/maps into a static buffer which can be parsed by
- VG_(parse_procselfmaps)(). */
-extern void VG_(read_procselfmaps) ( void );
-
-/* Parses /proc/self/maps, calling `record_mapping' for each entry. If
- `read_from_file' is True, /proc/self/maps is read directly, otherwise
- it's read from the buffer filled by VG_(read_procselfmaps_contents)(). */
+/* Parses /proc/self/maps, calling `record_mapping' for each entry. */
extern
void VG_(parse_procselfmaps) (
- void (*record_mapping)( Addr addr, SizeT len, Char rr, Char ww, Char xx,
+ void (*record_mapping)( Addr addr, SizeT len, UInt prot,
UInt dev, UInt ino, ULong foff,
const UChar *filename ) );
------------------------------------------------------------------ */
typedef struct _Segment Segment;
+typedef struct _CodeRedirect CodeRedirect;
extern Bool VG_(is_object_file) ( const void *hdr );
extern void VG_(mini_stack_dump) ( Addr ips[], UInt n_ips );
extern Bool VG_(get_fnname_nodemangle)( Addr a, Char* fnname, Int n_fnname );
+extern Addr VG_(reverse_search_one_symtab) ( const SegInfo* si, const Char* name );
+
/* Set up some default redirects */
extern void VG_(setup_code_redirect_table) ( void );
+extern Bool VG_(resolve_redir_allsegs)(CodeRedirect *redir);
+
+/* ---------------------------------------------------------------------
+ Exports of vg_redir.c
+ ------------------------------------------------------------------ */
/* Redirection machinery */
extern Addr VG_(code_redirect) ( Addr orig );
+extern void VG_(add_redirect_addr)(const Char *from_lib, const Char *from_sym,
+ Addr to_addr);
+extern void VG_(resolve_seg_redirs)(SegInfo *si);
+extern Bool VG_(resolve_redir)(CodeRedirect *redir, const SegInfo *si);
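VG_(code_redirect) maps an original code address to its replacement, falling back to the address itself when no redirect applies. A toy version of that table and lookup (standalone sketch with illustrative names; the real machinery resolves library/symbol pairs lazily via CodeRedirect):

```c
#include <assert.h>

typedef unsigned long Addr;

/* Toy redirect table: parallel from/to arrays. */
static Addr redir_from[8], redir_to[8];
static int  n_redirs = 0;

static void toy_add_redirect_addr(Addr from, Addr to)
{
   redir_from[n_redirs] = from;
   redir_to[n_redirs]   = to;
   n_redirs++;
}

/* Return the redirected address, or 'orig' unchanged if none matches. */
static Addr toy_code_redirect(Addr orig)
{
   for (int i = 0; i < n_redirs; i++)
      if (redir_from[i] == orig)
         return redir_to[i];
   return orig;
}
```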
+
+/* Wrapping machinery */
+enum return_type {
+ RT_RETURN,
+ RT_LONGJMP,
+ RT_EXIT,
+};
+
+typedef struct _FuncWrapper FuncWrapper;
+struct _FuncWrapper {
+ void *(*before)(va_list args);
+ void (*after) (void *nonce, enum return_type, Word retval);
+};
+
+extern void VG_(wrap_function)(Addr eip, const FuncWrapper *wrapper);
+extern const FuncWrapper *VG_(is_wrapped)(Addr eip);
+extern Bool VG_(is_wrapper_return)(Addr eip);
+
+/* Primary interface for adding wrappers for client-side functions. */
+extern CodeRedirect *VG_(add_wrapper)(const Char *from_lib, const Char *from_sym,
+ const FuncWrapper *wrapper);
+
+extern Bool VG_(is_resolved)(const CodeRedirect *redir);
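The FuncWrapper pair above brackets each call to a wrapped function: 'before' runs on entry and returns a nonce, which the core hands back to 'after' on exit along with how the call ended. A self-contained sketch of that flow (types re-declared here so it compiles alone; the driver is a stand-in for the core, not Valgrind's actual invocation path):

```c
#include <assert.h>
#include <stdarg.h>

typedef long Word;
enum return_type { RT_RETURN, RT_LONGJMP, RT_EXIT };

typedef struct {
   void *(*before)(va_list args);
   void  (*after) (void *nonce, enum return_type, Word retval);
} FuncWrapper;

static int  calls_seen  = 0;
static Word last_retval = 0;

/* 'before' runs on entry; its return value is the nonce handed to
   'after' on exit, so per-call state can flow between the two. */
static void *my_before(va_list args)
{
   (void)args;
   calls_seen++;
   return &calls_seen;
}

static void my_after(void *nonce, enum return_type how, Word retval)
{
   (void)nonce;
   if (how == RT_RETURN)
      last_retval = retval;
}

static const FuncWrapper my_wrapper = { my_before, my_after };

/* Variadic helper so a va_list can be fabricated legally for 'before'
   (this plays the role of the core calling the wrapper). */
static void *call_before(const FuncWrapper *w, ...)
{
   va_list ap;
   va_start(ap, w);
   void *nonce = w->before(ap);
   va_end(ap);
   return nonce;
}
```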
/* ---------------------------------------------------------------------
Exports of vg_main.c
Char* VG_(build_child_VALGRINDCLO) ( Char* exename );
Char* VG_(build_child_exename) ( void );
+/* The master thread is the one which will be responsible for mopping
+ everything up at exit. Normally it is tid 1, since that's the
+ first thread created, but it may be something else after a
+ fork(). */
+extern ThreadId VG_(master_tid);
+
/* Called when some unhandleable client behaviour is detected.
Prints a msg and aborts. */
extern void VG_(unimplemented) ( Char* msg )
#define SF_CORE (1 << 12) // allocated by core on behalf of the client
#define SF_VALGRIND (1 << 13) // a valgrind-internal mapping - not in client
#define SF_CODE (1 << 14) // segment contains cached code
+#define SF_DEVICE (1 << 15) // device mapping; avoid careless touching
struct _Segment {
UInt prot; // VKI_PROT_*
extern Bool VG_(seg_contains)(const Segment *s, Addr ptr, SizeT size);
extern Bool VG_(seg_overlaps)(const Segment *s, Addr ptr, SizeT size);
-extern void VG_(pad_address_space) (void);
-extern void VG_(unpad_address_space)(void);
+extern Segment *VG_(split_segment)(Addr a);
+
+extern void VG_(pad_address_space) (Addr start);
+extern void VG_(unpad_address_space)(Addr start);
extern REGPARM(2)
void VG_(unknown_SP_update) ( Addr old_SP, Addr new_SP );
-/* ---------------------------------------------------------------------
- Exports of vg_proxylwp.c
- ------------------------------------------------------------------ */
+///* Search /proc/self/maps for changes which aren't reflected in the
+// segment list */
+//extern void VG_(sync_segments)(UInt flags);
-/* Issue a syscall for thread tid */
-extern Int VG_(sys_issue)(ThreadId tid);
-
-extern void VG_(proxy_init) ( void );
-extern void VG_(proxy_create) ( ThreadId tid );
-extern void VG_(proxy_delete) ( ThreadId tid, Bool force );
-extern void VG_(proxy_results) ( void );
-extern void VG_(proxy_sendsig) ( ThreadId fromTid, ThreadId toTid, Int signo );
-extern void VG_(proxy_setsigmask)(ThreadId tid);
-extern void VG_(proxy_sigack) ( ThreadId tid, const vki_sigset_t *);
-extern void VG_(proxy_abort_syscall) ( ThreadId tid );
-extern void VG_(proxy_waitsig) ( void );
-extern void VG_(proxy_wait_sys) (ThreadId tid, Bool restart);
-
-extern void VG_(proxy_shutdown) ( void ); // shut down the syscall workers
-extern Int VG_(proxy_resfd) ( void ); // FD something can select on to know
- // a syscall finished
-
-/* Sanity-check the whole proxy-LWP machinery */
-void VG_(sanity_check_proxy)(void);
-
-/* Send a signal from a thread's proxy to the thread. This longjmps
- back into the proxy's main loop, so it doesn't return. */
-__attribute__ ((__noreturn__))
-extern void VG_(proxy_handlesig)( const vki_siginfo_t *siginfo,
- Addr ip, Int sysnum );
+/* Return string for prot */
+extern const HChar *VG_(prot_str)(UInt prot);
+
+//extern void VG_(print_shadow_stats)();
/* ---------------------------------------------------------------------
Exports of vg_syscalls.c
extern HChar* VG_(resolve_filename_nodup)(Int fd);
extern HChar* VG_(resolve_filename)(Int fd);
-extern Bool VG_(pre_syscall) ( ThreadId tid );
-extern void VG_(post_syscall)( ThreadId tid, Bool restart );
+/* Simple Valgrind-internal atfork mechanism */
+extern void VG_(do_atfork_pre) (ThreadId tid);
+extern void VG_(do_atfork_parent)(ThreadId tid);
+extern void VG_(do_atfork_child) (ThreadId tid);
+
+
+extern void VG_(client_syscall) ( ThreadId tid );
+
+extern void VG_(post_syscall) ( ThreadId tid );
extern Bool VG_(is_kerror) ( Word res );
void VG_(record_fd_open)(ThreadId tid, Int fd, char *pathname);
// Flags describing syscall wrappers
-#define Special (1 << 0)
-#define MayBlock (1 << 1)
-#define NBRunInLWP (1 << 2) // non-blocking, but must run in LWP context
-#define PostOnFail (1 << 3)
+#define Special (1 << 0) /* handled specially */
+#define MayBlock (1 << 1) /* may block */
+#define PostOnFail (1 << 2) /* call POST() function on failure */
+#define PadAddr (1 << 3) /* pad+unpad address space around syscall */
+#define Done (1 << 4) /* used if a PRE() did the syscall */
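Each syscall wrapper carries an OR-ed combination of these flags; for instance, a hypothetical wrapper that may block and wants its POST run even on failure would be tagged MayBlock|PostOnFail. A standalone sketch of testing such a mask (the decision function is illustrative logic, not the core's actual dispatch):

```c
#include <assert.h>

#define Special    (1 << 0)  /* handled specially */
#define MayBlock   (1 << 1)  /* may block */
#define PostOnFail (1 << 2)  /* call POST() function on failure */
#define PadAddr    (1 << 3)  /* pad+unpad address space around syscall */
#define Done       (1 << 4)  /* used if a PRE() did the syscall */

/* Should the POST() function run, given the wrapper's flags and
   whether the syscall failed? */
static int should_run_post(unsigned flags, int failed)
{
   return !failed || (flags & PostOnFail);
}
```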
// Templates for generating the PRE and POST macros. For ones that must be
// publically visible, use an empty 'qual', 'prefix' should start with
GEN_SYSCALL_WRAPPER(sys_munlockall);
GEN_SYSCALL_WRAPPER(sys_sched_setparam);
GEN_SYSCALL_WRAPPER(sys_sched_getparam);
+GEN_SYSCALL_WRAPPER(sys_sched_rr_get_interval);
GEN_SYSCALL_WRAPPER(sys_sched_setscheduler);
GEN_SYSCALL_WRAPPER(sys_sched_getscheduler);
GEN_SYSCALL_WRAPPER(sys_sched_yield);
GEN_SYSCALL_WRAPPER(sys_clock_settime);
GEN_SYSCALL_WRAPPER(sys_clock_gettime);
GEN_SYSCALL_WRAPPER(sys_clock_getres);
+GEN_SYSCALL_WRAPPER(sys_clock_nanosleep);
GEN_SYSCALL_WRAPPER(sys_getcwd);
GEN_SYSCALL_WRAPPER(sys_symlink);
GEN_SYSCALL_WRAPPER(sys_getgroups);
GEN_SYSCALL_WRAPPER(sys_flock); // 4.4BSD
GEN_SYSCALL_WRAPPER(sys_poll); // XPG4-UNIX
GEN_SYSCALL_WRAPPER(sys_getrusage); // SVr4, 4.3BSD
+GEN_SYSCALL_WRAPPER(sys_stime); // SVr4, SVID, X/OPEN
GEN_SYSCALL_WRAPPER(sys_settimeofday); // SVr4, 4.3BSD (non-POSIX)
GEN_SYSCALL_WRAPPER(sys_getpriority); // SVr4, 4.4BSD
GEN_SYSCALL_WRAPPER(sys_setpriority); // SVr4, 4.4BSD
GEN_SYSCALL_WRAPPER(sys_fremovexattr); // * L?
GEN_SYSCALL_WRAPPER(sys_sched_setaffinity); // * L?
GEN_SYSCALL_WRAPPER(sys_sched_getaffinity); // * L?
-GEN_SYSCALL_WRAPPER(sys_exit_group); // * ?
GEN_SYSCALL_WRAPPER(sys_lookup_dcookie); // (*/32/64) L
GEN_SYSCALL_WRAPPER(sys_set_tid_address); // * ?
GEN_SYSCALL_WRAPPER(sys_statfs64); // * (?)
GEN_SYSCALL_WRAPPER(sys_mq_timedreceive); // * P?
GEN_SYSCALL_WRAPPER(sys_mq_notify); // * P?
GEN_SYSCALL_WRAPPER(sys_mq_getsetattr); // * P?
+GEN_SYSCALL_WRAPPER(sys_tkill); // * L
+GEN_SYSCALL_WRAPPER(sys_tgkill); // * L
+GEN_SYSCALL_WRAPPER(sys_gettid); // * L?
#undef GEN_SYSCALL_WRAPPER
extern ULong* VG_(tt_fast) [VG_TT_FAST_SIZE];
extern UInt* VG_(tt_fastN)[VG_TT_FAST_SIZE];
+
extern void VG_(init_tt_tc) ( void );
extern
#define vgPlain_do_syscall6(s,a,b,c,d,e,f) VG_(do_syscall)((s),(a),(b),(c),(d),(e),(f))
extern Int VG_(clone) ( Int (*fn)(void *), void *stack, Int flags, void *arg,
- Int *child_tid, Int *parent_tid);
+ Int *child_tid, Int *parent_tid, vki_modify_ldt_t * );
extern void VG_(sigreturn)(void);
/* ---------------------------------------------------------------------
extern const Char VG_(trampoline_code_start);
extern const Int VG_(trampoline_code_length);
extern const Int VG_(tramp_sigreturn_offset);
+extern const Int VG_(tramp_rt_sigreturn_offset);
extern const Int VG_(tramp_syscall_offset);
/* ---------------------------------------------------------------------
// Returns the architecture and subarchitecture, or indicates
// that this subarchitecture is unable to run Valgrind
// Returns False to indicate we cannot proceed further.
-
extern Bool VGA_(getArchAndSubArch)( /*OUT*/VexArch*,
/*OUT*/VexSubArch* );
-
// Accessors for the ThreadArchState
#define INSTR_PTR(regs) ((regs).vex.ARCH_INSTR_PTR)
#define STACK_PTR(regs) ((regs).vex.ARCH_STACK_PTR)
#define FRAME_PTR(regs) ((regs).vex.ARCH_FRAME_PTR)
-
#define CLREQ_ARGS(regs) ((regs).vex.ARCH_CLREQ_ARGS)
#define PTHREQ_RET(regs) ((regs).vex.ARCH_PTHREQ_RET)
#define CLREQ_RET(regs) ((regs).vex.ARCH_CLREQ_RET)
-
-
// Offsets for the Vex state
#define O_STACK_PTR (offsetof(VexGuestArchState, ARCH_STACK_PTR))
#define O_FRAME_PTR (offsetof(VexGuestArchState, ARCH_FRAME_PTR))
-
#define O_CLREQ_RET (offsetof(VexGuestArchState, ARCH_CLREQ_RET))
#define O_PTHREQ_RET (offsetof(VexGuestArchState, ARCH_PTHREQ_RET))
extern void VGA_(set_arg_and_bogus_ret) ( ThreadId tid, UWord arg, Addr ret );
extern void VGA_(thread_initial_stack) ( ThreadId tid, UWord arg, Addr ret );
+// OS/Platform-specific thread clear (after thread exit)
+extern void VGA_(os_state_clear)(ThreadState *);
+
+// OS/Platform-specific thread init (at scheduler init time)
+extern void VGA_(os_state_init)(ThreadState *);
+
+// Run a thread from beginning to end. Does not return if tid == VG_(master_tid).
+void VGA_(thread_wrapper)(ThreadId tid);
+
+// Like VGA_(thread_wrapper), but it allocates a stack before calling
+// VGA_(thread_wrapper) on that stack, as if it had been set up by
+// clone()
+void VGA_(main_thread_wrapper)(ThreadId tid) __attribute__ ((__noreturn__));
+
+// Return how many bytes of a thread's Valgrind stack are unused
+Int VGA_(stack_unused)(ThreadId tid);
+
+// Terminate the process. Does not return.
+void VGA_(terminate)(ThreadId tid, VgSchedReturnCode src) __attribute__((__noreturn__));
+
+// wait until all other threads are dead
+extern void VGA_(reap_threads)(ThreadId self);
+
+// handle an arch-specific client request
+extern Bool VGA_(client_request)(ThreadId tid, UWord *args);
+
// Symtab stuff
extern UInt* VGA_(reg_addr_from_tst) ( Int regno, ThreadArchState* );
// For attaching the debugger
extern Int VGA_(ptrace_setregs_from_tst) ( Int pid, ThreadArchState* arch );
+// Used by leakcheck
+extern void VGA_(mark_from_registers)(ThreadId tid, void (*marker)(Addr));
+
// Signal stuff
extern void VGA_(push_signal_frame) ( ThreadId tid, Addr sp_top_of_frame,
const vki_siginfo_t *siginfo,
void *handler, UInt flags,
- const vki_sigset_t *mask);
-extern Int VGA_(pop_signal_frame) ( ThreadId tid );
-
-// libpthread stuff
-typedef struct _ThreadArchAux ThreadArchAux;
-
-void VGA_(thread_create) ( ThreadArchAux *aux );
-void VGA_(thread_wrapper)( ThreadArchAux *aux );
-void VGA_(thread_exit) ( void );
-
-Bool VGA_(has_tls) ( void );
+ const vki_sigset_t *mask,
+ void *restorer );
+////typedef struct _ThreadArchAux ThreadArchAux;
#define MY__STRING(__str) #__str
// Assertion to use in code running on the simulated CPU.
struct SyscallTableEntry {
UInt *flags_ptr;
- void (*before)(ThreadId tid, ThreadState *tst);
+ void (*before)(ThreadId tid, ThreadState *tst /*, UInt *flags*/);
void (*after) (ThreadId tid, ThreadState *tst);
};
extern void VGA_(restart_syscall)(ThreadArchState* arch);
-/* We need our own copy of VG_(do_syscall)() to handle a special
- race-condition. If we've got signals unblocked, and we take a
- signal in the gap either just before or after the syscall, we may
- end up not running the syscall at all, or running it more than
- once.
-
- The solution is to make the signal handler derive the proxy's
- precise state by looking to see which eip it is executing at
- exception time.
-
- Ranges:
-
- VGA_(sys_before) ... VGA_(sys_restarted):
- Setting up register arguments and running state. If
- interrupted, then the syscall should be considered to return
- ERESTARTSYS.
-
- VGA_(sys_restarted):
- If interrupted and eip==VGA_(sys_restarted), then either the syscall
- was about to start running, or it has run, was interrupted and
- the kernel wants to restart it. eax still contains the
- syscall number. If interrupted, then the syscall return value
- should be ERESTARTSYS.
-
- VGA_(sys_after):
- If interrupted and eip==VGA_(sys_after), the syscall either just
- finished, or it was interrupted and the kernel doesn't want to
- restart it. Either way, eax equals the correct return value
- (either the actual return value, or EINTR).
-
- VGA_(sys_after) ... VGA_(sys_done):
- System call is complete, but the state hasn't been updated,
- nor has the result been written back. eax contains the return
- value.
-
- Freakin' horrible...
+/*
+ Perform a syscall on behalf of a client thread, using a specific
+ signal mask. On completion, the signal mask is set to restore_mask
+ (which presumably blocks almost everything). If a signal happens
+ during the syscall, the handler should call
+ VGA_(interrupted_syscall)() to adjust the thread's context to do the
+ right thing.
*/
-extern const Addr VGA_(sys_before), VGA_(sys_restarted),
- VGA_(sys_after), VGA_(sys_done);
-
-extern void VGA_(do_thread_syscall)
- ( UWord sys,
- UWord arg1, UWord arg2, UWord arg3,
- UWord arg4, UWord arg5, UWord arg6,
- /*OUT*/HWord *resultP,
- /*enum PXState*/Int *stateP,
- /*enum PXState*/Int poststate
- );
+extern void VGA_(client_syscall)(Int syscallno, ThreadState *tst,
+ const vki_sigset_t *syscall_mask);
+
+/*
+ Fix up the thread's state because a syscall may have been
+  interrupted with a signal.  The 'restart' flag says whether the
+  syscall should be restarted where possible, rather than failing
+  with an interrupted-syscall error.
+ */
+extern void VGA_(interrupted_syscall)(ThreadId tid,
+ struct vki_ucontext *uc,
+ Bool restart);
+
+
+///* ---------------------------------------------------------------------
+// Thread modelling
+// ------------------------------------------------------------------ */
+//extern void VG_(tm_thread_create) (ThreadId creator, ThreadId tid, Bool detached);
+//extern void VG_(tm_thread_exit) (ThreadId tid);
+//extern Bool VG_(tm_thread_exists) (ThreadId tid);
+//extern void VG_(tm_thread_detach) (ThreadId tid);
+//extern void VG_(tm_thread_join) (ThreadId joiner, ThreadId joinee);
+//extern void VG_(tm_thread_switchto)(ThreadId tid);
+//
+//extern void VG_(tm_mutex_init) (ThreadId tid, Addr mutexp);
+//extern void VG_(tm_mutex_destroy)(ThreadId tid, Addr mutexp);
+//extern void VG_(tm_mutex_trylock)(ThreadId tid, Addr mutexp);
+//extern void VG_(tm_mutex_giveup) (ThreadId tid, Addr mutexp);
+//extern void VG_(tm_mutex_acquire)(ThreadId tid, Addr mutexp);
+//extern void VG_(tm_mutex_tryunlock)(ThreadId tid, Addr mutexp);
+//extern void VG_(tm_mutex_unlock) (ThreadId tid, Addr mutexp);
+//extern Bool VG_(tm_mutex_exists) (Addr mutexp);
+//
+//extern UInt VG_(tm_error_update_extra) (Error *err);
+//extern Bool VG_(tm_error_equal) (VgRes res, Error *e1, Error *e2);
+//extern void VG_(tm_error_print) (Error *err);
+//
+//extern void VG_(tm_init) ();
+//
+//extern void VG_(tm_cond_init) (ThreadId tid, Addr condp);
+//extern void VG_(tm_cond_destroy) (ThreadId tid, Addr condp);
+//extern void VG_(tm_cond_wait) (ThreadId tid, Addr condp, Addr mutexp);
+//extern void VG_(tm_cond_wakeup) (ThreadId tid, Addr condp, Addr mutexp);
+//extern void VG_(tm_cond_signal) (ThreadId tid, Addr condp);
+//
+///* ----- pthreads ----- */
+//extern void VG_(pthread_init) ();
+//extern void VG_(pthread_startfunc_wrapper)(Addr wrapper);
+//
+//struct vg_pthread_newthread_data {
+// void *(*startfunc)(void *arg);
+// void *arg;
+//};
/* ---------------------------------------------------------------------
Finally - autoconf-generated settings
/* Magic values that the guest state might be set to when returning to the
dispatcher. The only other legitimate value is to point to the
start of the thread's VEX guest state. These also are return values from
- VG_(run_innerloop) to the scheduler.
+   VG_(run_innerloop) to the scheduler.
*/
/* Defines values for JMP_EMWARN, JMP_SYSCALL, JMP_CLIENTREQ and
JMP_YIELD */
those from libvex_trc_values.h. */
#define VG_TRC_INNER_FASTMISS 37 /* TRC only; means fast-cache miss. */
#define VG_TRC_INNER_COUNTERZERO 41 /* TRC only; means bb ctr == 0 */
-#define VG_TRC_UNRESUMABLE_SIGNAL 43 /* TRC only; got sigsegv/sigbus */
+#define VG_TRC_FAULT_SIGNAL 43 /* TRC only; got sigsegv/sigbus */
#define VG_TRC_INVARIANT_FAILED 47 /* TRC only; invariant violation */
+#define VG_MAX_TRC 128 /* Highest possible TRC value */
+
/* Constants for the fast translation lookup cache. */
#define VG_TT_FAST_BITS 16
#define VG_TT_FAST_SIZE (1 << VG_TT_FAST_BITS)
#define VG_TT_FAST_MASK ((VG_TT_FAST_SIZE) - 1)
-/* Assembly code stubs make this request */
-#define VG_USERREQ__SIGNAL_RETURNS 0x4001
-
// XXX: all this will go into x86/ eventually...
/*
0 - standard feature flags
my $args = join ', ', @args;
print "$ret $pfxmap{$pfx}($func)($args);\n";
+ print "$ret VG_(missing_$func)($args);\n";
print "Bool VG_(defined_$func)(void);\n";
}
} elsif ($output eq "toolproto") {
my ($pfx, $ret, $func, @args) = @_;
my $args = join ", ", @args;
- print "static $ret missing_${pfx}_$func($args) {\n";
+ print "__attribute__ ((weak))\n$ret VG_(missing_$func)($args) {\n";
print " VG_(missing_tool_func)(\"${pfx}_$func\");\n";
print "}\n";
print "Bool VG_(defined_$func)(void) {\n";
- print " return $struct.${pfx}_$func != missing_${pfx}_$func;\n";
+ print " return $struct.${pfx}_$func != VG_(missing_$func);\n";
print "}\n\n";
};
$indent = " ";
$generate = sub ($$$@) {
my ($pfx, $ret, $func, @args) = @_;
- print "$indent.${pfx}_$func = missing_${pfx}_$func,\n"
+ print "$indent.${pfx}_$func = VG_(missing_$func),\n"
};
$indent = " ";
} elsif ($output eq "initfunc") {
void VG_(init_$func)($ret (*func)($args))
{
if (func == NULL)
- func = missing_${pfx}_$func;
+ func = VG_(missing_$func);
if (VG_(defined_$func)())
VG_(printf)("Warning tool is redefining $func\\n");
if (func == TL_($func))
include $(top_srcdir)/Makefile.all.am
include $(top_srcdir)/Makefile.core-AM_CPPFLAGS.am
-AM_CFLAGS = $(WERROR) -Winline -Wall -Wshadow -O -fomit-frame-pointer -g
+AM_CFLAGS = $(WERROR) -Wmissing-prototypes -Winline -Wall -Wshadow -O -g
noinst_HEADERS = \
core_os.h
noinst_LIBRARIES = libos.a
libos_a_SOURCES = \
+ core_os.c \
+ sema.c \
syscalls.c
extern void VGA_(linux_##x##_before)(ThreadId tid, ThreadState *tst); \
extern void VGA_(linux_##x##_after) (ThreadId tid, ThreadState *tst)
+LINUX_SYSCALL_WRAPPER(sys_exit_group);
+
LINUX_SYSCALL_WRAPPER(sys_mount);
LINUX_SYSCALL_WRAPPER(sys_oldumount);
LINUX_SYSCALL_WRAPPER(sys_umount);
LINUX_SYSCALL_WRAPPER(sys_io_submit);
LINUX_SYSCALL_WRAPPER(sys_io_cancel);
+#define FUTEX_SEMA 0
+
+#if FUTEX_SEMA
+/* ---------------------------------------------------------------------
+ Definition for a semaphore. Defined in terms of futex.
+
+ Futex semaphore operations taken from futex-2.2/usersem.h
+ ------------------------------------------------------------------ */
+typedef struct {
+ int count;
+} vg_sema_t;
+
+extern Int __futex_down_slow(vg_sema_t *, int, struct vki_timespec *);
+extern Int __futex_up_slow(vg_sema_t *);
+
+void VG_(sema_init)(vg_sema_t *);
+static inline void VG_(sema_deinit)(vg_sema_t *sema)
+{
+}
+
+static inline void VG_(sema_down)(vg_sema_t *futx)
+{
+ Int val, woken = 0;
+
+ /* Returns new value */
+ while ((val = __futex_down(&futx->count)) != 0) {
+ Int ret = __futex_down_slow(futx, val, NULL);
+ if (ret < 0)
+ return; /* error */
+ else if (ret == 1)
+ return; /* passed */
+ else if (ret == 0)
+ woken = 1; /* slept */
+ else
+ /* loop */;
+ }
+ /* If we were woken, someone else might be sleeping too: set to -1 */
+ if (woken) {
+ futx->count = -1;
+ }
+ return;
+}
+
+/* If __futex_up increments count from 0 -> 1, no one was waiting.
+ Otherwise, set to 1 and tell kernel to wake them up. */
+static inline void VG_(sema_up)(vg_sema_t *futx)
+{
+ if (!__futex_up(&futx->count))
+ __futex_up_slow(futx);
+}
+#else /* !FUTEX_SEMA */
+/*
+ Not really a semaphore, but use a pipe for a token-passing scheme
+ */
+typedef struct {
+ Int pipe[2];
+ Int owner_thread; /* who currently has it */
+} vg_sema_t;
+
+void VG_(sema_init)(vg_sema_t *);
+void VG_(sema_deinit)(vg_sema_t *);
+void VG_(sema_down)(vg_sema_t *sema);
+void VG_(sema_up)(vg_sema_t *sema);
+
+#endif /* FUTEX_SEMA */
+
+/* OS-specific thread state */
+typedef struct {
+ /* who we are */
+ Int lwpid; /* PID of kernel task */
+ Int threadgroup; /* thread group id */
+
+ /* how we were started */
+ UInt clone_flags; /* flags passed to clone() to create this thread */
+ Int *parent_tidptr;
+ Int *child_tidptr;
+
+ ThreadId parent; /* parent tid (if any) */
+
+ /* runtime details */
+ UInt *stack; /* stack base */
+ UInt stacksize; /* stack size in UInts */
+
+ /* exit details */
+ Int exitcode; /* in the case of exitgroup, set by someone else */
+ Int fatalsig; /* fatal signal */
+} os_thread_t;
+
#endif // __LINUX_CORE_OS_H
/*--------------------------------------------------------------------*/
#define PRE(name, f) PRE_TEMPLATE( , vgArch_linux, name, f)
#define POST(name) POST_TEMPLATE( , vgArch_linux, name)
+PRE(sys_exit_group, Special)
+{
+ ThreadId t;
+
+ PRINT("exit_group( %d )", ARG1);
+ PRE_REG_READ1(void, "exit_group", int, exit_code);
+
+ /* A little complex; find all the threads with the same threadgroup
+ as this one (including this one), and mark them to exit */
+ for (t = 1; t < VG_N_THREADS; t++) {
+ if (VG_(threads)[t].status == VgTs_Empty || /* not alive */
+ VG_(threads)[t].os_state.threadgroup != tst->os_state.threadgroup) /* not our group */
+ continue;
+
+ VG_(threads)[t].exitreason = VgSrc_ExitSyscall;
+ VG_(threads)[t].os_state.exitcode = ARG1;
+
+ if (t != tid)
+ VG_(kill_thread)(t); /* unblock it, if blocked */
+ }
+
+ /* exit_group doesn't return anything (perhaps it doesn't return?)
+ Nevertheless, if we don't do this, the result-not-assigned-
+ yet-you-said-you-were-Special assertion in the main syscall
+ handling logic will fire. Hence ..
+ */
+ SET_RESULT(0);
+}
+
PRE(sys_mount, MayBlock)
{
// Nb: depending on 'flags', the 'type' and 'data' args may be ignored.
// We are conservative and check everything, except the memory pointed to
// by 'data'.
- PRINT( "sys_mount( %p, %p, %p, %p, %p )" ,ARG1,ARG2,ARG3);
+ PRINT( "sys_mount( %p, %p, %p, %p, %p )" ,ARG1,ARG2,ARG3,ARG4,ARG5);
PRE_REG_READ5(long, "mount",
char *, source, char *, target, char *, type,
unsigned long, flags, void *, data);
PRE(sys_sysctl, 0)
{
PRINT("sys_sysctl ( %p )", ARG1 );
+ struct __vki_sysctl_args *args;
+ args = (struct __vki_sysctl_args *)ARG1;
PRE_REG_READ1(long, "sysctl", struct __sysctl_args *, args);
PRE_MEM_WRITE( "sysctl(args)", ARG1, sizeof(struct __vki_sysctl_args) );
+ if (!VG_(is_addressable)(ARG1, sizeof(struct __vki_sysctl_args), VKI_PROT_READ)) {
+ SET_RESULT( -VKI_EFAULT );
+ return;
+ }
+
+ PRE_MEM_READ("sysctl(name)", (Addr)args->name, args->nlen * sizeof(*args->name));
+ if (args->newval != NULL)
+ PRE_MEM_READ("sysctl(newval)", (Addr)args->newval, args->newlen);
+ if (args->oldlenp != NULL) {
+ PRE_MEM_READ("sysctl(oldlenp)", (Addr)args->oldlenp, sizeof(*args->oldlenp));
+ PRE_MEM_WRITE("sysctl(oldval)", (Addr)args->oldval, *args->oldlenp);
+ }
}
POST(sys_sysctl)
{
- POST_MEM_WRITE( ARG1, sizeof(struct __vki_sysctl_args) );
+ struct __vki_sysctl_args *args;
+ args = (struct __vki_sysctl_args *)ARG1;
+ if (args->oldlenp != NULL) {
+ POST_MEM_WRITE((Addr)args->oldlenp, sizeof(*args->oldlenp));
+ POST_MEM_WRITE((Addr)args->oldval, *args->oldlenp);
+ }
}
PRE(sys_prctl, MayBlock)
int, option, unsigned long, arg2, unsigned long, arg3,
unsigned long, arg4, unsigned long, arg5);
// XXX: totally wrong... we need to look at the 'option' arg, and do
- // PRE_MEM_READs/PRE_MEM_WRITEs as necessary...
+ // SYS_PRE_MEM_READs/SYS_PRE_MEM_WRITEs as necessary...
}
PRE(sys_sendfile, MayBlock)
size = PGROUNDUP(sizeof(struct vki_aio_ring) +
ARG1*sizeof(struct vki_io_event));
addr = VG_(find_map_space)(0, size, True);
- VG_(map_segment)(addr, size, VKI_PROT_READ|VKI_PROT_EXEC, SF_FIXED);
- VG_(pad_address_space)();
- SET_RESULT( VG_(do_syscall2)(SYSNO, ARG1, ARG2) );
- VG_(unpad_address_space)();
+ if (addr == 0) {
+ SET_RESULT( -VKI_ENOMEM );
+ return;
+ }
+
+ VG_(map_segment)(addr, size, VKI_PROT_READ|VKI_PROT_WRITE, SF_FIXED);
+
+ VG_(pad_address_space)(0);
+ VG_(unpad_address_space)(0);
if (RES == 0) {
struct vki_aio_ring *r = *(struct vki_aio_ring **)ARG2;
// before the syscall, while the aio_ring structure still exists. (And we
// know that we must look at the aio_ring structure because Tom inspected the
// kernel and glibc sources to see what they do, yuk.)
+//
+// XXX This segment can be implicitly unmapped when aio
+// file-descriptors are closed...
PRE(sys_io_destroy, Special)
{
Segment *s = VG_(find_segment)(ARG1);
SET_RESULT( VG_(do_syscall1)(SYSNO, ARG1) );
- if (RES == 0 && s != NULL && VG_(seg_contains)(s, ARG1, size)) {
+ if (RES == 0 && s != NULL) {
VG_TRACK( die_mem_munmap, ARG1, size );
VG_(unmap_range)(ARG1, size);
}
ARG4, sizeof(struct vki_io_event)*ARG3 );
if (ARG5 != 0)
PRE_MEM_READ( "io_getevents(timeout)",
- ARG5, sizeof(struct vki_timespec));
+ ARG5, sizeof(struct vki_timespec));
}
POST(sys_io_getevents)
#include "core.h"
#include "ume.h"
+#include "memcheck/memcheck.h"
static int stack[SIGSTKSZ*4];
seen = 0;
for(; auxv->a_type != AT_NULL; auxv++) {
if (0)
- printf("doing auxv %p %4lld: %lld %p\n",
- auxv, (ULong)auxv->a_type, (ULong)auxv->u.a_val, auxv->u.a_ptr);
+ printf("doing auxv %p %5lld: %lld %p\n",
+ auxv, (Long)auxv->a_type, (Long)auxv->u.a_val, auxv->u.a_ptr);
switch(auxv->a_type) {
case AT_PHDR:
foreach_map(prmap, /*dummy*/NULL);
}
- jmp_with_stack(info.init_eip, (Addr)esp);
+ jmp_with_stack((void (*)(void))info.init_eip, (Addr)esp);
}
int main(int argc, char** argv)
{
struct rlimit rlim;
- const char *cp = getenv(VALGRINDLIB);
-
- if (cp != NULL)
- valgrind_lib = cp;
+ const char *cp;
// Initial stack pointer is to argc, which is immediately before argv[0]
// on the stack. Nb: Assumes argc is word-aligned.
init_sp = argv - 1;
+ /* The Linux libc startup sequence leaves this in an apparently
+ undefined state, but it really is defined, so mark it so. */
+ VALGRIND_MAKE_READABLE(init_sp, sizeof(int));
+
+ cp = getenv(VALGRINDLIB);
+
+ if (cp != NULL)
+ valgrind_lib = cp;
+
/* Set the address space limit as high as it will go, since we make
a lot of very large mappings. */
getrlimit(RLIMIT_AS, &rlim);
setrlimit(RLIMIT_AS, &rlim);
/* move onto another stack so we can play with the main one */
- jmp_with_stack((Addr)main2, (Addr)stack + sizeof(stack));
+ jmp_with_stack(main2, (Addr)stack + sizeof(stack));
}
/*--------------------------------------------------------------------*/
info->phnum = e->e.e_phnum;
info->entry = e->e.e_entry + ebase;
+ info->phdr = 0;
for(i = 0; i < e->e.e_phnum; i++) {
ESZ(Phdr) *ph = &e->p[i];
}
}
+ if (info->phdr == 0)
+ info->phdr = minaddr + e->e.e_phoff;
+
if (info->exe_base != info->exe_end) {
if (minaddr >= maxaddr ||
(minaddr + ebase < info->exe_base ||
entry = baseoff + interp->e.e_entry;
info->interp_base = (ESZ(Addr))base;
+ free(interp->p);
free(interp);
} else
entry = (void *)e->e.e_entry;
info->init_eip = (Addr)entry;
+ free(e->p);
free(e);
return 0;
/*--- General stuff ---*/
/*------------------------------------------------------------*/
+extern
void foreach_map(int (*fn)(char *start, char *end,
const char *perm, off_t offset,
int maj, int min, int ino, void* extra),
// Jump to a new 'ip' with the stack 'sp'. This is intended
// to simulate the initial CPU state when the kernel starts a program
// after exec, and so it should clear all the other registers.
-void jmp_with_stack(Addr ip, Addr sp) __attribute__((noreturn));
+extern
+__attribute__((noreturn))
+void jmp_with_stack(void (*eip)(void), Addr sp);
/*------------------------------------------------------------*/
/*--- Loading ELF files ---*/
// checks execute permissions, sets up interpreter if program is a script,
// reads headers, maps file into memory, and returns important info about
// the program.
-int do_exec(const char *exe, struct exeinfo *info);
+extern int do_exec(const char *exe, struct exeinfo *info);
/*------------------------------------------------------------*/
/*--- Finding and dealing with auxv ---*/
} u;
};
-struct ume_auxv *find_auxv(UWord* orig_esp);
+extern struct ume_auxv *find_auxv(UWord* orig_esp);
/* Our private auxv entries */
#define AT_UME_PADFD 0xff01 /* padding file fd */
}
__attribute__ ((weak))
-void TL_(free)( ThreadId tid, void* p )
+void TL_(free)( ThreadId tid, void* p )
{
/* see comment for TL_(malloc)() above */
if (VG_(tl_malloc_called_by_scheduler))
/* Check that the demangler isn't leaking. */
/* 15 Feb 02: if this assertion fails, this is not a disaster.
Comment it out, and let me know. (jseward@acm.org). */
- vg_assert(VG_(is_empty_arena)(VG_AR_DEMANGLE));
+ // 9 Feb 05: it fails very occasionally, as reported in bug #87480.
+ // It's very rare, and not a disaster, so let it slide.
+ //vg_assert(VG_(is_empty_arena)(VG_AR_DEMANGLE));
/* VG_(show_all_arena_stats)(); */
/* forwards ... */
static Supp* is_suppressible_error ( Error* err );
+static ThreadId last_tid_printed = 1;
/*------------------------------------------------------------*/
/*--- Error type ---*/
/* Note: it is imperative this doesn't overlap with (0..) at all, as tools
* effectively extend it by defining their own enums in the (0..) range. */
-typedef
- enum {
- PThreadErr = -1, // Pthreading error
- }
- CoreErrorKind;
/* Errors. Extensible (via the 'extra' field). Tools can use a normal
enum (with element values in the normal range (0..)) for `ekind'.
}
CoreSuppKind;
+/* Max number of callers for context in a suppression. */
+#define VG_MAX_SUPP_CALLERS 24
+
/* For each caller specified for a suppression, record the nature of
the caller name. Not of interest to tools. */
typedef
enum {
+ NoName, /* Error case */
ObjName, /* Name is of an shared object file. */
FunName /* Name is of a function. */
}
SuppLocTy;
+typedef
+ struct {
+ SuppLocTy ty;
+ Char* name;
+ }
+ SuppLoc;
+
/* Suppressions. Tools can get/set tool-relevant parts with functions
declared in include/tool.h. Extensible via the 'extra' field.
Tools can use a normal enum (with element values in the normal range
struct _Supp* next;
Int count; // The number of times this error has been suppressed.
Char* sname; // The name by which the suppression is referred to.
- /* First two (name of fn where err occurs, and immediate caller)
- * are mandatory; extra two are optional. */
- SuppLocTy caller_ty[VG_N_SUPP_CALLERS];
- Char* caller [VG_N_SUPP_CALLERS];
+
+ // Length of 'callers'
+ Int n_callers;
+ // Array of callers, for matching stack traces. First one (name of fn
+ // where err occurs) is mandatory; rest are optional.
+ SuppLoc* callers;
/* The tool-specific part */
SuppKind skind; // What kind of suppression. Must use the range (0..).
return False;
switch (e1->ekind) {
- case PThreadErr:
- vg_assert(VG_(needs).core_errors);
- if (e1->string == e2->string)
- return True;
- if (0 == VG_(strcmp)(e1->string, e2->string))
- return True;
- return False;
+ // case ThreadErr:
+ // case MutexErr:
+ // vg_assert(VG_(needs).core_errors);
+ // return VG_(tm_error_equal)(res, e1, e2);
default:
if (VG_(needs).tool_errors)
return TL_(eq_Error)(res, e1, e2);
{
if (printCount)
VG_(message)(Vg_UserMsg, "Observed %d times:", err->count );
- if (err->tid > 1)
+ if (err->tid > 0 && err->tid != last_tid_printed) {
VG_(message)(Vg_UserMsg, "Thread %d:", err->tid );
+ last_tid_printed = err->tid;
+ }
switch (err->ekind) {
- case PThreadErr:
- vg_assert(VG_(needs).core_errors);
- VG_(message)(Vg_UserMsg, "%s", err->string );
- VG_(pp_ExeContext)(err->where);
- break;
+ // case ThreadErr:
+ // case MutexErr:
+ // vg_assert(VG_(needs).core_errors);
+ // VG_(tm_error_print)(err);
+ // break;
default:
if (VG_(needs).tool_errors)
TL_(pp_Error)( err );
}
-// Initialisation picks out values from the appropriate ThreadState as
-// necessary.
+/* Construct an error */
static __inline__
void construct_error ( Error* err, ThreadId tid, ErrorKind ekind, Addr a,
Char* s, void* extra, ExeContext* where )
ExeContext* ec = VG_(get_error_where)(err);
Int stop_at = VG_(clo_backtrace_size);
- /* At most VG_N_SUPP_CALLERS names */
- if (stop_at > VG_N_SUPP_CALLERS) stop_at = VG_N_SUPP_CALLERS;
+ /* At most VG_MAX_SUPP_CALLERS names */
+ if (stop_at > VG_MAX_SUPP_CALLERS) stop_at = VG_MAX_SUPP_CALLERS;
vg_assert(stop_at > 0);
VG_(printf)("{\n");
VG_(printf)(" <insert a suppression name here>\n");
- if (PThreadErr == err->ekind) {
+ if (ThreadErr == err->ekind || MutexErr == err->ekind) {
VG_(printf)(" core:PThread\n");
} else {
p = VG_(arena_malloc)(VG_AR_ERRORS, sizeof(Error));
*p = err;
- /* update `extra', for non-core errors (core ones don't use 'extra') */
- if (VG_(needs).tool_errors && PThreadErr != ekind) {
- extra_size = TL_(update_extra)(p);
+ /* update `extra' */
+ switch (ekind) {
+ // case ThreadErr:
+ // case MutexErr:
+ // vg_assert(VG_(needs).core_errors);
+ // extra_size = VG_(tm_error_update_extra)(p);
+ // break;
+ default:
+ vg_assert(VG_(needs).tool_errors);
+ extra_size = TL_(update_extra)(p);
+ break;
+ }
- /* copy block pointed to by `extra', if there is one */
- if (NULL != p->extra && 0 != extra_size) {
- void* new_extra = VG_(malloc)(extra_size);
- VG_(memcpy)(new_extra, p->extra, extra_size);
- p->extra = new_extra;
- }
+ /* copy block pointed to by `extra', if there is one */
+ if (NULL != p->extra && 0 != extra_size) {
+ void* new_extra = VG_(malloc)(extra_size);
+ VG_(memcpy)(new_extra, p->extra, extra_size);
+ p->extra = new_extra;
}
p->next = vg_errors;
/* These are called not from generated code but from the scheduler */
-void VG_(record_pthread_error) ( ThreadId tid, Char* msg )
-{
- if (! VG_(needs).core_errors) return;
- VG_(maybe_record_error)( tid, PThreadErr, /*addr*/0, msg, /*extra*/NULL );
-}
-
void VG_(show_all_errors) ( void )
{
Int i, n_min;
(fun: or obj:) part.
Returns False if failed.
*/
-static Bool setLocationTy ( Char** p_caller, SuppLocTy* p_ty )
+static Bool setLocationTy ( SuppLoc* p )
{
- if (VG_(strncmp)(*p_caller, "fun:", 4) == 0) {
- (*p_caller) += 4;
- *p_ty = FunName;
+ if (VG_(strncmp)(p->name, "fun:", 4) == 0) {
+ p->name += 4;
+ p->ty = FunName;
return True;
}
- if (VG_(strncmp)(*p_caller, "obj:", 4) == 0) {
- (*p_caller) += 4;
- *p_ty = ObjName;
+ if (VG_(strncmp)(p->name, "obj:", 4) == 0) {
+ p->name += 4;
+ p->ty = ObjName;
return True;
}
VG_(printf)("location should start with fun: or obj:\n");
{
# define N_BUF 200
Int fd, i;
- Bool eof, too_many_contexts = False;
+ Bool eof;
Char buf[N_BUF+1];
Char* tool_names;
Char* supp_name;
+ Char* err_str = NULL;
+ SuppLoc tmp_callers[VG_MAX_SUPP_CALLERS];
fd = VG_(open)( filename, VKI_O_RDONLY, 0 );
if (fd < 0) {
VG_(exit)(1);
}
+#define BOMB(S) { err_str = S; goto syntax_error; }
+
while (True) {
/* Assign and initialise the two suppression halves (core and tool) */
Supp* supp;
supp = VG_(arena_malloc)(VG_AR_CORE, sizeof(Supp));
supp->count = 0;
- for (i = 0; i < VG_N_SUPP_CALLERS; i++) supp->caller[i] = NULL;
+
+ // Initialise temporary reading-in buffer.
+ for (i = 0; i < VG_MAX_SUPP_CALLERS; i++) {
+ tmp_callers[i].ty = NoName;
+ tmp_callers[i].name = NULL;
+ }
+
supp->string = supp->extra = NULL;
eof = VG_(get_line) ( fd, buf, N_BUF );
if (eof) break;
- if (!VG_STREQ(buf, "{")) goto syntax_error;
+ if (!VG_STREQ(buf, "{")) BOMB("expected '{' or end-of-file");
eof = VG_(get_line) ( fd, buf, N_BUF );
- if (eof || VG_STREQ(buf, "}")) goto syntax_error;
+
+ if (eof || VG_STREQ(buf, "}")) BOMB("unexpected '}'");
+
supp->sname = VG_(arena_strdup)(VG_AR_CORE, buf);
eof = VG_(get_line) ( fd, buf, N_BUF );
- if (eof) goto syntax_error;
+ if (eof) BOMB("unexpected end-of-file");
/* Check it has the "tool1,tool2,...:supp" form (look for ':') */
i = 0;
while (True) {
if (buf[i] == ':') break;
- if (buf[i] == '\0') goto syntax_error;
+ if (buf[i] == '\0') BOMB("malformed 'tool1,tool2,...:supp' line");
i++;
}
buf[i] = '\0'; /* Replace ':', splitting into two strings */
tool_names = & buf[0];
supp_name = & buf[i+1];
- /* Is it a core suppression? */
if (VG_(needs).core_errors && tool_name_present("core", tool_names))
{
+ // A core suppression
if (VG_STREQ(supp_name, "PThread"))
supp->skind = PThreadSupp;
else
- goto syntax_error;
+ BOMB("unknown core suppression type");
}
-
- /* Is it a tool suppression? */
else if (VG_(needs).tool_errors &&
tool_name_present(VG_(details).name, tool_names))
{
- if (TL_(recognised_suppression)(supp_name, supp))
- {
+ // A tool suppression
+ if (TL_(recognised_suppression)(supp_name, supp)) {
/* Do nothing, function fills in supp->skind */
- } else
- goto syntax_error;
+ } else {
+ BOMB("unknown tool suppression type");
+ }
}
-
else {
- /* Ignore rest of suppression */
+ // Ignore rest of suppression
while (True) {
eof = VG_(get_line) ( fd, buf, N_BUF );
- if (eof) goto syntax_error;
+ if (eof) BOMB("unexpected end-of-file");
if (VG_STREQ(buf, "}"))
break;
}
}
if (VG_(needs).tool_errors &&
- !TL_(read_extra_suppression_info)(fd, buf, N_BUF, supp))
- goto syntax_error;
+ !TL_(read_extra_suppression_info)(fd, buf, N_BUF, supp))
+ {
+ BOMB("bad or missing extra suppression info");
+ }
- /* "i > 0" ensures at least one caller read. */
- for (i = 0; i <= VG_N_SUPP_CALLERS; i++) {
+ i = 0;
+ while (True) {
eof = VG_(get_line) ( fd, buf, N_BUF );
- if (eof) goto syntax_error;
- if (i > 0 && VG_STREQ(buf, "}"))
- break;
- if (i == VG_N_SUPP_CALLERS)
+ if (eof)
+ BOMB("unexpected end-of-file");
+ if (VG_STREQ(buf, "}")) {
+ if (i > 0) {
+ break;
+ } else {
+ BOMB("missing stack trace");
+ }
+ }
+ if (i == VG_MAX_SUPP_CALLERS)
+ BOMB("too many callers in stack trace");
+ if (i > 0 && i >= VG_(clo_backtrace_size))
break;
- supp->caller[i] = VG_(arena_strdup)(VG_AR_CORE, buf);
- if (!setLocationTy(&(supp->caller[i]), &(supp->caller_ty[i])))
- goto syntax_error;
+ tmp_callers[i].name = VG_(arena_strdup)(VG_AR_CORE, buf);
+ if (!setLocationTy(&(tmp_callers[i])))
+ BOMB("location should start with 'fun:' or 'obj:'");
+ i++;
}
- /* make sure to grab the '}' if the num callers is >=
- VG_N_SUPP_CALLERS */
+ // If the num callers is >= VG_(clo_backtrace_size), ignore any extra
+ // lines and grab the '}'.
if (!VG_STREQ(buf, "}")) {
- // Don't just ignore extra lines -- abort. (Someone complained
- // about silent ignoring of lines in bug #77922.)
- //do {
- // eof = VG_(get_line) ( fd, buf, N_BUF );
- //} while (!eof && !VG_STREQ(buf, "}"));
- too_many_contexts = True;
- goto syntax_error;
+ do {
+ eof = VG_(get_line) ( fd, buf, N_BUF );
+ } while (!eof && !VG_STREQ(buf, "}"));
+ }
+
+ // Copy tmp_callers[] into supp->callers[]
+ supp->n_callers = i;
+ supp->callers = VG_(arena_malloc)(VG_AR_CORE, i*sizeof(SuppLoc));
+ for (i = 0; i < supp->n_callers; i++) {
+ supp->callers[i] = tmp_callers[i];
}
supp->next = vg_suppressions;
return;
syntax_error:
- if (eof) {
- VG_(message)(Vg_UserMsg,
- "FATAL: in suppressions file `%s': unexpected EOF",
- filename );
- } else if (too_many_contexts) {
- VG_(message)(Vg_UserMsg,
- "FATAL: in suppressions file: `%s': at %s:",
- filename, buf );
- VG_(message)(Vg_UserMsg,
- "too many lines (limit of %d contexts in suppressions)",
- VG_N_SUPP_CALLERS);
- } else {
- VG_(message)(Vg_UserMsg,
- "FATAL: in suppressions file: `%s': syntax error on: %s",
- filename, buf );
- }
+ VG_(message)(Vg_UserMsg,
+ "FATAL: in suppressions file `%s': %s", filename, err_str );
+
VG_(close)(fd);
VG_(message)(Vg_UserMsg, "exiting now.");
VG_(exit)(1);
+# undef BOMB
# undef N_BUF
}
}
}
-/* Return the name of an erring fn in a way which is useful
- for comparing against the contents of a suppressions file.
- Doesn't demangle the fn name, because we want to refer to
- mangled names in the suppressions file.
-*/
-static void get_objname_fnname ( Addr a, Char* obj_buf, Int n_obj_buf,
- Char* fun_buf, Int n_fun_buf )
-{
- (void)VG_(get_objname) ( a, obj_buf, n_obj_buf );
- (void)VG_(get_fnname_nodemangle)( a, fun_buf, n_fun_buf );
-}
-
-static __inline__
+static
Bool supp_matches_error(Supp* su, Error* err)
{
switch (su->skind) {
case PThreadSupp:
- return (err->ekind == PThreadErr);
+ return (err->ekind == ThreadErr || err->ekind == MutexErr);
default:
if (VG_(needs).tool_errors) {
return TL_(error_matches_suppression)(err, su);
}
}
-static __inline__
-Bool supp_matches_callers(Supp* su, Char caller_obj[][M_VG_ERRTXT],
- Char caller_fun[][M_VG_ERRTXT])
+static
+Bool supp_matches_callers(Error* err, Supp* su)
{
Int i;
-
- for (i = 0; i < VG_N_SUPP_CALLERS && su->caller[i] != NULL; i++) {
- switch (su->caller_ty[i]) {
- case ObjName: if (VG_(string_match)(su->caller[i],
- caller_obj[i])) break;
- return False;
- case FunName: if (VG_(string_match)(su->caller[i],
- caller_fun[i])) break;
- return False;
+ Char caller_name[M_VG_ERRTXT];
+
+ for (i = 0; i < su->n_callers; i++) {
+ Addr a = err->where->ips[i];
+ vg_assert(su->callers[i].name != NULL);
+ switch (su->callers[i].ty) {
+ case ObjName:
+ (void)VG_(get_objname)(a, caller_name, M_VG_ERRTXT);
+ break;
+
+ case FunName:
+ // Nb: mangled names used in suppressions
+ (void)VG_(get_fnname_nodemangle)(a, caller_name, M_VG_ERRTXT);
+ break;
default: VG_(tool_panic)("supp_matches_callers");
}
+ if (!VG_(string_match)(su->callers[i].name, caller_name))
+ return False;
}
/* If we reach here, it's a match */
*/
static Supp* is_suppressible_error ( Error* err )
{
- Int i;
-
- static Char caller_obj[VG_N_SUPP_CALLERS][M_VG_ERRTXT];
- static Char caller_fun[VG_N_SUPP_CALLERS][M_VG_ERRTXT];
-
Supp* su;
- /* get_objname_fnname() writes the function name and object name if
- it finds them in the debug info. So the strings in the suppression
- file should match these.
- */
-
- /* Initialise these strs so they are always safe to compare, even
- if get_objname_fnname doesn't write anything to them. */
- for (i = 0; i < VG_N_SUPP_CALLERS; i++)
- caller_obj[i][0] = caller_fun[i][0] = 0;
-
- for (i = 0; i < VG_N_SUPP_CALLERS && i < VG_(clo_backtrace_size); i++) {
- get_objname_fnname ( err->where->ips[i], caller_obj[i], M_VG_ERRTXT,
- caller_fun[i], M_VG_ERRTXT );
- }
-
/* See if the error context matches any suppression. */
for (su = vg_suppressions; su != NULL; su = su->next) {
if (supp_matches_error(su, err) &&
- supp_matches_callers(su, caller_obj, caller_fun)) {
+ supp_matches_callers(err, su))
+ {
return su;
}
}
}
/*--------------------------------------------------------------------*/
-/*--- end vg_errcontext.c ---*/
+/*--- end ---*/
/*--------------------------------------------------------------------*/
static UInt stack_snapshot2 ( Addr* ips, UInt n_ips, Addr ip, Addr fp,
Addr fp_min, Addr fp_max_orig )
{
+ static const Bool debug = False;
Int i;
Addr fp_max;
UInt n_found = 0;
fp_max = (fp_max_orig + VKI_PAGE_SIZE - 1) & ~(VKI_PAGE_SIZE - 1);
fp_max -= sizeof(Addr);
+ if (debug)
+ VG_(printf)("n_ips=%d fp_min=%p fp_max_orig=%p, fp_max=%p ip=%p fp=%p\n",
+ n_ips, fp_min, fp_max_orig, fp_max, ip, fp);
+
/* Assertion broken before main() is reached in pthreaded programs; the
* offending stack traces only have one item. --njn, 2002-aug-16 */
/* vg_assert(fp_min <= fp_max);*/
fp = FIRST_STACK_FRAME(fp);
for (i = 1; i < n_ips; i++) {
if (!(fp_min <= fp && fp <= fp_max)) {
- //VG_(printf)("... out of range %p\n", fp);
+ if (debug)
+ VG_(printf)("... out of range %p\n", fp);
break; /* fp gone baaaad */
}
// NJN 2002-sep-17: monotonicity doesn't work -- gives wrong traces...
// }
ips[i] = STACK_FRAME_RET(fp); /* ret addr */
fp = STACK_FRAME_NEXT(fp); /* old fp */
- //VG_(printf)(" %p\n", ips[i]);
+ if (debug)
+ VG_(printf)(" ips[%d]=%08p\n", i, ips[i]);
}
}
n_found = i;
void get_needed_regs(ThreadId tid, Addr* ip, Addr* fp, Addr* sp,
Addr* stack_highest_word)
{
+ /* thread in thread table */
ThreadState* tst = & VG_(threads)[ tid ];
*ip = INSTR_PTR(tst->arch);
*fp = FRAME_PTR(tst->arch);
useful. */
if (*ip >= VG_(client_trampoline_code)+VG_(tramp_syscall_offset) &&
*ip < VG_(client_trampoline_code)+VG_(trampoline_code_length) &&
- VG_(is_addressable)(*sp, sizeof(Addr))) {
+ VG_(is_addressable)(*sp, sizeof(Addr), VKI_PROT_READ)) {
*ip = *(Addr *)*sp;
*sp += sizeof(Addr);
}
#endif
+ if (0)
+ VG_(printf)("tid %d: stack_highest=%p ip=%p sp=%p fp=%p\n",
+ tid, *stack_highest_word, *ip, *sp, *fp);
}
ExeContext* VG_(get_ExeContext) ( ThreadId tid )
return INSTR_PTR(VG_(threads)[ tid ].arch);
}
+Addr VG_(get_thread_stack_pointer) ( ThreadId tid )
+{
+ Addr ret;
+
+ ret = STACK_PTR(VG_(threads)[ tid ].arch);
+
+ return ret;
+}
+
/*--------------------------------------------------------------------*/
/*--- end ---*/
/*--------------------------------------------------------------------*/
#include <sys/wait.h>
#include <unistd.h>
+#include "memcheck/memcheck.h"
+
#ifndef AT_DCACHEBSIZE
#define AT_DCACHEBSIZE 19
#endif /* AT_DCACHEBSIZE */
static Int vg_argc;
static Char **vg_argv;
-/* PID of the main thread */
-Int VG_(main_pid);
-
-/* PGRP of process */
-Int VG_(main_pgrp);
+/* The master thread is the one which will be responsible for mopping
+ everything up at exit. Normally it is tid 1, since that's the
+ first thread created, but it may be something else after a
+ fork(). */
+ThreadId VG_(master_tid) = VG_INVALID_THREADID;
/* Application-visible file descriptor limits */
Int VG_(fd_soft_limit) = -1;
VG_(sanity_check_malloc_all)();
VG_(print_all_arena_stats)();
VG_(message)(Vg_DebugMsg, "");
+ //VG_(print_shadow_stats)();
+ VG_(message)(Vg_DebugMsg, "");
VG_(message)(Vg_DebugMsg,
"------ Valgrind's ExeContext management stats follow ------" );
VG_(print_ExeContext_stats)();
OINK(n);
}
-/* Initialize the PID and PGRP of scheduler LWP; this is also called
- in any new children after fork. */
-static void newpid(ThreadId unused)
-{
- /* PID of scheduler LWP */
- VG_(main_pid) = VG_(getpid)();
- VG_(main_pgrp) = VG_(getpgrp)();
-}
-
/*====================================================================*/
/*=== Check we were launched by stage 1 ===*/
/*====================================================================*/
/* Look for our AUXV table */
-int scan_auxv(void* init_sp)
+static int scan_auxv(void* init_sp)
{
const struct ume_auxv *auxv = find_auxv((UWord*)init_sp);
int padfile = -1, found = 0;
cl_argv = argv;
} else {
+ Bool noaugment = False;
+
/* Count the arguments on the command line. */
vg_argv0 = argv;
for (vg_argc0 = 1; vg_argc0 < argc; vg_argc0++) {
- if (argv[vg_argc0][0] != '-') /* exe name */
+ Char* arg = argv[vg_argc0];
+ if (arg[0] != '-') /* exe name */
break;
- if (VG_STREQ(argv[vg_argc0], "--")) { /* dummy arg */
+ if (VG_STREQ(arg, "--")) { /* dummy arg */
vg_argc0++;
break;
}
+ VG_BOOL_CLO("--command-line-only", noaugment)
}
cl_argv = &argv[vg_argc0];
/* Get extra args from VALGRIND_OPTS and .valgrindrc files.
Note we don't do this if getting args from VALGRINDCLO, as
- those extra args will already be present in VALGRINDCLO. */
- augment_command_line(&vg_argc0, &vg_argv0);
+ those extra args will already be present in VALGRINDCLO.
+ (We also don't do it when --command-line-only=yes.) */
+ if (!noaugment)
+ augment_command_line(&vg_argc0, &vg_argv0);
}
if (0) {
return False;
}
-static Bool contains(const char *p) {
- if (VG_STREQ(p, VG_(libdir))) {
- return True;
- }
- return False;
-}
-
/* Prepare the client's environment. This is basically a copy of our
environment, except:
- 1. LD_LIBRARY_PATH=$VALGRINDLIB:$LD_LIBRARY_PATH
- 2. LD_PRELOAD=$VALGRINDLIB/vg_inject.so:($VALGRINDLIB/vgpreload_TOOL.so:)?$LD_PRELOAD
+ LD_PRELOAD=$VALGRINDLIB/vg_inject.so:($VALGRINDLIB/vgpreload_TOOL.so:)?$LD_PRELOAD
- If any of these is missing, then it is added.
+ If this is missing, then it is added.
Yummy. String hacking in C.
static char **fix_environment(char **origenv, const char *preload)
{
static const char inject_so[] = "vg_inject.so";
- static const char ld_library_path[] = "LD_LIBRARY_PATH=";
static const char ld_preload[] = "LD_PRELOAD=";
static const char valgrind_clo[] = VALGRINDCLO "=";
- static const int ld_library_path_len = sizeof(ld_library_path)-1;
static const int ld_preload_len = sizeof(ld_preload)-1;
static const int valgrind_clo_len = sizeof(valgrind_clo)-1;
int ld_preload_done = 0;
- int ld_library_path_done = 0;
char *inject_path;
int inject_path_len;
int vgliblen = strlen(VG_(libdir));
envc++;
/* Allocate a new space */
- ret = malloc(sizeof(char *) * (envc+3+1)); /* 3 new entries + NULL */
+ ret = malloc(sizeof(char *) * (envc+1+1)); /* 1 new entry + NULL */
vg_assert(ret);
/* copy it over */
/* Walk over the new environment, mashing as we go */
for (cpp = ret; cpp && *cpp; cpp++) {
- if (memcmp(*cpp, ld_library_path, ld_library_path_len) == 0) {
- /* If the LD_LIBRARY_PATH already contains libdir, then don't
- bother adding it again, even if it isn't the first (it
- seems that the Java runtime will keep reexecing itself
- unless its paths are at the front of LD_LIBRARY_PATH) */
- if (!scan_colsep(*cpp + ld_library_path_len, contains)) {
- int len = strlen(*cpp) + vgliblen*2 + 16;
- char *cp = malloc(len);
- vg_assert(cp);
-
- snprintf(cp, len, "%s%s:%s",
- ld_library_path, VG_(libdir),
- (*cpp)+ld_library_path_len);
-
- *cpp = cp;
- }
-
- ld_library_path_done = 1;
- } else if (memcmp(*cpp, ld_preload, ld_preload_len) == 0) {
+ if (memcmp(*cpp, ld_preload, ld_preload_len) == 0) {
int len = strlen(*cpp) + inject_path_len;
char *cp = malloc(len);
vg_assert(cp);
}
/* Add the missing bits */
-
- if (!ld_library_path_done) {
- int len = ld_library_path_len + vgliblen*2 + 16;
- char *cp = malloc(len);
- vg_assert(cp);
-
- snprintf(cp, len, "%s%s", ld_library_path, VG_(libdir));
-
- ret[envc++] = cp;
- }
-
if (!ld_preload_done) {
int len = ld_preload_len + inject_path_len;
char *cp = malloc(len);
ret[envc++] = cp;
}
+ free(inject_path);
ret[envc] = NULL;
return ret;
}
extern char **environ; /* our environment */
-//#include <error.h>
/* Add a string onto the string table, and return its address */
static char *copy_str(char **tab, const char *str)
break;
case AT_BASE:
- if (info->interp_base == 0)
- auxv->a_type = AT_IGNORE;
- else
- auxv->u.a_val = info->interp_base;
+ auxv->u.a_val = info->interp_base;
break;
case AT_PLATFORM: /* points to a platform description string */
suid, and therefore the dynamic linker should be careful
about LD_PRELOAD, etc. However, since stage1 (the thing
the kernel actually execve's) should never be SUID, and we
- need LD_PRELOAD/LD_LIBRARY_PATH to work for the client, we
+ need LD_PRELOAD to work for the client, we
set AT_SECURE to 0. */
auxv->u.a_val = 0;
break;
ToolInfo** toolinfo_out, char **preloadpath_out )
{
Bool ok;
- int len = strlen(VG_(libdir)) + strlen(toolname)*2 + 16;
+ int len = strlen(VG_(libdir)) + strlen(toolname) + 16;
char buf[len];
void* handle;
ToolInfo* toolinfo;
VG_(exit)(1);
}
-static void missing_tool_option ( void )
-{
- abort_msg();
- VG_(printf)("valgrind: Missing --tool option\n");
- list_tools();
- VG_(printf)("valgrind: Use --help for more information.\n");
- VG_(exit)(1);
-}
-
static void missing_prog ( void )
{
abort_msg();
killpad_extra extra;
int res;
- vg_assert(padfile > 0);
+ vg_assert(padfile >= 0);
res = fstat(padfile, &padstat);
vg_assert(0 == res);
Bool VG_(clo_trace_syscalls) = False;
Bool VG_(clo_trace_signals) = False;
Bool VG_(clo_trace_symtab) = False;
+Bool VG_(clo_trace_redir) = False;
Bool VG_(clo_trace_sched) = False;
-Int VG_(clo_trace_pthread_level) = 0;
+Bool VG_(clo_trace_pthreads) = False;
Int VG_(clo_dump_error) = 0;
-Int VG_(clo_backtrace_size) = 4;
+Int VG_(clo_backtrace_size) = 12;
Char* VG_(clo_weird_hacks) = NULL;
Bool VG_(clo_run_libc_freeres) = True;
Bool VG_(clo_track_fds) = False;
Bool VG_(clo_pointercheck) = True;
Bool VG_(clo_support_elan3) = False;
Bool VG_(clo_branchpred) = False;
+Bool VG_(clo_model_pthreads) = False;
static Bool VG_(clo_wait_for_gdb) = False;
-/* If we're doing signal routing, poll for signals every 50mS by
- default. */
-Int VG_(clo_signal_polltime) = 50;
-
-/* These flags reduce thread wakeup latency on syscall completion and
- signal delivery, respectively. The downside is possible unfairness. */
-Bool VG_(clo_lowlat_syscalls) = False; /* low-latency syscalls */
-Bool VG_(clo_lowlat_signals) = False; /* low-latency signals */
-
void usage ( Bool debug_help )
{
"usage: valgrind --tool=<toolname> [options] prog-and-args\n"
"\n"
" common user options for all Valgrind tools, with defaults in [ ]:\n"
-" --tool=<name> use the Valgrind tool named <name>\n"
+" --tool=<name> use the Valgrind tool named <name> [memcheck]\n"
" -h --help show this message\n"
" --help-debug show this message, plus debugging options\n"
" --version show version\n"
"\n"
" uncommon user options for all Valgrind tools:\n"
" --run-libc-freeres=no|yes free up glibc memory at exit? [yes]\n"
-" --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls [none]\n"
-" --signal-polltime=<time> signal poll period (mS) for older kernels [50]\n"
-" --lowlat-signals=no|yes improve thread signal wake-up latency [no]\n"
-" --lowlat-syscalls=no|yes improve thread syscall wake-up latency [no]\n"
+" --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls,ioctl-mmap [none]\n"
" --pointercheck=no|yes enforce client address space limits [yes]\n"
" --support-elan3=no|yes hacks for Quadrics Elan3 support [no]\n"
"\n"
" --trace-signals=no|yes show signal handling details? [no]\n"
" --trace-symtab=no|yes show symbol table details? [no]\n"
" --trace-sched=no|yes show thread scheduler details? [no]\n"
-" --trace-pthread=none|some|all show pthread event details? [none]\n"
" --wait-for-gdb=yes|no pause on startup to wait for gdb attach\n"
"\n"
" --vex-iropt-verbosity 0 .. 9 [0]\n"
" --vex-iropt-level 0 .. 2 [2]\n"
" --vex-iropt-precise-memory-exns [no]\n"
+#if 0
+" --model-pthreads=yes|no model the pthreads library [no]\n"
+#endif
+" --command-line-only=no|yes only use command line options [no]\n"
" --vex-iropt-unroll-thresh 0 .. 400 [120]\n"
" --vex-guest-max-insns 1 .. 100 [50]\n"
" --vex-guest-chase-thresh 0 .. 99 [10]\n"
*exec = &vg_argv[i][7];
}
}
-
- /* If no tool specified, can act appropriately without loading tool */
- if (*tool == NULL) {
- if (0 == *need_help) {
- // neither --tool nor --help/--help-debug specified
- missing_tool_option();
- } else {
- // Give help message, without any tool-specific help
- usage(/*help-debug?*/2 == *need_help);
- }
- }
}
static void process_cmd_line_options( UInt* client_auxv, const char* toolname )
// XXX: what architectures is this necessary for? x86 yes, PPC no, others ?
#ifdef __x86__
{
- Word *auxp;
+ UInt* auxp;
for (auxp = client_auxv; auxp[0] != AT_NULL; auxp += 2) {
switch(auxp[0]) {
case AT_SYSINFO:
continue;
if (VG_CLO_STREQN(7, arg, "--exec="))
continue;
+ if (VG_CLO_STREQN(20, arg, "--command-line-only="))
+ continue;
if ( VG_CLO_STREQ(arg, "--"))
continue;
else VG_BOOL_CLO("--db-attach", VG_(clo_db_attach))
else VG_BOOL_CLO("--demangle", VG_(clo_demangle))
else VG_BOOL_CLO("--error-limit", VG_(clo_error_limit))
- else VG_BOOL_CLO("--lowlat-signals", VG_(clo_lowlat_signals))
- else VG_BOOL_CLO("--lowlat-syscalls", VG_(clo_lowlat_syscalls))
else VG_BOOL_CLO("--pointercheck", VG_(clo_pointercheck))
else VG_BOOL_CLO("--support-elan3", VG_(clo_support_elan3))
else VG_BOOL_CLO("--profile", VG_(clo_profile))
else VG_BOOL_CLO("--trace-sched", VG_(clo_trace_sched))
else VG_BOOL_CLO("--trace-signals", VG_(clo_trace_signals))
else VG_BOOL_CLO("--trace-symtab", VG_(clo_trace_symtab))
+ else VG_BOOL_CLO("--trace-redir", VG_(clo_trace_redir))
else VG_BOOL_CLO("--trace-syscalls", VG_(clo_trace_syscalls))
+ else VG_BOOL_CLO("--trace-pthreads", VG_(clo_trace_pthreads))
else VG_BOOL_CLO("--wait-for-gdb", VG_(clo_wait_for_gdb))
+ else VG_BOOL_CLO("--model-pthreads", VG_(clo_model_pthreads))
else VG_STR_CLO ("--db-command", VG_(clo_db_command))
else VG_STR_CLO ("--weird-hacks", VG_(clo_weird_hacks))
else VG_NUM_CLO ("--dump-error", VG_(clo_dump_error))
else VG_NUM_CLO ("--input-fd", VG_(clo_input_fd))
else VG_NUM_CLO ("--sanity-level", VG_(clo_sanity_level))
- else VG_NUM_CLO ("--signal-polltime", VG_(clo_signal_polltime))
else VG_BNUM_CLO("--num-callers", VG_(clo_backtrace_size), 1,
VG_DEEPEST_BACKTRACE)
else VG_NUM_CLO ("--trace-notbelow", VG_(clo_trace_notbelow))
- else if (VG_CLO_STREQ(arg, "--trace-pthread=none"))
- VG_(clo_trace_pthread_level) = 0;
- else if (VG_CLO_STREQ(arg, "--trace-pthread=some"))
- VG_(clo_trace_pthread_level) = 1;
- else if (VG_CLO_STREQ(arg, "--trace-pthread=all"))
- VG_(clo_trace_pthread_level) = 2;
-
else if (VG_CLO_STREQ(arg, "--gen-suppressions=no"))
VG_(clo_gen_suppressions) = 0;
else if (VG_CLO_STREQ(arg, "--gen-suppressions=yes"))
/*=== Initialise program data/text, etc. ===*/
/*====================================================================*/
-static void build_valgrind_map_callback
- ( Addr start, SizeT size, Char rr, Char ww, Char xx,
- UInt dev, UInt ino, ULong foffset, const UChar* filename )
+static void build_valgrind_map_callback ( Addr start, SizeT size, UInt prot,
+ UInt dev, UInt ino, ULong foffset,
+ const UChar* filename )
{
- UInt prot = 0;
- UInt flags = SF_MMAP|SF_NOSYMS;
- Bool is_stack_segment;
-
- is_stack_segment =
- (start == VG_(clstk_base) && (start+size) == VG_(clstk_end));
-
/* Only record valgrind mappings for now, without loading any
symbols. This is so we know where the free space is before we
start allocating more memory (note: heap is OK, it's just mmap
which is the problem here). */
- if (start >= VG_(valgrind_base) && (start+size-1) <= VG_(valgrind_last)) {
- flags |= SF_VALGRIND;
- VG_(map_file_segment)(start, size, prot, flags, dev, ino, foffset, filename);
+ if (start >= VG_(client_end) && start < VG_(valgrind_last)) {
+ if (0)
+ VG_(printf)("init1: %p-%p prot %s\n",
+ start, start+size, VG_(prot_str)(prot));
+ VG_(map_file_segment)(start, size, prot,
+ SF_MMAP|SF_NOSYMS|SF_VALGRIND,
+ dev, ino, foffset, filename);
+ /* update VG_(valgrind_last) if it looks wrong */
+ if (start+size > VG_(valgrind_last))
+ VG_(valgrind_last) = start+size-1;
}
}
// Global var used to pass local data to callback
Addr sp_at_startup___global_arg = 0;
-static void build_segment_map_callback
- ( Addr start, SizeT size, Char rr, Char ww, Char xx,
- UInt dev, UInt ino, ULong foffset, const UChar* filename )
+/*
+ This second pass adds in client mappings, and loads symbol tables
+ for all interesting mappings. The trouble is that things can
+ change as we go, because we're calling the Tool to track memory as
+ we find it.
+
+ So for Valgrind mappings, we don't replace any mappings which
+ aren't still identical (which will include the .so mappings, so we
+ will load their symtabs).
+ */
+static void build_segment_map_callback ( Addr start, SizeT size, UInt prot,
+ UInt dev, UInt ino, ULong foffset,
+ const UChar* filename )
{
- UInt prot = 0;
UInt flags;
Bool is_stack_segment;
Addr r_esp;
is_stack_segment
= (start == VG_(clstk_base) && (start+size) == VG_(clstk_end));
- if (rr == 'r') prot |= VKI_PROT_READ;
- if (ww == 'w') prot |= VKI_PROT_WRITE;
- if (xx == 'x') prot |= VKI_PROT_EXEC;
+ if (0)
+ VG_(printf)("init2: %p-%p prot %s stack=%d\n",
+ start, start+size, VG_(prot_str)(prot), is_stack_segment);
if (is_stack_segment)
flags = SF_STACK | SF_GROWDOWN;
if (filename != NULL)
flags |= SF_FILE;
- if (start >= VG_(valgrind_base) && (start+size-1) <= VG_(valgrind_last))
- flags |= SF_VALGRIND;
-
- VG_(map_file_segment)(start, size, prot, flags, dev, ino, foffset, filename);
+#if 0
+ // This needs to be fixed properly. jrs 20050307
+ if (start >= VG_(client_end) && start < VG_(valgrind_last)) {
+ Segment *s = VG_(find_segment_before)(start);
+
+ /* We have to be a bit careful about inserting new mappings into
+ the Valgrind part of the address space. We're actively
+ changing things as we parse these mappings, particularly in
+ shadow memory, and so we don't want to overwrite those
+ changes. Therefore, we only insert/update a mapping if it is
+ mapped from a file or it exactly matches an existing mapping.
+
+ NOTE: we're only talking about the Segment list mapping
+ metadata; this doesn't actually mmap anything more. */
+ if (filename || (s && s->addr == start && s->len == size)) {
+ flags |= SF_VALGRIND;
+ VG_(map_file_segment)(start, size, prot, flags, dev, ino, foffset, filename);
+ } else {
+ /* assert range is already mapped */
+ vg_assert(VG_(is_addressable)(start, size, VKI_PROT_NONE));
+ }
+ } else
+#endif
+ VG_(map_file_segment)(start, size, prot, flags, dev, ino, foffset, filename);
- if (VG_(is_client_addr)(start) && VG_(is_client_addr)(start+size-1))
- VG_TRACK( new_mem_startup, start, size, rr=='r', ww=='w', xx=='x' );
+ if (VG_(is_client_addr)(start) && VG_(is_client_addr)(start+size-1)) {
+ VG_TRACK( new_mem_startup, start, size,
+ !!(prot & VKI_PROT_READ),
+ !!(prot & VKI_PROT_WRITE),
+ !!(prot & VKI_PROT_EXEC));
+ }
/* If this is the stack segment mark all below %esp as noaccess. */
r_esp = sp_at_startup___global_arg;
vg_assert(0 != r_esp);
if (is_stack_segment) {
- if (0)
- VG_(message)(Vg_DebugMsg, "invalidating stack area: %x .. %x",
+ if (0) {
+ VG_(message)(Vg_DebugMsg, "invalidating stack area: %p .. %p",
start,r_esp);
+ VG_(message)(Vg_DebugMsg, " validating stack area: %p .. %p",
+ r_esp, start+size);
+ }
VG_TRACK( die_mem_stack, start, r_esp-start );
+ // what's this for?
+ //VG_TRACK( post_mem_write, r_esp, (start+size)-r_esp );
}
}
void VG_(sanity_check_general) ( Bool force_expensive )
{
+ ThreadId tid;
+
VGP_PUSHCC(VgpCoreCheapSanity);
if (VG_(clo_sanity_level) < 1) return;
VGP_PUSHCC(VgpCoreExpensiveSanity);
sanity_slow_count++;
- VG_(sanity_check_proxy)();
-
# if 0
{ void zzzmemscan(void); zzzmemscan(); }
# endif
vg_assert(TL_(expensive_sanity_check)());
VGP_POPCC(VgpToolExpensiveSanity);
}
+
+ /* Check that Segments and /proc/self/maps match up */
+ //vg_assert(VG_(sanity_check_memory)());
+
+ /* Look for stack overruns. Visit all threads. */
+ for(tid = 1; tid < VG_N_THREADS; tid++) {
+ Int remains;
+
+ if (VG_(threads)[tid].status == VgTs_Empty ||
+ VG_(threads)[tid].status == VgTs_Zombie)
+ continue;
+
+ remains = VGA_(stack_unused)(tid);
+ if (remains < VKI_PAGE_SIZE)
+ VG_(message)(Vg_DebugMsg, "WARNING: Thread %d is within %d bytes of running out of stack!",
+ tid, remains);
+ }
+
/*
if ((sanity_fast_count % 500) == 0) VG_(mallocSanityCheckAll)();
*/
return True;
}
-int main(int argc, char **argv)
+int main(int argc, char **argv, char **envp)
{
char **cl_argv;
- const char *tool = NULL;
+ const char *tool = "memcheck"; // default to Memcheck
const char *exec = NULL;
char *preload; /* tool-specific LD_PRELOAD .so */
char **env;
Addr client_eip;
Addr sp_at_startup; /* client's SP at the point we gained control. */
UInt * client_auxv;
- VgSchedReturnCode src;
- Int exitcode = 0;
- Int fatal_sigNo = -1;
struct vki_rlimit zero = { 0, 0 };
Int padfile;
- ThreadId last_run_tid = 0; // Last thread the scheduler ran.
-
//============================================================
// Nb: startup is complex. Prerequisites are shown at every step.
//============================================================
// Command line argument handling order:
// * If --help/--help-debug are present, show usage message
- // (if --tool is also present, that includes the tool-specific usage)
- // * Then, if --tool is missing, abort with error msg
+ // (including the tool-specific usage)
+ // * (If no --tool option given, default to Memcheck)
// * Then, if client is missing, abort with error msg
// * Then, if any cmdline args are bad, abort with error msg
//============================================================
void* init_sp = argv - 1;
padfile = scan_auxv(init_sp);
}
-
if (0) {
printf("========== main() ==========\n");
foreach_map(prmap, /*dummy*/NULL);
// p: set-libdir [for VG_(libdir)]
// p: load_tool() [for 'preload']
//--------------------------------------------------------------
- env = fix_environment(environ, preload);
+ env = fix_environment(envp, preload);
//--------------------------------------------------------------
// Setup client stack, eip, and VG_(client_arg[cv])
//--------------------------------------------------------------
{
void* init_sp = argv - 1;
+
sp_at_startup = setup_client_stack(init_sp, cl_argv, env, &info,
&client_auxv);
+ free(env);
}
if (0)
// Valgrind. (This is where the old VG_(main)() started.)
//==============================================================
- //--------------------------------------------------------------
- // atfork
- // p: n/a
- //--------------------------------------------------------------
- VG_(atfork)(NULL, NULL, newpid);
- newpid(VG_INVALID_THREADID);
-
//--------------------------------------------------------------
// setup file descriptors
// p: n/a
//--------------------------------------------------------------
setup_file_descriptors();
- //--------------------------------------------------------------
- // Read /proc/self/maps into a buffer
- // p: all memory layout, environment setup [so memory maps are right]
- //--------------------------------------------------------------
- VG_(read_procselfmaps)();
-
//--------------------------------------------------------------
// Build segment map (Valgrind segments only)
- // p: read proc/self/maps
// p: tl_pre_clo_init() [to setup new_mem_startup tracker]
//--------------------------------------------------------------
VG_(parse_procselfmaps) ( build_valgrind_map_callback );
//--------------------------------------------------------------
// Build segment map (all segments)
+ // p: shadow/redzone segments
// p: setup_client_stack() [for 'sp_at_startup']
// p: init tool [for 'new_mem_startup']
//--------------------------------------------------------------
// Protect client trampoline page (which is also sysinfo stuff)
// p: segment stuff [otherwise get seg faults...]
//--------------------------------------------------------------
- VG_(mprotect)( (void *)VG_(client_trampoline_code),
- VG_(trampoline_code_length), VKI_PROT_READ|VKI_PROT_EXEC );
+ {
+ Segment *seg;
+ VG_(mprotect)( (void *)VG_(client_trampoline_code),
+ VG_(trampoline_code_length), VKI_PROT_READ|VKI_PROT_EXEC );
#endif
+ /* Make sure this segment isn't treated as stack */
+ seg = VG_(find_segment)(VG_(client_trampoline_code));
+ if (seg)
+ seg->flags &= ~(SF_STACK | SF_GROWDOWN);
+ }
+
//==============================================================
// Can use VG_(map)() after segments set up
//==============================================================
{ Long q; for (q = 0; q < 10ULL *1000*1000*1000; q++) ; }
}
- //--------------------------------------------------------------
// Search for file descriptors that are inherited from our parent
// p: process_cmd_line_options [for VG_(clo_track_fds)]
//--------------------------------------------------------------
VG_(scheduler_init)();
//--------------------------------------------------------------
- // Set up state of thread 1
- // p: {pre,post}_clo_init() [for tool helper registration]
+ // Initialise the pthread model
+ // p: ?
// load_client() [for 'client_eip']
// setup_client_stack() [for 'sp_at_startup']
// setup_scheduler() [for the rest of state 1 stuff]
VG_(instr_ptr_offset) = offsetof(VexGuestArchState, ARCH_INSTR_PTR);
//--------------------------------------------------------------
- // Set up the ProxyLWP machinery
- // p: VG_(scheduler_init)()? [XXX: subtle dependency?]
//--------------------------------------------------------------
- VG_(proxy_init)();
+ //if (VG_(clo_model_pthreads))
+ // VG_(pthread_init)();
//--------------------------------------------------------------
// Initialise the signal handling subsystem
- // p: VG_(atfork)(NULL, NULL, newpid) [else problems with sigmasks]
- // p: VG_(proxy_init)() [else breaks...]
+ // p: n/a
//--------------------------------------------------------------
// Nb: temporarily parks the saved blocking-mask in saved_sigmask.
VG_(sigstartup_actions)();
//--------------------------------------------------------------
VG_(init_tt_tc)();
- //--------------------------------------------------------------
- // Read debug info to find glibc entry points to intercept
- // p: parse_procselfmaps? [XXX for debug info?]
- // p: init_tt_tc? [XXX ???]
- //--------------------------------------------------------------
- VG_(setup_code_redirect_table)();
-
//--------------------------------------------------------------
// Verbosity message
// p: end_rdtsc_calibration [so startup message is printed first]
// Run!
//--------------------------------------------------------------
VGP_POPCC(VgpStartup);
- VGP_PUSHCC(VgpSched);
- src = VG_(scheduler)( &exitcode, &last_run_tid, &fatal_sigNo );
+ vg_assert(VG_(master_tid) == 1);
+
+ VGA_(main_thread_wrapper)(1);
+
+ abort();
+}
+
+
+/* Do everything which needs doing when the last thread exits */
+void VG_(shutdown_actions)(ThreadId tid)
+{
+ vg_assert(tid == VG_(master_tid));
+ vg_assert(VG_(is_running_thread)(tid));
+
+ // Wait for all other threads to exit.
+ VGA_(reap_threads)(tid);
+
+ VG_(clo_model_pthreads) = False;
+
+ // Clean the client up before the final report
+ VGA_(final_tidyup)(tid);
- VGP_POPCC(VgpSched);
+ // OK, done
+ VG_(exit_thread)(tid);
+ /* should be no threads left */
+ vg_assert(VG_(count_living_threads)() == 0);
+
+ VG_(threads)[tid].status = VgTs_Empty;
//--------------------------------------------------------------
 // Finalisation: cleanup, messages, etc. Order not so important, only
// affects what order the messages come.
if (VG_(clo_verbosity) > 0)
VG_(message)(Vg_UserMsg, "");
- if (src == VgSrc_Deadlock) {
- VG_(message)(Vg_UserMsg,
- "Warning: pthread scheduler exited due to deadlock");
- }
-
/* Print out file descriptor summary and stats. */
if (VG_(clo_track_fds))
VG_(show_open_fds)();
if (VG_(needs).core_errors || VG_(needs).tool_errors)
VG_(show_all_errors)();
- TL_(fini)( exitcode );
+ TL_(fini)( 0 /*exitcode*/ );
VG_(sanity_check_general)( True /*include expensive checks*/ );
if (VG_(clo_profile))
VGP_(done_profiling)();
-
if (VG_(clo_profile_flags) > 0)
VG_(show_BB_profile)();
- /* We're exiting, so nuke all the threads and clean up the proxy LWPs */
- vg_assert(src == VgSrc_FatalSig ||
- VG_(threads)[last_run_tid].status == VgTs_Runnable ||
- VG_(threads)[last_run_tid].status == VgTs_WaitJoiner);
- VG_(nuke_all_threads_except)(VG_INVALID_THREADID);
-
/* Print Vex storage stats */
if (0)
LibVEX_ShowAllocStats();
- //--------------------------------------------------------------
- // Exit, according to the scheduler's return code
- //--------------------------------------------------------------
- switch (src) {
- case VgSrc_ExitSyscall: /* the normal way out */
- vg_assert(last_run_tid > 0 && last_run_tid < VG_N_THREADS);
- VG_(proxy_shutdown)();
-
- /* The thread's %EBX at the time it did __NR_exit() will hold
- the arg to __NR_exit(), so we just do __NR_exit() with
- that arg. */
- VG_(exit)( exitcode );
- /* NOT ALIVE HERE! */
- VG_(core_panic)("entered the afterlife in main() -- ExitSyscall");
- break; /* what the hell :) */
-
- case VgSrc_Deadlock:
- /* Just exit now. No point in continuing. */
- VG_(proxy_shutdown)();
- VG_(exit)(0);
- VG_(core_panic)("entered the afterlife in main() -- Deadlock");
- break;
-
- case VgSrc_FatalSig:
- /* We were killed by a fatal signal, so replicate the effect */
- vg_assert(fatal_sigNo != -1);
- VG_(kill_self)(fatal_sigNo);
- VG_(core_panic)("main(): signal was supposed to be fatal");
- break;
-
- default:
- VG_(core_panic)("main(): unexpected scheduler return code");
- }
-
- abort();
}
-
/*--------------------------------------------------------------------*/
/*--- end vg_main.c ---*/
/*--------------------------------------------------------------------*/
#include "core.h"
+//zz#include "memcheck/memcheck.h"
//#define DEBUG_MALLOC // turn on heavyweight debugging machinery
//#define VERBOSE_MALLOC // make verbose, esp. in debugging machinery
sb = VG_(get_memory_from_mmap) ( cszB, "newSuperblock" );
}
vg_assert(NULL != sb);
+ //zzVALGRIND_MAKE_WRITABLE(sb, cszB);
vg_assert(0 == (Addr)sb % VG_MIN_MALLOC_SZB);
sb->n_payload_bytes = cszB - sizeof(Superblock);
a->bytes_mmaped += cszB;
{
SizeT pszB = bszB_to_pszB(a, bszB);
vg_assert(b_lno == pszB_to_listNo(pszB));
+ //zzVALGRIND_MAKE_WRITABLE(b, bszB);
// Set the size fields and indicate not-in-use.
set_bszB_lo(b, mk_free_bszB(bszB));
set_bszB_hi(b, mk_free_bszB(bszB));
{
UInt i;
vg_assert(bszB >= min_useful_bszB(a));
+ //zzVALGRIND_MAKE_WRITABLE(b, bszB);
set_bszB_lo(b, mk_inuse_bszB(bszB));
set_bszB_hi(b, mk_inuse_bszB(bszB));
set_prev_b(b, NULL); // Take off freelist
VGP_POPCC(VgpMalloc);
v = get_block_payload(a, b);
vg_assert( (((Addr)v) & (VG_MIN_MALLOC_SZB-1)) == 0 );
+
+ VALGRIND_MALLOCLIKE_BLOCK(v, req_pszB, 0, False);
return v;
}
sanity_check_malloc_arena(aid);
# endif
+ VALGRIND_FREELIKE_BLOCK(ptr, 0);
+
VGP_POPCC(VgpMalloc);
}
VGP_POPCC(VgpMalloc);
vg_assert( (((Addr)align_p) % req_alignB) == 0 );
+
+ VALGRIND_MALLOCLIKE_BLOCK(align_p, req_pszB, 0, False);
+
return align_p;
}
void* VG_(arena_calloc) ( ArenaId aid, SizeT alignB, SizeT nmemb, SizeT nbytes )
{
- UInt i;
SizeT size;
UChar* p;
else
p = VG_(arena_malloc_aligned) ( aid, alignB, size );
- for (i = 0; i < size; i++) p[i] = 0;
+ VG_(memset)(p, 0, nbytes);
+
+ VALGRIND_MALLOCLIKE_BLOCK(p, nbytes, 0, True);
VGP_POPCC(VgpMalloc);
{
Arena* a;
SizeT old_bszB, old_pszB;
- UInt i;
- UChar *p_old, *p_new;
+ UChar *p_new;
Block* b;
VGP_PUSHCC(VgpMalloc);
p_new = VG_(arena_malloc_aligned) ( aid, req_alignB, req_pszB );
}
- p_old = (UChar*)ptr;
- for (i = 0; i < old_pszB; i++)
- p_new[i] = p_old[i];
+ VG_(memcpy)(p_new, ptr, old_pszB);
- VG_(arena_free)(aid, p_old);
+ VG_(arena_free)(aid, ptr);
VGP_POPCC(VgpMalloc);
return p_new;
}
-/*--------------------------------------------------------------*/
-/*--- ---*/
-/*--------------------------------------------------------------*/
-
-
-static Int addrcmp(const void *ap, const void *bp)
-{
- Addr a = *(Addr *)ap;
- Addr b = *(Addr *)bp;
- Int ret;
-
- if (a == b)
- ret = 0;
- else
- ret = (a < b) ? -1 : 1;
-
- return ret;
-}
-
-static Char *straddr(void *p)
-{
- static Char buf[16];
-
- VG_(sprintf)(buf, "%p", *(Addr *)p);
-
- return buf;
-}
-
-static SkipList sk_segments = SKIPLIST_INIT(Segment, addr, addrcmp, straddr, VG_AR_CORE);
-
/*--------------------------------------------------------------*/
/*--- Maintain an ordered list of all the client's mappings ---*/
/*--------------------------------------------------------------*/
{
static const Bool debug = False || mem_debug;
Addr ret;
+ Addr addrOrig = addr;
Addr limit = (for_client ? VG_(client_end)-1 : VG_(valgrind_last));
Addr base = (for_client ? VG_(client_mapbase) : VG_(valgrind_base));
Addr hole_start, hole_end, hstart_any, hstart_fixed, hstart_final;
vg_assert(hole_end > hole_start);
hole_len = hole_end - hole_start + 1;
+ vg_assert(IS_PAGE_ALIGNED(hole_len));
if (hole_len >= len && i_any == -1) {
/* It will at least fit in this hole. */
hstart_any = hole_start;
}
- if (fixed && hole_start <= addr && hole_len >= len) {
+ if (fixed && hole_start <= addr
+ && hole_start+hole_len >= addr+len) {
/* We were asked for a fixed mapping, and this hole works.
Bag it -- and stop searching as further searching is
pointless. */
i_fixed = i;
- hstart_fixed = hole_start;
+ hstart_fixed = addr;
break;
}
}
if (fixed) {
i_final = i_fixed;
- hstart_final = hstart_fixed;
+ hstart_final = hstart_fixed + VKI_PAGE_SIZE; /* skip leading redzone */
} else {
i_final = i_any;
hstart_final = hstart_any;
if (i_final != -1)
- ret = hstart_final + VKI_PAGE_SIZE; /* skip leading redzone */
+ ret = hstart_final;
else
ret = 0; /* not found */
VG_(printf)("find_map_space(%p, %d, %d) -> %p\n\n",
addr, len, for_client, ret);
+ if (fixed) {
+ vg_assert(ret == 0 || ret == addrOrig);
+ }
+
return ret;
}
-/* Pad the entire process address space, from VG_(client_base)
+/* Pad the entire process address space, from "start"
to VG_(valgrind_last) by creating an anonymous and inaccessible
mapping over any part of the address space which is not covered
by an entry in the segment list.
address with VG_(find_map_space) and then adding a segment for
it and padding the address space valgrind can ensure that the
kernel has no choice but to put the memory where we want it. */
-void VG_(pad_address_space)(void)
+void VG_(pad_address_space)(Addr start)
{
- Addr addr = VG_(client_base);
+ vg_assert(0);
+#if 0
+ Addr addr = (start == 0) ? VG_(client_base) : start;
Segment *s = VG_(SkipNode_First)(&sk_segments);
Addr ret;
-vg_assert(0);
while (s && addr <= VG_(valgrind_last)) {
if (addr < s->addr) {
}
return;
+#endif
}
/* Remove the address space padding added by VG_(pad_address_space)
by removing any mappings that it created. */
-void VG_(unpad_address_space)(void)
+void VG_(unpad_address_space)(Addr start)
{
- Addr addr = VG_(client_base);
+vg_assert(0);
+#if 0
+ Addr addr = (start == 0) ? VG_(client_base) : start;
Segment *s = VG_(SkipNode_First)(&sk_segments);
Int ret;
-vg_assert(0);
while (s && addr <= VG_(valgrind_last)) {
if (addr < s->addr) {
}
return;
+#endif
}
/* Find the segment holding 'a', or NULL if none. */
}
/*
- Test if a piece of memory is addressable by setting up a temporary
- SIGSEGV handler, then try to touch the memory. No signal = good,
- signal = bad.
+ Test if a piece of memory is addressable with at least the "prot"
+ protection permissions by examining the underlying segments.
+
+ Really this is a very stupid algorithm and we could do much
+ better by iterating through the segment array instead of through
+ the address space.
*/
-Bool VG_(is_addressable)(Addr p, SizeT size)
+Bool VG_(is_addressable)(Addr p, SizeT size, UInt prot)
{
- volatile Char * volatile cp = (volatile Char *)p;
- volatile Bool ret;
- struct vki_sigaction sa, origsa;
- vki_sigset_t mask;
-
- sa.ksa_handler = segv_handler;
- sa.sa_flags = 0;
- VG_(sigfillset)(&sa.sa_mask);
- VG_(sigaction)(VKI_SIGSEGV, &sa, &origsa);
- VG_(sigprocmask)(VKI_SIG_SETMASK, NULL, &mask);
-
- if (__builtin_setjmp(&segv_jmpbuf) == 0) {
- while(size--)
- *cp++;
- ret = True;
- } else
- ret = False;
-
- VG_(sigaction)(VKI_SIGSEGV, &origsa, NULL);
- VG_(sigprocmask)(VKI_SIG_SETMASK, &mask, NULL);
+ Segment *seg;
- return ret;
+ if ((p + size) < p)
+ return False; /* reject wraparounds */
+ if (size == 0)
+ return True; /* isn't this a bit of a strange case? */
+
+ p = PGROUNDDN(p);
+ size = PGROUNDUP(size);
+ vg_assert(IS_PAGE_ALIGNED(p));
+ vg_assert(IS_PAGE_ALIGNED(size));
+
+ for (; size > 0; size -= VKI_PAGE_SIZE) {
+ seg = VG_(find_segment)(p);
+ if (!seg)
+ return False;
+ if ((seg->prot & prot) != prot)
+ return False;
+ p += VKI_PAGE_SIZE;
+ }
+
+ return True;
}
+
/*--------------------------------------------------------------------*/
/*--- Manage allocation of memory on behalf of the client ---*/
/*--------------------------------------------------------------------*/
static char vg_mbuf[M_VG_MSGBUF];
static int vg_n_mbuf;
-static void add_to_buf ( Char c )
+static void add_to_buf ( Char c, void *p )
{
if (vg_n_mbuf >= (M_VG_MSGBUF-1)) return;
vg_mbuf[vg_n_mbuf++] = c;
/* Publically visible from here onwards. */
int
-VG_(add_to_msg) ( Char *format, ... )
+VG_(add_to_msg) ( const Char *format, ... )
{
int count;
va_list vargs;
va_start(vargs,format);
- count = VG_(vprintf) ( add_to_buf, format, vargs );
+ count = VG_(vprintf) ( add_to_buf, format, vargs, 0 );
va_end(vargs);
return count;
}
-int VG_(vmessage) ( VgMsgKind kind, Char* format, va_list vargs )
+int VG_(vmessage) ( VgMsgKind kind, const Char* format, va_list vargs )
{
int count;
count = VG_(start_msg) ( kind );
- count += VG_(vprintf) ( add_to_buf, format, vargs );
+ count += VG_(vprintf) ( add_to_buf, format, vargs, 0 );
count += VG_(end_msg)();
return count;
}
/* Send a simple single-part message. */
-int VG_(message) ( VgMsgKind kind, Char* format, ... )
+int VG_(message) ( VgMsgKind kind, const Char* format, ... )
{
int count;
va_list vargs;
{
Char ts[32];
Char c;
+ static const Char pfx[] = ">>>>>>>>>>>>>>>>";
vg_n_mbuf = 0;
vg_mbuf[vg_n_mbuf] = 0;
if (VG_(clo_time_stamp))
case Vg_ClientMsg: c = '*'; break;
default: c = '?'; break;
}
- return VG_(add_to_msg)( "%c%c%s%d%c%c ",
- c,c, ts, VG_(getpid)(), c,c );
+ return VG_(add_to_msg)( "%s%c%c%s%d%c%c ",
+ &pfx[sizeof(pfx)-1-RUNNING_ON_VALGRIND],
+ c,c, ts, VG_(getpid)(), c,c );
}
{
int count = 0;
if (VG_(clo_log_fd) >= 0) {
- add_to_buf('\n');
+ add_to_buf('\n',0);
VG_(send_bytes_to_logging_sink) (
vg_mbuf, VG_(strlen)(vg_mbuf) );
count = 1;
return 0;
}
-Bool VG_(isemptysigset)( vki_sigset_t* set )
+Bool VG_(isemptysigset)( const vki_sigset_t* set )
{
Int i;
vg_assert(set != NULL);
return True;
}
-Bool VG_(isfullsigset)( vki_sigset_t* set )
+Bool VG_(isfullsigset)( const vki_sigset_t* set )
{
Int i;
vg_assert(set != NULL);
return 0;
}
-Int VG_(sigismember) ( vki_sigset_t* set, Int signum )
+Int VG_(sigismember) ( const vki_sigset_t* set, Int signum )
{
if (set == NULL)
return 0;
Int res = VG_(do_syscall4)(__NR_rt_sigtimedwait, (UWord)set, (UWord)info,
(UWord)timeout, sizeof(*set));
- return VG_(is_kerror)(res) ? -1 : res;
+ return res;
}
Int VG_(signal)(Int signum, void (*sighandler)(Int))
{
Int ret = -VKI_ENOSYS;
-#ifdef __NR_tgkill
- ret = VG_(do_syscall3)(__NR_tgkill, VG_(main_pid), tid, signo);
-#endif /* __NR_tgkill */
+#if 0
+ /* This isn't right because the client may create a process
+ structure with multiple thread groups */
+ ret = VG_(do_syscall)(__NR_tgkill, VG_(getpid)(), tid, signo);
+#endif
-#ifdef __NR_tkill
- if (ret == -VKI_ENOSYS)
- ret = VG_(do_syscall2)(__NR_tkill, tid, signo);
-#endif /* __NR_tkill */
+ ret = VG_(do_syscall2)(__NR_tkill, tid, signo);
if (ret == -VKI_ENOSYS)
ret = VG_(do_syscall2)(__NR_kill, tid, signo);
return VG_(is_kerror)(res) ? -1 : 0;
}
+/* Terminate this single thread */
+void VG_(exit_single)( Int status )
+{
+ (void)VG_(do_syscall1)(__NR_exit, status );
+ /* Why are we still alive here? */
+ /*NOTREACHED*/
+ *(volatile Int *)0 = 'x';
+ vg_assert(2+2 == 5);
+}
+
+/* Pull down the entire world */
void VG_(exit)( Int status )
{
(void)VG_(do_syscall1)(__NR_exit_group, status );
/* Copy a string into the buffer. */
static UInt
-myvprintf_str ( void(*send)(Char), Int flags, Int width, Char* str,
- Bool capitalise )
+myvprintf_str ( void(*send)(Char, void*), Int flags, Int width, Char* str,
+ Bool capitalise, void *send_arg )
{
# define MAYBE_TOUPPER(ch) (capitalise ? VG_(toupper)(ch) : (ch))
UInt ret = 0;
if (width == 0) {
ret += len;
for (i = 0; i < len; i++)
- send(MAYBE_TOUPPER(str[i]));
+ send(MAYBE_TOUPPER(str[i]), send_arg);
return ret;
}
if (len > width) {
ret += width;
for (i = 0; i < width; i++)
- send(MAYBE_TOUPPER(str[i]));
+ send(MAYBE_TOUPPER(str[i]), send_arg);
return ret;
}
if (flags & VG_MSG_LJUSTIFY) {
ret += extra;
for (i = 0; i < extra; i++)
- send(' ');
+ send(' ', send_arg);
}
ret += len;
for (i = 0; i < len; i++)
- send(MAYBE_TOUPPER(str[i]));
+ send(MAYBE_TOUPPER(str[i]), send_arg);
if (!(flags & VG_MSG_LJUSTIFY)) {
ret += extra;
for (i = 0; i < extra; i++)
- send(' ');
+ send(' ', send_arg);
}
# undef MAYBE_TOUPPER
* WIDTH is the width of the field.
*/
static UInt
-myvprintf_int64 ( void(*send)(Char), Int flags, Int base, Int width, ULong p)
+myvprintf_int64 ( void(*send)(Char,void*), Int flags, Int base, Int width, ULong p, void *send_arg)
{
Char buf[40];
Int ind = 0;
/* Reverse copy to buffer. */
ret += ind;
for (i = ind -1; i >= 0; i--) {
- send(buf[i]);
+ send(buf[i], send_arg);
}
if (width > 0 && (flags & VG_MSG_LJUSTIFY)) {
for(; ind < width; ind++) {
ret++;
- send(' '); // Never pad with zeroes on RHS -- changes the value!
+ send(' ', send_arg); // Never pad with zeroes on RHS -- changes the value!
}
}
return ret;
/* A simple vprintf(). */
UInt
-VG_(vprintf) ( void(*send)(Char), const Char *format, va_list vargs )
+VG_(vprintf) ( void(*send)(Char,void*), const Char *format, va_list vargs, void *send_arg )
{
UInt ret = 0;
int i;
for (i = 0; format[i] != 0; i++) {
if (format[i] != '%') {
- send(format[i]);
+ send(format[i], send_arg);
ret++;
continue;
}
break;
if (format[i] == '%') {
/* `%%' is replaced by `%'. */
- send('%');
+ send('%', send_arg);
ret++;
continue;
}
flags |= VG_MSG_SIGNED;
if (is_long)
ret += myvprintf_int64(send, flags, 10, width,
- (ULong)(va_arg (vargs, Long)));
+ (ULong)(va_arg (vargs, Long)), send_arg);
else
ret += myvprintf_int64(send, flags, 10, width,
- (ULong)(va_arg (vargs, Int)));
+ (ULong)(va_arg (vargs, Int)), send_arg);
break;
case 'u': /* %u */
if (is_long)
ret += myvprintf_int64(send, flags, 10, width,
- (ULong)(va_arg (vargs, ULong)));
+ (ULong)(va_arg (vargs, ULong)), send_arg);
else
ret += myvprintf_int64(send, flags, 10, width,
- (ULong)(va_arg (vargs, UInt)));
+ (ULong)(va_arg (vargs, UInt)), send_arg);
break;
case 'p': /* %p */
ret += 2;
- send('0');
- send('x');
+ send('0',send_arg);
+ send('x',send_arg);
ret += myvprintf_int64(send, flags, 16, width,
- (ULong)((UWord)va_arg (vargs, void *)));
+ (ULong)((UWord)va_arg (vargs, void *)), send_arg);
break;
case 'x': /* %x */
if (is_long)
ret += myvprintf_int64(send, flags, 16, width,
- (ULong)(va_arg (vargs, ULong)));
+ (ULong)(va_arg (vargs, ULong)), send_arg);
else
ret += myvprintf_int64(send, flags, 16, width,
- (ULong)(va_arg (vargs, UInt)));
+ (ULong)(va_arg (vargs, UInt)), send_arg);
break;
case 'c': /* %c */
ret++;
- send(va_arg (vargs, int));
+ send(va_arg (vargs, int), send_arg);
break;
case 's': case 'S': { /* %s */
char *str = va_arg (vargs, char *);
if (str == (char*) 0) str = "(null)";
- ret += myvprintf_str(send, flags, width, str, format[i]=='S');
+ ret += myvprintf_str(send, flags, width, str, format[i]=='S', send_arg);
break;
}
case 'y': { /* %y - print symbol */
*cp++ = ')';
*cp = '\0';
}
- ret += myvprintf_str(send, flags, width, buf, 0);
+ ret += myvprintf_str(send, flags, width, buf, 0, send_arg);
}
break;
debugging info should be sent via here. The official route is to
to use vg_message(). This interface is deprecated.
*/
-static char myprintf_buf[100];
-static int n_myprintf_buf;
+typedef struct {
+ char buf[100];
+ int n;
+} printf_buf;
-static void add_to_myprintf_buf ( Char c )
+static void add_to_myprintf_buf ( Char c, void *p )
{
- if (n_myprintf_buf >= 100-10 /*paranoia*/ ) {
+ printf_buf *myprintf_buf = (printf_buf *)p;
+
+ if (myprintf_buf->n >= 100-10 /*paranoia*/ ) {
if (VG_(clo_log_fd) >= 0) {
VG_(send_bytes_to_logging_sink)(
- myprintf_buf, VG_(strlen)(myprintf_buf) );
+ myprintf_buf->buf, VG_(strlen)(myprintf_buf->buf) );
}
- n_myprintf_buf = 0;
- myprintf_buf[n_myprintf_buf] = 0;
+ myprintf_buf->n = 0;
+ myprintf_buf->buf[myprintf_buf->n] = 0;
}
- myprintf_buf[n_myprintf_buf++] = c;
- myprintf_buf[n_myprintf_buf] = 0;
+ myprintf_buf->buf[myprintf_buf->n++] = c;
+ myprintf_buf->buf[myprintf_buf->n] = 0;
}
UInt VG_(printf) ( const char *format, ... )
{
UInt ret;
va_list vargs;
+ printf_buf myprintf_buf = {"",0};
va_start(vargs,format);
- n_myprintf_buf = 0;
- myprintf_buf[n_myprintf_buf] = 0;
- ret = VG_(vprintf) ( add_to_myprintf_buf, format, vargs );
+ ret = VG_(vprintf) ( add_to_myprintf_buf, format, vargs, &myprintf_buf );
- if (n_myprintf_buf > 0 && VG_(clo_log_fd) >= 0) {
- VG_(send_bytes_to_logging_sink)( myprintf_buf, n_myprintf_buf );
+ if (myprintf_buf.n > 0 && VG_(clo_log_fd) >= 0) {
+ VG_(send_bytes_to_logging_sink)( myprintf_buf.buf, myprintf_buf.n );
}
va_end(vargs);
}
/* A general replacement for sprintf(). */
-
-static Char *vg_sprintf_ptr;
-
-static void add_to_vg_sprintf_buf ( Char c )
+static void add_to_vg_sprintf_buf ( Char c, void *p )
{
- *vg_sprintf_ptr++ = c;
+ char **vg_sprintf_ptr = p;
+ *(*vg_sprintf_ptr)++ = c;
}
UInt VG_(sprintf) ( Char* buf, Char *format, ... )
{
Int ret;
va_list vargs;
-
- vg_sprintf_ptr = buf;
+ Char *vg_sprintf_ptr = buf;
va_start(vargs,format);
- ret = VG_(vprintf) ( add_to_vg_sprintf_buf, format, vargs );
- add_to_vg_sprintf_buf(0);
+ ret = VG_(vprintf) ( add_to_vg_sprintf_buf, format, vargs, &vg_sprintf_ptr );
+ add_to_vg_sprintf_buf(0,&vg_sprintf_ptr);
va_end(vargs);
vg_assert(VG_(strlen)(buf) == ret);
+
return ret;
}
{
ExeContext *ec;
Addr sp, fp;
- Addr stacktop, sigstack_low, sigstack_high;
+ Addr stacktop;
+ ThreadId tid = VG_(get_lwp_tid)(VG_(gettid)());
+ ThreadState *tst = VG_(get_ThreadState)(tid);
ARCH_GET_REAL_STACK_PTR(sp);
ARCH_GET_REAL_FRAME_PTR(fp);
- stacktop = VG_(valgrind_last);
- VG_(get_sigstack_bounds)( &sigstack_low, &sigstack_high );
- if (sp >= sigstack_low && sp < sigstack_high)
- stacktop = sigstack_high;
-
+
+ stacktop = (Addr)(tst->os_state.stack + tst->os_state.stacksize);
+
ec = VG_(get_ExeContext2)(ret, fp, sp, stacktop);
return ec;
{
Int res;
/* res = getrlimit( resource, rlim ); */
- res = VG_(do_syscall2)(__NR_getrlimit, resource, (UWord)rlim);
+ res = VG_(do_syscall2)(__NR_ugetrlimit, resource, (UWord)rlim);
+ if (res == -VKI_ENOSYS)
+ res = VG_(do_syscall2)(__NR_getrlimit, resource, (UWord)rlim);
if(VG_(is_kerror)(res)) res = -1;
return res;
}
/* Support for setrlimit. */
-Int VG_(setrlimit) (Int resource, struct vki_rlimit *rlim)
+Int VG_(setrlimit) (Int resource, const struct vki_rlimit *rlim)
{
Int res;
/* res = setrlimit( resource, rlim ); */
}
+void VG_(nanosleep)(struct vki_timespec *ts)
+{
+ VG_(do_syscall2)(__NR_nanosleep, (UWord)ts, (UWord)NULL);
+}
+
/* ---------------------------------------------------------------------
Primitive support for bagging memory via mmap.
------------------------------------------------------------------ */
#include "core.h"
-
/* static ... to keep it out of the stack frame. */
static Char procmap_buf[M_PROCMAP_BUF];
return -1;
}
-static Int readchar ( Char* buf, Char* ch )
+static Int readchar ( const Char* buf, Char* ch )
{
if (*buf == 0) return 0;
*ch = *buf;
return 1;
}
-static Int readhex ( Char* buf, UWord* val )
+static Int readhex ( const Char* buf, UWord* val )
{
Int n = 0;
*val = 0;
return n;
}
-static Int readdec ( Char* buf, UInt* val )
+static Int readdec ( const Char* buf, UInt* val )
{
Int n = 0;
*val = 0;
}
-/* Read /proc/self/maps, store the contents in a static buffer. If there's
- a syntax error or other failure, just abort. */
-void VG_(read_procselfmaps)(void)
+/* Read /proc/self/maps, store the contents into a static buffer. If
+ there's a syntax error or other failure, just abort. */
+
+static void read_procselfmaps ( void )
{
Int n_chunk, fd;
}
buf_n_tot = 0;
do {
- n_chunk = VG_(read) ( fd, &procmap_buf[buf_n_tot],
+ n_chunk = VG_(read) ( fd, &procmap_buf[buf_n_tot],
M_PROCMAP_BUF - buf_n_tot );
buf_n_tot += n_chunk;
} while ( n_chunk > 0 && buf_n_tot < M_PROCMAP_BUF );
start address in memory
length
- r permissions char; either - or r
- w permissions char; either - or w
- x permissions char; either - or x
+ page protections (using the VKI_PROT_* flags)
+ mapped file device and inode
offset in file, or zero if no file
filename, zero terminated, or NULL if no file
So the sig of the called fn might be
- void (*record_mapping)( Addr start, SizeT size,
- Char r, Char w, Char x,
+ void (*record_mapping)( Addr start, SizeT size, UInt prot,
+ UInt dev, UInt info,
ULong foffset, UChar* filename )
Note that the supplied filename is transiently stored; record_mapping
procmap_buf!
*/
void VG_(parse_procselfmaps) (
- void (*record_mapping)( Addr addr, SizeT len, Char rr, Char ww, Char xx,
+ void (*record_mapping)( Addr addr, SizeT len, UInt prot,
UInt dev, UInt ino, ULong foff, const UChar* filename )
)
{
Addr start, endPlusOne;
UChar* filename;
UChar rr, ww, xx, pp, ch, tmp;
- UInt ino;
+ UInt ino, prot;
UWord foffset, maj, min;
+ read_procselfmaps();
+
tl_assert( '\0' != procmap_buf[0] && 0 != buf_n_tot);
if (0)
for (k = i-50; k <= i; k++) VG_(printf)("%c", procmap_buf[k]);
VG_(printf)("'\n");
}
- VG_(exit)(1);
+ VG_(exit)(1);
read_line_ok:
/* Try and find the name of the file mapped to this segment, if
it exists. */
- while (procmap_buf[i] != '\n' && i < M_PROCMAP_BUF-1) i++;
+ while (procmap_buf[i] != '\n' && i < buf_n_tot-1) i++;
i_eol = i;
i--;
while (!VG_(isspace)(procmap_buf[i]) && i >= 0) i--;
tmp = filename[i_eol - i];
filename[i_eol - i] = '\0';
} else {
- tmp = '\0';
+ tmp = 0;
filename = NULL;
foffset = 0;
}
+ prot = 0;
+ if (rr == 'r') prot |= VKI_PROT_READ;
+ if (ww == 'w') prot |= VKI_PROT_WRITE;
+ if (xx == 'x') prot |= VKI_PROT_EXEC;
+
+ //if (start < VG_(valgrind_last))
(*record_mapping) ( start, endPlusOne-start,
- rr, ww, xx, maj * 256 + min, ino,
+ prot, maj * 256 + min, ino,
foffset, filename );
if ('\0' != tmp) {
--- /dev/null
+
+/*--------------------------------------------------------------------*/
+/*--- Pthreads library modelling. vg_pthreadmodel.c ---*/
+/*--------------------------------------------------------------------*/
+
+/*
+ This file is part of Valgrind, an extensible x86 protected-mode
+ emulator for monitoring program execution on x86-Unixes.
+
+ Copyright (C) 2005 Jeremy Fitzhardinge
+ jeremy@goop.org
+
+ This program is free software; you can redistribute it and/or
+ modify it under the terms of the GNU General Public License as
+ published by the Free Software Foundation; either version 2 of the
+ License, or (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
+ 02111-1307, USA.
+
+ The GNU General Public License is contained in the file COPYING.
+*/
+
+/*
+ This file wraps the client's use of libpthread functions and calls
+   on vg_threadmodel.c to model the state of the client's threads.
+   The intent is to 1) look for problems in the client's use of the
+ pthread API, and 2) tell tools which care about thread events (eg,
+ helgrind).
+
+ This file is intended to be implementation-independent. It assumes
+ that the client is using the same pthread.h as the one we include
+ here, but makes minimal assumptions about the actual structures
+ defined and so on (ie, the exact nature of pthread_t).
+
+ (For now we assume there's a 1:1 relationship between pthread_t's
+ and Valgrind-visible threads; N:M implementations will need further
+ work.)
+
+ The model is based on the pthread standard rather than any
+ particular implementation, in order to encourage portable use of
+ libpthread. On the other hand, we will probably need to implement
+ particular implementation extensions if they're widely used.
+
+ One tricky problem we need to solve is the mapping between
+ pthread_t identifiers and internal thread identifiers.
+ */
+
+#include "core.h"
+
+#if 0
+
+#define __USE_GNU
+#define __USE_UNIX98
+#include <pthread.h>
+
+static const Bool debug = False;
+
+static Bool check_wrappings(void);
+
+#define ENTER(x) \
+ do { \
+ if (VG_(clo_trace_pthreads)) \
+ VG_(message)(Vg_DebugMsg, ">>> %d entering %s", \
+ VG_(get_running_tid)(), #x); \
+ } while(0)
+
+static const Char *pp_retval(enum return_type rt, Word retval)
+{
+ static Char buf[50];
+
+ switch(rt) {
+ case RT_RETURN:
+ VG_(sprintf)(buf, "return %d 0x%x", retval, retval);
+ return buf;
+
+ case RT_LONGJMP:
+ return "LONGJMPed out";
+
+ case RT_EXIT:
+ return "thread exit";
+ }
+ return "??";
+}
+
+#define LEAVE(x, rt, retval) \
+ do { \
+ if (VG_(clo_trace_pthreads)) \
+ VG_(message)(Vg_DebugMsg, "<<< %d leaving %s -> %s", \
+ VG_(get_running_tid)(), #x, pp_retval(rt, retval)); \
+ } while(0)
+
+struct pthread_map
+{
+ pthread_t id;
+
+ ThreadId tid;
+};
+
+static Int pthread_cmp(const void *v1, const void *v2)
+{
+ const pthread_t *a = (const pthread_t *)v1;
+ const pthread_t *b = (const pthread_t *)v2;
+
+ return VG_(memcmp)(a, b, sizeof(*a));
+}
+
+static SkipList sk_pthread_map = SKIPLIST_INIT(struct pthread_map, id, pthread_cmp,
+ NULL, VG_AR_CORE);
+
+/* Find a ThreadId for a particular pthread_t; block until it becomes available */
+static ThreadId get_pthread_mapping(pthread_t id)
+{
+ /* Nasty little spin loop; revise if this turns out to be a
+ problem. This should only spin for as long as it takes for the
+ child thread to register the pthread_t. */
+ for(;;) {
+ struct pthread_map *m = VG_(SkipList_Find_Exact)(&sk_pthread_map, &id);
+
+ if (m && m->tid != VG_INVALID_THREADID)
+ return m->tid;
+
+ //VG_(printf)("find %x -> %p\n", id, m);
+ VG_(vg_yield)();
+ }
+}
+
+/* Create a mapping between a ThreadId and a pthread_t */
+static void pthread_id_mapping(ThreadId tid, Addr idp, UInt idsz)
+{
+ pthread_t id = *(pthread_t *)idp;
+ struct pthread_map *m = VG_(SkipList_Find_Exact)(&sk_pthread_map, &id);
+
+ if (debug)
+ VG_(printf)("Thread %d maps to %p\n", tid, id);
+
+ if (m == NULL) {
+ m = VG_(SkipNode_Alloc)(&sk_pthread_map);
+ m->id = id;
+ m->tid = tid;
+ VG_(SkipList_Insert)(&sk_pthread_map, m);
+ } else {
+ if (m->tid != VG_INVALID_THREADID && m->tid != tid)
+ VG_(message)(Vg_UserMsg, "Thread %d is creating duplicate mapping for pthread identifier %x; previously mapped to %d\n",
+ tid, (UInt)id, m->tid);
+ m->tid = tid;
+ }
+}
+
+static void check_thread_exists(ThreadId tid)
+{
+ if (!VG_(tm_thread_exists)(tid)) {
+ if (debug)
+ VG_(printf)("creating thread %d\n", tid);
+ VG_(tm_thread_create)(VG_INVALID_THREADID, tid, False);
+ }
+}
+
+static Addr startfunc_wrapper = 0;
+
+void VG_(pthread_startfunc_wrapper)(Addr wrapper)
+{
+ startfunc_wrapper = wrapper;
+}
+
+struct pthread_create_nonce {
+ Bool detached;
+ pthread_t *threadid;
+};
+
+static void *before_pthread_create(va_list va)
+{
+ pthread_t *threadp = va_arg(va, pthread_t *);
+ const pthread_attr_t *attr = va_arg(va, const pthread_attr_t *);
+ void *(*start)(void *) = va_arg(va, void *(*)(void *));
+ void *arg = va_arg(va, void *);
+ struct pthread_create_nonce *n;
+ struct vg_pthread_newthread_data *data;
+ ThreadState *tst;
+
+ if (!check_wrappings())
+ return NULL;
+
+ ENTER(pthread_create);
+
+ /* Data is in the client heap and is freed by the client in the
+ startfunc_wrapper. */
+ vg_assert(startfunc_wrapper != 0);
+
+ tst = VG_(get_ThreadState)(VG_(get_running_tid)());
+
+ VG_(sk_malloc_called_by_scheduler) = True;
+ data = SK_(malloc)(sizeof(*data));
+ VG_(sk_malloc_called_by_scheduler) = False;
+
+ VG_TRACK(pre_mem_write, Vg_CorePThread, tst->tid, "new thread data",
+ (Addr)data, sizeof(*data));
+ data->startfunc = start;
+ data->arg = arg;
+ VG_TRACK(post_mem_write, (Addr)data, sizeof(*data));
+
+ /* Substitute arguments
+ XXX hack: need an API to do this. */
+ ((Word *)tst->arch.m_esp)[3] = startfunc_wrapper;
+ ((Word *)tst->arch.m_esp)[4] = (Word)data;
+
+ if (debug)
+ VG_(printf)("starting thread at wrapper %p\n", startfunc_wrapper);
+
+ n = VG_(arena_malloc)(VG_AR_CORE, sizeof(*n));
+ n->detached = attr && !!attr->__detachstate;
+ n->threadid = threadp;
+
+ return n;
+}
+
+static void after_pthread_create(void *nonce, enum return_type rt, Word retval)
+{
+ struct pthread_create_nonce *n = (struct pthread_create_nonce *)nonce;
+ ThreadId tid = VG_(get_running_tid)();
+
+ if (n == NULL)
+ return;
+
+ if (rt == RT_RETURN && retval == 0) {
+ if (!VG_(tm_thread_exists)(tid))
+ VG_(tm_thread_create)(tid, get_pthread_mapping(*n->threadid),
+ n->detached);
+ else {
+ if (n->detached)
+ VG_(tm_thread_detach)(tid);
+ /* XXX set creator tid as well? */
+ }
+ }
+
+ VG_(arena_free)(VG_AR_CORE, n);
+
+ LEAVE(pthread_create, rt, retval);
+}
+
+static void *before_pthread_join(va_list va)
+{
+ pthread_t pt_joinee = va_arg(va, pthread_t);
+ ThreadId joinee;
+
+ if (!check_wrappings())
+ return NULL;
+
+ ENTER(pthread_join);
+
+ joinee = get_pthread_mapping(pt_joinee);
+
+ VG_(tm_thread_join)(VG_(get_running_tid)(), joinee);
+
+ return NULL;
+}
+
+static void after_pthread_join(void *v, enum return_type rt, Word retval)
+{
+ /* nothing to be done? */
+ if (!check_wrappings())
+ return;
+
+ LEAVE(pthread_join, rt, retval);
+}
+
+struct pthread_detach_data {
+ pthread_t id;
+};
+
+static void *before_pthread_detach(va_list va)
+{
+ pthread_t id = va_arg(va, pthread_t);
+ struct pthread_detach_data *data;
+
+ if (!check_wrappings())
+ return NULL;
+
+ ENTER(pthread_detach);
+
+ data = VG_(arena_malloc)(VG_AR_CORE, sizeof(*data));
+ data->id = id;
+
+ return data;
+}
+
+static void after_pthread_detach(void *nonce, enum return_type rt, Word retval)
+{
+ struct pthread_detach_data *data = (struct pthread_detach_data *)nonce;
+ ThreadId tid;
+
+ if (data == NULL)
+ return;
+
+ tid = get_pthread_mapping(data->id);
+
+ VG_(arena_free)(VG_AR_CORE, data);
+
+ if (rt == RT_RETURN && retval == 0)
+ VG_(tm_thread_detach)(tid);
+
+ LEAVE(pthread_detach, rt, retval);
+}
+
+
+
+static void *before_pthread_self(va_list va)
+{
+ /* If pthread_t is a structure, then this might be passed a pointer
+ to the return value. On Linux/glibc, it's a simple scalar, so it is
+ returned normally. */
+ if (!check_wrappings())
+ return NULL;
+
+ ENTER(pthread_self);
+
+ check_thread_exists(VG_(get_running_tid)());
+ return NULL;
+}
+
+static void after_pthread_self(void *nonce, enum return_type rt, Word retval)
+{
+ pthread_t ret = (pthread_t)retval;
+
+ if (!check_wrappings())
+ return;
+
+ pthread_id_mapping(VG_(get_running_tid)(), (Addr)&ret, sizeof(ret));
+
+ LEAVE(pthread_self, rt, retval);
+}
+
+
+/* If a mutex hasn't been initialized, check it against all the static
+ initializers to see if it appears to have been statically
+ initialized. */
+static void check_mutex_init(ThreadId tid, pthread_mutex_t *mx)
+{
+ static const pthread_mutex_t initializers[] = {
+ PTHREAD_MUTEX_INITIALIZER,
+ PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP,
+ PTHREAD_ERRORCHECK_MUTEX_INITIALIZER_NP,
+ PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP,
+ };
+ Int i;
+
+ if (VG_(tm_mutex_exists)((Addr)mx))
+ return;
+
+ VG_TRACK(pre_mem_read, Vg_CorePThread, tid, "pthread_mutex_t", (Addr)mx, sizeof(*mx));
+
+ for(i = 0; i < sizeof(initializers)/sizeof(*initializers); i++)
+ if (VG_(memcmp)(&initializers[i], mx, sizeof(*mx)) == 0) {
+ VG_(tm_mutex_init)(tid, (Addr)mx);
+ break;
+ }
+}
+
+static void *before_pthread_mutex_init(va_list va)
+{
+ pthread_mutex_t *mx = va_arg(va, pthread_mutex_t *);
+ const pthread_mutexattr_t *attr = va_arg(va, const pthread_mutexattr_t *);
+
+ if (!check_wrappings())
+ return NULL;
+
+ ENTER(pthread_mutex_init);
+
+ /* XXX look for recursive mutex */
+ /* XXX look for non-process scope */
+ (void)attr;
+
+ return mx;
+}
+
+static void after_pthread_mutex_init(void *nonce, enum return_type rt, Word retval)
+{
+ if (!check_wrappings())
+ return;
+
+ if (rt == RT_RETURN && retval == 0)
+ VG_(tm_mutex_init)(VG_(get_running_tid)(), (Addr)nonce);
+
+ LEAVE(pthread_mutex_init, rt, retval);
+}
+
+static void *before_pthread_mutex_destroy(va_list va)
+{
+ pthread_mutex_t *mx = va_arg(va, pthread_mutex_t *);
+
+ if (!check_wrappings())
+ return NULL;
+
+ ENTER(pthread_mutex_destroy);
+
+ VG_(tm_mutex_destroy)(VG_(get_running_tid)(), (Addr)mx);
+
+ return NULL;
+}
+
+static void after_pthread_mutex_destroy(void *nonce, enum return_type rt, Word retval)
+{
+ if (!check_wrappings())
+ return;
+
+ LEAVE(pthread_mutex_destroy, rt, retval);
+}
+
+static void *before_pthread_mutex_lock(va_list va)
+{
+ pthread_mutex_t *mx = va_arg(va, pthread_mutex_t *);
+
+ if (!check_wrappings())
+ return NULL;
+
+ ENTER(pthread_mutex_lock);
+
+ if (debug)
+ VG_(printf)("%d locking %p\n", VG_(get_running_tid)(), mx);
+ check_thread_exists(VG_(get_running_tid)());
+ check_mutex_init(VG_(get_running_tid)(), mx); /* mutex might be statically initialized */
+ VG_(tm_mutex_trylock)(VG_(get_running_tid)(), (Addr)mx);
+
+ return mx;
+}
+
+static void after_pthread_mutex_lock(void *nonce, enum return_type rt, Word retval)
+{
+ if (!check_wrappings())
+ return;
+
+ if (rt == RT_RETURN && retval == 0)
+ VG_(tm_mutex_acquire)(VG_(get_running_tid)(), (Addr)nonce);
+ else {
+ if (debug)
+ VG_(printf)("after mutex_lock failed: rt=%d ret=%d\n", rt, retval);
+ VG_(tm_mutex_giveup)(VG_(get_running_tid)(), (Addr)nonce);
+ }
+
+ LEAVE(pthread_mutex_lock, rt, retval);
+}
+
+static void *before_pthread_mutex_trylock(va_list va)
+{
+ pthread_mutex_t *mx = va_arg(va, pthread_mutex_t *);
+
+ if (!check_wrappings())
+ return NULL;
+
+ ENTER(pthread_mutex_trylock);
+
+ if (debug)
+ VG_(printf)("%d trylocking %p\n", VG_(get_running_tid)(), mx);
+ check_thread_exists(VG_(get_running_tid)());
+ check_mutex_init(VG_(get_running_tid)(), mx); /* mutex might be statically initialized */
+ VG_(tm_mutex_trylock)(VG_(get_running_tid)(), (Addr)mx);
+
+ return mx;
+}
+
+static void after_pthread_mutex_trylock(void *nonce, enum return_type rt, Word retval)
+{
+ if (nonce == NULL)
+ return;
+
+ if (rt == RT_RETURN && retval == 0)
+ VG_(tm_mutex_acquire)(VG_(get_running_tid)(), (Addr)nonce);
+ else {
+ if (debug)
+ VG_(printf)("after mutex_trylock failed: rt=%d ret=%d\n", rt, retval);
+ VG_(tm_mutex_giveup)(VG_(get_running_tid)(), (Addr)nonce);
+ }
+
+ LEAVE(pthread_mutex_trylock, rt, retval);
+}
+
+static void *before_pthread_mutex_unlock(va_list va)
+{
+ pthread_mutex_t *mx = va_arg(va, pthread_mutex_t *);
+
+ if (!check_wrappings())
+ return NULL;
+
+ ENTER(pthread_mutex_unlock);
+
+ VG_(tm_mutex_tryunlock)(VG_(get_running_tid)(), (Addr)mx);
+
+ return mx;
+}
+
+static void after_pthread_mutex_unlock(void *nonce, enum return_type rt, Word retval)
+{
+ if (nonce == NULL)
+ return;
+
+ if (rt == RT_RETURN && retval == 0)
+ VG_(tm_mutex_unlock)(VG_(get_running_tid)(), (Addr)nonce); /* complete unlock */
+ else
+ VG_(tm_mutex_acquire)(VG_(get_running_tid)(), (Addr)nonce); /* re-acquire */
+
+ LEAVE(pthread_mutex_unlock, rt, retval);
+}
+
+
+static struct pt_wraps {
+ const Char *name;
+ FuncWrapper wrapper;
+ const CodeRedirect *redir;
+} wraps[] = {
+#define WRAP(func, extra) { #func extra, { before_##func, after_##func } }
+ WRAP(pthread_create, "@@GLIBC_2.1"), /* XXX TODO: 2.0 ABI (?) */
+ WRAP(pthread_join, ""),
+ WRAP(pthread_detach, ""),
+
+ WRAP(pthread_self, ""),
+
+ WRAP(pthread_mutex_init, ""),
+ WRAP(pthread_mutex_destroy, ""),
+ WRAP(pthread_mutex_lock, ""),
+ WRAP(pthread_mutex_trylock, ""),
+ WRAP(pthread_mutex_unlock, ""),
+#undef WRAP
+};
+
+/* Check to see if all the wrappers are resolved */
+static Bool check_wrappings()
+{
+ Int i;
+ static Bool ok = True;
+ static Bool checked = False;
+
+ if (checked)
+ return ok;
+
+ for(i = 0; i < sizeof(wraps)/sizeof(*wraps); i++) {
+ if (!VG_(is_resolved)(wraps[i].redir)) {
+ VG_(message)(Vg_DebugMsg, "Pthread wrapper for \"%s\" is not resolved",
+ wraps[i].name);
+ ok = False;
+ }
+ }
+
+ if (startfunc_wrapper == 0) {
+ VG_(message)(Vg_DebugMsg, "Pthread wrapper for thread start function is not resolved");
+ ok = False;
+ }
+
+ if (!ok)
+ VG_(message)(Vg_DebugMsg, "Missing intercepts; model disabled");
+
+ checked = True;
+ return ok;
+}
+
+/*
+ Set up all the wrappers for interesting functions.
+ */
+void VG_(pthread_init)()
+{
+ Int i;
+
+ for(i = 0; i < sizeof(wraps)/sizeof(*wraps); i++) {
+ //VG_(printf)("adding pthread wrapper for %s\n", wraps[i].name);
+ wraps[i].redir = VG_(add_wrapper)("soname:libpthread.so.0",
+ wraps[i].name, &wraps[i].wrapper);
+ }
+ VG_(tm_init)();
+ VG_(tm_thread_create)(VG_INVALID_THREADID, VG_(master_tid), True);
+}
+
+#else /* !0 */
+/* Stubs for now */
+
+void VG_(pthread_init)()
+{
+}
+
+void VG_(pthread_startfunc_wrapper)(Addr wrapper)
+{
+}
+#endif /* 0 */
/*--------------------------------------------------------------------*/
-/*--- A user-space pthreads implementation. vg_scheduler.c ---*/
+/*--- Thread scheduling. vg_scheduler.c ---*/
/*--------------------------------------------------------------------*/
/*
- This file is part of Valgrind, a dynamic binary instrumentation
- framework.
+ This file is part of Valgrind, an extensible x86 protected-mode
+ emulator for monitoring program execution on x86-Unixes.
Copyright (C) 2000-2004 Julian Seward
jseward@acm.org
The GNU General Public License is contained in the file COPYING.
*/
+/*
+ Overview
+
+ Valgrind tries to emulate the kernel's threading as closely as
+ possible. The client does all threading via the normal syscalls
+ (on Linux: clone, etc). Valgrind emulates this by creating exactly
+ the same process structure as would be created without Valgrind.
+ There are no extra threads.
+
+ The main difference is that Valgrind only allows one client thread
+ to run at once. This is controlled with the VCPU semaphore,
+ "run_sema". Any time a thread wants to run client code or
+ manipulate any shared state (which is anything other than its own
+ ThreadState entry), it must hold the run_sema.
+
+ When a thread is about to block in a blocking syscall, it releases
+ run_sema, and re-takes it when it becomes runnable again (either
+ because the syscall finished, or we took a signal).
+
+ VG_(scheduler) therefore runs in each thread. It returns only when
+ the thread is exiting, either because it exited itself, or it was
+ told to exit by another thread.
+
+ This file is almost entirely OS-independent. The details of how
+ the OS handles threading and signalling are abstracted away and
+ implemented elsewhere.
+ */
+
#include "valgrind.h" /* for VG_USERREQ__RUNNING_ON_VALGRIND and
VG_USERREQ__DISCARD_TRANSLATIONS, and others */
#include "core.h"
LinuxThreads. */
ThreadState VG_(threads)[VG_N_THREADS];
-/* The process' fork-handler stack. */
-static Int vg_fhstack_used = 0;
-static ForkHandlerEntry vg_fhstack[VG_N_FORKHANDLERSTACK];
-
-
-/* The tid of the thread currently running, or VG_INVALID_THREADID if
- none. */
-static ThreadId vg_tid_currently_running = VG_INVALID_THREADID;
-
-
-/* vg_oursignalhandler() might longjmp(). Here's the jmp_buf. */
-static jmp_buf scheduler_jmpbuf;
-/* This says whether scheduler_jmpbuf is actually valid. Needed so
- that our signal handler doesn't longjmp when the buffer isn't
- actually valid. */
-static Bool scheduler_jmpbuf_valid = False;
-/* ... and if so, here's the signal which caused it to do so. */
-static Int longjmpd_on_signal;
-/* If the current thread gets a syncronous unresumable signal, then
- its details are placed here by the signal handler, to be passed to
- the applications signal handler later on. */
-static vki_siginfo_t unresumable_siginfo;
-
-/* If != VG_INVALID_THREADID, this is the preferred tid to schedule */
-static ThreadId prefer_sched = VG_INVALID_THREADID;
-
-/* Keeping track of keys. */
-typedef
- struct {
- /* Has this key been allocated ? */
- Bool inuse;
- /* If .inuse==True, records the address of the associated
- destructor, or NULL if none. */
- void (*destructor)(void*);
- }
- ThreadKeyState;
-
-/* And our array of thread keys. */
-static ThreadKeyState vg_thread_keys[VG_N_THREAD_KEYS];
-
-typedef UInt ThreadKey;
-
-/* The scheduler does need to know the address of it so it can be
- called at program exit. */
-static Addr __libc_freeres_wrapper;
+/* If true, a fault is Valgrind-internal (ie, a bug) */
+Bool VG_(my_fault) = True;
/* Forwards */
-static void do_client_request ( ThreadId tid, UWord* args );
-static void scheduler_sanity ( void );
-static void do_pthread_mutex_timedlock_TIMEOUT ( ThreadId tid );
-static void do_pthread_cond_timedwait_TIMEOUT ( ThreadId tid );
-static void maybe_rendezvous_joiners_and_joinees ( void );
+static void do_client_request ( ThreadId tid );
+static void scheduler_sanity ( ThreadId tid );
+static void mostly_clear_thread_record ( ThreadId tid );
+static const HChar *name_of_thread_state ( ThreadStatus );
/* Stats. */
static UInt n_scheduling_events_MINOR = 0;
static UInt n_scheduling_events_MAJOR = 0;
+
void VG_(print_scheduler_stats)(void)
{
VG_(message)(Vg_DebugMsg,
n_scheduling_events_MAJOR, n_scheduling_events_MINOR);
}
+/* CPU semaphore, so that threads can run exclusively */
+static vg_sema_t run_sema;
+static ThreadId running_tid = VG_INVALID_THREADID;
+
+
/* ---------------------------------------------------------------------
Helper functions for the scheduler.
------------------------------------------------------------------ */
__inline__
-Bool is_valid_or_empty_tid ( ThreadId tid )
+static Bool is_valid_or_empty_tid ( ThreadId tid )
{
/* tid is unsigned, hence no < 0 test. */
if (tid == 0) return False;
( Bool (*p) ( Addr stack_min, Addr stack_max, void* d ),
void* d )
{
- ThreadId tid, tid_to_skip;
-
- tid_to_skip = VG_INVALID_THREADID;
+ ThreadId tid;
for (tid = 1; tid < VG_N_THREADS; tid++) {
if (VG_(threads)[tid].status == VgTs_Empty) continue;
- if (tid == tid_to_skip) continue;
+
if ( p ( STACK_PTR(VG_(threads)[tid].arch),
VG_(threads)[tid].stack_highest_word, d ) )
return tid;
return VG_INVALID_THREADID;
}
+void VG_(mark_from_registers)(void (*mark_addr)(Addr))
+{
+ ThreadId tid;
+
+ for(tid = 1; tid < VG_N_THREADS; tid++) {
+ if (!VG_(is_valid_tid)(tid))
+ continue;
+ VGA_(mark_from_registers)(tid, mark_addr);
+ }
+}
/* Print the scheduler status. */
void VG_(pp_sched_status) ( void )
{
Int i;
VG_(printf)("\nsched status:\n");
+ VG_(printf)(" running_tid=%d\n", running_tid);
for (i = 1; i < VG_N_THREADS; i++) {
if (VG_(threads)[i].status == VgTs_Empty) continue;
- VG_(printf)("\nThread %d: status = ", i);
- switch (VG_(threads)[i].status) {
- case VgTs_Runnable: VG_(printf)("Runnable"); break;
- case VgTs_WaitJoinee: VG_(printf)("WaitJoinee(%d)",
- VG_(threads)[i].joiner_jee_tid);
- break;
- case VgTs_WaitJoiner: VG_(printf)("WaitJoiner"); break;
- case VgTs_Sleeping: VG_(printf)("Sleeping"); break;
- case VgTs_WaitMX: VG_(printf)("WaitMX"); break;
- case VgTs_WaitCV: VG_(printf)("WaitCV"); break;
- case VgTs_WaitSys: VG_(printf)("WaitSys"); break;
- default: VG_(printf)("???"); break;
- }
- VG_(printf)(", associated_mx = %p, associated_cv = %p\n",
- VG_(threads)[i].associated_mx,
- VG_(threads)[i].associated_cv );
+ VG_(printf)("\nThread %d: status = %s\n", i, name_of_thread_state(VG_(threads)[i].status));
VG_(pp_ExeContext)(
VG_(get_ExeContext2)( INSTR_PTR(VG_(threads)[i].arch),
FRAME_PTR(VG_(threads)[i].arch),
}
static
-void print_pthread_event ( ThreadId tid, Char* what )
-{
- VG_(message)(Vg_DebugMsg, "PTHREAD[%d]: %s", tid, what );
-}
-
-static
-Char* name_of_sched_event ( UInt event )
+HChar* name_of_sched_event ( UInt event )
{
switch (event) {
case VEX_TRC_JMP_SYSCALL: return "SYSCALL";
case VEX_TRC_JMP_NODECODE: return "NODECODE";
case VG_TRC_INNER_COUNTERZERO: return "COUNTERZERO";
case VG_TRC_INNER_FASTMISS: return "FASTMISS";
- case VG_TRC_UNRESUMABLE_SIGNAL: return "FATALSIGNAL";
+ case VG_TRC_FAULT_SIGNAL: return "FAULTSIGNAL";
default: return "??UNKNOWN??";
}
}
+static
+const HChar* name_of_thread_state ( ThreadStatus state )
+{
+ switch (state) {
+ case VgTs_Empty: return "VgTs_Empty";
+ case VgTs_Init: return "VgTs_Init";
+ case VgTs_Runnable: return "VgTs_Runnable";
+ case VgTs_WaitSys: return "VgTs_WaitSys";
+ case VgTs_Yielding: return "VgTs_Yielding";
+ case VgTs_Zombie: return "VgTs_Zombie";
+ default: return "VgTs_???";
+ }
+}
/* Allocate a completely empty ThreadState record. */
-static
-ThreadId vg_alloc_ThreadState ( void )
+ThreadId VG_(alloc_ThreadState) ( void )
{
Int i;
for (i = 1; i < VG_N_THREADS; i++) {
- if (VG_(threads)[i].status == VgTs_Empty)
+ if (VG_(threads)[i].status == VgTs_Empty) {
+ VG_(threads)[i].status = VgTs_Init;
+ VG_(threads)[i].exitreason = VgSrc_None;
return i;
+ }
}
VG_(printf)("vg_alloc_ThreadState: no free slots available\n");
VG_(printf)("Increase VG_N_THREADS, rebuild and try again.\n");
return &VG_(threads)[tid];
}
-/* Return True precisely when get_current_tid can return
- successfully. */
-Bool VG_(running_a_thread) ( void )
+/* Given an LWP id (ie, real kernel thread id), find the corresponding
+ ThreadId */
+ThreadId VG_(get_lwp_tid)(Int lwp)
{
- if (vg_tid_currently_running == VG_INVALID_THREADID)
- return False;
- /* Otherwise, it must be a valid thread ID. */
- vg_assert(VG_(is_valid_tid)(vg_tid_currently_running));
- return True;
+ ThreadId tid;
+
+   for(tid = 1; tid < VG_N_THREADS; tid++)
+ if (VG_(threads)[tid].status != VgTs_Empty && VG_(threads)[tid].os_state.lwpid == lwp)
+ return tid;
+
+ return VG_INVALID_THREADID;
+}
+
+/*
+ Mark a thread as Runnable. This will block until the run_sema is
+ available, so that we get exclusive access to all the shared
+ structures and the CPU. Up until we get the sema, we must not
+ touch any shared state.
+
+ When this returns, we'll actually be running.
+ */
+void VG_(set_running)(ThreadId tid)
+{
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+
+ vg_assert(tst->status != VgTs_Runnable);
+
+ tst->status = VgTs_Runnable;
+
+ VG_(sema_down)(&run_sema);
+ if (running_tid != VG_INVALID_THREADID)
+ VG_(printf)("tid %d found %d running\n", tid, running_tid);
+ vg_assert(running_tid == VG_INVALID_THREADID);
+ running_tid = tid;
+
+ if (VG_(clo_trace_sched))
+ print_sched_event(tid, "now running");
}
-ThreadId VG_(get_current_tid) ( void )
+ThreadId VG_(get_running_tid)(void)
{
- if (vg_tid_currently_running == VG_INVALID_THREADID)
- VG_(core_panic)("VG_(get_current_tid): not running generated code");
- /* Otherwise, it must be a valid thread ID. */
- vg_assert(VG_(is_valid_tid)(vg_tid_currently_running));
- return vg_tid_currently_running;
+ return running_tid;
}
-void VG_(resume_scheduler)(Int sigNo, vki_siginfo_t *info)
+Bool VG_(is_running_thread)(ThreadId tid)
{
- if (scheduler_jmpbuf_valid) {
- /* Can't continue; must longjmp back to the scheduler and thus
- enter the sighandler immediately. */
- vg_assert(vg_tid_currently_running != VG_INVALID_THREADID);
- VG_(memcpy)(&unresumable_siginfo, info, sizeof(vki_siginfo_t));
-
- longjmpd_on_signal = sigNo;
- __builtin_longjmp(scheduler_jmpbuf,1);
- } else {
- vg_assert(vg_tid_currently_running == VG_INVALID_THREADID);
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+
+ return
+// tst->os_state.lwpid == VG_(gettid)() && /* check we're this tid */
+ running_tid == tid && /* and that we've got the lock */
+ tst->status == VgTs_Runnable; /* and we're runnable */
+}
+
+/* Return the number of non-dead Threads */
+Int VG_(count_living_threads)(void)
+{
+ Int count = 0;
+ ThreadId tid;
+
+ for(tid = 1; tid < VG_N_THREADS; tid++)
+ if (VG_(threads)[tid].status != VgTs_Empty &&
+ VG_(threads)[tid].status != VgTs_Zombie)
+ count++;
+
+ return count;
+}
+
+/*
+ Set a thread into a sleeping state, and give up exclusive access to
+ the CPU. On return, the thread must be prepared to block until it
+ is ready to run again (generally this means blocking in a syscall,
+ but it may mean that we remain in a Runnable state and we're just
+ yielding the CPU to another thread).
+ */
+void VG_(set_sleeping)(ThreadId tid, ThreadStatus sleepstate)
+{
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+
+ vg_assert(tst->status == VgTs_Runnable);
+
+ vg_assert(sleepstate == VgTs_WaitSys ||
+ sleepstate == VgTs_Yielding);
+
+ tst->status = sleepstate;
+
+ vg_assert(running_tid == tid);
+ running_tid = VG_INVALID_THREADID;
+
+ /* Release the run_sema; this will reschedule any runnable
+ thread. */
+ VG_(sema_up)(&run_sema);
+
+ if (VG_(clo_trace_sched)) {
+ Char buf[50];
+ VG_(sprintf)(buf, "now sleeping in state %s", name_of_thread_state(sleepstate));
+ print_sched_event(tid, buf);
}
}
+/* Return true if the thread is still alive but in the process of
+ exiting. */
+inline Bool VG_(is_exiting)(ThreadId tid)
+{
+ vg_assert(VG_(is_valid_tid)(tid));
+ return VG_(threads)[tid].exitreason != VgSrc_None;
+}
+
+/* Clear out the ThreadState and release the semaphore. Leaves the
+ ThreadState in VgTs_Zombie state, so that it doesn't get
+ reallocated until the caller is really ready. */
+void VG_(exit_thread)(ThreadId tid)
+{
+ vg_assert(VG_(is_valid_tid)(tid));
+ vg_assert(VG_(is_running_thread)(tid));
+ vg_assert(VG_(is_exiting)(tid));
+
+   /* Its stack is now off-limits
+
+ XXX Don't do this - the client thread implementation can touch
+ the stack after thread death... */
+ if (0 && VG_(threads)[tid].stack_base) {
+ Segment *seg = VG_(find_segment)( VG_(threads)[tid].stack_base );
+ if (seg)
+ VG_TRACK( die_mem_stack, seg->addr, seg->len );
+ }
+
+ VGA_(cleanup_thread)( &VG_(threads)[tid].arch );
+
+ mostly_clear_thread_record(tid);
+ running_tid = VG_INVALID_THREADID;
+
+ /* There should still be a valid exitreason for this thread */
+ vg_assert(VG_(threads)[tid].exitreason != VgSrc_None);
+
+ VG_(sema_up)(&run_sema);
+}
+
+/* Kill a thread. This interrupts whatever a thread is doing, and
+ makes it exit ASAP. This does not set the exitreason or
+ exitcode. */
+void VG_(kill_thread)(ThreadId tid)
+{
+ vg_assert(VG_(is_valid_tid)(tid));
+ vg_assert(!VG_(is_running_thread)(tid));
+ vg_assert(VG_(is_exiting)(tid));
+
+ if (VG_(threads)[tid].status == VgTs_WaitSys) {
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "kill_thread zaps tid %d lwp %d",
+ tid, VG_(threads)[tid].os_state.lwpid);
+ VG_(tkill)(VG_(threads)[tid].os_state.lwpid, VKI_SIGVGKILL);
+ }
+}
+
+/*
+ Yield the CPU for a short time to let some other thread run.
+ */
+void VG_(vg_yield)(void)
+{
+ struct vki_timespec ts = { 0, 1 };
+ ThreadId tid = running_tid;
+
+ vg_assert(tid != VG_INVALID_THREADID);
+ vg_assert(VG_(threads)[tid].os_state.lwpid == VG_(gettid)());
+
+ VG_(set_sleeping)(tid, VgTs_Yielding);
+
+ //VG_(printf)("tid %d yielding EIP=%p\n", tid, VG_(threads)[tid].arch.m_eip);
+
+ /*
+ Tell the kernel we're yielding.
+ */
+ if (1)
+ VG_(do_syscall0)(__NR_sched_yield);
+ else
+ VG_(nanosleep)(&ts);
+
+ VG_(set_running)(tid);
+
+ VG_(poll_signals)(tid); /* something might have happened */
+}
+
+void VG_(resume_scheduler)(ThreadId tid)
+{
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+
+ vg_assert(tst->os_state.lwpid == VG_(gettid)());
+
+ if (tst->sched_jmpbuf_valid) {
+ /* Can't continue; must longjmp back to the scheduler and thus
+ enter the sighandler immediately. */
+
+ LONGJMP(tst->sched_jmpbuf, True);
+ }
+}
+
+#define SCHEDSETJMP(tid, jumped, stmt) \
+ do { \
+ ThreadState * volatile _qq_tst = VG_(get_ThreadState)(tid); \
+ \
+ (jumped) = SETJMP(_qq_tst->sched_jmpbuf); \
+ if ((jumped) == 0) { \
+ vg_assert(!_qq_tst->sched_jmpbuf_valid); \
+ _qq_tst->sched_jmpbuf_valid = True; \
+ stmt; \
+ } else if (VG_(clo_trace_sched)) \
+ VG_(printf)("SCHEDSETJMP(line %d) tid %d, jumped=%d\n", __LINE__, tid, jumped); \
+ vg_assert(_qq_tst->sched_jmpbuf_valid); \
+ _qq_tst->sched_jmpbuf_valid = False; \
+ } while(0)
+
+/* Run the thread tid for a while, and return a VG_TRC_* value to the
+ scheduler indicating what happened. */
static
UInt run_thread_for_a_while ( ThreadId tid )
{
+ volatile Bool jumped;
+ volatile ThreadState *tst = VG_(get_ThreadState)(tid);
+ //volatile Addr EIP = tst->arch.m_eip;
+ //volatile Addr nextEIP;
+
volatile UInt trc = 0;
- volatile Int dispatch_ctr_SAVED = VG_(dispatch_ctr);
- volatile Int done_this_time;
+ volatile Int dispatch_ctr_SAVED = VG_(dispatch_ctr);
+ volatile Int done_this_time;
/* For paranoia purposes only */
volatile Addr a_vex = (Addr) & VG_(threads)[tid].arch.vex;
volatile UInt sz_vex = (UInt) sizeof VG_(threads)[tid].arch.vex;
volatile UInt sz_vexsh = (UInt) sizeof VG_(threads)[tid].arch.vex_shadow;
volatile UInt sz_spill = (UInt) sizeof VG_(threads)[tid].arch.vex_spill;
- /* volatile UInt zz = VG_(threads)[tid].arch.vex.guest_EIP; */
/* Paranoia */
vg_assert(VG_(is_valid_tid)(tid));
- vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
- vg_assert(!scheduler_jmpbuf_valid);
- vg_assert(vg_tid_currently_running == VG_INVALID_THREADID);
+ vg_assert(VG_(is_running_thread)(tid));
+ vg_assert(!VG_(is_exiting)(tid));
/* Even more paranoia. Check that what we have matches
Vex's guest state layout requirements. */
-
-/* switch-back stuff (doesn't really work) */
-#if 0
-{
- static Int nn =0;
- dispatch_ctr_SAVED = VG_(dispatch_ctr)=2;
- VG_(printf)("just prior to bb %d\n",nn++);
- extern void switchback ( VexGuestAMD64State*, ULong );
- HChar* p = VG_(getenv)("LIMIT");
- Int lim = 0;
- // VG_(printf)("p = %p\n", p);
- while (*p >= '0' && *p <= '9') {
- lim = 10 * lim + (Int)(*p - '0');
- p++;
- }
- //VG_(printf)("LIMIT = %d\n", lim);
- if (nn == lim)
- switchback( &VG_(threads)[tid].arch.vex,
- LibVEX_GuestAMD64_get_rflags( &VG_(threads)[tid].arch.vex ));
-}
-#endif
-/* END switch-back stuff (doesn't really work) */
-
if (0)
VG_(printf)("%p %d %p %d %p %d\n",
- (void*)a_vex, sz_vex, (void*)a_vexsh, sz_vexsh,
- (void*)a_spill, sz_spill );
+ (void*)a_vex, sz_vex, (void*)a_vexsh, sz_vexsh,
+ (void*)a_spill, sz_spill );
vg_assert(IS_8_ALIGNED(sz_vex));
vg_assert(IS_8_ALIGNED(sz_vexsh));
VGP_PUSHCC(VgpRun);
/* there should be no undealt-with signals */
- vg_assert(unresumable_siginfo.si_signo == 0);
-
- if (__builtin_setjmp(scheduler_jmpbuf) == 0) {
- /* try this ... */
- vg_tid_currently_running = tid;
- scheduler_jmpbuf_valid = True;
- trc = VG_(run_innerloop)( &VG_(threads)[tid].arch.vex );
- scheduler_jmpbuf_valid = False;
- vg_tid_currently_running = VG_INVALID_THREADID;
- /* We get here if the client didn't take a fault. */
- } else {
+ //vg_assert(VG_(threads)[tid].siginfo.si_signo == 0);
+
+ //VG_(printf)("running EIP = %p ESP=%p\n", VG_(threads)[tid].arch.m_eip, VG_(threads)[tid].arch.m_esp);
+
+ vg_assert(VG_(my_fault));
+ VG_(my_fault) = False;
+
+ SCHEDSETJMP(tid, jumped, trc = VG_(run_innerloop)(&tst->arch.vex));
+
+ //nextEIP = tst->arch.m_eip;
+ //if (nextEIP >= VG_(client_end))
+ // VG_(printf)("trc=%d jump to %p from %p\n",
+ // trc, nextEIP, EIP);
+
+ VG_(my_fault) = True;
+
+ if (jumped) {
/* We get here if the client took a fault, which caused our
signal handler to longjmp. */
- scheduler_jmpbuf_valid = False;
- vg_tid_currently_running = VG_INVALID_THREADID;
vg_assert(trc == 0);
- trc = VG_TRC_UNRESUMABLE_SIGNAL;
- }
-
- vg_assert(!scheduler_jmpbuf_valid);
-
- if (trc == VG_TRC_INVARIANT_FAILED) {
- /* To see the bb causing this to fail, set VG_SCHEDULING_QUANTUM to 1,
- and make zz be the program counter at entry to this fn. */
- /* VG_(translate)(tid,zz,True); */
- VG_(core_panic)
- ("host invariant state failure in VEX-generated code");
- }
+ trc = VG_TRC_FAULT_SIGNAL;
+ VG_(block_signals)(tid);
+ }
done_this_time = (Int)dispatch_ctr_SAVED - (Int)VG_(dispatch_ctr) - 0;
static
void mostly_clear_thread_record ( ThreadId tid )
{
+ vki_sigset_t savedmask;
+
vg_assert(tid >= 0 && tid < VG_N_THREADS);
VGA_(cleanup_thread)(&VG_(threads)[tid].arch);
- VG_(threads)[tid].tid = tid;
- VG_(threads)[tid].status = VgTs_Empty;
- VG_(threads)[tid].associated_mx = NULL;
- VG_(threads)[tid].associated_cv = NULL;
- VG_(threads)[tid].awaken_at = 0;
- VG_(threads)[tid].joinee_retval = NULL;
- VG_(threads)[tid].joiner_thread_return = NULL;
- VG_(threads)[tid].joiner_jee_tid = VG_INVALID_THREADID;
- VG_(threads)[tid].detached = False;
- VG_(threads)[tid].cancel_st = True; /* PTHREAD_CANCEL_ENABLE */
- VG_(threads)[tid].cancel_ty = True; /* PTHREAD_CANCEL_DEFERRED */
- VG_(threads)[tid].cancel_pend = NULL; /* not pending */
- VG_(threads)[tid].custack_used = 0;
- VG_(sigemptyset)(&VG_(threads)[tid].sig_mask);
- VG_(sigfillset)(&VG_(threads)[tid].eff_sig_mask);
- VG_(threads)[tid].sigqueue_head = 0;
- VG_(threads)[tid].sigqueue_tail = 0;
- VG_(threads)[tid].specifics_ptr = NULL;
+ VG_(threads)[tid].tid = tid;
+
+ /* Leave the thread in Zombie, so that it doesn't get reallocated
+ until the caller is finally done with the thread stack. */
+ VG_(threads)[tid].status = VgTs_Zombie;
- VG_(threads)[tid].syscallno = -1;
- VG_(threads)[tid].sys_flags = 0;
+ VG_(threads)[tid].syscallno = -1;
- VG_(threads)[tid].proxy = NULL;
+ VG_(sigemptyset)(&VG_(threads)[tid].sig_mask);
+ VG_(sigemptyset)(&VG_(threads)[tid].tmp_sig_mask);
+
+ VGA_(os_state_clear)(&VG_(threads)[tid]);
/* start with no altstack */
VG_(threads)[tid].altstack.ss_sp = (void *)0xdeadbeef;
VG_(threads)[tid].altstack.ss_size = 0;
VG_(threads)[tid].altstack.ss_flags = VKI_SS_DISABLE;
+
+ /* clear out queued signals */
+ VG_(block_all_host_signals)(&savedmask);
+ if (VG_(threads)[tid].sig_queue != NULL) {
+ VG_(arena_free)(VG_AR_CORE, VG_(threads)[tid].sig_queue);
+ VG_(threads)[tid].sig_queue = NULL;
+ }
+ VG_(restore_all_host_signals)(&savedmask);
+
+ VG_(threads)[tid].sched_jmpbuf_valid = False;
}
+/* Called in the child after fork. Presumably the parent was running,
+   so now we're running. */
+static void sched_fork_cleanup(ThreadId me)
+{
+ ThreadId tid;
+ vg_assert(running_tid == me);
+
+ VG_(master_tid) = me;
+
+ VG_(threads)[me].os_state.lwpid = VG_(gettid)();
+ VG_(threads)[me].os_state.threadgroup = VG_(getpid)();
+
+ /* clear out all the unused thread slots */
+ for (tid = 1; tid < VG_N_THREADS; tid++) {
+ if (tid != me)
+ VG_(threads)[tid].status = VgTs_Empty;
+ }
+
+ /* re-init and take the sema */
+ VG_(sema_deinit)(&run_sema);
+ VG_(sema_init)(&run_sema);
+ VG_(sema_down)(&run_sema);
+}
/* Initialise the scheduler. Create a single "main" thread ready to
run, with special ThreadId of one. This is called at startup. The
- caller subsequently initialises the guest state components of
- this main thread, thread 1.
+ caller subsequently initialises the guest state components of this
+ main thread, thread 1.
*/
void VG_(scheduler_init) ( void )
{
Int i;
ThreadId tid_main;
+ VG_(sema_init)(&run_sema);
+
for (i = 0 /* NB; not 1 */; i < VG_N_THREADS; i++) {
+ VG_(threads)[i].sig_queue = NULL;
+
+ VGA_(os_state_init)(&VG_(threads)[i]);
mostly_clear_thread_record(i);
+
+ VG_(threads)[i].status = VgTs_Empty;
VG_(threads)[i].stack_size = 0;
VG_(threads)[i].stack_base = (Addr)NULL;
- VG_(threads)[i].stack_guard_size = 0;
VG_(threads)[i].stack_highest_word = (Addr)NULL;
}
- for (i = 0; i < VG_N_THREAD_KEYS; i++) {
- vg_thread_keys[i].inuse = False;
- vg_thread_keys[i].destructor = NULL;
- }
-
- vg_fhstack_used = 0;
+ tid_main = VG_(alloc_ThreadState)();
- /* Assert this is thread one, which has certain magic
- properties. */
- tid_main = vg_alloc_ThreadState();
- vg_assert(tid_main == 1);
- VG_(threads)[tid_main].status = VgTs_Runnable;
+ VG_(master_tid) = tid_main;
- VG_(threads)[tid_main].stack_highest_word = VG_(clstk_end) - sizeof(UWord);
+ /* Initial thread's stack is the original process stack */
+   VG_(threads)[tid_main].stack_highest_word = VG_(clstk_end) - sizeof(UWord);
VG_(threads)[tid_main].stack_base = VG_(clstk_base);
VG_(threads)[tid_main].stack_size = VG_(client_rlimit_stack).rlim_cur;
- /* Not running client code right now. */
- scheduler_jmpbuf_valid = False;
-
- /* Proxy for main thread */
- VG_(proxy_create)(tid_main);
+ VG_(atfork)(NULL, NULL, sched_fork_cleanup);
}
+/* ---------------------------------------------------------------------
+ The scheduler proper.
+ ------------------------------------------------------------------ */
-/* vthread tid is returning from a signal handler; modify its
- stack/regs accordingly. */
-static
-void handle_signal_return ( ThreadId tid )
-{
- Bool restart_blocked_syscalls;
- struct vki_timespec * rem;
-
- vg_assert(VG_(is_valid_tid)(tid));
-
- restart_blocked_syscalls = VG_(signal_returns)(tid);
-
- /* If we were interrupted in the middle of a rendezvous
- then check the rendezvous hasn't completed while we
- were busy handling the signal. */
- if (VG_(threads)[tid].status == VgTs_WaitJoiner ||
- VG_(threads)[tid].status == VgTs_WaitJoinee ) {
- maybe_rendezvous_joiners_and_joinees();
- }
-
- /* If we were interrupted while waiting on a mutex then check that
- it hasn't been unlocked while we were busy handling the signal. */
- if (VG_(threads)[tid].status == VgTs_WaitMX &&
- VG_(threads)[tid].associated_mx->__vg_m_count == 0) {
- vg_pthread_mutex_t* mutex = VG_(threads)[tid].associated_mx;
- mutex->__vg_m_count = 1;
- mutex->__vg_m_owner = (/*_pthread_descr*/void*)(UWord)tid;
- VG_(threads)[tid].status = VgTs_Runnable;
- VG_(threads)[tid].associated_mx = NULL;
- /* m_edx already holds pth_mx_lock() success (0) */
- }
-
- if (restart_blocked_syscalls)
- /* Easy; we don't have to do anything. */
- return;
-
- if (VG_(threads)[tid].status == VgTs_Sleeping
- && SYSCALL_NUM(VG_(threads)[tid].arch) == __NR_nanosleep) {
- /* We interrupted a nanosleep(). The right thing to do is to
- write the unused time to nanosleep's second param, but that's
- too much effort ... we just say that 1 nanosecond was not
- used, and return EINTR. */
- rem = (struct vki_timespec*)SYSCALL_ARG2(VG_(threads)[tid].arch);
- if (rem != NULL) {
- rem->tv_sec = 0;
- rem->tv_nsec = 1;
+static void handle_tt_miss ( ThreadId tid )
+{
+ Bool found;
+ Addr ip = INSTR_PTR(VG_(threads)[tid].arch);
+
+ /* Trivial event. Miss in the fast-cache. Do a full
+ lookup for it. */
+ found = VG_(search_transtab)( NULL,
+ ip, True/*upd_fast_cache*/ );
+ if (!found) {
+ /* Not found; we need to request a translation. */
+ if (VG_(translate)( tid, ip, /*debug*/False, 0/*not verbose*/ )) {
+ found = VG_(search_transtab)( NULL, ip, True );
+ if (!found)
+ VG_(core_panic)("VG_TRC_INNER_FASTMISS: missing tt_fast entry");
+ } else {
+ // If VG_(translate)() fails, it's because it had to throw a
+ // signal because the client jumped to a bad address. That
+ // means that either a signal has been set up for delivery,
+ // or the thread has been marked for termination. Either
+ // way, we just need to go back into the scheduler loop.
}
- SET_SYSCALL_RETVAL(tid, -VKI_EINTR);
- VG_(threads)[tid].status = VgTs_Runnable;
- return;
}
-
- /* All other cases? Just return. */
}
-
-struct timeout {
- UInt time; /* time we should awaken */
- ThreadId tid; /* thread which cares about this timeout */
- struct timeout *next;
-};
-
-static struct timeout *timeouts;
-
-static void add_timeout(ThreadId tid, UInt time)
+static void handle_syscall(ThreadId tid)
{
- struct timeout *t = VG_(arena_malloc)(VG_AR_CORE, sizeof(*t));
- struct timeout **prev, *tp;
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+ Bool jumped;
- t->time = time;
- t->tid = tid;
+ /* Syscall may or may not block; either way, it will be
+ complete by the time this call returns, and we'll be
+ runnable again. We could take a signal while the
+ syscall runs. */
+ SCHEDSETJMP(tid, jumped, VG_(client_syscall)(tid));
- if (VG_(clo_trace_sched)) {
- Char msg_buf[100];
- VG_(sprintf)(msg_buf, "add_timeout: now=%u adding timeout at %u",
- VG_(read_millisecond_timer)(), time);
- print_sched_event(tid, msg_buf);
+ if (!VG_(is_running_thread)(tid))
+ VG_(printf)("tid %d not running; running_tid=%d, tid %d status %d\n",
+ tid, running_tid, tid, tst->status);
+ vg_assert(VG_(is_running_thread)(tid));
+
+ if (jumped) {
+ VG_(block_signals)(tid);
+ VG_(poll_signals)(tid);
}
-
- for(tp = timeouts, prev = &timeouts;
- tp != NULL && tp->time < time;
- prev = &tp->next, tp = tp->next)
- ;
- t->next = tp;
- *prev = t;
}
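The SCHEDSETJMP idiom used in handle_syscall above — run the client syscall under a setjmp so that a signal arriving mid-call can longjmp back out, leaving `jumped` set — can be sketched in plain C. All names below (`run_under_setjmp`, `interrupt_syscall`, the toy "syscalls") are illustrative stand-ins, not Valgrind's:

```c
#include <setjmp.h>
#include <stdbool.h>

static jmp_buf sched_jmpbuf;
static bool in_syscall;

/* A signal handler would call this to abort a blocked "syscall". */
static void interrupt_syscall(void)
{
    if (in_syscall)
        longjmp(sched_jmpbuf, 1);
}

/* Run fn under a setjmp.  Returns true if fn completed normally,
   false if something longjmp'd out of it (the "jumped" case). */
static bool run_under_setjmp(void (*fn)(void))
{
    volatile bool jumped = false;
    if (setjmp(sched_jmpbuf) != 0) {
        jumped = true;            /* a handler longjmp'd out of fn */
    } else {
        in_syscall = true;
        fn();                     /* may "block"; may be interrupted */
    }
    in_syscall = false;           /* reached on both paths */
    return !jumped;
}

/* Toy "syscalls" for demonstration only. */
static void fast_syscall(void)    { /* completes immediately */ }
static void blocked_syscall(void) { interrupt_syscall(); /* signal arrives */ }
```

On the interrupted path the real scheduler then re-blocks and re-polls signals for the thread, which is what the `if (jumped)` branch in handle_syscall does.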
-static
-void sched_do_syscall ( ThreadId tid )
+/*
+ Run a thread until it wants to exit.
+
+ We assume that the caller has already called VG_(set_running) for
+ us, so we own the VCPU. Also, all signals are blocked.
+ */
+VgSchedReturnCode VG_(scheduler) ( ThreadId tid )
{
- Int syscall_no;
- Char msg_buf[100];
+ UInt trc;
+ ThreadState *tst = VG_(get_ThreadState)(tid);
- vg_assert(VG_(is_valid_tid)(tid));
- vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
+ VGP_PUSHCC(VgpSched);
- syscall_no = SYSCALL_NUM(VG_(threads)[tid].arch);
+ /* set the proper running signal mask */
+ VG_(block_signals)(tid);
+
+ vg_assert(VG_(is_running_thread)(tid));
+
+ VG_(dispatch_ctr) = VG_SCHEDULING_QUANTUM + 1;
+
+ while(!VG_(is_exiting)(tid)) {
+ UInt remaining_bbs;
+
+ if (VG_(dispatch_ctr) == 1) {
+ /* Our slice is done, so yield the CPU to another thread. This
+ doesn't actually sleep: the thread passes straight through
+ VgTs_Yielding so another runnable thread can take the CPU,
+ since a real sleep/wake cycle here would take too much time. */
+ VG_(set_sleeping)(tid, VgTs_Yielding);
+ /* nothing */
+ VG_(set_running)(tid);
+ //VG_(tm_thread_switchto)(tid);
+
+ /* OK, do some relatively expensive housekeeping stuff */
+ scheduler_sanity(tid);
+ VG_(sanity_check_general)(False);
+
+ /* Look for any pending signals for this thread, and set them up
+ for delivery */
+ VG_(poll_signals)(tid);
+
+ if (VG_(is_exiting)(tid))
+ break; /* poll_signals picked up a fatal signal */
+
+ /* For stats purposes only. */
+ n_scheduling_events_MAJOR++;
+
+ /* Figure out how many bbs to ask vg_run_innerloop to do. Note
+ that it decrements the counter before testing it for zero, so
+ that if VG_(dispatch_ctr) is set to N you get at most N-1
+ iterations. Also this means that VG_(dispatch_ctr) must
+ exceed zero before entering the innerloop. Also also, the
+ decrement is done before the bb is actually run, so you
+ always get at least one decrement even if nothing happens. */
+ VG_(dispatch_ctr) = VG_SCHEDULING_QUANTUM + 1;
+
+ /* paranoia ... */
+ vg_assert(tst->tid == tid);
+ vg_assert(tst->os_state.lwpid == VG_(gettid)());
+ }
- /* Special-case nanosleep because we can. But should we?
+ /* For stats purposes only. */
+ n_scheduling_events_MINOR++;
- XXX not doing so for now, because it doesn't seem to work
- properly, and we can use the syscall nanosleep just as easily.
- */
- if (0 && syscall_no == __NR_nanosleep) {
- UInt t_now, t_awaken;
- struct vki_timespec* req;
- req = (struct vki_timespec*)SYSCALL_ARG1(VG_(threads)[tid].arch);
-
- if (req->tv_sec < 0 || req->tv_nsec < 0 || req->tv_nsec >= 1000000000) {
- SET_SYSCALL_RETVAL(tid, -VKI_EINVAL);
- return;
- }
+ if (0)
+ VG_(message)(Vg_DebugMsg, "thread %d: running for %d bbs",
+ tid, VG_(dispatch_ctr) - 1 );
+
+ remaining_bbs = VG_(dispatch_ctr);
+
+ trc = run_thread_for_a_while ( tid );
- t_now = VG_(read_millisecond_timer)();
- t_awaken
- = t_now
- + (UInt)1000ULL * (UInt)(req->tv_sec)
- + (UInt)(req->tv_nsec) / 1000000;
- VG_(threads)[tid].status = VgTs_Sleeping;
- VG_(threads)[tid].awaken_at = t_awaken;
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "at %d: nanosleep for %d",
- t_now, t_awaken-t_now);
- print_sched_event(tid, msg_buf);
+ VG_(bbs_done) += remaining_bbs - VG_(dispatch_ctr);
+
+ if (VG_(clo_trace_sched) && VG_(clo_verbosity) > 2) {
+ Char buf[50];
+ VG_(sprintf)(buf, "TRC: %s", name_of_sched_event(trc));
+ print_sched_event(tid, buf);
}
- add_timeout(tid, t_awaken);
- /* Force the scheduler to run something else for a while. */
- return;
- }
- /* If pre_syscall returns true, then we're done immediately */
- if (VG_(pre_syscall)(tid)) {
- VG_(post_syscall(tid, True));
- vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
- } else {
- vg_assert(VG_(threads)[tid].status == VgTs_WaitSys);
- }
-}
+ switch(trc) {
+ case VG_TRC_INNER_FASTMISS:
+ vg_assert(VG_(dispatch_ctr) > 1);
+ handle_tt_miss(tid);
+ break;
+
+ case VEX_TRC_JMP_CLIENTREQ:
+ do_client_request(tid);
+ break;
+
+ case VEX_TRC_JMP_SYSCALL:
+ handle_syscall(tid);
+ if (VG_(clo_sanity_level) > 2)
+ VG_(sanity_check_general)(True); /* sanity-check every syscall */
+ break;
+ case VEX_TRC_JMP_YIELD:
+ /* Explicit yield, because this thread is in a spin-lock
+ or something. Let another thread run ASAP. */
+ VG_(dispatch_ctr) = 1;
+ break;
+ case VG_TRC_INNER_COUNTERZERO:
+ /* Timeslice is out. Let a new thread be scheduled. */
+ vg_assert(VG_(dispatch_ctr) == 1);
+ break;
-/* Sleep for a while, but be willing to be woken. */
-static
-void idle ( void )
-{
- struct vki_pollfd pollfd[1];
- Int delta = -1;
- Int fd = VG_(proxy_resfd)();
-
- pollfd[0].fd = fd;
- pollfd[0].events = VKI_POLLIN;
-
- /* Look though the nearest timeouts, looking for the next future
- one (there may be stale past timeouts). They'll all be mopped
- below up when the poll() finishes. */
- if (timeouts != NULL) {
- struct timeout *tp;
- Bool wicked = False;
- UInt now = VG_(read_millisecond_timer)();
-
- for(tp = timeouts; tp != NULL && tp->time < now; tp = tp->next) {
- /* If a thread is still sleeping in the past, make it runnable */
- ThreadState *tst = VG_(get_ThreadState)(tp->tid);
- if (tst->status == VgTs_Sleeping)
- tst->status = VgTs_Runnable;
- wicked = True; /* no sleep for the wicked */
- }
+ case VG_TRC_FAULT_SIGNAL:
+ /* Everything should be set up (either we're exiting, or
+ about to start in a signal handler). */
+ break;
- if (tp != NULL) {
- vg_assert(tp->time >= now);
- /* limit the signed int delta to INT_MAX */
- if ((tp->time - now) <= 0x7FFFFFFFU) {
- delta = tp->time - now;
- } else {
- delta = 0x7FFFFFFF;
+ case VEX_TRC_JMP_EMWARN: {
+ static Int counts[EmWarn_NUMBER];
+ static Bool counts_initted = False;
+ VexEmWarn ew;
+ HChar* what;
+ Bool show;
+ Int q;
+ if (!counts_initted) {
+ counts_initted = True;
+ for (q = 0; q < EmWarn_NUMBER; q++)
+ counts[q] = 0;
}
+ ew = (VexEmWarn)VG_(threads)[tid].arch.vex.guest_EMWARN;
+ what = (ew < 0 || ew >= EmWarn_NUMBER)
+ ? "unknown (?!)"
+ : LibVEX_EmWarn_string(ew);
+ show = (ew < 0 || ew >= EmWarn_NUMBER)
+ ? True
+ : counts[ew]++ < 3;
+ if (show) {
+ VG_(message)( Vg_UserMsg,
+ "Emulation warning: unsupported action:");
+ VG_(message)( Vg_UserMsg, " %s", what);
+ VG_(pp_ExeContext) ( VG_(get_ExeContext) ( tid ) );
+ }
+ break;
}
- if (wicked)
- delta = 0;
- }
- /* gotta wake up for something! */
- vg_assert(fd != -1 || delta != -1);
+ case VEX_TRC_JMP_NODECODE:
+ VG_(synth_sigill)(tid, INSTR_PTR(VG_(threads)[tid].arch));
+ break;
- /* If we need to do signal routing, then poll for pending signals
- every VG_(clo_signal_polltime) mS */
- if (VG_(do_signal_routing) && (delta > VG_(clo_signal_polltime) || delta == -1))
- delta = VG_(clo_signal_polltime);
-
- if (VG_(clo_trace_sched)) {
- Char msg_buf[100];
- VG_(sprintf)(msg_buf, "idle: waiting for %dms and fd %d",
- delta, fd);
- print_sched_event(0, msg_buf);
+ default:
+ VG_(printf)("\ntrc = %d\n", trc);
+ VG_(core_panic)("VG_(scheduler), phase 3: "
+ "unexpected thread return code");
+ /* NOTREACHED */
+ break;
+
+ } /* switch (trc) */
}
+
+ vg_assert(VG_(is_exiting)(tid));
- VG_(poll)(pollfd, fd != -1 ? 1 : 0, delta);
+ VGP_POPCC(VgpSched);
- /* See if there's anything on the timeout list which needs
- waking, and mop up anything in the past. */
- {
- UInt now = VG_(read_millisecond_timer)();
- struct timeout *tp;
+ //if (VG_(clo_model_pthreads))
+ // VG_(tm_thread_exit)(tid);
+
+ return tst->exitreason;
+}
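The counter discipline the scheduler's comments describe — initialise the counter to N, run at most N-1 basic blocks, and expect the counter to be 1 when VG_TRC_INNER_COUNTERZERO comes back — can be modelled with a tiny stand-in for the inner dispatch loop. `run_quantum` is a hypothetical name, not part of Valgrind:

```c
/* Mimic the inner loop's counter: it is decremented before each
   basic block is "run", and the loop exits when it would hit 1
   (mirroring vg_assert(VG_(dispatch_ctr) == 1) on COUNTERZERO).
   Returns the number of basic blocks run. */
static unsigned run_quantum(unsigned dispatch_ctr)
{
    unsigned bbs = 0;
    while (dispatch_ctr > 1) {
        dispatch_ctr--;   /* decrement happens before the block runs */
        bbs++;            /* "run" one basic block */
    }
    return bbs;           /* at most N-1 for an initial value of N */
}
```

This is why the scheduler sets `VG_(dispatch_ctr) = VG_SCHEDULING_QUANTUM + 1`: a quantum of Q basic blocks requires an initial counter of Q+1.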
- tp = timeouts;
- while(tp && tp->time <= now) {
- struct timeout *dead;
- ThreadState *tst;
-
- tst = VG_(get_ThreadState)(tp->tid);
-
- if (VG_(clo_trace_sched)) {
- Char msg_buf[100];
- VG_(sprintf)(msg_buf, "idle: now=%u removing timeout at %u",
- now, tp->time);
- print_sched_event(tp->tid, msg_buf);
- }
-
- /* If awaken_at != tp->time then it means the timeout is
- stale and we should just ignore it. */
- if(tst->awaken_at == tp->time) {
- switch(tst->status) {
- case VgTs_Sleeping:
- tst->awaken_at = 0xFFFFFFFF;
- tst->status = VgTs_Runnable;
- break;
-
- case VgTs_WaitMX:
- do_pthread_mutex_timedlock_TIMEOUT(tst->tid);
- break;
-
- case VgTs_WaitCV:
- do_pthread_cond_timedwait_TIMEOUT(tst->tid);
- break;
-
- default:
- /* This is a bit odd but OK; if a thread had a timeout
- but woke for some other reason (signal, condvar
- wakeup), then it will still be on the list. */
- if (0)
- VG_(printf)("idle(): unexpected status tp->tid=%d tst->status = %d\n",
- tp->tid, tst->status);
- break;
- }
- }
+/*
+ This causes all threads to exit forcibly. They aren't actually
+ dead by the time this returns; you need to call
+ VGA_(reap_threads)() to wait for them.
+ */
+void VG_(nuke_all_threads_except) ( ThreadId me, VgSchedReturnCode src )
+{
+ ThreadId tid;
- dead = tp;
- tp = tp->next;
+ vg_assert(VG_(is_running_thread)(me));
- VG_(arena_free)(VG_AR_CORE, dead);
- }
+ for (tid = 1; tid < VG_N_THREADS; tid++) {
+ if (tid == me
+ || VG_(threads)[tid].status == VgTs_Empty)
+ continue;
+ if (0)
+ VG_(printf)(
+ "VG_(nuke_all_threads_except): nuking tid %d\n", tid);
- timeouts = tp;
+ VG_(threads)[tid].exitreason = src;
+ VG_(kill_thread)(tid);
}
}
/* ---------------------------------------------------------------------
- The scheduler proper.
+ Specifying shadow register values
------------------------------------------------------------------ */
-// For handling of the default action of a fatal signal.
-// jmp_buf for fatal signals; VG_(fatal_signal_jmpbuf_ptr) is NULL until
-// the time is right that it can be used.
-static jmp_buf fatal_signal_jmpbuf;
-static jmp_buf* fatal_signal_jmpbuf_ptr;
-static Int fatal_sigNo; // the fatal signal, if it happens
-
-/* Run user-space threads until either
- * Deadlock occurs
- * One thread asks to shutdown Valgrind
- * The specified number of basic blocks has gone by.
-*/
-VgSchedReturnCode do_scheduler ( Int* exitcode, ThreadId* last_run_tid )
+void VG_(set_shadow_regs_area) ( ThreadId tid, OffT offset, SizeT size,
+ const UChar* area )
{
- ThreadId tid, tid_next;
- UInt trc;
- Int done_this_time, n_in_bounded_wait;
- Int n_exists, n_waiting_for_reaper;
- Bool found;
-
- /* Start with the root thread. tid in general indicates the
- currently runnable/just-finished-running thread. */
- *last_run_tid = tid = 1;
-
- /* This is the top level scheduler loop. It falls into three
- phases. */
- while (True) {
-
- /* ======================= Phase 0 of 3 =======================
- Be paranoid. Always a good idea. */
- stage1:
- scheduler_sanity();
- VG_(sanity_check_general)( False );
-
- /* ======================= Phase 1 of 3 =======================
- Handle I/O completions and signals. This may change the
- status of various threads. Then select a new thread to run,
- or declare deadlock, or sleep if there are no runnable
- threads but some are blocked on I/O. */
-
- /* Do the following loop until a runnable thread is found, or
- deadlock is detected. */
- while (True) {
-
- /* For stats purposes only. */
- n_scheduling_events_MAJOR++;
-
- /* Route signals to their proper places */
- VG_(route_signals)();
-
- /* See if any of the proxy LWPs report any activity: either a
- syscall completing or a signal arriving. */
- VG_(proxy_results)();
-
- /* Try and find a thread (tid) to run. */
- tid_next = tid;
- if (prefer_sched != VG_INVALID_THREADID) {
- tid_next = prefer_sched-1;
- prefer_sched = VG_INVALID_THREADID;
- }
- n_in_bounded_wait = 0;
- n_exists = 0;
- n_waiting_for_reaper = 0;
- while (True) {
- tid_next++;
- if (tid_next >= VG_N_THREADS) tid_next = 1;
- if (VG_(threads)[tid_next].status == VgTs_Sleeping
- || VG_(threads)[tid_next].status == VgTs_WaitSys
- || (VG_(threads)[tid_next].status == VgTs_WaitMX
- && VG_(threads)[tid_next].awaken_at != 0xFFFFFFFF)
- || (VG_(threads)[tid_next].status == VgTs_WaitCV
- && VG_(threads)[tid_next].awaken_at != 0xFFFFFFFF))
- n_in_bounded_wait ++;
- if (VG_(threads)[tid_next].status != VgTs_Empty)
- n_exists++;
- if (VG_(threads)[tid_next].status == VgTs_WaitJoiner)
- n_waiting_for_reaper++;
- if (VG_(threads)[tid_next].status == VgTs_Runnable)
- break; /* We can run this one. */
- if (tid_next == tid)
- break; /* been all the way round */
- }
- tid = tid_next;
-
- if (VG_(threads)[tid].status == VgTs_Runnable) {
- /* Found a suitable candidate. Fall out of this loop, so
- we can advance to stage 2 of the scheduler: actually
- running the thread. */
- break;
- }
-
- /* All threads have exited - pretend someone called exit() */
- if (n_waiting_for_reaper == n_exists) {
- *exitcode = 0; /* ? */
- return VgSrc_ExitSyscall;
- }
-
- /* We didn't find a runnable thread. Now what? */
- if (n_in_bounded_wait == 0) {
- /* No runnable threads and no prospect of any appearing
- even if we wait for an arbitrary length of time. In
- short, we have a deadlock. */
- VG_(pp_sched_status)();
- return VgSrc_Deadlock;
- }
-
- /* Nothing needs doing, so sit in idle until either a timeout
- happens or a thread's syscall completes. */
- idle();
- /* pp_sched_status(); */
- /* VG_(printf)("."); */
- }
-
-
- /* ======================= Phase 2 of 3 =======================
- Wahey! We've finally decided that thread tid is runnable, so
- we now do that. Run it for as much of a quanta as possible.
- Trivial requests are handled and the thread continues. The
- aim is not to do too many of Phase 1 since it is expensive. */
+ ThreadState* tst;
- if (0)
- VG_(printf)("SCHED: tid %d\n", tid);
-
- VG_TRACK( thread_run, tid );
-
- /* Figure out how many bbs to ask vg_run_innerloop to do. Note
- that it decrements the counter before testing it for zero, so
- that if VG_(dispatch_ctr) is set to N you get at most N-1
- iterations. Also this means that VG_(dispatch_ctr) must
- exceed zero before entering the innerloop. Also also, the
- decrement is done before the bb is actually run, so you
- always get at least one decrement even if nothing happens.
- */
- VG_(dispatch_ctr) = VG_SCHEDULING_QUANTUM + 1;
-
- /* paranoia ... */
- vg_assert(VG_(threads)[tid].tid == tid);
-
- /* Actually run thread tid. */
- while (True) {
-
- *last_run_tid = tid;
-
- /* For stats purposes only. */
- n_scheduling_events_MINOR++;
-
- if (0)
- VG_(message)(Vg_DebugMsg, "thread %d: running for %d bbs",
- tid, VG_(dispatch_ctr) - 1 );
-# if 0
- if (VG_(bbs_done) > 31700000 + 0) {
- dispatch_ctr_SAVED = VG_(dispatch_ctr) = 2;
- VG_(translate)(&VG_(threads)[tid],
- INSTR_PTR(VG_(threads)[tid].arch),
- /*debugging*/True);
- }
- vg_assert(INSTR_PTR(VG_(threads)[tid].arch) != 0);
-# endif
+ vg_assert(VG_(is_valid_tid)(tid));
+ tst = & VG_(threads)[tid];
- trc = run_thread_for_a_while ( tid );
+ // Bounds check
+ vg_assert(0 <= offset && offset < sizeof(VexGuestArchState));
+ vg_assert(offset + size <= sizeof(VexGuestArchState));
-# if 0
- if (0 == INSTR_PTR(VG_(threads)[tid].arch)) {
- VG_(printf)("tid = %d, dc = %llu\n", tid, VG_(bbs_done));
- vg_assert(0 != INSTR_PTR(VG_(threads)[tid].arch));
- }
-# endif
-
- /* Deal quickly with trivial scheduling events, and resume the
- thread. */
-
- if (trc == VG_TRC_INNER_FASTMISS) {
- Addr ip = INSTR_PTR(VG_(threads)[tid].arch);
-
- vg_assert(VG_(dispatch_ctr) > 1);
-
- /* Trivial event. Miss in the fast-cache. Do a full
- lookup for it. */
- found = VG_(search_transtab)( NULL,
- ip, True/*upd_fast_cache*/ );
- if (!found) {
- /* Not found; we need to request a translation. */
- if (VG_(translate)( tid, ip, /*debug*/False, 0/*not verbose*/ )) {
- found = VG_(search_transtab)( NULL, ip, True );
- if (!found)
- VG_(core_panic)("VG_TRC_INNER_FASTMISS: missing tt_fast entry");
- } else {
- // If VG_(translate)() fails, it's because it had to throw
- // a signal because the client jumped to a bad address.
- // This means VG_(deliver_signal)() will have been called
- // by now, and the program counter will now be pointing to
- // the start of the signal handler (if there is no
- // handler, things would have been aborted by now), so do
- // nothing, and things will work out next time around the
- // scheduler loop.
- }
- }
- continue; /* with this thread */
- }
+ VG_(memcpy)( (void*)(((Addr)(&tst->arch.vex_shadow)) + offset), area, size);
+}
- if (trc == VEX_TRC_JMP_EMWARN) {
- static Int counts[EmWarn_NUMBER];
- static Bool counts_initted = False;
- VexEmWarn ew;
- HChar* what;
- Bool show;
- Int q;
- if (!counts_initted) {
- counts_initted = True;
- for (q = 0; q < EmWarn_NUMBER; q++)
- counts[q] = 0;
- }
- ew = (VexEmWarn)VG_(threads)[tid].arch.vex.guest_EMWARN;
- what = (ew < 0 || ew >= EmWarn_NUMBER)
- ? "unknown (?!)"
- : LibVEX_EmWarn_string(ew);
- show = (ew < 0 || ew >= EmWarn_NUMBER)
- ? True
- : counts[ew]++ < 3;
- if (show) {
- VG_(message)( Vg_UserMsg,
- "Emulation warning: unsupported action:");
- VG_(message)( Vg_UserMsg, " %s", what);
- VG_(pp_ExeContext) ( VG_(get_ExeContext) ( tid ) );
- }
- continue; /* with this thread */
- }
-
- if (trc == VEX_TRC_JMP_CLIENTREQ) {
- UWord* args = (UWord*)(CLREQ_ARGS(VG_(threads)[tid].arch));
- UWord reqno = args[0];
- /* VG_(printf)("request 0x%x\n", reqno); */
-
- /* Are we really absolutely totally quitting? */
- if (reqno == VG_USERREQ__LIBC_FREERES_DONE) {
- if (0 || VG_(clo_trace_syscalls) || VG_(clo_trace_sched)) {
- VG_(message)(Vg_DebugMsg,
- "__libc_freeres() done; really quitting!");
- }
- return VgSrc_ExitSyscall;
- }
-
- do_client_request(tid,args);
- /* Following the request, we try and continue with the
- same thread if still runnable. If not, go back to
- Stage 1 to select a new thread to run. */
- if (VG_(threads)[tid].status == VgTs_Runnable
- && reqno != VG_USERREQ__PTHREAD_YIELD)
- continue; /* with this thread */
- else
- goto stage1;
- }
+void VG_(get_shadow_regs_area) ( ThreadId tid, OffT offset, SizeT size,
+ UChar* area )
+{
+ ThreadState* tst;
- if (trc == VEX_TRC_JMP_SYSCALL) {
- /* Do a syscall for the vthread tid. This could cause it
- to become non-runnable. One special case: spot the
- client doing calls to exit() and take this as the cue
- to exit. */
-# if 0
- { UInt* esp; Int i;
- esp=(UInt*)STACK_PTR(VG_(threads)[tid].arch);
- VG_(printf)("\nBEFORE\n");
- for (i = 10; i >= -10; i--)
- VG_(printf)("%2d %p = 0x%x\n", i, &esp[i], esp[i]);
- }
-# endif
-
- /* Deal with calling __libc_freeres() at exit. When the
- client does __NR_exit, it's exiting for good. So we
- then run __libc_freeres_wrapper. That quits by
- doing VG_USERREQ__LIBC_FREERES_DONE, and at that point
- we really exit. To be safe we nuke all other threads
- currently running.
-
- If not valgrinding (cachegrinding, etc) don't do this.
- __libc_freeres does some invalid frees which crash
- the unprotected malloc/free system. */
-
- if (SYSCALL_NUM(VG_(threads)[tid].arch) == __NR_exit
- || SYSCALL_NUM(VG_(threads)[tid].arch) == __NR_exit_group
- ) {
-
- /* Remember the supplied argument. */
- *exitcode = SYSCALL_ARG1(VG_(threads)[tid].arch);
-
- // Inform tool about regs read by syscall
- VG_TRACK( pre_reg_read, Vg_CoreSysCall, tid, "(syscallno)",
- O_SYSCALL_NUM, sizeof(UWord) );
-
- if (SYSCALL_NUM(VG_(threads)[tid].arch) == __NR_exit)
- VG_TRACK( pre_reg_read, Vg_CoreSysCall, tid,
- "exit(error_code)", O_SYSCALL_ARG1, sizeof(int) );
-
- if (SYSCALL_NUM(VG_(threads)[tid].arch) == __NR_exit_group)
- VG_TRACK( pre_reg_read, Vg_CoreSysCall, tid,
- "exit_group(error_code)", O_SYSCALL_ARG1,
- sizeof(int) );
-
- /* Only run __libc_freeres if the tool says it's ok and
- it hasn't been overridden with --run-libc-freeres=no
- on the command line. */
-
- if (VG_(needs).libc_freeres &&
- VG_(clo_run_libc_freeres) &&
- __libc_freeres_wrapper != 0) {
- if (VG_(clo_verbosity) > 2
- || VG_(clo_trace_syscalls) || VG_(clo_trace_sched)) {
- VG_(message)(Vg_DebugMsg,
- "Caught __NR_exit; running __libc_freeres()");
- }
- VG_(nuke_all_threads_except) ( tid );
- INSTR_PTR(VG_(threads)[tid].arch) =
- __libc_freeres_wrapper;
- vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
- goto stage1; /* party on, dudes (but not for much longer :) */
-
- } else {
- /* We won't run __libc_freeres; just exit now. */
- if (VG_(clo_verbosity) > 2
- || VG_(clo_trace_syscalls) || VG_(clo_trace_sched)) {
- VG_(message)(Vg_DebugMsg,
- "Caught __NR_exit; quitting");
- }
- return VgSrc_ExitSyscall;
- }
-
- }
-
- /* We've dealt with __NR_exit at this point. */
- vg_assert(SYSCALL_NUM(VG_(threads)[tid].arch) != __NR_exit &&
- SYSCALL_NUM(VG_(threads)[tid].arch) != __NR_exit_group);
-
- /* Trap syscalls to __NR_sched_yield and just have this
- thread yield instead. Not essential, just an
- optimisation. */
- if (SYSCALL_NUM(VG_(threads)[tid].arch) == __NR_sched_yield) {
- SET_SYSCALL_RETVAL(tid, 0); /* syscall returns with success */
- goto stage1; /* find a new thread to run */
- }
+ vg_assert(VG_(is_valid_tid)(tid));
+ tst = & VG_(threads)[tid];
- sched_do_syscall(tid);
-
-# if 0
- { UInt* esp; Int i;
- esp=(UInt*)STACK_PTR(VG_(threads)[tid].arch);
- VG_(printf)("AFTER\n");
- for (i = 10; i >= -10; i--)
- VG_(printf)("%2d %p = 0x%x\n", i, &esp[i], esp[i]);
- }
-# endif
-
- if (VG_(threads)[tid].status == VgTs_Runnable) {
- continue; /* with this thread */
- } else {
- goto stage1;
- }
- }
-
- /* It's an event we can't quickly deal with. Give up running
- this thread and handle things the expensive way. */
- break;
- }
+ // Bounds check
+ vg_assert(0 <= offset && offset < sizeof(VexGuestArchState));
+ vg_assert(offset + size <= sizeof(VexGuestArchState));
- /* ======================= Phase 3 of 3 =======================
- Handle non-trivial thread requests, mostly pthread stuff. */
-
- /* Ok, we've fallen out of the dispatcher for a
- non-completely-trivial reason. */
-
- if (0 && trc != VG_TRC_INNER_FASTMISS)
- VG_(message)(Vg_DebugMsg, "thread %d: completed %d bbs, trc %d",
- tid, done_this_time, (Int)trc );
-
- if (0 && trc != VG_TRC_INNER_FASTMISS)
- VG_(message)(Vg_DebugMsg, "thread %d: %llu bbs, event %s",
- tid, VG_(bbs_done),
- name_of_sched_event(trc) );
-
- /* Examine the thread's return code to figure out why it
- stopped. */
-
- switch (trc) {
-
- case VEX_TRC_JMP_YIELD:
- /* Explicit yield. Let a new thread be scheduled,
- simply by doing nothing, causing us to arrive back at
- Phase 1. */
- break;
-
- case VG_TRC_INNER_COUNTERZERO:
- /* Timeslice is out. Let a new thread be scheduled,
- simply by doing nothing, causing us to arrive back at
- Phase 1. */
- vg_assert(VG_(dispatch_ctr) == 1);
- break;
-
- case VG_TRC_UNRESUMABLE_SIGNAL:
- /* It got a SIGSEGV/SIGBUS/SIGILL/SIGFPE, which we need to
- deliver right away. */
- vg_assert(unresumable_siginfo.si_signo == VKI_SIGSEGV ||
- unresumable_siginfo.si_signo == VKI_SIGBUS ||
- unresumable_siginfo.si_signo == VKI_SIGILL ||
- unresumable_siginfo.si_signo == VKI_SIGFPE);
- vg_assert(longjmpd_on_signal == unresumable_siginfo.si_signo);
-
- /* make sure we've unblocked the signals which the handler blocked */
- VG_(unblock_host_signal)(longjmpd_on_signal);
-
- VG_(deliver_signal)(tid, &unresumable_siginfo, False);
- unresumable_siginfo.si_signo = 0; /* done */
- break;
-
- case VEX_TRC_JMP_NODECODE:
- VG_(synth_sigill)(tid, INSTR_PTR(VG_(threads)[tid].arch));
- break;
-
- case VEX_TRC_JMP_MAPFAIL:
- VG_(synth_fault)(tid);
- break;
-
- default:
- VG_(printf)("\ntrc = %d\n", trc);
- VG_(core_panic)("VG_(scheduler), phase 3: "
- "unexpected thread return code");
- /* NOTREACHED */
- break;
+ VG_(memcpy)( area, (void*)(((Addr)&(tst->arch.vex_shadow)) + offset), size);
+}
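Both shadow-register accessors above follow the same pattern: bounds-check an (offset, size) window against the guest state, then memcpy through it. A self-contained sketch, using a stand-in struct since the real type is the architecture-specific VexGuestArchState:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the per-thread shadow state; the real layout is
   VexGuestArchState and varies by architecture. */
typedef struct { unsigned char bytes[64]; } ShadowState;

static void set_shadow_area(ShadowState *st, size_t offset, size_t size,
                            const unsigned char *area)
{
    /* Same bounds checks as VG_(set_shadow_regs_area). */
    assert(offset < sizeof *st);
    assert(offset + size <= sizeof *st);
    memcpy((unsigned char *)st + offset, area, size);
}

static void get_shadow_area(const ShadowState *st, size_t offset, size_t size,
                            unsigned char *area)
{
    assert(offset < sizeof *st);
    assert(offset + size <= sizeof *st);
    memcpy(area, (const unsigned char *)st + offset, size);
}
```

Note that the second check (`offset + size <= sizeof *st`) is the one that catches a window running off the end of the state; the first alone would pass for a valid offset with an oversized length.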
- } /* switch (trc) */
- /* That completes Phase 3 of 3. Return now to the top of the
- main scheduler loop, to Phase 1 of 3. */
-
- } /* top-level scheduler loop */
-
-
- /* NOTREACHED */
- VG_(core_panic)("scheduler: post-main-loop ?!");
- /* NOTREACHED */
-}
-
-VgSchedReturnCode VG_(scheduler) ( Int* exitcode, ThreadId* last_run_tid,
- Int* fatal_sigNo_ptr )
-{
- VgSchedReturnCode src;
-
- fatal_signal_jmpbuf_ptr = &fatal_signal_jmpbuf;
- if (__builtin_setjmp( fatal_signal_jmpbuf_ptr ) == 0) {
- src = do_scheduler( exitcode, last_run_tid );
- } else {
- src = VgSrc_FatalSig;
- *fatal_sigNo_ptr = fatal_sigNo;
- }
- return src;
-}
-
-void VG_(need_resched) ( ThreadId prefer )
-{
- /* Tell the scheduler now might be a good time to find a new
- runnable thread, because something happened which woke a thread
- up.
-
- NB: This can be called unsynchronized from either a signal
- handler, or from another LWP (ie, real kernel thread).
-
- In principle this could simply be a matter of setting
- VG_(dispatch_ctr) to a small value (say, 2), which would make
- any running code come back to the scheduler fairly quickly.
-
- However, since the scheduler implements a strict round-robin
- policy with only one priority level, there are, by definition,
- no better threads to be running than the current thread anyway,
- so we may as well ignore this hint. For processes with a
- mixture of compute and I/O bound threads, this means the compute
- threads could introduce longish latencies before the I/O threads
- run. For programs with only I/O bound threads, need_resched
- won't have any effect anyway.
-
- OK, so I've added command-line switches to enable low-latency
- syscalls and signals. The prefer_sched variable is in effect
- the ID of a single thread which has higher priority than all the
- others. If set, the scheduler will prefer to schedule that
- thread over all others. Naturally, this could lead to
- starvation or other unfairness.
- */
-
- if (VG_(dispatch_ctr) > 10)
- VG_(dispatch_ctr) = 2;
- prefer_sched = prefer;
-}
-
-void VG_(scheduler_handle_fatal_signal) ( Int sigNo )
-{
- if (NULL != fatal_signal_jmpbuf_ptr) {
- fatal_sigNo = sigNo;
- __builtin_longjmp(*fatal_signal_jmpbuf_ptr, 1);
- }
-}
-
-/* ---------------------------------------------------------------------
- The pthread implementation.
- ------------------------------------------------------------------ */
-
-#include <pthread.h>
-#include <errno.h>
-
-/* /usr/include/bits/pthreadtypes.h:
- typedef unsigned long int pthread_t;
-*/
-
-
-/* -----------------------------------------------------------
- Thread CREATION, JOINAGE and CANCELLATION: HELPER FNS
- -------------------------------------------------------- */
-
-/* We've decided to action a cancellation on tid. Make it jump to
- thread_exit_wrapper() in vg_libpthread.c, passing PTHREAD_CANCELED
- as the arg. */
-static
-void make_thread_jump_to_cancelhdlr ( ThreadId tid )
-{
- Char msg_buf[100];
- vg_assert(VG_(is_valid_tid)(tid));
-
- /* Push PTHREAD_CANCELED on the stack and jump to the cancellation
- handler -- which is really thread_exit_wrapper() in
- vg_libpthread.c. */
- vg_assert(VG_(threads)[tid].cancel_pend != NULL);
-
- /* Set an argument and bogus return address. The return address will not
- be used, but we still need to have it so that the arg is at the
- correct stack offset. */
- VGA_(set_arg_and_bogus_ret)(tid, (UWord)PTHREAD_CANCELED, 0xBEADDEEF);
-
- /* .cancel_pend will hold &thread_exit_wrapper */
- INSTR_PTR(VG_(threads)[tid].arch) = (UWord)VG_(threads)[tid].cancel_pend;
-
- VG_(proxy_abort_syscall)(tid);
-
- /* Make sure we aren't cancelled again whilst handling this
- cancellation. */
- VG_(threads)[tid].cancel_st = False;
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf,
- "jump to cancellation handler (hdlr = %p)",
- VG_(threads)[tid].cancel_pend);
- print_sched_event(tid, msg_buf);
- }
-
- if(VG_(threads)[tid].status == VgTs_WaitCV) {
- /* posix says we must reaquire mutex before handling cancelation */
- vg_pthread_mutex_t* mx;
- vg_pthread_cond_t* cond;
-
- mx = VG_(threads)[tid].associated_mx;
- cond = VG_(threads)[tid].associated_cv;
- VG_TRACK( pre_mutex_lock, tid, mx );
-
- if (mx->__vg_m_owner == VG_INVALID_THREADID) {
- /* Currently unheld; hand it out to thread tid. */
- vg_assert(mx->__vg_m_count == 0);
- VG_(threads)[tid].status = VgTs_Runnable;
- VG_(threads)[tid].associated_cv = NULL;
- VG_(threads)[tid].associated_mx = NULL;
- mx->__vg_m_owner = (/*_pthread_descr*/void*)(UWord)tid;
- mx->__vg_m_count = 1;
- /* .m_edx already holds pth_cond_wait success value (0) */
-
- VG_TRACK( post_mutex_lock, tid, mx );
-
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "%s cv %p: RESUME with mx %p",
- "pthread_cancel", cond, mx );
- print_pthread_event(tid, msg_buf);
- }
-
- } else {
- /* Currently held. Make thread tid be blocked on it. */
- vg_assert(mx->__vg_m_count > 0);
- VG_(threads)[tid].status = VgTs_WaitMX;
- VG_(threads)[tid].associated_cv = NULL;
- VG_(threads)[tid].associated_mx = mx;
- SET_PTHREQ_RETVAL(tid, 0); /* pth_cond_wait success value */
-
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "%s cv %p: BLOCK for mx %p",
- "pthread_cancel", cond, mx );
- print_pthread_event(tid, msg_buf);
- }
- }
- } else {
- VG_(threads)[tid].status = VgTs_Runnable;
- }
-}
-
-
-
-/* Release resources and generally clean up once a thread has finally
- disappeared.
-
- BORKAGE/ISSUES as of 29 May 02 (moved from top of file --njn 2004-Aug-02)
-
- TODO sometime:
- - Mutex scrubbing - clearup_after_thread_exit: look for threads
- blocked on mutexes held by the exiting thread, and release them
- appropriately. (??)
-*/
-static
-void cleanup_after_thread_exited ( ThreadId tid, Bool forcekill )
-{
- Segment *seg;
- vg_assert(is_valid_or_empty_tid(tid));
- vg_assert(VG_(threads)[tid].status == VgTs_Empty);
- /* Its stack is now off-limits */
- if (VG_(threads)[tid].stack_base) {
- seg = VG_(find_segment)( VG_(threads)[tid].stack_base );
- VG_TRACK( die_mem_stack, seg->addr, seg->len );
- }
- VGA_(cleanup_thread)( &VG_(threads)[tid].arch );
- /* Not interested in the timeout anymore */
- VG_(threads)[tid].awaken_at = 0xFFFFFFFF;
- /* Delete proxy LWP */
- VG_(proxy_delete)(tid, forcekill);
-}
-
-
-/* Look for matching pairs of threads waiting for joiners and threads
- waiting for joinees. For each such pair copy the return value of
- the joinee into the joiner, let the joiner resume and discard the
- joinee. */
-static
-void maybe_rendezvous_joiners_and_joinees ( void )
-{
- Char msg_buf[100];
- void** thread_return;
- ThreadId jnr, jee;
-
- for (jnr = 1; jnr < VG_N_THREADS; jnr++) {
- if (VG_(threads)[jnr].status != VgTs_WaitJoinee)
- continue;
- jee = VG_(threads)[jnr].joiner_jee_tid;
- if (jee == VG_INVALID_THREADID)
- continue;
- vg_assert(VG_(is_valid_tid)(jee));
- if (VG_(threads)[jee].status != VgTs_WaitJoiner) {
- /* if joinee has become detached, then make join fail with
- EINVAL */
- if (VG_(threads)[jee].detached) {
- VG_(threads)[jnr].status = VgTs_Runnable;
- VG_(threads)[jnr].joiner_jee_tid = VG_INVALID_THREADID;
- SET_PTHREQ_RETVAL(jnr, VKI_EINVAL);
- }
- continue;
- }
- /* ok! jnr is waiting to join with jee, and jee is waiting to be
- joined by ... well, any thread. So let's do it! */
-
- /* Copy return value to where joiner wants it. */
- thread_return = VG_(threads)[jnr].joiner_thread_return;
- if (thread_return != NULL) {
- /* CHECK thread_return writable */
- VG_TRACK( pre_mem_write, Vg_CorePThread, jnr,
- "pthread_join: thread_return",
- (Addr)thread_return, sizeof(void*));
-
- *thread_return = VG_(threads)[jee].joinee_retval;
- /* Not really right, since it makes the thread's return value
- appear to be defined even if it isn't. */
- VG_TRACK( post_mem_write, Vg_CorePThread, jnr,
- (Addr)thread_return, sizeof(void*) );
- }
-
- /* Joinee is discarded */
- VG_(threads)[jee].status = VgTs_Empty; /* bye! */
- cleanup_after_thread_exited ( jee, False );
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf,
- "rendezvous with joinee %d. %d resumes, %d exits.",
- jee, jnr, jee );
- print_sched_event(jnr, msg_buf);
- }
-
- VG_TRACK( post_thread_join, jnr, jee );
-
- /* joiner returns with success */
- VG_(threads)[jnr].status = VgTs_Runnable;
- SET_PTHREQ_RETVAL(jnr, 0);
- }
-}
-
-
-/* Nuke all threads other than tid. POSIX specifies that this should
- happen in __NR_exec, and after a __NR_fork() when I am the child,
- as POSIX requires. Also used at process exit time with
- me==VG_INVALID_THREADID */
-void VG_(nuke_all_threads_except) ( ThreadId me )
-{
- ThreadId tid;
- if (0) {
- VG_(printf)("HACK HACK HACK: nuke_all_threads_except\n");
- return;
- }
-
- for (tid = 1; tid < VG_N_THREADS; tid++) {
- if (tid == me
- || VG_(threads)[tid].status == VgTs_Empty)
- continue;
- if (1)
- VG_(printf)(
- "VG_(nuke_all_threads_except): nuking tid %d\n", tid);
- VG_(proxy_delete)(tid, True);
- VG_(threads)[tid].status = VgTs_Empty;
- VG_(threads)[tid].associated_mx = NULL;
- VG_(threads)[tid].associated_cv = NULL;
- VG_(threads)[tid].stack_base = (Addr)NULL;
- VG_(threads)[tid].stack_size = 0;
- cleanup_after_thread_exited( tid, True );
- }
-}
-
-
-/* -----------------------------------------------------------
- Thread CREATION, JOINAGE and CANCELLATION: REQUESTS
- -------------------------------------------------------- */
-
-static
-void do__cleanup_push ( ThreadId tid, CleanupEntry* cu )
-{
- Int sp;
- Char msg_buf[100];
- vg_assert(VG_(is_valid_tid)(tid));
- sp = VG_(threads)[tid].custack_used;
- if (VG_(clo_trace_sched)) {
- switch (cu->type) {
- case VgCt_Function:
- VG_(sprintf)(msg_buf,
- "cleanup_push (fn %p, arg %p) -> slot %d",
- cu->data.function.fn, cu->data.function.arg, sp);
- break;
- case VgCt_Longjmp:
- VG_(sprintf)(msg_buf,
- "cleanup_push (ub %p) -> slot %d",
- cu->data.longjmp.ub, sp);
- break;
- default:
- VG_(sprintf)(msg_buf,
- "cleanup_push (unknown type) -> slot %d",
- sp);
- break;
- }
- print_sched_event(tid, msg_buf);
- }
- vg_assert(sp >= 0 && sp <= VG_N_CLEANUPSTACK);
- if (sp == VG_N_CLEANUPSTACK)
- VG_(core_panic)("do__cleanup_push: VG_N_CLEANUPSTACK is too small."
- " Increase and recompile.");
- VG_(threads)[tid].custack[sp] = *cu;
- sp++;
- VG_(threads)[tid].custack_used = sp;
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-
-static
-void do__cleanup_pop ( ThreadId tid, CleanupEntry* cu )
-{
- Int sp;
- Char msg_buf[100];
- vg_assert(VG_(is_valid_tid)(tid));
- sp = VG_(threads)[tid].custack_used;
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "cleanup_pop from slot %d", sp-1);
- print_sched_event(tid, msg_buf);
- }
- vg_assert(sp >= 0 && sp <= VG_N_CLEANUPSTACK);
- if (sp == 0) {
- SET_PTHREQ_RETVAL(tid, -1);
- return;
- }
- sp--;
- VG_TRACK( pre_mem_write, Vg_CorePThread, tid,
- "cleanup pop", (Addr)cu, sizeof(CleanupEntry) );
- *cu = VG_(threads)[tid].custack[sp];
- VG_TRACK( post_mem_write, Vg_CorePThread, tid,
- (Addr)cu, sizeof(CleanupEntry) );
- VG_(threads)[tid].custack_used = sp;
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-
-static
-void do_pthread_yield ( ThreadId tid )
-{
- Char msg_buf[100];
- vg_assert(VG_(is_valid_tid)(tid));
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "yield");
- print_sched_event(tid, msg_buf);
- }
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-
-static
-void do__testcancel ( ThreadId tid )
-{
- Char msg_buf[100];
- vg_assert(VG_(is_valid_tid)(tid));
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "testcancel");
- print_sched_event(tid, msg_buf);
- }
- if (/* is there a cancellation pending on this thread? */
- VG_(threads)[tid].cancel_pend != NULL
- && /* is this thread accepting cancellations? */
- VG_(threads)[tid].cancel_st) {
- /* Ok, let's do the cancellation. */
- make_thread_jump_to_cancelhdlr ( tid );
- } else {
- /* No, we keep going. */
- SET_PTHREQ_RETVAL(tid, 0);
- }
-}
-
-
-static
-void do__set_cancelstate ( ThreadId tid, Int state )
-{
- Bool old_st;
- Char msg_buf[100];
- vg_assert(VG_(is_valid_tid)(tid));
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "set_cancelstate to %d (%s)", state,
- state==PTHREAD_CANCEL_ENABLE
- ? "ENABLE"
- : (state==PTHREAD_CANCEL_DISABLE ? "DISABLE" : "???"));
- print_sched_event(tid, msg_buf);
- }
- old_st = VG_(threads)[tid].cancel_st;
- if (state == PTHREAD_CANCEL_ENABLE) {
- VG_(threads)[tid].cancel_st = True;
- } else
- if (state == PTHREAD_CANCEL_DISABLE) {
- VG_(threads)[tid].cancel_st = False;
- } else {
- VG_(core_panic)("do__set_cancelstate");
- }
- SET_PTHREQ_RETVAL(tid, old_st ? PTHREAD_CANCEL_ENABLE
- : PTHREAD_CANCEL_DISABLE);
-}
-
-
-static
-void do__set_canceltype ( ThreadId tid, Int type )
-{
- Bool old_ty;
- Char msg_buf[100];
- vg_assert(VG_(is_valid_tid)(tid));
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "set_canceltype to %d (%s)", type,
- type==PTHREAD_CANCEL_ASYNCHRONOUS
- ? "ASYNCHRONOUS"
- : (type==PTHREAD_CANCEL_DEFERRED ? "DEFERRED" : "???"));
- print_sched_event(tid, msg_buf);
- }
- old_ty = VG_(threads)[tid].cancel_ty;
- if (type == PTHREAD_CANCEL_ASYNCHRONOUS) {
- VG_(threads)[tid].cancel_ty = False;
- } else
- if (type == PTHREAD_CANCEL_DEFERRED) {
- VG_(threads)[tid].cancel_ty = True;
- } else {
- VG_(core_panic)("do__set_canceltype");
- }
- SET_PTHREQ_RETVAL(tid, old_ty ? PTHREAD_CANCEL_DEFERRED
- : PTHREAD_CANCEL_ASYNCHRONOUS);
-}
-
-
-/* Set or get the detach state for thread det. */
-static
-void do__set_or_get_detach ( ThreadId tid,
- Int what, ThreadId det )
-{
- Char msg_buf[100];
- /* VG_(printf)("do__set_or_get_detach tid %d what %d det %d\n",
- tid, what, det); */
- vg_assert(VG_(is_valid_tid)(tid));
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "set_or_get_detach %d (%s) for tid %d", what,
- what==0 ? "not-detached" : (
- what==1 ? "detached" : (
- what==2 ? "fetch old value" : "???")),
- det );
- print_sched_event(tid, msg_buf);
- }
-
- if (!VG_(is_valid_tid)(det)) {
- SET_PTHREQ_RETVAL(tid, -1);
- return;
- }
-
- switch (what) {
- case 2: /* get */
- SET_PTHREQ_RETVAL(tid, VG_(threads)[det].detached ? 1 : 0);
- return;
- case 1:
- VG_(threads)[det].detached = True;
- SET_PTHREQ_RETVAL(tid, 0);
- /* wake anyone who was joining on us */
- maybe_rendezvous_joiners_and_joinees();
- return;
- case 0: /* set not detached */
- VG_(threads)[det].detached = False;
- SET_PTHREQ_RETVAL(tid, 0);
- return;
- default:
- VG_(core_panic)("do__set_or_get_detach");
- }
-}
-
-
-static
-void do__set_cancelpend ( ThreadId tid,
- ThreadId cee,
- void (*cancelpend_hdlr)(void*) )
-{
- Char msg_buf[100];
-
- vg_assert(VG_(is_valid_tid)(tid));
- vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
-
- if (!VG_(is_valid_tid)(cee) ||
- VG_(threads)[cee].status == VgTs_WaitJoiner) {
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf,
- "set_cancelpend for invalid tid %d", cee);
- print_sched_event(tid, msg_buf);
- }
- VG_(record_pthread_error)( tid,
- "pthread_cancel: target thread does not exist, or invalid");
- SET_PTHREQ_RETVAL(tid, VKI_ESRCH);
- return;
- }
-
- VG_(threads)[cee].cancel_pend = cancelpend_hdlr;
-
- /* interrupt a pending syscall if asynchronous cancellation
- is enabled for the target thread */
- if (VG_(threads)[cee].cancel_st && !VG_(threads)[cee].cancel_ty) {
- VG_(proxy_abort_syscall)(cee);
- }
-
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf,
- "set_cancelpend (hdlr = %p, set by tid %d)",
- cancelpend_hdlr, tid);
- print_sched_event(cee, msg_buf);
- }
-
- /* Thread doing the cancelling returns with success. */
- SET_PTHREQ_RETVAL(tid, 0);
-
- /* Perhaps we can nuke the cancellee right now? */
- if (!VG_(threads)[cee].cancel_ty || /* if PTHREAD_CANCEL_ASYNCHRONOUS */
- (VG_(threads)[cee].status != VgTs_Runnable &&
- VG_(threads)[cee].status != VgTs_WaitMX)) {
- do__testcancel(cee);
- }
-}
-
-
-static
-void do_pthread_join ( ThreadId tid,
- ThreadId jee, void** thread_return )
-{
- Char msg_buf[100];
- ThreadId i;
- /* jee, the joinee, is the thread specified as an arg in thread
- tid's call to pthread_join. So tid is the join-er. */
- vg_assert(VG_(is_valid_tid)(tid));
- vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
-
- if (jee == tid) {
- VG_(record_pthread_error)( tid,
- "pthread_join: attempt to join to self");
- SET_PTHREQ_RETVAL(tid, EDEADLK); /* libc constant, not a kernel one */
- vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
- return;
- }
-
- /* Flush any completed pairs, so as to make sure what we're looking
- at is up-to-date. */
- maybe_rendezvous_joiners_and_joinees();
-
- /* Is this a sane request? */
- if ( ! VG_(is_valid_tid)(jee) ||
- VG_(threads)[jee].detached) {
- /* Invalid thread to join to. */
- VG_(record_pthread_error)( tid,
- "pthread_join: target thread does not exist, invalid, or detached");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- /* Is anyone else already in a join-wait for jee? */
- for (i = 1; i < VG_N_THREADS; i++) {
- if (i == tid) continue;
- if (VG_(threads)[i].status == VgTs_WaitJoinee
- && VG_(threads)[i].joiner_jee_tid == jee) {
- /* Someone already did join on this thread */
- VG_(record_pthread_error)( tid,
- "pthread_join: another thread already "
- "in join-wait for target thread");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
- return;
- }
- }
-
- if(VG_(threads)[tid].cancel_pend != NULL &&
- VG_(threads)[tid].cancel_st) {
- make_thread_jump_to_cancelhdlr ( tid );
- } else {
- /* Mark this thread as waiting for the joinee. */
- VG_(threads)[tid].status = VgTs_WaitJoinee;
- VG_(threads)[tid].joiner_thread_return = thread_return;
- VG_(threads)[tid].joiner_jee_tid = jee;
-
- /* Look for matching joiners and joinees and do the right thing. */
- maybe_rendezvous_joiners_and_joinees();
-
-      /* Return value is irrelevant since this thread becomes
-         non-runnable.  maybe_resume_joiner() will cause it to return the
-         right value when it resumes. */
-
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf,
- "wait for joinee %d (may already be ready)", jee);
- print_sched_event(tid, msg_buf);
- }
- }
-}
-
-
-/* ( void* ): calling thread waits for joiner and returns the void* to
- it. This is one of two ways in which a thread can finally exit --
- the other is do__quit. */
-static
-void do__wait_joiner ( ThreadId tid, void* retval )
-{
- Char msg_buf[100];
- vg_assert(VG_(is_valid_tid)(tid));
- vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf,
- "do__wait_joiner(retval = %p) (non-detached thread exit)", retval);
- print_sched_event(tid, msg_buf);
- }
- VG_(threads)[tid].status = VgTs_WaitJoiner;
- VG_(threads)[tid].joinee_retval = retval;
- maybe_rendezvous_joiners_and_joinees();
-}
-
-
-/* ( no-args ): calling thread disappears from the system forever.
- Reclaim resources. */
-static
-void do__quit ( ThreadId tid )
-{
- Char msg_buf[100];
- vg_assert(VG_(is_valid_tid)(tid));
- vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
- VG_(threads)[tid].status = VgTs_Empty; /* bye! */
- cleanup_after_thread_exited ( tid, False );
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "do__quit (detached thread exit)");
- print_sched_event(tid, msg_buf);
- }
- maybe_rendezvous_joiners_and_joinees();
- /* Return value is irrelevant; this thread will not get
- rescheduled. */
-}
-
-
-/* Should never be entered. If it is, will be on the simulated CPU. */
-static
-void do__apply_in_new_thread_bogusRA ( void )
-{
- VG_(core_panic)("do__apply_in_new_thread_bogusRA");
-}
-
-/* (Fn, Arg): Create a new thread and run Fn applied to Arg in it. Fn
- MUST NOT return -- ever. Eventually it will do either __QUIT or
- __WAIT_JOINER. Return the child tid to the parent. */
-static
-void do__apply_in_new_thread ( ThreadId parent_tid,
- void* (*fn)(void *),
- void* arg,
- StackInfo *si )
-{
- Addr new_stack;
- UInt new_stk_szb;
- ThreadId tid;
- Char msg_buf[100];
-
- /* Paranoia ... */
- vg_assert(sizeof(pthread_t) == sizeof(UInt));
-
- vg_assert(VG_(threads)[parent_tid].status != VgTs_Empty);
-
- tid = vg_alloc_ThreadState();
-
- /* If we've created the main thread's tid, we're in deep trouble :) */
- vg_assert(tid != 1);
- vg_assert(is_valid_or_empty_tid(tid));
-
- /* do this early, before the child gets any memory writes */
- VG_TRACK ( post_thread_create, parent_tid, tid );
-
- /* Create new thread with default attrs:
- deferred cancellation, not detached
- */
- mostly_clear_thread_record(tid);
- VG_(threads)[tid].status = VgTs_Runnable;
-
- /* Copy the parent's CPU state into the child's. */
- VG_(threads)[tid].arch.vex = VG_(threads)[parent_tid].arch.vex;
- VG_(threads)[tid].arch.vex_shadow = VG_(threads)[parent_tid].arch.vex_shadow;
- /* and let setup_child do any needed target-specific setup. */
- VGA_(setup_child)( &VG_(threads)[tid].arch,
- &VG_(threads)[parent_tid].arch );
-
- /* Consider allocating the child a stack, if the one it already has
- is inadequate. */
- new_stk_szb = PGROUNDUP(si->size + VG_AR_CLIENT_STACKBASE_REDZONE_SZB + si->guardsize);
-
- VG_(threads)[tid].stack_guard_size = si->guardsize;
-
- if (new_stk_szb > VG_(threads)[tid].stack_size) {
- /* Again, for good measure :) We definitely don't want to be
- allocating a stack for the main thread. */
- vg_assert(tid != 1);
- if (VG_(threads)[tid].stack_size > 0)
- VG_(client_free)(VG_(threads)[tid].stack_base);
- new_stack = VG_(client_alloc)(0, new_stk_szb,
- VKI_PROT_READ|VKI_PROT_WRITE|VKI_PROT_EXEC,
- SF_STACK);
- // Given the low number of threads Valgrind can handle, stack
- // allocation should pretty much always succeed, so having an
- // assertion here isn't too bad. However, probably better would be
- // this:
- //
- // if (0 == new_stack)
- // SET_PTHREQ_RETVAL(parent_tid, -VKI_EAGAIN);
- //
- vg_assert(0 != new_stack);
- VG_(threads)[tid].stack_base = new_stack;
- VG_(threads)[tid].stack_size = new_stk_szb;
- VG_(threads)[tid].stack_highest_word
- = new_stack + new_stk_szb
-           - VG_AR_CLIENT_STACKBASE_REDZONE_SZB; /* - sizeof(UWord) ??? */
- }
-
- /* Having got memory to hold the thread's stack:
- - set %esp as base + size
- - mark everything below %esp inaccessible
- - mark redzone at stack end inaccessible
- */
- SET_PTHREQ_ESP(tid, VG_(threads)[tid].stack_base
- + VG_(threads)[tid].stack_size
- - VG_AR_CLIENT_STACKBASE_REDZONE_SZB);
-
- VG_TRACK ( die_mem_stack, VG_(threads)[tid].stack_base,
- VG_(threads)[tid].stack_size
- - VG_AR_CLIENT_STACKBASE_REDZONE_SZB);
- VG_TRACK ( ban_mem_stack, STACK_PTR(VG_(threads)[tid].arch),
- VG_AR_CLIENT_STACKBASE_REDZONE_SZB );
-
- VGA_(thread_initial_stack)(tid, (UWord)arg,
- (Addr)&do__apply_in_new_thread_bogusRA);
-
- /* this is where we start */
- INSTR_PTR(VG_(threads)[tid].arch) = (UWord)fn;
-
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "new thread, created by %d", parent_tid );
- print_sched_event(tid, msg_buf);
- }
-
- /* Start the thread with all signals blocked; it's up to the client
- code to set the right signal mask when it's ready. */
- VG_(sigfillset)(&VG_(threads)[tid].sig_mask);
-
- /* Now that the signal mask is set up, create a proxy LWP for this thread */
- VG_(proxy_create)(tid);
-
- /* Set the proxy's signal mask */
- VG_(proxy_setsigmask)(tid);
-
- /* return child's tid to parent */
- SET_PTHREQ_RETVAL(parent_tid, tid); /* success */
-}
-
-
-/* -----------------------------------------------------------
- MUTEXes
- -------------------------------------------------------- */
-
-/* vg_pthread_mutex_t is defined in core.h.
-
- The initializers zero everything, except possibly the fourth word,
- which in vg_pthread_mutex_t is the __vg_m_kind field. It gets set to one
- of PTHREAD_MUTEX_{TIMED,RECURSIVE,ERRORCHECK,ADAPTIVE}_NP
-
- How we use it:
-
- __vg_m_kind never changes and indicates whether or not it is recursive.
-
- __vg_m_count indicates the lock count; if 0, the mutex is not owned by
- anybody.
-
- __vg_m_owner has a ThreadId value stuffed into it. We carefully arrange
- that ThreadId == 0 is invalid (VG_INVALID_THREADID), so that
- statically initialised mutexes correctly appear
- to belong to nobody.
-
-   In summary, a not-in-use mutex is distinguished by having __vg_m_owner
- == 0 (VG_INVALID_THREADID) and __vg_m_count == 0 too. If one of those
- conditions holds, the other should too.
-
-   There is no linked list of threads waiting for this mutex.  Instead
-   a thread in WaitMX state points at the mutex with its associated_mx
-   field.  This makes _unlock() inefficient, but makes it simple to
-   implement the right semantics vis-a-vis signals.
-
- We don't have to deal with mutex initialisation; the client side
- deals with that for us.
-*/
-
-/* Helper fns ... */
-static
-void do_pthread_mutex_timedlock_TIMEOUT ( ThreadId tid )
-{
- Char msg_buf[100];
- vg_pthread_mutex_t* mx;
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_WaitMX
- && VG_(threads)[tid].awaken_at != 0xFFFFFFFF);
- mx = VG_(threads)[tid].associated_mx;
- vg_assert(mx != NULL);
-
- VG_(threads)[tid].status = VgTs_Runnable;
-   SET_PTHREQ_RETVAL(tid, ETIMEDOUT); /* pthread_mutex_timedlock return value */
- VG_(threads)[tid].associated_mx = NULL;
-
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "pthread_mutex_timedlock mx %p: TIMEOUT", mx);
- print_pthread_event(tid, msg_buf);
- }
-}
-
-
-static
-void release_one_thread_waiting_on_mutex ( vg_pthread_mutex_t* mutex,
- Char* caller )
-{
- Int i;
- Char msg_buf[100];
-
- /* Find some arbitrary thread waiting on this mutex, and make it
- runnable. If none are waiting, mark the mutex as not held. */
- for (i = 1; i < VG_N_THREADS; i++) {
- if (VG_(threads)[i].status == VgTs_Empty)
- continue;
- if (VG_(threads)[i].status == VgTs_WaitMX
- && VG_(threads)[i].associated_mx == mutex)
- break;
- }
-
- VG_TRACK( post_mutex_unlock, (ThreadId)(UWord)mutex->__vg_m_owner, mutex );
-
- vg_assert(i <= VG_N_THREADS);
- if (i == VG_N_THREADS) {
- /* Nobody else is waiting on it. */
- mutex->__vg_m_count = 0;
- mutex->__vg_m_owner = VG_INVALID_THREADID;
- } else {
- /* Notionally transfer the hold to thread i, whose
- pthread_mutex_lock() call now returns with 0 (success). */
- /* The .count is already == 1. */
- vg_assert(VG_(threads)[i].associated_mx == mutex);
- mutex->__vg_m_owner = (/*_pthread_descr*/void*)(UWord)i;
- VG_(threads)[i].status = VgTs_Runnable;
- VG_(threads)[i].associated_mx = NULL;
- /* m_edx already holds pth_mx_lock() success (0) */
-
- VG_TRACK( post_mutex_lock, (ThreadId)i, mutex);
-
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "%s mx %p: RESUME",
- caller, mutex );
- print_pthread_event(i, msg_buf);
- }
- }
-}
-
-
-static
-void do_pthread_mutex_lock( ThreadId tid,
- Bool is_trylock,
- vg_pthread_mutex_t* mutex,
- UInt ms_end )
-{
- Char msg_buf[100];
- Char* caller
- = is_trylock ? "pthread_mutex_trylock"
- : "pthread_mutex_lock ";
-
- /* If ms_end == 0xFFFFFFFF, wait forever (no timeout). Otherwise,
- ms_end is the ending millisecond. */
-
- if (VG_(clo_trace_pthread_level) >= 2) {
- VG_(sprintf)(msg_buf, "%s mx %p ...", caller, mutex );
- print_pthread_event(tid, msg_buf);
- }
-
- /* Paranoia ... */
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- /* POSIX doesn't mandate this, but for sanity ... */
- if (mutex == NULL) {
- VG_(record_pthread_error)( tid,
- "pthread_mutex_lock/trylock: mutex is NULL");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- /* More paranoia ... */
- switch (mutex->__vg_m_kind) {
-# ifndef GLIBC_2_1
- case PTHREAD_MUTEX_TIMED_NP:
- case PTHREAD_MUTEX_ADAPTIVE_NP:
-# endif
-# ifdef GLIBC_2_1
- case PTHREAD_MUTEX_FAST_NP:
-# endif
- case PTHREAD_MUTEX_RECURSIVE_NP:
- case PTHREAD_MUTEX_ERRORCHECK_NP:
- if (mutex->__vg_m_count >= 0) break;
- /* else fall thru */
- default:
- VG_(record_pthread_error)( tid,
- "pthread_mutex_lock/trylock: mutex is invalid");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- if (mutex->__vg_m_count > 0) {
- if (!VG_(is_valid_tid)((ThreadId)(UWord)mutex->__vg_m_owner)) {
- VG_(record_pthread_error)( tid,
- "pthread_mutex_lock/trylock: mutex has invalid owner");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- /* Someone has it already. */
- if ((ThreadId)(UWord)mutex->__vg_m_owner == tid && ms_end == 0xFFFFFFFF) {
- /* It's locked -- by me! */
- if (mutex->__vg_m_kind == PTHREAD_MUTEX_RECURSIVE_NP) {
- /* return 0 (success). */
- mutex->__vg_m_count++;
- SET_PTHREQ_RETVAL(tid, 0);
- if (0)
- VG_(printf)("!!!!!! tid %d, mx %p -> locked %d\n",
- tid, mutex, mutex->__vg_m_count);
- return;
- } else {
- if (is_trylock)
- SET_PTHREQ_RETVAL(tid, EBUSY);
- else
- SET_PTHREQ_RETVAL(tid, EDEADLK);
- return;
- }
- } else {
- /* Someone else has it; we have to wait. Mark ourselves
- thusly. */
- /* GUARD: __vg_m_count > 0 && __vg_m_owner is valid */
- if (is_trylock) {
- /* caller is polling; so return immediately. */
- SET_PTHREQ_RETVAL(tid, EBUSY);
- } else {
- VG_TRACK ( pre_mutex_lock, tid, mutex );
-
- VG_(threads)[tid].status = VgTs_WaitMX;
- VG_(threads)[tid].associated_mx = mutex;
- VG_(threads)[tid].awaken_at = ms_end;
- if (ms_end != 0xFFFFFFFF)
- add_timeout(tid, ms_end);
- SET_PTHREQ_RETVAL(tid, 0); /* pth_mx_lock success value */
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "%s mx %p: BLOCK",
- caller, mutex );
- print_pthread_event(tid, msg_buf);
- }
- }
- return;
- }
-
- } else {
- /* Nobody owns it. Sanity check ... */
- vg_assert(mutex->__vg_m_owner == VG_INVALID_THREADID);
-
- VG_TRACK ( pre_mutex_lock, tid, mutex );
-
- /* We get it! [for the first time]. */
- mutex->__vg_m_count = 1;
- mutex->__vg_m_owner = (/*_pthread_descr*/void*)(UWord)tid;
-
- /* return 0 (success). */
- SET_PTHREQ_RETVAL(tid, 0);
-
- VG_TRACK( post_mutex_lock, tid, mutex);
- }
-}
-
-
-static
-void do_pthread_mutex_unlock ( ThreadId tid,
- vg_pthread_mutex_t* mutex )
-{
- Char msg_buf[100];
-
- if (VG_(clo_trace_pthread_level) >= 2) {
- VG_(sprintf)(msg_buf, "pthread_mutex_unlock mx %p ...", mutex );
- print_pthread_event(tid, msg_buf);
- }
-
- /* Paranoia ... */
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- if (mutex == NULL) {
- VG_(record_pthread_error)( tid,
- "pthread_mutex_unlock: mutex is NULL");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- /* More paranoia ... */
- switch (mutex->__vg_m_kind) {
-# ifndef GLIBC_2_1
- case PTHREAD_MUTEX_TIMED_NP:
- case PTHREAD_MUTEX_ADAPTIVE_NP:
-# endif
-# ifdef GLIBC_2_1
- case PTHREAD_MUTEX_FAST_NP:
-# endif
- case PTHREAD_MUTEX_RECURSIVE_NP:
- case PTHREAD_MUTEX_ERRORCHECK_NP:
- if (mutex->__vg_m_count >= 0) break;
- /* else fall thru */
- default:
- VG_(record_pthread_error)( tid,
- "pthread_mutex_unlock: mutex is invalid");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- /* Barf if we don't currently hold the mutex. */
- if (mutex->__vg_m_count == 0) {
- /* nobody holds it */
- VG_(record_pthread_error)( tid,
- "pthread_mutex_unlock: mutex is not locked");
- SET_PTHREQ_RETVAL(tid, EPERM);
- return;
- }
-
- if ((ThreadId)(UWord)mutex->__vg_m_owner != tid) {
- /* we don't hold it */
- VG_(record_pthread_error)( tid,
- "pthread_mutex_unlock: mutex is locked by a different thread");
- SET_PTHREQ_RETVAL(tid, EPERM);
- return;
- }
-
- /* If it's a multiply-locked recursive mutex, just decrement the
- lock count and return. */
- if (mutex->__vg_m_count > 1) {
- vg_assert(mutex->__vg_m_kind == PTHREAD_MUTEX_RECURSIVE_NP);
- mutex->__vg_m_count --;
- SET_PTHREQ_RETVAL(tid, 0); /* success */
- return;
- }
-
- /* Now we're sure it is locked exactly once, and by the thread who
- is now doing an unlock on it. */
- vg_assert(mutex->__vg_m_count == 1);
- vg_assert((ThreadId)(UWord)mutex->__vg_m_owner == tid);
-
- /* Release at max one thread waiting on this mutex. */
- release_one_thread_waiting_on_mutex ( mutex, "pthread_mutex_lock" );
-
- /* Our (tid's) pth_unlock() returns with 0 (success). */
- SET_PTHREQ_RETVAL(tid, 0); /* Success. */
-}
-
-
-/* -----------------------------------------------------------
- CONDITION VARIABLES
- -------------------------------------------------------- */
-
-/* The relevant type (vg_pthread_cond_t) is in core.h.
-
- We don't use any fields of vg_pthread_cond_t for anything at all.
- Only the identity of the CVs is important. (Actually, we initialise
- __vg_c_waiting in pthread_cond_init() to VG_INVALID_THREADID.)
-
- Linux pthreads supports no attributes on condition variables, so we
- don't need to think too hard there. */
-
-
-static
-void do_pthread_cond_timedwait_TIMEOUT ( ThreadId tid )
-{
- Char msg_buf[100];
- vg_pthread_mutex_t* mx;
- vg_pthread_cond_t* cv;
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_WaitCV
- && VG_(threads)[tid].awaken_at != 0xFFFFFFFF);
- mx = VG_(threads)[tid].associated_mx;
- vg_assert(mx != NULL);
- cv = VG_(threads)[tid].associated_cv;
- vg_assert(cv != NULL);
-
- if (mx->__vg_m_owner == VG_INVALID_THREADID) {
- /* Currently unheld; hand it out to thread tid. */
- vg_assert(mx->__vg_m_count == 0);
- VG_(threads)[tid].status = VgTs_Runnable;
-      SET_PTHREQ_RETVAL(tid, ETIMEDOUT); /* pthread_cond_timedwait return value */
- VG_(threads)[tid].associated_cv = NULL;
- VG_(threads)[tid].associated_mx = NULL;
- mx->__vg_m_owner = (/*_pthread_descr*/void*)(UWord)tid;
- mx->__vg_m_count = 1;
-
- VG_TRACK( post_mutex_lock, tid, mx );
-
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf,
- "pthread_cond_timedwait cv %p: TIMEOUT with mx %p",
- cv, mx );
- print_pthread_event(tid, msg_buf);
- }
- } else {
- /* Currently held. Make thread tid be blocked on it. */
- vg_assert(mx->__vg_m_count > 0);
- VG_TRACK( pre_mutex_lock, tid, mx );
-
- VG_(threads)[tid].status = VgTs_WaitMX;
-      SET_PTHREQ_RETVAL(tid, ETIMEDOUT); /* pthread_cond_timedwait return value */
- VG_(threads)[tid].associated_cv = NULL;
- VG_(threads)[tid].associated_mx = mx;
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf,
- "pthread_cond_timedwait cv %p: TIMEOUT -> BLOCK for mx %p",
- cv, mx );
- print_pthread_event(tid, msg_buf);
- }
- }
-}
-
-
-static
-void release_N_threads_waiting_on_cond ( vg_pthread_cond_t* cond,
- Int n_to_release,
- Char* caller )
-{
- Int i;
- Char msg_buf[100];
- vg_pthread_mutex_t* mx;
-
- while (True) {
- if (n_to_release == 0)
- return;
-
- /* Find a thread waiting on this CV. */
- for (i = 1; i < VG_N_THREADS; i++) {
- if (VG_(threads)[i].status == VgTs_Empty)
- continue;
- if (VG_(threads)[i].status == VgTs_WaitCV
- && VG_(threads)[i].associated_cv == cond)
- break;
- }
- vg_assert(i <= VG_N_THREADS);
-
- if (i == VG_N_THREADS) {
- /* Nobody else is waiting on it. */
- return;
- }
-
- mx = VG_(threads)[i].associated_mx;
- vg_assert(mx != NULL);
-
- VG_TRACK( pre_mutex_lock, i, mx );
-
- if (mx->__vg_m_owner == VG_INVALID_THREADID) {
- /* Currently unheld; hand it out to thread i. */
- vg_assert(mx->__vg_m_count == 0);
- VG_(threads)[i].status = VgTs_Runnable;
- VG_(threads)[i].associated_cv = NULL;
- VG_(threads)[i].associated_mx = NULL;
- mx->__vg_m_owner = (/*_pthread_descr*/void*)(UWord)i;
- mx->__vg_m_count = 1;
- /* .m_edx already holds pth_cond_wait success value (0) */
-
- VG_TRACK( post_mutex_lock, i, mx );
-
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "%s cv %p: RESUME with mx %p",
- caller, cond, mx );
- print_pthread_event(i, msg_buf);
- }
-
- } else {
- /* Currently held. Make thread i be blocked on it. */
- vg_assert(mx->__vg_m_count > 0);
- VG_(threads)[i].status = VgTs_WaitMX;
- VG_(threads)[i].associated_cv = NULL;
- VG_(threads)[i].associated_mx = mx;
- SET_PTHREQ_RETVAL(i, 0); /* pth_cond_wait success value */
-
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "%s cv %p: BLOCK for mx %p",
- caller, cond, mx );
- print_pthread_event(i, msg_buf);
- }
-
- }
-
- n_to_release--;
- }
-}
-
-
-static
-void do_pthread_cond_wait ( ThreadId tid,
- vg_pthread_cond_t *cond,
- vg_pthread_mutex_t *mutex,
- UInt ms_end )
-{
- Char msg_buf[100];
-
- /* If ms_end == 0xFFFFFFFF, wait forever (no timeout). Otherwise,
- ms_end is the ending millisecond. */
-
- /* pre: mutex should be a valid mutex and owned by tid. */
- if (VG_(clo_trace_pthread_level) >= 2) {
- VG_(sprintf)(msg_buf, "pthread_cond_wait cv %p, mx %p, end %d ...",
- cond, mutex, ms_end );
- print_pthread_event(tid, msg_buf);
- }
-
- /* Paranoia ... */
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- if (mutex == NULL) {
- VG_(record_pthread_error)( tid,
- "pthread_cond_wait/timedwait: mutex is NULL");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- if (cond == NULL) {
- VG_(record_pthread_error)( tid,
- "pthread_cond_wait/timedwait: cond is NULL");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- /* More paranoia ... */
- switch (mutex->__vg_m_kind) {
-# ifndef GLIBC_2_1
- case PTHREAD_MUTEX_TIMED_NP:
- case PTHREAD_MUTEX_ADAPTIVE_NP:
-# endif
-# ifdef GLIBC_2_1
- case PTHREAD_MUTEX_FAST_NP:
-# endif
- case PTHREAD_MUTEX_RECURSIVE_NP:
- case PTHREAD_MUTEX_ERRORCHECK_NP:
- if (mutex->__vg_m_count >= 0) break;
- /* else fall thru */
- default:
- VG_(record_pthread_error)( tid,
- "pthread_cond_wait/timedwait: mutex is invalid");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- /* Barf if we don't currently hold the mutex. */
- if (mutex->__vg_m_count == 0 /* nobody holds it */) {
- VG_(record_pthread_error)( tid,
- "pthread_cond_wait/timedwait: mutex is unlocked");
- SET_PTHREQ_RETVAL(tid, VKI_EPERM);
- return;
- }
-
- if ((ThreadId)(UWord)mutex->__vg_m_owner != tid /* we don't hold it */) {
- VG_(record_pthread_error)( tid,
- "pthread_cond_wait/timedwait: mutex is locked by another thread");
- SET_PTHREQ_RETVAL(tid, VKI_EPERM);
- return;
- }
-
- if(VG_(threads)[tid].cancel_pend != NULL &&
- VG_(threads)[tid].cancel_st) {
- make_thread_jump_to_cancelhdlr ( tid );
- } else {
- /* Queue ourselves on the condition. */
- VG_(threads)[tid].status = VgTs_WaitCV;
- VG_(threads)[tid].associated_cv = cond;
- VG_(threads)[tid].associated_mx = mutex;
- VG_(threads)[tid].awaken_at = ms_end;
- if (ms_end != 0xFFFFFFFF)
- add_timeout(tid, ms_end);
-
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf,
- "pthread_cond_wait cv %p, mx %p: BLOCK",
- cond, mutex );
- print_pthread_event(tid, msg_buf);
- }
-
- /* Release the mutex. */
- release_one_thread_waiting_on_mutex ( mutex, "pthread_cond_wait " );
- }
-}
-
-
-static
-void do_pthread_cond_signal_or_broadcast ( ThreadId tid,
- Bool broadcast,
- vg_pthread_cond_t *cond )
-{
- Char msg_buf[100];
- Char* caller
- = broadcast ? "pthread_cond_broadcast"
- : "pthread_cond_signal ";
-
- if (VG_(clo_trace_pthread_level) >= 2) {
- VG_(sprintf)(msg_buf, "%s cv %p ...",
- caller, cond );
- print_pthread_event(tid, msg_buf);
- }
-
- /* Paranoia ... */
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- if (cond == NULL) {
- VG_(record_pthread_error)( tid,
- "pthread_cond_signal/broadcast: cond is NULL");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- release_N_threads_waiting_on_cond (
- cond,
- broadcast ? VG_N_THREADS : 1,
- caller
- );
-
- SET_PTHREQ_RETVAL(tid, 0); /* success */
-}
-
-
-/* -----------------------------------------------------------
- THREAD SPECIFIC DATA
- -------------------------------------------------------- */
-
-static __inline__
-Bool is_valid_key ( ThreadKey k )
-{
- /* k unsigned; hence no < 0 check */
- if (k >= VG_N_THREAD_KEYS) return False;
- if (!vg_thread_keys[k].inuse) return False;
- return True;
-}
-
-
-/* Return in %EDX a value of 1 if the key is valid, else 0. */
-static
-void do_pthread_key_validate ( ThreadId tid,
- pthread_key_t key )
-{
- Char msg_buf[100];
-
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "pthread_key_validate key %p",
- key );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(sizeof(pthread_key_t) == sizeof(ThreadKey));
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- if (is_valid_key((ThreadKey)key)) {
- SET_PTHREQ_RETVAL(tid, 1);
- } else {
- SET_PTHREQ_RETVAL(tid, 0);
- }
-}
-
-
-static
-void do_pthread_key_create ( ThreadId tid,
- pthread_key_t* key,
- void (*destructor)(void*) )
-{
- Int i;
- Char msg_buf[100];
-
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "pthread_key_create *key %p, destr %p",
- key, destructor );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(sizeof(pthread_key_t) == sizeof(ThreadKey));
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- for (i = 0; i < VG_N_THREAD_KEYS; i++)
- if (!vg_thread_keys[i].inuse)
- break;
-
- if (i == VG_N_THREAD_KEYS) {
- VG_(message)(Vg_UserMsg, "pthread_key_create() asked for too many keys (more than %d): increase VG_N_THREAD_KEYS and recompile Valgrind.",
- VG_N_THREAD_KEYS);
- SET_PTHREQ_RETVAL(tid, EAGAIN);
- return;
- }
-
- vg_thread_keys[i].inuse = True;
- vg_thread_keys[i].destructor = destructor;
-
- /* check key for addressibility */
- VG_TRACK( pre_mem_write, Vg_CorePThread, tid, "pthread_key_create: key",
- (Addr)key, sizeof(pthread_key_t));
- *key = i;
- VG_TRACK( post_mem_write, Vg_CorePThread, tid,
- (Addr)key, sizeof(pthread_key_t) );
-
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-
-static
-void do_pthread_key_delete ( ThreadId tid, pthread_key_t key )
-{
- Char msg_buf[100];
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "pthread_key_delete key %d",
- key );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- if (!is_valid_key(key)) {
- VG_(record_pthread_error)( tid,
- "pthread_key_delete: key is invalid");
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- vg_thread_keys[key].inuse = False;
- vg_thread_keys[key].destructor = NULL;
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-
-/* Get the .specific_ptr for a thread. Return 1 if the thread-slot
- isn't in use, so that client-space can scan all thread slots. 1
- cannot be confused with NULL or a legitimately-aligned specific_ptr
- value. */
-static
-void do_pthread_getspecific_ptr ( ThreadId tid )
-{
- void** specifics_ptr;
- Char msg_buf[100];
-
- if (VG_(clo_trace_pthread_level) >= 2) {
- VG_(sprintf)(msg_buf, "pthread_getspecific_ptr" );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(is_valid_or_empty_tid(tid));
-
- if (VG_(threads)[tid].status == VgTs_Empty) {
- SET_PTHREQ_RETVAL(tid, 1);
- return;
- }
-
- specifics_ptr = VG_(threads)[tid].specifics_ptr;
- vg_assert(specifics_ptr == NULL || IS_WORD_ALIGNED(specifics_ptr));
-
- SET_PTHREQ_RETVAL(tid, (UWord)specifics_ptr);
-}
-
-
-static
-void do_pthread_setspecific_ptr ( ThreadId tid, void** ptr )
-{
- Char msg_buf[100];
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf, "pthread_setspecific_ptr ptr %p",
- ptr );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- VG_(threads)[tid].specifics_ptr = ptr;
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-
-/* Helper for calling destructors at thread exit. If key is valid,
- copy the thread's specific value into cu->arg and put the *key*'s
- destructor fn address in cu->fn. Then return 0 to the caller.
- Otherwise return non-zero to the caller. */
-static
-void do__get_key_destr_and_spec ( ThreadId tid,
- pthread_key_t key,
- CleanupEntry* cu )
-{
- Char msg_buf[100];
- if (VG_(clo_trace_pthread_level) >= 2) {
- VG_(sprintf)(msg_buf,
- "get_key_destr_and_arg (key = %d)", key );
- print_pthread_event(tid, msg_buf);
- }
- vg_assert(VG_(is_valid_tid)(tid));
- vg_assert(key >= 0 && key < VG_N_THREAD_KEYS);
-
- if (!vg_thread_keys[key].inuse) {
- SET_PTHREQ_RETVAL(tid, -1);
- return;
- }
- VG_TRACK( pre_mem_write, Vg_CorePThread, tid, "get_key_destr_and_spec: cu",
- (Addr)cu, sizeof(CleanupEntry) );
-
- cu->type = VgCt_Function;
- cu->data.function.fn = vg_thread_keys[key].destructor;
- if (VG_(threads)[tid].specifics_ptr == NULL) {
- cu->data.function.arg = NULL;
- } else {
- VG_TRACK( pre_mem_read, Vg_CorePThread, tid,
- "get_key_destr_and_spec: key",
- (Addr)(&VG_(threads)[tid].specifics_ptr[key]),
- sizeof(void*) );
- cu->data.function.arg = VG_(threads)[tid].specifics_ptr[key];
- }
-
- VG_TRACK( post_mem_write, Vg_CorePThread, tid,
- (Addr)cu, sizeof(CleanupEntry) );
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-
-/* ---------------------------------------------------
- SIGNALS
- ------------------------------------------------ */
-
-/* See comment in vg_libthread.c:pthread_sigmask() regarding
- deliberate confusion of types sigset_t and vki_sigset_t. Return 0
- for OK and 1 for some kind of addressing error, which the
- vg_libpthread.c routine turns into return values 0 and EFAULT
- respectively. */
-static
-void do_pthread_sigmask ( ThreadId tid,
- Int vki_how,
- vki_sigset_t* newmask,
- vki_sigset_t* oldmask )
-{
- Char msg_buf[100];
- if (VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf,
- "pthread_sigmask vki_how %d, newmask %p, oldmask %p",
- vki_how, newmask, oldmask );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- if (newmask)
- VG_TRACK( pre_mem_read, Vg_CorePThread, tid, "pthread_sigmask: newmask",
- (Addr)newmask, sizeof(vki_sigset_t));
- if (oldmask)
- VG_TRACK( pre_mem_write, Vg_CorePThread, tid, "pthread_sigmask: oldmask",
- (Addr)oldmask, sizeof(vki_sigset_t));
-
- VG_(do_pthread_sigmask_SCSS_upd) ( tid, vki_how, newmask, oldmask );
-
- if (oldmask)
- VG_TRACK( post_mem_write, Vg_CorePThread, tid,
- (Addr)oldmask, sizeof(vki_sigset_t) );
-
- /* Success. */
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-
-static
-void do_pthread_kill ( ThreadId tid, /* me */
- ThreadId thread, /* thread to signal */
- Int sig )
-{
- ThreadState* tst;
- Char msg_buf[100];
-
- if (VG_(clo_trace_signals) || VG_(clo_trace_pthread_level) >= 1) {
- VG_(sprintf)(msg_buf,
- "pthread_kill thread %d, signo %d",
- thread, sig );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- if (!VG_(is_valid_tid)(thread)) {
- VG_(record_pthread_error)( tid,
- "pthread_kill: invalid target thread");
- SET_PTHREQ_RETVAL(tid, VKI_ESRCH);
- return;
- }
-
- if (sig == 0) {
- /* OK, signal 0 is just for testing */
- SET_PTHREQ_RETVAL(tid, 0);
- return;
- }
-
- if (sig < 1 || sig > _VKI_NSIG) {
- SET_PTHREQ_RETVAL(tid, VKI_EINVAL);
- return;
- }
-
- tst = VG_(get_ThreadState)(thread);
- vg_assert(NULL != tst->proxy);
- VG_(proxy_sendsig)(tid/*from*/, thread/*to*/, sig);
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-
-/* -----------------------------------------------------------
- FORK HANDLERS.
- -------------------------------------------------------- */
-
-static
-void do__set_fhstack_used ( ThreadId tid, Int n )
-{
- Char msg_buf[100];
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "set_fhstack_used to %d", n );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- if (n >= 0 && n < VG_N_FORKHANDLERSTACK) {
- vg_fhstack_used = n;
- SET_PTHREQ_RETVAL(tid, 0);
- } else {
- SET_PTHREQ_RETVAL(tid, -1);
- }
-}
-
-
-static
-void do__get_fhstack_used ( ThreadId tid )
-{
- Int n;
- Char msg_buf[100];
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "get_fhstack_used" );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- n = vg_fhstack_used;
- vg_assert(n >= 0 && n < VG_N_FORKHANDLERSTACK);
- SET_PTHREQ_RETVAL(tid, n);
-}
-
-static
-void do__set_fhstack_entry ( ThreadId tid, Int n, ForkHandlerEntry* fh )
-{
- Char msg_buf[100];
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "set_fhstack_entry %d to %p", n, fh );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
- VG_TRACK( pre_mem_read, Vg_CorePThread, tid,
- "pthread_atfork: prepare/parent/child",
- (Addr)fh, sizeof(ForkHandlerEntry));
-
- if (n < 0 || n >= VG_N_FORKHANDLERSTACK) {
- SET_PTHREQ_RETVAL(tid, -1);
- return;
- }
-
- vg_fhstack[n] = *fh;
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-
-static
-void do__get_fhstack_entry ( ThreadId tid, Int n, /*OUT*/
- ForkHandlerEntry* fh )
-{
- Char msg_buf[100];
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "get_fhstack_entry %d", n );
- print_pthread_event(tid, msg_buf);
- }
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
- VG_TRACK( pre_mem_write, Vg_CorePThread, tid, "fork: prepare/parent/child",
- (Addr)fh, sizeof(ForkHandlerEntry));
-
- if (n < 0 || n >= VG_N_FORKHANDLERSTACK) {
- SET_PTHREQ_RETVAL(tid, -1);
- return;
- }
-
- *fh = vg_fhstack[n];
- SET_PTHREQ_RETVAL(tid, 0);
-
- VG_TRACK( post_mem_write, Vg_CorePThread, tid,
- (Addr)fh, sizeof(ForkHandlerEntry) );
-}
-
-
-static
-void do__get_stack_info ( ThreadId tid, ThreadId which, StackInfo* si )
-{
- Char msg_buf[100];
-
- vg_assert(VG_(is_valid_tid)(tid)
- && VG_(threads)[tid].status == VgTs_Runnable);
-
- if (VG_(clo_trace_sched)) {
- VG_(sprintf)(msg_buf, "get_stack_info for tid %d", which );
- print_pthread_event(tid, msg_buf);
- }
-
- if (!VG_(is_valid_tid)(which)) {
- SET_PTHREQ_RETVAL(tid, -1);
- return;
- }
-
- si->base = VG_(threads)[which].stack_base;
- si->size = VG_(threads)[which].stack_size
- - VG_AR_CLIENT_STACKBASE_REDZONE_SZB
- - VG_(threads)[which].stack_guard_size;
- si->guardsize = VG_(threads)[which].stack_guard_size;
-
- SET_PTHREQ_RETVAL(tid, 0);
-}
-
-/* ---------------------------------------------------------------------
- Shadow register manipulations
- ------------------------------------------------------------------ */
-
-void VG_(set_shadow_regs_area) ( ThreadId tid, OffT offset, SizeT size,
- const UChar* area )
-{
- ThreadState* tst;
-
- vg_assert(VG_(is_valid_tid)(tid));
- tst = & VG_(threads)[tid];
-
- // Bounds check
- vg_assert(0 <= offset && offset < sizeof(VexGuestArchState));
- vg_assert(offset + size <= sizeof(VexGuestArchState));
-
- VG_(memcpy)( (void*)(((Addr)(&tst->arch.vex_shadow)) + offset), area, size);
-}
-
-void VG_(get_shadow_regs_area) ( ThreadId tid, OffT offset, SizeT size,
- UChar* area )
-{
- ThreadState* tst;
-
- vg_assert(VG_(is_valid_tid)(tid));
- tst = & VG_(threads)[tid];
-
- // Bounds check
- vg_assert(0 <= offset && offset < sizeof(VexGuestArchState));
- vg_assert(offset + size <= sizeof(VexGuestArchState));
-
- VG_(memcpy)( area, (void*)(((Addr)&(tst->arch.vex_shadow)) + offset), size);
-}
-
-
-void VG_(set_return_from_syscall_shadow) ( ThreadId tid, UWord ret_shadow )
-{
- VG_(set_shadow_regs_area)(tid, O_SYSCALL_RET, sizeof(UWord),
- (UChar*)&ret_shadow);
-}
+void VG_(set_return_from_syscall_shadow) ( ThreadId tid, UWord ret_shadow )
+{
+ VG_(set_shadow_regs_area)(tid, O_SYSCALL_RET, sizeof(UWord),
+ (UChar*)&ret_shadow);
+}
UInt VG_(get_exit_status_shadow) ( ThreadId tid )
{
return ret;
}
-void VG_(intercept_libc_freeres_wrapper)(Addr addr)
-{
- __libc_freeres_wrapper = addr;
-}
/* ---------------------------------------------------------------------
Handle client requests.
choose a new thread to run.
*/
static
-void do_client_request ( ThreadId tid, UWord* arg )
+void do_client_request ( ThreadId tid )
{
+ UWord* arg = (UWord*)(CLREQ_ARGS(VG_(threads)[tid].arch));
UWord req_no = arg[0];
if (0)
switch (req_no) {
case VG_USERREQ__CLIENT_CALL0: {
- UWord (*f)(ThreadId) = (void*)arg[1];
+ UWord (*f)(void) = (void*)arg[1];
if (f == NULL)
VG_(message)(Vg_DebugMsg, "VG_USERREQ__CLIENT_CALL0: func=%p\n", f);
else
- SET_CLCALL_RETVAL(tid, f ( tid ), (Addr)f);
+ SET_CLCALL_RETVAL(tid, f ( ), (Addr)f);
break;
}
case VG_USERREQ__CLIENT_CALL1: {
- UWord (*f)(ThreadId, UWord) = (void*)arg[1];
+ UWord (*f)(UWord) = (void*)arg[1];
if (f == NULL)
VG_(message)(Vg_DebugMsg, "VG_USERREQ__CLIENT_CALL1: func=%p\n", f);
else
- SET_CLCALL_RETVAL(tid, f ( tid, arg[2] ), (Addr)f );
+ SET_CLCALL_RETVAL(tid, f ( arg[2] ), (Addr)f );
break;
}
case VG_USERREQ__CLIENT_CALL2: {
- UWord (*f)(ThreadId, UWord, UWord) = (void*)arg[1];
+ UWord (*f)(UWord, UWord) = (void*)arg[1];
if (f == NULL)
VG_(message)(Vg_DebugMsg, "VG_USERREQ__CLIENT_CALL2: func=%p\n", f);
else
- SET_CLCALL_RETVAL(tid, f ( tid, arg[2], arg[3] ), (Addr)f );
+ SET_CLCALL_RETVAL(tid, f ( arg[2], arg[3] ), (Addr)f );
break;
}
case VG_USERREQ__CLIENT_CALL3: {
- UWord (*f)(ThreadId, UWord, UWord, UWord) = (void*)arg[1];
+ UWord (*f)(UWord, UWord, UWord) = (void*)arg[1];
if (f == NULL)
VG_(message)(Vg_DebugMsg, "VG_USERREQ__CLIENT_CALL3: func=%p\n", f);
else
- SET_CLCALL_RETVAL(tid, f ( tid, arg[2], arg[3], arg[4] ), (Addr)f );
+ SET_CLCALL_RETVAL(tid, f ( arg[2], arg[3], arg[4] ), (Addr)f );
break;
}
/* Note: for tools that replace malloc() et al, we want to call
the replacement versions. For those that don't, we want to call
- VG_(cli_malloc)() et al. We do this by calling TL_(malloc)(), which
+ VG_(cli_malloc)() et al. We do this by calling SK_(malloc)(), which
malloc-replacing tools must replace, but have the default definition
- of TL_(malloc)() call VG_(cli_malloc)(). */
+ of SK_(malloc)() call VG_(cli_malloc)(). */
/* Note: for MALLOC and FREE, must set the appropriate "lock"... see
- the comment in vg_defaults.c/TL_(malloc)() for why. */
+ the comment in vg_defaults.c/SK_(malloc)() for why. */
case VG_USERREQ__MALLOC:
VG_(tl_malloc_called_by_scheduler) = True;
SET_PTHREQ_RETVAL(
SET_PTHREQ_RETVAL(tid, 0); /* irrelevant */
break;
- case VG_USERREQ__PTHREAD_GET_THREADID:
- SET_PTHREQ_RETVAL(tid, tid);
- break;
-
case VG_USERREQ__RUNNING_ON_VALGRIND:
- SET_CLREQ_RETVAL(tid, 1);
- break;
-
- case VG_USERREQ__GET_PTHREAD_TRACE_LEVEL:
- SET_PTHREQ_RETVAL(tid, VG_(clo_trace_pthread_level));
+ SET_CLREQ_RETVAL(tid, RUNNING_ON_VALGRIND+1);
break;
case VG_USERREQ__READ_MILLISECOND_TIMER:
SET_PTHREQ_RETVAL(tid, VG_(read_millisecond_timer)());
break;
- /* Some of these may make thread tid non-runnable, but the
- scheduler checks for that on return from this function. */
- case VG_USERREQ__PTHREAD_MUTEX_LOCK:
- do_pthread_mutex_lock( tid, False, (void *)(arg[1]), 0xFFFFFFFF );
- break;
-
- case VG_USERREQ__PTHREAD_MUTEX_TIMEDLOCK:
- do_pthread_mutex_lock( tid, False, (void *)(arg[1]), arg[2] );
- break;
-
- case VG_USERREQ__PTHREAD_MUTEX_TRYLOCK:
- do_pthread_mutex_lock( tid, True, (void *)(arg[1]), 0xFFFFFFFF );
- break;
-
- case VG_USERREQ__PTHREAD_MUTEX_UNLOCK:
- do_pthread_mutex_unlock( tid, (void *)(arg[1]) );
- break;
-
- case VG_USERREQ__PTHREAD_GETSPECIFIC_PTR:
- do_pthread_getspecific_ptr ( tid );
- break;
-
- case VG_USERREQ__SET_CANCELTYPE:
- do__set_canceltype ( tid, arg[1] );
- break;
-
- case VG_USERREQ__CLEANUP_PUSH:
- do__cleanup_push ( tid, (CleanupEntry*)(arg[1]) );
- break;
-
- case VG_USERREQ__CLEANUP_POP:
- do__cleanup_pop ( tid, (CleanupEntry*)(arg[1]) );
- break;
-
- case VG_USERREQ__TESTCANCEL:
- do__testcancel ( tid );
- break;
-
- case VG_USERREQ__PTHREAD_JOIN:
- do_pthread_join( tid, arg[1], (void**)(arg[2]) );
- break;
-
- case VG_USERREQ__PTHREAD_COND_WAIT:
- do_pthread_cond_wait( tid,
- (vg_pthread_cond_t *)(arg[1]),
- (vg_pthread_mutex_t *)(arg[2]),
- 0xFFFFFFFF /* no timeout */ );
- break;
-
- case VG_USERREQ__PTHREAD_COND_TIMEDWAIT:
- do_pthread_cond_wait( tid,
- (vg_pthread_cond_t *)(arg[1]),
- (vg_pthread_mutex_t *)(arg[2]),
- arg[3] /* timeout millisecond point */ );
- break;
-
- case VG_USERREQ__PTHREAD_COND_SIGNAL:
- do_pthread_cond_signal_or_broadcast(
- tid,
- False, /* signal, not broadcast */
- (vg_pthread_cond_t *)(arg[1]) );
- break;
-
- case VG_USERREQ__PTHREAD_COND_BROADCAST:
- do_pthread_cond_signal_or_broadcast(
- tid,
- True, /* broadcast, not signal */
- (vg_pthread_cond_t *)(arg[1]) );
- break;
-
- case VG_USERREQ__PTHREAD_KEY_VALIDATE:
- do_pthread_key_validate ( tid,
- (pthread_key_t)(arg[1]) );
- break;
-
- case VG_USERREQ__PTHREAD_KEY_CREATE:
- do_pthread_key_create ( tid,
- (pthread_key_t*)(arg[1]),
- (void(*)(void*))(arg[2]) );
- break;
-
- case VG_USERREQ__PTHREAD_KEY_DELETE:
- do_pthread_key_delete ( tid,
- (pthread_key_t)(arg[1]) );
- break;
-
- case VG_USERREQ__PTHREAD_SETSPECIFIC_PTR:
- do_pthread_setspecific_ptr ( tid,
- (void**)(arg[1]) );
- break;
-
- case VG_USERREQ__PTHREAD_SIGMASK:
- do_pthread_sigmask ( tid,
- arg[1],
- (vki_sigset_t*)(arg[2]),
- (vki_sigset_t*)(arg[3]) );
- break;
-
- case VG_USERREQ__PTHREAD_KILL:
- do_pthread_kill ( tid, arg[1], arg[2] );
- break;
-
- case VG_USERREQ__PTHREAD_YIELD:
- do_pthread_yield ( tid );
- /* On return from do_client_request(), the scheduler will
- select a new thread to run. */
- break;
-
- case VG_USERREQ__SET_CANCELSTATE:
- do__set_cancelstate ( tid, arg[1] );
- break;
-
- case VG_USERREQ__SET_OR_GET_DETACH:
- do__set_or_get_detach ( tid, arg[1], arg[2] );
- break;
-
- case VG_USERREQ__SET_CANCELPEND:
- do__set_cancelpend ( tid, arg[1], (void(*)(void*))arg[2] );
- break;
-
- case VG_USERREQ__WAIT_JOINER:
- do__wait_joiner ( tid, (void*)arg[1] );
- break;
-
- case VG_USERREQ__QUIT:
- do__quit ( tid );
- break;
-
- case VG_USERREQ__APPLY_IN_NEW_THREAD:
- do__apply_in_new_thread ( tid, (void*(*)(void*))arg[1],
- (void*)arg[2], (StackInfo*)(arg[3]) );
- break;
-
- case VG_USERREQ__GET_KEY_D_AND_S:
- do__get_key_destr_and_spec ( tid,
- (pthread_key_t)arg[1],
- (CleanupEntry*)arg[2] );
- break;
-
- case VG_USERREQ__NUKE_OTHER_THREADS:
- VG_(nuke_all_threads_except) ( tid );
- SET_PTHREQ_RETVAL(tid, 0);
- break;
-
- case VG_USERREQ__PTHREAD_ERROR:
- VG_(record_pthread_error)( tid, (Char*)(arg[1]) );
- SET_PTHREQ_RETVAL(tid, 0);
- break;
-
- case VG_USERREQ__SET_FHSTACK_USED:
- do__set_fhstack_used( tid, (Int)(arg[1]) );
- break;
-
- case VG_USERREQ__GET_FHSTACK_USED:
- do__get_fhstack_used( tid );
- break;
-
- case VG_USERREQ__SET_FHSTACK_ENTRY:
- do__set_fhstack_entry( tid, (Int)(arg[1]),
- (ForkHandlerEntry*)(arg[2]) );
- break;
-
- case VG_USERREQ__GET_FHSTACK_ENTRY:
- do__get_fhstack_entry( tid, (Int)(arg[1]),
- (ForkHandlerEntry*)(arg[2]) );
- break;
-
- case VG_USERREQ__SIGNAL_RETURNS:
- handle_signal_return(tid);
- break;
-
- case VG_USERREQ__GET_STACK_INFO:
- do__get_stack_info( tid, (Int)(arg[1]), (StackInfo*)(arg[2]) );
- break;
-
-
- case VG_USERREQ__GET_SIGRT_MIN:
- SET_PTHREQ_RETVAL(tid, VG_(sig_rtmin));
- break;
-
- case VG_USERREQ__GET_SIGRT_MAX:
- SET_PTHREQ_RETVAL(tid, VG_(sig_rtmax));
- break;
-
- case VG_USERREQ__ALLOC_RTSIG:
- SET_PTHREQ_RETVAL(tid, VG_(sig_alloc_rtsig)((Int)arg[1]));
- break;
-
case VG_USERREQ__PRINTF: {
int count =
VG_(vmessage)( Vg_ClientMsg, (char *)arg[1], (void*)arg[2] );
case VG_USERREQ__GET_MALLOCFUNCS: {
struct vg_mallocfunc_info *info = (struct vg_mallocfunc_info *)arg[1];
- info->tl_malloc = (Addr)TL_(malloc);
- info->tl_calloc = (Addr)TL_(calloc);
- info->tl_realloc = (Addr)TL_(realloc);
- info->tl_memalign = (Addr)TL_(memalign);
- info->tl___builtin_new = (Addr)TL_(__builtin_new);
- info->tl___builtin_vec_new = (Addr)TL_(__builtin_vec_new);
- info->tl_free = (Addr)TL_(free);
- info->tl___builtin_delete = (Addr)TL_(__builtin_delete);
- info->tl___builtin_vec_delete = (Addr)TL_(__builtin_vec_delete);
-
- info->arena_payload_szB = (Addr)VG_(arena_payload_szB);
+ info->tl_malloc = (Addr)TL_(malloc);
+ info->tl_calloc = (Addr)TL_(calloc);
+ info->tl_realloc = (Addr)TL_(realloc);
+ info->tl_memalign = (Addr)TL_(memalign);
+ info->tl___builtin_new = (Addr)TL_(__builtin_new);
+ info->tl___builtin_vec_new = (Addr)TL_(__builtin_vec_new);
+ info->tl_free = (Addr)TL_(free);
+ info->tl___builtin_delete = (Addr)TL_(__builtin_delete);
+ info->tl___builtin_vec_delete = (Addr)TL_(__builtin_vec_delete);
+
+ info->arena_payload_szB = (Addr)VG_(arena_payload_szB);
- info->clo_sloppy_malloc = VG_(clo_sloppy_malloc);
- info->clo_trace_malloc = VG_(clo_trace_malloc);
+ info->clo_sloppy_malloc = VG_(clo_sloppy_malloc);
+ info->clo_trace_malloc = VG_(clo_trace_malloc);
SET_CLREQ_RETVAL( tid, 0 ); /* return value is meaningless */
SET_CLREQ_RETVAL( tid, VG_(get_n_errs_found)() );
break;
+ /* Obsolete requests: print a warning in case there's an old
+ libpthread.so still hanging around. */
+ case VG_USERREQ__APPLY_IN_NEW_THREAD:
+ case VG_USERREQ__QUIT:
+ case VG_USERREQ__WAIT_JOINER:
+ case VG_USERREQ__PTHREAD_JOIN:
+ case VG_USERREQ__SET_CANCELSTATE:
+ case VG_USERREQ__SET_CANCELTYPE:
+ case VG_USERREQ__TESTCANCEL:
+ case VG_USERREQ__SET_CANCELPEND:
+ case VG_USERREQ__SET_OR_GET_DETACH:
+ case VG_USERREQ__PTHREAD_GET_THREADID:
+ case VG_USERREQ__PTHREAD_MUTEX_LOCK:
+ case VG_USERREQ__PTHREAD_MUTEX_TIMEDLOCK:
+ case VG_USERREQ__PTHREAD_MUTEX_TRYLOCK:
+ case VG_USERREQ__PTHREAD_MUTEX_UNLOCK:
+ case VG_USERREQ__PTHREAD_COND_WAIT:
+ case VG_USERREQ__PTHREAD_COND_TIMEDWAIT:
+ case VG_USERREQ__PTHREAD_COND_SIGNAL:
+ case VG_USERREQ__PTHREAD_COND_BROADCAST:
+ case VG_USERREQ__PTHREAD_KEY_CREATE:
+ case VG_USERREQ__PTHREAD_KEY_DELETE:
+ case VG_USERREQ__PTHREAD_SETSPECIFIC_PTR:
+ case VG_USERREQ__PTHREAD_GETSPECIFIC_PTR:
+ case VG_USERREQ__PTHREAD_SIGMASK:
+ case VG_USERREQ__SIGWAIT:
+ case VG_USERREQ__PTHREAD_KILL:
+ case VG_USERREQ__PTHREAD_YIELD:
+ case VG_USERREQ__PTHREAD_KEY_VALIDATE:
+ case VG_USERREQ__CLEANUP_PUSH:
+ case VG_USERREQ__CLEANUP_POP:
+ case VG_USERREQ__GET_KEY_D_AND_S:
+ case VG_USERREQ__NUKE_OTHER_THREADS:
+ case VG_USERREQ__GET_N_SIGS_RETURNED:
+ case VG_USERREQ__SET_FHSTACK_USED:
+ case VG_USERREQ__GET_FHSTACK_USED:
+ case VG_USERREQ__SET_FHSTACK_ENTRY:
+ case VG_USERREQ__GET_FHSTACK_ENTRY:
+ case VG_USERREQ__GET_SIGRT_MIN:
+ case VG_USERREQ__GET_SIGRT_MAX:
+ case VG_USERREQ__ALLOC_RTSIG:
+ VG_(message)(Vg_UserMsg, "It looks like you've got an old libpthread.so* ");
+ VG_(message)(Vg_UserMsg, "installed in \"%s\".", VG_(libdir));
+ VG_(message)(Vg_UserMsg, "Please delete it and try again.");
+ VG_(exit)(99);
+ break;
+
default:
- if (VG_(needs).client_requests) {
+ if (VGA_(client_request)(tid, arg)) {
+ /* architecture handled the client request */
+ } else if (VG_(needs).client_requests) {
UWord ret;
if (VG_(clo_verbosity) > 2)
arg[0], (void*)arg[1], arg[2] );
if (TL_(handle_client_request) ( tid, arg, &ret ))
- SET_CLREQ_RETVAL(tid, ret);
+ SET_CLREQ_RETVAL(tid, ret);
} else {
static Bool whined = False;
- if (!whined) {
+ if (!whined && VG_(clo_verbosity) > 2) {
// Allow for requests in core, but defined by tools, which
// have 0 and 0 in their two high bytes.
Char c1 = (arg[0] >> 24) & 0xff;
Sanity checking.
------------------------------------------------------------------ */
-/* Internal consistency checks on the sched/pthread structures. */
+/* Internal consistency checks on the sched structures. */
static
-void scheduler_sanity ( void )
+void scheduler_sanity ( ThreadId tid )
{
- vg_pthread_mutex_t* mx;
- vg_pthread_cond_t* cv;
- Int i;
- struct timeout* top;
- UInt lasttime = 0;
-
- for(top = timeouts; top != NULL; top = top->next) {
- vg_assert(top->time >= lasttime);
- vg_assert(is_valid_or_empty_tid(top->tid));
-
-#if 0
- /* assert timeout entry is either stale, or associated with a
- thread in the right state
-
- XXX disable for now - can be stale, but times happen to match
- */
- vg_assert(VG_(threads)[top->tid].awaken_at != top->time ||
- VG_(threads)[top->tid].status == VgTs_Sleeping ||
- VG_(threads)[top->tid].status == VgTs_WaitMX ||
- VG_(threads)[top->tid].status == VgTs_WaitCV);
-#endif
-
- lasttime = top->time;
- }
-
- /* VG_(printf)("scheduler_sanity\n"); */
- for (i = 1; i < VG_N_THREADS; i++) {
- mx = VG_(threads)[i].associated_mx;
- cv = VG_(threads)[i].associated_cv;
- if (VG_(threads)[i].status == VgTs_WaitMX) {
- /* If we're waiting on a MX: (1) the mx is not null, (2, 3)
- it's actually held by someone, since otherwise this thread
- is deadlocked, (4) the mutex's owner is not us, since
- otherwise this thread is also deadlocked. The logic in
- do_pthread_mutex_lock rejects attempts by a thread to lock
- a (non-recursive) mutex which it already owns.
-
- (2) has been seen to fail sometimes. I don't know why.
- Possibly to do with signals. */
- vg_assert(cv == NULL);
- /* 1 */ vg_assert(mx != NULL);
- /* 2 */ vg_assert(mx->__vg_m_count > 0);
- /* 3 */ vg_assert(VG_(is_valid_tid)((ThreadId)(UWord)mx->__vg_m_owner));
- /* 4 */ vg_assert((UInt)i != (ThreadId)(UWord)mx->__vg_m_owner ||
- VG_(threads)[i].awaken_at != 0xFFFFFFFF);
- } else
- if (VG_(threads)[i].status == VgTs_WaitCV) {
- vg_assert(cv != NULL);
- vg_assert(mx != NULL);
- } else {
- vg_assert(cv == NULL);
- vg_assert(mx == NULL);
- }
+ Bool bad = False;
- if (VG_(threads)[i].status != VgTs_Empty) {
- Int
- stack_used = (Addr)VG_(threads)[i].stack_highest_word
- - (Addr)STACK_PTR(VG_(threads)[i].arch);
- Int
- stack_avail = VG_(threads)[i].stack_size
- - VG_AR_CLIENT_STACKBASE_REDZONE_SZB
- - VG_(threads)[i].stack_guard_size;
- /* This test is a bit bogus - it doesn't take into account
- alternate signal stacks, for a start. Also, if a thread
- has it's stack pointer somewhere strange, killing Valgrind
- isn't the right answer. */
- if (0 && i > 1 /* not the root thread */
- && stack_used >= stack_avail) {
- VG_(message)(Vg_UserMsg,
- "Error: STACK OVERFLOW: "
- "thread %d: stack used %d, available %d",
- i, stack_used, stack_avail );
- VG_(message)(Vg_UserMsg,
- "Terminating Valgrind. If thread(s) "
- "really need more stack, increase");
- VG_(message)(Vg_UserMsg,
- "VG_PTHREAD_STACK_SIZE in core.h and recompile.");
- VG_(exit)(1);
- }
- }
+ if (!VG_(is_running_thread)(tid)) {
+ VG_(message)(Vg_DebugMsg,
+ "Thread %d is supposed to be running, but doesn't own run_sema (owned by %d)\n",
+ tid, running_tid);
+ bad = True;
}
- for (i = 0; i < VG_N_THREAD_KEYS; i++) {
- if (!vg_thread_keys[i].inuse)
- vg_assert(vg_thread_keys[i].destructor == NULL);
+ if (VG_(gettid)() != VG_(threads)[tid].os_state.lwpid) {
+      VG_(message)(Vg_DebugMsg,
+		   "Thread %d supposed to be in LWP %d, but we're actually %d\n",
+		   tid, VG_(threads)[tid].os_state.lwpid, VG_(gettid)());
+ bad = True;
}
}
*/
/*
- New signal handling.
-
- Now that all threads have a ProxyLWP to deal with signals for them,
- we can use the kernel to do a lot more work for us. The kernel
- will deal with blocking signals, pending blocked signals, queues
- and thread selection. We just need to deal with setting a signal
- handler and signal delivery.
-
- In order to match the proper kernel signal semantics, the proxy LWP
- which recieves a signal goes through an exchange of messages with
- the scheduler LWP. When the proxy first gets a signal, it
- immediately blocks all signals and sends a message back to the
- scheduler LWP. It then enters a SigACK state, in which requests to
- run system calls are ignored, and all signals remain blocked. When
- the scheduler gets the signal message, it sets up the thread to
- enter its signal handler, and sends a SigACK message back to the
- proxy, which includes the signal mask to be applied while running
- the handler. On recieving SigACK, the proxy sets the new signal
- mask and reverts to its normal mode of operation. (All this is
- implemented in vg_syscalls.c)
-
- This protocol allows the application thread to take delivery of the
- signal at some arbitary time after the signal was sent to the
- process, while still getting proper signal delivery semantics (most
- notably, getting the signal block sets right while running the
- signal handler, and not allowing recursion where there wouldn't
- have been normally).
-
- Important point: the main LWP *always* has all signals blocked
- except for SIGSEGV, SIGBUS, SIGFPE and SIGILL (ie, signals which
- are synchronously changed . If the kernel supports thread groups
- with shared signal state (Linux 2.5+, RedHat's 2.4), then these are
- the only signals it needs to handle.
-
- If we get a synchronous signal, we longjmp back into the scheduler,
- since we can't resume executing the client code. The scheduler
- immediately starts signal delivery to the thread which generated
- the signal.
-
- On older kernels without thread-groups, we need to poll the pending
- signal with sigtimedwait() and farm any signals off to the
- appropriate proxy LWP.
+ Signal handling.
+
+ There are 4 distinct classes of signal:
+
+ 1. Synchronous, instruction-generated (SIGILL, FPE, BUS, SEGV and
+ TRAP): these are signals as a result of an instruction fault. If
+ we get one while running client code, then we just do the
+ appropriate thing. If it happens while running Valgrind code, then
+ it indicates a Valgrind bug. Note that we "manually" implement
+ automatic stack growth, such that if a fault happens near the
+ client process stack, it is extended in the same way the kernel
+ would, and the fault is never reported to the client program.
+
+   2. Asynchronous variants of the above signals: If the kernel tries
+ to deliver a sync signal while it is blocked, it just kills the
+ process. Therefore, we can't block those signals if we want to be
+ able to report on bugs in Valgrind. This means that we're also
+ open to receiving those signals from other processes, sent with
+ kill. We could get away with just dropping them, since they aren't
+ really signals that processes send to each other.
+
+ 3. Synchronous, general signals. If a thread/process sends itself
+   a signal with kill, it's expected to be synchronous: i.e., the signal
+ will have been delivered by the time the syscall finishes.
+
+   4. Asynchronous, general signals. All other signals, sent by
+   another process with kill. These are generally blocked, except for
+   two special cases: we poll for them each time we're about to run a
+   thread for a time quantum, and while running blocking syscalls.
+
+
+ In addition, we define two signals for internal use: SIGVGCHLD and
+ SIGVGKILL. SIGVGCHLD is used to indicate thread death to any
+ reaping thread (the master thread). It is always blocked and never
+ delivered as a signal; it is always polled with sigtimedwait.
+
+ SIGVGKILL is used to terminate threads. When one thread wants
+ another to exit, it will set its exitreason and send it SIGVGKILL
+ if it appears to be blocked in a syscall.
+
+
+ We use a kernel thread for each application thread. When the
+ thread allows itself to be open to signals, it sets the thread
+ signal mask to what the client application set it to. This means
+ that we get the kernel to do all signal routing: under Valgrind,
+ signals get delivered in the same way as in the non-Valgrind case
+ (the exception being for the sync signal set, since they're almost
+ always unblocked).
*/
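The "poll for them each time we're about to run a thread" idea in class 4 can be sketched with plain POSIX `sigtimedwait()` and a zero timeout, which consumes a pending blocked signal without ever sleeping. This is an illustration only, not Valgrind's actual polling code; `poll_async_signal` and `demo` are made-up names:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <signal.h>
#include <time.h>

/* Returns the pending signal number from 'set', or -1 if nothing is
   pending (sigtimedwait fails with EAGAIN on a zero timeout). */
static int poll_async_signal(const sigset_t *set)
{
    struct timespec zero = { 0, 0 };
    siginfo_t info;
    return sigtimedwait(set, &info, &zero);
}

static int demo(void)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);

    /* Block SIGUSR1 so it stays pending instead of being delivered. */
    sigprocmask(SIG_BLOCK, &set, NULL);

    assert(poll_async_signal(&set) == -1);  /* nothing pending yet */
    raise(SIGUSR1);                         /* now it is pending */
    return poll_async_signal(&set);         /* dequeues the signal */
}
```

The same mechanism also suits SIGVGCHLD as described above: a signal that is "always blocked and never delivered", only ever collected by polling.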
#include "core.h"
static void vg_sync_signalhandler ( Int sigNo, vki_siginfo_t *info, struct vki_ucontext * );
static void vg_async_signalhandler ( Int sigNo, vki_siginfo_t *info, struct vki_ucontext * );
-static void vg_babyeater ( Int sigNo, vki_siginfo_t *info, struct vki_ucontext * );
-static void proxy_sigvg_handler ( Int sigNo, vki_siginfo_t *info, struct vki_ucontext * );
+static void sigvgkill_handler ( Int sigNo, vki_siginfo_t *info, struct vki_ucontext * );
-static Bool is_correct_sigmask(void);
static const Char *signame(Int sigNo);
-/* ---------------------------------------------------------------------
- Signal stack
- ------------------------------------------------------------------ */
+/* Maximum usable signal. */
+Int VG_(max_signal) = _VKI_NSIG;
-/* We have to ask for signals to be delivered on an alternative
- stack, since it is possible, although unlikely, that we'll have to run
- client code from inside the Valgrind-installed signal handler. */
-static Addr sigstack[VG_SIGSTACK_SIZE_W];
+#define N_QUEUED_SIGNALS 8
-extern void VG_(get_sigstack_bounds)( Addr* low, Addr* high )
-{
- *low = (Addr) & sigstack[0];
- *high = (Addr) & sigstack[VG_SIGSTACK_SIZE_W];
-}
+typedef struct SigQueue {
+ Int next;
+ vki_siginfo_t sigs[N_QUEUED_SIGNALS];
+} SigQueue;
/* ---------------------------------------------------------------------
HIGH LEVEL STUFF TO DO WITH SIGNALS: POLICY (MOSTLY)
------------------------------------------------------------------ */
-/* If set to true, the currently running kernel doesn't do the right
- thing with signals and LWPs, so we need to do our own. */
-Bool VG_(do_signal_routing) = False;
-
-/* Set of signal which are pending for the whole process. This is
- only used when we're doing signal routing, and this is a place to
- remember pending signals which we can't keep actually pending for
- some reason. */
-static vki_sigset_t proc_pending; /* process-wide pending signals */
-
-/* Since we use a couple of RT signals, we need to handle allocating
- the rest for application use. */
-Int VG_(sig_rtmin) = VKI_SIGVGRTUSERMIN;
-Int VG_(sig_rtmax) = VKI_SIGRTMAX;
-
-Int VG_(sig_alloc_rtsig)(Int high)
-{
- Int ret;
-
- if (VG_(sig_rtmin) >= VG_(sig_rtmax))
- ret = -1;
- else
- ret = high ? VG_(sig_rtmin)++ : VG_(sig_rtmax)--;
-
- vg_assert(ret >= VKI_SIGVGRTUSERMIN);
-
- return ret;
-}
-
/* ---------------------------------------------------------------------
Signal state for this process.
------------------------------------------------------------------ */
client's handler */
UInt scss_flags;
vki_sigset_t scss_mask;
- void* scss_restorer; /* god knows; we ignore it. */
+ void* scss_restorer; /* where sigreturn goes */
}
SCSS_Per_Signal;
Flags:
SA_SIGINFO -- we always set it, and honour it for the client
SA_NOCLDSTOP -- passed to kernel
- SA_ONESHOT or SA_RESETHAND -- required; abort if not set
+ SA_ONESHOT or SA_RESETHAND -- pass through
SA_RESTART -- we observe this but set our handlers to always restart
SA_NOMASK or SA_NODEFER -- we observe this, but our handlers block everything
- SA_ONSTACK -- currently not supported; abort if set.
- SA_NOCLDWAIT -- we observe this, but we never set it (doesn't quite
- work if client is blocked in a wait4() syscall)
+ SA_ONSTACK -- pass through
+ SA_NOCLDWAIT -- pass through
*/
handler = if client has a handler, then our handler
else if client is DFL, then our handler as well
else (client must be IGN)
- if (signal == SIGCHLD), then handler is vg_babyeater
- else IGN
-
- We don't really bother with blocking signals here, because the we
- rely on the proxyLWP having set it as part of its kernel state.
+ then handler is IGN
*/
static
void calculate_SKSS_from_SCSS ( SKSS* dst )
case VKI_SIGBUS:
case VKI_SIGFPE:
case VKI_SIGILL:
+ case VKI_SIGTRAP:
/* For these, we always want to catch them and report, even
if the client code doesn't. */
skss_handler = vg_sync_signalhandler;
break;
- case VKI_SIGVGINT:
- case VKI_SIGVGKILL:
- skss_handler = proxy_sigvg_handler;
+ case VKI_SIGCONT:
+ /* Let the kernel handle SIGCONT unless the client is actually
+ catching it. */
+ if (vg_scss.scss_per_sig[sig].scss_handler == VKI_SIG_DFL)
+ skss_handler = VKI_SIG_DFL;
+ else if (vg_scss.scss_per_sig[sig].scss_handler == VKI_SIG_IGN)
+ skss_handler = VKI_SIG_IGN;
+ else
+ skss_handler = vg_async_signalhandler;
break;
- case VKI_SIGCHLD:
- if (scss_handler == VKI_SIG_IGN) {
- skss_handler = vg_babyeater;
- break;
- }
- /* FALLTHROUGH */
default:
- if (scss_handler == VKI_SIG_IGN)
- skss_handler = VKI_SIG_IGN;
- else
- skss_handler = vg_async_signalhandler;
+ if (sig == VKI_SIGVGKILL)
+ skss_handler = sigvgkill_handler;
+ else if (sig == VKI_SIGVGCHLD)
+ skss_handler = VKI_SIG_IGN; /* we only poll for it */
+ else {
+ if (scss_handler == VKI_SIG_IGN)
+ skss_handler = VKI_SIG_IGN;
+ else
+ skss_handler = vg_async_signalhandler;
+ }
break;
}
- /* Restorer */
- /*
- Doesn't seem like we can spin this one.
- if (vg_scss.scss_per_sig[sig].scss_restorer != NULL)
- VG_(unimplemented)
- ("sigactions with non-NULL .sa_restorer field");
- */
-
/* Flags */
skss_flags = 0;
- /* SA_NOCLDSTOP: pass to kernel */
- if (scss_flags & VKI_SA_NOCLDSTOP)
- skss_flags |= VKI_SA_NOCLDSTOP;
-
- /* SA_NOCLDWAIT - don't set */
- /* XXX we could set this if we're not using wait() ourselves for
- tracking proxyLWPs (ie, have_futex is true in
- vg_syscalls.c. */
+ /* SA_NOCLDSTOP, SA_NOCLDWAIT: pass to kernel */
+ skss_flags |= scss_flags & (VKI_SA_NOCLDSTOP | VKI_SA_NOCLDWAIT);
/* SA_ONESHOT: ignore client setting */
- /*
- if (!(scss_flags & VKI_SA_ONESHOT))
- VG_(unimplemented)
- ("sigactions without SA_ONESHOT");
- vg_assert(scss_flags & VKI_SA_ONESHOT);
- skss_flags |= VKI_SA_ONESHOT;
- */
-
+
/* SA_RESTART: ignore client setting and always set it for us
(even though we never rely on the kernel to restart a
syscall, we observe whether it wanted to restart the syscall
- or not, which guides our actions) */
+ or not, which helps VGA_(interrupted_syscall)()) */
skss_flags |= VKI_SA_RESTART;
/* SA_NOMASK: ignore it */
/* SA_ONSTACK: client setting is irrelevant here */
- /*
- if (scss_flags & VKI_SA_ONSTACK)
- VG_(unimplemented)
- ("signals on an alternative stack (SA_ONSTACK)");
- vg_assert(!(scss_flags & VKI_SA_ONSTACK));
- */
- /* ... but WE ask for on-stack ourselves ... */
- skss_flags |= VKI_SA_ONSTACK;
+ /* We don't set a signal stack, so ignore */
/* always ask for SA_SIGINFO */
skss_flags |= VKI_SA_SIGINFO;
skss_flags |= VKI_SA_RESTORER;
/* Create SKSS entry for this signal. */
-
if (sig != VKI_SIGKILL && sig != VKI_SIGSTOP)
dst->skss_per_sig[sig].skss_handler = skss_handler;
else
SKSS skss_old;
struct vki_sigaction ksa, ksa_old;
- vg_assert(is_correct_sigmask());
-
/* Remember old SKSS and calculate new one. */
skss_old = vg_skss;
calculate_SKSS_from_SCSS ( &vg_skss );
/* Compare the new SKSS entries vs the old ones, and update kernel
where they differ. */
- for (sig = 1; sig <= _VKI_NSIG; sig++) {
+ for (sig = 1; sig <= VG_(max_signal); sig++) {
/* Trying to do anything with SIGKILL is pointless; just ignore
it. */
}
ksa.ksa_handler = vg_skss.skss_per_sig[sig].skss_handler;
- ksa.sa_flags = vg_skss.skss_per_sig[sig].skss_flags;
+ ksa.sa_flags = vg_skss.skss_per_sig[sig].skss_flags;
ksa.sa_restorer = VG_(sigreturn);
- vg_assert(ksa.sa_flags & VKI_SA_ONSTACK);
+ /* block all signals in handler */
VG_(sigfillset)( &ksa.sa_mask );
VG_(sigdelset)( &ksa.sa_mask, VKI_SIGKILL );
VG_(sigdelset)( &ksa.sa_mask, VKI_SIGSTOP );
- if (VG_(clo_trace_signals))
+ if (VG_(clo_trace_signals) && VG_(clo_verbosity) > 2)
VG_(message)(Vg_DebugMsg,
"setting ksig %d to: hdlr 0x%x, flags 0x%x, "
"mask(63..0) 0x%x 0x%x",
in kernel/signal.[ch] */
/* True if we are on the alternate signal stack. */
-static Int on_sig_stack ( ThreadId tid, Addr m_SP )
+static Bool on_sig_stack ( ThreadId tid, Addr m_SP )
{
ThreadState *tst = VG_(get_ThreadState)(tid);
}
-void VG_(do_sys_sigaction) ( ThreadId tid )
+Int VG_(do_sys_sigaction) ( Int signo,
+ const struct vki_sigaction *new_act,
+ struct vki_sigaction *old_act )
{
- Int signo;
- struct vki_sigaction* new_act;
- struct vki_sigaction* old_act;
-
- vg_assert(is_correct_sigmask());
-
- vg_assert(VG_(is_valid_tid)(tid));
- signo = SYSCALL_ARG1(VG_(threads)[tid].arch);
- new_act = (struct vki_sigaction*)SYSCALL_ARG2(VG_(threads)[tid].arch);
- old_act = (struct vki_sigaction*)SYSCALL_ARG3(VG_(threads)[tid].arch);
-
if (VG_(clo_trace_signals))
VG_(message)(Vg_DebugExtraMsg,
- "sys_sigaction: tid %d, sigNo %d, "
+ "sys_sigaction: sigNo %d, "
"new %p, old %p, new flags 0x%llx",
- tid, signo, (UWord)new_act, (UWord)old_act,
+ signo, (UWord)new_act, (UWord)old_act,
(ULong)(new_act ? new_act->sa_flags : 0) );
/* Rule out various error conditions. The aim is to ensure that if
succeed. */
/* Reject out-of-range signal numbers. */
- if (signo < 1 || signo > _VKI_NSIG) goto bad_signo;
+ if (signo < 1 || signo > VG_(max_signal)) goto bad_signo;
/* don't let them use our signals */
- if ( (signo == VKI_SIGVGINT || signo == VKI_SIGVGKILL)
+ if ( (signo > VKI_SIGVGRTUSERMAX)
&& new_act
&& !(new_act->ksa_handler == VKI_SIG_DFL || new_act->ksa_handler == VKI_SIG_IGN) )
goto bad_signo_reserved;
vg_scss.scss_per_sig[signo].scss_flags = new_act->sa_flags;
vg_scss.scss_per_sig[signo].scss_mask = new_act->sa_mask;
vg_scss.scss_per_sig[signo].scss_restorer = new_act->sa_restorer;
+
+ VG_(sigdelset)(&vg_scss.scss_per_sig[signo].scss_mask, VKI_SIGKILL);
+ VG_(sigdelset)(&vg_scss.scss_per_sig[signo].scss_mask, VKI_SIGSTOP);
}
/* All happy bunnies ... */
if (new_act) {
handle_SCSS_change( False /* lazy update */ );
}
- SET_SYSCALL_RETVAL(tid, 0);
- return;
+ return 0;
bad_signo:
if (VG_(needs).core_errors && VG_(clo_verbosity) >= 1)
VG_(message)(Vg_UserMsg,
"Warning: bad signal number %d in sigaction()",
signo);
- SET_SYSCALL_RETVAL(tid, -VKI_EINVAL);
- return;
+ return -VKI_EINVAL;
bad_signo_reserved:
if (VG_(needs).core_errors && VG_(clo_verbosity) >= 1) {
" the %s signal is used internally by Valgrind",
signame(signo));
}
- SET_SYSCALL_RETVAL(tid, -VKI_EINVAL);
- return;
+ return -VKI_EINVAL;
bad_sigkill_or_sigstop:
if (VG_(needs).core_errors && VG_(clo_verbosity) >= 1)
VG_(message)(Vg_UserMsg,
" the %s signal is uncatchable",
signame(signo));
- SET_SYSCALL_RETVAL(tid, -VKI_EINVAL);
- return;
+ return -VKI_EINVAL;
}
}
}
+static void sigvgchld_handler(Int sig)
+{
+ VG_(printf)("got a sigvgchld?\n");
+}
+
+/*
+ Wait until some predicate about threadstates is satisfied.
+
+ This uses SIGVGCHLD as a notification that it is now worth
+ re-evaluating the predicate.
+ */
+void VG_(wait_for_threadstate)(Bool (*pred)(void *), void *arg)
+{
+ vki_sigset_t set, saved;
+ struct vki_sigaction sa, old_sa;
+
+ /*
+ SIGVGCHLD is set to be ignored, and is unblocked by default.
+ This means all such signals are simply discarded.
+
+ In this loop, we actually block it, and then poll for it with
+ sigtimedwait.
+ */
+ VG_(sigemptyset)(&set);
+ VG_(sigaddset)(&set, VKI_SIGVGCHLD);
+
+ VG_(set_sleeping)(VG_(master_tid), VgTs_Yielding);
+ VG_(sigprocmask)(VKI_SIG_BLOCK, &set, &saved);
+
+ /* It shouldn't be necessary to set a handler, since the signal is
+ always blocked, but it seems to be necessary to convince the
+ kernel not to just toss the signal... */
+ sa.ksa_handler = sigvgchld_handler;
+ sa.sa_flags = 0;
+ VG_(sigfillset)(&sa.sa_mask);
+ VG_(sigaction)(VKI_SIGVGCHLD, &sa, &old_sa);
+
+ vg_assert(old_sa.ksa_handler == VKI_SIG_IGN);
+
+ while(!(*pred)(arg)) {
+ struct vki_siginfo si;
+ Int ret = VG_(sigtimedwait)(&set, &si, NULL);
+
+ if (ret > 0 && VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "Got %d (code=%d) from tid lwp %d",
+ ret, si.si_code, si._sifields._kill._pid);
+ }
+
+ VG_(sigaction)(VKI_SIGVGCHLD, &old_sa, NULL);
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &saved, NULL);
+ VG_(set_running)(VG_(master_tid));
+}
+
+/* Add and remove signals from mask so that we end up telling the
+ kernel the state we actually want rather than what the client
+ wants. */
+void VG_(sanitize_client_sigmask)(ThreadId tid, vki_sigset_t *mask)
+{
+ VG_(sigdelset)(mask, VKI_SIGKILL);
+ VG_(sigdelset)(mask, VKI_SIGSTOP);
+
+ VG_(sigdelset)(mask, VKI_SIGVGKILL); /* never block */
+
+ /* SIGVGCHLD is used by threads to indicate their state changes to
+ the master thread. Mostly it doesn't care, so it leaves the
+ signal ignored and unblocked. Everyone else should have it
+ blocked, so there's at most 1 thread with it unblocked. */
+ if (tid == VG_(master_tid))
+ VG_(sigdelset)(mask, VKI_SIGVGCHLD);
+ else
+ VG_(sigaddset)(mask, VKI_SIGVGCHLD);
+}
+
/*
This updates the thread's signal mask. There's no such thing as a
process-wide signal mask.
vki_sigset_t* newset,
vki_sigset_t* oldset )
{
- vg_assert(is_correct_sigmask());
-
if (VG_(clo_trace_signals))
VG_(message)(Vg_DebugExtraMsg,
"do_setmask: tid = %d how = %d (%s), set = %p %08x%08x",
/* Just do this thread. */
vg_assert(VG_(is_valid_tid)(tid));
if (oldset) {
- *oldset = VG_(threads)[tid].eff_sig_mask;
+ *oldset = VG_(threads)[tid].sig_mask;
if (VG_(clo_trace_signals))
VG_(message)(Vg_DebugExtraMsg,
"\toldset=%p %08x%08x",
do_sigprocmask_bitops (how, &VG_(threads)[tid].sig_mask, newset );
VG_(sigdelset)(&VG_(threads)[tid].sig_mask, VKI_SIGKILL);
VG_(sigdelset)(&VG_(threads)[tid].sig_mask, VKI_SIGSTOP);
- VG_(proxy_setsigmask)(tid);
+ VG_(threads)[tid].tmp_sig_mask = VG_(threads)[tid].sig_mask;
}
-
- vg_assert(is_correct_sigmask());
}
case VKI_SIG_UNBLOCK:
case VKI_SIG_SETMASK:
vg_assert(VG_(is_valid_tid)(tid));
- /* Syscall returns 0 (success) to its thread. Set this up before
- calling do_setmask() because we may get a signal as part of
- setting the mask, which will confuse things.
- */
- SET_SYSCALL_RETVAL(tid, 0);
do_setmask ( tid, how, set, oldset );
-
- VG_(route_signals)(); /* if we're routing, do something before returning */
+ SET_SYSCALL_RETVAL(tid, 0);
+ VG_(poll_signals)(tid); /* look for any newly deliverable signals */
break;
default:
vg_assert(ret == 0);
}
-/* Sanity check - check the scheduler LWP has all the signals blocked
- it is supposed to have blocked. */
-static Bool is_correct_sigmask(void)
+Bool VG_(client_signal_OK)(Int sigNo)
{
- vki_sigset_t mask;
- Bool ret = True;
-
- vg_assert(VG_(gettid)() == VG_(main_pid));
-
-#ifdef DEBUG_SIGNALS
- VG_(sigprocmask)(VKI_SIG_SETMASK, NULL, &mask);
-
- /* unresumable signals */
-
- ret = ret && !VG_(sigismember)(&mask, VKI_SIGSEGV);
- VG_(sigaddset)(&mask, VKI_SIGSEGV);
-
- ret = ret && !VG_(sigismember)(&mask, VKI_SIGBUS);
- VG_(sigaddset)(&mask, VKI_SIGBUS);
-
- ret = ret && !VG_(sigismember)(&mask, VKI_SIGFPE);
- VG_(sigaddset)(&mask, VKI_SIGFPE);
-
- ret = ret && !VG_(sigismember)(&mask, VKI_SIGILL);
- VG_(sigaddset)(&mask, VKI_SIGILL);
+ /* signal 0 is OK for kill */
+ Bool ret = sigNo >= 0 && sigNo <= VKI_SIGVGRTUSERMAX;
- /* unblockable signals (doesn't really matter if these are
- already present) */
- VG_(sigaddset)(&mask, VKI_SIGSTOP);
- VG_(sigaddset)(&mask, VKI_SIGKILL);
-
- ret = ret && VG_(isfullsigset)(&mask);
-#endif /* DEBUG_SIGNALS */
+ //VG_(printf)("client_signal_OK(%d) -> %d\n", sigNo, ret);
return ret;
}
-/* Set the signal mask for the scheduer LWP; this should be set once
- and left that way - all async signal handling is done in the proxy
- LWPs. */
-static void set_main_sigmask(void)
-{
- vki_sigset_t mask;
-
- VG_(sigfillset)(&mask);
- VG_(sigdelset)(&mask, VKI_SIGSEGV);
- VG_(sigdelset)(&mask, VKI_SIGBUS);
- VG_(sigdelset)(&mask, VKI_SIGFPE);
- VG_(sigdelset)(&mask, VKI_SIGILL);
-
- VG_(sigprocmask)(VKI_SIG_SETMASK, &mask, NULL);
-
- vg_assert(is_correct_sigmask());
-}
-
/* ---------------------------------------------------------------------
The signal simulation proper. A simplified version of what the
Linux kernel does.
/* Set up a stack frame (VgSigContext) for the client's signal
handler. */
-static
void vg_push_signal_frame ( ThreadId tid, const vki_siginfo_t *siginfo )
{
Addr esp_top_of_frame;
ThreadState* tst;
Int sigNo = siginfo->si_signo;
- vg_assert(sigNo >= 1 && sigNo <= _VKI_NSIG);
+ vg_assert(sigNo >= 1 && sigNo <= VG_(max_signal));
vg_assert(VG_(is_valid_tid)(tid));
tst = & VG_(threads)[tid];
= (Addr)(tst->altstack.ss_sp) + tst->altstack.ss_size;
if (VG_(clo_trace_signals))
VG_(message)(Vg_DebugMsg,
- "delivering signal %d (%s) to thread %d: on ALT STACK",
- sigNo, signame(sigNo), tid );
+ "delivering signal %d (%s) to thread %d: on ALT STACK (%p-%p; %d bytes)",
+ sigNo, signame(sigNo), tid,
+ tst->altstack.ss_sp,
+ tst->altstack.ss_sp + tst->altstack.ss_size,
+ tst->altstack.ss_size );
/* Signal delivery to tools */
VG_TRACK( pre_deliver_signal, tid, sigNo, /*alt_stack*/True );
/* Signal delivery to tools */
VG_TRACK( pre_deliver_signal, tid, sigNo, /*alt_stack*/False );
}
+
+ vg_assert(vg_scss.scss_per_sig[sigNo].scss_handler != VKI_SIG_IGN);
+ vg_assert(vg_scss.scss_per_sig[sigNo].scss_handler != VKI_SIG_DFL);
+
+ /* This may fail if the client stack is busted; if that happens,
+ the whole process will exit rather than simply calling the
+ signal handler. */
VGA_(push_signal_frame)(tid, esp_top_of_frame, siginfo,
vg_scss.scss_per_sig[sigNo].scss_handler,
vg_scss.scss_per_sig[sigNo].scss_flags,
- &vg_scss.scss_per_sig[sigNo].scss_mask);
-}
-
-/* Clear the signal frame created by vg_push_signal_frame, restore the
- simulated machine state, and return the signal number that the
- frame was for. */
-static
-Int vg_pop_signal_frame ( ThreadId tid )
-{
- Int sigNo = VGA_(pop_signal_frame)(tid);
-
- VG_(proxy_setsigmask)(tid);
-
- /* Notify tools */
- VG_TRACK( post_deliver_signal, tid, sigNo );
-
- return sigNo;
+ &tst->sig_mask,
+ vg_scss.scss_per_sig[sigNo].scss_restorer);
}
-/* A handler is returning. Restore the machine state from the stacked
- VgSigContext and continue with whatever was going on before the
- handler ran. Returns the SA_RESTART syscall-restartability-status
- of the delivered signal. */
-
-Bool VG_(signal_returns) ( ThreadId tid )
-{
- Int sigNo;
-
- /* Pop the signal frame and restore tid's status to what it was
- before the signal was delivered. */
- sigNo = vg_pop_signal_frame(tid);
-
- vg_assert(sigNo >= 1 && sigNo <= _VKI_NSIG);
-
- /* Scheduler now can resume this thread, or perhaps some other.
- Tell the scheduler whether or not any syscall interrupted by
- this signal should be restarted, if possible, or no. This is
- only used for nanosleep; all other blocking syscalls are handled
- in VG_(deliver_signal)().
- */
- return
- (vg_scss.scss_per_sig[sigNo].scss_flags & VKI_SA_RESTART)
- ? True
- : False;
-}
-
static const Char *signame(Int sigNo)
{
static Char buf[10];
#undef S
case VKI_SIGRTMIN ... VKI_SIGRTMAX:
- VG_(sprintf)(buf, "SIGRT%d", sigNo);
+ VG_(sprintf)(buf, "SIGRT%d", sigNo-VKI_SIGRTMIN);
return buf;
default:
VG_(sigaction)(sigNo, &sa, &origsa);
- VG_(sigfillset)(&mask);
- VG_(sigdelset)(&mask, sigNo);
- VG_(sigprocmask)(VKI_SIG_SETMASK, &mask, &origmask);
+ VG_(sigemptyset)(&mask);
+ VG_(sigaddset)(&mask, sigNo);
+ VG_(sigprocmask)(VKI_SIG_UNBLOCK, &mask, &origmask);
VG_(tkill)(VG_(getpid)(), sigNo);
/* If true, then this Segment may be mentioned in the core */
static Bool may_dump(const Segment *seg)
{
- return (seg->flags & SF_VALGRIND) == 0 && VG_(is_client_addr)(seg->addr);
+ return (seg->flags & (SF_DEVICE|SF_VALGRIND)) == 0 && VG_(is_client_addr)(seg->addr);
}
/* If true, then this Segment's contents will be in the core */
switch(tst->status) {
case VgTs_Runnable:
+ case VgTs_Yielding:
prpsinfo->pr_sname = 'R';
break;
- case VgTs_WaitJoinee:
- prpsinfo->pr_sname = 'Z';
- prpsinfo->pr_zomb = 1;
- break;
-
- case VgTs_WaitJoiner:
- case VgTs_WaitMX:
- case VgTs_WaitCV:
case VgTs_WaitSys:
- case VgTs_Sleeping:
prpsinfo->pr_sname = 'S';
break;
+ case VgTs_Zombie:
+ prpsinfo->pr_sname = 'Z';
+ break;
+
case VgTs_Empty:
- /* ? */
+ case VgTs_Init:
+ prpsinfo->pr_sname = '?';
break;
}
}
}
-static void fill_prstatus(ThreadState *tst, struct vki_elf_prstatus *prs, const vki_siginfo_t *si)
+static void fill_prstatus(const ThreadState *tst,
+ struct vki_elf_prstatus *prs,
+ const vki_siginfo_t *si)
{
struct vki_user_regs_struct *regs;
prs->pr_cursig = si->si_signo;
- prs->pr_pid = VG_(main_pid) + tst->tid; /* just to distinguish threads from each other */
+ prs->pr_pid = tst->os_state.lwpid;
prs->pr_ppid = 0;
- prs->pr_pgrp = VG_(main_pgrp);
- prs->pr_sid = VG_(main_pgrp);
+ prs->pr_pgrp = VG_(getpgrp)();
+ prs->pr_sid = VG_(getpgrp)();
regs = (struct vki_user_regs_struct *)prs->pr_reg;
Elf32_Ehdr ehdr;
Elf32_Phdr *phdrs;
Int num_phdrs;
- Int i;
+ Int i, idx;
UInt off;
struct note *notelist, *note;
UInt notesz;
for(;;) {
if (seq == 0)
VG_(sprintf)(buf, "%s%s.pid%d",
- basename, coreext, VG_(main_pid));
+ basename, coreext, VG_(getpid)());
else
VG_(sprintf)(buf, "%s%s.pid%d.%d",
- basename, coreext, VG_(main_pid), seq);
+ basename, coreext, VG_(getpid)(), seq);
seq++;
core_fd = VG_(open)(buf,
fill_ehdr(&ehdr, num_phdrs);
+ notelist = NULL;
+
/* Second, work out their layout */
phdrs = VG_(arena_malloc)(VG_AR_CORE, sizeof(*phdrs) * num_phdrs);
off = PGROUNDUP(off);
- for(seg = VG_(first_segment)(), i = 1;
+ for(seg = VG_(first_segment)(), idx = 1;
seg != NULL;
- seg = VG_(next_segment)(seg), i++) {
+ seg = VG_(next_segment)(seg)) {
if (!may_dump(seg))
continue;
- fill_phdr(&phdrs[i], seg, off, (seg->len + off) < max_size);
+ fill_phdr(&phdrs[idx], seg, off, (seg->len + off) < max_size);
- off += phdrs[i].p_filesz;
+ off += phdrs[idx].p_filesz;
+
+ idx++;
}
/* write everything out */
VG_(lseek)(core_fd, phdrs[1].p_offset, VKI_SEEK_SET);
- for(seg = VG_(first_segment)(), i = 1;
+ for(seg = VG_(first_segment)(), idx = 1;
seg != NULL;
- seg = VG_(next_segment)(seg), i++) {
+ seg = VG_(next_segment)(seg)) {
if (!should_dump(seg))
continue;
- vg_assert(VG_(lseek)(core_fd, 0, VKI_SEEK_CUR) == phdrs[i].p_offset);
- if (phdrs[i].p_filesz > 0)
- VG_(write)(core_fd, (void *)seg->addr, seg->len);
+ if (phdrs[idx].p_filesz > 0) {
+ Int ret;
+
+ vg_assert(VG_(lseek)(core_fd, phdrs[idx].p_offset, VKI_SEEK_SET) == phdrs[idx].p_offset);
+ vg_assert(seg->len >= phdrs[idx].p_filesz);
+
+ ret = VG_(write)(core_fd, (void *)seg->addr, phdrs[idx].p_filesz);
+ }
+ idx++;
}
VG_(close)(core_fd);
#endif
/*
- Perform the default action of a signal. Returns if the default
- action isn't fatal.
+ Perform the default action of a signal. If the signal is fatal, it
+ marks all threads as needing to exit, but it doesn't actually kill
+ the process or thread.
If we're not being quiet, then print out some more detail about
fatal signals (esp. core dumping signals).
static void vg_default_action(const vki_siginfo_t *info, ThreadId tid)
{
Int sigNo = info->si_signo;
- Bool terminate = False;
- Bool core = False;
+ Bool terminate = False; /* kills process */
+ Bool core = False; /* kills process w/ core */
+ struct vki_rlimit corelim;
+ Bool could_core;
+ vg_assert(VG_(is_running_thread)(tid));
+
switch(sigNo) {
case VKI_SIGQUIT: /* core */
case VKI_SIGILL: /* core */
vg_assert(!core || (core && terminate));
if (VG_(clo_trace_signals))
- VG_(message)(Vg_DebugMsg, "delivering %d to default handler %s%s",
- sigNo, terminate ? "terminate" : "", core ? "+core" : "");
+ VG_(message)(Vg_DebugMsg, "delivering %d (code %d) to default handler; action: %s%s",
+ sigNo, info->si_code, terminate ? "terminate" : "ignore", core ? "+core" : "");
- if (terminate) {
- struct vki_rlimit corelim;
- Bool could_core = core;
+ if (!terminate)
+ return; /* nothing to do */
- if (core) {
- /* If they set the core-size limit to zero, don't generate a
- core file */
+ could_core = core;
+
+ if (core) {
+ /* If they set the core-size limit to zero, don't generate a
+ core file */
- VG_(getrlimit)(VKI_RLIMIT_CORE, &corelim);
+ VG_(getrlimit)(VKI_RLIMIT_CORE, &corelim);
- if (corelim.rlim_cur == 0)
- core = False;
- }
+ if (corelim.rlim_cur == 0)
+ core = False;
+ }
- if (VG_(clo_verbosity) != 0 && (could_core || VG_(clo_verbosity) > 1)) {
- VG_(message)(Vg_UserMsg, "");
- VG_(message)(Vg_UserMsg, "Process terminating with default action of signal %d (%s)%s",
- sigNo, signame(sigNo), core ? ": dumping core" : "");
-
- /* Be helpful - decode some more details about this fault */
- if (info->si_code > VKI_SI_USER) {
- const Char *event = NULL;
-
- switch(sigNo) {
- case VKI_SIGSEGV:
- switch(info->si_code) {
- case 1: event = "Access not within mapped region"; break;
- case 2: event = "Bad permissions for mapped region"; break;
- }
+ if (VG_(clo_verbosity) > 1 || (could_core && info->si_code > VKI_SI_USER)) {
+ VG_(message)(Vg_UserMsg, "");
+ VG_(message)(Vg_UserMsg, "Process terminating with default action of signal %d (%s)%s",
+ sigNo, signame(sigNo), core ? ": dumping core" : "");
+
+ /* Be helpful - decode some more details about this fault */
+ if (info->si_code > VKI_SI_USER) {
+ const Char *event = NULL;
+ Bool haveaddr = True;
+
+ switch(sigNo) {
+ case VKI_SIGSEGV:
+ switch(info->si_code) {
+ case 1: event = "Access not within mapped region"; break;
+ case 2: event = "Bad permissions for mapped region"; break;
+ case 128:
+ /* General Protection Fault: The CPU/kernel
+ isn't telling us anything useful, but this
+ is commonly the result of exceeding a
+ segment limit, such as the one imposed by
+ --pointercheck=yes. */
+ if (VG_(clo_pointercheck))
+ event = "GPF (Pointer out of bounds?)";
+ else
+ event = "General Protection Fault";
+ haveaddr = False;
break;
+ }
+ break;
- case VKI_SIGILL:
- switch(info->si_code) {
- case 1: event = "Illegal opcode"; break;
- case 2: event = "Illegal operand"; break;
- case 3: event = "Illegal addressing mode"; break;
- case 4: event = "Illegal trap"; break;
- case 5: event = "Privileged opcode"; break;
- case 6: event = "Privileged register"; break;
- case 7: event = "Coprocessor error"; break;
- case 8: event = "Internal stack error"; break;
- }
- break;
+ case VKI_SIGILL:
+ switch(info->si_code) {
+ case 1: event = "Illegal opcode"; break;
+ case 2: event = "Illegal operand"; break;
+ case 3: event = "Illegal addressing mode"; break;
+ case 4: event = "Illegal trap"; break;
+ case 5: event = "Privileged opcode"; break;
+ case 6: event = "Privileged register"; break;
+ case 7: event = "Coprocessor error"; break;
+ case 8: event = "Internal stack error"; break;
+ }
+ break;
- case VKI_SIGFPE:
- switch (info->si_code) {
- case 1: event = "Integer divide by zero"; break;
- case 2: event = "Integer overflow"; break;
- case 3: event = "FP divide by zero"; break;
- case 4: event = "FP overflow"; break;
- case 5: event = "FP underflow"; break;
- case 6: event = "FP inexact"; break;
- case 7: event = "FP invalid operation"; break;
- case 8: event = "FP subscript out of range"; break;
- }
- break;
+ case VKI_SIGFPE:
+ switch (info->si_code) {
+ case 1: event = "Integer divide by zero"; break;
+ case 2: event = "Integer overflow"; break;
+ case 3: event = "FP divide by zero"; break;
+ case 4: event = "FP overflow"; break;
+ case 5: event = "FP underflow"; break;
+ case 6: event = "FP inexact"; break;
+ case 7: event = "FP invalid operation"; break;
+ case 8: event = "FP subscript out of range"; break;
+ }
+ break;
- case VKI_SIGBUS:
- switch (info->si_code) {
- case 1: event = "Invalid address alignment"; break;
- case 2: event = "Non-existent physical address"; break;
- case 3: event = "Hardware error"; break;
- }
- break;
+ case VKI_SIGBUS:
+ switch (info->si_code) {
+ case 1: event = "Invalid address alignment"; break;
+ case 2: event = "Non-existent physical address"; break;
+ case 3: event = "Hardware error"; break;
}
+ break;
+ }
- if (event != NULL)
+ if (event != NULL) {
+ if (haveaddr)
VG_(message)(Vg_UserMsg, " %s at address %p",
event, info->_sifields._sigfault._addr);
- }
-
- if (tid != VG_INVALID_THREADID) {
- ExeContext *ec = VG_(get_ExeContext)(tid);
- VG_(pp_ExeContext)(ec);
+ else
+ VG_(message)(Vg_UserMsg, " %s", event);
}
}
- if (VG_(is_action_requested)( "Attach to debugger", & VG_(clo_db_attach) )) {
- VG_(start_debugger)( tid );
+ if (tid != VG_INVALID_THREADID) {
+ ExeContext *ec = VG_(get_ExeContext)(tid);
+ VG_(pp_ExeContext)(ec);
}
+ }
+
+ if (VG_(is_action_requested)( "Attach to debugger", & VG_(clo_db_attach) )) {
+ VG_(start_debugger)( tid );
+ }
// See comment above about this temporary disabling of core dumps.
#if 0
- if (core) {
- static struct vki_rlimit zero = { 0, 0 };
+ if (core) {
+ static const struct vki_rlimit zero = { 0, 0 };
- make_coredump(tid, info, corelim.rlim_cur);
-
- /* make sure we don't get a confusing kernel-generated coredump */
- VG_(setrlimit)(VKI_RLIMIT_CORE, &zero);
- }
- #endif
+ make_coredump(tid, info, corelim.rlim_cur);
- VG_(scheduler_handle_fatal_signal)( sigNo );
+ /* Make sure we don't get a confusing kernel-generated
+ coredump when we finally exit */
+ VG_(setrlimit)(VKI_RLIMIT_CORE, &zero);
}
+ #endif
- VG_(kill_self)(sigNo);
+ /* stash fatal signal in main thread */
+ VG_(threads)[VG_(master_tid)].os_state.fatalsig = sigNo;
- vg_assert(!terminate);
+ /* everyone dies */
+ VG_(nuke_all_threads_except)(tid, VgSrc_FatalSig);
+ VG_(threads)[tid].exitreason = VgSrc_FatalSig;
+ VG_(threads)[tid].os_state.fatalsig = sigNo;
}
static void synth_fault_common(ThreadId tid, Addr addr, Int si_code)
info.si_code = si_code;
info._sifields._sigfault._addr = (void*)addr;
- VG_(resume_scheduler)(VKI_SIGSEGV, &info);
- VG_(deliver_signal)(tid, &info, False);
+ /* If they're trying to block the signal, force it to be delivered */
+ if (VG_(sigismember)(&VG_(threads)[tid].sig_mask, VKI_SIGSEGV))
+ VG_(set_default_handler)(VKI_SIGSEGV);
+
+ VG_(deliver_signal)(tid, &info);
}
// Synthesize a fault where the address is OK, but the page
info.si_code = 1; /* jrs: no idea what this should be */
info._sifields._sigfault._addr = (void*)addr;
- VG_(resume_scheduler)(VKI_SIGILL, &info);
- VG_(deliver_signal)(tid, &info, False);
+ VG_(resume_scheduler)(tid);
+ VG_(deliver_signal)(tid, &info);
}
+/*
+ This does the business of delivering a signal to a thread. It may
+ be called from either a real signal handler, or from normal code to
+ cause the thread to enter the signal handler.
+
+ This updates the thread state, but it does not set it to be
+ Runnable.
+*/
+void VG_(deliver_signal) ( ThreadId tid,
+ const vki_siginfo_t *info )
-void VG_(deliver_signal) ( ThreadId tid, const vki_siginfo_t *info, Bool async )
{
Int sigNo = info->si_signo;
- vki_sigset_t handlermask;
SCSS_Per_Signal *handler = &vg_scss.scss_per_sig[sigNo];
void *handler_fn;
ThreadState *tst = VG_(get_ThreadState)(tid);
if (VG_(clo_trace_signals))
- VG_(message)(Vg_DebugMsg,"delivering signal %d (%s) to thread %d",
- sigNo, signame(sigNo), tid );
-
- if (sigNo == VKI_SIGVGINT) {
- /* If this is a SIGVGINT, then we just ACK the signal and carry
- on; the application need never know about it (except for any
- effect on its syscalls). */
- vg_assert(async);
-
- if (tst->status == VgTs_WaitSys) {
- /* blocked in a syscall; we assume it should be interrupted */
- if (SYSCALL_RET(tst->arch) == -VKI_ERESTARTSYS)
- SYSCALL_RET(tst->arch) = -VKI_EINTR;
- }
-
- VG_(proxy_sigack)(tid, &tst->sig_mask);
+ VG_(message)(Vg_DebugMsg,"delivering signal %d (%s):%d to thread %d",
+ sigNo, signame(sigNo), info->si_code, tid );
+
+ if (sigNo == VKI_SIGVGKILL) {
+ /* If this is a SIGVGKILL, we're expecting it to interrupt any
+ blocked syscall. It doesn't matter whether the VCPU state is
+ set to restart or not, because we don't expect it will
+ execute any more client instructions. */
+ vg_assert(VG_(is_exiting)(tid));
return;
}
- /* If thread is currently blocked in a syscall, then resume as
- runnable. If the syscall needs restarting, tweak the machine
- state to make it happen. */
- if (tst->status == VgTs_WaitSys) {
- vg_assert(tst->syscallno != -1);
-
- /* OK, the thread was waiting for a syscall to complete. This
- means that the proxy has either not yet processed the
- RunSyscall request, or was processing it when the signal
- came. Either way, it is going to give us some syscall
- results right now, so wait for them to appear. This makes
- the thread runnable again, so we're in the right state to run
- the handler. We ask post_syscall to restart based on the
- client's sigaction flags. */
- if (0)
- VG_(printf)("signal %d interrupted syscall %d; restart=%d\n",
- sigNo, tst->syscallno, !!(handler->scss_flags & VKI_SA_RESTART));
- VG_(proxy_wait_sys)(tid, !!(handler->scss_flags & VKI_SA_RESTART));
- }
+ /* If the client specifies SIG_IGN, treat it as SIG_DFL.
- /* If the client specifies SIG_IGN, treat it as SIG_DFL */
+ If VG_(deliver_signal)() is being called on a thread, we want
+ the signal to get through no matter what; if they're ignoring
+ it, then we do this override (this is so we can send it SIGSEGV,
+ etc). */
handler_fn = handler->scss_handler;
- if (handler_fn == VKI_SIG_IGN)
+ if (handler_fn == VKI_SIG_IGN)
handler_fn = VKI_SIG_DFL;
vg_assert(handler_fn != VKI_SIG_IGN);
- if (sigNo == VKI_SIGCHLD && (handler->scss_flags & VKI_SA_NOCLDWAIT)) {
- //VG_(printf)("sigNo==SIGCHLD and app asked for NOCLDWAIT\n");
- vg_babyeater(sigNo, NULL, NULL);
- }
-
if (handler_fn == VKI_SIG_DFL) {
- handlermask = tst->sig_mask; /* no change to signal mask */
vg_default_action(info, tid);
} else {
/* Create a signal delivery frame, and set the client's %ESP and
%EIP so that when execution continues, we will enter the
signal handler with the frame on top of the client's stack,
- as it expects. */
+ as it expects.
+
+ Signal delivery can fail if the client stack is too small or
+ missing, and we can't push the frame. If that happens,
+ push_signal_frame will cause the whole process to exit when
+ we next hit the scheduler.
+ */
vg_assert(VG_(is_valid_tid)(tid));
+
vg_push_signal_frame ( tid, info );
if (handler->scss_flags & VKI_SA_ONESHOT) {
handle_SCSS_change( False /* lazy update */ );
}
-
- switch(tst->status) {
- case VgTs_Runnable:
- break;
- case VgTs_WaitSys:
- case VgTs_WaitJoiner:
- case VgTs_WaitJoinee:
- case VgTs_WaitMX:
- case VgTs_WaitCV:
- case VgTs_Sleeping:
- tst->status = VgTs_Runnable;
- break;
+ /* At this point:
+ tst->sig_mask is the current signal mask
+ tst->tmp_sig_mask is the same as sig_mask, unless we're in sigsuspend
+ handler->scss_mask is the mask set by the handler
- case VgTs_Empty:
- VG_(core_panic)("unexpected thread state");
- break;
+ Handler gets a mask of tmp_sig_mask|handler_mask|signo
+ */
+ tst->sig_mask = tst->tmp_sig_mask;
+ if (!(handler->scss_flags & VKI_SA_NOMASK)) {
+ VG_(sigaddset_from_set)(&tst->sig_mask, &handler->scss_mask);
+ VG_(sigaddset)(&tst->sig_mask, sigNo);
+
+ tst->tmp_sig_mask = tst->sig_mask;
}
+ }
+
+ /* Thread state is ready to go - just add Runnable */
+}
+
+/* Make a signal pending for a thread, for later delivery.
+ VG_(poll_signals) will arrange for it to be delivered at the right
+ time.
+
+   tid==0 means add it to the process-wide queue, and not send it to a
+ specific thread.
+*/
+void queue_signal(ThreadId tid, const vki_siginfo_t *si)
+{
+ ThreadState *tst;
+ SigQueue *sq;
+ vki_sigset_t savedmask;
- /* Clear the associated mx/cv information as we are no longer
- waiting on anything. The original details will be restored
- when the signal frame is popped. */
- tst->associated_mx = NULL;
- tst->associated_cv = NULL;
+ tst = VG_(get_ThreadState)(tid);
- /* handler gets the union of the signal's mask and the thread's
- mask */
- handlermask = handler->scss_mask;
- VG_(sigaddset_from_set)(&handlermask, &VG_(threads)[tid].sig_mask);
+ /* Protect the signal queue against async deliveries */
+ VG_(block_all_host_signals)(&savedmask);
- /* also mask this signal, unless they ask us not to */
- if (!(handler->scss_flags & VKI_SA_NOMASK))
- VG_(sigaddset)(&handlermask, sigNo);
+ if (tst->sig_queue == NULL) {
+ tst->sig_queue = VG_(arena_malloc)(VG_AR_CORE, sizeof(*tst->sig_queue));
+ VG_(memset)(tst->sig_queue, 0, sizeof(*tst->sig_queue));
}
+ sq = tst->sig_queue;
+
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "Queueing signal %d (idx %d) to thread %d",
+ si->si_signo, sq->next, tid);
+
+ /* Add signal to the queue. If the queue gets overrun, then old
+ queued signals may get lost.
+
+ XXX We should also keep a sigset of pending signals, so that at
+      least a non-siginfo signal gets delivered.
+ */
+ if (sq->sigs[sq->next].si_signo != 0)
+ VG_(message)(Vg_UserMsg, "Signal %d being dropped from thread %d's queue",
+ sq->sigs[sq->next].si_signo, tid);
- /* tell proxy we're about to start running the handler */
- if (async)
- VG_(proxy_sigack)(tid, &handlermask);
+ sq->sigs[sq->next] = *si;
+ sq->next = (sq->next+1) % N_QUEUED_SIGNALS;
+
+ VG_(restore_all_host_signals)(&savedmask);
}
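The queue added here is a fixed-size ring buffer of siginfo records: `next` points at the oldest slot (the next one to overwrite), enqueueing may drop an old entry on overrun, and delivery scans the ring from `next` and zeroes the slot. A minimal self-contained sketch of the same scheme, with an `int` signal number standing in for `vki_siginfo_t` and a single match value standing in for the sigset test:

```c
/* Sketch of the fixed-size signal ring buffer used by queue_signal()
   and next_queued().  Payload simplified to an int signal number. */
#include <assert.h>
#include <string.h>

#define N_QUEUED_SIGNALS 8

typedef struct {
    int sigs[N_QUEUED_SIGNALS]; /* 0 == empty slot */
    int next;                   /* oldest slot == next to overwrite */
} SigQueue;

/* Add a signal; on overrun the oldest queued entry is silently
   dropped, just as the patch warns. */
static void sq_queue(SigQueue *sq, int signo)
{
    sq->sigs[sq->next] = signo;
    sq->next = (sq->next + 1) % N_QUEUED_SIGNALS;
}

/* Return the oldest queued signal equal to 'match' (0 if none),
   scanning from 'next' around the ring like next_queued(), and
   mark the slot delivered by zeroing it. */
static int sq_next_queued(SigQueue *sq, int match)
{
    int idx = sq->next;
    do {
        if (sq->sigs[idx] != 0 && sq->sigs[idx] == match) {
            int s = sq->sigs[idx];
            sq->sigs[idx] = 0;
            return s;
        }
        idx = (idx + 1) % N_QUEUED_SIGNALS;
    } while (idx != sq->next);
    return 0;
}
```

Starting the scan at `next` rather than slot 0 is what makes delivery oldest-first.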
+/*
+ Returns the next queued signal for thread tid which is in "set".
+ tid==0 means process-wide signal. Set si_signo to 0 when the
+ signal has been delivered.
-/*
- If the client set the handler for SIGCHLD to SIG_IGN, then we need
- to automatically dezombie any dead children. Also used if the
- client set the SA_NOCLDWAIT on their SIGCHLD handler.
- */
-static
-void vg_babyeater ( Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
+ Must be called with all signals blocked, to protect against async
+ deliveries.
+*/
+static vki_siginfo_t *next_queued(ThreadId tid, const vki_sigset_t *set)
{
- Int status;
- Int pid;
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+ SigQueue *sq;
+ Int idx;
+ vki_siginfo_t *ret = NULL;
- vg_assert(sigNo == VKI_SIGCHLD);
+ sq = tst->sig_queue;
+ if (sq == NULL)
+ goto out;
+
+ idx = sq->next;
+ do {
+ if (0)
+ VG_(printf)("idx=%d si_signo=%d inset=%d\n", idx,
+ sq->sigs[idx].si_signo, VG_(sigismember)(set, sq->sigs[idx].si_signo));
- while((pid = VG_(waitpid)(-1, &status, VKI_WNOHANG)) > 0) {
- if (VG_(clo_trace_signals))
- VG_(message)(Vg_DebugMsg, "babyeater reaped %d", pid);
- }
+ if (sq->sigs[idx].si_signo != 0 && VG_(sigismember)(set, sq->sigs[idx].si_signo)) {
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "Returning queued signal %d (idx %d) for thread %d",
+ sq->sigs[idx].si_signo, idx, tid);
+ ret = &sq->sigs[idx];
+ goto out;
+ }
+
+ idx = (idx + 1) % N_QUEUED_SIGNALS;
+ } while(idx != sq->next);
+ out:
+ return ret;
}
/*
- Receive an async signal from the host.
-
- It being called in the context of a proxy LWP, and therefore is an
- async signal aimed at one of our threads. In this case, we pass
- the signal info to the main thread with VG_(proxy_handlesig)().
+ Receive an async signal from the kernel.
- This should *never* be in the context of the main LWP, because
- all signals for which this is the handler should be blocked there.
+ This should only happen when the thread is blocked in a syscall,
+ since that's the only time this set of signals is unblocked.
*/
static
void vg_async_signalhandler ( Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
{
- if (VG_(gettid)() == VG_(main_pid)) {
- VG_(printf)("got signal %d in LWP %d (%d)\n",
- sigNo, VG_(gettid)(), VG_(gettid)(), VG_(main_pid));
- vg_assert(VG_(sigismember)(&uc->uc_sigmask, sigNo));
- }
+ ThreadId tid = VG_(get_lwp_tid)(VG_(gettid)());
+ ThreadState *tst = VG_(get_ThreadState)(tid);
- vg_assert(VG_(gettid)() != VG_(main_pid));
+ vg_assert(tst->status == VgTs_WaitSys);
- VG_(proxy_handlesig)(info, UCONTEXT_INSTR_PTR(uc),
- UCONTEXT_SYSCALL_NUM(uc));
+ /* The thread isn't currently running, make it so before going on */
+ VG_(set_running)(tid);
+
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "Async handler got signal %d for tid %d info %d",
+ sigNo, tid, info->si_code);
+
+ /* Update thread state properly */
+ VGA_(interrupted_syscall)(tid, uc,
+ !!(vg_scss.scss_per_sig[sigNo].scss_flags & VKI_SA_RESTART));
+
+ /* Set up the thread's state to deliver a signal */
+ if (!VG_(is_sig_ign)(info->si_signo))
+ VG_(deliver_signal)(tid, info);
+
+ /* longjmp back to the thread's main loop to start executing the
+ handler. */
+ VG_(resume_scheduler)(tid);
+
+ VG_(core_panic)("vg_async_signalhandler: got unexpected signal while outside of scheduler");
}
+/* Extend the stack to cover addr. maxsize is the limit the stack can grow to.
+
+ Returns True on success, False on failure.
+
+ Succeeds without doing anything if addr is already within a segment.
+
+ Failure could be caused by:
+ - addr not below a growable segment
+ - new stack size would exceed maxsize
+ - mmap failed for some other reason
+ */
+Bool VG_(extend_stack)(Addr addr, UInt maxsize)
+{
+ Segment *seg;
+ Addr base;
+ UInt newsize;
+
+ /* Find the next Segment above addr */
+ seg = VG_(find_segment)(addr);
+ if (seg)
+ return True;
+
+ /* now we know addr is definitely unmapped */
+ seg = VG_(find_segment_above_unmapped)(addr);
+
+ /* If there isn't one, or it isn't growable, fail */
+ if (seg == NULL ||
+ !(seg->flags & SF_GROWDOWN) ||
+ VG_(seg_contains)(seg, addr, sizeof(void *)))
+ return False;
+
+ vg_assert(seg->addr > addr);
+
+ /* Create the mapping */
+ base = PGROUNDDN(addr);
+ newsize = seg->addr - base;
+
+ if (seg->len + newsize >= maxsize)
+ return False;
+
+ if (VG_(mmap)((Char *)base, newsize,
+ seg->prot,
+ VKI_MAP_PRIVATE | VKI_MAP_FIXED | VKI_MAP_ANONYMOUS | VKI_MAP_CLIENT,
+ seg->flags,
+ -1, 0) == (void *)-1)
+ return False;
+
+ if (0)
+ VG_(printf)("extended stack: %p %d\n",
+ base, newsize);
+
+ if (VG_(clo_sanity_level) > 2)
+ VG_(sanity_check_general)(False);
+
+ return True;
+}
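The size check in VG_(extend_stack) rounds the fault address down to a page boundary, computes how many bytes must be mapped below the existing segment, and refuses if the grown segment would reach maxsize. A sketch of just that arithmetic, with illustrative addresses (function name and values are not from the patch):

```c
/* Sketch of the grow-down size check in VG_(extend_stack). */
#include <assert.h>

#define PAGE_SIZE 4096u
#define PGROUNDDN(a) ((a) & ~(PAGE_SIZE - 1))

/* Returns 1 if a stack segment at [seg_addr, seg_addr+seg_len) may
   grow down to cover addr without exceeding maxsize, 0 otherwise. */
static int may_extend(unsigned addr, unsigned seg_addr,
                      unsigned seg_len, unsigned maxsize)
{
    unsigned base    = PGROUNDDN(addr);      /* new bottom of the stack */
    unsigned newsize = seg_addr - base;      /* bytes to map below it   */
    return seg_len + newsize < maxsize;      /* same limit test as above */
}
```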
+
+static void (*fault_catcher)(Int sig, Addr addr);
+
+void VG_(set_fault_catcher)(void (*catcher)(Int, Addr))
+{
+ if (catcher != NULL && fault_catcher != NULL)
+ VG_(core_panic)("Fault catcher is already registered");
+
+ fault_catcher = catcher;
+}
+
+
/*
Receive a sync signal from the host.
-
- This should always be called from the main thread, though it may be
- called in a proxy LWP if someone sends an async version of one of
- the sync signals.
*/
static
void vg_sync_signalhandler ( Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
{
- Int dummy_local;
+ ThreadId tid = VG_(get_lwp_tid)(VG_(gettid)());
vg_assert(info != NULL);
-
- if (VG_(clo_trace_signals)) {
- VG_(message)(Vg_DebugMsg, "");
- VG_(message)(Vg_DebugMsg, "signal %d arrived ... si_code = %d",
- sigNo, info->si_code );
- if (VG_(running_a_thread)()) {
- VG_(message)(Vg_DebugMsg, " running thread %d",
- VG_(get_current_tid)());
- } else {
- VG_(message)(Vg_DebugMsg, " not running a thread");
- }
- }
-
vg_assert(info->si_signo == sigNo);
vg_assert(sigNo == VKI_SIGSEGV ||
sigNo == VKI_SIGBUS ||
sigNo == VKI_SIGFPE ||
- sigNo == VKI_SIGILL);
+ sigNo == VKI_SIGILL ||
+ sigNo == VKI_SIGTRAP);
- if (VG_(gettid)() != VG_(main_pid)) {
- /* We were sent one of our sync signals in an async way (or the
- proxy LWP code has a bug) */
- vg_assert(info->si_code <= VKI_SI_USER);
+ if (info->si_code <= VKI_SI_USER) {
+ /* If some user-process sent us one of these signals (ie,
+ they're not the result of a faulting instruction), then treat
+ it as an async signal. This is tricky because we could get
+ this almost anywhere:
+         - while running generated client code
+ Action: queue signal and return
+ - while running Valgrind code
+ Action: queue signal and return
+ - while blocked in a syscall
+ Action: make thread runnable, queue signal, resume scheduler
+ */
+ if (VG_(threads)[tid].status == VgTs_WaitSys) {
+ /* Since this signal interrupted a syscall, it means the
+ client's signal mask was applied, so we can't get here
+ unless the client wants this signal right now. This means
+ we can simply use the async_signalhandler. */
+ vg_async_signalhandler(sigNo, info, uc);
+ VG_(core_panic)("vg_async_signalhandler returned!?\n");
+ }
- VG_(proxy_handlesig)(info, UCONTEXT_INSTR_PTR(uc),
- UCONTEXT_SYSCALL_NUM(uc));
- return;
- }
+ if (info->_sifields._kill._pid == 0) {
+ /* There's a per-user limit of pending siginfo signals. If
+ you exceed this, by having more than that number of
+ pending signals with siginfo, then new signals are
+ delivered without siginfo. This condition can be caused
+ by any unrelated program you're running at the same time
+ as Valgrind, if it has a large number of pending siginfo
+ signals which it isn't taking delivery of.
+
+ Since we depend on siginfo to work out why we were sent a
+ signal and what we should do about it, we really can't
+ continue unless we get it. */
+ VG_(message)(Vg_UserMsg, "Signal %d (%s) appears to have lost its siginfo; I can't go on.",
+ sigNo, signame(sigNo));
+ VG_(message)(Vg_UserMsg, " This may be because one of your programs has consumed your");
+ VG_(message)(Vg_UserMsg, " ration of siginfo structures.");
+
+ /* It's a fatal signal, so we force the default handler. */
+ VG_(set_default_handler)(sigNo);
+ VG_(deliver_signal)(tid, info);
+ VG_(resume_scheduler)(tid);
+ VG_(exit)(99); /* If we can't resume, then just exit */
+ }
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "Routing user-sent sync signal %d via queue",
+ sigNo);
+
+ /* Since every thread has these signals unblocked, we can't rely
+ on the kernel to route them properly, so we need to queue
+ them manually. */
+ if (info->si_code == VKI_SI_TKILL)
+ queue_signal(tid, info); /* directed to us specifically */
+ else
+ queue_signal(0, info); /* shared pending */
- /*
- if (sigNo == VKI_SIGUSR1) {
- VG_(printf)("YOWZA! SIGUSR1\n\n");
- VG_(clo_trace_pthread_level) = 2;
- VG_(clo_trace_sched) = True;
- VG_(clo_trace_syscalls) = True;
- VG_(clo_trace_signals) = True;
return;
- }
- */
-
- vg_assert(sigNo >= 1 && sigNo <= _VKI_NSIG);
+ }
- /* Sanity check. Ensure we're really running on the signal stack
- we asked for. */
- if (!(
- ((Char*)(&(sigstack[0])) <= (Char*)(&dummy_local))
- &&
- ((Char*)(&dummy_local) < (Char*)(&(sigstack[VG_SIGSTACK_SIZE_W])))
- )
- ) {
- VG_(message)(Vg_DebugMsg,
- "FATAL: signal delivered on the wrong stack?!");
- VG_(message)(Vg_DebugMsg,
- "A possible workaround follows. Please tell me");
- VG_(message)(Vg_DebugMsg,
- "(jseward@acm.org) if the suggested workaround doesn't help.");
- VG_(unimplemented)
- ("support for progs compiled with -p/-pg; "
- "rebuild your prog without -p/-pg");
+ if (VG_(clo_trace_signals)) {
+ VG_(message)(Vg_DebugMsg, "signal %d arrived ... si_code=%d, EIP=%p, eip=%p",
+ sigNo, info->si_code,
+ INSTR_PTR(VG_(threads)[tid].arch),
+ UCONTEXT_INSTR_PTR(uc) );
}
-
- vg_assert((Char*)(&(sigstack[0])) <= (Char*)(&dummy_local));
- vg_assert((Char*)(&dummy_local) < (Char*)(&(sigstack[VG_SIGSTACK_SIZE_W])));
+ vg_assert(sigNo >= 1 && sigNo <= VG_(max_signal));
/* Special fault-handling case. We can now get signals which can
act upon and immediately restart the faulting instruction.
*/
if (info->si_signo == VKI_SIGSEGV) {
- /* HACK */
- //ThreadId tid = VG_(running_a_thread)() ? VG_(get_current_tid)() : 1;
- ThreadId tid = VG_(get_current_tid)();
- /* end HACK */
-
Addr fault = (Addr)info->_sifields._sigfault._addr;
Addr esp = STACK_PTR(VG_(threads)[tid].arch);
- Segment *seg;
+ Segment* seg;
seg = VG_(find_segment)(fault);
if (seg == NULL)
seg = VG_(find_segment_above_unmapped)(fault);
- // if (seg != NULL)
- // seg = VG_(next_segment)(seg);
- //else
- // seg = VG_(first_segment)();
-
if (VG_(clo_trace_signals)) {
if (seg == NULL)
VG_(message)(Vg_DebugMsg,
- "SIGSEGV: si_code=%d faultaddr=%p tid=%d esp=%p seg=NULL shad=%p-%p",
+ "SIGSEGV: si_code=%d faultaddr=%p tid=%d ESP=%p seg=NULL shad=%p-%p",
info->si_code, fault, tid, esp,
VG_(shadow_base), VG_(shadow_end));
else
VG_(message)(Vg_DebugMsg,
- "SIGSEGV: si_code=%d faultaddr=%p tid=%d esp=%p seg=%p-%p fl=%x shad=%p-%p",
+ "SIGSEGV: si_code=%d faultaddr=%p tid=%d ESP=%p seg=%p-%p fl=%x shad=%p-%p",
info->si_code, fault, tid, esp, seg->addr, seg->addr+seg->len, seg->flags,
VG_(shadow_base), VG_(shadow_end));
}
-
- if (info->si_code == 1 && /* SEGV_MAPERR */
- seg != NULL &&
- fault >= (esp - ARCH_STACK_REDZONE_SIZE) &&
- fault < seg->addr &&
- (seg->flags & SF_GROWDOWN)) {
+ if (info->si_code == 1 /* SEGV_MAPERR */
+ && fault >= (esp - ARCH_STACK_REDZONE_SIZE)) {
/* If the fault address is above esp but below the current known
stack segment base, and it was a fault because there was
nothing mapped there (as opposed to a permissions fault),
then extend the stack segment.
*/
- Addr base = PGROUNDDN(esp - ARCH_STACK_REDZONE_SIZE);
- if (seg->len + (seg->addr - base) <= VG_(threads)[tid].stack_size &&
- (void*)-1 != VG_(mmap)((Char *)base, seg->addr - base,
- VKI_PROT_READ|VKI_PROT_WRITE|VKI_PROT_EXEC,
- VKI_MAP_PRIVATE|VKI_MAP_FIXED|VKI_MAP_ANONYMOUS|VKI_MAP_CLIENT,
- SF_STACK|SF_GROWDOWN,
- -1, 0))
- {
- return; // extension succeeded, restart instruction
- }
- /* Otherwise fall into normal signal handling */
+ Addr base = PGROUNDDN(esp - ARCH_STACK_REDZONE_SIZE);
+ if (VG_(extend_stack)(base, VG_(threads)[tid].stack_size)) {
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg,
+ " -> extended stack base to %p", PGROUNDDN(fault));
+ return; // extension succeeded, restart instruction
+ } else
+ VG_(message)(Vg_UserMsg, "Stack overflow in thread %d: can't grow stack to %p",
+ tid, fault);
+
+ /* Fall into normal signal handling for all other cases */
} else if (info->si_code == 2 && /* SEGV_ACCERR */
VG_(needs).shadow_memory &&
VG_(is_shadow_addr)(fault)) {
recursion--;
}
}
+ }
+
+ /* OK, this is a signal we really have to deal with. If it came
+ from the client's code, then we can jump back into the scheduler
+ and have it delivered. Otherwise it's a Valgrind bug. */
+ {
+ Addr context_ip;
+ Char buf[1024];
+ ThreadState *tst = VG_(get_ThreadState)(VG_(get_lwp_tid)(VG_(gettid)()));
- if (info->si_code == 1 && /* SEGV_MAPERR */
- seg != NULL &&
- fault >= esp &&
- fault < seg->addr &&
- (seg->flags & SF_STACK)) {
- VG_(message)(Vg_UserMsg, "Stack overflow in thread %d", tid);
+ if (VG_(sigismember)(&tst->sig_mask, sigNo)) {
+ /* signal is blocked, but they're not allowed to block faults */
+ VG_(set_default_handler)(sigNo);
}
- }
- /* Can't continue; must longjmp back to the scheduler and thus
- enter the sighandler immediately. */
- VG_(resume_scheduler)(sigNo, info);
+ if (!VG_(my_fault)) {
+ /* Can't continue; must longjmp back to the scheduler and thus
+ enter the sighandler immediately. */
+ VG_(deliver_signal)(tid, info);
+ VG_(resume_scheduler)(tid);
+ }
- if (info->si_code <= VKI_SI_USER) {
- /*
- OK, one of sync signals was sent from user-mode, so try to
- deliver it to someone who cares. Just add it to the
- process-wide pending signal set - signal routing will deliver
- it to someone eventually.
-
- The only other place which touches proc_pending is
- VG_(route_signals), and it has signals blocked while doing
- so, so there's no race.
- */
- VG_(message)(Vg_DebugMsg,
- "adding signal %d to pending set", sigNo);
- VG_(sigaddset)(&proc_pending, sigNo);
- } else {
- /*
- A bad signal came from the kernel (indicating an instruction
- generated it), but there was no jumpbuf set up. This means
- it was actually generated by Valgrind internally.
- */
- Addr context_ip = UCONTEXT_INSTR_PTR(uc);
- Char buf[1024];
+ /* Check to see if someone is interested in faults. */
+ if (fault_catcher) {
+ (*fault_catcher)(sigNo, (Addr)info->_sifields._sigfault._addr);
+ /* If the catcher returns, then it didn't handle the fault,
+            so carry on panicking. */
+ }
+
+      /* If resume_scheduler returns or it's our fault, it means we
+ don't have longjmp set up, implying that we weren't running
+ client code, and therefore it was actually generated by
+ Valgrind internally.
+ */
VG_(message)(Vg_DebugMsg,
"INTERNAL ERROR: Valgrind received a signal %d (%s) - exiting",
sigNo, signame(sigNo));
buf[0] = 0;
+ context_ip = UCONTEXT_INSTR_PTR(uc);
if (1 && !VG_(get_fnname)(context_ip, buf+2, sizeof(buf)-5)) {
Int len;
VG_(message)(Vg_DebugMsg,
"si_code=%x Fault EIP: %p%s; Faulting address: %p",
info->si_code, context_ip, buf, info->_sifields._sigfault._addr);
+ VG_(message)(Vg_DebugMsg,
+ " esp=%p\n", uc->uc_mcontext.esp);
if (0)
VG_(kill_self)(sigNo); /* generate a core dump */
+
+ tst = VG_(get_ThreadState)(VG_(get_lwp_tid)(VG_(gettid)()));
VG_(core_panic_at)("Killed by fatal signal",
VG_(get_ExeContext2)(UCONTEXT_INSTR_PTR(uc),
UCONTEXT_FRAME_PTR(uc),
UCONTEXT_STACK_PTR(uc),
- VG_(valgrind_last)));
+ (Addr)(tst->os_state.stack + tst->os_state.stacksize)));
}
}
/*
- This signal handler exists only so that the scheduler thread can
- poke the LWP to make it fall out of whatever syscall it is in.
- Used for thread termination and cancellation.
+ Kill this thread. Makes it leave any syscall it might be currently
+ blocked in, and return to the scheduler. This doesn't mark the thread
+ as exiting; that's the caller's job.
*/
-static void proxy_sigvg_handler(int signo, vki_siginfo_t *si, struct vki_ucontext *uc)
+static void sigvgkill_handler(int signo, vki_siginfo_t *si, struct vki_ucontext *uc)
{
- vg_assert(signo == VKI_SIGVGINT || signo == VKI_SIGVGKILL);
+ ThreadId tid = VG_(get_lwp_tid)(VG_(gettid)());
+
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "sigvgkill for lwp %d tid %d", VG_(gettid)(), tid);
+
+ vg_assert(signo == VKI_SIGVGKILL);
vg_assert(si->si_signo == signo);
+ vg_assert(VG_(threads)[tid].status == VgTs_WaitSys);
- /* only pay attention to it if it came from the scheduler */
- if (si->si_code == VKI_SI_TKILL &&
- si->_sifields._kill._pid == VG_(main_pid)) {
- vg_assert(si->si_code == VKI_SI_TKILL);
- vg_assert(si->_sifields._kill._pid == VG_(main_pid));
-
- VG_(proxy_handlesig)(si, UCONTEXT_INSTR_PTR(uc),
- UCONTEXT_SYSCALL_NUM(uc));
- }
-}
+ VG_(set_running)(tid);
+ VG_(post_syscall)(tid);
+ VG_(resume_scheduler)(tid);
-/* The outer insn loop calls here to reenable a host signal if
- vg_oursighandler longjmp'd.
-*/
-void VG_(unblock_host_signal) ( Int sigNo )
-{
- vg_assert(sigNo == VKI_SIGSEGV ||
- sigNo == VKI_SIGBUS ||
- sigNo == VKI_SIGILL ||
- sigNo == VKI_SIGFPE);
- set_main_sigmask();
+ VG_(core_panic)("sigvgkill_handler couldn't return to the scheduler\n");
}
-
static __attribute((unused))
void pp_vg_ksigaction ( struct vki_sigaction* sa )
{
VG_(printf)("vg_ksigaction: handler %p, flags 0x%x, restorer %p\n",
sa->ksa_handler, (UInt)sa->sa_flags, sa->sa_restorer);
VG_(printf)("vg_ksigaction: { ");
- for (i = 1; i <= _VKI_NSIG; i++)
+ for (i = 1; i <= VG_(max_signal); i++)
if (VG_(sigismember(&(sa->sa_mask),i)))
VG_(printf)("%d ", i);
VG_(printf)("}\n");
}
/*
- In pre-2.6 kernels, the kernel didn't distribute signals to threads
- in a thread-group properly, so we need to do it here.
+ Force signal handler to default
*/
-void VG_(route_signals)(void)
+void VG_(set_default_handler)(Int signo)
+{
+ struct vki_sigaction sa;
+
+ sa.ksa_handler = VKI_SIG_DFL;
+ sa.sa_flags = 0;
+ sa.sa_restorer = 0;
+ VG_(sigemptyset)(&sa.sa_mask);
+
+ VG_(do_sys_sigaction)(signo, &sa, NULL);
+}
+
+/*
+ Poll for pending signals, and set the next one up for delivery.
+ */
+void VG_(poll_signals)(ThreadId tid)
{
static const struct vki_timespec zero = { 0, 0 };
- static ThreadId start_tid = 1; /* tid to start scanning from */
- vki_sigset_t set;
- vki_siginfo_t siset[_VKI_NSIG];
- vki_siginfo_t si;
- Int sigNo;
+ vki_siginfo_t si, *sip;
+ vki_sigset_t pollset;
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+ Int i;
+ vki_sigset_t saved_mask;
- vg_assert(VG_(gettid)() == VG_(main_pid));
- vg_assert(is_correct_sigmask());
+ /* look for all the signals this thread isn't blocking */
+ for(i = 0; i < _VKI_NSIG_WORDS; i++)
+ pollset.sig[i] = ~tst->sig_mask.sig[i];
- if (!VG_(do_signal_routing))
- return;
+ VG_(sigdelset)(&pollset, VKI_SIGVGCHLD); /* already dealt with */
+
+ //VG_(printf)("tid %d pollset=%08x%08x\n", tid, pollset.sig[1], pollset.sig[0]);
- /* get the scheduler LWP's signal mask, and use it as the set of
- signals we're polling for - also block all signals to prevent
- races */
- VG_(block_all_host_signals) ( &set );
+ VG_(block_all_host_signals)(&saved_mask); // protect signal queue
- /* grab any pending signals and add them to the pending signal set */
- while(VG_(sigtimedwait)(&set, &si, &zero) > 0) {
- VG_(sigaddset)(&proc_pending, si.si_signo);
- siset[si.si_signo] = si;
+ /* First look for any queued pending signals */
+ sip = next_queued(tid, &pollset); /* this thread */
+
+ if (sip == NULL)
+ sip = next_queued(0, &pollset); /* process-wide */
+
+ /* If there was nothing queued, ask the kernel for a pending signal */
+ if (sip == NULL && VG_(sigtimedwait)(&pollset, &si, &zero) > 0) {
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "poll_signals: got signal %d for thread %d", si.si_signo, tid);
+ sip = &si;
}
- /* transfer signals from the process pending set to a particular
- thread which has it unblocked */
- for(sigNo = 0; sigNo < _VKI_NSIG; sigNo++) {
- ThreadId tid;
- ThreadId end_tid;
- Int target = -1;
-
- if (!VG_(sigismember)(&proc_pending, sigNo))
- continue;
+ if (sip != NULL) {
+ /* OK, something to do; deliver it */
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "Polling found signal %d for tid %d",
+ sip->si_signo, tid);
+ if (!VG_(is_sig_ign)(sip->si_signo))
+ VG_(deliver_signal)(tid, sip);
+ else if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, " signal %d ignored", sip->si_signo);
+
+ sip->si_signo = 0; /* remove from signal queue, if that's
+ where it came from */
+ }
- end_tid = start_tid - 1;
- if (end_tid < 0 || end_tid >= VG_N_THREADS)
- end_tid = VG_N_THREADS-1;
+ VG_(restore_all_host_signals)(&saved_mask);
+}
- /* look for a suitable thread to deliver it to */
- for(tid = start_tid;
- tid != end_tid;
- tid = (tid + 1) % VG_N_THREADS) {
- ThreadState *tst = &VG_(threads)[tid];
+/* Set the standard set of blocked signals, used whenever we're not
+ running a client syscall. */
+void VG_(block_signals)(ThreadId tid)
+{
+ vki_sigset_t mask;
- if (tst->status == VgTs_Empty)
- continue;
+ VG_(sigfillset)(&mask);
- if (!VG_(sigismember)(&tst->sig_mask, sigNo)) {
- vg_assert(tst->proxy != NULL);
- target = tid;
- start_tid = tid;
- break;
- }
- }
-
- /* found one - deliver it and be done */
- if (target != -1) {
- ThreadState *tst = &VG_(threads)[target];
- if (VG_(clo_trace_signals))
- VG_(message)(Vg_DebugMsg, "Routing signal %d to tid %d",
- sigNo, tid);
- tst->sigqueue[tst->sigqueue_head] = siset[sigNo];
- tst->sigqueue_head = (tst->sigqueue_head + 1) % VG_N_SIGNALQUEUE;
- vg_assert(tst->sigqueue_head != tst->sigqueue_tail);
- VG_(proxy_sendsig)(VG_INVALID_THREADID/*from*/,
- target/*to*/, sigNo);
- VG_(sigdelset)(&proc_pending, sigNo);
- }
- }
+ /* Don't block these because they're synchronous */
+ VG_(sigdelset)(&mask, VKI_SIGSEGV);
+ VG_(sigdelset)(&mask, VKI_SIGBUS);
+ VG_(sigdelset)(&mask, VKI_SIGFPE);
+ VG_(sigdelset)(&mask, VKI_SIGILL);
+ VG_(sigdelset)(&mask, VKI_SIGTRAP);
+
+ /* Can't block these anyway */
+ VG_(sigdelset)(&mask, VKI_SIGSTOP);
+ VG_(sigdelset)(&mask, VKI_SIGKILL);
+
+ /* Master doesn't block this */
+ if (tid == VG_(master_tid))
+ VG_(sigdelset)(&mask, VKI_SIGVGCHLD);
- /* restore signal mask */
- VG_(restore_all_host_signals) (&set);
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &mask, NULL);
}
/* At startup, copy the process' real signal state to the SCSS.
{
Int i, ret;
vki_sigset_t saved_procmask;
- vki_stack_t altstack_info;
struct vki_sigaction sa;
/* VG_(printf)("SIGSTARTUP\n"); */
*/
VG_(block_all_host_signals)( &saved_procmask );
- /* clear process-wide pending signal set */
- VG_(sigemptyset)(&proc_pending);
-
- /* Set the signal mask which the scheduler LWP should maintain from
- now on. */
- set_main_sigmask();
-
/* Copy per-signal settings to SCSS. */
for (i = 1; i <= _VKI_NSIG; i++) {
-
/* Get the old host action */
ret = VG_(sigaction)(i, NULL, &sa);
- vg_assert(ret == 0);
- if (VG_(clo_trace_signals))
+ if (ret != 0)
+ break;
+
+ /* Try setting it back to see if this signal is really
+ available */
+ if (i >= VKI_SIGRTMIN) {
+ struct vki_sigaction tsa;
+
+ tsa.ksa_handler = (void *)vg_sync_signalhandler;
+ tsa.sa_flags = VKI_SA_SIGINFO;
+ tsa.sa_restorer = 0;
+ VG_(sigfillset)(&tsa.sa_mask);
+
+ /* try setting it to some arbitrary handler */
+ if (VG_(sigaction)(i, &tsa, NULL) != 0) {
+ /* failed - not really usable */
+ break;
+ }
+
+ ret = VG_(sigaction)(i, &sa, NULL);
+ vg_assert(ret == 0);
+ }
+
+ VG_(max_signal) = i;
+
+ if (VG_(clo_trace_signals) && VG_(clo_verbosity) > 2)
VG_(printf)("snaffling handler 0x%x for signal %d\n",
(Addr)(sa.ksa_handler), i );
vg_scss.scss_per_sig[i].scss_restorer = sa.sa_restorer;
}
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "Max kernel-supported signal is %d", VG_(max_signal));
+
/* Our private internal signals are treated as ignored */
- vg_scss.scss_per_sig[VKI_SIGVGINT].scss_handler = VKI_SIG_IGN;
- vg_scss.scss_per_sig[VKI_SIGVGINT].scss_flags = VKI_SA_SIGINFO;
- VG_(sigfillset)(&vg_scss.scss_per_sig[VKI_SIGVGINT].scss_mask);
+ vg_scss.scss_per_sig[VKI_SIGVGCHLD].scss_handler = VKI_SIG_IGN;
+ vg_scss.scss_per_sig[VKI_SIGVGCHLD].scss_flags = VKI_SA_SIGINFO;
+ VG_(sigfillset)(&vg_scss.scss_per_sig[VKI_SIGVGCHLD].scss_mask);
+
vg_scss.scss_per_sig[VKI_SIGVGKILL].scss_handler = VKI_SIG_IGN;
vg_scss.scss_per_sig[VKI_SIGVGKILL].scss_flags = VKI_SA_SIGINFO;
VG_(sigfillset)(&vg_scss.scss_per_sig[VKI_SIGVGKILL].scss_mask);
/* Copy the process' signal mask into the root thread. */
- vg_assert(VG_(threads)[1].status == VgTs_Runnable);
- VG_(threads)[1].sig_mask = saved_procmask;
- VG_(proxy_setsigmask)(1);
-
- /* Register an alternative stack for our own signal handler to run on. */
- altstack_info.ss_sp = &(sigstack[0]);
- altstack_info.ss_size = sizeof(sigstack);
- altstack_info.ss_flags = 0;
- ret = VG_(sigaltstack)(&altstack_info, NULL);
- if (ret != 0) {
- VG_(core_panic)(
- "vg_sigstartup_actions: couldn't install alternative sigstack");
- }
- if (VG_(clo_trace_signals)) {
- VG_(message)(Vg_DebugExtraMsg,
- "vg_sigstartup_actions: sigstack installed ok");
- }
-
- /* DEBUGGING HACK */
- /* VG_(signal)(VKI_SIGUSR1, &VG_(oursignalhandler)); */
+ vg_assert(VG_(threads)[VG_(master_tid)].status == VgTs_Init);
+ VG_(threads)[VG_(master_tid)].sig_mask = saved_procmask;
+ VG_(threads)[VG_(master_tid)].tmp_sig_mask = saved_procmask;
/* Calculate SKSS and apply it. This also sets the initial kernel
mask we need to run with. */
handle_SCSS_change( True /* forced update */ );
+ /* Leave with all signals still blocked; the thread scheduler loop
+ will set the appropriate mask at the appropriate time. */
}
/*--------------------------------------------------------------------*/
This file is part of Valgrind, a dynamic binary instrumentation
framework.
- Copyright (C) 2002-2004 Nicholas Nethercote
- njn25@cam.ac.uk
+ Copyright (C) 2002-2004 Jeremy Fitzhardinge
+ jeremy@goop.org
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
return n;
}
-void *VG_(SkipList_Find)(const SkipList *l, void *k)
+/* Return list element which is <= k, or NULL if there is none. */
+void *VG_(SkipList_Find_Before)(const SkipList *l, void *k)
{
SkipNode *n = SkipList__Find(l, k, NULL);
return NULL;
}
+/* Return the list element which == k, or NULL if none */
+void *VG_(SkipList_Find_Exact)(const SkipList *l, void *k)
+{
+ SkipNode *n = SkipList__Find(l, k, NULL);
+
+ if (n != NULL && (l->cmp)(key_of_node(l, n), k) == 0)
+ return data_of_node(l, n);
+ return NULL;
+}
+
+/* Return the list element which is >= k, or NULL if none */
+void *VG_(SkipList_Find_After)(const SkipList *l, void *k)
+{
+ SkipNode *n = SkipList__Find(l, k, NULL);
+
+ if (n != NULL && (l->cmp)(key_of_node(l, n), k) < 0)
+ n = n->next[0];
+
+ if (n != NULL)
+ return data_of_node(l, n);
+
+ return NULL;
+}
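The three lookup flavours added above differ only in how a missing key is treated: Find_Before returns the element <= k, Find_Exact only an exact match, Find_After the element >= k. The same semantics over a plain sorted array, as a self-contained sketch (linear scans here, unlike the skiplist's O(log n) search; returns an index, -1 if none):

```c
/* Sketch of the <= / == / >= lookup semantics of the SkipList_Find_*
   family, over a sorted int array. */
#include <assert.h>

static int find_before(const int *a, int n, int k)   /* largest elem <= k */
{
    int i, best = -1;
    for (i = 0; i < n && a[i] <= k; i++)
        best = i;
    return best;
}

static int find_exact(const int *a, int n, int k)    /* elem == k only */
{
    int i = find_before(a, n, k);
    return (i >= 0 && a[i] == k) ? i : -1;
}

static int find_after(const int *a, int n, int k)    /* smallest elem >= k */
{
    int i;
    for (i = 0; i < n; i++)
        if (a[i] >= k)
            return i;
    return -1;
}
```

Note that Find_Exact is just Find_Before plus an equality check, which is also how the patch implements it on top of SkipList__Find.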
+
void VG_(SkipList_Insert)(SkipList *l, void *data)
{
SkipNode *update[SK_MAXHEIGHT];
return data_of_node(l, n);
}
+
+void VG_(SkipList_for_each_node)(const SkipList *l,
+ void (*fn)(void *node, void *arg), void *arg)
+{
+ void *n;
+
+ for(n = VG_(SkipNode_First)(l);
+ n != NULL;
+ n = VG_(SkipNode_Next)(l, n))
+ (*fn)(n, arg);
+}
+
+
+/* --------------------------------------------------
+ Comparison functions
+ -------------------------------------------------- */
+Int VG_(cmp_Int)(const void *v1, const void *v2)
+{
+ Int a = *(const Int *)v1;
+ Int b = *(const Int *)v2;
+
+ if (a < b)
+ return -1;
+ if (a == b)
+ return 0;
+ return 1;
+}
+
+Int VG_(cmp_UInt)(const void *v1, const void *v2)
+{
+ UInt a = *(const UInt *)v1;
+ UInt b = *(const UInt *)v2;
+
+ if (a < b)
+ return -1;
+ if (a == b)
+ return 0;
+ return 1;
+}
+
+Int VG_(cmp_Addr)(const void *v1, const void *v2)
+{
+ Addr a = *(const Addr *)v1;
+ Addr b = *(const Addr *)v2;
+
+ if (a < b)
+ return -1;
+ if (a == b)
+ return 0;
+ return 1;
+}
+
+Int VG_(cmp_string)(const void *v1, const void *v2)
+{
+ const Char *a = *(const Char **)v1;
+ const Char *b = *(const Char **)v2;
+
+ return VG_(strcmp)(a, b);
+}
+
+
+
/*--------------------------------------------------------------------*/
/*--- Management of symbols and debugging information. ---*/
/*--- vg_symtab2.c ---*/
#include <elf.h> /* ELF defns */
+static SegInfo* segInfo = NULL;
+
/*------------------------------------------------------------*/
/*--- 32/64-bit parameterisation ---*/
/*------------------------------------------------------------*/
/* Two symbols have the same address. Which name do we prefer?
- In general we prefer the longer name, but if the choice is between
- __libc_X and X, then choose X (similarly with __GI__ and __
- prefixes).
+ The general rule is to prefer the shorter symbol name. If the
+   symbol contains a '@', which means it's versioned, then the length
+ up to the '@' is used for length comparison purposes (so
+ "foo@GLIBC_2.4.2" is considered shorter than "foobar"), but if two
+ symbols have the same length, the one with the version string is
+ preferred. If all else fails, use alphabetical ordering.
*/
static RiSym *prefersym(RiSym *a, RiSym *b)
{
- Int pfx;
- Int lena, lenb;
- Int i;
- static const struct {
- const Char *prefix;
- Int len;
- } prefixes[] = {
-#define PFX(x) { x, sizeof(x)-1 }
- /* order from longest to shortest */
- PFX("__GI___libc_"),
- PFX("__GI___"),
- PFX("__libc_"),
- PFX("__GI__"),
- PFX("__GI_"),
- PFX("__"),
-#undef PFX
- };
-
- lena = VG_(strlen)(a->name);
- lenb = VG_(strlen)(b->name);
-
- /* rearrange so that a is the long one */
- if (lena < lenb) {
- RiSym *t;
- Int lt;
-
- t = a;
- a = b;
- b = t;
-
- lt = lena;
- lena = lenb;
- lenb = lt;
- }
+ Int lena, lenb; /* full length */
+ Int vlena, vlenb; /* length without version */
+ const Char *vpa, *vpb;
- /* Ugh. If we get a "free", always choose it. This is because
- normally, this function would choose "cfree" over free. cfree is
- an alias for free. If there's any more symbols like this, we may
- want to consider making this mechanism more generic.
- */
- if(VG_(strcmp)(a->name, "free") == 0)
- return a;
+ vlena = lena = VG_(strlen)(a->name);
+ vlenb = lenb = VG_(strlen)(b->name);
- if(VG_(strcmp)(b->name, "free") == 0)
- return b;
+ vpa = VG_(strchr)(a->name, '@');
+ vpb = VG_(strchr)(b->name, '@');
- for(i = pfx = 0; i < sizeof(prefixes)/sizeof(*prefixes); i++) {
- Int pfxlen = prefixes[i].len;
+ if (vpa)
+ vlena = vpa - a->name;
+ if (vpb)
+ vlenb = vpb - b->name;
- if (pfxlen < lena &&
- VG_(memcmp)(a->name, prefixes[i].prefix, pfxlen) == 0) {
- pfx = pfxlen;
- break;
- }
- }
+ /* Select the shortest unversioned name */
+ if (vlena < vlenb)
+ return a;
+ else if (vlenb < vlena)
+ return b;
- if (pfx != 0 && VG_(strcmp)(a->name + pfx, b->name) == 0)
+ /* Equal lengths; select the versioned name */
+ if (vpa && !vpb)
+ return a;
+ if (vpb && !vpa)
return b;
- return a;
+ /* Either both versioned or neither is versioned; select them
+ alphabetically */
+ if (VG_(strcmp)(a->name, b->name) < 0)
+ return a;
+ else
+ return b;
}
static
if(VG_(strncmp)(si->symtab[i].name, VG_INTERCEPT_PREFIX,
VG_INTERCEPT_PREFIX_LEN) == 0) {
int len = VG_(strlen)(si->symtab[i].name);
- char *buf = VG_(malloc)(len), *colon;
+ char *buf = VG_(arena_malloc)(VG_AR_SYMTAB, len), *colon;
intercept_demangle(si->symtab[i].name, buf, len);
colon = buf + VG_(strlen)(buf) - 1;
while(*colon != ':') colon--;
VG_(strncpy_safely)(si->symtab[i].name, colon+1, len);
+ VG_(arena_free)(VG_AR_SYMTAB, buf);
}
}
return True;
}
-// Forward declaration
-static void add_redirect_addr(const Char *from_lib, const Char *from_sym,
- Addr to_addr);
-
static
void handle_intercept( SegInfo* si, Char* symbol, ElfXX_Sym* sym)
{
- int len = VG_(strlen)(symbol) + 1 - VG_INTERCEPT_PREFIX_LEN;
- char *lib = VG_(malloc)(len);
+ Int len = VG_(strlen)(symbol) + 1 - VG_INTERCEPT_PREFIX_LEN;
+ Char *lib = VG_(arena_malloc)(VG_AR_SYMTAB, len);
Char *func;
   intercept_demangle(symbol, lib, len);
+   func = lib + VG_(strlen)(lib) - 1;
   while(*func != ':') func--;
*func = '\0';
- add_redirect_addr(lib, func+1, si->offset + sym->st_value);
- VG_(free)(lib);
+ VG_(add_redirect_addr)(lib, func+1, si->offset + sym->st_value);
+ VG_(arena_free)(VG_AR_SYMTAB, lib);
}
-static
-void handle_wrapper( SegInfo* si, Char* symbol, ElfXX_Sym* sym)
+Bool VG_(resolve_redir_allsegs)(CodeRedirect *redir)
{
- VG_(intercept_libc_freeres_wrapper)((Addr)(si->offset + sym->st_value));
+ SegInfo *si;
+
+ for(si = segInfo; si != NULL; si = si->next)
+ if (VG_(resolve_redir)(redir, si))
+ return True;
+
+ return False;
}
+//static
+//void handle_wrapper( SegInfo* si, Char* symbol, ElfXX_Sym* sym)
+//{
+// if (VG_(strcmp)(symbol, STR(VG_WRAPPER(freeres))) == 0)
+// VGA_(intercept_libc_freeres_wrapper)((Addr)(si->offset + sym->st_value));
+// else if (VG_(strcmp)(symbol, STR(VG_WRAPPER(pthread_startfunc_wrapper))) == 0)
+// VG_(pthread_startfunc_wrapper)((Addr)(si->offset + sym->st_value));
+//}
+
/* Read a symbol table (normal or dynamic) */
static
void read_symtab( SegInfo* si, Char* tab_name, Bool do_intercepts,
VG_INTERCEPT_PREFIX,
VG_INTERCEPT_PREFIX_LEN) == 0) {
handle_intercept(si, (Char*)o_strtab+sym->st_name, sym);
- } else if (VG_(strncmp)((Char*)o_strtab+sym->st_name,
- VG_WRAPPER_PREFIX,
- VG_WRAPPER_PREFIX_LEN) == 0) {
- handle_wrapper(si, (Char*)o_strtab+sym->st_name, sym);
- }
+ }
+ //else if (VG_(strncmp)((Char*)o_strtab+sym->st_name,
+ // VG_WRAPPER_PREFIX,
+ // VG_WRAPPER_PREFIX_LEN) == 0) {
+ // handle_wrapper(si, (Char*)o_strtab+sym->st_name, sym);
+ //}
}
/* Figure out if we're interested in the symbol.
Int fd;
struct vki_stat stat_buf;
Addr addr;
+ UInt calccrc;
if ((fd = VG_(open)(name, VKI_O_RDONLY, 0)) < 0)
return 0;
return 0;
}
+ if (VG_(clo_verbosity) > 1)
+ VG_(message)(Vg_UserMsg, "Reading debug info from %s...", name);
+
*size = stat_buf.st_size;
if ((addr = (Addr)VG_(mmap)(NULL, *size, VKI_PROT_READ,
VG_(close)(fd);
- if (calc_gnu_debuglink_crc32(0, (UChar*)addr, *size) != crc) {
+ calccrc = calc_gnu_debuglink_crc32(0, (UChar*)addr, *size);
+ if (calccrc != crc) {
int res = VG_(munmap)((void*)addr, *size);
vg_assert(0 == res);
+ if (VG_(clo_verbosity) > 1)
+ VG_(message)(Vg_UserMsg, "... CRC mismatch (computed %08x wanted %08x)", calccrc, crc);
return 0;
}
static
Addr find_debug_file( Char* objpath, Char* debugname, UInt crc, UInt* size )
{
- Char *objdir = VG_(strdup)(objpath);
+ Char *objdir = VG_(arena_strdup)(VG_AR_SYMTAB, objpath);
Char *objdirptr;
Char *debugpath;
Addr addr = 0;
if ((objdirptr = VG_(strrchr)(objdir, '/')) != NULL)
*objdirptr = '\0';
- debugpath = VG_(malloc)(VG_(strlen)(objdir) + VG_(strlen)(debugname) + 16);
+ debugpath = VG_(arena_malloc)(VG_AR_SYMTAB, VG_(strlen)(objdir) + VG_(strlen)(debugname) + 16);
VG_(sprintf)(debugpath, "%s/%s", objdir, debugname);
}
}
- VG_(free)(debugpath);
- VG_(free)(objdir);
+ VG_(arena_free)(VG_AR_SYMTAB, debugpath);
+ VG_(arena_free)(VG_AR_SYMTAB, objdir);
return addr;
}
si->start, si->start+si->size, si->size,
si->start+newsz, newsz);
- for(seg = VG_(find_segment)(si->start);
+ for(seg = VG_(find_segment_containing)(si->start);
seg != NULL && VG_(seg_overlaps)(seg, si->start, si->size);
seg = VG_(next_segment)(seg)) {
if (seg->symtab == si)
/* Did we find a debuglink section? */
if (debuglink != NULL) {
- UInt crc_offset = (VG_(strlen)(debuglink) + 4) & ~3;
+ UInt crc_offset = ROUNDUP(VG_(strlen)(debuglink)+1, 4);
UInt crc;
vg_assert(crc_offset + sizeof(UInt) <= debuglink_sz);
address ranges, and as a result the SegInfos in this list describe
disjoint address ranges.
*/
-static SegInfo* segInfo = NULL;
-
-static void resolve_seg_redirs(SegInfo *si);
-
SegInfo *VG_(read_seg_symbols) ( Segment *seg )
{
SegInfo* si;
canonicaliseScopetab ( si );
/* do redirects */
- resolve_seg_redirs( si );
+ VG_(resolve_seg_redirs)( si );
}
VGP_POPCC(VgpReadSyms);
table is designed we have no option but to do a complete linear
scan of the table. Returns NULL if not found. */
-static Addr reverse_search_one_symtab ( const SegInfo* si,
- const Char* name )
+Addr VG_(reverse_search_one_symtab) ( const SegInfo* si, const Char* name )
{
UInt i;
for (i = 0; i < si->symtab_used; i++) {
s = VG_(find_segment)(ptr);
- if (s == NULL || !VG_(seg_overlaps)(s, ptr, 0) || s->symtab == NULL)
+ if (s == NULL || s->symtab == NULL)
goto not_found;
si = s->symtab;
}
-/*------------------------------------------------------------*/
-/*--- General purpose redirection. ---*/
-/*------------------------------------------------------------*/
-
-/* Set to True for debug printing. */
-static const Bool verbose_redir = False;
-
-
-/* resolved redirections, indexed by from_addr */
-typedef struct _CodeRedirect {
- const Char *from_lib; /* library qualifier pattern */
- const Char *from_sym; /* symbol */
- Addr from_addr; /* old addr */
-
- const Char *to_lib; /* library qualifier pattern */
- const Char *to_sym; /* symbol */
- Addr to_addr; /* new addr */
-
- struct _CodeRedirect *next; /* next pointer on unresolved list */
-} CodeRedirect;
-
-static Int addrcmp(const void *ap, const void *bp)
-{
- Addr a = *(Addr *)ap;
- Addr b = *(Addr *)bp;
- Int ret;
-
- if (a == b)
- ret = 0;
- else
- ret = (a < b) ? -1 : 1;
-
- return ret;
-}
-
-static Char *straddr(void *p)
-{
- static Char buf[16];
-
- VG_(sprintf)(buf, "%p", *(Addr *)p);
-
- return buf;
-}
-
-static SkipList sk_resolved_redir = SKIPLIST_INIT(CodeRedirect, from_addr,
- addrcmp, straddr, VG_AR_SYMTAB);
-static CodeRedirect *unresolved_redir = NULL;
-
-static Bool match_lib(const Char *pattern, const SegInfo *si)
-{
- /* pattern == NULL matches everything, otherwise use globbing
-
- If the pattern starts with:
- file:, then match filename
- soname:, then match soname
- something else, match filename
- */
- const Char *name = si->filename;
-
- if (pattern == NULL)
- return True;
-
- if (VG_(strncmp)(pattern, "file:", 5) == 0) {
- pattern += 5;
- name = si->filename;
- }
- if (VG_(strncmp)(pattern, "soname:", 7) == 0) {
- pattern += 7;
- name = si->soname;
- }
-
- if (name == NULL)
- return False;
-
- return VG_(string_match)(pattern, name);
-}
-
-/* Resolve a redir using si if possible, and add it to the resolved
- list */
-static Bool resolve_redir(CodeRedirect *redir, const SegInfo *si)
-{
- Bool resolved;
-
- vg_assert(si != NULL);
- vg_assert(si->seg != NULL);
-
- /* no redirection from Valgrind segments */
- if (si->seg->flags & SF_VALGRIND)
- return False;
-
- resolved = (redir->from_addr != 0) && (redir->to_addr != 0);
-
- if (0 && verbose_redir)
- VG_(printf)(" consider FROM binding %s:%s -> %s:%s in %s(%s)\n",
- redir->from_lib, redir->from_sym,
- redir->to_lib, redir->to_sym,
- si->filename, si->soname);
-
- vg_assert(!resolved);
-
- if (redir->from_addr == 0) {
- vg_assert(redir->from_sym != NULL);
-
- if (match_lib(redir->from_lib, si)) {
- redir->from_addr = reverse_search_one_symtab(si, redir->from_sym);
- if (verbose_redir && redir->from_addr != 0)
- VG_(printf)(" bind FROM: %p = %s:%s\n",
- redir->from_addr,redir->from_lib, redir->from_sym );
- }
- }
-
- if (redir->to_addr == 0) {
- vg_assert(redir->to_sym != NULL);
-
- if (match_lib(redir->to_lib, si)) {
- redir->to_addr = reverse_search_one_symtab(si, redir->to_sym);
- if (verbose_redir && redir->to_addr != 0)
- VG_(printf)(" bind TO: %p = %s:%s\n",
- redir->to_addr,redir->to_lib, redir->to_sym );
-
- }
- }
-
- resolved = (redir->from_addr != 0) && (redir->to_addr != 0);
-
- if (0 && verbose_redir)
- VG_(printf)("resolve_redir: %s:%s from=%p %s:%s to=%p\n",
- redir->from_lib, redir->from_sym, redir->from_addr,
- redir->to_lib, redir->to_sym, redir->to_addr);
-
- if (resolved) {
- if (VG_(clo_verbosity) > 2 || verbose_redir) {
- VG_(message)(Vg_DebugMsg, " redir resolved (%s:%s=%p -> ",
- redir->from_lib, redir->from_sym, redir->from_addr);
- VG_(message)(Vg_DebugMsg, " %s:%s=%p)",
- redir->to_lib, redir->to_sym, redir->to_addr);
- }
-
- if (VG_(search_transtab)(NULL, redir->from_addr, False)) {
- /* For some given (from, to) redir, the "from" function got
- called before the .so containing "to" became available. We
- know this because there is already a translation for the
- entry point of the original "from". So the redirect will
- never actually take effect unless that translation is
- discarded.
-
- Note, we only really need to discard the first bb of the
- old entry point, and so we avoid the problem of having to
- figure out how big that bb was -- since it is at least 1
- byte of original code, we can just pass 1 as the original
- size to invalidate_translations() and it will indeed get
- rid of the translation.
-
- Note, this is potentially expensive -- discarding
- translations causes complete unchaining.
- */
- if (VG_(clo_verbosity) > 2) {
- VG_(message)(Vg_UserMsg,
- "Discarding translation due to redirect of already called function" );
- VG_(message)(Vg_UserMsg,
- " %s (%p -> %p)",
- redir->from_sym, redir->from_addr, redir->to_addr );
- }
- VG_(discard_translations)(redir->from_addr, 1);
- }
-
- VG_(SkipList_Insert)(&sk_resolved_redir, redir);
- }
-
- return resolved;
-}
-
-/* Go through the complete redir list, resolving as much as possible with this SegInfo.
-
- This should be called when a new SegInfo symtab is loaded.
- */
-static void resolve_seg_redirs(SegInfo *si)
-{
- CodeRedirect **prevp = &unresolved_redir;
- CodeRedirect *redir, *next;
-
- if (verbose_redir)
- VG_(printf)("Considering redirs to/from %s(soname=%s)\n",
- si->filename, si->soname);
-
- /* visit each unresolved redir - if it becomes resolved, then
- remove it from the unresolved list */
- for(redir = unresolved_redir; redir != NULL; redir = next) {
- next = redir->next;
-
- if (resolve_redir(redir, si)) {
- *prevp = next;
- redir->next = NULL;
- } else
- prevp = &redir->next;
- }
-}
-
-static Bool resolve_redir_allsegs(CodeRedirect *redir)
-{
- SegInfo *si;
-
- for(si = segInfo; si != NULL; si = si->next)
- if (resolve_redir(redir, si))
- return True;
-
- return False;
-}
-
-/* Redirect a lib/symbol reference to a function at lib/symbol */
-static void add_redirect_sym(const Char *from_lib, const Char *from_sym,
- const Char *to_lib, const Char *to_sym)
-{
- CodeRedirect *redir = VG_(SkipNode_Alloc)(&sk_resolved_redir);
-
- redir->from_lib = VG_(arena_strdup)(VG_AR_SYMTAB, from_lib);
- redir->from_sym = VG_(arena_strdup)(VG_AR_SYMTAB, from_sym);
- redir->from_addr = 0;
-
- redir->to_lib = VG_(arena_strdup)(VG_AR_SYMTAB, to_lib);
- redir->to_sym = VG_(arena_strdup)(VG_AR_SYMTAB, to_sym);
- redir->to_addr = 0;
-
- if (0||VG_(clo_verbosity) > 2)
- VG_(message)(Vg_UserMsg,
- "REDIRECT %s(%s) to %s(%s)",
- from_lib, from_sym, to_lib, to_sym);
-
- if (!resolve_redir_allsegs(redir)) {
- /* can't resolve immediately; add to list */
- redir->next = unresolved_redir;
- unresolved_redir = redir;
- }
-}
-
-/* Redirect a lib/symbol reference to an addr */
-static void add_redirect_addr(const Char *from_lib, const Char *from_sym,
- Addr to_addr)
-{
- CodeRedirect *redir = VG_(SkipNode_Alloc)(&sk_resolved_redir);
-
- redir->from_lib = VG_(arena_strdup)(VG_AR_SYMTAB, from_lib);
- redir->from_sym = VG_(arena_strdup)(VG_AR_SYMTAB, from_sym);
- redir->from_addr = 0;
-
- redir->to_lib = NULL;
- redir->to_sym = NULL;
- redir->to_addr = to_addr;
-
- if (0||VG_(clo_verbosity) > 2)
- VG_(message)(Vg_UserMsg,
- "REDIRECT %s(%s) to %p",
- from_lib, from_sym, to_addr);
-
- if (!resolve_redir_allsegs(redir)) {
- /* can't resolve immediately; add to list */
- redir->next = unresolved_redir;
- unresolved_redir = redir;
- }
-}
-
-
-/* HACK! This should be done properly (see ~/NOTES.txt) */
-#ifdef __amd64__
-/* Rerouted entry points for __NR_vgettimeofday and __NR_vtime.
- 96 == __NR_gettimeofday
- 201 == __NR_time
-*/
-asm(
-"amd64_linux_rerouted__vgettimeofday:\n"
-" movq $96, %rax\n"
-" syscall\n"
-" ret\n"
-"amd64_linux_rerouted__vtime:\n"
-" movq $201, %rax\n"
-" syscall\n"
-" ret\n"
-);
-#endif
-
-Addr VG_(code_redirect)(Addr a)
-{
- CodeRedirect *r = VG_(SkipList_Find)(&sk_resolved_redir, &a);
-
-#ifdef __amd64__
- /* HACK. Reroute the amd64-linux vsyscalls. This should be moved
- out of here into an amd64-linux specific initialisation routine.
- */
- extern void amd64_linux_rerouted__vgettimeofday;
- extern void amd64_linux_rerouted__vtime;
- if (a == 0xFFFFFFFFFF600000ULL)
- return (Addr)&amd64_linux_rerouted__vgettimeofday;
- if (a == 0xFFFFFFFFFF600400ULL)
- return (Addr)&amd64_linux_rerouted__vtime;
-#endif
-
- if (r == NULL || r->from_addr != a)
- return a;
-
- vg_assert(r->to_addr != 0);
-
- return r->to_addr;
-}
-
-void VG_(setup_code_redirect_table) ( void )
-{
- static const struct {
- const Char *from, *to;
- } redirects[] = {
- { "__GI___errno_location", "__errno_location" },
- { "__errno_location", "__errno_location" },
- { "__GI___h_errno_location", "__h_errno_location" },
- { "__h_errno_location", "__h_errno_location" },
- { "__GI___res_state", "__res_state" },
- { "__res_state", "__res_state" },
- };
- Int i;
-
- for(i = 0; i < sizeof(redirects)/sizeof(*redirects); i++) {
- add_redirect_sym("soname:libc.so.6", redirects[i].from,
- "soname:libpthread.so.0", redirects[i].to);
- }
-
-// XXX: what architectures is this necessary for? x86 yes, PPC no, others ?
-#ifdef __x86__
- /* Redirect _dl_sysinfo_int80, which is glibc's default system call
- routine, to the routine in our trampoline page so that the
- special sysinfo unwind hack in vg_execontext.c will kick in.
- */
- add_redirect_addr("soname:ld-linux.so.2", "_dl_sysinfo_int80",
- VG_(client_trampoline_code)+VG_(tramp_syscall_offset));
-#endif
-
- /* Overenthusiastic use of PLT bypassing by the glibc people also
- means we need to patch the following functions to our own
- implementations of said, in mac_replace_strmem.c.
- */
- add_redirect_sym("soname:libc.so.6", "stpcpy",
- "*vgpreload_memcheck.so*", "stpcpy");
-
- add_redirect_sym("soname:libc.so.6", "strlen",
- "*vgpreload_memcheck.so*", "strlen");
-
- add_redirect_sym("soname:libc.so.6", "strnlen",
- "*vgpreload_memcheck.so*", "strnlen");
-
- add_redirect_sym("soname:ld-linux.so.2", "stpcpy",
- "*vgpreload_memcheck.so*", "stpcpy");
-
- add_redirect_sym("soname:libc.so.6", "strchr",
- "*vgpreload_memcheck.so*", "strchr");
- add_redirect_sym("soname:ld-linux.so.2", "strchr",
- "*vgpreload_memcheck.so*", "strchr");
-
- add_redirect_sym("soname:libc.so.6", "strchrnul",
- "*vgpreload_memcheck.so*", "glibc232_strchrnul");
-
- add_redirect_sym("soname:libc.so.6", "rawmemchr",
- "*vgpreload_memcheck.so*", "glibc232_rawmemchr");
-}
-
/*------------------------------------------------------------*/
/*--- SegInfo accessor functions ---*/
/*------------------------------------------------------------*/
}
}
-static void bprintf(void (*send)(Char), const Char *fmt, ...)
+static void bprintf(void (*send)(Char, void*), void *send_arg, const Char *fmt, ...)
{
va_list vargs;
va_start(vargs, fmt);
- VG_(vprintf)(send, fmt, vargs);
+ VG_(vprintf)(send, fmt, vargs, send_arg);
va_end(vargs);
}
static UInt describe_addr_bufsz;
/* Add a character to the result buffer */
-static void describe_addr_addbuf(Char c) {
+static void describe_addr_addbuf(Char c,void *p) {
if ((describe_addr_bufidx+1) >= describe_addr_bufsz) {
Char *n;
if (debug)
VG_(printf)(" non-followable array (sz=%d): checking addr %p in range %p-%p\n",
sz, addr, var->valuep, (var->valuep + sz));
- if (addr >= var->valuep && addr <= (var->valuep + sz))
+ if (ty->size > 0 && addr >= var->valuep && addr <= (var->valuep + sz))
min = max = (addr - var->valuep) / ty->size;
else
break;
found->container->name = NULL;
found->container = found->container->container;
} else {
- bprintf(describe_addr_addbuf, "&(");
+ bprintf(describe_addr_addbuf, 0, "&(");
ptr = False;
}
*ep++ = '\0';
- bprintf(describe_addr_addbuf, sp);
+ bprintf(describe_addr_addbuf, 0, sp);
if (addr != found->valuep)
- bprintf(describe_addr_addbuf, "+%d", addr - found->valuep);
+ bprintf(describe_addr_addbuf, 0, "+%d", addr - found->valuep);
if (VG_(get_filename_linenum)(eip, file, sizeof(file), &line))
- bprintf(describe_addr_addbuf, " at %s:%d", file, line, addr);
+ bprintf(describe_addr_addbuf, 0, " at %s:%d", file, line, addr);
}
}
n_atfork++;
}
-static void do_atfork_pre(ThreadId tid)
+void VG_(do_atfork_pre)(ThreadId tid)
{
Int i;
(*atforks[i].pre)(tid);
}
-static void do_atfork_parent(ThreadId tid)
+void VG_(do_atfork_parent)(ThreadId tid)
{
Int i;
(*atforks[i].parent)(tid);
}
-static void do_atfork_child(ThreadId tid)
+void VG_(do_atfork_child)(ThreadId tid)
{
Int i;
(Addr)msg->msg_control, msg->msg_controllen );
}
-void check_cmsg_for_fds(ThreadId tid, struct vki_msghdr *msg)
+static void check_cmsg_for_fds(ThreadId tid, struct vki_msghdr *msg)
{
struct vki_cmsghdr *cm = VKI_CMSG_FIRSTHDR(msg);
// Combine two 32-bit values into a 64-bit value
#define LOHI64(lo,hi) ( (lo) | ((ULong)(hi) << 32) )
-PRE(sys_exit_group, Special)
-{
- VG_(core_panic)("syscall exit_group() not caught by the scheduler?!");
-}
+//PRE(sys_exit_group, Special)
+//{
+// VG_(core_panic)("syscall exit_group() not caught by the scheduler?!");
+//}
PRE(sys_exit, Special)
{
VG_(core_panic)("syscall exit() not caught by the scheduler?!");
}
-PRE(sys_sched_yield, Special)
+PRE(sys_sched_yield, MayBlock)
{
- VG_(core_panic)("syscall sched_yield() not caught by the scheduler?!");
+ PRINT("sched_yield()");
+ PRE_REG_READ0(long, "sched_yield");
}
PRE(sys_ni_syscall, Special)
SET_RESULT( -VKI_ENOSYS );
}
-PRE(sys_set_tid_address, Special)
+PRE(sys_set_tid_address, 0)
{
- // We don't let this syscall run, and don't do anything to simulate it
- // ourselves -- it becomes a no-op! Why? Tom says:
- //
- // I suspect this is deliberate given that all the user level threads
- // are running in the same kernel thread under valgrind so we probably
- // don't want to be calling the actual system call here.
- //
- // Hmm.
PRINT("sys_set_tid_address ( %p )", ARG1);
PRE_REG_READ1(long, "set_tid_address", int *, tidptr);
}
PRE_MEM_READ( "putpmsg(data)", (Addr)data->buf, data->len);
}
-PRE(sys_getitimer, NBRunInLWP)
+PRE(sys_getitimer, 0)
{
PRINT("sys_getitimer ( %d, %p )", ARG1, ARG2);
PRE_REG_READ2(long, "getitimer", int, which, struct itimerval *, value);
}
}
-PRE(sys_setitimer, NBRunInLWP)
+PRE(sys_setitimer, 0)
{
PRINT("sys_setitimer ( %d, %p, %p )", ARG1,ARG2,ARG3);
PRE_REG_READ3(long, "setitimer",
}
// Pre_read a char** argument.
-void pre_argv_envp(Addr a, ThreadId tid, Char* s1, Char* s2)
+static void pre_argv_envp(Addr a, ThreadId tid, Char* s1, Char* s2)
{
while (True) {
Addr a_deref;
// but it seems to work nonetheless...
PRE(sys_execve, Special)
{
+ Char *path; /* path to executable */
+
PRINT("sys_execve ( %p(%s), %p, %p )", ARG1, ARG1, ARG2, ARG3);
PRE_REG_READ3(vki_off_t, "execve",
char *, filename, char **, argv, char **, envp);
if (ARG3 != 0)
pre_argv_envp( ARG3, tid, "execve(envp)", "execve(envp[i])" );
+ path = (Char *)ARG1;
+
/* Erk. If the exec fails, then the following will have made a
mess of things which makes it hard for us to continue. The
right thing to do is piece everything together again in
/* Resistance is futile. Nuke all other threads. POSIX mandates
this. (Really, nuke them all, since the new process will make
its own new thread.) */
- VG_(nuke_all_threads_except)( VG_INVALID_THREADID );
+ VG_(master_tid) = tid; /* become the master */
+ VG_(nuke_all_threads_except)( tid, VgSrc_ExitSyscall );
+ VGA_(reap_threads)(tid);
+
+ if (0) {
+ /* Shut down cleanly and report final state
+ XXX Is this reasonable? */
+ tst->exitreason = VgSrc_ExitSyscall;
+ VG_(shutdown_actions)(tid);
+ }
{
// Remove the valgrind-specific stuff from the environment so the
- // child doesn't get our libpthread and other stuff. This is
+ // child doesn't get vg_inject.so, vgpreload.so, etc. This is
// done unconditionally, since if we are tracing the child,
// stage1/2 will set up the appropriate client environment.
Char** envp = (Char**)ARG3;
if (envp != NULL) {
- VG_(env_remove_valgrind_env_stuff)( envp );
+ VG_(env_remove_valgrind_env_stuff)( envp );
}
}
VG_(printf)("env: %s\n", *cpp);
}
- /* Set our real sigmask to match the client's sigmask so that the
- exec'd child will get the right mask. First we need to clear
- out any pending signals so they they don't get delivered, which
- would confuse things.
+ /* restore the DATA rlimit for the child */
+ VG_(setrlimit)(VKI_RLIMIT_DATA, &VG_(client_rlimit_data));
+
+ /*
+ Set the signal state up for exec.
+
+ We need to set the real signal state to make sure the exec'd
+ process gets SIG_IGN properly.
+
+ Also set our real sigmask to match the client's sigmask so that
+ the exec'd child will get the right mask. First we need to
+ clear out any pending signals so that they don't get delivered,
+ which would confuse things.
XXX This is a bug - the signals should remain pending, and be
delivered to the new process after exec. There's also a
vki_sigset_t allsigs;
vki_siginfo_t info;
static const struct vki_timespec zero = { 0, 0 };
-
+ Int i;
+
+ for(i = 1; i < VG_(max_signal); i++) {
+ struct vki_sigaction sa;
+ VG_(do_sys_sigaction)(i, NULL, &sa);
+ if (sa.ksa_handler == VKI_SIG_IGN)
+ VG_(sigaction)(i, &sa, NULL);
+ else {
+ sa.ksa_handler = VKI_SIG_DFL;
+ VG_(sigaction)(i, &sa, NULL);
+ }
+ }
+
VG_(sigfillset)(&allsigs);
while(VG_(sigtimedwait)(&allsigs, &info, &zero) > 0)
- ;
+ ;
VG_(sigprocmask)(VKI_SIG_SETMASK, &tst->sig_mask, NULL);
}
- /* restore the DATA rlimit for the child */
- VG_(setrlimit)(VKI_RLIMIT_DATA, &VG_(client_rlimit_data));
-
- SET_RESULT( VG_(do_syscall3)(__NR_execve, ARG1, ARG2, ARG3) );
+ SET_RESULT( VG_(do_syscall3)(__NR_execve, (UWord)path, ARG2, ARG3) );
- /* If we got here, then the execve failed. We've already made too much of a mess
- of ourselves to continue, so we have to abort. */
+ /* If we got here, then the execve failed. We've already made too
+ much of a mess of ourselves to continue, so we have to abort. */
VG_(message)(Vg_UserMsg, "execve(%p(%s), %p, %p) failed, errno %d",
- ARG1, ARG1, ARG2, ARG3, -RES);
- VG_(core_panic)("EXEC FAILED: I can't recover from execve() failing, so I'm dying.\n"
- "Add more stringent tests in PRE(execve), or work out how to recover.");
+ ARG1, ARG1, ARG2, ARG3, -RES);
+ VG_(message)(Vg_UserMsg, "EXEC FAILED: I can't recover from "
+ "execve() failing, so I'm dying.");
+ VG_(message)(Vg_UserMsg, "Add more stringent tests in PRE(execve), "
+ "or work out how to recover.");
+ VG_(exit)(101);
}
PRE(sys_access, 0)
PRE_MEM_RASCIIZ( "access(pathname)", ARG1 );
}
-PRE(sys_alarm, NBRunInLWP)
+PRE(sys_alarm, 0)
{
PRINT("sys_alarm ( %d )", ARG1);
PRE_REG_READ1(unsigned long, "alarm", unsigned int, seconds);
break;
}
- if (ARG2 == VKI_F_SETLKW)
- tst->sys_flags |= MayBlock;
+ //if (ARG2 == VKI_F_SETLKW)
+ // tst->sys_flags |= MayBlock;
}
POST(sys_fcntl)
}
#ifndef __amd64__
- if (ARG2 == VKI_F_SETLKW || ARG2 == VKI_F_SETLKW64)
- tst->sys_flags |= MayBlock;
+ //if (ARG2 == VKI_F_SETLKW || ARG2 == VKI_F_SETLKW64)
+ // tst->sys_flags |= MayBlock;
#else
- if (ARG2 == VKI_F_SETLKW)
- tst->sys_flags |= MayBlock;
+ //if (ARG2 == VKI_F_SETLKW)
+ // tst->sys_flags |= MayBlock;
#endif
}
PRINT("sys_fork ( )");
PRE_REG_READ0(long, "fork");
- vg_assert(VG_(gettid)() == VG_(main_pid));
-
/* Block all signals during fork, so that we can fix things up in
the child without being interrupted. */
VG_(sigfillset)(&mask);
VG_(sigprocmask)(VKI_SIG_SETMASK, &mask, &fork_saved_mask);
- do_atfork_pre(tid);
-}
+ VG_(do_atfork_pre)(tid);
-POST(sys_fork)
-{
- if (RES == 0) {
- do_atfork_child(tid);
+ SET_RESULT(VG_(do_syscall0)(__NR_fork));
- /* I am the child. Nuke all other threads which I might
- have inherited from my parent. POSIX mandates this. */
- VG_(nuke_all_threads_except)( tid );
-
- /* XXX TODO: tid 1 is special, and is presumed to be present.
- We should move this TID to 1 in the child. */
+ if (RES == 0) {
+ VG_(do_atfork_child)(tid);
/* restore signal mask */
VG_(sigprocmask)(VKI_SIG_SETMASK, &fork_saved_mask, NULL);
- } else {
- PRINT(" fork: process %d created child %d\n", VG_(main_pid), RES);
+ } else if (RES > 0) {
+ PRINT(" fork: process %d created child %d\n", VG_(getpid)(), RES);
- do_atfork_parent(tid);
+ VG_(do_atfork_parent)(tid);
/* restore signal mask */
VG_(sigprocmask)(VKI_SIG_SETMASK, &fork_saved_mask, NULL);
break;
case VKI_SIOCSPGRP:
PRE_MEM_READ( "ioctl(SIOCSPGRP)", ARG3, sizeof(int) );
- tst->sys_flags &= ~MayBlock;
+ //tst->sys_flags &= ~MayBlock;
break;
/* linux/soundcard interface (OSS) */
/* int kill(pid_t pid, int sig); */
PRINT("sys_kill ( %d, %d )", ARG1,ARG2);
PRE_REG_READ2(long, "kill", int, pid, int, sig);
- if (ARG2 == VKI_SIGVGINT || ARG2 == VKI_SIGVGKILL)
+ if (!VG_(client_signal_OK)(ARG2))
SET_RESULT( -VKI_EINVAL );
}
POST(sys_kill)
{
- /* If this was a self-kill then wait for a signal to be
- delivered to any thread before claiming the kill is done. */
- if (RES >= 0 && // if it was successful and
- ARG2 != 0 && // if a real signal and
- !VG_(is_sig_ign)(ARG2) && // that isn't ignored and
- !VG_(sigismember)(&tst->eff_sig_mask, ARG2) && // we're not blocking it
- (ARG1 == VG_(getpid)() || // directed at us or
- ARG1 == -1 || // directed at everyone or
- ARG1 == 0 || // directed at whole group or
- -ARG1 == VG_(getpgrp)())) { // directed at our group...
- /* ...then wait for that signal to be delivered to someone
- (might be us, might be someone else who doesn't have it blocked) */
- VG_(proxy_waitsig)();
- }
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "kill: sent signal %d to pid %d",
+ ARG2, ARG1);
+ // Check to see if this kill gave us a pending signal
+ VG_(poll_signals)(tid);
}
PRE(sys_link, MayBlock)
PRINT("old_mmap ( %p, %llu, %d, %d, %d, %d )",
a1, (ULong)a2, a3, a4, a5, a6 );
+ if (a2 == 0) {
+ /* SUSv3 says: If len is zero, mmap() shall fail and no mapping
+ shall be established. */
+ SET_RESULT( -VKI_EINVAL );
+ return;
+ }
+
if (a4 & VKI_MAP_FIXED) {
if (!VG_(valid_client_addr)(a1, a2, tid, "old_mmap")) {
PRINT("old_mmap failing: %p-%p\n", a1, a1+a2);
unsigned long, prot, unsigned long, flags,
unsigned long, fd, unsigned long, offset);
+ if (ARG2 == 0) {
+ /* SUSv3 says: If len is zero, mmap() shall fail and no mapping
+ shall be established. */
+ SET_RESULT( -VKI_EINVAL );
+ return;
+ }
+
if (ARG4 & VKI_MAP_FIXED) {
if (!VG_(valid_client_addr)(ARG1, ARG2, tid, "mmap2"))
SET_RESULT( -VKI_ENOMEM );
PRE_REG_READ2(long, "setpgid", vki_pid_t, pid, vki_pid_t, pgid);
}
-POST(sys_setpgid)
-{
- VG_(main_pgrp) = VG_(getpgrp)();
-}
-
PRE(sys_setregid, 0)
{
PRINT("sys_setregid ( %d, %d )", ARG1, ARG2);
POST(sys_rt_sigqueueinfo)
{
- if (RES >= 0 &&
- ARG2 != 0 &&
- !VG_(is_sig_ign)(ARG2) &&
- !VG_(sigismember)(&tst->eff_sig_mask, ARG2) &&
- ARG1 == VG_(getpid)()) {
- VG_(proxy_waitsig)();
- }
+ PRINT("sys_rt_sigqueueinfo(%d, %d, %p)", ARG1, ARG2, ARG3);
+ PRE_REG_READ3(long, "rt_sigqueueinfo",
+ int, pid, int, sig, vki_siginfo_t *, uinfo);
+ if (ARG2 != 0)
+ PRE_MEM_READ( "rt_sigqueueinfo(uinfo)", ARG3, sizeof(vki_siginfo_t) );
+ if (!VG_(client_signal_OK)(ARG2))
+ SET_RESULT( -VKI_EINVAL );
}
// XXX: x86-specific
// XXX: doesn't seem right to be calling do_sys_sigaction for
// sys_rt_sigaction... perhaps this function should be renamed
// VG_(do_sys_rt_sigaction)() --njn
- VG_(do_sys_sigaction)(tid);
- /* Mark that the result is set. */
- SET_RESULT(RES);
+
+ SET_RESULT(
+ VG_(do_sys_sigaction)(ARG1, (const struct vki_sigaction *)ARG2,
+ (struct vki_sigaction *)ARG3)
+ );
}
POST(sys_rt_sigaction)
POST_MEM_WRITE( ARG3, sizeof(vki_sigset_t));
}
-PRE(sys_sigpending, NBRunInLWP)
+PRE(sys_sigpending, 0)
{
PRINT( "sys_sigpending ( %p )", ARG1 );
PRE_REG_READ1(long, "sigpending", vki_old_sigset_t *, set);
POST_MEM_WRITE( ARG1, sizeof(vki_old_sigset_t) ) ;
}
-PRE(sys_rt_sigpending, NBRunInLWP)
+PRE(sys_rt_sigpending, 0)
{
PRINT( "sys_rt_sigpending ( %p )", ARG1 );
PRE_REG_READ2(long, "rt_sigpending",
static const struct SyscallTableEntry bad_sys =
{ &bad_flags, bad_before, NULL };
+static const struct SyscallTableEntry *get_syscall_entry(UInt syscallno)
+{
+ const struct SyscallTableEntry *sys;
+
+ if (syscallno < VGA_(syscall_table_size) &&
+ VGA_(syscall_table)[syscallno].before != NULL)
+ sys = &VGA_(syscall_table)[syscallno];
+ else
+ sys = &bad_sys;
+
+ return sys;
+}
-Bool VG_(pre_syscall) ( ThreadId tid )
+/* Perform post-syscall actions */
+void VG_(post_syscall) (ThreadId tid)
+{
+ const struct SyscallTableEntry *sys;
+ UInt flags;
+ Bool mayBlock;
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+ Int syscallno;
+
+ vg_assert(VG_(is_running_thread)(tid));
+
+ syscallno = tst->syscallno;
+ tst->syscallno = -1;
+
+ vg_assert(syscallno != -1);
+
+ sys = get_syscall_entry(syscallno);
+ flags = *(sys->flags_ptr);
+
+ mayBlock = !!( flags & MayBlock );
+
+ if (sys->after != NULL &&
+ ((flags & PostOnFail) != 0 || !VG_(is_kerror)(RES))) {
+ if (0)
+ VG_(printf)("post_syscall: calling sys_after tid=%d syscallno=%d\n",
+ tid, syscallno);
+ (sys->after)(tid, tst);
+ }
+
+ /* Do any post-syscall actions
+
+ NOTE: this is only called if the syscall completed. If the
+ syscall was restarted, then it will call the Tool's
+ pre_syscall again, without calling post_syscall (i.e., more
+ pre's than post's). */
+ if (VG_(needs).syscall_wrapper) {
+ //VGP_PUSHCC(VgpSkinSysWrap);
+ TL_(post_syscall)(tid, syscallno, RES);
+ //VGP_POPCC(VgpSkinSysWrap);
+ }
+}
+
+
+void VG_(client_syscall) ( ThreadId tid )
{
ThreadState* tst;
UInt syscallno, flags;
tst = VG_(get_ThreadState)(tid);
- /* Convert vfork to fork, since we can't handle it otherwise. */
- if (SYSNO == __NR_vfork)
- SYSNO = __NR_fork;
-
syscallno = (UInt)SYSNO;
- if (tst->syscallno != -1)
- VG_(printf)("tid %d has syscall %d\n", tst->tid, tst->syscallno);
-
- vg_assert(tst->syscallno == -1); // should be no current syscall
- vg_assert(tst->status == VgTs_Runnable); // should be runnable */
-
/* the syscall no is in %eax. For syscalls with <= 6 args,
args 1 .. 6 to the syscall are in %ebx %ecx %edx %esi %edi %ebp.
For calls with > 6 args, %ebx points to a lump of memory
comes from.
*/
+ vg_assert(VG_(is_running_thread)(tid));
+ vg_assert(tst->syscallno == -1);
tst->syscallno = syscallno;
- vg_assert(tst->status == VgTs_Runnable);
- if (syscallno < VGA_(syscall_table_size) &&
- VGA_(syscall_table)[syscallno].before != NULL)
- {
- sys = &VGA_(syscall_table)[syscallno];
- } else {
- sys = &bad_sys;
- }
- flags = *(sys->flags_ptr);
+ /* Make sure the tmp signal mask matches the real signal
+ mask; sigsuspend may change this. */
+ vg_assert(tst->sig_mask.sig[0] == tst->tmp_sig_mask.sig[0]);
+ vg_assert(tst->sig_mask.sig[1] == tst->tmp_sig_mask.sig[1]);
- {
- Bool nbrunInLWP = ( flags & NBRunInLWP ? True : False );
- isSpecial = ( flags & Special ? True : False );
- mayBlock = ( flags & MayBlock ? True : False );
- runInLWP = mayBlock || nbrunInLWP;
- // At most one of these should be true
- vg_assert( isSpecial + mayBlock + nbrunInLWP <= 1 );
- }
+ sys = get_syscall_entry(syscallno);
+ flags = *(sys->flags_ptr);
- tst->sys_flags = flags;
+ /* !! is the standard idiom to turn an int into a bool */
+ isSpecial = !!( flags & Special );
+ mayBlock = !!( flags & MayBlock );
+ // At most one of these should be true
+ vg_assert( isSpecial + mayBlock <= 1 );
/* Do any pre-syscall actions */
if (VG_(needs).syscall_wrapper) {
isSpecial ? " special" : "",
runInLWP ? " runInLWP" : "");
+ tst->syscall_result_set = False;
+
if (isSpecial) {
/* "Special" syscalls are implemented by Valgrind internally,
and do not generate real kernel calls. The expectation,
sets the result. Special syscalls cannot block. */
vg_assert(!mayBlock && !runInLWP);
- tst->syscall_result_set = False;
(sys->before)(tst->tid, tst);
/* This *must* result in tst->syscall_result_set becoming
True. */
- vg_assert(tst->sys_flags == flags);
+ // vg_assert(tst->sys_flags == flags);
vg_assert(tst->syscall_result_set == True);
PRINT(" --> %lld (0x%llx)\n", (Long)(Word)RES, (ULong)RES);
syscall_done = True;
} else {
- tst->syscall_result_set = False;
(sys->before)(tst->tid, tst);
/* This *may* result in tst->syscall_result_set becoming
True. */
don't do anything - just pretend the syscall happened. */
PRINT(" ==> %lld (0x%llx)\n", (Long)RES, (ULong)RES);
syscall_done = True;
- } else if (runInLWP) {
- /* Issue to worker. If we're waiting on the syscall because
- it's in the hands of the ProxyLWP, then set the thread
- state to WaitSys. */
+ } else if (mayBlock) {
+ vki_sigset_t mask;
+
+ vg_assert(!(flags & PadAddr));
+
+ /* Syscall may block, so run it asynchronously */
PRINT(" --> ...\n");
- tst->status = VgTs_WaitSys;
- VG_(sys_issue)(tid);
+
+ mask = tst->sig_mask;
+ VG_(sanitize_client_sigmask)(tid, &mask);
+
+ VG_(set_sleeping)(tid, VgTs_WaitSys);
+ VGA_(client_syscall)(syscallno, tst, &mask);
+ /* VGA_(client_syscall) may not return if the syscall was
+ interrupted by a signal. In that case, flow of control
+ will end up back in the scheduler via the signal
+ machinery. */
+ VG_(set_running)(tid);
+ PRINT("SYSCALL[%d,%d](%3d) --> %ld (0x%lx)\n",
+ VG_(getpid)(), tid, syscallno, (Long)(Word)RES, (ULong)RES);
} else {
/* run the syscall directly */
+ if (flags & PadAddr)
+ VG_(pad_address_space)(VG_(client_end));
+
RES = VG_(do_syscall6)(syscallno, ARG1, ARG2, ARG3, ARG4, ARG5, ARG6);
PRINT(" --> %lld (0x%llx)\n", (Long)(Word)RES, (ULong)RES);
syscall_done = True;
}
}
- VGP_POPCC(VgpCoreSysWrap);
-
- vg_assert(( syscall_done && tst->status == VgTs_Runnable) ||
- (!syscall_done && tst->status == VgTs_WaitSys ));
-
- return syscall_done;
-}
-
-static void restart_syscall(ThreadId tid)
-{
- ThreadState* tst;
- tst = VG_(get_ThreadState)(tid);
-
- vg_assert(tst != NULL);
- vg_assert(tst->status == VgTs_WaitSys);
- vg_assert(tst->syscallno != -1);
-
- SYSNO = tst->syscallno;
- VGA_(restart_syscall)(&tst->arch);
-}
-
-void VG_(post_syscall) ( ThreadId tid, Bool restart )
-{
- ThreadState* tst;
- UInt syscallno, flags;
- const struct SyscallTableEntry *sys;
- Bool isSpecial = False;
- Bool restarted = False;
-
- VGP_PUSHCC(VgpCoreSysWrap);
-
- tst = VG_(get_ThreadState)(tid);
- vg_assert(tst->tid == tid);
-
- /* Tell the tool about the syscall return value */
- SET_SYSCALL_RETVAL(tst->tid, RES);
-
- syscallno = tst->syscallno;
-
- vg_assert(syscallno != -1); /* must be a current syscall */
-
- if (syscallno < VGA_(syscall_table_size) &&
- VGA_(syscall_table)[syscallno].before != NULL)
- {
- sys = &VGA_(syscall_table)[syscallno];
- } else {
- sys = &bad_sys;
- }
- flags = *(sys->flags_ptr);
-
- isSpecial = flags & Special;
-
- if (RES == -VKI_ERESTARTSYS) {
- /* Applications never expect to see this, so we should either
- restart the syscall or fail it with EINTR, depending on what
- our caller wants. Generally they'll want to restart, but if
- client set the signal state to not restart, then we fail with
- EINTR. Either way, ERESTARTSYS means the syscall made no
- progress, and so can be failed or restarted without
- consequence. */
- if (0)
- VG_(printf)("syscall %d returned ERESTARTSYS; restart=%d\n",
- syscallno, restart);
-
- if (restart) {
- restarted = True;
- restart_syscall(tid);
- } else
- RES = -VKI_EINTR;
- }
+ vg_assert(VG_(is_running_thread)(tid));
- if (!restarted) {
- if (sys->after != NULL &&
- ((tst->sys_flags & PostOnFail) != 0 || !VG_(is_kerror)(RES)))
- (sys->after)(tst->tid, tst);
+ SET_SYSCALL_RETVAL(tid, RES);
- /* Do any post-syscall actions
+ VG_(post_syscall)(tid);
- NOTE: this is only called if the syscall completed. If the
- syscall was restarted, then it will call the Tool's
- pre_syscall again, without calling post_syscall (ie, more
- pre's than post's)
- */
- if (VG_(needs).syscall_wrapper) {
- VGP_PUSHCC(VgpToolSysWrap);
- TL_(post_syscall)(tid, syscallno, RES);
- VGP_POPCC(VgpToolSysWrap);
- }
+ if (flags & PadAddr) {
+ vg_assert(!mayBlock);
+ VG_(unpad_address_space)(VG_(client_end));
+ //VG_(sanity_check_memory)();
}
- tst->status = VgTs_Runnable; /* runnable again */
- tst->syscallno = -1; /* no current syscall */
+ /* VG_(post_syscall) should set this */
+ vg_assert(tst->syscallno == -1);
VGP_POPCC(VgpCoreSysWrap);
}
+//static void restart_syscall(ThreadId tid)
+//{
+// ThreadState* tst;
+// tst = VG_(get_ThreadState)(tid);
+//
+// vg_assert(tst != NULL);
+// vg_assert(tst->status == VgTs_WaitSys);
+// vg_assert(tst->syscallno != -1);
+//
+// SYSNO = tst->syscallno;
+// VGA_(restart_syscall)(&tst->arch);
+//}
+
+// SVN version of post_syscall, kept commented out for reference
+//void VG_(post_syscall) ( ThreadId tid, Bool restart )
+//{
+// ThreadState* tst;
+// UInt syscallno, flags;
+// const struct SyscallTableEntry *sys;
+// Bool isSpecial = False;
+// Bool restarted = False;
+//
+// VGP_PUSHCC(VgpCoreSysWrap);
+//
+// tst = VG_(get_ThreadState)(tid);
+// vg_assert(tst->tid == tid);
+//
+// /* Tell the tool about the syscall return value */
+// SET_SYSCALL_RETVAL(tst->tid, RES);
+//
+// syscallno = tst->syscallno;
+//
+// vg_assert(syscallno != -1); /* must be a current syscall */
+//
+// if (syscallno < VGA_(syscall_table_size) &&
+// VGA_(syscall_table)[syscallno].before != NULL)
+// {
+// sys = &VGA_(syscall_table)[syscallno];
+// } else {
+// sys = &bad_sys;
+// }
+// flags = *(sys->flags_ptr);
+//
+// isSpecial = flags & Special;
+//
+// if (RES == -VKI_ERESTARTSYS) {
+// /* Applications never expect to see this, so we should either
+// restart the syscall or fail it with EINTR, depending on what
+// our caller wants. Generally they'll want to restart, but if
+// client set the signal state to not restart, then we fail with
+// EINTR. Either way, ERESTARTSYS means the syscall made no
+// progress, and so can be failed or restarted without
+// consequence. */
+// if (0)
+// VG_(printf)("syscall %d returned ERESTARTSYS; restart=%d\n",
+// syscallno, restart);
+//
+// if (restart) {
+// restarted = True;
+// restart_syscall(tid);
+// } else
+// RES = -VKI_EINTR;
+// }
+//
+// if (!restarted) {
+// if (sys->after != NULL &&
+// ((tst->sys_flags & PostOnFail) != 0 || !VG_(is_kerror)(RES)))
+// (sys->after)(tst->tid, tst);
+//
+// /* Do any post-syscall actions
+//
+// NOTE: this is only called if the syscall completed. If the
+// syscall was restarted, then it will call the Tool's
+// pre_syscall again, without calling post_syscall (ie, more
+// pre's than post's)
+// */
+// if (VG_(needs).syscall_wrapper) {
+// VGP_PUSHCC(VgpToolSysWrap);
+// TL_(post_syscall)(tid, syscallno, RES);
+// VGP_POPCC(VgpToolSysWrap);
+// }
+// }
+//
+// tst->status = VgTs_Runnable; /* runnable again */
+// tst->syscallno = -1; /* no current syscall */
+//
+// VGP_POPCC(VgpCoreSysWrap);
+//}
+
/*--------------------------------------------------------------------*/
/*--- end ---*/
/*--------------------------------------------------------------------*/
}
static
-void log_bytes ( Char* bytes, Int nbytes )
+void log_bytes ( HChar* bytes, Int nbytes )
{
Int i;
for (i = 0; i < nbytes-3; i += 4)
!VG_(seg_contains)(seg, orig_addr, 1) ||
(seg->prot & (VKI_PROT_READ|VKI_PROT_EXEC)) == 0) {
/* Code address is bad - deliver a signal instead */
- vg_assert(!VG_(is_addressable)(orig_addr, 1));
+ vg_assert(!VG_(is_addressable)(orig_addr, 1,
+ VKI_PROT_READ|VKI_PROT_EXEC));
if (seg != NULL && VG_(seg_contains)(seg, orig_addr, 1)) {
vg_assert((seg->prot & VKI_PROT_EXEC) == 0);
tres = LibVEX_Translate (
VG_(vex_arch), VG_(vex_subarch),
VG_(vex_arch), VG_(vex_subarch),
- (UChar*)orig_addr,
+ (UChar*)ULong_to_Ptr(orig_addr),
(Addr64)orig_addr,
chase_into_ok,
&vge,
include $(top_srcdir)/Makefile.all.am
include $(top_srcdir)/Makefile.core-AM_CPPFLAGS.am
-AM_CFLAGS = $(WERROR) -Winline -Wall -Wshadow -O -fomit-frame-pointer -g
+AM_CFLAGS = $(WERROR) -Wmissing-prototypes -Winline -Wall -Wshadow -O -g
noinst_HEADERS = \
core_platform.h \
#define UCONTEXT_STACK_PTR(uc) ((uc)->uc_mcontext.esp)
#define UCONTEXT_FRAME_PTR(uc) ((uc)->uc_mcontext.ebp)
#define UCONTEXT_SYSCALL_NUM(uc) ((uc)->uc_mcontext.eax)
+#define UCONTEXT_SYSCALL_RET(uc) ((uc)->uc_mcontext.eax)
/* ---------------------------------------------------------------------
mmap() stuff
a6 = arg_block[5]; \
} while (0)
+/* ---------------------------------------------------------------------
+ Inline asm for atomic operations for use with futexes
+ Taken from futex-2.2/i386.h
+ ------------------------------------------------------------------ */
+/* (C) Matthew Kirkwood <matthew@hairy.beasts.org>
+ (C) 2002 Rusty Russell IBM <rusty@rustcorp.com.au>
+ */
+
+/* Atomic dec: return new value. */
+static __inline__ Int __futex_down(Int *counter)
+{
+ Int val;
+ UChar eqz;
+
+ /* Don't decrement if already negative. */
+ val = *counter;
+ if (val < 0)
+ return val;
+
+ /* Damn 386: no cmpxchg... */
+ __asm__ __volatile__(
+ "lock; decl %0; sete %1"
+ :"=m" (*counter), "=qm" (eqz)
+ :"m" (*counter) : "memory");
+
+ /* We know if it's zero... */
+ if (eqz) return 0;
+ /* Otherwise, we have no way of knowing the value. Guess -1 (if
+ we're wrong we'll spin). */
+ return -1;
+}
+
+/* Atomic inc: return 1 if counter incremented from 0 to 1. */
+static __inline__ Int __futex_up(Int *c)
+{
+ Int r = 1;
+
+ /* This actually tests if result >= 1. Damn 386. --RR */
+ __asm__ __volatile__ (
+ " lock; incl %1\n"
+ " jg 1f\n"
+ " decl %0\n"
+ "1:\n"
+ : "=q"(r), "=m"(*c) : "0"(r)
+ );
+ return r;
+}
+
+/* Simple atomic increment. */
+static __inline__ void __atomic_inc(Int *c)
+{
+ __asm__ __volatile__(
+ "lock; incl %0"
+ :"=m" (*c)
+ :"m" (*c));
+}
+
+
+/* Commit the write, so it happens before we send the semaphore to
+ anyone else */
+static __inline__ void __futex_commit(void)
+{
+ /* Probably overkill, but some non-Intel clones support
+ out-of-order stores, according to 2.5.5-pre1's
+ linux/include/asm-i386/system.h */
+ __asm__ __volatile__ ("lock; addl $0,0(%%esp)": : :"memory");
+}
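These helpers predate compiler atomics, hence the hand-written lock prefixes and the "damn 386" workarounds. On a modern GCC the same counter protocol (1 = free, 0 = taken, negative = taken and contended) can be sketched portably with the `__sync` builtins; unlike the 386-compatible versions above, `__sync_sub_and_fetch` returns the exact new value, so no "guess -1" is needed. This is a rough equivalent, not a drop-in replacement:

```c
/* Portable sketch of __futex_down/__futex_up using GCC __sync builtins.
   Counter protocol: 1 = free, 0 = taken, negative = taken + waiters. */
static int futex_down(int *counter)
{
    int val = *counter;
    if (val < 0)                          /* already negative: don't dec */
        return val;
    val = __sync_sub_and_fetch(counter, 1);
    return (val == 0) ? 0 : -1;           /* 0: got it; -1: must wait */
}

static int futex_up(int *counter)
{
    /* Mirrors the asm's "jg" test: nonzero iff the new value is
       positive, i.e. nobody was waiting. */
    return __sync_add_and_fetch(counter, 1) > 0;
}
```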
+
+/* Use libc setjmp/longjmp. longjmp must not restore signal mask
+ state, but does need to pass through "val". */
+#include <setjmp.h> /* for jmp_buf */
+
+#define SETJMP(env) setjmp(env)
+#define LONGJMP(env, val) longjmp(env, val)
+
#endif // __X86_LINUX_CORE_PLATFORM_H
/*--------------------------------------------------------------------*/
#include "core_asm.h"
#include "vki_unistd.h"
+#include "libvex_guest_offsets.h"
+
+.globl VG_(do_syscall)
/*
Perform a Linux syscall with int 0x80
Int VG_(do_syscall)(Int syscall_no, UWord a1, UWord a2, UWord a3,
UWord a4, UWord a5, UWord a6)
- This has no effect on the virtual machine; the expectation is
+ This has no effect on the virtual machine; the assumption is
that the syscall mechanism makes no useful changes to any
- register except %eax, which is returned.
+ register except %eax, which is returned. (Some kernels will
+ change other registers randomly, but they're just compiler
+ spill values.)
*/
-.globl VG_(do_syscall)
VG_(do_syscall):
push %esi
push %edi
int VG_(clone)(int (*fn)(void *), void *child_stack, int flags, void *arg,
0 4 8 12
- pid_t *child_tid, pid_t *parent_tid)
- 16 20
+ pid_t *child_tid, pid_t *parent_tid, vki_modify_ldt_t *)
+ 16 20 24
*/
.globl VG_(clone)
movl 8+FSZ(%esp), %ebx /* flags */
movl 20+FSZ(%esp), %edx /* parent tid * */
movl 16+FSZ(%esp), %edi /* child tid * */
+ movl 24+FSZ(%esp), %esi /* modify_ldt_t * */
movl $__NR_clone, %eax
int $0x80
testl %eax, %eax
pop %edi
pop %ebx
ret
-
+#undef FSZ
+
.globl VG_(sigreturn)
VG_(sigreturn):
movl $__NR_rt_sigreturn, %eax
int $0x80
+
+/*
+ Perform a syscall for the client. This will run a syscall
+ with the client's specific per-thread signal mask.
+
+ The structure of this function is such that, if the syscall is
+ interrupted by a signal, we can determine exactly what
+ execution state we were in with respect to the execution of
+ the syscall by examining the value of %eip in the signal
+ handler. This means that we can always do the appropriate
+ thing to precisely emulate the kernel's signal/syscall
+ interactions.
+
+ The syscall number is taken from the argument, even though it
+ should also be in regs->m_eax. The syscall result is written
+ back to regs->m_eax on completion.
+
+ Returns 0 if the syscall was successfully called (even if the
+ syscall itself failed), or a -ve error code if one of the
+ sigprocmasks failed (there's no way to determine which one
+ failed).
+
+ VGA_(interrupted_syscall)() does the thread state fixup in the
+ case where we were interrupted by a signal.
+
+ Prototype:
+
+ Int VGA_(_client_syscall)(Int syscallno, // 0
+ void* guest_state, // 4
+ const vki_sigset_t *sysmask, // 8
+ const vki_sigset_t *postmask, // 12
+ Int nsigwords) // 16
+
+*/
+
+/* from vki_arch.h */
+#define VKI_SIG_SETMASK 2
+
+.globl VGA_(_client_syscall)
+VGA_(_client_syscall):
+ /* save callee-saved regs */
+ push %esi
+ push %edi
+ push %ebx
+ push %ebp
+#define FSZ ((4+1)*4) /* 4 args + ret addr */
+
+1: /* Even though we can't take a signal until the sigprocmask completes,
+ start the range early.
+ If eip is in the range [1,2), the syscall hasn't been started yet */
+
+ /* Set the signal mask which should be current during the syscall. */
+ movl $__NR_rt_sigprocmask, %eax
+ movl $VKI_SIG_SETMASK, %ebx
+ movl 8+FSZ(%esp), %ecx
+ movl 12+FSZ(%esp), %edx
+ movl 16+FSZ(%esp), %esi
+ int $0x80
+ testl %eax, %eax
+ js 5f /* sigprocmask failed */
+
+ movl 4+FSZ(%esp), %eax /* eax == ThreadState * */
+
+ movl OFFSET_x86_EBX(%eax), %ebx
+ movl OFFSET_x86_ECX(%eax), %ecx
+ movl OFFSET_x86_EDX(%eax), %edx
+ movl OFFSET_x86_ESI(%eax), %esi
+ movl OFFSET_x86_EDI(%eax), %edi
+ movl OFFSET_x86_EBP(%eax), %ebp
+ movl 0+FSZ(%esp), %eax /* use syscallno argument rather than thread EAX */
+
+ /* If eip==2, then the syscall was either just about to start,
+ or was interrupted and the kernel was restarting it. */
+2: int $0x80
+3: /* In the range [3, 4), the syscall result is in %eax, but hasn't been
+ committed to EAX. */
+ movl 4+FSZ(%esp), %ebx
+ movl %eax, OFFSET_x86_EAX(%ebx) /* save back to EAX */
+
+4: /* Re-block signals. If eip is in [4,5), then the syscall is complete and
+ we needn't worry about it. */
+ movl $__NR_rt_sigprocmask, %eax
+ movl $VKI_SIG_SETMASK, %ebx
+ movl 12+FSZ(%esp), %ecx
+ xorl %edx, %edx
+ movl 16+FSZ(%esp), %esi
+ int $0x80
+
+5: /* now safe from signals */
+
+ popl %ebp
+ popl %ebx
+ popl %edi
+ popl %esi
+#undef FSZ
+ ret
+
+.section .rodata
+/* export the ranges so that VGA_(interrupted_syscall) can do the
+ right thing */
+
+.globl VGA_(blksys_setup)
+.globl VGA_(blksys_restart)
+.globl VGA_(blksys_complete)
+.globl VGA_(blksys_committed)
+.globl VGA_(blksys_finished)
+VGA_(blksys_setup): .long 1b
+VGA_(blksys_restart): .long 2b
+VGA_(blksys_complete): .long 3b
+VGA_(blksys_committed): .long 4b
+VGA_(blksys_finished): .long 5b
+.previous
+
/* Let the linker know we don't need an executable stack */
.section .note.GNU-stack,"",@progbits
The GNU General Public License is contained in the file COPYING.
*/
-#include "core.h"
+/* TODO/FIXME jrs 20050207: assignments to the syscall return result
+ in interrupted_syscall() need to be reviewed. They don't seem
+ to assign the shadow state.
+*/
+#include "core.h"
+#include "ume.h" /* for jmp_with_stack */
-// See the comment accompanying the declaration of VGA_(thread_syscall)() in
-// coregrind/core.h for an explanation of what this does, and why.
-asm(
-".text\n"
-" .type vgArch_do_thread_syscall,@function\n"
-
-".globl vgArch_do_thread_syscall\n"
-"vgArch_do_thread_syscall:\n"
-" push %esi\n"
-" push %edi\n"
-" push %ebx\n"
-" push %ebp\n"
-".vgArch_sys_before:\n"
-" movl 16+ 4(%esp),%eax\n" /* syscall */
-" movl 16+ 8(%esp),%ebx\n" /* arg1 */
-" movl 16+12(%esp),%ecx\n" /* arg2 */
-" movl 16+16(%esp),%edx\n" /* arg3 */
-" movl 16+20(%esp),%esi\n" /* arg4 */
-" movl 16+24(%esp),%edi\n" /* arg5 */
-" movl 16+28(%esp),%ebp\n" /* arg6 */
-".vgArch_sys_restarted:\n"
-" int $0x80\n"
-".vgArch_sys_after:\n"
-" movl 16+32(%esp),%ebx\n" /* ebx = Int *RES */
-" movl %eax, (%ebx)\n" /* write the syscall retval */
-
-" movl 16+36(%esp),%ebx\n" /* ebx = enum PXState * */
-" testl %ebx, %ebx\n"
-" jz 1f\n"
-
-" movl 16+40(%esp),%ecx\n" /* write the post state (must be after retval write) */
-" movl %ecx,(%ebx)\n"
-
-".vgArch_sys_done:\n" /* OK, all clear from here */
-"1: popl %ebp\n"
-" popl %ebx\n"
-" popl %edi\n"
-" popl %esi\n"
-" ret\n"
-" .size vgArch_do_thread_syscall,.-vgArch_do_thread_syscall\n"
-".previous\n"
-
-".section .rodata\n"
-" .globl vgArch_sys_before\n"
-"vgArch_sys_before: .long .vgArch_sys_before\n"
-" .globl vgArch_sys_restarted\n"
-"vgArch_sys_restarted: .long .vgArch_sys_restarted\n"
-" .globl vgArch_sys_after\n"
-"vgArch_sys_after: .long .vgArch_sys_after\n"
-" .globl vgArch_sys_done\n"
-"vgArch_sys_done: .long .vgArch_sys_done\n"
-".previous\n"
-);
+/* These are addresses within VGA_(_client_syscall). See syscall.S for details. */
+extern const Word VGA_(blksys_setup);
+extern const Word VGA_(blksys_restart);
+extern const Word VGA_(blksys_complete);
+extern const Word VGA_(blksys_committed);
+extern const Word VGA_(blksys_finished);
// Back up to restart a system call.
void VGA_(restart_syscall)(ThreadArchState *arch)
}
}
+/*
+ Fix up the VCPU state when a syscall is interrupted by a signal.
+
+ To do this, we determine the precise state of the syscall by
+ looking at the (real) eip at the time the signal happened. The
+ syscall sequence looks like:
+
+ 1. unblock signals
+ 2. perform syscall
+ 3. save result to EAX
+ 4. re-block signals
+
+ If a signal happens at Then Why?
+ 1-2 restart nothing has happened (restart syscall)
+ 2 restart syscall hasn't started, or kernel wants to restart
+ 2-3 save syscall complete, but results not saved
+ 3-4 - syscall complete, results saved
+
+ Sometimes we never want to restart an interrupted syscall (because
+ sigaction says not to), so we only restart if "restart" is True.
+
+ This will also call VG_(post_syscall)() if the syscall has actually
+ completed (either because it was interrupted, or because it
+ actually finished). It will not call VG_(post_syscall)() if the
+ syscall is set up for restart, which means that the pre-wrapper may
+ get called multiple times.
+ */
+void VGA_(interrupted_syscall)(ThreadId tid,
+ struct vki_ucontext *uc,
+ Bool restart)
+{
+ static const Bool debug = False;
+
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+ ThreadArchState *th_regs = &tst->arch;
+ Word eip = UCONTEXT_INSTR_PTR(uc);
+
+ if (debug)
+ VG_(printf)("interrupted_syscall: eip=%p; restart=%d eax=%d\n",
+ eip, restart, UCONTEXT_SYSCALL_NUM(uc));
+
+ if (eip < VGA_(blksys_setup) || eip >= VGA_(blksys_finished)) {
+ VG_(printf)(" not in syscall (%p - %p)\n", VGA_(blksys_setup), VGA_(blksys_finished));
+ vg_assert(tst->syscallno == -1);
+ return;
+ }
+
+ vg_assert(tst->syscallno != -1);
+
+ if (eip >= VGA_(blksys_setup) && eip < VGA_(blksys_restart)) {
+ /* syscall hasn't even started; go around again */
+ if (debug)
+ VG_(printf)(" not started: restart\n");
+ VGA_(restart_syscall)(th_regs);
+ } else if (eip == VGA_(blksys_restart)) {
+ /* We're either about to run the syscall, or it was interrupted
+ and the kernel restarted it. Restart if asked, otherwise
+ EINTR it. */
+ if (restart)
+ VGA_(restart_syscall)(th_regs);
+ else {
+ th_regs->vex.PLATFORM_SYSCALL_RET = -VKI_EINTR;
+ VG_(post_syscall)(tid);
+ }
+ } else if (eip >= VGA_(blksys_complete) && eip < VGA_(blksys_committed)) {
+ /* Syscall complete, but result hasn't been written back yet.
+ The saved real CPU %eax has the result, which we need to move
+ to EAX. */
+ if (debug)
+ VG_(printf)(" completed: ret=%d\n", UCONTEXT_SYSCALL_RET(uc));
+ th_regs->vex.PLATFORM_SYSCALL_RET = UCONTEXT_SYSCALL_RET(uc);
+ VG_(post_syscall)(tid);
+ } else if (eip >= VGA_(blksys_committed) && eip < VGA_(blksys_finished)) {
+ /* Result committed, but the signal mask has not been restored;
+ we expect our caller (the signal handler) will have fixed
+ this up. */
+ if (debug)
+ VG_(printf)(" all done\n");
+ VG_(post_syscall)(tid);
+ } else
+ VG_(core_panic)("?? strange syscall interrupt state?");
+
+ tst->syscallno = -1;
+}
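The state machine above hinges on comparing the interrupted %eip against the exported `blksys_*` label addresses. Stripped of the thread-state bookkeeping, the classification logic reduces to a range test; in this sketch the label addresses are passed as parameters so it is self-contained (the enum names are ours, not Valgrind's):

```c
enum sys_state {
    SYS_NOT_IN_SYSCALL,  /* outside [setup, finished)                  */
    SYS_BEFORE,          /* [setup, restart): nothing has happened yet */
    SYS_AT_SYSCALL,      /* [restart, complete): at, or restarting,
                            the int $0x80 instruction                  */
    SYS_DONE_UNSAVED,    /* [complete, committed): result in %eax only */
    SYS_DONE_SAVED       /* [committed, finished): result written back */
};

/* Classify an interrupted instruction pointer against the stub's label
   addresses, in the same order VGA_(interrupted_syscall) tests them. */
static enum sys_state classify(unsigned long ip,
                               unsigned long setup,
                               unsigned long restart,
                               unsigned long complete,
                               unsigned long committed,
                               unsigned long finished)
{
    if (ip < setup || ip >= finished) return SYS_NOT_IN_SYSCALL;
    if (ip < restart)                 return SYS_BEFORE;
    if (ip < complete)                return SYS_AT_SYSCALL;
    if (ip < committed)               return SYS_DONE_UNSAVED;
    return SYS_DONE_SAVED;
}
```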
+
+extern void VGA_(_client_syscall)(Int syscallno,
+ void* guest_state,
+ const vki_sigset_t *syscall_mask,
+ const vki_sigset_t *restore_mask,
+ Int nsigwords);
+
+void VGA_(client_syscall)(Int syscallno, ThreadState *tst,
+ const vki_sigset_t *syscall_mask)
+{
+ vki_sigset_t saved;
+ VGA_(_client_syscall)(syscallno, &tst->arch.vex,
+ syscall_mask, &saved, _VKI_NSIG_WORDS * sizeof(UWord));
+}
+
+
+/*
+ Allocate a stack for this thread.
+
+ They're allocated lazily, but never freed.
+ */
+#define FILL 0xdeadbeef
+
+static UInt *allocstack(ThreadId tid)
+{
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+ UInt *esp;
+
+ if (tst->os_state.stack == NULL) {
+ void *stk = VG_(mmap)(0, VG_STACK_SIZE_W * sizeof(Int) + VKI_PAGE_SIZE,
+ VKI_PROT_READ|VKI_PROT_WRITE,
+ VKI_MAP_PRIVATE|VKI_MAP_ANONYMOUS,
+ SF_VALGRIND,
+ -1, 0);
+
+ if (stk != (void *)-1) {
+ VG_(mprotect)(stk, VKI_PAGE_SIZE, VKI_PROT_NONE); /* guard page */
+ tst->os_state.stack = (UInt *)stk + VKI_PAGE_SIZE/sizeof(UInt);
+ tst->os_state.stacksize = VG_STACK_SIZE_W;
+ } else
+ return (UInt *)-1;
+ }
+
+ for(esp = tst->os_state.stack; esp < (tst->os_state.stack + tst->os_state.stacksize); esp++)
+ *esp = FILL;
+ /* esp is left at top of stack */
+
+ if (0)
+ VG_(printf)("stack for tid %d at %p (%x); esp=%p\n",
+ tid, tst->os_state.stack, *tst->os_state.stack,
+ esp);
+
+ return esp;
+}
+
+/* Return how many bytes of this stack have not been used */
+Int VGA_(stack_unused)(ThreadId tid)
+{
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+ UInt *p;
+
+ for (p = tst->os_state.stack;
+ p && (p < (tst->os_state.stack + tst->os_state.stacksize));
+ p++)
+ if (*p != FILL)
+ break;
+
+ if (0)
+ VG_(printf)("p=%p %x tst->os_state.stack=%p\n", p, *p, tst->os_state.stack);
+
+ return (p - tst->os_state.stack) * sizeof(*p);
+}
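allocstack()/VGA_(stack_unused) implement a classic high-water-mark trick: pre-fill the stack with a sentinel, then scan from the low end for the first overwritten word. A self-contained illustration, with a plain array standing in for the mmap'd stack (the helper names are ours):

```c
#include <stddef.h>

#define STACK_FILL 0xdeadbeef   /* same sentinel as FILL above */

/* Pre-fill a stack area with the sentinel, as allocstack() does. */
static void watermark_fill(unsigned *stack, size_t nwords)
{
    size_t i;
    for (i = 0; i < nwords; i++)
        stack[i] = STACK_FILL;
}

/* Count unused bytes: stacks grow downwards, so scan up from the base
   (the low addresses are the last to be overwritten) until the first
   word that no longer holds the sentinel. */
static size_t watermark_unused(const unsigned *stack, size_t nwords)
{
    size_t i;
    for (i = 0; i < nwords; i++)
        if (stack[i] != STACK_FILL)
            break;
    return i * sizeof(*stack);
}
```

The measurement is approximate: a stack word that happens to contain 0xdeadbeef reads as "unused", which is why it is good enough for diagnostics but not for enforcement.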
+
+/*
+ Allocate a stack for the main thread, and call VGA_(thread_wrapper)
+ on that stack.
+ */
+void VGA_(main_thread_wrapper)(ThreadId tid)
+{
+ UInt *esp = allocstack(tid);
+
+ vg_assert(tid == VG_(master_tid));
+
+ *--esp = tid; /* set arg */
+ *--esp = 0; /* bogus return address */
+ jmp_with_stack((void (*)(void))VGA_(thread_wrapper), (Addr)esp);
+}
+
+static Int start_thread(void *arg)
+{
+ ThreadState *tst = (ThreadState *)arg;
+ ThreadId tid = tst->tid;
+
+ VGA_(thread_wrapper)(tid);
+
+ /* OK, thread is dead; this releases the run lock */
+ VG_(exit_thread)(tid);
+
+ vg_assert(tst->status == VgTs_Zombie);
+
+ /* Poke the reaper */
+ if (VG_(clo_trace_signals))
+ VG_(message)(Vg_DebugMsg, "Sending SIGVGCHLD to master tid=%d lwp=%d",
+ VG_(master_tid), VG_(threads)[VG_(master_tid)].os_state.lwpid);
+
+ VG_(tkill)(VG_(threads)[VG_(master_tid)].os_state.lwpid, VKI_SIGVGCHLD);
+
+ /* We have to use this sequence to terminate the thread to prevent
+ a subtle race. If VG_(exit_thread)() had left the ThreadState
+ as Empty, then it could have been reallocated, reusing the stack
+ while we're doing these last cleanups. Instead,
+ VG_(exit_thread) leaves it as Zombie to prevent reallocation.
+ We need to make sure we don't touch the stack between marking it
+ Empty and exiting. Hence the assembler. */
+ asm volatile (
+ "movl %1, %0\n" /* set tst->status = VgTs_Empty */
+ "int $0x80\n" /* exit(tst->os_state.exitcode) */
+ : "=m" (tst->status)
+ : "n" (VgTs_Empty), "a" (__NR_exit), "b" (tst->os_state.exitcode));
+
+ VG_(core_panic)("Thread exit failed?\n");
+}
+
+/*
+ clone() handling
+
+ When a client clones, we need to keep track of the new thread. This means:
+ 1. allocate a ThreadId+ThreadState+stack for the thread
+
+ 2. initialize the thread's new VCPU state
+
+ 3. create the thread using the same args as the client requested,
+ but using the scheduler entrypoint for EIP, and a separate stack
+ for ESP.
+ */
+static Int do_clone(ThreadId ptid,
+ UInt flags, Addr esp,
+ Int *parent_tidptr,
+ Int *child_tidptr,
+ vki_modify_ldt_t *tlsinfo)
+{
+ static const Bool debug = False;
+
+ ThreadId ctid = VG_(alloc_ThreadState)();
+ ThreadState *ptst = VG_(get_ThreadState)(ptid);
+ ThreadState *ctst = VG_(get_ThreadState)(ctid);
+ UInt *stack;
+ Segment *seg;
+ Int ret;
+ vki_sigset_t blockall, savedmask;
+
+ VG_(sigfillset)(&blockall);
+
+ vg_assert(VG_(is_running_thread)(ptid));
+ vg_assert(VG_(is_valid_tid)(ctid));
+
+ stack = allocstack(ctid);
+
+ /* Copy register state
+
+ Both parent and child return to the same place, and the code
+ following the clone syscall works out which is which, so we
+ don't need to worry about it.
+
+ The parent gets the child's new tid returned from clone, but the
+ child gets 0.
+
+ If the clone call specifies a NULL esp for the new thread, then
+ it actually gets a copy of the parent's esp.
+ */
+ VGA_(setup_child)( &ctst->arch, &ptst->arch );
+
+ PLATFORM_SET_SYSCALL_RESULT(ctst->arch, 0);
+ if (esp != 0)
+ ctst->arch.vex.guest_ESP = esp;
+
+ ctst->os_state.parent = ptid;
+ ctst->os_state.clone_flags = flags;
+ ctst->os_state.parent_tidptr = parent_tidptr;
+ ctst->os_state.child_tidptr = child_tidptr;
+
+ /* inherit signal mask */
+ ctst->sig_mask = ptst->sig_mask;
+ ctst->tmp_sig_mask = ptst->sig_mask;
+
+ /* We don't really know where the client stack is, because it is
+ allocated by the client. The best we can do is look at the
+ memory mappings and try to derive some useful information. We
+ assume that esp starts near its highest possible value, and can
+ only go down to the start of the mmaped segment. */
+ seg = VG_(find_segment)((Addr)esp);
+ if (seg) {
+ ctst->stack_base = seg->addr;
+ ctst->stack_highest_word = (Addr)PGROUNDUP(esp);
+ ctst->stack_size = ctst->stack_highest_word - ctst->stack_base;
+
+ if (debug)
+ VG_(printf)("tid %d: guessed client stack range %p-%p\n",
+ ctid, seg->addr, PGROUNDUP(esp));
+ } else {
+ VG_(message)(Vg_UserMsg, "!? New thread %d starts with ESP(%p) unmapped\n",
+ ctid, esp);
+ ctst->stack_base = 0;
+ ctst->stack_size = 0;
+ }
+
+ if (flags & VKI_CLONE_SETTLS) {
+ if (debug)
+ VG_(printf)("clone child has SETTLS: tls info at %p: idx=%d base=%p limit=%x; esp=%p fs=%x gs=%x\n",
+ tlsinfo, tlsinfo->entry_number, tlsinfo->base_addr, tlsinfo->limit,
+ ptst->arch.vex.guest_ESP,
+ ctst->arch.vex.guest_FS, ctst->arch.vex.guest_GS);
+ ret = VG_(sys_set_thread_area)(ctid, tlsinfo);
+
+ if (ret != 0)
+ goto out;
+ }
+
+ flags &= ~VKI_CLONE_SETTLS;
+
+ /* start the thread with everything blocked */
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &blockall, &savedmask);
+
+ /* Create the new thread */
+ ret = VG_(clone)(start_thread, stack, flags, &VG_(threads)[ctid],
+ child_tidptr, parent_tidptr, NULL);
+
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &savedmask, NULL);
+
+ out:
+ if (ret < 0) {
+ /* clone failed */
+ VGA_(cleanup_thread)(&ctst->arch);
+ ctst->status = VgTs_Empty;
+ }
+
+ return ret;
+}
+
+/* Do a clone which is really a fork() */
+static Int do_fork_clone(ThreadId tid, UInt flags, Addr esp, Int *parent_tidptr, Int *child_tidptr)
+{
+ vki_sigset_t fork_saved_mask;
+ vki_sigset_t mask;
+ Int ret;
+
+ if (flags & (VKI_CLONE_SETTLS | VKI_CLONE_FS | VKI_CLONE_VM | VKI_CLONE_FILES | VKI_CLONE_VFORK))
+ return -VKI_EINVAL;
+
+ /* Block all signals during fork, so that we can fix things up in
+ the child without being interrupted. */
+ VG_(sigfillset)(&mask);
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &mask, &fork_saved_mask);
+
+ VG_(do_atfork_pre)(tid);
+
+ /* Since this is the fork() form of clone, we don't need all that
+ VG_(clone) stuff */
+ ret = VG_(do_syscall5)(__NR_clone, flags, (UWord)NULL, (UWord)parent_tidptr,
+ (UWord)NULL, (UWord)child_tidptr);
+
+ if (ret == 0) {
+ /* child */
+ VG_(do_atfork_child)(tid);
+
+ /* restore signal mask */
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &fork_saved_mask, NULL);
+ } else if (ret > 0) {
+ /* parent */
+ if (VG_(clo_trace_syscalls))
+ VG_(printf)(" clone(fork): process %d created child %d\n", VG_(getpid)(), ret);
+
+ VG_(do_atfork_parent)(tid);
+
+ /* restore signal mask */
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &fork_saved_mask, NULL);
+ }
+
+ return ret;
+}
+
/* ---------------------------------------------------------------------
PRE/POST wrappers for x86/Linux-specific syscalls
------------------------------------------------------------------ */
}
}
+PRE(sys_clone, Special)
+{
+ UInt cloneflags;
+
+ PRINT("sys_clone ( %x, %p, %p, %p, %p )",ARG1,ARG2,ARG3,ARG4,ARG5);
+ PRE_REG_READ5(int, "clone",
+ unsigned long, flags,
+ void *, child_stack,
+ int *, parent_tidptr,
+ vki_modify_ldt_t *, tlsinfo,
+ int *, child_tidptr);
+
+ if (ARG1 & VKI_CLONE_PARENT_SETTID) {
+ PRE_MEM_WRITE("clone(parent_tidptr)", ARG3, sizeof(Int));
+ if (!VG_(is_addressable)(ARG3, sizeof(Int), VKI_PROT_WRITE)) {
+ SET_RESULT( -VKI_EFAULT );
+ return;
+ }
+ }
+ if (ARG1 & (VKI_CLONE_CHILD_SETTID | VKI_CLONE_CHILD_CLEARTID)) {
+ PRE_MEM_WRITE("clone(child_tidptr)", ARG5, sizeof(Int));
+ if (!VG_(is_addressable)(ARG5, sizeof(Int), VKI_PROT_WRITE)) {
+ SET_RESULT( -VKI_EFAULT );
+ return;
+ }
+ }
+ if (ARG1 & VKI_CLONE_SETTLS) {
+ PRE_MEM_READ("clone(tls_user_desc)", ARG4, sizeof(vki_modify_ldt_t));
+ if (!VG_(is_addressable)(ARG4, sizeof(vki_modify_ldt_t), VKI_PROT_READ)) {
+ SET_RESULT( -VKI_EFAULT );
+ return;
+ }
+ }
-/* --- BEGIN Quadrics Elan3 driver hacks (1) --- */
-
-/* Horrible hack. Do sys_clone, and if you are then the child,
- continue at child_next_eip instead of returning. */
-/* not even kernel thread-safe due to use of static var, but I think
- that's ok. we should not have multiple kernel threads here. */
-static void* child_wherenext;
-static UInt clone_child_native ( void* child_next_eip, UInt arg1, UInt arg2 )
-{
- UInt __res;
- UInt syscallno = __NR_clone;
- child_wherenext = child_next_eip;
- __asm__ volatile (
- "int $0x80\n\t"
- "cmpl $0, %%eax\n\t"
- "jnz laLaloLo345321\n\t"
- "jmp *child_wherenext\n"
- "laLaloLo345321:"
- : "=a" (__res)
- : "0" (syscallno),
- "b" (arg1),
- "c" (arg2)
- );
- return __res;
-}
+ cloneflags = ARG1;
-/* --- END Quadrics Elan3 driver hacks (1) --- */
+ if (!VG_(client_signal_OK)(ARG1 & VKI_CSIGNAL)) {
+ SET_RESULT( -VKI_EINVAL );
+ return;
+ }
+ /* Only look at the flags we really care about */
+ switch(cloneflags & (VKI_CLONE_VM | VKI_CLONE_FS | VKI_CLONE_FILES | VKI_CLONE_VFORK)) {
+ case VKI_CLONE_VM | VKI_CLONE_FS | VKI_CLONE_FILES:
+ /* thread creation */
+ SET_RESULT(do_clone(tid,
+ ARG1, /* flags */
+ (Addr)ARG2, /* child ESP */
+ (Int *)ARG3, /* parent_tidptr */
+ (Int *)ARG5, /* child_tidptr */
+ (vki_modify_ldt_t *)ARG4)); /* set_tls */
+ break;
-PRE(sys_clone, Special)
-{
- PRINT("sys_clone ( %d, %p, %p, %p, %p )",ARG1,ARG2,ARG3,ARG4,ARG5);
- // XXX: really not sure about the last two args... if they are really
- // there, we should do PRE_MEM_READs for both of them...
- PRE_REG_READ4(int, "clone",
- unsigned long, flags, void *, child_stack,
- int *, parent_tidptr, int *, child_tidptr);
-
- if (ARG2 == 0
- && (ARG1 == (VKI_CLONE_CHILD_CLEARTID|VKI_CLONE_CHILD_SETTID|VKI_SIGCHLD)
- || ARG1 == (VKI_CLONE_PARENT_SETTID|VKI_SIGCHLD)))
- {
- VGA_(gen_sys_fork_before)(tid, tst);
- SET_RESULT( VG_(do_syscall5)(SYSNO, ARG1, ARG2, ARG3, ARG4, ARG5) );
- VGA_(gen_sys_fork_after) (tid, tst);
- }
- else
- if (VG_(clo_support_elan3) && ARG1 == 0xF00) {
- /* --- BEGIN Quadrics Elan3 driver hacks (2) --- */
- /* The Elan3 user-space driver is trying to clone off a
- do-nothing-much thread. So we let it run natively, and hope
- for the best, but keep the parent running normally on
- Valgrind. */
- Int res = clone_child_native( (void*)tst->arch.vex.guest_EIP, ARG1, ARG2 );
- /* clone_child_native only returns in the parent's context, and
- so res must be either > 0, in which case it is the pid of the
- child, or < 0, which is an error code.
- */
- if (1)
- VG_(printf)("valgrind: ELAN3_HACK(x86): "
- "parent pid = %d, child pid = %d\n", VG_(getpid)(), res);
- vg_assert(res != 0);
- SET_RESULT(res);
- /* --- END Quadrics Elan3 driver hacks (2) --- */
- }
- else {
+ case VKI_CLONE_VFORK | VKI_CLONE_VM: /* vfork */
+ /* FALLTHROUGH - assume vfork == fork */
+ cloneflags &= ~(VKI_CLONE_VFORK | VKI_CLONE_VM);
+
+ case 0: /* plain fork */
+ SET_RESULT(do_fork_clone(tid,
+ cloneflags, /* flags */
+ (Addr)ARG2, /* child ESP */
+ (Int *)ARG3, /* parent_tidptr */
+ (Int *)ARG5)); /* child_tidptr */
+ break;
+
+ default:
+ /* should we just ENOSYS? */
+ VG_(message)(Vg_UserMsg, "Unsupported clone() flags: %x", ARG1);
VG_(unimplemented)
- ("clone(): not supported by Valgrind.\n "
- "\n"
- "NOTE(1): We do support programs linked against\n "
- "libpthread.so, though. Re-run with -v and ensure that\n "
- "you are picking up Valgrind's implementation of libpthread.so.\n"
- "\n"
- "NOTE(2): if you are trying to run a program using the Quadrics Elan3\n"
- " user-space drivers, you need re-run with the flag:\n"
- " --support-elan3=yes\n");
+ ("Valgrind does not support general clone(). The only supported uses "
+ "are via a threads library, fork, or vfork.");
+ }
+
+ if (!VG_(is_kerror)(RES)) {
+ if (ARG1 & VKI_CLONE_PARENT_SETTID)
+ POST_MEM_WRITE(ARG3, sizeof(Int));
+ if (ARG1 & (VKI_CLONE_CHILD_SETTID | VKI_CLONE_CHILD_CLEARTID))
+ POST_MEM_WRITE(ARG5, sizeof(Int));
+
+ /* Thread creation was successful; let the child have the chance
+ to run */
+ VG_(vg_yield)();
}
}
+PRE(sys_sigreturn, Special)
+{
+ PRINT("sigreturn ( )");
+
+ /* Adjust esp to point to start of frame; skip back up over
+ sigreturn sequence's "popl %eax" and handler ret addr */
+ tst->arch.vex.guest_ESP -= sizeof(Addr)+sizeof(Word);
+
+   /* This is done only so that the EIP is (or at least might be)
+      useful to report if something goes wrong in the sigreturn */
+ VGA_(restart_syscall)(&tst->arch);
+
+ VGA_(signal_return)(tid, False);
+
+ /* Keep looking for signals until there are none */
+ VG_(poll_signals)(tid);
+
+ /* placate return-must-be-set assertion */
+ SET_RESULT(0);
+}
+
PRE(sys_modify_ldt, Special)
{
PRINT("sys_modify_ldt ( %d, %p, %d )", ARG1,ARG2,ARG3);
switch (ARG1 /* call */) {
case VKI_SEMOP:
PRE_MEM_READ( "semop(sops)", ARG5, ARG3 * sizeof(struct vki_sembuf) );
- tst->sys_flags |= MayBlock;
+ /* tst->sys_flags |= MayBlock; */
break;
case VKI_SEMGET:
break;
if (ARG6 != 0)
PRE_MEM_READ( "semtimedop(timeout)", ARG6,
sizeof(struct vki_timespec) );
- tst->sys_flags |= MayBlock;
+ /* tst->sys_flags |= MayBlock; */
break;
case VKI_MSGSND:
{
PRE_MEM_READ( "msgsnd(msgp->mtext)",
(Addr)msgp->mtext, msgsz );
- if ((ARG4 & VKI_IPC_NOWAIT) == 0)
- tst->sys_flags |= MayBlock;
+ /* if ((ARG4 & VKI_IPC_NOWAIT) == 0)
+ tst->sys_flags |= MayBlock;
+ */
break;
}
case VKI_MSGRCV:
PRE_MEM_WRITE( "msgrcv(msgp->mtext)",
(Addr)msgp->mtext, msgsz );
- if ((ARG4 & VKI_IPC_NOWAIT) == 0)
- tst->sys_flags |= MayBlock;
+ /* if ((ARG4 & VKI_IPC_NOWAIT) == 0)
+ tst->sys_flags |= MayBlock;
+ */
break;
}
case VKI_MSGGET:
}
}
+
+// jrs 20050207: this is from the svn branch
+//PRE(sys_sigaction, Special)
+//{
+// PRINT("sys_sigaction ( %d, %p, %p )", ARG1,ARG2,ARG3);
+// PRE_REG_READ3(int, "sigaction",
+// int, signum, const struct old_sigaction *, act,
+// struct old_sigaction *, oldact)
+// if (ARG2 != 0)
+// PRE_MEM_READ( "sigaction(act)", ARG2, sizeof(struct vki_old_sigaction));
+// if (ARG3 != 0)
+// PRE_MEM_WRITE( "sigaction(oldact)", ARG3, sizeof(struct vki_old_sigaction));
+//
+// VG_(do_sys_sigaction)(tid);
+//}
+
+/* Convert from non-RT to RT sigset_t's */
+static void convert_sigset_to_rt(const vki_old_sigset_t *oldset, vki_sigset_t *set)
+{
+ VG_(sigemptyset)(set);
+ set->sig[0] = *oldset;
+}
PRE(sys_sigaction, Special)
{
+ struct vki_sigaction new, old;
+ struct vki_sigaction *newp, *oldp;
+
PRINT("sys_sigaction ( %d, %p, %p )", ARG1,ARG2,ARG3);
PRE_REG_READ3(int, "sigaction",
int, signum, const struct old_sigaction *, act,
- struct old_sigaction *, oldact)
+ struct old_sigaction *, oldact);
+
+ newp = oldp = NULL;
+
if (ARG2 != 0)
PRE_MEM_READ( "sigaction(act)", ARG2, sizeof(struct vki_old_sigaction));
- if (ARG3 != 0)
+
+ if (ARG3 != 0) {
PRE_MEM_WRITE( "sigaction(oldact)", ARG3, sizeof(struct vki_old_sigaction));
+ oldp = &old;
+ }
+
+ //jrs 20050207: what?! how can this make any sense?
+ //if (VG_(is_kerror)(SYSRES))
+ // return;
+
+ if (ARG2 != 0) {
+ struct vki_old_sigaction *oldnew = (struct vki_old_sigaction *)ARG2;
+
+ new.ksa_handler = oldnew->ksa_handler;
+ new.sa_flags = oldnew->sa_flags;
+ new.sa_restorer = oldnew->sa_restorer;
+ convert_sigset_to_rt(&oldnew->sa_mask, &new.sa_mask);
+ newp = &new;
+ }
+
+ SET_RESULT( VG_(do_sys_sigaction)(ARG1, newp, oldp) );
- VG_(do_sys_sigaction)(tid);
+ if (ARG3 != 0 && RES == 0) {
+ struct vki_old_sigaction *oldold = (struct vki_old_sigaction *)ARG3;
+
+ oldold->ksa_handler = oldp->ksa_handler;
+ oldold->sa_flags = oldp->sa_flags;
+ oldold->sa_restorer = oldp->sa_restorer;
+ oldold->sa_mask = oldp->sa_mask.sig[0];
+ }
}
POST(sys_sigaction)
const struct SyscallTableEntry VGA_(syscall_table)[] = {
// (restart_syscall) // 0
GENX_(__NR_exit, sys_exit), // 1
- GENXY(__NR_fork, sys_fork), // 2
+ GENX_(__NR_fork, sys_fork), // 2
GENXY(__NR_read, sys_read), // 3
GENX_(__NR_write, sys_write), // 4
GENXY(__NR_fcntl, sys_fcntl), // 55
GENX_(__NR_mpx, sys_ni_syscall), // 56
- GENXY(__NR_setpgid, sys_setpgid), // 57
+ GENX_(__NR_setpgid, sys_setpgid), // 57
GENX_(__NR_ulimit, sys_ni_syscall), // 58
// (__NR_oldolduname, sys_olduname), // 59 Linux -- obsolete
LINXY(__NR_sysinfo, sys_sysinfo), // 116
PLAXY(__NR_ipc, sys_ipc), // 117
GENX_(__NR_fsync, sys_fsync), // 118
- // (__NR_sigreturn, sys_sigreturn), // 119 ?/Linux
+ PLAX_(__NR_sigreturn, sys_sigreturn), // 119 ?/Linux
PLAX_(__NR_clone, sys_clone), // 120
// (__NR_setdomainname, sys_setdomainname), // 121 */*(?)
// (__NR_fadvise64, sys_fadvise64), // 250 */(Linux?)
GENX_(251, sys_ni_syscall), // 251
- GENX_(__NR_exit_group, sys_exit_group), // 252
+ LINX_(__NR_exit_group, sys_exit_group), // 252
GENXY(__NR_lookup_dcookie, sys_lookup_dcookie), // 253
LINXY(__NR_epoll_create, sys_epoll_create), // 254
Makefile.in
Makefile
+core_arch_asm_offsets.h
+gen_offsets
stage2.lds
include $(top_srcdir)/Makefile.all.am
include $(top_srcdir)/Makefile.core-AM_CPPFLAGS.am
-AM_CFLAGS = $(WERROR) -Winline -Wall -Wshadow -O -fomit-frame-pointer -g
+AM_CFLAGS = $(WERROR) -Wmissing-prototypes -Winline -Wall -Wshadow -O -g
noinst_HEADERS = \
core_arch.h \
noinst_LIBRARIES = libarch.a
EXTRA_DIST = \
- jmp_with_stack.c \
- libpthread.c
+ jmp_with_stack.c
BUILT_SOURCES = stage2.lds
CLEANFILES = stage2.lds
helpers.S \
dispatch.S \
signals.c \
+ jmp_with_stack.c \
state.c
# Extract ld's default linker script and hack it to our needs
#define ARCH_CLREQ_RET guest_EDX
#define ARCH_PTHREQ_RET guest_EDX
+
// Register numbers, for vg_symtab2.c
#define R_STACK_PTR 4
#define R_FRAME_PTR 5
// The signal handler needs to know this.
#define ARCH_STACK_REDZONE_SIZE 0
+//extern const Char VG_(helper_wrapper_before)[]; /* in dispatch.S */
+//extern const Char VG_(helper_wrapper_return)[]; /* in dispatch.S */
+
+//extern const Char VG_(helper_undefined_instruction)[];
+//extern const Char VG_(helper_INT)[];
+//extern const Char VG_(helper_breakpoint)[];
+
+
/* ---------------------------------------------------------------------
Architecture-specific part of a ThreadState
------------------------------------------------------------------ */
-// Architecture-specific part of a ThreadState
-// XXX: eventually this should be made abstract, ie. the fields not visible
-// to the core...
typedef
struct {
/* --- BEGIN vex-mandated guest state --- */
typedef VexGuestX86State VexGuestArchState;
-/* ---------------------------------------------------------------------
- libpthread stuff
- ------------------------------------------------------------------ */
-
-struct _ThreadArchAux {
- void* tls_data;
- int tls_segment;
- unsigned long sysinfo;
-};
-
/* ---------------------------------------------------------------------
Miscellaneous constants
------------------------------------------------------------------ */
// Valgrind's signal stack size, in words.
#define VG_SIGSTACK_SIZE_W 10000
+// Valgrind's stack size, in words.
+#define VG_STACK_SIZE_W 16384
+
// Base address of client address space.
#define CLIENT_BASE 0x00000000ul
+/* ---------------------------------------------------------------------
+ Signal stuff (should be plat)
+ ------------------------------------------------------------------ */
+
+void VGA_(signal_return)(ThreadId tid, Bool isRT);
+
#endif // __X86_CORE_ARCH_H
/*--------------------------------------------------------------------*/
#ifndef __X86_CORE_ARCH_ASM_H
#define __X86_CORE_ARCH_ASM_H
-#endif // __X86_CORE_ARCH_ASM_H
+#endif /* __X86_CORE_ARCH_ASM_H */
/*--------------------------------------------------------------------*/
/*--- end ---*/
*/
#include "core_asm.h"
+#include "vki_unistd.h"
/* ------------------ SIMULATED CPU HELPERS ------------------ */
/* Stubs for returns which we want to catch: a signal return.
.global VG_(trampoline_code_start)
.global VG_(trampoline_code_length)
.global VG_(tramp_sigreturn_offset)
+.global VG_(tramp_rt_sigreturn_offset)
.global VG_(tramp_syscall_offset)
VG_(trampoline_code_start):
-sigreturn_start:
- subl $20, %esp # allocate arg block
- movl %esp, %edx # %edx == &_zzq_args[0]
- movl $VG_USERREQ__SIGNAL_RETURNS, 0(%edx) # request
- movl $0, 4(%edx) # arg1
- movl $0, 8(%edx) # arg2
- movl $0, 12(%edx) # arg3
- movl $0, 16(%edx) # arg4
- movl %edx, %eax
- # and now the magic sequence itself:
- roll $29, %eax
- roll $3, %eax
- rorl $27, %eax
- rorl $5, %eax
- roll $13, %eax
- roll $19, %eax
- # should never get here
- ud2
+sigreturn_start:
+ /* This is a very specific sequence which GDB uses to
+ recognize signal handler frames. */
+ popl %eax
+ movl $__NR_sigreturn, %eax
+ int $0x80
+ ud2
+
+rt_sigreturn_start:
+ /* Likewise for rt signal frames */
+ movl $__NR_rt_sigreturn, %eax
+ int $0x80
+ ud2
# We can point our sysinfo stuff here
.align 16
.long tramp_code_end - VG_(trampoline_code_start)
VG_(tramp_sigreturn_offset):
.long sigreturn_start - VG_(trampoline_code_start)
+VG_(tramp_rt_sigreturn_offset):
+ .long rt_sigreturn_start - VG_(trampoline_code_start)
VG_(tramp_syscall_offset):
.long syscall_start - VG_(trampoline_code_start)
.text
#include "ume.h"
-void jmp_with_stack(Addr eip, Addr esp)
+/*
+ Jump to a particular IP with a particular SP. This is intended
+   to simulate the initial CPU state when the kernel starts a program
+ after exec; it therefore also clears all the other registers.
+ */
+void jmp_with_stack(void (*eip)(void), Addr esp)
{
asm volatile (
"movl %1, %%esp;" // set esp */
#include "libvex_guest_x86.h"
+
+/* This module creates and removes signal frames for signal deliveries
+ on x86-linux.
+
+ Note that this file is in the wrong place. It is marked as x86
+ specific, but in fact it is specific to both x86 and linux. There
+ is nothing that ensures that (eg) x86-solaris will have the same
+ signal frame layout as Linux.
+
+ Note also, this file contains kernel-specific knowledge in the
+ form of 'struct sigframe' and 'struct rt_sigframe'. How does
+ that relate to the vki kernel interface stuff?
+
+   Either a 'struct sigframe' or a 'struct rt_sigframe' is pushed
+ onto the client's stack. This contains a subsidiary
+ vki_ucontext. That holds the vcpu's state across the signal,
+ so that the sighandler can mess with the vcpu state if it
+ really wants.
+
+ FIXME: sigcontexting is basically broken for the moment. When
+ delivering a signal, the integer registers and %eflags are
+ correctly written into the sigcontext, however the FP and SSE state
+ is not. When returning from a signal, the entire CPU state is
+ restored to what it was before the signal. Hence signal handlers
+ which modify the sigcontext and then return will not work.
+
+ This will be fixed.
+*/
+
+
/*------------------------------------------------------------*/
-/*--- Signal frame ---*/
+/*--- Signal frame layouts ---*/
/*------------------------------------------------------------*/
// A structure in which to save the application's registers
// during the execution of signal handlers.
-typedef
- struct {
- /* There are two different stack frame formats, depending on
- whether the client set the SA_SIGINFO flag for the handler.
- This structure is put onto the client's stack as part of
- signal delivery, and therefore appears as the signal
- handler's arguments.
-
- The first two words are common for both frame formats -
- they're the return address and the signal number. */
-
- /* Sig handler's (bogus) return address */
- Addr retaddr;
- /* The arg to the sig handler. We need to inspect this after
- the handler returns, but it's unreasonable to assume that the
- handler won't change it. So we keep a second copy of it in
- sigNo_private. */
- Int sigNo;
-
- /* This is where the two frames start differing. */
- union {
- struct { /* set SA_SIGINFO */
- /* ptr to siginfo_t. */
- Addr psigInfo;
-
- /* ptr to ucontext */
- Addr puContext;
- } sigInfo;
- struct vki_sigcontext sigContext; /* did not set SA_SIGINFO */
- } handlerArgs;
-
- /* The rest are private fields which the handler is unaware of. */
-
- /* Sanity check word. */
- UInt magicPI;
- /* pointed to by psigInfo */
- vki_siginfo_t sigInfo;
- /* pointed to by puContext */
- struct vki_ucontext uContext;
-
- /* Safely-saved version of sigNo, as described above. */
- Int sigNo_private;
-
- /* Saved processor state. */
- VexGuestX86State vex;
- VexGuestX86State vex_shadow;
-
- /* saved signal mask to be restored when handler returns */
- vki_sigset_t mask;
-
- /* Scheduler-private stuff: what was the thread's status prior to
- delivering this signal? */
- ThreadStatus status;
- void* /*pthread_mutex_t* */ associated_mx;
- void* /*pthread_cond_t* */ associated_cv;
-
- /* Sanity check word. Is the highest-addressed word; do not
- move!*/
- UInt magicE;
- }
- VgSigFrame;
+// Linux has 2 signal frame structures: one for normal signal
+// deliveries, and one for SA_SIGINFO deliveries (also known as RT
+// signals).
+//
+// In theory, so long as we get the arguments to the handler function
+// right, it doesn't matter what the exact layout of the rest of the
+// frame is. Unfortunately, things like gcc's exception unwinding
+// make assumptions about the locations of various parts of the frame,
+// so we need to duplicate it exactly.
+
+/* Valgrind-specific parts of the signal frame */
+struct vg_sigframe
+{
+ /* Sanity check word. */
+ UInt magicPI;
+
+ UInt handlerflags; /* flags for signal handler */
+
+
+ /* Safely-saved version of sigNo, as described above. */
+ Int sigNo_private;
+
+ /* XXX This is wrong. Surely we should store the shadow values
+ into the shadow memory behind the actual values? */
+ VexGuestX86State vex_shadow;
+
+ /* HACK ALERT */
+ VexGuestX86State vex;
+ /* end HACK ALERT */
+
+ /* saved signal mask to be restored when handler returns */
+ vki_sigset_t mask;
+
+ /* Sanity check word. Is the highest-addressed word; do not
+ move!*/
+ UInt magicE;
+};
+
+struct sigframe
+{
+ /* Sig handler's return address */
+ Addr retaddr;
+ Int sigNo;
+
+ struct vki_sigcontext sigContext;
+ struct _vki_fpstate fpstate;
+
+ struct vg_sigframe vg;
+};
+
+struct rt_sigframe
+{
+ /* Sig handler's return address */
+ Addr retaddr;
+ Int sigNo;
+
+ /* ptr to siginfo_t. */
+ Addr psigInfo;
+
+ /* ptr to ucontext */
+ Addr puContext;
+ /* pointed to by psigInfo */
+ vki_siginfo_t sigInfo;
+
+ /* pointed to by puContext */
+ struct vki_ucontext uContext;
+ struct _vki_fpstate fpstate;
+
+ struct vg_sigframe vg;
+};
+
+
+//:: /*------------------------------------------------------------*/
+//:: /*--- Signal operations ---*/
+//:: /*------------------------------------------------------------*/
+//::
+//:: /*
+//:: Great gobs of FP state conversion taken wholesale from
+//:: linux/arch/i386/kernel/i387.c
+//:: */
+//::
+//:: /*
+//:: * FXSR floating point environment conversions.
+//:: */
+//:: #define X86_FXSR_MAGIC 0x0000
+//::
+//:: /*
+//:: * FPU tag word conversions.
+//:: */
+//::
+//:: static inline unsigned short twd_i387_to_fxsr( unsigned short twd )
+//:: {
+//:: unsigned int tmp; /* to avoid 16 bit prefixes in the code */
+//::
+//:: /* Transform each pair of bits into 01 (valid) or 00 (empty) */
+//:: tmp = ~twd;
+//:: tmp = (tmp | (tmp>>1)) & 0x5555; /* 0V0V0V0V0V0V0V0V */
+//:: /* and move the valid bits to the lower byte. */
+//:: tmp = (tmp | (tmp >> 1)) & 0x3333; /* 00VV00VV00VV00VV */
+//:: tmp = (tmp | (tmp >> 2)) & 0x0f0f; /* 0000VVVV0000VVVV */
+//:: tmp = (tmp | (tmp >> 4)) & 0x00ff; /* 00000000VVVVVVVV */
+//:: return tmp;
+//:: }
+//::
+//:: static unsigned long twd_fxsr_to_i387( const struct i387_fxsave_struct *fxsave )
+//:: {
+//:: struct _vki_fpxreg *st = NULL;
+//:: unsigned long twd = (unsigned long) fxsave->twd;
+//:: unsigned long tag;
+//:: unsigned long ret = 0xffff0000u;
+//:: int i;
+//::
+//:: #define FPREG_ADDR(f, n) ((char *)&(f)->st_space + (n) * 16);
+//::
+//:: for ( i = 0 ; i < 8 ; i++ ) {
+//:: if ( twd & 0x1 ) {
+//:: st = (struct _vki_fpxreg *) FPREG_ADDR( fxsave, i );
+//::
+//:: switch ( st->exponent & 0x7fff ) {
+//:: case 0x7fff:
+//:: tag = 2; /* Special */
+//:: break;
+//:: case 0x0000:
+//:: if ( !st->significand[0] &&
+//:: !st->significand[1] &&
+//:: !st->significand[2] &&
+//:: !st->significand[3] ) {
+//:: tag = 1; /* Zero */
+//:: } else {
+//:: tag = 2; /* Special */
+//:: }
+//:: break;
+//:: default:
+//:: if ( st->significand[3] & 0x8000 ) {
+//:: tag = 0; /* Valid */
+//:: } else {
+//:: tag = 2; /* Special */
+//:: }
+//:: break;
+//:: }
+//:: } else {
+//:: tag = 3; /* Empty */
+//:: }
+//:: ret |= (tag << (2 * i));
+//:: twd = twd >> 1;
+//:: }
+//:: return ret;
+//:: }
+//::
+//:: static void convert_fxsr_to_user( struct _vki_fpstate *buf,
+//:: const struct i387_fxsave_struct *fxsave )
+//:: {
+//:: unsigned long env[7];
+//:: struct _vki_fpreg *to;
+//:: struct _vki_fpxreg *from;
+//:: int i;
+//::
+//:: env[0] = (unsigned long)fxsave->cwd | 0xffff0000ul;
+//:: env[1] = (unsigned long)fxsave->swd | 0xffff0000ul;
+//:: env[2] = twd_fxsr_to_i387(fxsave);
+//:: env[3] = fxsave->fip;
+//:: env[4] = fxsave->fcs | ((unsigned long)fxsave->fop << 16);
+//:: env[5] = fxsave->foo;
+//:: env[6] = fxsave->fos;
+//::
+//:: VG_(memcpy)(buf, env, 7 * sizeof(unsigned long));
+//::
+//:: to = &buf->_st[0];
+//:: from = (struct _vki_fpxreg *) &fxsave->st_space[0];
+//:: for ( i = 0 ; i < 8 ; i++, to++, from++ ) {
+//:: unsigned long __user *t = (unsigned long __user *)to;
+//:: unsigned long *f = (unsigned long *)from;
+//::
+//:: t[0] = f[0];
+//:: t[1] = f[1];
+//:: to->exponent = from->exponent;
+//:: }
+//:: }
+//::
+//:: static void convert_fxsr_from_user( struct i387_fxsave_struct *fxsave,
+//:: const struct _vki_fpstate *buf )
+//:: {
+//:: unsigned long env[7];
+//:: struct _vki_fpxreg *to;
+//:: const struct _vki_fpreg *from;
+//:: int i;
+//::
+//:: VG_(memcpy)(env, buf, 7 * sizeof(long));
+//::
+//:: fxsave->cwd = (unsigned short)(env[0] & 0xffff);
+//:: fxsave->swd = (unsigned short)(env[1] & 0xffff);
+//:: fxsave->twd = twd_i387_to_fxsr((unsigned short)(env[2] & 0xffff));
+//:: fxsave->fip = env[3];
+//:: fxsave->fop = (unsigned short)((env[4] & 0xffff0000ul) >> 16);
+//:: fxsave->fcs = (env[4] & 0xffff);
+//:: fxsave->foo = env[5];
+//:: fxsave->fos = env[6];
+//::
+//:: to = (struct _vki_fpxreg *) &fxsave->st_space[0];
+//:: from = &buf->_st[0];
+//:: for ( i = 0 ; i < 8 ; i++, to++, from++ ) {
+//:: unsigned long *t = (unsigned long *)to;
+//:: unsigned long __user *f = (unsigned long __user *)from;
+//::
+//:: t[0] = f[0];
+//:: t[1] = f[1];
+//:: to->exponent = from->exponent;
+//:: }
+//:: }
+//::
+//:: static inline void save_i387_fsave( arch_thread_t *regs, struct _vki_fpstate *buf )
+//:: {
+//::    struct i387_fsave_struct *fs = &regs->m_sse.fsave;
+//::
+//:: fs->status = fs->swd;
+//:: VG_(memcpy)(buf, fs, sizeof(*fs));
+//:: }
+//::
+//:: static void save_i387_fxsave( arch_thread_t *regs, struct _vki_fpstate *buf )
+//:: {
+//::    const struct i387_fxsave_struct *fx = &regs->m_sse.fxsave;
+//:: convert_fxsr_to_user( buf, fx );
+//::
+//:: buf->status = fx->swd;
+//:: buf->magic = X86_FXSR_MAGIC;
+//:: VG_(memcpy)(buf->_fxsr_env, fx, sizeof(struct i387_fxsave_struct));
+//:: }
+//::
+//:: static void save_i387( arch_thread_t *regs, struct _vki_fpstate *buf )
+//:: {
+//:: if ( VG_(have_ssestate) )
+//:: save_i387_fxsave( regs, buf );
+//:: else
+//:: save_i387_fsave( regs, buf );
+//:: }
+//::
+//:: static inline void restore_i387_fsave( arch_thread_t *regs, const struct _vki_fpstate __user *buf )
+//:: {
+//::    VG_(memcpy)( &regs->m_sse.fsave, buf, sizeof(struct i387_fsave_struct) );
+//:: }
+//::
+//:: static void restore_i387_fxsave( arch_thread_t *regs, const struct _vki_fpstate __user *buf )
+//:: {
+//::    VG_(memcpy)(&regs->m_sse.fxsave, &buf->_fxsr_env[0],
+//:: sizeof(struct i387_fxsave_struct) );
+//:: /* mxcsr reserved bits must be masked to zero for security reasons */
+//:: regs->m_sse.fxsave.mxcsr &= 0xffbf;
+//::    convert_fxsr_from_user( &regs->m_sse.fxsave, buf );
+//:: }
+//::
+//:: static void restore_i387( arch_thread_t *regs, const struct _vki_fpstate __user *buf )
+//:: {
+//:: if ( VG_(have_ssestate) ) {
+//:: restore_i387_fxsave( regs, buf );
+//:: } else {
+//:: restore_i387_fsave( regs, buf );
+//:: }
+//:: }
+
/*------------------------------------------------------------*/
-/*--- Signal operations ---*/
+/*--- Creating signal frames ---*/
/*------------------------------------------------------------*/
-/* Make up a plausible-looking thread state from the thread's current state */
-static void synth_ucontext(ThreadId tid, const vki_siginfo_t *si,
- const vki_sigset_t *set, struct vki_ucontext *uc)
+/* Create a plausible-looking sigcontext from the thread's
+ Vex guest state. NOTE: does not fill in the FP or SSE
+ bits of sigcontext at the moment.
+*/
+static
+void synth_ucontext(ThreadId tid, const vki_siginfo_t *si,
+ const vki_sigset_t *set,
+ struct vki_ucontext *uc, struct _vki_fpstate *fpstate)
{
ThreadState *tst = VG_(get_ThreadState)(tid);
struct vki_sigcontext *sc = &uc->uc_mcontext;
uc->uc_link = 0;
uc->uc_sigmask = *set;
uc->uc_stack = tst->altstack;
+ sc->fpstate = fpstate;
+
+ // FIXME: save_i387(&tst->arch, fpstate);
-#define SC2(reg,REG) sc->reg = tst->arch.vex.guest_##REG
+# define SC2(reg,REG) sc->reg = tst->arch.vex.guest_##REG
SC2(gs,GS);
SC2(fs,FS);
SC2(es,ES);
/* XXX esp_at_signal */
/* XXX trapno */
/* XXX err */
-#undef SC2
+# undef SC2
sc->cr2 = (UInt)si->_sifields._sigfault._addr;
}
+
#define SET_SIGNAL_ESP(zztid, zzval) \
SET_THREAD_REG(zztid, zzval, STACK_PTR, post_reg_write, \
Vg_CoreSignal, zztid, O_STACK_PTR, sizeof(Addr))
-void VGA_(push_signal_frame)(ThreadId tid, Addr esp_top_of_frame,
- const vki_siginfo_t *siginfo,
- void *handler, UInt flags,
- const vki_sigset_t *mask)
-{
- Addr esp;
- ThreadState* tst;
- VgSigFrame* frame;
- Int sigNo = siginfo->si_signo;
- esp = esp_top_of_frame;
- esp -= sizeof(VgSigFrame);
- frame = (VgSigFrame*)esp;
+/* Extend the stack segment downwards if needed so as to ensure the
+ new signal frames are mapped to something. Return a Bool
+ indicating whether or not the operation was successful.
+*/
+static Bool extend ( ThreadState *tst, Addr addr, SizeT size )
+{
+ ThreadId tid = tst->tid;
+ Segment *stackseg = NULL;
+
+ if (VG_(extend_stack)(addr, tst->stack_size)) {
+ stackseg = VG_(find_segment)(addr);
+ if (0 && stackseg)
+ VG_(printf)("frame=%p seg=%p-%p\n",
+ addr, stackseg->addr, stackseg->addr+stackseg->len);
+ }
- tst = & VG_(threads)[tid];
+ if (stackseg == NULL
+ || (stackseg->prot & (VKI_PROT_READ|VKI_PROT_WRITE)) == 0) {
+ VG_(message)(
+ Vg_UserMsg,
+ "Can't extend stack to %p during signal delivery for thread %d:",
+ addr, tid);
+ if (stackseg == NULL)
+ VG_(message)(Vg_UserMsg, " no stack segment");
+ else
+ VG_(message)(Vg_UserMsg, " too small or bad protection modes");
+
+ /* set SIGSEGV to default handler */
+ VG_(set_default_handler)(VKI_SIGSEGV);
+ VG_(synth_fault_mapping)(tid, addr);
+
+      /* The whole process should be about to die, since the default
+         action of SIGSEGV is to kill the whole process. */
+ return False;
+ }
/* For tracking memory events, indicate the entire frame has been
- * allocated, but pretend that only the first four words are written */
- VG_TRACK( new_mem_stack_signal, (Addr)frame, sizeof(VgSigFrame) );
-
- /* Assert that the frame is placed correctly. */
- vg_assert( (sizeof(VgSigFrame) & 0x3) == 0 );
- vg_assert( ((Char*)(&frame->magicE)) + sizeof(UInt)
- == ((Char*)(esp_top_of_frame)) );
-
- /* retaddr, sigNo, psigInfo, puContext fields are to be written */
- VG_TRACK( pre_mem_write, Vg_CoreSignal, tid, "signal handler frame",
- (Addr)frame, offsetof(VgSigFrame, handlerArgs) );
- frame->retaddr = (UInt)VG_(client_trampoline_code)+VG_(tramp_sigreturn_offset);
- frame->sigNo = sigNo;
- frame->sigNo_private = sigNo;
- VG_TRACK( post_mem_write, Vg_CoreSignal, tid,
- (Addr)frame, offsetof(VgSigFrame, handlerArgs) );
-
- if (flags & VKI_SA_SIGINFO) {
- /* if the client asked for a siginfo delivery, then build the stack that way */
- VG_TRACK( pre_mem_write, Vg_CoreSignal, tid, "signal handler frame (siginfo)",
- (Addr)&frame->handlerArgs, sizeof(frame->handlerArgs.sigInfo) );
- frame->handlerArgs.sigInfo.psigInfo = (Addr)&frame->sigInfo;
- frame->handlerArgs.sigInfo.puContext = (Addr)&frame->uContext;
- VG_TRACK( post_mem_write, Vg_CoreSignal, tid,
- (Addr)&frame->handlerArgs, sizeof(frame->handlerArgs.sigInfo) );
-
- VG_TRACK( pre_mem_write, Vg_CoreSignal, tid, "signal handler frame (siginfo)",
- (Addr)&frame->sigInfo, sizeof(frame->sigInfo) );
- VG_(memcpy)(&frame->sigInfo, siginfo, sizeof(vki_siginfo_t));
- VG_TRACK( post_mem_write, Vg_CoreSignal, tid,
- (Addr)&frame->sigInfo, sizeof(frame->sigInfo) );
-
- VG_TRACK( pre_mem_write, Vg_CoreSignal, tid, "signal handler frame (siginfo)",
- (Addr)&frame->uContext, sizeof(frame->uContext) );
- synth_ucontext(tid, siginfo, mask, &frame->uContext);
- VG_TRACK( post_mem_write, Vg_CoreSignal, tid,
- (Addr)&frame->uContext, sizeof(frame->uContext) );
- } else {
- struct vki_ucontext uc;
-
- /* otherwise just put the sigcontext there */
-
- synth_ucontext(tid, siginfo, mask, &uc);
-
- VG_TRACK( pre_mem_write, Vg_CoreSignal, tid, "signal handler frame (sigcontext)",
- (Addr)&frame->handlerArgs, sizeof(frame->handlerArgs.sigContext) );
- VG_(memcpy)(&frame->handlerArgs.sigContext, &uc.uc_mcontext,
- sizeof(struct vki_sigcontext));
- VG_TRACK( post_mem_write, Vg_CoreSignal, tid,
- (Addr)&frame->handlerArgs, sizeof(frame->handlerArgs.sigContext) );
-
- frame->handlerArgs.sigContext.oldmask = tst->sig_mask.sig[0];
- }
+ allocated. */
+ VG_TRACK( new_mem_stack_signal, addr, size );
- frame->magicPI = 0x31415927;
+ return True;
+}
- frame->vex = tst->arch.vex;
- frame->vex_shadow = tst->arch.vex_shadow;
- frame->mask = tst->sig_mask;
+/* Build the Valgrind-specific part of a signal frame. */
- /* If the thread is currently blocked in a syscall, we want it to
- resume as runnable. */
- if (tst->status == VgTs_WaitSys)
- frame->status = VgTs_Runnable;
- else
- frame->status = tst->status;
-
- frame->associated_mx = tst->associated_mx;
- frame->associated_cv = tst->associated_cv;
+static void build_vg_sigframe(struct vg_sigframe *frame,
+ ThreadState *tst,
+ const vki_sigset_t *mask,
+ UInt flags,
+ Int sigNo)
+{
+ frame->sigNo_private = sigNo;
+ frame->magicPI = 0x31415927;
+ frame->vex_shadow = tst->arch.vex_shadow;
+ /* HACK ALERT */
+ frame->vex = tst->arch.vex;
+ /* end HACK ALERT */
+ frame->mask = tst->sig_mask;
+ frame->handlerflags = flags;
+ frame->magicE = 0x27182818;
+}
- frame->magicE = 0x27182818;
- /* Ensure 'tid' and 'tst' correspond */
- vg_assert(& VG_(threads)[tid] == tst);
- /* Set the thread so it will next run the handler. */
- /* tst->m_esp = esp; */
- SET_SIGNAL_ESP(tid, esp);
+static Addr build_sigframe(ThreadState *tst,
+ Addr esp_top_of_frame,
+ const vki_siginfo_t *siginfo,
+ void *handler, UInt flags,
+ const vki_sigset_t *mask,
+ void *restorer)
+{
+ struct sigframe *frame;
+ Addr esp = esp_top_of_frame;
+ Int sigNo = siginfo->si_signo;
+ struct vki_ucontext uc;
+
+ esp -= sizeof(*frame);
+ esp = ROUNDDN(esp, 16);
+ frame = (struct sigframe *)esp;
+
+ if (!extend(tst, esp, sizeof(*frame)))
+ return esp_top_of_frame;
+
+ /* retaddr, sigNo, siguContext fields are to be written */
+ VG_TRACK( pre_mem_write, Vg_CoreSignal, tst->tid, "signal handler frame",
+ esp, offsetof(struct sigframe, vg) );
+
+ frame->sigNo = sigNo;
+
+ if (flags & VKI_SA_RESTORER)
+ frame->retaddr = (Addr)restorer;
+ else {
+ if (flags & VKI_SA_SIGINFO)
+ frame->retaddr
+ = (UInt)VG_(client_trampoline_code)+VG_(tramp_rt_sigreturn_offset);
+ else
+ frame->retaddr
+ = (UInt)VG_(client_trampoline_code)+VG_(tramp_sigreturn_offset);
+ }
- tst->arch.vex.guest_EIP = (Addr) handler;
- /* This thread needs to be marked runnable, but we leave that the
- caller to do. */
+ synth_ucontext(tst->tid, siginfo, mask, &uc, &frame->fpstate);
- if (0)
- VG_(printf)("pushed signal frame; %%ESP now = %p, next %%EBP = %p, status=%d\n",
- esp, tst->arch.vex.guest_EIP, tst->status);
+ VG_(memcpy)(&frame->sigContext, &uc.uc_mcontext,
+ sizeof(struct vki_sigcontext));
+ frame->sigContext.oldmask = mask->sig[0];
+
+ VG_TRACK( post_mem_write, Vg_CoreSignal, tst->tid,
+ esp, offsetof(struct sigframe, vg) );
+
+ build_vg_sigframe(&frame->vg, tst, mask, flags, sigNo);
+
+ return esp;
}
-Int VGA_(pop_signal_frame)(ThreadId tid)
+
+static Addr build_rt_sigframe(ThreadState *tst,
+ Addr esp_top_of_frame,
+ const vki_siginfo_t *siginfo,
+ void *handler, UInt flags,
+ const vki_sigset_t *mask,
+ void *restorer)
{
- Addr esp;
- VgSigFrame* frame;
- ThreadState* tst;
+ struct rt_sigframe *frame;
+ Addr esp = esp_top_of_frame;
+ Int sigNo = siginfo->si_signo;
+
+ esp -= sizeof(*frame);
+ esp = ROUNDDN(esp, 16);
+ frame = (struct rt_sigframe *)esp;
+
+ if (!extend(tst, esp, sizeof(*frame)))
+ return esp_top_of_frame;
+
+ /* retaddr, sigNo, pSiginfo, puContext fields are to be written */
+ VG_TRACK( pre_mem_write, Vg_CoreSignal, tst->tid, "rt signal handler frame",
+ esp, offsetof(struct rt_sigframe, vg) );
+
+ frame->sigNo = sigNo;
+
+ if (flags & VKI_SA_RESTORER)
+ frame->retaddr = (Addr)restorer;
+ else {
+ if (flags & VKI_SA_SIGINFO)
+ frame->retaddr
+ = (UInt)VG_(client_trampoline_code)+VG_(tramp_rt_sigreturn_offset);
+ else
+ frame->retaddr
+ = (UInt)VG_(client_trampoline_code)+VG_(tramp_sigreturn_offset);
+ }
- vg_assert(VG_(is_valid_tid)(tid));
- tst = & VG_(threads)[tid];
+ frame->psigInfo = (Addr)&frame->sigInfo;
+ frame->puContext = (Addr)&frame->uContext;
+ VG_(memcpy)(&frame->sigInfo, siginfo, sizeof(vki_siginfo_t));
- /* Correctly reestablish the frame base address. */
- esp = tst->arch.vex.guest_ESP;
- frame = (VgSigFrame*)
- (esp -4 /* because the handler's RET pops the RA */
- +20 /* because signalreturn_bogusRA pushes 5 words */);
+ /* SIGILL defines addr to be the faulting address */
+ if (sigNo == VKI_SIGILL && siginfo->si_code > 0)
+ frame->sigInfo._sifields._sigfault._addr
+ = (void*)tst->arch.vex.guest_EIP;
- vg_assert(frame->magicPI == 0x31415927);
- vg_assert(frame->magicE == 0x27182818);
- if (VG_(clo_trace_signals))
- VG_(message)(Vg_DebugMsg,
- "vg_pop_signal_frame (thread %d): valid magic; EIP=%p", tid, frame->vex.guest_EIP);
+ synth_ucontext(tst->tid, siginfo, mask, &frame->uContext, &frame->fpstate);
- /* Mark the frame structure as nonaccessible. */
- VG_TRACK( die_mem_stack_signal, (Addr)frame, sizeof(VgSigFrame) );
+ VG_TRACK( post_mem_write, Vg_CoreSignal, tst->tid,
+ esp, offsetof(struct rt_sigframe, vg) );
+
+ build_vg_sigframe(&frame->vg, tst, mask, flags, sigNo);
+
+ return esp;
+}
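Both frame builders make room for the frame and then 16-byte-align `esp` with `ROUNDDN` before writing through the resulting pointer. A minimal sketch of that computation, with `rounddn` and `frame_esp` as hypothetical stand-ins for the real `ROUNDDN` macro and the in-function arithmetic:

```c
#include <stdint.h>

/* Stand-in for Valgrind's ROUNDDN: round v down to a multiple of
   align, where align is a power of two. */
static uintptr_t rounddn(uintptr_t v, uintptr_t align)
{
   return v & ~(align - 1);
}

/* Mimics the esp setup in build_sigframe/build_rt_sigframe: reserve
   space for the frame, then 16-byte-align the result downwards. */
static uintptr_t frame_esp(uintptr_t esp_top_of_frame, uintptr_t frame_size)
{
   uintptr_t esp = esp_top_of_frame - frame_size;
   return rounddn(esp, 16);
}
```

Rounding down (never up) is what keeps the whole frame inside the space just reserved below `esp_top_of_frame`.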
- /* restore machine state */
- tst->arch.vex = frame->vex;
- tst->arch.vex_shadow = frame->vex_shadow;
- /* And restore the thread's status to what it was before the signal
- was delivered. */
- tst->status = frame->status;
+void VGA_(push_signal_frame)(ThreadId tid, Addr esp_top_of_frame,
+ const vki_siginfo_t *siginfo,
+ void *handler, UInt flags,
+ const vki_sigset_t *mask,
+ void *restorer)
+{
+ Addr esp;
+ ThreadState* tst = VG_(get_ThreadState)(tid);
+
+ if (flags & VKI_SA_SIGINFO)
+ esp = build_rt_sigframe(tst, esp_top_of_frame, siginfo,
+ handler, flags, mask, restorer);
+ else
+ esp = build_sigframe(tst, esp_top_of_frame,
+ siginfo, handler, flags, mask, restorer);
- tst->associated_mx = frame->associated_mx;
- tst->associated_cv = frame->associated_cv;
+ /* Set the thread so it will next run the handler. */
+ /* tst->m_esp = esp; */
+ SET_SIGNAL_ESP(tid, esp);
- tst->sig_mask = frame->mask;
+ //VG_(printf)("handler = %p\n", handler);
+ tst->arch.vex.guest_EIP = (Addr) handler;
+   /* This thread needs to be marked runnable, but we leave that for
+      the caller to do. */
- /* don't use the copy exposed to the handler; it might have changed
- it. */
- return frame->sigNo_private;
+ if (0)
+ VG_(printf)("pushed signal frame; %%ESP now = %p, "
+ "next %%EIP = %p, status=%d\n",
+ esp, tst->arch.vex.guest_EIP, tst->status);
}
+
/*------------------------------------------------------------*/
-/*--- Making coredumps ---*/
+/*--- Destroying signal frames ---*/
/*------------------------------------------------------------*/
-// Nb: these functions do *not* represent the right way to abstract out the
-// arch-specific parts of coredumps. Some rethinking is required.
-#if 0
-void VGA_(fill_elfregs_from_tst)(struct vki_user_regs_struct* regs,
- ThreadArchState* arch)
+/* If it looks like the frame is corrupted, return False and do
+   nothing except set the client up to take a segfault. */
+static
+Bool restore_vg_sigframe ( ThreadState *tst,
+ struct vg_sigframe *frame, Int *sigNo )
{
- regs->eflags = LibVEX_GuestX86_get_eflags(&arch->vex);
- regs->esp = arch->vex.guest_ESP;
- regs->eip = arch->vex.guest_EIP;
-
- regs->ebx = arch->vex.guest_EBX;
- regs->ecx = arch->vex.guest_ECX;
- regs->edx = arch->vex.guest_EDX;
- regs->esi = arch->vex.guest_ESI;
- regs->edi = arch->vex.guest_EDI;
- regs->ebp = arch->vex.guest_EBP;
- regs->eax = arch->vex.guest_EAX;
-
- regs->cs = arch->vex.guest_CS;
- regs->ds = arch->vex.guest_DS;
- regs->ss = arch->vex.guest_SS;
- regs->es = arch->vex.guest_ES;
- regs->fs = arch->vex.guest_FS;
- regs->gs = arch->vex.guest_GS;
+ if (frame->magicPI != 0x31415927 ||
+ frame->magicE != 0x27182818) {
+ VG_(message)(Vg_UserMsg, "Thread %d return signal frame "
+ "corrupted. Killing process.",
+ tst->tid);
+ VG_(set_default_handler)(VKI_SIGSEGV);
+ VG_(synth_fault)(tst->tid);
+ *sigNo = VKI_SIGSEGV;
+ return False;
+ }
+ tst->sig_mask = frame->mask;
+ tst->tmp_sig_mask = frame->mask;
+ tst->arch.vex_shadow = frame->vex_shadow;
+ /* HACK ALERT */
+ tst->arch.vex = frame->vex;
+ /* end HACK ALERT */
+ *sigNo = frame->sigNo_private;
+ return True;
}
-static void fill_fpu(vki_elf_fpregset_t *fpu, const Char *from)
+static
+void restore_sigcontext( ThreadState *tst,
+ struct vki_sigcontext *sc,
+ struct _vki_fpstate *fpstate )
{
- UShort *to;
- Int i;
-
- /* This is what the kernel does */
- VG_(memcpy)(fpu, from, 7*sizeof(long));
-
- to = (UShort *)&fpu->st_space[0];
- from += 18 * sizeof(UShort);
-
- for (i = 0; i < 8; i++, to += 5, from += 8)
- VG_(memcpy)(to, from, 5*sizeof(UShort));
+//:: tst->arch.vex.guest_EAX = sc->eax;
+//:: tst->arch.vex.guest_ECX = sc->ecx;
+//:: tst->arch.vex.guest_EDX = sc->edx;
+//:: tst->arch.vex.guest_EBX = sc->ebx;
+//:: tst->arch.vex.guest_EBP = sc->ebp;
+//:: tst->arch.vex.guest_ESP = sc->esp;
+//:: tst->arch.vex.guest_ESI = sc->esi;
+//:: tst->arch.vex.guest_EDI = sc->edi;
+//:: tst->arch.vex.guest_eflags = sc->eflags;
+//:: tst->arch.vex.guest_EIP = sc->eip;
+//::
+//:: tst->arch.vex.guest_CS = sc->cs;
+//:: tst->arch.vex.guest_SS = sc->ss;
+//:: tst->arch.vex.guest_DS = sc->ds;
+//:: tst->arch.vex.guest_ES = sc->es;
+//:: tst->arch.vex.guest_FS = sc->fs;
+//:: tst->arch.vex.guest_GS = sc->gs;
+//::
+//:: restore_i387(&tst->arch, fpstate);
}
-void VGA_(fill_elffpregs_from_BB)( vki_elf_fpregset_t* fpu )
-{
- fill_fpu(fpu, (const Char *)&VG_(baseBlock)[VGOFF_(m_ssestate)]);
-}
-void VGA_(fill_elffpregs_from_tst)( vki_elf_fpregset_t* fpu,
- const ThreadArchState* arch)
+static
+SizeT restore_sigframe ( ThreadState *tst,
+ struct sigframe *frame, Int *sigNo )
{
- fill_fpu(fpu, (const Char *)&arch->m_sse);
+ if (restore_vg_sigframe(tst, &frame->vg, sigNo))
+ restore_sigcontext(tst, &frame->sigContext, &frame->fpstate);
+
+ return sizeof(*frame);
}
-void VGA_(fill_elffpxregs_from_BB) ( vki_elf_fpxregset_t* xfpu )
+static
+SizeT restore_rt_sigframe ( ThreadState *tst,
+ struct rt_sigframe *frame, Int *sigNo )
{
- VG_(memcpy)(xfpu, &VG_(baseBlock)[VGOFF_(m_ssestate)], sizeof(*xfpu));
+ if (restore_vg_sigframe(tst, &frame->vg, sigNo))
+ restore_sigcontext(tst, &frame->uContext.uc_mcontext, &frame->fpstate);
+
+ return sizeof(*frame);
}
-void VGA_(fill_elffpxregs_from_tst) ( vki_elf_fpxregset_t* xfpu,
- const ThreadArchState* arch )
+
+void VGA_(signal_return)(ThreadId tid, Bool isRT)
{
- VG_(memcpy)(xfpu, arch->m_sse, sizeof(*xfpu));
+ Addr esp;
+ ThreadState* tst;
+ SizeT size;
+ Int sigNo;
+
+ tst = VG_(get_ThreadState)(tid);
+
+ /* Correctly reestablish the frame base address. */
+ esp = tst->arch.vex.guest_ESP;
+
+ if (!isRT)
+ size = restore_sigframe(tst, (struct sigframe *)esp, &sigNo);
+ else
+ size = restore_rt_sigframe(tst, (struct rt_sigframe *)esp, &sigNo);
+
+ VG_TRACK( die_mem_stack_signal, esp, size );
+
+ if (VG_(clo_trace_signals))
+ VG_(message)(
+ Vg_DebugMsg,
+ "vg_pop_signal_frame (thread %d): isRT=%d valid magic; EIP=%p",
+ tid, isRT, tst->arch.vex.guest_EIP);
+
+ /* tell the tools */
+ VG_TRACK( post_deliver_signal, tid, sigNo );
}
-#endif
-/*--------------------------------------------------------------------*/
-/*--- end ---*/
-/*--------------------------------------------------------------------*/
+//:: /*------------------------------------------------------------*/
+//:: /*--- Making coredumps ---*/
+//:: /*------------------------------------------------------------*/
+//::
+//:: void VGA_(fill_elfregs_from_tst)(struct vki_user_regs_struct* regs,
+//:: const arch_thread_t* arch)
+//:: {
+//:: regs->eflags = arch->m_eflags;
+//:: regs->esp = arch->m_esp;
+//:: regs->eip = arch->m_eip;
+//::
+//:: regs->ebx = arch->m_ebx;
+//:: regs->ecx = arch->m_ecx;
+//:: regs->edx = arch->m_edx;
+//:: regs->esi = arch->m_esi;
+//:: regs->edi = arch->m_edi;
+//:: regs->ebp = arch->m_ebp;
+//:: regs->eax = arch->m_eax;
+//::
+//:: regs->cs = arch->m_cs;
+//:: regs->ds = arch->m_ds;
+//:: regs->ss = arch->m_ss;
+//:: regs->es = arch->m_es;
+//:: regs->fs = arch->m_fs;
+//:: regs->gs = arch->m_gs;
+//:: }
+//::
+//:: static void fill_fpu(vki_elf_fpregset_t *fpu, const Char *from)
+//:: {
+//:: if (VG_(have_ssestate)) {
+//:: UShort *to;
+//:: Int i;
+//::
+//:: /* This is what the kernel does */
+//:: VG_(memcpy)(fpu, from, 7*sizeof(long));
+//::
+//:: to = (UShort *)&fpu->st_space[0];
+//:: from += 18 * sizeof(UShort);
+//::
+//:: for (i = 0; i < 8; i++, to += 5, from += 8)
+//:: VG_(memcpy)(to, from, 5*sizeof(UShort));
+//:: } else
+//:: VG_(memcpy)(fpu, from, sizeof(*fpu));
+//:: }
+//::
+//:: void VGA_(fill_elffpregs_from_tst)( vki_elf_fpregset_t* fpu,
+//:: const arch_thread_t* arch)
+//:: {
+//:: fill_fpu(fpu, (const Char *)&arch->m_sse);
+//:: }
+//::
+//:: void VGA_(fill_elffpxregs_from_tst) ( vki_elf_fpxregset_t* xfpu,
+//:: const arch_thread_t* arch )
+//:: {
+//:: VG_(memcpy)(xfpu, arch->m_sse.state, sizeof(*xfpu));
+//:: }
+//::
+//:: /*--------------------------------------------------------------------*/
+//:: /*--- end ---*/
+//:: /*--------------------------------------------------------------------*/
void VGA_(setup_child) ( /*OUT*/ ThreadArchState *child,
/*IN*/ ThreadArchState *parent )
{
+ /* We inherit our parent's guest state. */
+ child->vex = parent->vex;
+ child->vex_shadow = parent->vex_shadow;
/* We inherit our parent's LDT. */
if (parent->vex.guest_LDT == (HWord)NULL) {
/* We hope this is the common case. */
VG_TRACK ( post_mem_write, Vg_CoreSignal, tid, esp, 2 * sizeof(UWord) );
}
+
+void VGA_(mark_from_registers)(ThreadId tid, void (*marker)(Addr))
+{
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+ ThreadArchState *arch = &tst->arch;
+
+ /* XXX ask tool about validity? */
+ (*marker)(arch->vex.guest_EAX);
+ (*marker)(arch->vex.guest_ECX);
+ (*marker)(arch->vex.guest_EDX);
+ (*marker)(arch->vex.guest_EBX);
+ (*marker)(arch->vex.guest_ESI);
+ (*marker)(arch->vex.guest_EDI);
+ (*marker)(arch->vex.guest_ESP);
+ (*marker)(arch->vex.guest_EBP);
+}
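`VGA_(mark_from_registers)` hands every integer register value to a caller-supplied marker, so leak checking can treat register contents as potential roots whether or not they are really pointers. A sketch of the same callback pattern, with `Regs`, `mark_from_registers` and `demo_mark_count` as hypothetical names (the real code reads `arch->vex.guest_*`):

```c
typedef unsigned long Addr_t;   /* stand-in for Valgrind's Addr */

/* Hypothetical register file standing in for ThreadArchState. */
typedef struct { Addr_t eax, ecx, edx, ebx, esi, edi, esp, ebp; } Regs;

static int    g_hits;   /* how many values the marker has seen */
static Addr_t g_last;   /* the most recent value */

static void marker(Addr_t a) { g_hits++; g_last = a; }

/* Same shape as VGA_(mark_from_registers): feed each register value
   to the marker, making no judgement about pointer-ness. */
static void mark_from_registers(const Regs *r, void (*mark)(Addr_t))
{
   mark(r->eax); mark(r->ecx); mark(r->edx); mark(r->ebx);
   mark(r->esi); mark(r->edi); mark(r->esp); mark(r->ebp);
}

/* Drive the sketch once and report how many values were marked. */
static int demo_mark_count(void)
{
   Regs r = { 1, 2, 3, 4, 5, 6, 7, 8 };
   g_hits = 0;
   mark_from_registers(&r, marker);
   return g_hits;
}
```

Passing a function pointer keeps the arch-specific register walk separate from the tool's root-marking policy.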
+
+
/*------------------------------------------------------------*/
/*--- Symtab stuff ---*/
/*------------------------------------------------------------*/
obj:*libc-2.1.3.so
obj:*libX11.so*
}
-
-##----------------------------------------------------------------------##
-## For a leak in Valgrind's own libpthread.so :(
-{
- my_malloc/get_or_allocate_specifics_ptr/pthread_key_create(Leak)
- Memcheck:Leak
- fun:malloc
- fun:my_malloc
- fun:get_or_allocate_specifics_ptr
- fun:pthread_key_create
-}
-
obj:*libc-2.2.?.so
fun:_dl_catch_error*
}
-
+{
+ _dl_relocate_object_internal
+ Memcheck:Cond
+ fun:_dl_relocate_object_internal
+}
#-------- SuSE 8.1 stuff (gcc-3.2, glibc-2.2.5 + SuSE's hacks)
{
}
{
- _dl_init/ld-2.2.4.so(Cond)
+ _dl_start/ld-2.2.4.so(Cond)
Memcheck:Cond
fun:_dl_start
obj:/lib/ld-2.2.4.so
}
+#-------- glibc 2.2.5/ Debian 3.0
+{
+ _dl_start/ld-2.2.5.so(Cond)
+ Memcheck:Cond
+ fun:_dl_start
+ obj:/lib/ld-2.2.5.so
+}
+
#-------------------
{
socketcall.connect(serv_addr)/connect/*
obj:/usr/X11R6/lib/libXt.so.6.0
}
-##----------------------------------------------------------------------##
-## For a leak in Valgrind's own libpthread.so :(
+# LinuxThreads suppressions
{
- my_malloc/get_or_allocate_specifics_ptr/pthread_key_create(Leak)
- Memcheck:Leak
- fun:malloc
- fun:my_malloc
- fun:get_or_allocate_specifics_ptr
- fun:pthread_key_create
+ LinuxThreads: write/pthread_create
+ Memcheck:Param
+ write(buf)
+ fun:pthread_create@@GLIBC_2.1
+}
+{
+ LinuxThreads: write/pthread_create
+ Memcheck:Param
+ write(buf)
+ fun:write
+ fun:pthread_create@@GLIBC_2.1
}
-
-
fun:dl_open_worker
}
+#-------- glibc 2.3.4/ Fedora Core 3
+{
+ dl_relocate_object
+ Memcheck:Cond
+ fun:_dl_relocate_object
+}
+
#-------- Data races
{
_dl_lookup_symbol_internal/fixup/_dl_runtime_resolve
fun:_IO_funlockfile
}
-##----------------------------------------------------------------------##
-## For a leak in Valgrind's own libpthread.so :(
-{
- my_malloc/get_or_allocate_specifics_ptr/pthread_key_create(Leak)
- Memcheck:Leak
- fun:malloc
- fun:my_malloc
- fun:get_or_allocate_specifics_ptr
- fun:pthread_key_create
-}
-
##----------------------------------------------------------------------##
## Bugs in helper library supplied with Intel Icc 7.0 (65)
## in /opt/intel/compiler70/ia32/lib/libcxa.so.3
Memcheck:Cond
obj:/lib/ld-2.3.3.so
}
+##----------------------------------------------------------------------##
+## glibc-2.3.3 on FC2
+## Assumes that sysctl returns \0-terminated strings in is_smp_system
+{
+ Unterminated strstr string in is_smp_system() (NPTL)
+ Memcheck:Cond
+ fun:strstr
+ fun:__pthread_initialize_minimal
+ obj:/lib/tls/libpthread-0.61.so
+ obj:/lib/tls/libpthread-0.61.so
+}
+{
+ Unterminated strstr string in is_smp_system() (LinuxThreads)
+ Memcheck:Cond
+ fun:strstr
+ fun:pthread_initialize
+ obj:/lib/i686/libpthread-0.10.so
+ obj:/lib/i686/libpthread-0.10.so
+}
+{
+ Unterminated strstr string in is_smp_system() (LinuxThreads)
+ Memcheck:Cond
+ fun:strstr
+ fun:pthread_initialize
+ obj:/lib/libpthread-0.10.so
+ obj:/lib/libpthread-0.10.so
+}
+
+## Bug in PRE(sys_clone), really. Some args are not used.
+{
+ LinuxThread clone use (parent_tidptr)
+ Memcheck:Param
+ clone(parent_tidptr)
+ fun:clone
+ fun:pthread_create
+}
+{
+ LinuxThread clone use (child_tidptr)
+ Memcheck:Param
+ clone(child_tidptr)
+ fun:clone
+ fun:pthread_create
+}
+{
+ LinuxThread clone use (tlsinfo)
+ Memcheck:Param
+ clone(tlsinfo)
+ fun:clone
+ fun:pthread_create
+}
+{
+ LinuxThread clone use (parent_tidptr)
+ Memcheck:Param
+ clone(parent_tidptr)
+ fun:clone
+ fun:pthread_create@@GLIBC_2.1
+}
+{
+ LinuxThread clone use (child_tidptr)
+ Memcheck:Param
+ clone(child_tidptr)
+ fun:clone
+ fun:pthread_create@@GLIBC_2.1
+}
+{
+ LinuxThread clone use (tlsinfo)
+ Memcheck:Param
+ clone(tlsinfo)
+ fun:clone
+ fun:pthread_create@@GLIBC_2.1
+}
+
+## LinuxThreads manager writes messages containing undefined bytes
+{
+ LinuxThreads: write/pthread_onexit_process
+ Memcheck:Param
+ write(buf)
+ fun:pthread_onexit_process
+ fun:exit
+}
+{
+ LinuxThreads: write/pthread_join
+ Memcheck:Param
+ write(buf)
+ fun:pthread_join
+}
+{
+ LinuxThreads: write/pthread_create
+ Memcheck:Param
+ write(buf)
+ fun:pthread_create@@GLIBC_2.1
+}
+{
+ LinuxThreads: write/__pthread_initialize_manager/pthread_create
+ Memcheck:Param
+ write(buf)
+ fun:__pthread_initialize_manager
+ fun:pthread_create@@GLIBC_2.1
+}
+
+{
+ LinuxThreads: write/pthread_create
+ Memcheck:Param
+ write(buf)
+ fun:write
+ fun:pthread_create
+}
+
+##----------------------------------------------------------------------##
+## glibc-2.3.4 on FC3
+## Assumes that sysctl returns \0-terminated strings in is_smp_system
+{
+ Unterminated strstr string in is_smp_system() (NPTL)
+ Memcheck:Cond
+ fun:strstr
+ fun:__pthread_initialize_minimal
+ obj:/lib/tls/libpthread-2.3.4.so
+ obj:/lib/tls/libpthread-2.3.4.so
+}
{
shadow_word sword;
- tl_assert(VG_INVALID_THREADID == VG_(get_current_tid)());
-
sword = SW(Vge_Virgin, TID_INDICATING_NONVIRGIN);
set_sword(a, virgin_sword);
REGPARM(1) static void eraser_mem_help_read_1(Addr a)
{
- eraser_mem_read(a, 1, VG_(get_current_tid)());
+ eraser_mem_read(a, 1, VG_(get_VCPU_tid)());
}
REGPARM(1) static void eraser_mem_help_read_2(Addr a)
{
- eraser_mem_read(a, 2, VG_(get_current_tid)());
+ eraser_mem_read(a, 2, VG_(get_VCPU_tid)());
}
REGPARM(1) static void eraser_mem_help_read_4(Addr a)
{
- eraser_mem_read(a, 4, VG_(get_current_tid)());
+ eraser_mem_read(a, 4, VG_(get_VCPU_tid)());
}
REGPARM(2) static void eraser_mem_help_read_N(Addr a, SizeT size)
{
- eraser_mem_read(a, size, VG_(get_current_tid)());
+ eraser_mem_read(a, size, VG_(get_VCPU_tid)());
}
REGPARM(2) static void eraser_mem_help_write_1(Addr a, UInt val)
{
if (*(UChar *)a != val)
- eraser_mem_write(a, 1, VG_(get_current_tid)());
+ eraser_mem_write(a, 1, VG_(get_VCPU_tid)());
}
REGPARM(2) static void eraser_mem_help_write_2(Addr a, UInt val)
{
if (*(UShort *)a != val)
- eraser_mem_write(a, 2, VG_(get_current_tid)());
+ eraser_mem_write(a, 2, VG_(get_VCPU_tid)());
}
REGPARM(2) static void eraser_mem_help_write_4(Addr a, UInt val)
{
if (*(UInt *)a != val)
- eraser_mem_write(a, 4, VG_(get_current_tid)());
+ eraser_mem_write(a, 4, VG_(get_VCPU_tid)());
}
REGPARM(2) static void eraser_mem_help_write_N(Addr a, SizeT size)
{
- eraser_mem_write(a, size, VG_(get_current_tid)());
+ eraser_mem_write(a, size, VG_(get_VCPU_tid)());
}
static void hg_thread_create(ThreadId parent, ThreadId child)
static void bus_lock(void)
{
- ThreadId tid = VG_(get_current_tid)();
+ ThreadId tid = VG_(get_VCPU_tid)();
eraser_pre_mutex_lock(tid, &__BUS_HARDWARE_LOCK__);
eraser_post_mutex_lock(tid, &__BUS_HARDWARE_LOCK__);
}
static void bus_unlock(void)
{
- ThreadId tid = VG_(get_current_tid)();
+ ThreadId tid = VG_(get_VCPU_tid)();
eraser_post_mutex_unlock(tid, &__BUS_HARDWARE_LOCK__);
}
// From linux-2.6.8.1/include/linux/sched.h
//----------------------------------------------------------------------
+#define VKI_CSIGNAL 0x000000ff /* signal mask to be sent at exit */
#define VKI_CLONE_VM 0x00000100 /* set if VM shared between processes */
#define VKI_CLONE_FS 0x00000200 /* set if fs info shared between processes */
#define VKI_CLONE_FILES 0x00000400 /* set if open files shared between processes */
#define VKI_CLONE_SIGHAND 0x00000800 /* set if signal handlers and blocked signals shared */
+#define VKI_CLONE_VFORK 0x00004000 /* set if the parent wants the child to wake it up on mm_release */
+#define VKI_CLONE_PARENT 0x00008000 /* set if we want to have the same parent as the cloner */
#define VKI_CLONE_THREAD 0x00010000 /* Same thread group? */
+#define VKI_CLONE_SYSVSEM 0x00040000 /* share system V SEM_UNDO semantics */
+#define VKI_CLONE_SETTLS 0x00080000 /* create a new TLS for the child */
#define VKI_CLONE_PARENT_SETTID 0x00100000 /* set the TID in the parent */
#define VKI_CLONE_CHILD_CLEARTID 0x00200000 /* clear the TID in the child */
#define VKI_CLONE_DETACHED 0x00400000 /* Unused, ignored */
// From nowhere: constants internal to Valgrind
//----------------------------------------------------------------------
-#define VKI_SIGVGINT (VKI_SIGRTMIN+0) // [[internal: interrupt]]
-#define VKI_SIGVGKILL (VKI_SIGRTMIN+1) // [[internal: kill]]
-#define VKI_SIGVGRTUSERMIN (VKI_SIGRTMIN+2) // [[internal: first
+/* Use high signals because native pthreads wants to use low */
+#define VKI_SIGVGKILL (VG_(max_signal)-0) // [[internal: kill]]
+#define VKI_SIGVGCHLD (VG_(max_signal)-1) // [[internal: thread death]]
+#define VKI_SIGVGRTUSERMAX (VG_(max_signal)-2) // [[internal: last user-usable RT signal]]
//----------------------------------------------------------------------
// From linux-2.6.8.1/include/asm-generic/siginfo.h
#define VKI_ESRCH 3 /* No such process */
#define VKI_EINTR 4 /* Interrupted system call */
#define VKI_EBADF 9 /* Bad file number */
+#define VKI_EAGAIN 11 /* Try again */
+#define VKI_EWOULDBLOCK VKI_EAGAIN
#define VKI_ENOMEM 12 /* Out of memory */
#define VKI_EACCES 13 /* Permission denied */
#define VKI_EFAULT 14 /* Bad address */
#define VKI_MREMAP_FIXED 2
//----------------------------------------------------------------------
-// From linux-2.6.8.1/include/linux/futex.h
+// From linux-2.6.10-rc3-mm1/include/linux/futex.h
//----------------------------------------------------------------------
#define VKI_FUTEX_WAIT (0)
+#define VKI_FUTEX_WAKE (1)
#define VKI_FUTEX_FD (2)
#define VKI_FUTEX_REQUEUE (3)
+#define VKI_FUTEX_CMP_REQUEUE (4)
//----------------------------------------------------------------------
// From linux-2.6.8.1/include/linux/errno.h
#define __TOOL_H
#include <stdarg.h> /* ANSI varargs stuff */
-#include <setjmp.h> /* for jmp_buf */
#include "basic_types.h"
-#include "tool_asm.h" // asm stuff
-#include "tool_arch.h" // arch-specific tool stuff
+#include "tool_asm.h" /* asm stuff */
+#include "tool_arch.h" /* arch-specific tool stuff */
#include "vki.h"
#include "libvex.h"
#define VG_CLO_STREQ(s1,s2) (0==VG_(strcmp_ws)((s1),(s2)))
#define VG_CLO_STREQN(nn,s1,s2) (0==VG_(strncmp_ws)((s1),(s2),(nn)))
-// Higher-level command-line option recognisers; use in if/else chains
+/* Higher-level command-line option recognisers; use in if/else chains */
#define VG_BOOL_CLO(qq_option, qq_var) \
if (VG_CLO_STREQ(arg, qq_option"=yes")) { (qq_var) = True; } \
(qq_var) = (Int)VG_(atoll)( &arg[ VG_(strlen)(qq_option)+1 ] ); \
}
-// Bounded integer arg
+/* Bounded integer arg */
#define VG_BNUM_CLO(qq_option, qq_var, qq_lo, qq_hi) \
if (VG_CLO_STREQN(VG_(strlen)(qq_option)+1, arg, qq_option"=")) { \
(qq_var) = (Int)VG_(atoll)( &arg[ VG_(strlen)(qq_option)+1 ] ); \
enum { Vg_UserMsg, /* '?' == '=' */
Vg_DebugMsg, /* '?' == '-' */
Vg_DebugExtraMsg, /* '?' == '+' */
- Vg_ClientMsg, /* '?' == '*' */
+ Vg_ClientMsg /* '?' == '*' */
}
VgMsgKind;
/* Functions for building a message from multiple parts. */
extern int VG_(start_msg) ( VgMsgKind kind );
-extern int VG_(add_to_msg) ( Char* format, ... );
+extern int VG_(add_to_msg) ( const Char* format, ... );
/* Ends and prints the message. Appends a newline. */
extern int VG_(end_msg) ( void );
/* Send a single-part message. Appends a newline. */
-extern int VG_(message) ( VgMsgKind kind, Char* format, ... );
-extern int VG_(vmessage) ( VgMsgKind kind, Char* format, va_list vargs );
+extern int VG_(message) ( VgMsgKind kind, const Char* format, ... );
+extern int VG_(vmessage) ( VgMsgKind kind, const Char* format, va_list vargs );
/*====================================================================*/
UInt
ThreadId;
-/* Returns the tid of the currently running thread. Only call it when
- running generated code. It will barf if there is no running thread.
- Will never return zero.
-*/
-extern ThreadId VG_(get_current_tid) ( void );
-
-/* Does the scheduler think we are running generated code right now? */
-extern Bool VG_(running_a_thread) ( void );
+/* Get the TID of the thread which currently has the CPU. */
+extern ThreadId VG_(get_running_tid) ( void );
/* Searches through all thread's stacks to see if any match. Returns
VG_INVALID_THREADID if none match. */
extern UInt VG_(printf) ( const char *format, ... );
/* too noisy ... __attribute__ ((format (printf, 1, 2))) ; */
extern UInt VG_(sprintf) ( Char* buf, Char *format, ... );
-extern UInt VG_(vprintf) ( void(*send)(Char),
- const Char *format, va_list vargs );
+extern UInt VG_(vprintf) ( void(*send)(Char, void *),
+ const Char *format, va_list vargs, void *send_arg );
extern Int VG_(rename) ( Char* old_name, Char* new_name );
extern void VG_(print_malloc_stats) ( void );
-
+/* terminate everything */
extern void VG_(exit)( Int status )
__attribute__ ((__noreturn__));
+
+/* terminate the calling thread - probably not what you want */
+extern void VG_(exit_single)( Int status )
+ __attribute__ ((__noreturn__));
+
/* Prints a panic message (a constant string), appends newline and bug
reporting info, aborts. */
__attribute__ ((__noreturn__))
extern Int VG_(getrlimit) ( Int resource, struct vki_rlimit *rlim );
/* Set client resource limit*/
-extern Int VG_(setrlimit) ( Int resource, struct vki_rlimit *rlim );
+extern Int VG_(setrlimit) ( Int resource, const struct vki_rlimit *rlim );
/* Crude stand-in for the glibc system() call. */
extern Int VG_(system) ( Char* cmd );
extern void *VG_(shadow_alloc)(UInt size);
-extern Bool VG_(is_addressable)(Addr p, SizeT sz);
+extern Bool VG_(is_addressable)(Addr p, SizeT sz, UInt prot);
extern Addr VG_(client_alloc)(Addr base, SizeT len, UInt prot, UInt flags);
extern void VG_(client_free)(Addr addr);
extern Bool VG_(is_valgrind_addr)(Addr a);
+/* Register an interest in apparently internal faults; used by code
+   which wanders around dangerous memory (ie, the leak checker).  The
+   catcher is not expected to return. */
+extern void VG_(set_fault_catcher)(void (*catcher)(Int sig, Addr addr));
+
/* initialize shadow pages in the range [p, p+sz) This calls
init_shadow_page for each one. It should be a lot more efficient
for bulk-initializing shadow pages than faulting on each one.
*/
extern void VG_(init_shadow_range)(Addr p, UInt sz, Bool call_init);
+/* Calls into the core used by leak-checking */
+
+/* Calls "add_rootrange" with each range of memory which looks like a
+ plausible source of root pointers. */
+extern void VG_(find_root_memory)(void (*add_rootrange)(Addr addr, SizeT sz));
+
+/* Calls "mark_addr" with register values (which may or may not be pointers) */
+extern void VG_(mark_from_registers)(void (*mark_addr)(Addr addr));
+
/* ------------------------------------------------------------------ */
/* signal.h.
extern Int VG_(sigfillset) ( vki_sigset_t* set );
extern Int VG_(sigemptyset) ( vki_sigset_t* set );
-extern Bool VG_(isfullsigset) ( vki_sigset_t* set );
-extern Bool VG_(isemptysigset) ( vki_sigset_t* set );
+extern Bool VG_(isfullsigset) ( const vki_sigset_t* set );
+extern Bool VG_(isemptysigset) ( const vki_sigset_t* set );
extern Int VG_(sigaddset) ( vki_sigset_t* set, Int signum );
extern Int VG_(sigdelset) ( vki_sigset_t* set, Int signum );
-extern Int VG_(sigismember) ( vki_sigset_t* set, Int signum );
+extern Int VG_(sigismember) ( const vki_sigset_t* set, Int signum );
extern void VG_(sigaddset_from_set) ( vki_sigset_t* dst, vki_sigset_t* src );
extern void VG_(sigdelset_from_set) ( vki_sigset_t* dst, vki_sigset_t* src );
/* other, randomly useful functions */
extern UInt VG_(read_millisecond_timer) ( void );
+extern Bool VG_(has_cpuid) ( void );
+
extern void VG_(cpuid) ( UInt eax,
UInt *eax_ret, UInt *ebx_ret,
UInt *ecx_ret, UInt *edx_ret );
new one. Either way, return a pointer to the context. Context size
controlled by --num-callers option.
- If called from generated code, use VG_(get_current_tid)() to get the
+ If called from generated code, use VG_(get_VCPU_tid)() to get the
current ThreadId. If called from non-generated code, the current
ThreadId should be passed in by the core.
*/
Vg_SectData,
Vg_SectBSS,
Vg_SectGOT,
- Vg_SectPLT,
+ Vg_SectPLT
}
VgSectKind;
}
/* List operations:
- SkipList_Find searchs a list. If it can't find an exact match, it either
- returns NULL or a pointer to the element before where k would go
+ SkipList_Find_* search a list. The 3 variants are:
+ Before: returns a node which is <= key, or NULL if none
+ Exact: returns a node which is == key, or NULL if none
+ After: returns a node which is >= key, or NULL if none
SkipList_Insert inserts a new element into the list. Duplicates are
forbidden. The element must have been created with SkipList_Alloc!
SkipList_Remove removes an element from the list and returns it. It
doesn't free the memory.
*/
-extern void *VG_(SkipList_Find) (const SkipList *l, void *key);
-extern void VG_(SkipList_Insert)( SkipList *l, void *data);
-extern void *VG_(SkipList_Remove)( SkipList *l, void *key);
+extern void *VG_(SkipList_Find_Before) (const SkipList *l, void *key);
+extern void *VG_(SkipList_Find_Exact) (const SkipList *l, void *key);
+extern void *VG_(SkipList_Find_After) (const SkipList *l, void *key);
+extern void VG_(SkipList_Insert) ( SkipList *l, void *data);
+extern void *VG_(SkipList_Remove) ( SkipList *l, void *key);
+
+/* Some useful standard comparisons */
+extern Int VG_(cmp_Addr) (const void *a, const void *b);
+extern Int VG_(cmp_Int) (const void *a, const void *b);
+extern Int VG_(cmp_UInt) (const void *a, const void *b);
+extern Int VG_(cmp_string)(const void *a, const void *b);
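The three find flavours differ only in how they compare against the key: Before returns the last node `<=` key, Exact the node `==` key, After the first node `>=` key, each yielding NULL when no such node exists. The contracts can be illustrated on a plain sorted array (names here are illustrative, not the real skip-list code):

```c
#include <stddef.h>

/* Illustration of the SkipList_Find_Before/Exact/After contracts,
   using linear search over a sorted int array in place of a skip
   list. */
static const int *find_before(const int *a, size_t n, int key)
{
   const int *best = NULL;
   for (size_t i = 0; i < n && a[i] <= key; i++)
      best = &a[i];               /* last element <= key */
   return best;
}

static const int *find_exact(const int *a, size_t n, int key)
{
   const int *p = find_before(a, n, key);
   return (p != NULL && *p == key) ? p : NULL;
}

static const int *find_after(const int *a, size_t n, int key)
{
   for (size_t i = 0; i < n; i++)
      if (a[i] >= key)            /* first element >= key */
         return &a[i];
   return NULL;
}

static const int demo[] = { 10, 20, 30 };
```

Before and After are useful for range queries (e.g. "which mapping covers this address"), where an exact hit is the uncommon case.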
/* Node (element) operations:
SkipNode_Alloc: allocate memory for a new element on the list. Must be
-
-/*
+/* -*- c -*-
----------------------------------------------------------------
Notice that the following BSD-style license applies to this one
#undef __@VG_ARCH@__
#define __@VG_ARCH@__ 1 // Architecture we're installed on
+
+/* If we're not compiling for our target architecture, don't generate
+ any inline asms. This would be a bit neater if we used the same
+ CPP symbols as the compiler for identifying architectures. */
+#if !(__x86__ && __i386__)
+# ifndef NVALGRIND
+# define NVALGRIND 1
+# endif /* NVALGRIND */
+#endif
+
+
/* This file is for inclusion into client (your!) code.
You can use these macros to manipulate and query Valgrind's
// amd64/core_arch.h!
#endif // __amd64__
#ifdef __x86__
-#define VALGRIND_MAGIC_SEQUENCE( \
- _zzq_rlval, _zzq_default, _zzq_request, \
- _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4) \
- \
- { volatile unsigned int _zzq_args[5]; \
- _zzq_args[0] = (volatile unsigned int)(_zzq_request); \
- _zzq_args[1] = (volatile unsigned int)(_zzq_arg1); \
- _zzq_args[2] = (volatile unsigned int)(_zzq_arg2); \
- _zzq_args[3] = (volatile unsigned int)(_zzq_arg3); \
- _zzq_args[4] = (volatile unsigned int)(_zzq_arg4); \
- asm volatile("movl %1, %%eax\n\t" \
- "movl %2, %%edx\n\t" \
- "roll $29, %%eax ; roll $3, %%eax\n\t" \
- "rorl $27, %%eax ; rorl $5, %%eax\n\t" \
- "roll $13, %%eax ; roll $19, %%eax\n\t" \
- "movl %%edx, %0\t" \
- : "=r" (_zzq_rlval) \
- : "r" (&_zzq_args[0]), "r" (_zzq_default) \
- : "eax", "edx", "cc", "memory" \
- ); \
+#define VALGRIND_MAGIC_SEQUENCE( \
+ _zzq_rlval, _zzq_default, _zzq_request, \
+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4) \
+ \
+ { unsigned int _zzq_args[5]; \
+ _zzq_args[0] = (unsigned int)(_zzq_request); \
+ _zzq_args[1] = (unsigned int)(_zzq_arg1); \
+ _zzq_args[2] = (unsigned int)(_zzq_arg2); \
+ _zzq_args[3] = (unsigned int)(_zzq_arg3); \
+ _zzq_args[4] = (unsigned int)(_zzq_arg4); \
+ asm volatile("roll $29, %%eax ; roll $3, %%eax\n\t" \
+ "rorl $27, %%eax ; rorl $5, %%eax\n\t" \
+ "roll $13, %%eax ; roll $19, %%eax" \
+ : "=d" (_zzq_rlval) \
+ : "a" (&_zzq_args[0]), "0" (_zzq_default) \
+ : "cc", "memory" \
+ ); \
}
#endif // __x86__
// Insert assembly code for other architectures here...
// From linux-2.6.8.1/include/asm-i386/mman.h
//----------------------------------------------------------------------
-//#define VKI_PROT_NONE 0x0 /* No page permissions */
+#define VKI_PROT_NONE 0x0 /* No page permissions */
#define VKI_PROT_READ 0x1 /* page can be read */
#define VKI_PROT_WRITE 0x2 /* page can be written */
#define VKI_PROT_EXEC 0x4 /* page can be executed */
//#define VKI_MAP_TYPE 0x0f /* Mask for type of mapping */
#define VKI_MAP_FIXED 0x10 /* Interpret addr exactly */
#define VKI_MAP_ANONYMOUS 0x20 /* don't use a file */
+#define VKI_MAP_NORESERVE 0x4000 /* don't check for reservations */
//----------------------------------------------------------------------
// From linux-2.6.8.1/include/asm-i386/fcntl.h
#define VKI_O_RDONLY 00
#define VKI_O_WRONLY 01
+#define VKI_O_RDWR 02
#define VKI_O_CREAT 0100 /* not fcntl */
#define VKI_O_EXCL 0200 /* not fcntl */
#define VKI_O_TRUNC 01000 /* not fcntl */
// From linux-2.6.8.1/include/asm-i386/stat.h
//----------------------------------------------------------------------
+#define VKI_S_IFMT 00170000
+#define VKI_S_IFSOCK 0140000
+#define VKI_S_IFLNK 0120000
+#define VKI_S_IFREG 0100000
+#define VKI_S_IFBLK 0060000
+#define VKI_S_IFDIR 0040000
+#define VKI_S_IFCHR 0020000
+#define VKI_S_IFIFO 0010000
+#define VKI_S_ISUID 0004000
+#define VKI_S_ISGID 0002000
+#define VKI_S_ISVTX 0001000
+
+#define VKI_S_ISLNK(m) (((m) & VKI_S_IFMT) == VKI_S_IFLNK)
+#define VKI_S_ISREG(m) (((m) & VKI_S_IFMT) == VKI_S_IFREG)
+#define VKI_S_ISDIR(m) (((m) & VKI_S_IFMT) == VKI_S_IFDIR)
+#define VKI_S_ISCHR(m) (((m) & VKI_S_IFMT) == VKI_S_IFCHR)
+#define VKI_S_ISBLK(m) (((m) & VKI_S_IFMT) == VKI_S_IFBLK)
+#define VKI_S_ISFIFO(m) (((m) & VKI_S_IFMT) == VKI_S_IFIFO)
+#define VKI_S_ISSOCK(m) (((m) & VKI_S_IFMT) == VKI_S_IFSOCK)
+
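The `VKI_S_IS*` predicates all work the same way: mask the mode down to its file-type bits with `VKI_S_IFMT`, then compare against one type constant. A standalone restatement of two of them (underscored names used here to avoid clashing with `<sys/stat.h>`):

```c
/* Same definitions as the VKI_S_* macros above, reproduced
   standalone: the top four octal digits of st_mode hold the type. */
#define S_IFMT_   00170000
#define S_IFREG_  0100000
#define S_IFDIR_  0040000

#define ISREG_(m) (((m) & S_IFMT_) == S_IFREG_)
#define ISDIR_(m) (((m) & S_IFMT_) == S_IFDIR_)
```

So a regular file with permissions 0644 carries mode 0100644, and the permission bits never disturb the type test.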
struct vki_stat {
unsigned long st_dev;
unsigned long st_ino;
#define _VKI_IOC_SIZEBITS 14
#define _VKI_IOC_DIRBITS 2
+#define _VKI_IOC_NRMASK ((1 << _VKI_IOC_NRBITS)-1)
+#define _VKI_IOC_TYPEMASK ((1 << _VKI_IOC_TYPEBITS)-1)
#define _VKI_IOC_SIZEMASK ((1 << _VKI_IOC_SIZEBITS)-1)
#define _VKI_IOC_DIRMASK ((1 << _VKI_IOC_DIRBITS)-1)
/* used to decode ioctl numbers.. */
#define _VKI_IOC_DIR(nr) (((nr) >> _VKI_IOC_DIRSHIFT) & _VKI_IOC_DIRMASK)
+#define _VKI_IOC_TYPE(nr) (((nr) >> _VKI_IOC_TYPESHIFT) & _VKI_IOC_TYPEMASK)
+#define _VKI_IOC_NR(nr) (((nr) >> _VKI_IOC_NRSHIFT) & _VKI_IOC_NRMASK)
#define _VKI_IOC_SIZE(nr) (((nr) >> _VKI_IOC_SIZESHIFT) & _VKI_IOC_SIZEMASK)
//----------------------------------------------------------------------
unsigned long __unused4;
};
+//----------------------------------------------------------------------
+// DRM ioctls
+//----------------------------------------------------------------------
+
+// jrs 20050207: where did all this stuff come from? Is it really
+// i386 specific, or should it go into the linux-generic category?
+//struct vki_drm_buf_pub {
+// Int idx; /**< Index into the master buffer list */
+// Int total; /**< Buffer size */
+// Int used; /**< Amount of buffer in use (for DMA) */
+// void __user *address; /**< Address of buffer */
+//};
+//
+//struct vki_drm_buf_map {
+// Int count; /**< Length of the buffer list */
+// void __user *virtual; /**< Mmap'd area in user-virtual */
+// struct vki_drm_buf_pub __user *list; /**< Buffer information */
+//};
+//
+///* We need to pay attention to this, because it mmaps memory */
+//#define VKI_DRM_IOCTL_MAP_BUFS _VKI_IOWR('d', 0x19, struct vki_drm_buf_map)
+
//----------------------------------------------------------------------
// From linux-2.6.9/include/asm-i386/ptrace.h
//----------------------------------------------------------------------
"__builtin_vec_new",
"calloc",
"realloc",
- "my_malloc", // from vg_libpthread.c
"memalign",
};
// Must return False so that all stacks are traversed
static Bool count_stack_size( Addr stack_min, Addr stack_max, void *cp )
{
+ if (0)
+ VG_(printf)("stack_max=%p stack_min=%p delta=%d\n", stack_max, stack_min, stack_max-stack_min);
*(UInt *)cp += (stack_max - stack_min);
return False;
}
// Stack(s) ----------------------------------------------------------
if (clo_stacks) {
- tl_assert(0 != total_ST);
VG_(message)(Vg_UserMsg, "stack(s): %s",
- make_perc(stack_ST, total_ST) );
+ ( 0 == stack_ST ? (Char*)"0%"
+ : make_perc(stack_ST, total_ST) ) );
}
if (VG_(clo_verbosity) > 1) {
$dir/../../tests/filter_stderr_basic |
-# Remove "Massif, ..." line and the following copyright line.
-sed "/^Massif, a space profiler./ , /./ d" |
-
-# Remove numbers from all lines
-sed "s/\([a-zA-Z(): ]*\)[ 0-9\.,()+rdw]*\(%\|ms.B\)$/\1/"
+# Remove numbers from all lines (and "(n/a)" strings)
+sed "s/\(Total spacetime: \).*$/\1/" |
+sed "s/\(heap: \).*$/\1/" |
+sed "s/\(heap admin: \).*$/\1/" |
+sed "s/\(stack(s): \).*$/\1/"
Attempting too-big mmap()...
Total spacetime:
-heap:
-heap admin:
+heap:
+heap admin:
stack(s):
Total spacetime:
-heap:
-heap admin:
+heap:
+heap admin:
stack(s):
Total spacetime:
-heap:
-heap admin:
+heap:
+heap admin:
stack(s):
The GNU General Public License is contained in the file COPYING.
*/
+#include <setjmp.h>
#include "mac_shared.h"
/* Define to debug the memory-leak-detector. */
-/* #define VG_DEBUG_LEAKCHECK */
+#define VG_DEBUG_LEAKCHECK 0
+#define VG_DEBUG_CLIQUE 0
+
+#define ROUNDDN(p, a) ((Addr)(p) & ~((a)-1))
+#define ROUNDUP(p, a) ROUNDDN((p)+(a)-1, (a))
+#define PGROUNDDN(p) ROUNDDN(p, VKI_PAGE_SIZE)
+#define PGROUNDUP(p) ROUNDUP(p, VKI_PAGE_SIZE)
/*------------------------------------------------------------*/
/*--- Low-level address-space scanning, for the leak ---*/
static
-void vg_scan_all_valid_memory_sighandler ( Int sigNo )
+void vg_scan_all_valid_memory_catcher ( Int sigNo, Addr addr )
{
- __builtin_longjmp(memscan_jmpbuf, 1);
-}
-
-
-/* Safely (avoiding SIGSEGV / SIGBUS) scan the entire valid address
- space and pass the addresses and values of all addressible,
- defined, aligned words to notify_word. This is the basis for the
- leak detector. Returns the number of calls made to notify_word.
-
- Addresses are validated 3 ways. First we enquire whether (addr >>
- 16) denotes a 64k chunk in use, by asking is_valid_64k_chunk(). If
- so, we decide for ourselves whether each x86-level (4 K) page in
- the chunk is safe to inspect. If yes, we enquire with
- is_valid_address() whether or not each of the 1024 word-locations
- on the page is valid. Only if so are that address and its contents
- passed to notify_word.
-
- This is all to avoid duplication of this machinery between
- Memcheck and Addrcheck.
-*/
-static
-UInt vg_scan_all_valid_memory ( Bool is_valid_64k_chunk ( UInt ),
- Bool is_valid_address ( Addr ),
- void (*notify_word)( Addr, UInt ) )
-{
- /* All volatile, because some gccs seem paranoid about longjmp(). */
- volatile Bool anyValid;
- volatile Addr pageBase, addr;
- volatile UInt res, numPages, page, primaryMapNo;
- volatile UInt page_first_word, nWordsNotified;
-
- struct vki_sigaction sigbus_saved;
- struct vki_sigaction sigbus_new;
- struct vki_sigaction sigsegv_saved;
- struct vki_sigaction sigsegv_new;
- vki_sigset_t blockmask_saved;
- vki_sigset_t unblockmask_new;
-
- /* Temporarily install a new sigsegv and sigbus handler, and make
- sure SIGBUS, SIGSEGV and SIGTERM are unblocked. (Perhaps the
- first two can never be blocked anyway?) */
-
- sigbus_new.ksa_handler = vg_scan_all_valid_memory_sighandler;
- sigbus_new.sa_flags = VKI_SA_ONSTACK | VKI_SA_RESTART;
- sigbus_new.sa_restorer = NULL;
- res = VG_(sigemptyset)( &sigbus_new.sa_mask );
- tl_assert(res == 0);
-
- sigsegv_new.ksa_handler = vg_scan_all_valid_memory_sighandler;
- sigsegv_new.sa_flags = VKI_SA_ONSTACK | VKI_SA_RESTART;
- sigsegv_new.sa_restorer = NULL;
- res = VG_(sigemptyset)( &sigsegv_new.sa_mask );
- tl_assert(res == 0+0);
-
- res = VG_(sigemptyset)( &unblockmask_new );
- res |= VG_(sigaddset)( &unblockmask_new, VKI_SIGBUS );
- res |= VG_(sigaddset)( &unblockmask_new, VKI_SIGSEGV );
- res |= VG_(sigaddset)( &unblockmask_new, VKI_SIGTERM );
- tl_assert(res == 0+0+0);
-
- res = VG_(sigaction)( VKI_SIGBUS, &sigbus_new, &sigbus_saved );
- tl_assert(res == 0+0+0+0);
-
- res = VG_(sigaction)( VKI_SIGSEGV, &sigsegv_new, &sigsegv_saved );
- tl_assert(res == 0+0+0+0+0);
-
- res = VG_(sigprocmask)( VKI_SIG_UNBLOCK, &unblockmask_new, &blockmask_saved );
- tl_assert(res == 0+0+0+0+0+0);
-
- /* The signal handlers are installed. Actually do the memory scan. */
- numPages = 1 << (32-VKI_PAGE_SHIFT);
- tl_assert(numPages == 1048576);
- tl_assert(4096 == (1 << VKI_PAGE_SHIFT));
-
- nWordsNotified = 0;
-
- for (page = 0; page < numPages; page++) {
-
- /* Base address of this 4k page. */
- pageBase = page << VKI_PAGE_SHIFT;
-
- /* Skip if this page is in an unused 64k chunk. */
- primaryMapNo = pageBase >> 16;
- if (!is_valid_64k_chunk(primaryMapNo))
- continue;
-
- /* Next, establish whether or not we want to consider any
- locations on this page. We need to do so before actually
- prodding it, because prodding it when in fact it is not
- needed can cause a page fault which under some rare
- circumstances can cause the kernel to extend the stack
- segment all the way down to here, which is seriously bad.
- Hence: */
- anyValid = False;
- for (addr = pageBase; addr < pageBase+VKI_PAGE_SIZE; addr += 4) {
- if (is_valid_address(addr)) {
- anyValid = True;
- break;
- }
- }
-
- if (!anyValid)
- continue; /* nothing interesting here .. move to the next page */
-
- /* Ok, we have to prod cautiously at the page and see if it
- explodes or not. */
- if (__builtin_setjmp(memscan_jmpbuf) == 0) {
- /* try this ... */
- page_first_word = * (volatile UInt*)pageBase;
- /* we get here if we didn't get a fault */
- /* Scan the page */
- for (addr = pageBase; addr < pageBase+VKI_PAGE_SIZE; addr += 4) {
- if (is_valid_address(addr)) {
- nWordsNotified++;
- notify_word ( addr, *(UInt*)addr );
- }
- }
- } else {
- /* We get here if reading the first word of the page caused a
- fault, which in turn caused the signal handler to longjmp.
- Ignore this page. */
- if (0)
- VG_(printf)(
- "vg_scan_all_valid_memory_sighandler: ignoring page at %p\n",
- (void*)pageBase
- );
- }
- }
-
- /* Restore signal state to whatever it was before. */
- res = VG_(sigaction)( VKI_SIGBUS, &sigbus_saved, NULL );
- tl_assert(res == 0 +0);
-
- res = VG_(sigaction)( VKI_SIGSEGV, &sigsegv_saved, NULL );
- tl_assert(res == 0 +0 +0);
-
- res = VG_(sigprocmask)( VKI_SIG_SETMASK, &blockmask_saved, NULL );
- tl_assert(res == 0 +0 +0 +0);
-
- return nWordsNotified;
+ if (0)
+ VG_(printf)("OUCH! sig=%d addr=%p\n", sigNo, addr);
+ if (sigNo == VKI_SIGSEGV || sigNo == VKI_SIGBUS)
+ __builtin_longjmp(memscan_jmpbuf, 1);
}
/*------------------------------------------------------------*/
-- Proper-ly reached; a pointer to its start has been found
-- Interior-ly reached; only an interior pointer to it has been found
-- Unreached; so far, no pointers to any part of it have been found.
+ -- IndirectLeak; leaked, but referred to by another leaked block
*/
-typedef
- enum { Unreached, Interior, Proper }
- Reachedness;
+typedef enum {
+ Unreached,
+ IndirectLeak,
+ Interior,
+ Proper
+ } Reachedness;
+
+/* An entry in the mark stack */
+typedef struct {
+ Int next:30; /* Index of next in mark stack */
+ UInt state:2; /* Reachedness */
+ SizeT indirect; /* if Unreached, how much is unreachable from here */
+} MarkStack;
/* A block record, used for generating err msgs. */
typedef
Reachedness loss_mode;
/* Number of blocks and total # bytes involved. */
UInt total_bytes;
+ UInt indirect_bytes;
UInt num_blocks;
}
LossRecord;
shadows[i]. Return -1 if none found. This assumes that shadows[]
has been sorted on the ->data field. */
-#ifdef VG_DEBUG_LEAKCHECK
+#if VG_DEBUG_LEAKCHECK
/* Used to sanity-check the fast binary-search mechanism. */
static
Int find_shadow_for_OLD ( Addr ptr,
for (i = 0; i < n_shadows; i++) {
PROF_EVENT(71);
a_lo = shadows[i]->data;
- a_hi = ((Addr)shadows[i]->data) + shadows[i]->size - 1;
+ a_hi = ((Addr)shadows[i]->data) + shadows[i]->size;
if (a_lo <= ptr && ptr <= a_hi)
return i;
}
mid = (lo + hi) / 2;
a_mid_lo = shadows[mid]->data;
- a_mid_hi = shadows[mid]->data + shadows[mid]->size - 1;
+ a_mid_hi = shadows[mid]->data + shadows[mid]->size;
if (ptr < a_mid_lo) {
hi = mid-1;
lo = mid+1;
continue;
}
- tl_assert(ptr >= a_mid_lo && ptr <= a_mid_hi);
+ sk_assert(ptr >= a_mid_lo && ptr <= a_mid_hi);
retVal = mid;
break;
}
-# ifdef VG_DEBUG_LEAKCHECK
- tl_assert(retVal == find_shadow_for_OLD ( ptr, shadows, n_shadows ));
+# if VG_DEBUG_LEAKCHECK
+ sk_assert(retVal == find_shadow_for_OLD ( ptr, shadows, n_shadows ));
# endif
/* VG_(printf)("%d\n", retVal); */
return retVal;
/* Globals, for the following callback used by VG_(detect_memory_leaks). */
static MAC_Chunk** lc_shadows;
static Int lc_n_shadows;
-static Reachedness* lc_reachedness;
+static MarkStack* lc_markstack;
+static Int lc_markstack_top;
static Addr lc_min_mallocd_addr;
static Addr lc_max_mallocd_addr;
+static SizeT lc_scanned;
-static
-void vg_detect_memory_leaks_notify_addr ( Addr a, UInt word_at_a )
+static Bool (*lc_is_valid_chunk) (UInt chunk);
+static Bool (*lc_is_valid_address)(Addr addr);
+
+static const Char *pp_lossmode(Reachedness lossmode)
{
- Int sh_no;
- Addr ptr;
-
- /* Rule out some known causes of bogus pointers. Mostly these do
- not cause much trouble because only a few false pointers can
- ever lurk in these places. This mainly stops it reporting that
- blocks are still reachable in stupid test programs like this
-
- int main (void) { char* a = malloc(100); return 0; }
-
- which people seem inordinately fond of writing, for some reason.
-
- Note that this is a complete kludge. It would be better to
- ignore any addresses corresponding to valgrind.so's .bss and
- .data segments, but I cannot think of a reliable way to identify
- where the .bss segment has been put. If you can, drop me a
- line.
- */
- if (!VG_(is_client_addr)(a)) return;
-
- /* OK, let's get on and do something Useful for a change. */
-
- ptr = (Addr)word_at_a;
- if (ptr >= lc_min_mallocd_addr && ptr <= lc_max_mallocd_addr) {
- /* Might be legitimate; we'll have to investigate further. */
- sh_no = find_shadow_for ( ptr, lc_shadows, lc_n_shadows );
- if (sh_no != -1) {
- /* Found a block at/into which ptr points. */
- tl_assert(sh_no >= 0 && sh_no < lc_n_shadows);
- tl_assert(ptr < lc_shadows[sh_no]->data + lc_shadows[sh_no]->size);
- /* Decide whether Proper-ly or Interior-ly reached. */
- if (ptr == lc_shadows[sh_no]->data) {
- if (0) VG_(printf)("pointer at %p to %p\n", a, word_at_a );
- lc_reachedness[sh_no] = Proper;
- } else {
- if (lc_reachedness[sh_no] == Unreached)
- lc_reachedness[sh_no] = Interior;
- }
- }
+ const Char *loss = "?";
+
+ switch(lossmode) {
+ case Unreached: loss = "definitely lost"; break;
+ case IndirectLeak: loss = "indirectly lost"; break;
+ case Interior: loss = "possibly lost"; break;
+ case Proper: loss = "still reachable"; break;
}
+
+ return loss;
}
/* Used for printing leak errors, avoids exposing the LossRecord type (which
void MAC_(pp_LeakError)(void* vl, UInt n_this_record, UInt n_total_records)
{
LossRecord* l = (LossRecord*)vl;
+ const Char *loss = pp_lossmode(l->loss_mode);
VG_(message)(Vg_UserMsg, "");
- VG_(message)(Vg_UserMsg,
- "%d bytes in %d blocks are %s in loss record %d of %d",
- l->total_bytes, l->num_blocks,
- l->loss_mode==Unreached ? "definitely lost"
- : (l->loss_mode==Interior ? "possibly lost"
- : "still reachable"),
- n_this_record, n_total_records
- );
+ if (l->indirect_bytes) {
+ VG_(message)(Vg_UserMsg,
+ "%d (%d direct, %d indirect) bytes in %d blocks are %s in loss record %d of %d",
+ l->total_bytes + l->indirect_bytes,
+ l->total_bytes, l->indirect_bytes, l->num_blocks,
+ loss, n_this_record, n_total_records);
+ } else {
+ VG_(message)(Vg_UserMsg,
+ "%d bytes in %d blocks are %s in loss record %d of %d",
+ l->total_bytes, l->num_blocks,
+ loss, n_this_record, n_total_records);
+ }
VG_(pp_ExeContext)(l->allocated_at);
}
Int MAC_(bytes_leaked) = 0;
+Int MAC_(bytes_indirect) = 0;
Int MAC_(bytes_dubious) = 0;
Int MAC_(bytes_reachable) = 0;
Int MAC_(bytes_suppressed) = 0;
return (mc1->data < mc2->data ? -1 : 1);
}
-/* Top level entry point to leak detector. Call here, passing in
- suitable address-validating functions (see comment at top of
- vg_scan_all_valid_memory above). All this is to avoid duplication
- of the leak-detection code for Memcheck and Addrcheck.
- Also pass in a tool-specific function to extract the .where field
- for allocated blocks, an indication of the resolution wanted for
- distinguishing different allocation points, and whether or not
- reachable blocks should be shown.
-*/
-void MAC_(do_detect_memory_leaks) (
- ThreadId tid,
- Bool is_valid_64k_chunk ( UInt ),
- Bool is_valid_address ( Addr )
-)
+/* If ptr is pointing to a heap-allocated block which hasn't been seen
+ before, push it onto the mark stack. Clique is the index of the
+ clique leader; -1 if none. */
+static void _lc_markstack_push(Addr ptr, Int clique)
{
- Int i;
- Int blocks_leaked;
- Int blocks_dubious;
- Int blocks_reachable;
- Int blocks_suppressed;
- Int n_lossrecords;
- UInt bytes_notified;
- Bool is_suppressed;
-
- LossRecord* errlist;
- LossRecord* p;
+ Int sh_no;
- /* VG_(HT_to_array) allocates storage for shadows */
- lc_shadows = (MAC_Chunk**)VG_(HT_to_array)( MAC_(malloc_list),
- &lc_n_shadows );
+ if (!VG_(is_client_addr)(ptr)) /* quick filter */
+ return;
- /* Sort the array. */
- VG_(ssort)((void*)lc_shadows, lc_n_shadows, sizeof(VgHashNode*), lc_compar);
+ sh_no = find_shadow_for(ptr, lc_shadows, lc_n_shadows);
- /* Sanity check; assert that the blocks are now in order */
- for (i = 0; i < lc_n_shadows-1; i++) {
- tl_assert( lc_shadows[i]->data <= lc_shadows[i+1]->data);
+ if (VG_DEBUG_LEAKCHECK)
+ VG_(printf)("ptr=%p -> block %d\n", ptr, sh_no);
+
+ if (sh_no == -1)
+ return;
+
+ sk_assert(sh_no >= 0 && sh_no < lc_n_shadows);
+ sk_assert(ptr <= lc_shadows[sh_no]->data + lc_shadows[sh_no]->size);
+
+ if (lc_markstack[sh_no].state == Unreached) {
+ if (0)
+ VG_(printf)("pushing %p-%p\n", lc_shadows[sh_no]->data,
+ lc_shadows[sh_no]->data + lc_shadows[sh_no]->size);
+
+ sk_assert(lc_markstack[sh_no].next == -1);
+ lc_markstack[sh_no].next = lc_markstack_top;
+ lc_markstack_top = sh_no;
}
- /* Sanity check -- make sure they don't overlap */
- for (i = 0; i < lc_n_shadows-1; i++) {
- tl_assert( lc_shadows[i]->data + lc_shadows[i]->size
- < lc_shadows[i+1]->data );
+ if (clique != -1) {
+ if (0)
+ VG_(printf)("mopup: %d: %p is %d\n",
+ sh_no, lc_shadows[sh_no]->data, lc_markstack[sh_no].state);
+
+ /* An unmarked block - add it to the clique. Add its size to
+ the clique-leader's indirect size. If the new block was
+ itself a clique leader, it isn't any more, so add its
+ indirect to the new clique leader.
+
+ If this block *is* the clique leader, it means this is a
+ cyclic structure, so none of this applies. */
+ if (lc_markstack[sh_no].state == Unreached) {
+ lc_markstack[sh_no].state = IndirectLeak;
+
+ if (sh_no != clique) {
+ if (VG_DEBUG_CLIQUE) {
+ if (lc_markstack[sh_no].indirect)
+ VG_(printf)(" clique %d joining clique %d adding %d+%d bytes\n",
+ sh_no, clique,
+ lc_shadows[sh_no]->size, lc_markstack[sh_no].indirect);
+ else
+ VG_(printf)(" %d joining %d adding %d\n",
+ sh_no, clique, lc_shadows[sh_no]->size);
+ }
+
+ lc_markstack[clique].indirect += lc_shadows[sh_no]->size;
+ lc_markstack[clique].indirect += lc_markstack[sh_no].indirect;
+ lc_markstack[sh_no].indirect = 0; /* shouldn't matter */
+ }
+ }
+ } else if (ptr == lc_shadows[sh_no]->data) {
+ lc_markstack[sh_no].state = Proper;
+ } else {
+ if (lc_markstack[sh_no].state == Unreached)
+ lc_markstack[sh_no].state = Interior;
}
+}
- if (lc_n_shadows == 0) {
- tl_assert(lc_shadows == NULL);
- if (VG_(clo_verbosity) >= 1) {
- VG_(message)(Vg_UserMsg,
- "No malloc'd blocks -- no leaks are possible.");
+static void lc_markstack_push(Addr ptr)
+{
+ _lc_markstack_push(ptr, -1);
+}
+
+/* Return the top of the mark stack, if any. */
+static Int lc_markstack_pop(void)
+{
+ Int ret = lc_markstack_top;
+
+ if (ret != -1) {
+ lc_markstack_top = lc_markstack[ret].next;
+ lc_markstack[ret].next = -1;
+ }
+
+ return ret;
+}
+
+/* Scan a block of memory between [start, start+len). This range may
+ be bogus, inaccessible, or otherwise strange; we deal with it.
+
+ If clique != -1, it means we're gathering leaked memory into
+ cliques, and clique is the index of the current clique leader. */
+static void _lc_scan_memory(Addr start, SizeT len, Int clique)
+{
+ Addr ptr = ROUNDUP(start, sizeof(Addr));
+ Addr end = ROUNDDN(start+len, sizeof(Addr));
+ vki_sigset_t sigmask;
+
+ if (VG_DEBUG_LEAKCHECK)
+ VG_(printf)("scan %p-%p\n", start, start+len);
+ VG_(sigprocmask)(VKI_SIG_SETMASK, NULL, &sigmask);
+ VG_(set_fault_catcher)(vg_scan_all_valid_memory_catcher);
+
+ lc_scanned += end-ptr;
+
+ if (!VG_(is_client_addr)(ptr) ||
+ !VG_(is_addressable)(ptr, sizeof(Addr), VKI_PROT_READ))
+ ptr = PGROUNDUP(ptr+1); /* first page bad */
+
+ while(ptr < end) {
+ Addr addr;
+
+ /* Skip invalid chunks */
+ if (!(*lc_is_valid_chunk)(PM_IDX(ptr))) {
+ ptr = ROUNDUP(ptr+1, SECONDARY_SIZE);
+ continue;
+ }
+
+ /* Look to see if this page seems reasonable */
+ if ((ptr % VKI_PAGE_SIZE) == 0) {
+ if (!VG_(is_client_addr)(ptr) ||
+ !VG_(is_addressable)(ptr, sizeof(Addr), VKI_PROT_READ))
+ ptr += VKI_PAGE_SIZE; /* bad page - skip it */
+ }
+
+ if (__builtin_setjmp(memscan_jmpbuf) == 0) {
+ if ((*lc_is_valid_address)(ptr)) {
+ addr = *(Addr *)ptr;
+ _lc_markstack_push(addr, clique);
+ } else if (0 && VG_DEBUG_LEAKCHECK)
+ VG_(printf)("%p not valid\n", ptr);
+ ptr += sizeof(Addr);
+ } else {
+ /* We need to restore the signal mask, because we were
+ longjmped out of a signal handler. */
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &sigmask, NULL);
+
+ ptr = PGROUNDUP(ptr+1); /* bad page - skip it */
}
- return;
}
- if (VG_(clo_verbosity) > 0)
- VG_(message)(Vg_UserMsg,
- "searching for pointers to %d not-freed blocks.",
- lc_n_shadows );
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &sigmask, NULL);
+ VG_(set_fault_catcher)(NULL);
+}
- lc_min_mallocd_addr = lc_shadows[0]->data;
- lc_max_mallocd_addr = lc_shadows[lc_n_shadows-1]->data
- + lc_shadows[lc_n_shadows-1]->size - 1;
+static void lc_scan_memory(Addr start, SizeT len)
+{
+ _lc_scan_memory(start, len, -1);
+}
- lc_reachedness = VG_(malloc)( lc_n_shadows * sizeof(Reachedness) );
- for (i = 0; i < lc_n_shadows; i++)
- lc_reachedness[i] = Unreached;
+/* Process the mark stack until empty. If clique != -1, then we're
+ actually gathering leaked blocks into that clique, so they should be
+ marked IndirectLeak. */
+static void lc_do_leakcheck(Int clique)
+{
+ Int top;
- /* Do the scan of memory. */
- bytes_notified
- = sizeof(UWord)
- * vg_scan_all_valid_memory (
- is_valid_64k_chunk,
- is_valid_address,
- &vg_detect_memory_leaks_notify_addr
- );
+ while((top = lc_markstack_pop()) != -1) {
+ sk_assert(top >= 0 && top < lc_n_shadows);
+ sk_assert(lc_markstack[top].state != Unreached);
- if (VG_(clo_verbosity) > 0)
- VG_(message)(Vg_UserMsg, "checked %d bytes.", bytes_notified);
+ _lc_scan_memory(lc_shadows[top]->data, lc_shadows[top]->size, clique);
+ }
+}
+
+static Int blocks_leaked;
+static Int blocks_indirect;
+static Int blocks_dubious;
+static Int blocks_reachable;
+static Int blocks_suppressed;
+
+static void full_report()
+{
+ Int i;
+ Int n_lossrecords;
+ LossRecord* errlist;
+ LossRecord* p;
+ Bool is_suppressed;
+
+ /* Go through and group lost structures into cliques. For each
+ Unreached block, push it onto the mark stack, and find all the
+ blocks linked to it. These are marked IndirectLeak, and their
+ size is added to the clique leader's indirect size. If one of
+ the found blocks was itself a clique leader (from a previous
+ pass), then the cliques are merged. */
+ for (i = 0; i < lc_n_shadows; i++) {
+ if (VG_DEBUG_CLIQUE)
+ VG_(printf)("cliques: %d at %p -> %s\n",
+ i, lc_shadows[i]->data, pp_lossmode(lc_markstack[i].state));
+ if (lc_markstack[i].state != Unreached)
+ continue;
+
+ sk_assert(lc_markstack_top == -1);
+ if (VG_DEBUG_CLIQUE)
+ VG_(printf)("%d: gathering clique %p\n", i, lc_shadows[i]->data);
+
+ _lc_markstack_push(lc_shadows[i]->data, i);
+
+ lc_do_leakcheck(i);
+
+ sk_assert(lc_markstack_top == -1);
+ sk_assert(lc_markstack[i].state == IndirectLeak);
+
+ lc_markstack[i].state = Unreached; /* Return to unreached state,
+ to indicate it's a clique
+ leader */
+ }
+
/* Common up the lost blocks so we can print sensible error messages. */
n_lossrecords = 0;
errlist = NULL;
for (i = 0; i < lc_n_shadows; i++) {
-
ExeContext* where = lc_shadows[i]->where;
-
+
for (p = errlist; p != NULL; p = p->next) {
- if (p->loss_mode == lc_reachedness[i]
+ if (p->loss_mode == lc_markstack[i].state
&& VG_(eq_ExeContext) ( MAC_(clo_leak_resolution),
p->allocated_at,
where) ) {
if (p != NULL) {
p->num_blocks ++;
p->total_bytes += lc_shadows[i]->size;
+ p->indirect_bytes += lc_markstack[i].indirect;
} else {
n_lossrecords ++;
p = VG_(malloc)(sizeof(LossRecord));
- p->loss_mode = lc_reachedness[i];
+ p->loss_mode = lc_markstack[i].state;
p->allocated_at = where;
p->total_bytes = lc_shadows[i]->size;
+ p->indirect_bytes = lc_markstack[i].indirect;
p->num_blocks = 1;
p->next = errlist;
errlist = p;
}
/* Print out the commoned-up blocks and collect summary stats. */
- blocks_leaked = MAC_(bytes_leaked) = 0;
- blocks_dubious = MAC_(bytes_dubious) = 0;
- blocks_reachable = MAC_(bytes_reachable) = 0;
- blocks_suppressed = MAC_(bytes_suppressed) = 0;
-
for (i = 0; i < n_lossrecords; i++) {
Bool print_record;
LossRecord* p_min = NULL;
UInt n_min = 0xFFFFFFFF;
for (p = errlist; p != NULL; p = p->next) {
if (p->num_blocks > 0 && p->total_bytes < n_min) {
- n_min = p->total_bytes;
+ n_min = p->total_bytes + p->indirect_bytes;
p_min = p;
}
}
- tl_assert(p_min != NULL);
+ sk_assert(p_min != NULL);
/* Ok to have tst==NULL; it's only used if --gdb-attach=yes, and
we disallow that when --leak-check=yes.
- Prints the error if not suppressed, unless it's reachable (Proper)
+ Prints the error if not suppressed, unless it's reachable (Proper or IndirectLeak)
and --show-reachable=no */
- print_record = ( MAC_(clo_show_reachable) || Proper != p_min->loss_mode );
+ print_record = ( MAC_(clo_show_reachable) ||
+ Unreached == p_min->loss_mode || Interior == p_min->loss_mode );
is_suppressed =
- VG_(unique_error) ( tid, LeakErr, (UInt)i+1,
- (Char*)(UWord)n_lossrecords, (void*) p_min,
+ VG_(unique_error) ( VG_(get_VCPU_tid)(), LeakErr, (UInt)i+1,
+ (Char*)n_lossrecords, (void*) p_min,
p_min->allocated_at, print_record,
/*allow_GDB_attach*/False, /*count_error*/False );
blocks_leaked += p_min->num_blocks;
MAC_(bytes_leaked) += p_min->total_bytes;
+ } else if (IndirectLeak == p_min->loss_mode) {
+ blocks_indirect += p_min->num_blocks;
+ MAC_(bytes_indirect) += p_min->total_bytes;
+
} else if (Interior == p_min->loss_mode) {
blocks_dubious += p_min->num_blocks;
MAC_(bytes_dubious) += p_min->total_bytes;
MAC_(bytes_reachable) += p_min->total_bytes;
} else {
- VG_(tool_panic)("generic_detect_memory_leaks: unknown loss mode");
+ VG_(skin_panic)("generic_detect_memory_leaks: unknown loss mode");
}
p_min->num_blocks = 0;
}
+}
+
+/* Compute a quick summary of the leak check. */
+static void make_summary()
+{
+ Int i;
+
+ for(i = 0; i < lc_n_shadows; i++) {
+ SizeT size = lc_shadows[i]->size;
+
+ switch(lc_markstack[i].state) {
+ case Unreached:
+ blocks_leaked++;
+ MAC_(bytes_leaked) += size;
+ break;
+
+ case Proper:
+ blocks_reachable++;
+ MAC_(bytes_reachable) += size;
+ break;
+
+ case Interior:
+ blocks_dubious++;
+ MAC_(bytes_dubious) += size;
+ break;
+
+ case IndirectLeak: /* shouldn't happen */
+ blocks_indirect++;
+ MAC_(bytes_indirect) += size;
+ break;
+ }
+ }
+}
+
+/* Top level entry point to leak detector. Call here, passing in
+ suitable address-validating functions (see comment at top of
+ vg_scan_all_valid_memory above). All this is to avoid duplication
+ of the leak-detection code for Memcheck and Addrcheck.
+ Also pass in a tool-specific function to extract the .where field
+ for allocated blocks, an indication of the resolution wanted for
+ distinguishing different allocation points, and whether or not
+ reachable blocks should be shown.
+*/
+void MAC_(do_detect_memory_leaks) (
+ LeakCheckMode mode,
+ Bool (*is_valid_64k_chunk) ( UInt ),
+ Bool (*is_valid_address) ( Addr )
+)
+{
+ Int i;
+
+ sk_assert(mode != LC_Off);
+
+ /* VG_(HT_to_array) allocates storage for shadows */
+ lc_shadows = (MAC_Chunk**)VG_(HT_to_array)( MAC_(malloc_list),
+ &lc_n_shadows );
+
+ /* Sort the array. */
+ VG_(ssort)((void*)lc_shadows, lc_n_shadows, sizeof(VgHashNode*), lc_compar);
+
+ /* Sanity check; assert that the blocks are now in order */
+ for (i = 0; i < lc_n_shadows-1; i++) {
+ sk_assert( lc_shadows[i]->data <= lc_shadows[i+1]->data);
+ }
+
+ /* Sanity check -- make sure they don't overlap */
+ for (i = 0; i < lc_n_shadows-1; i++) {
+ sk_assert( lc_shadows[i]->data + lc_shadows[i]->size
+ < lc_shadows[i+1]->data );
+ }
+
+ if (lc_n_shadows == 0) {
+ sk_assert(lc_shadows == NULL);
+ if (VG_(clo_verbosity) >= 1) {
+ VG_(message)(Vg_UserMsg,
+ "No malloc'd blocks -- no leaks are possible.");
+ }
+ return;
+ }
+
+ if (VG_(clo_verbosity) > 0)
+ VG_(message)(Vg_UserMsg,
+ "searching for pointers to %d not-freed blocks.",
+ lc_n_shadows );
+
+ lc_min_mallocd_addr = lc_shadows[0]->data;
+ lc_max_mallocd_addr = lc_shadows[lc_n_shadows-1]->data
+ + lc_shadows[lc_n_shadows-1]->size;
+
+ lc_markstack = VG_(malloc)( lc_n_shadows * sizeof(*lc_markstack) );
+ for (i = 0; i < lc_n_shadows; i++) {
+ lc_markstack[i].next = -1;
+ lc_markstack[i].state = Unreached;
+ lc_markstack[i].indirect = 0;
+ }
+ lc_markstack_top = -1;
+
+ lc_is_valid_chunk = is_valid_64k_chunk;
+ lc_is_valid_address = is_valid_address;
+
+ lc_scanned = 0;
+
+ /* Do the scan of memory, pushing any pointers onto the mark stack */
+ VG_(find_root_memory)(lc_scan_memory);
+
+ /* Push registers onto mark stack */
+ VG_(mark_from_registers)(lc_markstack_push);
+
+ /* Keep walking the heap until everything is found */
+ lc_do_leakcheck(-1);
+
+ if (VG_(clo_verbosity) > 0)
+ VG_(message)(Vg_UserMsg, "checked %d bytes.", lc_scanned);
+
+ blocks_leaked = MAC_(bytes_leaked) = 0;
+ blocks_indirect = MAC_(bytes_indirect) = 0;
+ blocks_dubious = MAC_(bytes_dubious) = 0;
+ blocks_reachable = MAC_(bytes_reachable) = 0;
+ blocks_suppressed = MAC_(bytes_suppressed) = 0;
+
+ if (mode == LC_Full)
+ full_report();
+ else
+ make_summary();
if (VG_(clo_verbosity) > 0) {
VG_(message)(Vg_UserMsg, "");
VG_(message)(Vg_UserMsg, "LEAK SUMMARY:");
VG_(message)(Vg_UserMsg, " definitely lost: %d bytes in %d blocks.",
MAC_(bytes_leaked), blocks_leaked );
- VG_(message)(Vg_UserMsg, " possibly lost: %d bytes in %d blocks.",
+ if (blocks_indirect > 0)
+ VG_(message)(Vg_UserMsg, " indirectly lost: %d bytes in %d blocks.",
+ MAC_(bytes_indirect), blocks_indirect );
+ VG_(message)(Vg_UserMsg, " possibly lost: %d bytes in %d blocks.",
MAC_(bytes_dubious), blocks_dubious );
VG_(message)(Vg_UserMsg, " still reachable: %d bytes in %d blocks.",
MAC_(bytes_reachable), blocks_reachable );
VG_(message)(Vg_UserMsg, " suppressed: %d bytes in %d blocks.",
MAC_(bytes_suppressed), blocks_suppressed );
- if (!MAC_(clo_show_reachable)) {
+ if (mode == LC_Summary)
+ VG_(message)(Vg_UserMsg,
+ "Use --leak-check=full to see details of leaked memory.");
+ else if (!MAC_(clo_show_reachable)) {
VG_(message)(Vg_UserMsg,
"Reachable blocks (those to which a pointer was found) are not shown.");
VG_(message)(Vg_UserMsg,
}
VG_(free) ( lc_shadows );
- VG_(free) ( lc_reachedness );
+ VG_(free) ( lc_markstack );
}
/*--------------------------------------------------------------------*/
/*--- Command line options ---*/
/*------------------------------------------------------------*/
-Bool MAC_(clo_partial_loads_ok) = True;
-Int MAC_(clo_freelist_vol) = 1000000;
-Bool MAC_(clo_leak_check) = False;
-VgRes MAC_(clo_leak_resolution) = Vg_LowRes;
-Bool MAC_(clo_show_reachable) = False;
-Bool MAC_(clo_workaround_gcc296_bugs) = False;
+Bool MAC_(clo_partial_loads_ok) = True;
+Int MAC_(clo_freelist_vol) = 1000000;
+LeakCheckMode MAC_(clo_leak_check) = LC_Off;
+VgRes MAC_(clo_leak_resolution) = Vg_LowRes;
+Bool MAC_(clo_show_reachable) = False;
+Bool MAC_(clo_workaround_gcc296_bugs) = False;
Bool MAC_(process_common_cmd_line_option)(Char* arg)
{
- VG_BOOL_CLO("--leak-check", MAC_(clo_leak_check))
- else VG_BOOL_CLO("--partial-loads-ok", MAC_(clo_partial_loads_ok))
+ VG_BOOL_CLO("--partial-loads-ok", MAC_(clo_partial_loads_ok))
else VG_BOOL_CLO("--show-reachable", MAC_(clo_show_reachable))
else VG_BOOL_CLO("--workaround-gcc296-bugs",MAC_(clo_workaround_gcc296_bugs))
else VG_BNUM_CLO("--freelist-vol", MAC_(clo_freelist_vol), 0, 1000000000)
+ else if (VG_CLO_STREQ(arg, "--leak-check=no"))
+ MAC_(clo_leak_check) = LC_Off;
+ else if (VG_CLO_STREQ(arg, "--leak-check=summary"))
+ MAC_(clo_leak_check) = LC_Summary;
+ else if (VG_CLO_STREQ(arg, "--leak-check=yes") ||
+ VG_CLO_STREQ(arg, "--leak-check=full"))
+ MAC_(clo_leak_check) = LC_Full;
+
else if (VG_CLO_STREQ(arg, "--leak-resolution=low"))
MAC_(clo_leak_resolution) = Vg_LowRes;
else if (VG_CLO_STREQ(arg, "--leak-resolution=med"))
void MAC_(print_common_usage)(void)
{
VG_(printf)(
-" --partial-loads-ok=no|yes too hard to explain here; see manual [yes]\n"
-" --freelist-vol=<number> volume of freed blocks queue [1000000]\n"
-" --leak-check=no|yes search for memory leaks at exit? [no]\n"
-" --leak-resolution=low|med|high how much bt merging in leak check [low]\n"
-" --show-reachable=no|yes show reachable blocks in leak check? [no]\n"
+" --partial-loads-ok=no|yes too hard to explain here; see manual [yes]\n"
+" --freelist-vol=<number> volume of freed blocks queue [1000000]\n"
+" --leak-check=no|summary|full search for memory leaks at exit? [no]\n"
+" --leak-resolution=low|med|high how much bt merging in leak check [low]\n"
+" --show-reachable=no|yes show reachable blocks in leak check? [no]\n"
" --workaround-gcc296-bugs=no|yes self explanatory [no]\n"
);
VG_(replacement_malloc_print_usage)();
ai->lastchange = NULL;
ai->stack_tid = VG_INVALID_THREADID;
ai->maybe_gcc = False;
+ ai->desc = NULL;
}
void MAC_(clear_MAC_Error) ( MAC_Error* err_extra )
break;
case Freed: case Mallocd: case UserG: case Mempool: {
SizeT delta;
- UChar* relative;
- UChar* kind;
+ const Char* relative;
+ const Char* kind;
if (ai->akind == Mempool) {
kind = "mempool";
} else {
kind = "block";
}
+ if (ai->desc != NULL)
+ kind = ai->desc;
+
if (ai->rwoffset < 0) {
delta = (SizeT)(- ai->rwoffset);
relative = "before";
init_prof_mem();
}
-void MAC_(common_fini)(void (*leak_check)(ThreadId))
+void MAC_(common_fini)(void (*leak_check)(LeakCheckMode mode))
{
MAC_(print_malloc_stats)();
if (VG_(clo_verbosity) == 1) {
- if (!MAC_(clo_leak_check))
+ if (MAC_(clo_leak_check) == LC_Off)
VG_(message)(Vg_UserMsg,
"For a detailed leak analysis, rerun with: --leak-check=yes");
VG_(message)(Vg_UserMsg,
"For counts of detected errors, rerun with: -v");
}
- if (MAC_(clo_leak_check))
- leak_check( 1/*bogus ThreadID*/ );
+ if (MAC_(clo_leak_check) != LC_Off)
+ (*leak_check)(MAC_(clo_leak_check));
done_prof_mem();
}
UWord** argp = (UWord**)arg;
// MAC_(bytes_leaked) et al were set by the last leak check (or zero
// if no prior leak checks performed).
- *argp[1] = MAC_(bytes_leaked);
+ *argp[1] = MAC_(bytes_leaked) + MAC_(bytes_indirect);
*argp[2] = MAC_(bytes_dubious);
*argp[3] = MAC_(bytes_reachable);
*argp[4] = MAC_(bytes_suppressed);
+ // there is no argp[5]
+ //*argp[5] = MAC_(bytes_indirect);
+ // XXX need to make *argp[1-4] readable
*ret = 0;
return True;
}
return dst;
}
+void *memset(void *s, int c, size_t n)
+{
+ unsigned char *cp = s;
+
+ while(n--)
+ *cp++ = c;
+
+ return s;
+}
+
/* Find the first occurrence of C in S or the final NUL byte. */
OffT rwoffset; // Freed, Mallocd
ExeContext* lastchange; // Freed, Mallocd
ThreadId stack_tid; // Stack
+ const Char *desc; // UserG
Bool maybe_gcc; // True if just below %esp -- could be a gcc bug.
}
AddrInfo;
/*--- V and A bits ---*/
/*------------------------------------------------------------*/
-#define IS_DISTINGUISHED_SM(smap) \
- ((smap) == &distinguished_secondary_map)
+/* expand 1 bit -> 8 */
+#define BIT_EXPAND(b) ((~(((UChar)(b) & 1) - 1)) & 0xFF)
+
+#define SECONDARY_SHIFT 16
+#define SECONDARY_SIZE (1 << SECONDARY_SHIFT)
+#define SECONDARY_MASK (SECONDARY_SIZE - 1)
+
+#define PRIMARY_SIZE (1 << (32 - SECONDARY_SHIFT))
+
+#define SM_OFF(addr) ((addr) & SECONDARY_MASK)
+#define PM_IDX(addr) ((addr) >> SECONDARY_SHIFT)
+
+#define IS_DISTINGUISHED_SM(smap) \
+ ((smap) >= &distinguished_secondary_maps[0] && \
+ (smap) < &distinguished_secondary_maps[N_SECONDARY_MAPS])
+
+#define IS_DISTINGUISHED(addr) (IS_DISTINGUISHED_SM(primary_map[PM_IDX(addr)]))
#define ENSURE_MAPPABLE(addr,caller) \
do { \
- if (IS_DISTINGUISHED_SM(primary_map[(addr) >> 16])) { \
- primary_map[(addr) >> 16] = alloc_secondary_map(caller); \
+ if (IS_DISTINGUISHED(addr)) { \
+ primary_map[PM_IDX(addr)] = alloc_secondary_map(caller, primary_map[PM_IDX(addr)]); \
/* VG_(printf)("new 2map because of %p\n", addr); */ \
} \
} while(0)
extern Int MAC_(clo_freelist_vol);
/* Do leak check at exit? default: NO */
-extern Bool MAC_(clo_leak_check);
+typedef
+ enum {
+ LC_Off,
+ LC_Summary,
+ LC_Full,
+ }
+ LeakCheckMode;
+
+extern LeakCheckMode MAC_(clo_leak_check);
/* How closely should we compare ExeContexts in leak records? default: 2 */
extern VgRes MAC_(clo_leak_resolution);
/* For VALGRIND_COUNT_LEAKS client request */
extern Int MAC_(bytes_leaked);
+extern Int MAC_(bytes_indirect);
extern Int MAC_(bytes_dubious);
extern Int MAC_(bytes_reachable);
extern Int MAC_(bytes_suppressed);
extern MAC_Chunk* MAC_(first_matching_freed_MAC_Chunk)( Bool (*p)(MAC_Chunk*, void*), void* d );
extern void MAC_(common_pre_clo_init) ( void );
-extern void MAC_(common_fini) ( void (*leak_check)(ThreadId) );
+extern void MAC_(common_fini) ( void (*leak_check)(LeakCheckMode mode) );
extern Bool MAC_(handle_common_client_requests) ( ThreadId tid,
UWord* arg_block, UWord* ret );
UInt n_total_records);
extern void MAC_(do_detect_memory_leaks) (
- ThreadId tid,
- Bool is_valid_64k_chunk ( UInt ),
- Bool is_valid_address ( Addr )
+ LeakCheckMode mode,
+ Bool (*is_valid_64k_chunk) ( UInt ),
+ Bool (*is_valid_address) ( Addr )
);
extern REGPARM(1) void MAC_(new_mem_stack_4) ( Addr old_ESP );
VG_USERREQ__GET_VBITS,
VG_USERREQ__SET_VBITS,
+ VG_USERREQ__CREATE_BLOCK,
+
/* This is just for memcheck's internal use - don't use it */
_VG_USERREQ__MEMCHECK_GET_RECORD_OVERLAP = VG_USERREQ_TOOL_BASE('M','C')+256
} Vg_MemCheckClientRequest;
/* Client-code macros to manipulate the state of memory. */
/* Mark memory at _qzz_addr as unaddressable and undefined for
- _qzz_len bytes. Returns an int handle pertaining to the block
- descriptions Valgrind will use in subsequent error messages. */
+ _qzz_len bytes. */
#define VALGRIND_MAKE_NOACCESS(_qzz_addr,_qzz_len) \
(__extension__({unsigned int _qzz_res; \
VALGRIND_MAGIC_SEQUENCE(_qzz_res, 0 /* default return */, \
_qzz_res; \
}))
-/* Discard a block-description-handle obtained from the above three
- macros. After this, Valgrind will no longer be able to relate
- addressing errors to the user-defined block associated with the
- handle. The permissions settings associated with the handle remain
- in place. Returns 1 for an invalid handle, 0 for a valid
- handle. */
+/* Create a block-description handle. The description is an ASCII
+ string which is included in any messages pertaining to addresses
+ within the specified memory range. Has no other effect on the
+ properties of the memory range. */
+#define VALGRIND_CREATE_BLOCK(_qzz_addr,_qzz_len, _qzz_desc) \
+ (__extension__({unsigned int _qzz_res; \
+ VALGRIND_MAGIC_SEQUENCE(_qzz_res, 0 /* default return */, \
+ VG_USERREQ__CREATE_BLOCK, \
+ _qzz_addr, _qzz_len, _qzz_desc, 0); \
+ _qzz_res; \
+ }))
+
+/* Discard a block-description-handle. Returns 1 for an
+ invalid handle, 0 for a valid handle. */
#define VALGRIND_DISCARD(_qzz_blkindex) \
(__extension__ ({unsigned int _qzz_res; \
VALGRIND_MAGIC_SEQUENCE(_qzz_res, 0 /* default return */, \
0, 0, 0, 0); \
}
+/* Just display summaries of leaked memory, rather than all the
+ details */
+#define VALGRIND_DO_QUICK_LEAK_CHECK \
+ {unsigned int _qzz_res; \
+ VALGRIND_MAGIC_SEQUENCE(_qzz_res, 0, \
+ VG_USERREQ__DO_LEAK_CHECK, \
+ 1, 0, 0, 0); \
+ }
+
/* Return number of leaked, dubious, reachable and suppressed bytes found by
all previous leak checks. They must be lvalues. */
#define VALGRIND_COUNT_LEAKS(leaked, dubious, reachable, suppressed) \
Makefile.in
Makefile
+addressable
badaddrvalue
badfree
badjump
hello
inits
inline
+leak-0
+leak-cycle
+leak-regroot
+leak-tree
malloc1
malloc2
malloc3
new_override
null_socket
overlap
+pointer-trace
+post-syscall
realloc1
realloc2
realloc3
noinst_HEADERS = scalar.h
EXTRA_DIST = $(noinst_SCRIPTS) \
+ addressable.stderr.exp addressable.stdout.exp addressable.vgtest \
badaddrvalue.stderr.exp \
badaddrvalue.stdout.exp badaddrvalue.vgtest \
badfree-2trace.stderr.exp badfree-2trace.vgtest \
clientperm.stderr.exp \
clientperm.stdout.exp clientperm.vgtest \
custom_alloc.stderr.exp custom_alloc.vgtest \
+ describe-block.stderr.exp describe-block.vgtest \
doublefree.stderr.exp doublefree.vgtest \
error_counts.stderr.exp error_counts.stdout.exp error_counts.vgtest \
errs1.stderr.exp errs1.vgtest \
fwrite.stderr.exp fwrite.stdout.exp fwrite.vgtest \
inits.stderr.exp inits.vgtest \
inline.stderr.exp inline.stdout.exp inline.vgtest \
+ leak-0.vgtest leak-0.stderr.exp \
+ leak-cycle.vgtest leak-cycle.stderr.exp \
+ leak-tree.vgtest leak-tree.stderr.exp \
+ leak-regroot.vgtest leak-regroot.stderr.exp \
+ leakotron.vgtest leakotron.stdout.exp leakotron.stderr.exp \
malloc1.stderr.exp malloc1.vgtest \
malloc2.stderr.exp malloc2.vgtest \
malloc3.stderr.exp malloc3.stdout.exp malloc3.vgtest \
new_override.stderr.exp new_override.stdout.exp new_override.vgtest \
null_socket.stderr.exp null_socket.vgtest \
overlap.stderr.exp overlap.stdout.exp overlap.vgtest \
+ pointer-trace.vgtest pointer-trace.stdout.exp pointer-trace.stderr.exp \
+ post-syscall.stderr.exp post-syscall.stdout.exp post-syscall.vgtest \
pth_once.stderr.exp pth_once.stdout.exp pth_once.vgtest \
realloc1.stderr.exp realloc1.vgtest \
realloc2.stderr.exp realloc2.vgtest \
zeropage.stderr.exp zeropage.stderr.exp2 zeropage.vgtest
check_PROGRAMS = \
+ addressable \
badaddrvalue badfree badjump badjump2 \
badloop badpoll badrw brk brk2 buflen_check \
clientperm custom_alloc \
+ describe-block \
doublefree error_counts errs1 exitprog execve execve2 \
fprw fwrite hello inits inline \
+ leak-0 leak-cycle leak-tree leak-regroot leakotron \
malloc1 malloc2 malloc3 manuel1 manuel2 manuel3 \
memalign_test memalign2 memcmptest mempool mmaptest \
nanoleak new_nothrow \
null_socket overlap \
+ pointer-trace \
+ post-syscall \
realloc1 realloc2 realloc3 \
scalar scalar_exit_group scalar_fork scalar_supp scalar_vfork \
sigaltstack signal2 sigprocmask \
writev zeropage
-AM_CPPFLAGS = -I$(top_builddir)/include -I@VEX_DIR@/pub
+AM_CPPFLAGS = -I$(top_srcdir) -I$(top_srcdir)/include -I$(top_builddir)/include -I@VEX_DIR@/pub
AM_CFLAGS = $(WERROR) -Winline -Wall -Wshadow -g
AM_CXXFLAGS = $(AM_CFLAGS)
# C ones
+addressable_SOURCES = addressable.c
badaddrvalue_SOURCES = badaddrvalue.c
badfree_SOURCES = badfree.c
badjump_SOURCES = badjump.c
buflen_check_SOURCES = buflen_check.c
clientperm_SOURCES = clientperm.c
custom_alloc_SOURCES = custom_alloc.c
+describe_block_SOURCES = describe-block.c
doublefree_SOURCES = doublefree.c
error_counts_SOURCES = error_counts.c
errs1_SOURCES = errs1.c
fwrite_SOURCES = fwrite.c
inits_SOURCES = inits.c
inline_SOURCES = inline.c
+leak_0_SOURCES = leak-0.c
+leak_cycle_SOURCES = leak-cycle.c
+leak_tree_SOURCES = leak-tree.c
+leak_regroot_SOURCES = leak-regroot.c
+leakotron_SOURCES = leakotron.c
malloc1_SOURCES = malloc1.c
malloc2_SOURCES = malloc2.c
malloc3_SOURCES = malloc3.c
nanoleak_SOURCES = nanoleak.c
null_socket_SOURCES = null_socket.c
overlap_SOURCES = overlap.c
+pointer_trace_SOURCES = pointer-trace.c
+post_syscall_SOURCES = post-syscall.c
realloc1_SOURCES = realloc1.c
realloc2_SOURCES = realloc2.c
realloc3_SOURCES = realloc3.c
Jump to the invalid address stated on the next line
at 0x........: ???
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Process terminating with default action of signal 11 (SIGSEGV)
Access not within mapped region at address 0x........
at 0x........: ???
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Jump to the invalid address stated on the next line
at 0x........: ???
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Process terminating with default action of signal 11 (SIGSEGV)
Access not within mapped region at address 0x........
at 0x........: ???
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Jump to the invalid address stated on the next line
at 0x........: ???
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Signal caught, as expected
int main(void)
{
struct sockaddr name;
- int res1, res2;
+ int res1, res2, res3;
int len = 10;
res1 = socket(PF_UNIX, SOCK_STREAM, 0);
}
/* Valgrind 1.0.X doesn't report the second error */
- res1 = getsockname(-1, NULL, &len); /* NULL is bogus */
- res2 = getsockname(-1, &name, NULL); /* NULL is bogus */
- if (res1 == -1) {
+ res2 = getsockname(res1, NULL, &len); /* NULL is bogus */
+ res3 = getsockname(res1, &name, NULL); /* NULL is bogus */
+ if (res2 == -1) {
fprintf(stderr, "getsockname(1) failed\n");
}
- if (res2 == -1) {
+ if (res3 == -1) {
fprintf(stderr, "getsockname(2) failed\n");
}
Syscall param socketcall.getsockname(name) points to unaddressable byte(s)
at 0x........: getsockname (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param socketcall.getsockname(namelen_in) points to unaddressable byte(s)
at 0x........: getsockname (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
getsockname(1) failed
-m_na: returned value is 0
+m_na: returned value is -1
sum is non-positive
-m_rm: returned value is 0
+m_rm: returned value is 1
sum is non-positive
Invalid write of size 4
at 0x........: main (custom_alloc.c:79)
- Address 0x........ is 48 bytes inside a block of size 100000 client-defined
- at 0x........: get_superblock (custom_alloc.c:25)
- by 0x........: custom_alloc (custom_alloc.c:40)
+ Address 0x........ is 0 bytes after a block of size 40 alloc'd
+ at 0x........: custom_alloc (custom_alloc.c:47)
by 0x........: main (custom_alloc.c:76)
Invalid free() / delete / delete[]
Invalid read of size 4
at 0x........: main (custom_alloc.c:89)
- Address 0x........ is 8 bytes inside a block of size 100000 client-defined
- at 0x........: get_superblock (custom_alloc.c:25)
- by 0x........: custom_alloc (custom_alloc.c:40)
- by 0x........: main (custom_alloc.c:76)
+ Address 0x........ is not stack'd, malloc'd or (recently) free'd
by 0x........: zzzzzzz (errs1.c:12)
by 0x........: yyy (errs1.c:13)
by 0x........: xxx (errs1.c:14)
+ by 0x........: www (errs1.c:15)
+ by 0x........: main (errs1.c:17)
Invalid write of size 1
at 0x........: ddd (errs1.c:7)
by 0x........: zzzzzzz (errs1.c:12)
by 0x........: yyy (errs1.c:13)
by 0x........: xxx (errs1.c:14)
+ by 0x........: www (errs1.c:15)
+ by 0x........: main (errs1.c:17)
# Anonymise line numbers in mac_replace_strmem.c
sed "s/mac_replace_strmem.c:[0-9]*/mac_replace_strmem.c:.../" |
-$dir/../../tests/filter_test_paths |
-
-# Anonymise paths like "(in /foo/bar/libc-baz.so)"
-sed "s/(in \/.*libc.*)$/(in \/...libc...)/" |
-
-# Anonymise paths like "(within /foo/bar/libc-baz.so)"
-sed "s/(within \/.*libc.*)$/(within \/...libc...)/" |
-
-# Anonymise paths like "xxx (../sysdeps/unix/sysv/linux/quux.c:129)"
-sed "s/(\.\.\/sysdeps\/unix\/sysv\/linux\/.*\.c:[0-9]*)$/(in \/...libc...)/" |
-
-# Anonymise paths like "__libc_start_main (../foo/bar/libc-quux.c:129)"
-sed "s/__libc_\(.*\) (.*)$/__libc_\1 (...libc...)/"
-
+$dir/../../tests/filter_test_paths
Syscall param write(buf) points to uninitialised byte(s)
at 0x........: write (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is 0 bytes inside a block of size 10 alloc'd
at 0x........: malloc (vg_replace_malloc.c:...)
Invalid write of size 1
at 0x........: test (mempool.c:124)
by 0x........: main (mempool.c:148)
- Address 0x........ is 1 bytes before a block of size 10 client-defined
- at 0x........: allocate (mempool.c:99)
- by 0x........: test (mempool.c:115)
+ Address 0x........ is 7 bytes inside a block of size 100000 alloc'd
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: make_pool (mempool.c:38)
+ by 0x........: test (mempool.c:111)
by 0x........: main (mempool.c:148)
Invalid write of size 1
at 0x........: test (mempool.c:125)
by 0x........: main (mempool.c:148)
- Address 0x........ is 0 bytes after a block of size 10 client-defined
- at 0x........: allocate (mempool.c:99)
- by 0x........: test (mempool.c:115)
+ Address 0x........ is 18 bytes inside a block of size 100000 alloc'd
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: make_pool (mempool.c:38)
+ by 0x........: test (mempool.c:111)
by 0x........: main (mempool.c:148)
Invalid write of size 1
at 0x........: test (mempool.c:129)
by 0x........: main (mempool.c:148)
- Address 0x........ is 70 bytes inside a mempool of size 100000 client-defined
- at 0x........: make_pool (mempool.c:43)
+ Address 0x........ is 70 bytes inside a block of size 100000 alloc'd
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: make_pool (mempool.c:38)
by 0x........: test (mempool.c:111)
by 0x........: main (mempool.c:148)
Invalid write of size 1
at 0x........: test (mempool.c:130)
by 0x........: main (mempool.c:148)
- Address 0x........ is 96 bytes inside a mempool of size 100000 client-defined
- at 0x........: make_pool (mempool.c:43)
+ Address 0x........ is 96 bytes inside a block of size 100000 alloc'd
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: make_pool (mempool.c:38)
by 0x........: test (mempool.c:111)
by 0x........: main (mempool.c:148)
-20 bytes in 1 blocks are definitely lost in loss record 2 of 3
+100028 (20 direct, 100008 indirect) bytes in 1 blocks are definitely lost in loss record 2 of 3
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: make_pool (mempool.c:37)
by 0x........: test (mempool.c:111)
// __NR_munlockall 153
GO(__NR_munlockall, "0s 0m");
- SY(__NR_munlockall); SUCC_OR_FAIL;
+ SY(__NR_munlockall); SUCC_OR_FAILx(EPERM);
// __NR_sched_setparam 154
GO(__NR_sched_setparam, "2s 1m");
// __NR_flistxattr 234
GO(__NR_flistxattr, "3s 1m");
- SY(__NR_flistxattr, x0-1, x0, x0+1); FAILx(EBADF);
+ SY(__NR_flistxattr, x0-1, x0, x0+1); FAILx(EFAULT); /* kernel returns EBADF, but both seem correct */
// __NR_removexattr 235
GO(__NR_removexattr, "2s 2m");
// __NR_set_tid_address 258
GO(__NR_set_tid_address, "1s 0m");
- SY(__NR_set_tid_address, x0); SUCC;
+ SY(__NR_set_tid_address, x0); SUCC_OR_FAILx(ENOSYS);
// __NR_timer_create 259
GO(__NR_timer_create, "3s 2m");
} \
} while (0);
+#define SUCC_OR_FAILx(E) \
+ do { \
+ int myerrno = errno; \
+ if (-1 == res) { \
+ if (E == myerrno) { \
+ /* as expected */ \
+ } else { \
+ fprintf(stderr, "Expected error %s (%d), got %d\n", #E, E, myerrno); \
+ exit(1); \
+ } \
+ } \
+ } while (0);
-----------------------------------------------------
Syscall param (syscallno) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param read(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param read(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param read(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param read(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param write(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param write(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param write(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param write(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param open(filename) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param open(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param open(filename) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param open(mode) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
6: __NR_close 1s 0m
Syscall param close(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Warning: invalid file descriptor -1 in syscall close()
-----------------------------------------------------
Syscall param waitpid(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param waitpid(status) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param waitpid(options) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param waitpid(status) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param creat(pathname) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param creat(mode) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param creat(pathname) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param link(oldpath) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param link(newpath) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param link(oldpath) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param link(newpath) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param unlink(pathname) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param unlink(pathname) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param execve(filename) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param execve(argv) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param execve(envp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param execve(filename) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param chdir(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param chdir(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param time(t) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param time(t) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mknod(pathname) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mknod(mode) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mknod(dev) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mknod(pathname) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param chmod(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param chmod(mode) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param chmod(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param lseek(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lseek(offset) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lseek(whence) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
20: __NR_getpid 0s 0m
Syscall param mount(source) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mount(target) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mount(type) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mount(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mount(data) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
More than 50 errors detected. Subsequent errors
Syscall param mount(source) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param mount(target) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param mount(type) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param umount(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param umount(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param setuid16(uid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
24: __NR_getuid 0s 0m
Syscall param ptrace(request) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ptrace(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ptrace(addr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ptrace(data) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ptrace(getregs) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param alarm(seconds) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
28: __NR_oldfstat n/a
Syscall param utime(filename) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param utime(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param utime(filename) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param utime(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param access(pathname) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param access(mode) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param access(pathname) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param nice(inc) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
35: __NR_ftime ni
Syscall param kill(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param kill(sig) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
38: __NR_rename 2s 2m
Syscall param rename(oldpath) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rename(newpath) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rename(oldpath) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param rename(newpath) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mkdir(pathname) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mkdir(mode) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mkdir(pathname) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param rmdir(pathname) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rmdir(pathname) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param dup(oldfd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
42: __NR_pipe 1s 1m
Syscall param pipe(filedes) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pipe(filedes) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param times(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param times(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param brk(end_data_segment) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
46: __NR_setgid 1s 0m
Syscall param setgid16(gid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
47: __NR_getgid 0s 0m
Syscall param acct(filename) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param acct(filename) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param umount2(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param umount2(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param umount2(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param ioctl(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ioctl(request) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ioctl(arg) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ioctl(TCSET{S,SW,SF}) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param fcntl(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fcntl(cmd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
55: __NR_fcntl (DUPFD) 1s 0m
Syscall param fcntl(arg) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
55: __NR_fcntl (GETLK) 1s 0m
Syscall param setpgid(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setpgid(pgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
58: __NR_ulimit ni
Syscall param umask(mask) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
61: __NR_chroot 1s 1m
Syscall param chroot(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param chroot(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param dup2(oldfd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param dup2(newfd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
64: __NR_getppid 0s 0m
Syscall param sigaction(signum) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sigaction(act) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sigaction(oldact) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sigaction(act) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param sigaction(oldact) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param setreuid16(ruid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setreuid16(euid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
71: __NR_setregid 2s 0m
Syscall param setregid16(rgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setregid16(egid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
72: __NR_sigsuspend ignore
Syscall param sigpending(set) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sigpending(set) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param setrlimit(resource) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setrlimit(rlim) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setrlimit(rlim) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param old_getrlimit(resource) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param old_getrlimit(rlim) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param old_getrlimit(rlim) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param getrusage(who) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getrusage(usage) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getrusage(usage) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param gettimeofday(tv) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param gettimeofday(tz) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param gettimeofday(tv) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param gettimeofday(tz) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param settimeofday(tv) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param settimeofday(tz) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param settimeofday(tv) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param settimeofday(tz) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param getgroups16(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getgroups16(list) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getgroups16(list) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param setgroups16(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setgroups16(list) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setgroups16(list) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param old_select(args) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param old_select(readfds) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param old_select(writefds) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param old_select(exceptfds) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param old_select(timeout) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param symlink(oldpath) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param symlink(newpath) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param symlink(oldpath) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param symlink(newpath) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param readlink(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param readlink(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param readlink(bufsiz) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param readlink(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param readlink(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param old_mmap(args) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
91: __NR_munmap 2s 0m
Syscall param munmap(start) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param munmap(length) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
92: __NR_truncate 2s 1m
Syscall param truncate(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param truncate(length) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param truncate(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param ftruncate(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ftruncate(length) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
94: __NR_fchmod 2s 0m
Syscall param fchmod(fildes) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fchmod(mode) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
95: __NR_fchown 3s 0m
Syscall param fchown16(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fchown16(owner) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fchown16(group) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
96: __NR_getpriority 2s 0m
Syscall param getpriority(which) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getpriority(who) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
97: __NR_setpriority 3s 0m
Syscall param setpriority(which) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setpriority(who) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setpriority(prio) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
98: __NR_profil ni
Syscall param statfs(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param statfs(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param statfs(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param statfs(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param fstatfs(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fstatfs(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fstatfs(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param ioperm(from) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ioperm(num) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ioperm(turn_on) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
102: __NR_socketcall XXX
Syscall param syslog(type) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param syslog(bufp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param syslog(len) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param syslog(bufp) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param setitimer(which) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setitimer(value) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setitimer(ovalue) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setitimer(value) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param setitimer(ovalue) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param getitimer(which) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getitimer(value) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getitimer(value) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param stat(file_name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param stat(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param stat(file_name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param stat(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param lstat(file_name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lstat(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lstat(file_name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param lstat(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param fstat(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fstat(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fstat(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param iopl(level) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
111: __NR_vhangup 0s 0m
Syscall param wait4(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param wait4(status) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param wait4(options) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param wait4(rusage) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param wait4(status) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param wait4(rusage) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param sysinfo(info) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sysinfo(info) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param ipc(call) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ipc(first) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ipc(second) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ipc(third) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ipc(ptr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
118: __NR_fsync 1s 0m
Syscall param fsync(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
119: __NR_sigreturn n/a
Syscall param clone(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param clone(child_stack) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param clone(parent_tidptr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
+ by 0x........: ...
+
+Syscall param clone(tlsinfo) contains uninitialised byte(s)
+ at 0x........: syscall (in /...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param clone(child_tidptr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
+
+Syscall param clone(parent_tidptr) points to unaddressable byte(s)
+ at 0x........: syscall (in /...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
+ by 0x........: ...
+ Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
121: __NR_setdomainname n/a
-----------------------------------------------------
Syscall param uname(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param uname(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param modify_ldt(func) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param modify_ldt(ptr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param modify_ldt(bytecount) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param modify_ldt(ptr) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mprotect(addr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mprotect(len) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mprotect(prot) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
126: __NR_sigprocmask 3s 2m
Syscall param sigprocmask(how) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sigprocmask(set) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sigprocmask(oldset) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sigprocmask(set) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is 0 bytes after a block of size 4 alloc'd
at 0x........: malloc (vg_replace_malloc.c:...)
Syscall param sigprocmask(oldset) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is 0 bytes after a block of size 4 alloc'd
at 0x........: malloc (vg_replace_malloc.c:...)
Syscall param init_module(umod) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param init_module(len) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param init_module(uargs) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param init_module(umod) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param init_module(uargs) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param quotactl(cmd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param quotactl(special) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param quotactl(id) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param quotactl(addr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param quotactl(special) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param getpgid(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
133: __NR_fchdir 1s 0m
Syscall param fchdir(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
134: __NR_bdflush n/a
Syscall param personality(persona) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
137: __NR_afs_syscall ni
Syscall param setfsuid16(uid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
139: __NR_setfsgid 1s 0m
Syscall param setfsgid16(gid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
140: __NR__llseek 5s 1m
Syscall param llseek(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param llseek(offset_high) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param llseek(offset_low) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param llseek(result) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param llseek(whence) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param llseek(result) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param getdents(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getdents(dirp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getdents(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getdents(dirp) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param select(n) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param select(readfds) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param select(writefds) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param select(exceptfds) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param select(timeout) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param select(readfds) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param select(writefds) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param select(exceptfds) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param select(timeout) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param flock(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param flock(operation) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
144: __NR_msync 3s 1m
Syscall param msync(start) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param msync(length) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param msync(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param msync(start) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param readv(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param readv(vector) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param readv(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param readv(vector) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param writev(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param writev(vector) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param writev(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param writev(vector) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param getsid(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
148: __NR_fdatasync 1s 0m
Syscall param fdatasync(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
149: __NR__sysctl 1s 1m
Syscall param sysctl(args) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sysctl(args) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mlock(addr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mlock(len) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
151: __NR_munlock 2s 0m
Syscall param munlock(addr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param munlock(len) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
152: __NR_mlockall 1s 0m
Syscall param mlockall(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
153: __NR_munlockall 0s 0m
Syscall param sched_setparam(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_setparam(p) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_setparam(p) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param sched_getparam(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_getparam(p) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_getparam(p) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param sched_setscheduler(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_setscheduler(policy) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_setscheduler(p) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_setscheduler(p) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param sched_getscheduler(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
158: __NR_sched_yield 0s 0m
Syscall param sched_get_priority_max(policy) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
160: __NR_sched_get_priority_min 1s 0m
Syscall param sched_get_priority_min(policy) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
161: __NR_sched_rr_get_interval n/a
Syscall param nanosleep(req) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param nanosleep(rem) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param nanosleep(req) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param nanosleep(rem) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mremap(old_addr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mremap(old_size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mremap(new_size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mremap(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mremap(new_addr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
164: __NR_setresuid 3s 0m
Syscall param setresuid16(ruid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setresuid16(euid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setresuid16(suid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
165: __NR_getresuid 3s 3m
Syscall param getresuid16(ruid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresuid16(euid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresuid16(suid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresuid16(ruid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param getresuid16(euid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param getresuid16(suid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param poll(ufds) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param poll(nfds) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param poll(timeout) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param poll(ufds) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param setresgid16(rgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setresgid16(egid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setresgid16(sgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
171: __NR_getresgid 3s 3m
Syscall param getresgid16(rgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresgid16(egid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresgid16(sgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresgid16(rgid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param getresgid16(egid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param getresgid16(sgid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param prctl(option) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param prctl(arg2) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param prctl(arg3) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param prctl(arg4) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param prctl(arg5) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
173: __NR_rt_sigreturn n/a
Syscall param rt_sigaction(signum) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigaction(act) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigaction(oldact) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigaction(sigsetsize) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigaction(act) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param rt_sigaction(oldact) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param rt_sigprocmask(how) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigprocmask(set) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigprocmask(oldset) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigprocmask(sigsetsize) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigprocmask(set) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param rt_sigprocmask(oldset) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param rt_sigpending(set) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigpending(sigsetsize) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigpending(set) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param rt_sigtimedwait(set) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigtimedwait(info) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigtimedwait(timeout) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigtimedwait(sigsetsize) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigtimedwait(set) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param rt_sigtimedwait(info) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param rt_sigtimedwait(timeout) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param rt_sigqueueinfo(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigqueueinfo(sig) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigqueueinfo(uinfo) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param rt_sigqueueinfo(uinfo) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param pread64(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pread64(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pread64(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pread64(offset_low32) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pread64(offset_high32) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pread64(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param pwrite64(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pwrite64(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pwrite64(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pwrite64(offset_low32) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pwrite64(offset_high32) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param pwrite64(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param chown16(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param chown16(owner) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param chown16(group) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param chown16(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param getcwd(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getcwd(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getcwd(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param capget(header) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param capget(data) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param capget(header) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param capget(data) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param capset(header) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param capset(data) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param capset(header) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param capset(data) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param sigaltstack(ss) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sigaltstack(oss) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sigaltstack(ss) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
- Address 0x........ is 0 bytes inside a block of size 12 client-defined
- at 0x........: main (scalar.c:821)
+ Address 0x........ is on thread 1's stack
Syscall param sigaltstack(oss) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
- Address 0x........ is 0 bytes inside a block of size 12 client-defined
- at 0x........: main (scalar.c:821)
+ Address 0x........ is on thread 1's stack
-----------------------------------------------------
187: __NR_sendfile 4s 1m
-----------------------------------------------------
Syscall param sendfile(out_fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sendfile(in_fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sendfile(offset) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sendfile(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sendfile(offset) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param getpmsg(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getpmsg(ctrl) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getpmsg(data) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getpmsg(bandp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getpmsg(flagsp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
189: __NR_putpmsg 5s 0m
Syscall param putpmsg(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param putpmsg(ctrl) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param putpmsg(data) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param putpmsg(band) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param putpmsg(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
190: __NR_vfork other
Syscall param getrlimit(resource) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getrlimit(rlim) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getrlimit(rlim) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mmap2(start) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mmap2(length) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mmap2(prot) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mmap2(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mmap2(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
193: __NR_truncate64 3s 1m
Syscall param truncate64(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param truncate64(length_low32) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param truncate64(length_high32) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param truncate64(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param ftruncate64(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ftruncate64(length_low32) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param ftruncate64(length_high32) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
195: __NR_stat64 2s 2m
Syscall param stat64(file_name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param stat64(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param stat64(file_name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param stat64(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param lstat64(file_name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lstat64(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lstat64(file_name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param lstat64(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param fstat64(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fstat64(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fstat64(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param lchown(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lchown(owner) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lchown(group) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lchown(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param setreuid(ruid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setreuid(euid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
204: __NR_setregid32 2s 0m
Syscall param setregid(rgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setregid(egid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
205: __NR_getgroups32 2s 1m
Syscall param getgroups(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getgroups(list) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getgroups(list) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param setgroups(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setgroups(list) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setgroups(list) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param fchown(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fchown(owner) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fchown(group) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
208: __NR_setresuid32 3s 0m
Syscall param setresuid(ruid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setresuid(euid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setresuid(suid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
209: __NR_getresuid32 3s 3m
Syscall param getresuid(ruid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresuid(euid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresuid(suid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresuid(ruid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param getresuid(euid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param getresuid(suid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param setresgid(rgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setresgid(egid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setresgid(sgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
211: __NR_getresgid32 3s 3m
Syscall param getresgid(rgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresgid(egid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresgid(sgid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getresgid(rgid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param getresgid(egid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param getresgid(sgid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param chown(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param chown(owner) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param chown(group) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param chown(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param setuid(uid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
214: __NR_setgid32 1s 0m
Syscall param setgid(gid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
215: __NR_setfsuid32 1s 0m
Syscall param setfsuid(uid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
216: __NR_setfsgid32 1s 0m
Syscall param setfsgid(gid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
217: __NR_pivot_root n/a
Syscall param mincore(start) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mincore(length) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mincore(vec) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mincore(vec) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param madvise(start) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param madvise(length) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param madvise(advice) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
220: __NR_getdents64 3s 1m
Syscall param getdents64(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getdents64(dirp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getdents64(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getdents64(dirp) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param fcntl64(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fcntl64(cmd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
221: __NR_fcntl64 (DUPFD) 1s 0m
Syscall param fcntl64(arg) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
221: __NR_fcntl64 (GETLK) 1s 0m
Syscall param setxattr(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setxattr(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setxattr(value) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setxattr(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setxattr(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param setxattr(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param setxattr(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param setxattr(value) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param lsetxattr(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lsetxattr(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lsetxattr(value) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lsetxattr(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lsetxattr(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lsetxattr(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param lsetxattr(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param lsetxattr(value) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param fsetxattr(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fsetxattr(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fsetxattr(value) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fsetxattr(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fsetxattr(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fsetxattr(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param fsetxattr(value) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param getxattr(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getxattr(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getxattr(value) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getxattr(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param getxattr(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param getxattr(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param getxattr(value) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param lgetxattr(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lgetxattr(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lgetxattr(value) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lgetxattr(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lgetxattr(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param lgetxattr(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param lgetxattr(value) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param fgetxattr(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fgetxattr(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fgetxattr(value) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fgetxattr(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fgetxattr(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param fgetxattr(value) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param listxattr(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param listxattr(list) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param listxattr(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param listxattr(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param listxattr(list) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param llistxattr(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param llistxattr(list) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param llistxattr(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param llistxattr(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param llistxattr(list) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param flistxattr(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param flistxattr(list) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param flistxattr(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param flistxattr(list) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param removexattr(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param removexattr(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param removexattr(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param removexattr(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param lremovexattr(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lremovexattr(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lremovexattr(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param lremovexattr(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param fremovexattr(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fremovexattr(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fremovexattr(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param sendfile64(out_fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sendfile64(in_fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sendfile64(offset) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sendfile64(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sendfile64(offset) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param futex(futex) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param futex(op) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param futex(val) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param futex(utime) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param futex(uaddr2) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param futex(futex) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param futex(timeout) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param sched_setaffinity(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_setaffinity(len) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_setaffinity(mask) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_setaffinity(mask) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param sched_getaffinity(pid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_getaffinity(len) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_getaffinity(mask) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param sched_getaffinity(mask) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param set_thread_area(u_info) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param set_thread_area(u_info) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param get_thread_area(u_info) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param get_thread_area(u_info) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param io_setup(nr_events) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_setup(ctxp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_setup(ctxp) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param io_destroy(ctx) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
247: __NR_io_getevents 5s 2m
Syscall param io_getevents(ctx_id) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_getevents(min_nr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_getevents(nr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_getevents(events) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_getevents(timeout) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_getevents(events) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param io_getevents(timeout) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param io_submit(ctx_id) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_submit(nr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_submit(iocbpp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_submit(iocbpp) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param io_cancel(ctx_id) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_cancel(iocb) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_cancel(result) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param io_cancel(iocb) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param io_cancel(result) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param lookup_dcookie(cookie_low32) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lookup_dcookie(cookie_high32) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lookup_dcookie(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lookup_dcookie(len) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param lookup_dcookie(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param epoll_create(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
255: __NR_epoll_ctl 4s 1m
Syscall param epoll_ctl(epfd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param epoll_ctl(op) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param epoll_ctl(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param epoll_ctl(event) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param epoll_ctl(event) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param epoll_wait(epfd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param epoll_wait(events) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param epoll_wait(maxevents) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param epoll_wait(timeout) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param epoll_wait(events) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param set_tid_address(tidptr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
259: __NR_timer_create 3s 2m
Syscall param timer_create(clockid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param timer_create(evp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param timer_create(timerid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param timer_create(evp) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param timer_create(timerid) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param timer_settime(timerid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param timer_settime(flags) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param timer_settime(value) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param timer_settime(ovalue) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param timer_settime(value) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param timer_settime(ovalue) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param timer_gettime(timerid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param timer_gettime(value) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param timer_gettime(value) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param timer_getoverrun(timerid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
263: __NR_timer_delete 1s 0m
Syscall param timer_delete(timerid) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
264: __NR_clock_settime 2s 1m
Syscall param clock_settime(clk_id) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param clock_settime(tp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param clock_settime(tp) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param clock_gettime(clk_id) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param clock_gettime(tp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param clock_gettime(tp) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param clock_getres(clk_id) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param clock_getres(res) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param clock_getres(res) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param statfs64(path) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param statfs64(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param statfs64(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param statfs64(path) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param statfs64(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param fstatfs64(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fstatfs64(size) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fstatfs64(buf) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param fstatfs64(buf) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param utimes(filename) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param utimes(tvp) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param utimes(filename) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param utimes(tvp) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mq_open(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_open(oflag) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_open(mode) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_open(attr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_open(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param mq_open(attr->mq_maxmsg) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param mq_open(attr->mq_msgsize) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mq_unlink(name) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_unlink(name) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mq_timedsend(mqdes) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_timedsend(msg_ptr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_timedsend(msg_len) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_timedsend(msg_prio) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_timedsend(abs_timeout) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_timedsend(msg_ptr) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param mq_timedsend(abs_timeout) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mq_timedreceive(mqdes) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_timedreceive(msg_ptr) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_timedreceive(msg_len) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_timedreceive(msg_prio) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_timedreceive(abs_timeout) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_timedreceive(msg_ptr) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param mq_timedreceive(msg_prio) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param mq_timedreceive(abs_timeout) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mq_notify(mqdes) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_notify(notification) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_notify(notification) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
Syscall param mq_getsetattr(mqdes) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_getsetattr(mqstat) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_getsetattr(omqstat) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param mq_getsetattr(mqstat->mq_flags) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
Syscall param mq_getsetattr(omqstat) points to unaddressable byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
1: __NR_exit 1s 0m
-----------------------------------------------------
-Syscall param exit(error_code) contains uninitialised byte(s)
+Syscall param exit(exitcode) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
-----------------------------------------------------
252: __NR_exit_group 1s 0m
-----------------------------------------------------
-Syscall param exit_group(error_code) contains uninitialised byte(s)
+Syscall param exit_group(exit_code) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param (syscallno) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param write(fd) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Syscall param write(count) contains uninitialised byte(s)
at 0x........: syscall (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
by 0x........: ddd (suppfree.c:7)
by 0x........: ccc (suppfree.c:12)
by 0x........: bbb (suppfree.c:17)
+ by 0x........: aaa (suppfree.c:22)
+ by 0x........: main (suppfree.c:28)
Address 0x........ is 0 bytes inside a block of size 10 free'd
at 0x........: free (vg_replace_malloc.c:...)
by 0x........: ddd (suppfree.c:6)
by 0x........: ccc (suppfree.c:12)
by 0x........: bbb (suppfree.c:17)
+ by 0x........: aaa (suppfree.c:22)
+ by 0x........: main (suppfree.c:28)
Syscall param ioctl(TCSET{A,AW,AF}) points to uninitialised byte(s)
at 0x........: ioctl (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
+ by 0x........: __libc_start_main (in /...libc...)
by 0x........: ...
Address 0x........ is on thread 1's stack
-Warning: client syscall mmap2 tried to modify addresses 0x........-0x........
-Warning: client syscall mmap2 tried to modify addresses 0x........-0x........
-Warning: client syscall mmap2 tried to modify addresses 0x........-0x........
Makefile.in
Makefile
args
+async-sigs
bitfield1
+blockfault
closeall
coolo_sigaction
coolo_strlen
discard
exec-sigmask
execve
+faultstatus
fcntl_setown
floored
fork
fucomip
+getseg
gxx304
insn_basic
insn_basic.c
readline1
resolv
rlimit_nofile
+map_unaligned
+pending
+selfrun
sem
semlimit
sha1_test
shortpush
shorts
+sigcontext
+sigstackgrowth
smc1
+stackgrowth
susphello
syscall-restart1
syscall-restart2
pth_simple_threads
pth_specific
pth_yield
+thread-exits
tls
yield
*.stdout.diff
EXTRA_DIST = $(noinst_SCRIPTS) \
args.stderr.exp args.stdout.exp args.vgtest \
+ async-sigs.stderr.exp async-sigs.stdout.exp async-sigs.vgtest \
bitfield1.stderr.exp bitfield1.vgtest \
+ blockfault.vgtest blockfault.stderr.exp blockfault.stdout.exp \
closeall.stderr.exp closeall.vgtest \
cmdline1.stderr.exp cmdline1.stdout.exp cmdline1.vgtest \
cmdline2.stderr.exp cmdline2.stdout.exp cmdline2.vgtest \
exec-sigmask.vgtest exec-sigmask.stdout.exp \
exec-sigmask.stdout.exp2 exec-sigmask.stderr.exp \
execve.vgtest execve.stdout.exp execve.stderr.exp \
+ faultstatus.vgtest faultstatus.stderr.exp \
fcntl_setown.vgtest fcntl_setown.stdout.exp fcntl_setown.stderr.exp \
floored.stderr.exp floored.stdout.exp \
floored.vgtest \
fork.stderr.exp fork.stdout.exp fork.vgtest \
fucomip.stderr.exp fucomip.vgtest \
+ getseg.stdout.exp getseg.stderr.exp getseg.vgtest \
gxx304.stderr.exp gxx304.vgtest \
+ manythreads.stdout.exp manythreads.stderr.exp manythreads.vgtest \
+ map_unaligned.stderr.exp map_unaligned.vgtest \
map_unmap.stderr.exp map_unmap.stdout.exp map_unmap.vgtest \
mq.stderr.exp mq.vgtest \
mremap.stderr.exp mremap.stdout.exp mremap.vgtest \
munmap_exe.stderr.exp munmap_exe.vgtest \
+ pending.stdout.exp pending.stderr.exp pending.vgtest \
pth_blockedsig.stderr.exp \
pth_blockedsig.stdout.exp pth_blockedsig.vgtest \
pth_stackalign.stderr.exp \
readline1.vgtest \
resolv.stderr.exp resolv.stdout.exp resolv.vgtest \
rlimit_nofile.stderr.exp rlimit_nofile.stdout.exp rlimit_nofile.vgtest \
+ selfrun.stderr.exp selfrun.stdout.exp selfrun.vgtest \
sem.stderr.exp sem.stdout.exp sem.vgtest \
semlimit.stderr.exp semlimit.stdout.exp semlimit.vgtest \
susphello.stdout.exp susphello.stderr.exp susphello.vgtest \
sha1_test.stderr.exp sha1_test.vgtest \
shortpush.stderr.exp shortpush.vgtest \
shorts.stderr.exp shorts.vgtest \
- tls.stderr.exp tls.stdout.exp \
+ sigcontext.stdout.exp sigcontext.stderr.exp sigcontext.vgtest \
+ sigstackgrowth.stdout.exp sigstackgrowth.stderr.exp sigstackgrowth.vgtest \
smc1.stderr.exp smc1.stdout.exp smc1.vgtest \
+ stackgrowth.stdout.exp stackgrowth.stderr.exp stackgrowth.vgtest \
syscall-restart1.vgtest syscall-restart1.stdout.exp syscall-restart1.stderr.exp \
syscall-restart2.vgtest syscall-restart2.stdout.exp syscall-restart2.stderr.exp \
system.stderr.exp system.vgtest \
+ thread-exits.stderr.exp thread-exits.stdout.exp thread-exits.vgtest \
+ tls.stderr.exp tls.stdout.exp \
yield.stderr.exp yield.stdout.exp yield.vgtest
check_PROGRAMS = \
- args bitfield1 closeall coolo_strlen \
- discard exec-sigmask execve fcntl_setown floored fork \
- fucomip \
- munmap_exe map_unmap mq mremap rcrl readline1 \
- resolv rlimit_nofile sem semlimit sha1_test \
- shortpush shorts smc1 susphello pth_blockedsig pth_stackalign \
+ args async-sigs bitfield1 blockfault closeall coolo_strlen \
+ discard exec-sigmask execve faultstatus fcntl_setown floored fork \
+ fucomip getseg \
+ manythreads \
+ munmap_exe map_unaligned map_unmap mq mremap rcrl readline1 \
+ resolv rlimit_nofile selfrun sem semlimit sha1_test \
+ shortpush shorts sigcontext \
+ stackgrowth sigstackgrowth \
+ smc1 susphello pending pth_blockedsig pth_stackalign \
syscall-restart1 syscall-restart2 system \
+ thread-exits \
+ tls tls.so tls2.so \
coolo_sigaction gxx304 yield
AM_CFLAGS = $(WERROR) -Winline -Wall -Wshadow -g
-AM_CPPFLAGS = -I$(top_builddir)/include
+AM_CPPFLAGS = -I$(top_srcdir) -I$(top_srcdir)/include -I$(top_builddir)/include
AM_CXXFLAGS = $(AM_CFLAGS)
# generic C ones
args_SOURCES = args.c
+async_sigs_SOURCES = async-sigs.c
bitfield1_SOURCES = bitfield1.c
+blockfault_SOURCES = blockfault.c
closeall_SOURCES = closeall.c
coolo_strlen_SOURCES = coolo_strlen.c
discard_SOURCES = discard.c
exec_sigmask_SOURCES = exec-sigmask.c
execve_SOURCES = execve.c
+faultstatus_SOURCES = faultstatus.c
fcntl_setown_SOURCES = fcntl_setown.c
fork_SOURCES = fork.c
floored_SOURCES = floored.c
floored_LDADD = -lm
fucomip_SOURCES = fucomip.c
+getseg_SOURCES = getseg.c
+pending_SOURCES = pending.c
+map_unaligned_SOURCES = map_unaligned.c
map_unmap_SOURCES = map_unmap.c
mq_SOURCES = mq.c
mq_LDADD = -lrt
readline1_SOURCES = readline1.c
resolv_SOURCES = resolv.c
rlimit_nofile_SOURCES = rlimit_nofile.c
+selfrun_SOURCES = selfrun.c
sem_SOURCES = sem.c
semlimit_SOURCES = semlimit.c
semlimit_LDADD = -lpthread
sha1_test_SOURCES = sha1_test.c
shortpush_SOURCES = shortpush.c
shorts_SOURCES = shorts.c
+sigcontext_SOURCES = sigcontext.c
+sigstackgrowth_SOURCES = sigstackgrowth.c
susphello_SOURCES = susphello.c
susphello_LDADD = -lpthread
+stackgrowth_SOURCES = stackgrowth.c
syscall_restart1_SOURCES = syscall-restart1.c
syscall_restart2_SOURCES = syscall-restart2.c
system_SOURCES = system.c
-#tls_SOURCES = tls.c tls2.c
-#tls_DEPENDENCIES = tls.so
-#tls_LDFLAGS = -Wl,-rpath,$(srcdir)
-#tls_LDADD = tls.so -lpthread
-#tls_so_SOURCES = tls_so.c
-#tls_so_LDADD = tls2.so
-#tls_so_DEPENDENCIES = tls2.so
-#tls_so_LDFLAGS = -Wl,-rpath,$(srcdir) -shared
-#tls2_so_SOURCES = tls2_so.c
-#tls2_so_LDFLAGS = -shared
+thread_exits_SOURCES = thread-exits.c
+thread_exits_LDADD = -lpthread
+tls_SOURCES = tls.c tls2.c
+tls_DEPENDENCIES = tls.so
+tls_LDFLAGS = -Wl,-rpath,$(top_builddir)/none/tests
+tls_LDADD = tls.so -lpthread
+tls_so_SOURCES = tls_so.c
+tls_so_LDADD = tls2.so
+tls_so_DEPENDENCIES = tls2.so
+tls_so_LDFLAGS = -Wl,-rpath,$(top_builddir)/none/tests -shared
+tls2_so_SOURCES = tls2_so.c
+tls2_so_LDFLAGS = -shared
yield_SOURCES = yield.c
yield_CFLAGS = $(AM_CFLAGS) -D__$(VG_ARCH)__
yield_LDADD = -lpthread
# pthread C ones
+manythreads_SOURCES = manythreads.c
+manythreads_LDADD = -lpthread
pth_blockedsig_SOURCES = pth_blockedsig.c
pth_blockedsig_LDADD = -lpthread
pth_stackalign_SOURCES = pth_stackalign.c
usage: valgrind --tool=<toolname> [options] prog-and-args
common user options for all Valgrind tools, with defaults in [ ]:
- --tool=<name> use the Valgrind tool named <name>
+ --tool=<name> use the Valgrind tool named <name> [memcheck]
-h --help show this message
--help-debug show this message, plus debugging options
--version show version
uncommon user options for all Valgrind tools:
--run-libc-freeres=no|yes free up glibc memory at exit? [yes]
- --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls [none]
- --signal-polltime=<time> signal poll period (mS) for older kernels [50]
- --lowlat-signals=no|yes improve thread signal wake-up latency [no]
- --lowlat-syscalls=no|yes improve thread syscall wake-up latency [no]
+ --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls,ioctl-mmap [none]
--pointercheck=no|yes enforce client address space limits [yes]
--support-elan3=no|yes hacks for Quadrics Elan3 support [no]
-vgopts: --help
+vgopts: --help --tool=none
usage: valgrind --tool=<toolname> [options] prog-and-args
common user options for all Valgrind tools, with defaults in [ ]:
- --tool=<name> use the Valgrind tool named <name>
+ --tool=<name> use the Valgrind tool named <name> [memcheck]
-h --help show this message
--help-debug show this message, plus debugging options
--version show version
uncommon user options for all Valgrind tools:
--run-libc-freeres=no|yes free up glibc memory at exit? [yes]
- --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls [none]
- --signal-polltime=<time> signal poll period (mS) for older kernels [50]
- --lowlat-signals=no|yes improve thread signal wake-up latency [no]
- --lowlat-syscalls=no|yes improve thread syscall wake-up latency [no]
+ --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls,ioctl-mmap [none]
--pointercheck=no|yes enforce client address space limits [yes]
--support-elan3=no|yes hacks for Quadrics Elan3 support [no]
--trace-signals=no|yes show signal handling details? [no]
--trace-symtab=no|yes show symbol table details? [no]
--trace-sched=no|yes show thread scheduler details? [no]
- --trace-pthread=none|some|all show pthread event details? [none]
--wait-for-gdb=yes|no pause on startup to wait for gdb attach
-
+ --command-line-only=no|yes only use command line options [no]
--vex-iropt-verbosity 0 .. 9 [0]
--vex-iropt-level 0 .. 2 [2]
--vex-iropt-precise-memory-exns [no]
-vgopts: --help-debug
+vgopts: --help-debug --tool=none
#include <errno.h>
#include <string.h>
#include <stdio.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <stdlib.h>
-int main(int argc, char **argv)
+static void do_exec(const char *path, const char *arg, const sigset_t *mask)
{
- if (argc == 1) {
- sigset_t all;
+ pid_t pid;
+ int status;
+ pid = fork();
+ if (pid == -1) {
+ perror("fork");
+ exit(1);
+ }
+ if (pid == 0) {
+ sigprocmask(SIG_SETMASK, mask, NULL);
+ execl(path, path, arg, NULL);
+
+ fprintf(stderr, "FAILED: execl failed with %s\n",
+ strerror(errno));
+ } else {
+ int ret;
+ do
+ ret = waitpid(pid, &status, 0);
+ while(ret == -1 && errno == EINTR);
+ if (ret != pid) {
+ perror("waitpid");
+ exit(1);
+ }
+ if (status != 0) {
+ fprintf(stderr, "child exec failed\n");
+ exit(1);
+ }
+ }
+}
- sigfillset(&all);
- sigprocmask(SIG_SETMASK, &all, NULL);
+int main(int argc, char **argv)
+{
+ if (argc == 1) {
+ sigset_t mask;
- execl(argv[0], argv[0], "test", NULL);
+ sigfillset(&mask);
+ do_exec(argv[0], "full", &mask);
- fprintf(stderr, "FAILED: execl failed with %s\n",
- strerror(errno));
- return 1;
+ sigemptyset(&mask);
+ do_exec(argv[0], "empty", &mask);
} else {
sigset_t mask;
int i;
+ int empty;
+
+ if (strcmp(argv[1], "full") == 0)
+ empty = 0;
+ else if (strcmp(argv[1], "empty") == 0)
+ empty = 1;
+ else {
+ fprintf(stderr, "empty or full?\n");
+ exit(1);
+ }
sigprocmask(SIG_SETMASK, NULL, &mask);
if (i == SIGKILL || i == SIGSTOP)
continue;
- if (!sigismember(&mask, i))
- printf("signal %d missing from mask\n", i);
+ if (empty) {
+ if (sigismember(&mask, i))
+ printf("empty: signal %d added to mask\n", i);
+ } else {
+ if (!sigismember(&mask, i))
+ printf("full: signal %d missing from mask\n", i);
+ }
}
}
-signal 32 missing from mask
+full: signal 32 missing from mask
+full: signal 33 missing from mask
# Remove "Corecheck, ..." line and the following copyright line.
sed "/^Nulgrind, a binary JIT-compiler./ , /./ d"
+# Anonymise addresses
+$dir/../../tests/filter_addresses
+
prog: map_unmap
+vgopts: --sanity-level=3
// for (i = 0; i < 5; ++i)
// {
sleep (1);
- fprintf (stdout, "thread %ld sending SIGUSR1 to thread %ld\n",
- pthread_self (), main_thread);
+ fprintf (stdout, "thread CHILD sending SIGUSR1 to thread MAIN\n");
if (pthread_kill (main_thread, SIGUSR1) != 0)
fprintf (stderr, "error doing pthread_kill\n");
// }
-thread 2 sending SIGUSR1 to thread 1
+thread CHILD sending SIGUSR1 to thread MAIN
prog: susphello
+# susphello seems broken; sometimes it just doesn't terminate (even natively)
+prereq: false
kill(pid, SIGUSR1);
sleep(1);
if (write(fds[1], "x", 1) != -1 || errno != EPIPE)
- fprintf(stderr, "FAIL: expected write to fail with EPIPE\n");
+ fprintf(stderr, "FAIL: expected write to fail with EPIPE, not %d\n", errno);
waitpid(pid, NULL, 0);
}
+#include <config.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <time.h>
+#ifdef HAVE_TLS
+
#define COUNT 10
static int race;
return 0;
}
+#else
+int main()
+{
+ printf("FAILED: no compiler support for __thread\n");
+ return 1;
+}
+#endif
+#include <config.h>
+
+#ifdef HAVE_TLS
__thread int static_extern;
+#endif
+#include <config.h>
+
+#ifdef HAVE_TLS
__thread int so_extern;
+#endif
+#include <config.h>
+
+#ifdef HAVE_TLS
#include <pthread.h>
extern __thread int so_extern;
int *test_so_global(void)
{
return &global;
-}
+}
+#endif
-noinst_SCRIPTS = filter_cpuid filter_int filter_stderr gen_insn_test.pl
+noinst_SCRIPTS = filter_cpuid filter_stderr gen_insn_test.pl
CLEANFILES = $(addsuffix .c,$(INSN_TESTS))
INSN_TESTS=insn_basic insn_fpu insn_cmov insn_mmx insn_mmxext insn_sse insn_sse2
int val;
sa.sa_sigaction = handler;
- sigemptyset(&sa.sa_mask);
+ sigfillset(&sa.sa_mask);
sa.sa_flags = SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
dir=`dirname $0`
-$dir/filter_stderr | $dir/../../../tests/filter_addresses
-
+$dir/filter_stderr
pinsrw imm8[1] r32.ud[0xffffffff] mm.uw[1234,5678,4321,8765] => 2.uw[1234,65535,4321,8765]
pinsrw imm8[2] r32.ud[0xffffffff] mm.uw[1234,5678,4321,8765] => 2.uw[1234,5678,65535,8765]
pinsrw imm8[3] r32.ud[0xffffffff] mm.uw[1234,5678,4321,8765] => 2.uw[1234,5678,4321,65535]
+pinsrw imm8[0] m16.uw[0xffff] mm.uw[1234,5678,4321,8765] => 2.uw[65535,5678,4321,8765]
+pinsrw imm8[1] m16.uw[0xffff] mm.uw[1234,5678,4321,8765] => 2.uw[1234,65535,4321,8765]
+pinsrw imm8[2] m16.uw[0xffff] mm.uw[1234,5678,4321,8765] => 2.uw[1234,5678,65535,8765]
+pinsrw imm8[3] m16.uw[0xffff] mm.uw[1234,5678,4321,8765] => 2.uw[1234,5678,4321,65535]
pmaxsw mm.sw[-1,2,-3,4] mm.sw[2,-3,4,-5] => 1.sw[2,2,4,4]
pmaxsw m64.sw[-1,2,-3,4] mm.sw[2,-3,4,-5] => 1.sw[2,2,4,4]
pmaxub mm.ub[1,2,3,4,5,6,7,8] mm.ub[8,7,6,5,4,3,2,1] => 1.ub[8,7,6,5,5,6,7,8]
pinsrw_2 ... ok
pinsrw_3 ... ok
pinsrw_4 ... ok
+pinsrw_5 ... ok
+pinsrw_6 ... ok
+pinsrw_7 ... ok
+pinsrw_8 ... ok
pmaxsw_1 ... ok
pmaxsw_2 ... ok
pmaxub_1 ... ok
pinsrw imm8[5] r32.ud[0xffffffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[1234,5678,4321,8765,1111,65535,3333,4444]
pinsrw imm8[6] r32.ud[0xffffffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[1234,5678,4321,8765,1111,2222,65535,4444]
pinsrw imm8[7] r32.ud[0xffffffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[1234,5678,4321,8765,1111,2222,3333,65535]
-#####pmaddwd xmm.sw[1234,5678,-4321,-8765,1234,5678,-4321,-8765] xmm.sw[1111,-2222,3333,-4444,1111,-2222,3333,-4444] => 1.sd[-11245542,24549767,-11245542,24549767]
-#####pmaddwd m128.sw[1234,5678,-4321,-8765,1234,5678,-4321,-8765] xmm.sw[1111,-2222,3333,-4444,1111,-2222,3333,-4444] => 1.sd[-11245542,24549767,-11245542,24549767]
+pinsrw imm8[0] m16.uw[0xffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[65535,5678,4321,8765,1111,2222,3333,4444]
+pinsrw imm8[1] m16.uw[0xffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[1234,65535,4321,8765,1111,2222,3333,4444]
+pinsrw imm8[2] m16.uw[0xffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[1234,5678,65535,8765,1111,2222,3333,4444]
+pinsrw imm8[3] m16.uw[0xffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[1234,5678,4321,65535,1111,2222,3333,4444]
+pinsrw imm8[4] m16.uw[0xffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[1234,5678,4321,8765,65535,2222,3333,4444]
+pinsrw imm8[5] m16.uw[0xffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[1234,5678,4321,8765,1111,65535,3333,4444]
+pinsrw imm8[6] m16.uw[0xffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[1234,5678,4321,8765,1111,2222,65535,4444]
+pinsrw imm8[7] m16.uw[0xffff] xmm.uw[1234,5678,4321,8765,1111,2222,3333,4444] => 2.uw[1234,5678,4321,8765,1111,2222,3333,65535]
pmaxsw xmm.sw[-1,2,-3,4,-5,6,-7,8] xmm.sw[2,-3,4,-5,6,-7,8,-9] => 1.sw[2,2,4,4,6,6,8,8]
pmaxsw m128.sw[-1,2,-3,4,-5,6,-7,8] xmm.sw[2,-3,4,-5,6,-7,8,-9] => 1.sw[2,2,4,4,6,6,8,8]
pmaxub xmm.ub[10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25] xmm.ub[25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10] => 1.ub[25,24,23,22,21,20,19,18,18,19,20,21,22,23,24,25]
pinsrw_6 ... ok
pinsrw_7 ... ok
pinsrw_8 ... ok
+pinsrw_9 ... ok
+pinsrw_10 ... ok
+pinsrw_11 ... ok
+pinsrw_12 ... ok
+pinsrw_13 ... ok
+pinsrw_14 ... ok
+pinsrw_15 ... ok
+pinsrw_16 ... ok
pmaxsw_1 ... ok
pmaxsw_2 ... ok
pmaxub_1 ... ok
-disInstr: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........
- at 0x........: main (int.c:5)
-Process terminating with default action of signal 4 (SIGILL)
- Illegal operand at address 0x........
+Process terminating with default action of signal 11 (SIGSEGV)
+ GPF (Pointer out of bounds?)
at 0x........: main (int.c:5)
prog: int
-stderr_filter: filter_int
cleanup: rm vgcore.pid*
+/*
+   Check that a thread which yields with pause (rep;nop) makes less
+   progress than a pure spinner.
+ */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
-static pthread_mutex_t m_go;
-static pthread_cond_t c_go;
+static pthread_mutex_t m_go = PTHREAD_MUTEX_INITIALIZER;
+static pthread_cond_t c_go = PTHREAD_COND_INITIALIZER;
+static pthread_cond_t c_running = PTHREAD_COND_INITIALIZER;
-static volatile int alive;
+static volatile int alive, running;
-static int sch_yield;
+static int spin;
static int rep_nop;
-static void *th1(void *v)
+static void *spinner(void *v)
{
pthread_mutex_lock(&m_go);
while(!alive)
pthread_cond_wait(&c_go, &m_go);
+ running++;
+ pthread_cond_signal(&c_running);
pthread_mutex_unlock(&m_go);
- while(alive) {
- sch_yield++;
- sched_yield();
- }
+ while(alive)
+ spin++;
return 0;
}
-static void *th2(void *v)
+static void *rep_nopper(void *v)
{
pthread_mutex_lock(&m_go);
while(!alive)
pthread_cond_wait(&c_go, &m_go);
+ running++;
+ pthread_cond_signal(&c_running);
pthread_mutex_unlock(&m_go);
while(alive) {
{
pthread_t a, b;
- pthread_create(&a, NULL, th1, NULL);
- pthread_create(&b, NULL, th2, NULL);
+ pthread_create(&a, NULL, spinner, NULL);
+ pthread_create(&b, NULL, rep_nopper, NULL);
/* make sure both threads start at the same time */
pthread_mutex_lock(&m_go);
alive = 1;
- pthread_cond_signal(&c_go);
+ pthread_cond_broadcast(&c_go);
+
+ /* make sure they both get started */
+ while(running < 2)
+ pthread_cond_wait(&c_running, &m_go);
pthread_mutex_unlock(&m_go);
- sleep(1);
+ sleep(2);
alive = 0;
pthread_join(a, NULL);
pthread_join(b, NULL);
- if (abs(sch_yield - rep_nop) < 2)
+ if (0)
+ printf("spin=%d rep_nop=%d rep_nop:spin ratio: %g\n",
+ spin, rep_nop, (float)rep_nop / spin);
+
+ if (spin > rep_nop)
printf("PASS\n");
else
- printf("FAIL sch_yield=%d rep_nop=%d\n",
- sch_yield, rep_nop);
+ printf("FAIL spin=%d rep_nop=%d rep_nop:spin ratio: %g\n",
+ spin, rep_nop, (float)rep_nop / spin);
return 0;
}
vg_regtest \
filter_addresses \
filter_discards \
+ filter_libc \
filter_numbers \
filter_stderr_basic \
+ filter_sink \
filter_test_paths
EXTRA_DIST = $(noinst_SCRIPTS)
# generic C ones
cputest_SOURCES = cputest.c
cputest_CFLAGS = $(AM_CFLAGS) -D__$(VG_ARCH)__
+cputest_DEPENDENCIES =
+cputest_LDADD =
toobig_allocs_SOURCES = toobig-allocs.c
true_SOURCES = true.c
-
#endif // __ppc__
#ifdef __x86__
-static __inline__ void cpuid(unsigned int n,
- unsigned int *a, unsigned int *b,
- unsigned int *c, unsigned int *d)
+static void cpuid ( unsigned int n,
+ unsigned int* a, unsigned int* b,
+ unsigned int* c, unsigned int* d )
{
__asm__ __volatile__ (
"cpuid"
# This filter should be applied to *every* stderr result. It removes
# Valgrind startup stuff and pid numbers.
+dir=`dirname $0`
+
# Remove ==pid== and --pid-- and ++pid++ and **pid** strings
sed "s/\(==\|--\|\+\+\|\*\*\)[0-9]\{1,5\}\1 //" |
# Anonymise vg_intercept lines
sed "s/vg_intercept.c:[0-9]*/vg_intercept.c:.../" |
-# Anonymise vg_libpthread lines
-sed "s/vg_libpthread.c:[0-9]*/vg_libpthread.c:.../" |
-
# Hide suppressed error counts
sed "s/^\(ERROR SUMMARY[^(]*(suppressed: \)[0-9]*\( from \)[0-9]*)$/\10\20)/" |
# Reduce some libc incompatibility
-sed "s/ __getsockname / getsockname /" |
-sed "s/ __sigaction / sigaction /" |
-sed "s/ __GI___/ __/" |
-sed "s/ __\([a-z]*\)_nocancel / \1 /" |
+$dir/filter_libc |
# Remove line info out of order warnings
sed "/warning: line info addresses out of order/d" |
{
void *p;
- int size = 2 * 1023 * 1024 * 1024; // just under 2^31 (2GB)
+   unsigned long size = 2ul * 1023ul * 1024ul * 1024ul; // just under 2^31 (2GB)
fprintf(stderr, "Attempting too-big malloc()...\n");
p = malloc(size); // way too big!
open(INPUTFILE, "< $f") || die "File $f not openable\n";
while (my $line = <INPUTFILE>) {
- if ($line =~ /^\s*vgopts:\s*(.*)$/) {
+ if ($line =~ /^\s*#/ || $line =~ /^\s*$/) {
+ next;
+ } elsif ($line =~ /^\s*vgopts:\s*(.*)$/) {
$vgopts = $1;
} elsif ($line =~ /^\s*prog:\s*(.*)$/) {
$prog = validate_program(".", $1, 0, 0);
printf("%-16s valgrind $vgopts $prog $args\n", "$name:");
# Pass the appropriate --tool option for the directory (can be overridden
- # by an "args:" or "args.dev:" line, though).
+ # by an "args:" line, though).
my $tool=determine_tool();
- mysystem("VALGRINDLIB=$tests_dir/.in_place $valgrind --tool=$tool $vgopts $prog $args > $name.stdout.out 2> $name.stderr.out");
+ mysystem("VALGRINDLIB=$tests_dir/.in_place $valgrind --command-line-only=yes --tool=$tool $vgopts $prog $args > $name.stdout.out 2> $name.stderr.out");
if (defined $stdout_filter) {
mysystem("$stdout_filter < $name.stdout.out > $tmp");
%defattr(-,root,root)
/usr/include/valgrind/valgrind.h
/usr/include/valgrind/memcheck.h
-/usr/include/valgrind/helgrind.h
+#/usr/include/valgrind/helgrind.h
/usr/include/valgrind/basic_types.h
/usr/include/valgrind/tool.h
/usr/include/valgrind/tool_asm.h
/usr/include/valgrind/x86-linux/vki_arch_posixtypes.h
/usr/bin/valgrind
/usr/bin/cg_annotate
-/usr/lib/valgrind
-/usr/lib/valgrind/*
/usr/bin/valgrind-listener
+/usr/lib/valgrind
/usr/lib/pkgconfig/valgrind.pc
%doc