Opened 9 years ago

Closed 9 years ago

Last modified 9 years ago

#614 closed task (complete)

Run unit tests with Valgrind

Reported by: stephen Owned by: vorner
Priority: medium Milestone: A-Team-Sprint-20110309
Component: Unclassified Version:
Keywords: Cc:
CVSS Scoring: Parent Tickets:
Sensitive: no Defect Severity:
Sub-Project: Feature Depending on Ticket:
Estimated Difficulty: 8.0 Add Hours to Ticket: 0
Total Hours: 8.0 Internal?: no

Description

Run the unit tests with Valgrind and produce a report listing the issues found.
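For reference, the core of such a run is a plain memcheck invocation. The sketch below only composes the command line for one test binary; the function name, default flags, and paths are illustrative, not taken from the actual tooling:

```shell
#!/bin/sh
# build_valgrind_cmd BINARY LOG: compose a memcheck invocation for one
# gtest binary.  FLAGS may be set in the environment to override the
# illustrative defaults.
build_valgrind_cmd() {
    binary=$1
    log=$2
    flags=${FLAGS:---leak-check=full --error-exitcode=1}
    echo "valgrind --tool=memcheck $flags --log-file=$log $binary"
}

# Example: print the command that would be run.
build_valgrind_cmd ./run_unittests valgrind.log
```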

Subtickets

Attachments (2)

valgrind.log (254.9 KB) - added by vorner 9 years ago.
tests-sh.diff (996 bytes) - added by jinmei 9 years ago.


Change History (17)

comment:1 Changed 9 years ago by vorner

  • Owner set to vorner
  • Status changed from new to accepted

I have an idea about how to do it in a fast (but maybe slightly dirty) way. I'll try to write it and we'll see if we need a proper solution or this is enough.

comment:2 Changed 9 years ago by vorner

I need to write some documentation or comments in the script, but it already produces the output. It's in the tools directory.

Now I need to read through the output and filter out the leaks that are in the tests rather than in the tested code (which will be most of them, I guess). I'm attaching the current output so anybody can read it.

Anyway, if you want output with some other parameters, you can run the tool yourself (but don't be surprised if you break it; it's just a nasty hack).

One test fails, but I believe it's because valgrind skews random number generation in some way. I'll see if the output says anything about it.

Changed 9 years ago by vorner

comment:3 follow-up: Changed 9 years ago by vorner

  • Owner changed from vorner to UnAssigned
  • Status changed from accepted to reviewing

Hello

I have two scripts in the branch:

tools/tests_in_valgrind.sh:

It runs the tests in valgrind (it is possible to provide different options and an output file). It is a somewhat nasty bash script (I use a few bash-specific features, but if it needed to run somewhere bash isn't installed, it could probably be modified without much effort).

It finds the run_unittests scripts (yes, by name), copies them and modifies the line that executes the binary (these are wrapper scripts generated by autotools to provide the correct library paths).
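A minimal sketch of that rewriting step, assuming the wrapper's exec line starts the real binary directly (the function name and sed pattern are illustrative; the actual script is more involved):

```shell
#!/bin/sh
# wrap_with_valgrind FILE: copy an autotools wrapper script, rewriting
# its exec line so the real binary runs under valgrind.  The flags and
# the ".valgrind" suffix are illustrative choices.
wrap_with_valgrind() {
    src=$1
    dst=$1.valgrind
    sed 's/^ *exec /exec valgrind --leak-check=full /' "$src" > "$dst"
    chmod +x "$dst"
}
```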

tools/valgrind_test_cleaner.pl:

It turned out that many of the reported memory leaks come from the tests only; this script throws out the leak reports whose backtraces contain no function from our tested code.
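The filtering idea can be sketched in shell/awk terms as follows, assuming leak records in the log are separated by blank ==PID== lines and that frames from the tested code mention the isc:: namespace (both assumptions are illustrative; the real tool is the perl script described here):

```shell
#!/bin/sh
# filter_leaks LOGFILE: keep only valgrind records whose backtrace
# mentions a function from the tested code (approximated here by the
# "isc::" namespace marker).
filter_leaks() {
    awk '
        /^==[0-9]+== *$/ {           # a blank report line ends a record
            if (record ~ /isc::/) printf "%s", record
            record = ""
            next
        }
        { record = record $0 "\n" }  # accumulate the current record
        END { if (record ~ /isc::/) printf "%s", record }
    ' "$1"
}
```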

This is written in perl. I know we decided to use python, but this was mostly to help me get through the output, and writing it in python would have taken much longer. I just put it there in case someone else finds it useful in the future. Despite being perl, I hope it's quite readable. I guess it's not a problem since it's not for users (and I just couldn't resist writing a little bit in my favorite language :-P).

I'd like to merge these patches into master and keep them there for future generations. I think this doesn't need a changelog entry, since it's our own tool.

The results

About the output: I went through it and removed the entries caused by the tests only (we might want to clean up the tests sometime in the future too, but I think that's low priority now). There are several problem areas, and I'm not sure what causes them yet:

  • The ZoneEntryTest?.AddressSelection? (NSAS) test fails. It seems to have something to do with the selection of random numbers. Is it possible that valgrind replaces rand or something like that? But I would suggest we ignore this one.
  • Many things in NSAS seem to leak. Maybe it's because the events are not delivered to them (AFAIK some stuff in there is freed when either an answer or a timeout comes, which we possibly don't do in the test), maybe it's something more. This is by far the longest part of the output, but it's quite repetitive.
  • Asiolink might have a leak in DNSService (though I guess it's actually just a DNSService being created and never freed) and in RecursiveQuery::resolve.
  • Cache seems to drop MessageEntry? objects sometimes.
  • Some minor/less visible problems in the rest (an invalid read inside freed memory in the cc tests, probably caused by the tests themselves; open with a NULL filename in the dns library tests; an IOEndpoint leak in the auth tests).

May I create tickets for each of these tasks (except the AddressSelection? one)? I would attach the relevant output to each of them.

Thanks

comment:4 Changed 9 years ago by jinmei

  • Owner changed from UnAssigned to jinmei
  • Status changed from reviewing to accepted

comment:5 in reply to: ↑ 3 ; follow-up: Changed 9 years ago by jinmei

Replying to vorner:

Hello

I have two scripts in the branch:

The scripts basically seem okay in the context of this ticket. Some
minor comments:

  • when merging, maybe tools/README will have to be updated (it's not necessarily for "helping developers maintain the source tree")
  • just FYI: valgrind on bind10.isc.org doesn't recognize the --read-var-info option, so we'll have to specify FLAGS
  • this message wasn't really clear to me and I had to read the comment inside the script to know I had to run it on the top directory:
    Did you run configure? Do you call me from the top bind10 directory?
    
    I'm not really sure how we can improve it, but perhaps something like this is better: "Maybe you are running this script under the "tools" directory; make sure you run it at the top bind10 directory"
  • s/witouth/without/?
        echo "No test was found. It is possible you configured witouth --with-gtest or you run it from wrong directory" >&2
    

As for using bash, I think it's okay because this is an optional
feature and because valgrind is (realistically) quite Linux
centric anyway... (hmm, I didn't know it's available for Mac OS X, too.
But in either case bash should be generally available).

As for using perl, again, I have no problem with it, at least at the
moment, also with the understanding that perl is pretty ubiquitously
available. But I'd like to avoid setting a precedent for a chaos
where everyone introduces tool scripts in their favorite
language, saying "I like ruby/java/haskell/xxx and it just seemed to me
that using XXX was the best way to do this job". So, I'd suggest adding
a note about the choice of perl: that, unless it's your intent, it's
not meant to open the door to that chaos (or, better still, that
we'll eventually rewrite it in our primary script language, python, if
that's the plan).

I'd like to merge these patches into master and keep them there for future generations. I think this doesn't need a changelog entry, since it's our own tool.

The results

I've not fully examined the results yet (I think it's beyond the
original scope of this ticket), but anyway,

May I create tickets for each of these tasks (except the AddressSelection? one)? I would attach the relevant output to each of them.

I think it's a good idea.

comment:6 Changed 9 years ago by jinmei

  • Owner changed from jinmei to vorner
  • Status changed from accepted to assigned

comment:7 Changed 9 years ago by jinmei

Additional comments: I've installed valgrind on my MacBook Pro and ran
the scripts, finding the following portability issues:

  • -executable option for find wasn't available
  • if [ -n "$FAILED" ] didn't work (-n was considered a program)

For the first issue, I think we can simply remove this option in practice,
at least until we actually have a file named "run_unittests" that should be
excluded and can only be excluded with this option (which doesn't seem to be
the case for now).

The second issue can be solved by explicitly invoking 'test' instead of '['.
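A sketch of the two portable spellings under plain /bin/sh (illustrative snippets, not the attached patch):

```shell
#!/bin/sh
# GNU-only:   find . -name run_unittests -executable
# Portable:   drop -executable; -type f is enough while no non-script
#             file of that name needs to be excluded.
find . -name run_unittests -type f >/dev/null

# Some /bin/sh builds mishandle [ -n "$VAR" ]; spelling out the test
# utility avoids the problem.
FAILED=""
if test -n "$FAILED"; then
    echo "some tests failed"
fi
```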

I'm attaching a proposed patch.

Changed 9 years ago by jinmei

comment:8 in reply to: ↑ 5 ; follow-up: Changed 9 years ago by vorner

  • Owner changed from vorner to jinmei
  • Status changed from assigned to reviewing

Hello

Replying to jinmei:

  • just FYI: valgrind on bind10.isc.org doesn't recognize the --read-var-info option, so we'll have to specify FLAGS

I removed that one. I was reading through the manpage and hoped it would give me better output, but it seems it doesn't matter in our case anyway.

As for using perl, again, I have no problem with it, at least at the
moment, also with the understanding that perl is pretty ubiquitously
available. But I'd like to avoid setting a precedent for a chaos
where everyone introduces tool scripts in their favorite
language, saying "I like ruby/java/haskell/xxx and it just seemed to me
that using XXX was the best way to do this job". So, I'd suggest adding
a note about the choice of perl: that, unless it's your intent, it's
not meant to open the door to that chaos (or, better still, that
we'll eventually rewrite it in our primary script language, python, if
that's the plan).

I added a note. I didn't think of including one at the time of writing. Only when I actually used the script did it cross my mind that it would be nice to preserve, so I just put it there (the choice was between including it or not; rewriting it at that point seemed like no gain).

So, should I merge it now?

May I create tickets for each of these tasks (except the AddressSelection? one)? I would attach the relevant output to each of them.

I think it's a good idea.

OK, I'll put them into the relevant backlogs for now and we can promote them if they seem important enough.

Thanks

comment:9 Changed 9 years ago by vorner

It turned out that ${var:-default} isn't a bashism, so I'm moving the script from bash to sh.
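For illustration, this is the kind of POSIX parameter expansion involved; the variable names below are made up, not taken from the script:

```shell
#!/bin/sh
# ${var:-default} is POSIX parameter expansion, not a bashism, so it
# works under plain /bin/sh: use the variable's value if set and
# non-empty, otherwise fall back to the default.
VG_LOG=${VG_LOG:-valgrind.log}
VG_FLAGS=${VG_FLAGS:---leak-check=full}
echo "logging to $VG_LOG with flags: $VG_FLAGS"
```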

comment:10 in reply to: ↑ 8 Changed 9 years ago by jinmei

Replying to vorner:

So, should I merge it now?

Yes, go ahead. (You may want to add a changelog entry, which I forgot to
mention in the previous comment.)

comment:11 Changed 9 years ago by jinmei

  • Owner changed from jinmei to vorner

comment:12 Changed 9 years ago by vorner

  • Resolution set to complete
  • Status changed from reviewing to closed

Thanks, merged.

comment:13 Changed 9 years ago by vorner

  • Add Hours to Ticket changed from 0 to 8

comment:14 Changed 9 years ago by vorner

  • Estimated Difficulty changed from 0.0 to 8