[klee-dev] rationale behind the parameters used in the KLEE OSDI paper
Cristian Cadar
c.cadar at imperial.ac.uk
Thu Feb 7 11:03:04 GMT 2013
Hi Lei,
On 05/02/13 03:35, Lei Zhang wrote:
> Coreutils experiment. They are very helpful! Now we have set up the
> environment using Ubuntu 10.04 + LLVM 2.7 (I'd like to contribute to
> the upgrade to 2.9 after I finish this project).
That would be useful, thanks.
> And we have some
> questions about replaying ktests and accumulating the results. We use
> klee-replay to replay all the generated ktests in the obj-gcov directory
> and use gcov to get the line coverage for each source file. Is that
> right?
Yes, that's correct. You might want to use a tool that Daniel wrote,
zcov, to visualize the results: http://minormatter.com/zcov/
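Roughly, assuming the obj-gcov layout from the Coreutils experiments page
and chmod as an example tool (the paths, output directory and tool name
below are placeholders for your setup), the replay-and-coverage step could
look something like this:

    # run from the gcov-instrumented build tree
    cd obj-gcov/src
    rm -f *.gcda                        # reset the coverage counters
    for t in /path/to/klee-out-0/*.ktest; do
        klee-replay ./chmod "$t"        # replay one test case natively
    done
    gcov chmod.c                        # line coverage accumulated over all replays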
> And is there any way to replay in a sandbox? When testing some
> tools (e.g., chmod), they may harm the filesystem or hit permission
> errors (which can hurt coverage). Thank you in advance!
Yes, the right thing to do would be to replay every test in a new
instance of a virtual machine (or something similar). But we didn't do
this; we just made sure that the replay process didn't have permission
to do too much harm :) In any case, what's important to remember is to
replay each test case in a fresh sandbox directory (I'm referring here
to the "sandbox" discussed at
http://klee.llvm.org/CoreutilsExperiments.html) and to set the same
environment variables as the ones used during symbolic execution (the
ones in test.env).
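For illustration only (the sandbox paths, the test.env location and the
way the environment is passed are assumptions about your setup; test.env
is assumed to hold simple KEY=VALUE lines), a fresh-sandbox replay of a
single test case could look like:

    # start every replay from a pristine copy of the sandbox
    rm -rf /tmp/sandbox
    cp -r /tmp/sandbox.orig /tmp/sandbox
    # run klee-replay with only the environment used during symbolic execution
    env -i $(cat /path/to/test.env) klee-replay ./chmod test000001.ktest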
Hope this helps,
Cristian