Tuesday, October 2, 2007

Fixing the Patcher (part 2): testing

Due to the complexity of the algorithm, there was no doubt that we would need a set of tests to ensure proper behavior. First of all, we need to make sure that introducing the fuzz factor won't break anything, and then we need to check that the fuzz factor mechanism works as the user would expect.

Tests related to the patcher are located in the PatchTest class. Bearing in mind that I will need to write plenty of tests to cover as many corner cases as possible, I started to think about how to make the job a little bit easier. This is when the idea of "testPatchdataSubfolders" came to my mind. I thought it would be great if one could test the patcher by simply adding a directory with some files in it (as this is what writing a patch test is actually all about). So now what I need to do is create a subfolder in the "patchdata" folder (e.g. "196847" for bug number 196847). To properly run the test, a specific set of files needs to be placed inside the subfolder:
  • context.txt - the original file
  • patch.txt - the patch we would like to apply
  • expected_context.txt - the expected result of applying the patch
  • actual_context.txt - the actual result after applying the patch
Part of the filename is used to determine the role of the file (e.g. if there is an "exp" substring somewhere in the filename, that file will be used as the expected result). There is no special pattern for the context file. If we want to use a specific fuzz factor when applying a patch, we add "fuzzX" or simply "fX" to the filename (for a fuzz factor equal to 2 it will be "fuzz2" or "f2"). At the moment the test can be run with fuzz factors from 0 to 3. If no fuzz factor is specified, the patcher will try to guess it. The file with the actual result is optional.
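To illustrate, the filename conventions above could be interpreted with something like the following sketch. This is not the actual PatchTest implementation; the class, enum, and method names here are all hypothetical:

```java
// Hypothetical sketch of the "patchdata" filename conventions:
// a substring of the name selects the file's role, and an optional
// "fuzzX"/"fX" fragment selects the fuzz factor (0-3).
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PatchDataNaming {

    public enum Role { PATCH, EXPECTED, ACTUAL, CONTEXT }

    // "patch" -> the patch to apply, "exp" -> expected result,
    // "act" -> actual result; anything else is the context (original) file.
    public static Role roleOf(String fileName) {
        String name = fileName.toLowerCase();
        if (name.contains("patch"))
            return Role.PATCH;
        if (name.contains("exp"))
            return Role.EXPECTED;
        if (name.contains("act"))
            return Role.ACTUAL;
        return Role.CONTEXT;
    }

    // Matches "fuzz2" or "f2" etc.; only fuzz factors 0-3 are allowed.
    private static final Pattern FUZZ = Pattern.compile("f(?:uzz)?([0-3])");

    // Returns the fuzz factor from the filename, or -1 if none is
    // specified, meaning the patcher should try to guess it.
    public static int fuzzOf(String fileName) {
        Matcher m = FUZZ.matcher(fileName.toLowerCase());
        return m.find() ? Integer.parseInt(m.group(1)) : -1;
    }
}
```

So a file named "patch_f2.txt" would be treated as the patch to apply with a fuzz factor of 2, while "expected_context.txt" would be picked up as the expected result.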

The other idea I had for testing the patcher was to write a fully automated test class. The class would change a file's content according to some algorithm or given parameters. The change couldn't be random, as a possible failure needs to be reproducible. Here are the steps:
  1. Create a project with a file and share it
  2. Make a change in the file
  3. Create a diff (patch) for the file
  4. Override and update the file (revert to previous version)
  5. Apply the patch and check that the result is the same as after step 2
  6. Go to 2.
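The loop above might be sketched roughly as follows. Everything here is hypothetical: instead of a real shared project, diff, and patcher, it uses an in-memory "file" and a trivial whole-content patch, just to show the shape of the round trip:

```java
// Rough sketch of the automated round-trip test idea. The "patch"
// here is simply the pair (old content, new content); a real test
// would create an actual diff and run it through the patcher.
import java.util.Objects;

public class RoundTripSketch {

    // Deterministic, reproducible mutation (step 2): derive the
    // change from the iteration number, never from randomness.
    static String mutate(String content, int iteration) {
        return content + "line added in iteration " + iteration + "\n";
    }

    public static void main(String[] args) {
        String original = "first line\n";            // step 1: create the file
        for (int i = 0; i < 5; i++) {                // step 6: go to 2
            String changed = mutate(original, i);    // step 2: change the file
            String patchOld = original;              // step 3: create a "patch"
            String patchNew = changed;
            String reverted = patchOld;              // step 4: revert the file
            String patched = reverted.equals(patchOld)
                    ? patchNew                       // step 5: apply the "patch"
                    : reverted;
            if (!Objects.equals(patched, changed))   // ...and check the result
                throw new AssertionError("round trip failed at iteration " + i);
            original = changed;                      // continue from the new state
        }
        System.out.println("round trip ok");
    }
}
```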
Well, this was just an idea, but I thought it was worth writing down. Anyway, in the end I decided that writing tests (or, in this case, providing sets of files) is a much better and faster way to find corner cases. But who knows, maybe I will return to this idea in the future; until then I will stick to "manual" testing.