PPQT 2 is to be pretty much a complete rewrite of version 1. I built the first version in an ad-hoc way, adding features one at a time to the basic editor, and as a result its software structure is rather ramshackle. Information about different data structures and formats leaks all over. Now that I know where it's going, the next version can be properly compartmentalized and structured.
And better-tested! V1 got "tested" by my using it. V2, I am determined, will have a separate unit-test driver for each module, and every added function means adding test code to exercise it. We be professional here!
And logging! V1 has no logging of any kind. There may be one or two places where an except clause has a print statement in it (blush), but that's it. So I read up on Python logging, and each module will have its own named logger and will log some occasional INFO lines, always a WARNING line where the module is working around some problem, and occasionally an ERROR line.
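A generic sketch of the pattern I have in mind, not actual PPQT code (the function and messages are made up for illustration):

# each module asks for a logger under its own name
import logging
_logger = logging.getLogger(__name__)  # e.g. "metadata" when this is metadata.py

def do_something(value):
    _logger.info('processing %s', value)
    if value is None:
        # working around a problem: say so and carry on
        _logger.warning('no value given, substituting a default')
        value = 0
    return value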
So the first module finished (yay!) is metadata.py, and it has several places where it detects and logs errors. How, then, in the matching metadata_test.py, can I test whether the module wrote the expected thing to the log?
There may be better ways, but this is how I'm doing it. First, at the top of the test module is this, which I expect will be boilerplate repeated in every test driver.
# set up logging to a stream
import io
import logging

log_stream = io.StringIO()
logging.basicConfig(stream=log_stream, level=logging.INFO)

def check_log(text):
    '''check that the log_stream contains text, rewind the log, return T/F'''
    global log_stream
    log_data = log_stream.getvalue()
    log_stream.seek(0)
    log_stream.truncate()
    return (-1 < log_data.find(text))
During execution of the unit test, log output is directed to an in-memory stream. In the test code, the module under test is provoked into seeing an error that should cause it to write a log line. Then you can just code assert check_log('some text the test should have logged'). The assertion fails if the string isn't in the log. If it succeeds, execution continues with the log cleared out for the next test.
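For example, a test might look something like this (the metadata call and message text are invented for illustration; the real names are whatever metadata.py actually provides):

import metadata

def test_bad_section():
    # hypothetical: feed the module something it should reject and log about
    metadata.load_string('{{NOT-A-REAL-SECTION}}')
    # the module should have logged a line mentioning the unknown section
    assert check_log('unknown section')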
Looking at it now, I think maybe check_log() should take two parameters, the text and the level, so as to verify that the message is at the expected level:
assert check_log('whatever', logging.WARN)
I'll leave that as an exercise. Meaning, I'm too lazy to do it now.
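For the record, one way it might go, untested, relying on the default basicConfig format that starts each line with the level name (e.g. WARNING:name:message):

def check_log(text, level=None):
    '''check that log_stream contains text (and the level name, if given),
       then rewind and clear the log, returning True/False'''
    global log_stream
    log_data = log_stream.getvalue()
    log_stream.seek(0)
    log_stream.truncate()
    found = text in log_data
    if level is not None:
        # logging.getLevelName(logging.WARN) returns 'WARNING', which the
        # default format "LEVELNAME:logger_name:message" puts in every line
        found = found and (logging.getLevelName(level) in log_data)
    return found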
Incidentally, another goal of V2 is to have localized (e.g. translated) text in the visible UI. Perhaps log messages should also be translated but... nah.