http://qs321.pair.com?node_id=1186301

stevieb has asked for the wisdom of the Perl Monks concerning the following question:

As someone who's written a simple logging module and who struggles with where to situate certain log messages, I'm curious to get feedback on the following post, along with any other criteria you use when deciding which log level to drop things into.

Choosing a log level.

Replies are listed 'Best First'.
Re: Choosing a log level
by Your Mother (Archbishop) on Mar 29, 2017 at 04:43 UTC

    Log::Log4perl has these levels–

    OFF FATAL ERROR WARN INFO DEBUG TRACE ALL

    In Catalyst/Log::Log4perl::Catalyst I just pick what makes sense at the time. Fatal and error both sort of define themselves. Warn is something important for the dev to see early and often until fixed. Info… seems a step too much, but I have used it when waffling about using debug. Debug is for normal, easy-to-understand flow information that is possible, or prone, to regressions/problems. Trace is the whole enchilada; too much really, but some troubleshooting requires it. All… I don't even think is supported in Catalyst logging… never saw it before tonight.
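    For reference, a minimal sketch of those levels through Log::Log4perl's documented :easy interface; the messages are only illustrative:

        use strict;
        use warnings;
        use Log::Log4perl qw(:easy);

        # Initialize at DEBUG: everything from DEBUG up is emitted, TRACE is not.
        Log::Log4perl->easy_init($DEBUG);

        my $log = Log::Log4perl->get_logger();
        $log->fatal("cannot continue");          # defines itself
        $log->error("operation failed");         # defines itself
        $log->warn("dev should see this soon");
        $log->info("a step too much, perhaps");
        $log->debug("normal flow information");
        $log->trace("the whole enchilada");      # suppressed at the $DEBUG threshold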

Re: Choosing a log level
by Discipulus (Canon) on Mar 29, 2017 at 08:03 UTC
    hello

    While my opinion might not be so relevant, I think that too many levels of logging make the decision to assign a particular event to a precise log level too discretionary, and possibly murky and confusing, particularly when you need to add code later and the perception of which level applies to an event may have shifted slightly.

    On the other hand, it always depends on the nature of the application. When it comes to logging, I think you must consider the consumer of the information to decide correctly. Normally there are just two consumers: the user and the programmer. Again, normally the programmer wants to be aware of user problems, but that is not always true: in fact you can have, let's say, a huge file-upload application where you want/must distinguish between user_error, client_error and server_error, just as you can have another application where you want a supervisor_info level distinct from a general info level.

    In more complex situations you can have consumers of information that overlap, so you have granularly defined log levels that group together under some circumstances: @usrlog = qw(ERROR USER_INFO); @supervisorlog = qw(SUPER_INFO USER_ERROR); and so on.

    In such cases it is better to have the possibility of manually including one log level inside another. Generally DEBUG implies all other levels, but you may prefer something different in some circumstances. You can build up the list of levels with the rule that if only one level is specified for, let's say, the user, it contains all the more severe ones, but if you specify more than one, that explicit list is final: so @usr_log = qw(INFO); is the same as @usr_log = qw(INFO WARN ERR FATAL); but @dev_log = qw(DEBUG ERROR FATAL DB_ERROR DB_WARN DB_FATAL); contains neither INFO nor WARN.
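    A minimal sketch of that expansion rule (the ordering and the expand_levels() helper are only an illustration, not any existing module):

        use strict;
        use warnings;

        my @order = qw(DEBUG INFO WARN ERR FATAL);     # most to least verbose
        my %rank  = map { $order[$_] => $_ } 0 .. $#order;

        # A single named level implies itself plus every more severe level;
        # an explicit list of levels is taken literally.
        sub expand_levels {
            my @wanted = @_;
            if (@wanted == 1) {
                my $r = $rank{ $wanted[0] };
                return grep { $rank{$_} >= $r } @order;
            }
            return @wanted;
        }

        print join(' ', expand_levels('INFO')), "\n";
        # INFO WARN ERR FATAL
        print join(' ', expand_levels(qw(DEBUG ERR FATAL))), "\n";
        # DEBUG ERR FATAL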

    But in the majority of cases there are only the programmer and the user; if so, I'd choose something simpler (as I've done in my little log experiment):

    FATAL   user and dev must be aware that the application cannot continue: a fundamental resource cannot be reached, authentication failure for the main application, another instance of a singleton is already running...
    ERROR   any error that does not fall among the fatal ones: user login errors, needed but not necessary files cannot be written or read. This level mixes user errors and application errors, as generally both consumers are interested in them.
    WARNING unexpected behaviours that are not strictly errors, such as choosing a destination folder that does not exist, or a destination file that already exists. User warnings generally let the user rerun some action; application warnings are auto-recoverable errors, like failing to connect to the first DB when a second attempt will be made.
    INFO    any positive feedback from the application to the end user.
    DEBUG   any positive feedback from the application to the programmer: the file was opened, the DB is available, the data structure taken will be dumped...
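    A minimal sketch of a logger with exactly these five levels (the TinyLog name and its interface are just an illustration, not my actual experiment):

        package TinyLog;   # hypothetical name, sketching the scheme above
        use strict;
        use warnings;

        my @levels = qw(DEBUG INFO WARNING ERROR FATAL);   # most to least verbose
        my %rank   = map { $levels[$_] => $_ } 0 .. $#levels;

        sub new {
            my ($class, %opt) = @_;
            my $min = uc( $opt{level} // 'INFO' );
            return bless { min => $rank{$min} // $rank{INFO} }, $class;
        }

        # Emit the message with file and line, but only at or above the threshold.
        sub log {
            my ($self, $level, $msg) = @_;
            my $r = $rank{ uc $level } // return;          # unknown level: ignore
            return if $r < $self->{min};
            my (undef, $file, $line) = caller;
            printf STDERR "[%s] %s at %s line %d\n", uc $level, $msg, $file, $line;
        }

        package main;
        my $log = TinyLog->new(level => 'WARNING');
        $log->log(INFO  => 'silently dropped');            # below threshold
        $log->log(ERROR => 'out.txt cannot be written');   # emitted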

    L*

    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.

      Very good, thoughtful post (++). There's just one point which baffles me:

      needed but not necessary files

      What should this convey? To me, "needed" pretty much implies "necessary" and vice-versa. If you could clear that up then you would have a great sequence of categories.

        Well, the words do seem identical in sense, but what I meant was, for example, an output file that cannot be written. This might not be a fatal error: you may receive "ERROR: out.txt is in use and $0 cannot lock it", which is different from "FATAL: cannot read the configuration you supplied", even though it exists.
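        A sketch of that distinction in plain Perl, with warn and die standing in for the ERROR and FATAL levels (the file names are hypothetical):

            use strict;
            use warnings;
            use Fcntl qw(:flock);

            # FATAL: the run cannot proceed at all without its configuration.
            open my $cfg, '<', 'app.conf'
                or die "FATAL: cannot read the configuration you supplied: $!\n";

            # ERROR: one output is lost, but the rest of the run can continue.
            if ( open my $out, '>>', 'out.txt' ) {
                unless ( flock $out, LOCK_EX | LOCK_NB ) {
                    warn "ERROR: out.txt is in use and $0 cannot lock it\n";
                }
            }
            else {
                warn "ERROR: cannot open out.txt: $!\n";
            }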

        So probably the better word is "complementary" instead of "needed"; thanks for the comment.

        L*

        There are no rules, there are no thumbs..
        Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
Re: Choosing a log level
by BrowserUk (Patriarch) on Mar 29, 2017 at 11:49 UTC

    My personal take is that any logging beyond errors(*) and warnings(*) is pointless.

    The only purpose cutesy "informational" messages ("I opened a file", "I wrote to the file", "I closed the file", etc.) serve is as a program-flow indicator, and as such, the only useful information they carry is the file and line number from which they are produced.

    The only time logs are read is when something has gone wrong. Before that occurrence, informational messages are just clutter that must be stored and, most of the time, skipped over to find the useful information. And the program lines that produce them are program clutter and a performance drain: extra, non-functional code that serves no good purpose, but that must be both written and maintained.

    If you routinely write informational messages with enough fine-grained frequency to actually help locate where things go wrong, then for 99% of the life of the program you are just producing crap that no one is interested in, but that must be written, maintained, stored and archived anyway. You've increased the size (and therefore complexity) of the program by 3 or 4 times, and you are producing orders of magnitude more redundant output.

    And if you only produce sporadic, broad-strokes tracing, then when things do go wrong it will not be enough to locate the problem. Trying to predict, when writing a program, where and what information will allow you to find future failures would require a crystal ball; it is a pointless exercise.

    The only meaningful option (IMO) for logging beyond errors and warnings is Devel::Trace-style line-by-line tracing (though I think simple line numbers are better, and all that is required, rather than full source-text tracing), which can be turned on and off, on demand, on a per-module basis.
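    For illustration, toggling Devel::Trace on demand looks roughly like this; $Devel::Trace::TRACE is the module's documented switch, and the routines are hypothetical stand-ins:

        # Run under the tracer:  perl -d:Trace yourscript.pl
        use strict;
        use warnings;

        sub initialize      { 1 }   # stand-ins for real program phases
        sub suspect_routine { 1 }

        $Devel::Trace::TRACE = 0;   # keep the known-good startup quiet
        initialize();

        $Devel::Trace::TRACE = 1;   # emit a line-by-line trace only here
        suspect_routine();

        $Devel::Trace::TRACE = 0;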

    (*)What constitutes an "error" and what a "warning" is another long debate.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
    In the absence of evidence, opinion is indistinguishable from prejudice.

      I would place your examples at more of a DEBUG level, or even more verbose.

      Useful INFO level messages that I've seen include listing off the high-level parameters being passed in, summarizing the header info on the input data, and noting environmental details.

      That can help a lot with distinguishing between data that got mangled in your code and data that was already wrong before it got to you, and with deciding whom to blame for it. ("Your image isn't cropped... because you told me not to crop it!")

      If you're logging the right things, it can also help you go back and identify issues that you had no idea were a problem, issues actually worthy of a much more emphatic alert. For example, I had a set of INFO values that were individually all within the original sensible limits, but after a month they clustered into two groups, and the difference between those groups turned out to be a key indicator of an undesirable hardware change, which now results in an ERROR and a delayed FATAL. Based on the INFO-level data, the should-have-been-fatal runs could be chased down and redone.
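      A tiny sketch of that kind of retroactive mining; the log format, field name and threshold are all invented for illustration:

          use strict;
          use warnings;

          # Extract one numeric reading per INFO line and flag runs on the
          # wrong side of a threshold that was only discovered later.
          my $threshold = 42.0;

          while ( my $line = <> ) {
              next unless $line =~ /^INFO\b.*\breading=(\d+(?:\.\d+)?)/;
              print "should-have-been-fatal: $line" if $1 > $threshold;
          }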

        All I can add is that I do not consider the logging of a program's input parameters and other external inputs (env. vars etc.) to be "cutesy informational messages", as they are crucial to reproducing failing runs.

        I do consider the logging of subroutine inputs "cutesy", as (given access to the program's inputs) they are reproducible on demand, driven by need, at runtime when debugging.

        Having worked on systems that extensively logged the parameters of every subroutine, function and method, along with every detail of the program's actions that the programmer thought might be useful when debugging, and having had to wade through gigabyte after gigabyte of mostly meaningless and always-reproducible logs trying to work out where things went wrong, I have a strong aversion to the cutesy, guessed-it-might-be-useful kind of logging.


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
        In the absence of evidence, opinion is indistinguishable from prejudice.