25-Nov-2009

Logging versus debugging

Ian Cartwright wrote a piece calling for a ban on the debugger in the late stages of a development project. He says that "This usually leads to a big upswing in the amount of logging and it's logging we know helps to fix issues". Earlier in the same article he implies that a single exception and its stack trace are not enough information to locate the problem.

Now, obviously Ian is Microsoft-focused, but if I can locate the failing line of code (even down to the line of assembly code that produced the exception) on OpenVMS by reading a traceback dump, surely he can do it on his operating system of choice?

Granted, without some logging, you may not have contextual information about what the end user was doing with the software at the time, but I've found you rarely need that once you have the buggy line of code staring you in the face.

Conversely, one engineer I know writes an exception handler that deals with all unexpected errors produced by each program he writes. The exception handler dumps appropriate context from the data structures to a file. Overkill? He thinks not: this is the first part of the code he writes, and he's convinced having it in place saves him significant development time.
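
A minimal sketch of that approach in Python (my illustration, not his actual code): install a catch-all hook up front that writes the stack trace, plus whatever context the program holds, to a dump file.

import datetime
import sys
import traceback

# Application state worth capturing on failure; in a real program this
# would be whatever data structures give an error its context.
app_context = {"user": None, "last_action": None}

def dump_unhandled(exc_type, exc_value, exc_tb):
    """Write the stack trace plus application context to a dump file."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    with open("crash-%s.txt" % stamp, "w") as dump:
        dump.write("Unhandled %s: %s\n\n" % (exc_type.__name__, exc_value))
        traceback.print_tb(exc_tb, file=dump)
        dump.write("\nApplication context:\n")
        for key, value in app_context.items():
            dump.write("  %s = %r\n" % (key, value))

# Installed once, up front, as the handler for anything nobody catches.
sys.excepthook = dump_unhandled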

Having said all that, I'm going to have to disagree with Ian. Taking away the most versatile tool the developer has for examining what's occurring in the code, and then retrofitting logging, speaks of poor design up front.

P.S. Before you say "Up front design?!? What are you, a dinosaur?" here's an Agile story card for you:

"The end user will never see an unhandled error, instead they will see a message that an error has occurred and that the development team has automatically been notified of this fact. The program will then save enough contextual information to a text file to establish what the end user was doing when the error occured, and exactly where in the program the error occured. This text file will then be used to automatically establish a bug report (or increase the priority of an existant bug report) in the bug tracking system."

Posted at November 25, 2009 6:31 AM
Comments

If you can manage to ban the debugger, then you should rethink your entire debugging-support strategy.

A debugging infrastructure can be a core and critical component within a non-trivial application. This holds whether the integrated debugging is used for classic application debugging, for call tracing and basic performance monitoring, for creating and then processing application dumps on severe errors, for associated tasks including crash notifications and restarts and failure-related statistics collection, or otherwise.

You build debugging in. From the onset.

One OS environment that I work with can launch the system's debugger entirely under program control. That means the application can detect an "unexpected" failure and (barring cases of severe corruption) invoke application-specific debugger command sequences that access and display the core application context, generate a debugger-readable dump, and, if appropriate, restart the application.
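
Python offers a small-scale analogue of the same idea (a sketch only, not the OS facility described above): the program detects the unexpected failure itself and hands control to the debugger, positioned at the failing frame.

import pdb
import sys
import traceback

def debug_on_failure(exc_type, exc_value, exc_tb):
    """Print the traceback, then drop into the debugger at the
    frame where the failure occurred."""
    traceback.print_exception(exc_type, exc_value, exc_tb)
    pdb.post_mortem(exc_tb)

sys.excepthook = debug_on_failure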

Logs never have enough data, and users (and programmers) don't always find and read logs. Better still, you don't have to switch your techniques when you ship your code; you're always using your primary tools.

Bake debugging right into all non-trivial applications.

Posted by: Stephen Hoffman at November 25, 2009 4:35 PM

Comments are closed