Logging – Human Readable vs. Machine Parsable

I have often encouraged others to implement proper logging even in the smallest projects. My .NET MVVM template, available on GitHub, comes with log4net by default. There is no need to reinvent the wheel: besides log4net there are NLog and other frameworks, and most can target many different storages such as files (with automatic rolling), the Windows event log, UDP ports and more. This way we get some decent log files.

Human Readable

With some log file rolling and different log levels in place, it is usually a good idea to log any data which could be important, especially for debugging in production. When you get users to send you the log file, or a feedback tool does that for them, great: you now have some information in your hands which should look useful. Example:
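Since the original sample output is not reproduced here, a typical log4net pattern-layout output might look like this (logger and message names are invented for illustration):

```
2015-06-21 14:03:12,345 [Main] INFO  MyApp.ViewModels.MainViewModel - Application started
2015-06-21 14:03:12,401 [Main] DEBUG MyApp.Services.SyncService - Starting background sync
2015-06-21 14:03:14,872 [Sync] ERROR MyApp.Services.SyncService - Sync failed: request timed out
```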

Write down what is important in a nice human-readable way. No binary dumps needed 😉

Okay, now that you have this 8 MB of text right in front of you, the text editor of your choice will let you search for “ERROR”, a logger name and other terms. Some advanced editors can even hide all lines in a file which do not match your criteria, so you can try to show only INFO messages.

Once you have done this, you will find out that handling logs with several hundred lines or more can be a pain – even though these files are there to help you find bugs, and might be your only debugging aid in production scenarios (where connecting a remote debugger might not be possible).

Machine Parsable

Of course, I do not want to go back to any kind of proprietary log format: less readable, and you have to write your own parser. A good idea is to go with a well-known format. XML became unpopular because most people do not like XML namespaces and the redundant information caused by closing tags. The fall of XML was the rise of JSON: it is a more lightweight format (there is also an optional schema feature), and parsers are available for almost any programming language. If we write our log statements as JSON objects, they can easily be parsed.
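The original example is not reproduced here; a log file in this style might look like the following, one self-contained JSON object per line (property names are illustrative, not a fixed schema):

```json
{"date": "2015-06-21T14:03:12.345Z", "level": "INFO", "logger": "MyApp.MainViewModel", "message": "Application started"}
{"date": "2015-06-21T14:03:14.872Z", "level": "ERROR", "logger": "MyApp.SyncService", "message": "Sync failed", "exception": "System.TimeoutException"}
```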

As you can see, this approach assumes that every file line is a JSON object. You could also make the complete file a single array of JSON objects, but that would require rewriting the end of the file on every write to extend the array. If the app crashes and the log writer cannot flush, you might end up with a corrupted JSON array. Therefore I prefer the “one line = one JSON object” approach.
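A nice side effect of the line-based approach: a reader can simply skip a half-written trailing line instead of losing the whole file. A minimal sketch in Python (the concept is language-agnostic; function name and sample data are my own):

```python
import json

def read_log_lines(text):
    """Parse one JSON object per line; a truncated trailing line
    (e.g. the app crashed before the writer could flush) is skipped
    instead of corrupting the whole parse."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            entries.append(json.loads(line))
        except json.JSONDecodeError:
            # Half-written line: drop it, keep everything parsed so far.
            continue
    return entries

log = '{"level": "INFO", "message": "started"}\n{"level": "ERROR", "mes'
print(len(read_log_lines(log)))  # prints 1 – the truncated second line is skipped
```

Had this been a JSON array instead, the same truncation would have made the entire file unparsable.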

You can imagine that this is slightly harder for a human to read, but not too hard. Using a text editor’s search feature might even work better, because the object property names are visible all the time.

A format like this can be achieved in log4net with the additional libraries log4net.Ext.Json, log4net.Ext.Json.Serializers.Newtonsoft and (of course) Json.NET (formerly Newtonsoft.JSON; hardly a .NET app uses JSON without it). This is a rolling file appender configuration example:
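The original configuration listing is not reproduced here; a minimal sketch of such an appender might look like this (appender parameters follow log4net’s RollingFileAppender, the layout type comes from log4net.Ext.Json – verify the exact element names against the versions you use):

```xml
<appender name="JsonFileAppender" type="log4net.Appender.RollingFileAppender">
  <file value="logs\app.log.json" />
  <appendToFile value="true" />
  <rollingStyle value="Size" />
  <maximumFileSize value="10MB" />
  <maxSizeRollBackups value="5" />
  <!-- SerializedLayout from log4net.Ext.Json emits one JSON object per log event -->
  <layout type="log4net.Layout.SerializedLayout, log4net.Ext.Json" />
</appender>
```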

That is a lot of assemblies, but you might be using Json.NET anyway, and the other two are separate packages due to the modular design. You could still add your own renderer.

With all this great machine-readable output there is one question left.

How can I read it with filtering and sorting?

I was surprised when I did not find a nice little tool which would parse arbitrary JSON objects into a column-based view. If you can recommend a tool for this, I would highly appreciate a comment on this post!

What I was looking for was a tool where not all lines are required to contain the same JSON object layout; a property might be added only to specific log lines, for example. The tool should neither crash on nor ignore an inconsistent JSON object layout throughout the log file.
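One way to handle such heterogeneous objects is to build the column set as the union of all property names seen anywhere in the file; entries simply leave cells empty for columns they do not have. A sketch in Python (helper name and sample lines are my own, not from the tool):

```python
import json

def collect_columns(lines):
    """Union of all property names across heterogeneous per-line JSON
    objects, in first-appearance order, so every entry fits one grid."""
    columns = []
    for line in lines:
        for key in json.loads(line):
            if key not in columns:
                columns.append(key)
    return columns

log_lines = [
    '{"level": "INFO", "message": "started"}',
    '{"level": "ERROR", "message": "sync failed", "exception": "Timeout"}',
]
print(collect_columns(log_lines))  # prints ['level', 'message', 'exception']
```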


Did I say it would be easy to parse log entries in JSON format? Yes, it is. So I built a small MVVM application to open those files and search, filter and sort log entries.


This small sample app is available at GitHub as open source under the Apache 2 license.

It currently features:

  • Full text search through all entries.
  • Providing columns for all object properties found through the complete file.
  • Sorting and filtering based on columns.
  • Detailed JSON view for the selected entry as well as a detailed JSON object property view (selected through the columns’ context menu).
  • Auto-refresh on file updates.
  • Optional automatic re-opening of the last loaded file.
  • Hiding columns.

Known limitations:

  • The file is completely loaded into memory, so it will consume quite a lot of it.
  • No recent files feature.
  • Only one filter value per column, so you cannot filter on ERROR and WARN for example.

This is just a small, basic application, but it might come in handy. And you are welcome to send pull requests 😉

Further Notes

The concepts here apply not only to the .NET platform but also to Java, Ruby, etc. – programming in general. There are also companies providing specialized logging services for desktop and mobile apps. If you are going big, have a look at those – especially when going mobile or cloud, where you might need to collect and analyze data from thousands of devices per day or even per minute.

Also a log: “a portion or length of the trunk or of a large limb of a felled tree.”
