Aggregating A Python Error Summary from Log Files

Follow these steps to maintain more reliable scripts and catch more of your traceback errors:

  1. automate your scripts to run daily, weekly, monthly, etc.
  2. log all your traceback errors
  3. automate aggregating the logs and parsing tracebacks
  4. start a feedback loop of fixing the tracebacks until 0 tracebacks remain
  5. re-run the aggregator and confirm tracebacks disappeared

This pure Python script lets me home in on potential problem areas in my scheduled Python scripts.

import os

def parse_tracebacks(log):
    # collect (log file, error line) pairs so the summary shows
    # which log each error came from
    errors = []
    with open(log, 'r') as f:
        for line in f:
            if 'Traceback' in line or 'Error' in line:
                errors.append((log, line.strip()))
    return errors

# parse tracebacks from log files and write to text file
logs = [f for f in os.listdir(os.getcwd()) if f.lower().endswith('.log')]
tracebacks = [parse_tracebacks(log) for log in logs]
with open('tracebacks.txt', 'w') as fhand:
    for t in tracebacks:
        for error in t:
            fhand.write('%s\n' % ','.join(error))

This script does not capture the entire traceback, but it records each error type and the log file that contains it in a comma-separated text format. You can save the resulting text file with a '.csv' extension in a text editor like Notepad.
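If you prefer to skip the manual save-as step, the standard library's csv module can write the parsed pairs straight to a CSV file. A sketch using sample rows in place of real parser output:

```python
import csv

# sample (log file, error line) pairs standing in for parse_tracebacks output
rows = [('cron.log', 'ValueError: invalid literal'),
        ('backup.log', 'Traceback (most recent call last):')]

# newline='' is the documented way to open CSV files for writing
with open('tracebacks.csv', 'w', newline='') as fhand:
    writer = csv.writer(fhand)
    writer.writerow(['log_file', 'error_line'])  # header row
    writer.writerows(rows)
```

Unlike joining on commas by hand, csv.writer quotes any error line that itself contains a comma, so the columns stay intact in a spreadsheet.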


Noteworthy gains from aggregating my logs:

  • less fear of missing mistakes
  • more freedom to improve the code
  • catch the mistakes faster