Two things would lead me to using a database: (a) your log file has distinct fields, like the date logged, the id of the logged-in user at the time of the event, the module triggering the event, etc., and (b) you have a need to query against these fields, especially with complex queries. The only logs I would recommend leaving in flat-text-file-only format are those associated with rare or occasional error messages and other exception cases; these are typically used for diagnostic purposes, and there is little interest in log events older than a few weeks, so the text file format provides ready access to the information (using a local text editor). Typically, storing to a DB will incur a small performance hit compared with flat text output (assuming the individual elements of the log event, such as date/time, event type, numeric code, clear-text message, etc., are already available as separate values). The hit will be more noticeable if the underlying database table has many indexes. Sometimes a valid approach is to store to a database heap (a table without any index, or maybe just one simple index) and to keep this heap small by moving its contents to a fully indexed table every evening (or whenever the SQL load is expected to be low). Storing to a database also allows someone to query the logs for various purposes at a later date. On related matters, you could look into the many useful logging libraries such as log4j (which, by the way, can be configured to write to flat files with rolling management, or to a database back-end).

As a developer of client/server and now n-tier applications, I have great love for the power, reliability and speed of database systems. Having said this, I am very hesitant to do process logging in a DB. Storing the current status or critical state transitions of a complex workflow in a DB is great, but logging/tracing all of the steps in the DB can be an issue. If the reason for logging is to be able to trace failures and possibly debug the system, I need to be able to process my "log" in the direst of circumstances. What if my DB, network, etc. are not functional in some way? If I can get to the server at all, a text file lets me debug with vi/emacs/notepad/*. Not the most powerful of toolsets, but always available. A well-formatted log file can also have reports generated from it with grep/awk/sed, etc. Again, not the most powerful, but readily available. In the end, if I expect my logging to be used in failure scenarios, I need the highest availability possible, and presuming I am in a failure state, I can't assume that my DB will still be running.

In general, most SQL databases are optimised for updating data robustly, rather than for simply appending to the end of a time series. The implementation assumes that data should not be duplicated and that there are integrity constraints, referring to other relations/tables, which need to be enforced. Since a log never updates an existing entry, and so has no constraints which can be violated and no cascading deletions, there's a lot there which you'll never use. You might prefer a database for transaction scalability - say, if you want to centralise many logs into one database and so are actually getting some concurrency (though that's not intrinsic to the problem - having separate logs on one server would also allow this, but you then have to merge them to get a total for all your systems). Using an SQL database is a bit more complicated than just appending to a file or two and calling fflush. OTOH, if you are very used to working with SQL and are already using a database in the project, then there's little overhead in also using a database for logging.
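To make that contrast concrete, here is a minimal sketch of the same log event written both ways: appended as one formatted line to a flat text file, and inserted as a row with distinct columns over JDBC. The table and column names (app_log, logged_at, user_id, module, message), the file name, and the in-memory H2 connection string are illustrative assumptions, not anything prescribed above.

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.sql.Timestamp;
    import java.time.Instant;

    public class LogSinks {

        // Flat text: open in append mode, write one formatted line, flush (the fflush of the text above).
        static void logToFile(String userId, String module, String message) throws IOException {
            try (PrintWriter out = new PrintWriter(new FileWriter("app.log", true))) {
                out.printf("%s %s %s %s%n", Instant.now(), userId, module, message);
                out.flush();
            }
        }

        // Database: each distinct field becomes a column, so it can be queried later.
        static void logToDb(Connection conn, String userId, String module, String message)
                throws SQLException {
            String sql = "INSERT INTO app_log (logged_at, user_id, module, message) VALUES (?, ?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
                ps.setString(2, userId);
                ps.setString(3, module);
                ps.setString(4, message);
                ps.executeUpdate();
            }
        }

        public static void main(String[] args) throws Exception {
            logToFile("u42", "billing", "invoice generated");

            // Hypothetical in-memory H2 database, just to keep the sketch self-contained
            // (assumes the H2 driver is on the classpath); any JDBC URL would do.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:logs")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE app_log (logged_at TIMESTAMP, user_id VARCHAR(64), "
                            + "module VARCHAR(64), message VARCHAR(1000))");
                }
                logToDb(conn, "u42", "billing", "invoice generated");
            }
        }
    }

The point is only the shape of the difference: the file version is one line per event plus a flush, while the database version needs a connection, a schema and a prepared statement before the first event can be recorded.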
Given the wealth of log file analysis programs out there and the number of server logs which are plain text, it's well established that plain text log files do scale and are fairly easily queryable.
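The heap-table idea mentioned earlier can be sketched roughly as follows: the application inserts into a small, unindexed log_heap table during the day, and a nightly job moves its contents into a fully indexed log_archive table. The table names, the assumption that both tables share the same column layout, the assumption that nothing writes to the heap while the sweep runs, and the placeholder JDBC URL are all illustrative; this is one possible shape of the approach, not a prescribed implementation.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LogHeapSweep {

        // Copy everything out of the small, unindexed heap table into the indexed
        // archive table, then empty the heap, all in one transaction so a failure
        // leaves the heap intact. Assumes the two tables have identical columns
        // and that no new rows arrive during the sweep.
        static void sweep(Connection conn) throws Exception {
            boolean oldAutoCommit = conn.getAutoCommit();
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("INSERT INTO log_archive SELECT * FROM log_heap");
                st.executeUpdate("DELETE FROM log_heap");
                conn.commit();
            } catch (Exception e) {
                conn.rollback();
                throw e;
            } finally {
                conn.setAutoCommit(oldAutoCommit);
            }
        }

        public static void main(String[] args) throws Exception {
            // Placeholder JDBC URL; in practice this would be run by a scheduler
            // (cron, Quartz, etc.) during a quiet period.
            try (Connection conn = DriverManager.getConnection("jdbc:yourdb://host/logs")) {
                sweep(conn);
            }
        }
    }

Doing the copy and the delete in one transaction keeps the sweep safe against partial failure, and scheduling it for a quiet period is what keeps the per-event insert cheap during the day.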