
Best Logging Practices For TIBCO LogLogic® Log Management Intelligence

Last updated:
8:48pm Aug 26, 2019


TIBCO LogLogic® provides the industry's first enterprise-class, end-to-end log management solution. Using LogLogic log management solutions, IT organizations can analyze and archive network log data for the purpose of compliance and legal protection, decision support for network security remediation, and increased network performance and improved availability.

This page provides best logging practices that let applications take full advantage of TIBCO LogLogic® LMI's log analysis. LMI can collect, index, and search both structured and unstructured data, but following the guidelines in this article lets users take full advantage of out-of-the-box reporting, parsing, and data modelling to gain quick insights from machine and log data. The following are the basic aspects of a better logging framework.

Data/Log Formats In General

Standard Data Formatting

  • All events should be expressed in text format. Binary data sounds attractive because it is compact, but it requires decoding; encoding and decoding binary data increases the work required both for the application generating the data and for TIBCO LogLogic® LMI processing it.
  • All events should be encoded in ANSI or UTF-8, without requiring a BOM.
  • Create human-readable events.
  • Avoid complex encodings that would require lookups to make event information intelligible.

Categories of Data

All events should be Categorized – Use INFO, WARN, ERROR, DEBUG, EXCEPTION.

  • DEBUG level for application debugging.
  • INFO level for semantic logging.
  • WARN level for recoverable errors or automatic retry situations.
  • ERROR level for errors that are reported but not handled.
  • EXCEPTION level for errors that are safely handled by the system.
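As an illustration, the first four categories map directly onto the levels of a standard logging framework. The sketch below uses Python's logging module; note that Python has no separate EXCEPTION level, and `log.exception()` records at ERROR severity with a traceback, so the mapping is only approximate:

```python
import logging

# Hypothetical application logger; the name and messages are illustrative.
logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
    level=logging.DEBUG,
)
log = logging.getLogger("MyAppManager")

log.debug("Entering request handler")            # DEBUG: application debugging
log.info('user="admin" action="login"')          # INFO: semantic logging
log.warning("Retrying connection, attempt=2")    # WARN: recoverable / retry
log.error("Could not open data file")            # ERROR: reported, not handled
```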

Usage of Transaction IDs

  • Unique identifiers such as transaction IDs are tremendously helpful when debugging, and even more helpful when gathering analytics.
  • Unique IDs point users to the exact transaction. Without them, users might only have a time range, which makes it difficult to narrow the search criteria.
  • Carrying the transaction ID through multiple touch points, without changing its format between modules, is a quick way to correlate data from multiple points. Users can track transactions through the system and follow them across machines, networks, services, and endpoints to determine the flow of events.
  • Transaction IDs should be unique even across restarts of the service, and should not be reused within the expected retention time of the logs. In a distributed architecture, IDs should be unique across the components involved; reusing the same ID, or the same ID pattern, in multiple components can render analysis based on unique IDs useless.
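A minimal sketch of generating and carrying such an ID (the function names are illustrative, not an LMI API). A random UUID is unique across restarts and components with no shared state, which satisfies the requirements above:

```python
import uuid

def new_transaction_id() -> str:
    """Generate an ID that is unique across restarts and components."""
    # uuid4 is random, so collisions are practically impossible even
    # across machines -- no shared counter or coordination needed.
    return uuid.uuid4().hex

def log_with_txn(txn_id: str, component: str, message: str) -> str:
    # Keep the ID format identical in every module so events can be
    # correlated end to end across machines and services.
    return f'txnId="{txn_id}", component="{component}", message="{message}"'

txn = new_transaction_id()
line_a = log_with_txn(txn, "frontend", "request received")
line_b = log_with_txn(txn, "backend", "request processed")
```

Searching for the shared `txnId` then returns both events, regardless of which component emitted them.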

Keeping Multi-Line Events To A Minimum

  • TIBCO LogLogic LMI natively treats each line as a single event, where a line ends with an end-of-line marker (CR/LF or LF).
  • A multi-line event is therefore segmented and cannot be seen as a single event.
  • Consider breaking multi-line events into separate events and use a Transaction ID to relate them together.
  • If multi-line events must be used, make sure they all contain a common prefix that identifies the beginning of a new event.
  • Do not mix multi-line and single-line logs in the same file, unless they all contain a common prefix.
  • If users must have multi-line events, these events need to be flattened before being sent over syslog. TIBCO LogLogic Universal Collector (UC) is one application that can help with this process: UC can collect and flatten the events before sending them to LMI for processing.
  • When flattening, replace the CR/LF or LF with a space or with the literal string \r\n. This ensures users can view multi-line logs without the events being segmented. Below is a comparison of the same data before and after flattening by TIBCO LogLogic Universal Collector:
    • Unflattened data (see screenshot: MultiLined Logs).
    • The same data flattened using a space or the \r\n string (see screenshot: Multilined Formatted Index Search).
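The flattening step itself is simple to sketch (the function below is illustrative; Universal Collector performs the equivalent work for users). Real line breaks are replaced by the literal two-character sequences `\r\n`, so the whole event travels as one physical line:

```python
def flatten(event: str, replacement: str = r"\r\n") -> str:
    """Replace real line breaks with a literal marker string so the
    whole event is transported as a single syslog line."""
    # Handle CR/LF first so a CRLF pair is not replaced twice.
    return event.replace("\r\n", replacement).replace("\n", replacement)

stack_trace = "ERROR something failed\n  at Foo.bar()\n  at Baz.qux()"
flattened = flatten(stack_trace)
# flattened now contains no real newlines, only the literal \r\n markers
```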

Separating Access Log Events From Debug Events

  • Access Logs should be logged to a different file.
  • Clean separation of access logs from debug logs is a good practice which allows users to have different logging policies.
  • Since access logs are used mostly for security and compliance, they may need to persist longer than debug data and may need to be stored in a more secure manner. Separation lets users implement different data retention policies both on LogLogic and on the target disk, which improves resource planning.

Timestamps And Their Importance

The correct time is critical to understanding the proper sequence of events when dealing with log or machine data.

  • All timestamps should be close to the beginning of the line, preferably at the start of the message. The farther the timestamp is from the beginning of the message, the harder it is to analyze.
  • All events should use the following standard timestamp format:
      • Example: 2019-05-17T05:14:15.000003-07:00
    • The time zone should be a GMT/UTC offset.
    • For more supported formats see RFC 5424; the format above is the one recommended for TIBCO LogLogic LMI.
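A timestamp in exactly this shape can be produced with the standard library's ISO 8601 formatting, provided the datetime carries a UTC offset (the -07:00 offset below is illustrative):

```python
from datetime import datetime, timedelta, timezone

def event_timestamp(now: datetime) -> str:
    """Format a timezone-aware datetime in the recommended style,
    e.g. 2019-05-17T05:14:15.000003-07:00."""
    # isoformat() emits date, time, microseconds, and the UTC offset.
    return now.isoformat()

pacific = timezone(timedelta(hours=-7))  # illustrative GMT/UTC offset
ts = event_timestamp(datetime(2019, 5, 17, 5, 14, 15, 3, tzinfo=pacific))
# ts == "2019-05-17T05:14:15.000003-07:00"
```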

Importance Of Key-Value Pairs

  • All key-value pairs (KVPs) should be comma delimited. Example: Key1="value1", Key2="value2", Key3="value3", etc.
  • All Values in Key-Value Pairs should be double quoted
    • This allows spaces or embedded commas to appear in values without escaping.
    • No embedded double quotes should be used in values.
  • All Key Values should be in a static format
    • Having "Structured Data" is the key to parsing performance. Structured data means a defined or identifiable pattern of data which does not deviate from the pattern. 
    • Keys should always be in the same order for all events.
    • Any optional KVP should be added to the end of the event to make event processing faster.
  • It is recommended to have all events under 64k in size when possible.
  • Any event exceeding 64k should be split into multiple events with a common transaction ID to relate them.
  • TIBCO LogLogic LMI added support for "Jumbo Messages" which means messages longer than 64 KB and up to 1 MB can be collected via TCP syslog.

Note: If any events must be larger than 64 KB, users will need to use TCP syslog, and each message should be no larger than 1 MB. Any event exceeding 1 MB should be split into multiple events with a common transaction ID to correlate them.

Key Value Pairs In Detail

All events should always contain the following information in the following order at the beginning of the event for them to be in a good KVP format:

  • Event Time – The time at which the event occurred.
  • Application – The name of the application generating the event (should always follow Event Time to help with LMI identification).
  • Type – The type of event.
  • Source IP – The IP where the event occurred.
  • Target IP – The IP of where the event was targeting.
  • Source User – The User Name of the user triggering the event (where applicable, the value should be an empty string if not applicable).
  • Target User – The User Name of who the event was targeting (where applicable, the value should be an empty string if not applicable).
  • Any big blobs of free-form text should always be put at the end of an event to avoid creating unstructured events. All important information should be broken into KVPs, but details that need to be added to an event and are not critical should go at the end of the event. This makes event processing faster.

Example of a KVP Message

eventTime="2019-05-19T05:14:15.000003-07:00",application="MyAppManager",type="WARN",sourceIP="",targetIP="",sourceUser="admin",targetUser="",description="Service shutting down"
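A message in this shape can be assembled with a small helper (the function name is illustrative). Keyword arguments keep the keys in a static order, and the double-quote check enforces the rule that values must not contain embedded double quotes:

```python
def kvp_event(**fields: str) -> str:
    """Render fields as comma-delimited key="value" pairs in a
    fixed key order (Python preserves keyword-argument order)."""
    for value in fields.values():
        # Embedded double quotes are disallowed by the guidance above.
        assert '"' not in value, "embedded double quotes are not allowed"
    return ",".join(f'{key}="{value}"' for key, value in fields.items())

msg = kvp_event(
    eventTime="2019-05-19T05:14:15.000003-07:00",
    application="MyAppManager",
    type="WARN",
    sourceIP="",
    targetIP="",
    sourceUser="admin",
    targetUser="",
    description="Service shutting down",
)
```

Note how the non-applicable fields (sourceIP, targetIP, targetUser) are present but empty, preserving the static key order.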

Sending Data To LMI via Syslog

Sending data to LMI using syslog is not always easy, especially on systems that do not inherently have syslog-based tools the way Unix does. For the purposes of this article we assume users have syslog-based tools such as syslogd, fluentd, etc. Users can also leverage TIBCO LogLogic Universal Collector (UC) to achieve the same result; refer to the UC wiki page to learn more.

  • All Syslog events should use a standard syslog log header:
    • E.g. <13> 2019-05-19T08:30:08.00011-0700 [MYAPP]: eventTime="2019-05-19 08:30:08.11", …
    • Why two timestamps? The timestamp in the syslog header is when the event was sent and the eventTime in the event body is the time the actual event occurred.
  • All events should have an Application Name in the header. This allows for faster identification.
  • The fields must be in the following order, with non-used fields replaced by '-' :
    • Example of supported RFC5424 syslog header format
      • Note: At this time LMI does not support the syslog VERSION NUMBER after the PRI field.
  • Regardless of whether events go to a file or to syslog, it is recommended not to mix formats within events.
    • Example: no embedded binary, no embedded XML, no embedded JSON, etc.
    • Avoiding mixed formats makes event parsing more efficient.
    • If mixed formats are required, it is recommended to place them at the end of the event to avoid unstructured data formats.
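The header format above can be sketched as follows (the host name and helper names are hypothetical; the -07:00 offset is illustrative). The header timestamp is set at send time, separate from the eventTime inside the body:

```python
import socket
from datetime import datetime, timedelta, timezone

def syslog_line(app: str, body: str, pri: int = 13) -> str:
    """Build a syslog-style line: <PRI> timestamp [APP]: body.
    The header timestamp records when the event was sent; the
    eventTime in the body is when the event actually occurred."""
    tz = timezone(timedelta(hours=-7))   # illustrative offset
    stamp = datetime.now(tz).isoformat()
    return f"<{pri}> {stamp} [{app}]: {body}"

def send_syslog(host: str, port: int, line: str) -> None:
    # TCP syslog is required for messages over 64 KB ("jumbo
    # messages"); LMI accepts such messages up to 1 MB over TCP.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(line.encode("utf-8") + b"\n")

line = syslog_line("MYAPP", 'eventTime="2019-05-19 08:30:08.11", type="INFO"')
# send_syslog("lmi.example.com", 514, line)  # hypothetical LMI host
```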

Logging More Than Just Debug Events

  • Put semantic meaning in events to get more out of the data.
  • Log audit trails, what users are doing, transactions, timing information, and so on.
  • Log anything that can add value when aggregated, charted, or further analyzed.
  • In other words, log anything that is interesting to the business.

Logging Types

Logs can contain different kinds of data, and the selection of data is normally driven by the motivation for logging. This section explains the different types of logging information and the reasons, observed over time, why they need to be logged.

In general, logging should include appropriate debugging information such as the time of the event, the initiating process or owner of the process, and a detailed description of the event. The following are types of system events that can be logged in an application; which of them to use depends on the particular application or system and its needs:

  • Reading of data: log file access and what kind of data is read. This allows users to see not only that data was read, but also by whom and when.
  • Writing of data: log where and with what mode (append, replace) data was written. This can be used to see whether data was overwritten, or whether a program is writing at all.
  • Modification of any data characteristics, including access control permissions or labels, location in database or file system, or data ownership. Administrators can detect if their configurations were changed.
  • Administrative functions and changes in configuration regardless of overlap (account management actions, viewing any user's data, enabling or disabling logging, etc.)
  • Miscellaneous debugging information that can be enabled or disabled on the fly.
  • All authorization attempts (including the time), such as success/failure, the resource or function being authorized, and the user requesting authorization. Password guessing can be detected with these logs, and they can be fed into an intrusion detection system that detects anomalies.
  • Deletion of any data (object). Applications are sometimes required to have some sort of versioning, in which case the deletion process can be canceled.
  • Network communications (bind, connect, accept, etc.). With this information, an Intrusion Detection system can detect port scanning and brute force attacks.
  • All authentication events (logging in, logging out, failed logins, etc.), which also help detect brute-force and guessing attacks.
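As an illustration, an authentication event covering the points above (time, outcome, resource, requesting user) might be rendered in the KVP style recommended earlier. The field names and function are hypothetical, not an LMI schema:

```python
from datetime import datetime, timezone

def auth_event(user: str, success: bool, resource: str) -> str:
    """Render an authentication/authorization event with the fields an
    intrusion detection system needs to spot guessing attacks."""
    ts = datetime.now(timezone.utc).isoformat()
    outcome = "success" if success else "failure"
    # Comma-delimited KVPs, values double-quoted, timestamp first.
    return (f'eventTime="{ts}",type="AUTH",user="{user}",'
            f'outcome="{outcome}",resource="{resource}"')

evt = auth_event("admin", False, "/admin/console")
```

Aggregating such events by user and outcome makes repeated failures, and hence password guessing, immediately visible.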

Value Of Properly Formatting Data

Having events in proper formats will allow users to get the below value out of the data:

  • Parsing and searching of events is easy and straightforward.
  • Allows for faster development of event support.
  • Allows for better-performing event parsing.
  • It is a long-known standard format, and structured data is always better than unstructured data for analysis.
  • Allows for easy supportability of downstream applications – SEM, correlation, analytics, etc.

Good and Bad Logs

Here is an example of a good log message and a bad log message, encompassing what was covered in the previous sections. TIBCO LogLogic LMI supports collection, indexing, and searching of any data, but following the good sample below lets users get the most out of the analysis.

Good Log Sample:

eventTime="2019-05-19T05:14:15.000003-07:00",application="MyAppManager",type="WARN",sourceIP="",targetIP="",sourceUser="admin",targetUser="",description="Service shutting down"

Bad Log Sample:

2019-05-05 08:05:58,818 [REQ-ACSGenericRequest_v1-0-4-0_c1-0-1-0.Java.ACSGenericRequestJava.ACSGenericRequest-5] DEBUG

com.abcd.webservices.toolkit.transport.WServerTransportSession - AnyDemandCodeRequestGenerator :

WServerTransportSession.sendRequest() - Response received: ?>A*RS@1708/05NOVMSP   PSGR SECURITY LIST  TTL-000  05NOV/1305Z


------  -----------------------  ------------------  ------



Another example of bad log data is shown in the screenshot "Bad Log Sample".

Logging Maturity Model

The levels are cumulative, so each higher level should include the functionality of the prior levels.

Level 1 – Minimal:

  • Log anything users want to a file.
  • Logs are structured, e.g. log4j format.

Level 2 – Debug data:

  • Track application warnings and errors.
  • Track down bugs.
  • Track down performance problem areas.
  • Logs are more structured, e.g. log4j format.

Level 3 – Security and Compliance Access logs:

  • Who did what? Where? When? And how?
  • Create a clear audit trail from start to end of a given transaction or session.
  • Logs are in key-value pairs.

Tighter Integration With TIBCO LogLogic LMI

An alternate way to send data to LMI is via ULDP (a proprietary protocol), which offers encryption, compression, queuing, and more. The ULDP library and sample code can be found in the LMI supplemental package in the LMI distribution. To learn more, contact TIBCO Support.

Other Integration Tools

TIBCO LogLogic® Logging Toolkit for Java: the TIBCO LogLogic logging extension for Java supports the following logging backends:

  • Java JDK Logging framework
  • Log4j (1.x)
  • Log4j 2 (2.x)
  • Logback

You can find more information about this open source library at GitHub.

Additional Resources

To learn more about how to use LogLogic LMI to manage your logs, refer to our documentation and watch our tutorial videos.

If there are any general questions regarding the use of these LogLogic products, post your questions to the community; for issues that require a support case, please open a case with TIBCO Support.