Modern development practices have minimized application logging in production environments. Many factors have contributed to this, including:
- Rapid prototype-to-production practices that skip adding practical production logging.
- A myopic focus on optimized, resource-minimal code that treats logging as a burden.
- Comment-less, log-less development styles that focus on pumping out code, not on errata.
Regardless of why, sufficient and reasonable “application” logging has fallen out of vogue. This article presents practical guidance for application logs (and logging) that represents a “develop for security” best practice. The acronym FLACK identifies this practice and stands for:
- “Flow log” all critical process and data paths.
- Leak no critical information.
- Authentication/authorization paths need success and failure logging.
- Catch and log ALL exceptions.
- Keep and monitor all logs.
Each of these components will be discussed further in the sections to follow.
Critical process and data paths need to be logged in a way that lets a monitor tell, during execution, whether they look “normal” and whether exceptional activity is taking place. This is accomplished by a technique the author refers to as “flow logging” the process or the data access.
What is flow logging? “Flow logging” means adding well-placed log statements throughout an application process that will provide a reasonable indicator of the success, failure and exceptional flow steps. This allows someone monitoring the log to see a consistent sequence of log entries every time that flow takes place. Oftentimes the timing of these log steps can also give clues to abnormal activity.
What should be “flow logged”? That is up to each application development team; however, some guidelines:
- all macro level mutational tasks
- all credential usage and mutation
- any place critical data is stored or accessed
- primary “at risk” flows
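As a minimal sketch of the flow logging pattern, consider a mutational task instrumented so that every execution produces the same sequence of entries (the function name, step names, and log format here are hypothetical, not part of any standard):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders.flow")

def place_order(order_id, user_id):
    # Each step emits an entry, so a monitor sees a consistent
    # sequence (BEGIN -> VALIDATE -> PERSIST -> END) on every run.
    # A missing or reordered step in the log signals abnormal flow.
    logger.info("order-flow BEGIN order=%s user=%s", order_id, user_id)
    logger.info("order-flow VALIDATE order=%s", order_id)
    # ... validation logic would go here ...
    logger.info("order-flow PERSIST order=%s", order_id)
    # ... data store write would go here ...
    logger.info("order-flow END order=%s", order_id)
    return True
```

The timestamps the logging framework adds to each entry also provide the timing clues mentioned above.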
Leak no critical information… This sounds very obvious, but in fact many logs do leak sensitive and critical information. What is critical?
- any credential or meta information about a credential
- data store structural information
- specific system path, structure, parameterization information (e.g., installation paths)
A general way of reducing log leakage is to develop “codes” for activities and errors in the application. The code book is not part of the application, but rather a support knowledge base which is available to the correct personnel.
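A sketch of this separation, with invented codes: the application logs only an opaque code, while the code book lives in a support knowledge base that never ships with the application.

```python
import logging

logger = logging.getLogger("app.events")

# In the application: log only the opaque code, never the detail.
def log_event(code):
    logger.warning("event code=%s", code)

# In the support knowledge base (NOT part of the application;
# shown here only for illustration):
CODE_BOOK = {
    "E-1042": "Database connection string failed validation",
    "E-2217": "Credential store unreachable at startup",
}
```

An attacker reading the log sees only `event code=E-1042`; the meaning is available only to personnel with access to the code book.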
The best way to ensure logs are not “leaky” is to task quality assurance teams with running extensive log checks to ensure sensitive or critical information is not leaking into the logs. Some development shops put bounties on finding leaks in logs.
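One way QA might automate part of such a check is a pattern scan over captured log output. The patterns below are illustrative only; a real scan would cover many more leak types (tokens, keys, internal hostnames, and so on):

```python
import re

# Illustrative leak patterns, one per critical-information class above.
LEAK_PATTERNS = [
    re.compile(r"password\s*=", re.IGNORECASE),  # credential material
    re.compile(r"\b\d{16}\b"),                   # bare card-like numbers
    re.compile(r"[A-Za-z]:\\|/opt/|/usr/local/"),  # installation paths
]

def find_leaks(log_lines):
    """Return (line_number, line) pairs that match a leak pattern."""
    hits = []
    for i, line in enumerate(log_lines, start=1):
        if any(p.search(line) for p in LEAK_PATTERNS):
            hits.append((i, line))
    return hits
```

Running such a scan as part of the regular test suite turns log hygiene into a repeatable check rather than a one-time audit.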
Authentication and authorization successes and failures should always be logged. Period. This is a case where there should never be any deviation from the standard.
It is a best practice to do more than just simple success/failure logging of authentication or authorization activities. As referenced previously in this article, flow logging the processes and data access involved in authentication or authorization is a better choice. By flow logging auth/auth, one may:
- detect brute force attempts (multiple failures)
- detect illicit access (out of normal successes)
- identify subverted auth/auth processes or data (missing steps)
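A minimal sketch of the first point, assuming a hypothetical login flow and an assumed failure threshold of five: both outcomes are logged (never the credential itself), and a run of consecutive failures is flagged.

```python
import logging

logger = logging.getLogger("auth.flow")

def log_login_attempt(username, success):
    # Log both outcomes; never log the password or its metadata.
    if success:
        logger.info("auth-flow LOGIN-SUCCESS user=%s", username)
    else:
        logger.warning("auth-flow LOGIN-FAILURE user=%s", username)

def looks_like_brute_force(recent_outcomes, threshold=5):
    """Flag a streak of consecutive failures (True = failure)."""
    streak = 0
    for failed in recent_outcomes:
        streak = streak + 1 if failed else 0
        if streak >= threshold:
            return True
    return False
```

In practice this detection would run in the log analysis tooling rather than the application, but the principle is the same: the flow log entries make the pattern visible.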
In application development there are many schools of thought on exception handling. Best security practices suggest that all exceptions should be handled reasonably and logged, including the stack trace (although user-facing output should have the stack trace cleaned).
Exceptional points in logic are often the points an attacker will exploit. Even if not exploited directly, these exceptional logic points can be tripped while an attacker attempts an exploit. Knowing an exception is happening is always important, and exceptions can become breadcrumbs for identifying advanced persistent threat (APT) attacks. Logging exceptions gives the best chance of catching all of these scenarios.
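One way to realize this, sketched here with hypothetical names: a catch-all handler writes the full stack trace to the secure application log, while the user-facing response carries only an opaque reference.

```python
import logging
import uuid

logger = logging.getLogger("app.errors")

def handle_request(action):
    try:
        return action()
    except Exception:
        # Short reference ties the user's report back to the log entry.
        ref = uuid.uuid4().hex[:8]
        # logger.exception records the full stack trace in the secure log.
        logger.exception("unhandled exception ref=%s", ref)
        # The user-facing message is cleaned: no trace, no internals.
        return {"error": "An internal error occurred.", "ref": ref}
```

The reference code lets support correlate a user report with the logged trace without ever exposing internals to the user.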
The last aspect of our application security logging plan, once again, seems obvious. Too often, logs are kept only on an interim or short-term basis. Logs in general should cover multiple days and should go back as far as is reasonably possible. Log entries are to a security professional what symptoms are to a doctor – it is not always the value, but the history of what has transpired.
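In Python's standard logging module, for instance, long retention can be configured with a timed rotating handler; the one-year figure here is an assumption for illustration, not a mandate:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# Rotate the log at midnight and keep roughly a year of daily files.
handler = TimedRotatingFileHandler(
    "app.log", when="midnight", backupCount=365
)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
)

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
```

Retention should ultimately be driven by the organization's incident-response and compliance requirements, not by disk convenience.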
Too often, production “application” logs are never looked at until an incident occurs. Application logs should be reviewed regularly and integrated into log analysis tools such as a SIEM. For a long time the security industry has focused on perimeter defense logs. The modern kill chain looks more like:
social engineer -> exploit -> permanence -> goals
This kill chain may never touch the perimeter, as the social engineering attack most often circumvents it. Therefore perimeter logs have little value in detecting the first part of the kill chain. Application logs are the treasure trove for detecting the initial parts of the kill chain, and become the bastion for detecting the other aspects.
FLACK does not directly resolve any security risk. Properly implemented, it does provide an easy mnemonic for implementing “application” security logging. Logs are important: their importance lies not only in the content (values) found, but also in tempo and flow. Good log review can detect changes in all of these, which may be the only fingerprint we get to discover compromises on our systems. FLACK enables this possibility.