Then-Secretary of Defense Donald Rumsfeld popularized the terms “known knowns,” “known unknowns,” and “unknown unknowns.” With the ever-increasing number of data breaches and vulnerabilities, database operations teams have to account for every possibility. Visualizing your audit data allows you to look for the “unknowns”: those access patterns or connections that you’d otherwise overlook.

Although enabling an audit log and shipping it off to a vault may meet security and regulatory requirements, you miss an important opportunity to protect your customer and employee information.
The following dashboard demonstrates the type of information that audit logs can reveal:
- Who is connecting to my database (IP address, location, username, etc.)?
- Who is trying to connect to my database but getting access errors?
- Which tables are being accessed and by whom?
- Who is accessing sensitive data?

At a quick glance, we can see that “User_Unknown” and “Ops1_US” are accessing the salaries table. “User_Unknown” has executed 2 updates and has an IP address registered in Barbados. You can also see that “User_Unknown” has generated several 1045 (access denied) errors, which likely indicates a brute-force password attempt.
The diagram below shows the components needed for this type of real-time analytics: MySQL Enterprise Audit generates the logs, Filebeat ships new log entries to Logstash, and Logstash parses the JSON and sends the events to Elasticsearch, where they can be visualized in Kibana.
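On the Filebeat side, all that is needed is the location of the audit log file and the address Logstash is listening on. Below is a minimal filebeat.yml sketch; the audit log path and Logstash host are assumptions for this example and should be adjusted to your environment.

#filebeat.yml (minimal sketch)
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/lib/mysql/audit.log    # audit log in the MySQL data directory (adjust to your setup)

output.logstash:
  hosts: ["logstash_server_IP:5044"]    # must match the beats port in the Logstash config below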

Although I used the ELK stack, there are many other capable SIEM (security information and event management) tools and log analyzers: Splunk, SolarWinds, Oracle Audit Vault, Sumo Logic, Micro Focus, Trustwave, Datadog, and others.
For the audit log, I used Oracle’s MySQL Enterprise Audit plugin. For the above dashboard, I enabled specific filters to log connects and disconnects, connection errors, and table access. These filters can be found in a prior blog post (MySQL Audit Logging — How to Avoid Data Overload).
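The exact filter definitions are in that post; as a simplified sketch, a filter that logs connection events and table access for all users could look like the following (the filter name log_conn_table is just an example).

-- create an audit filter for connection events and table access, then assign it to all users
SELECT audit_log_filter_set_filter('log_conn_table', '{
  "filter": {
    "class": [
      { "name": "connection" },
      { "name": "table_access" }
    ]
  }
}');
SELECT audit_log_filter_set_user('%', 'log_conn_table');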
For auditing purposes, the general query log should be avoided. It lacks compression, automatic log rotation, and encryption, and it puts significant overhead on your system. In addition, it doesn’t collect sufficient information to meet regulatory security requirements.
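If you want to confirm that the general query log is disabled and that the audit plugin is the one doing the logging, a quick check from the MySQL client looks like this:

-- confirm the general query log is off and the audit_log plugin is active
SHOW GLOBAL VARIABLES LIKE 'general_log';
SELECT PLUGIN_NAME, PLUGIN_STATUS
  FROM INFORMATION_SCHEMA.PLUGINS
 WHERE PLUGIN_NAME = 'audit_log';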
If you’re interested in setting up a similar environment, I used Oracle Cloud and the Terraform scripts in this GitHub repo to deploy Elasticsearch and Kibana. The MySQL Enterprise Audit plugin can be downloaded with the enterprise server from Oracle eDelivery. For log processing, I changed the MySQL audit output to JSON (see log file formats). I posted the Logstash config file below for ingesting the file into Elasticsearch.
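As a sketch, the relevant my.cnf settings for loading the plugin and switching to JSON output look like the following; the file name and rotation size are examples, not requirements.

#my.cnf excerpt (sketch)
[mysqld]
plugin-load-add=audit_log.so          # load MySQL Enterprise Audit at startup
audit_log_format=JSON                 # write audit records as JSON (expected by the Logstash config below)
audit_log_file=audit.log              # written to the data directory by default
audit_log_rotate_on_size=1073741824   # example: rotate the log at 1GB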
I welcome any feedback on improvements or on alternative log analyzers. Also, please let us know if we can provide more filtering examples.
Logstash configuration:
#logstash config file
input {
  beats {
    port => 5044
  }
}

filter {
  # remove the trailing comma MySQL writes between JSON audit events
  mutate {
    gsub => [ "message", "\} \}\,", "} }" ]
  }

  # extract the JSON document from each beat message
  grok {
    match => {
      "message" => [ "%{JSON:payload_raw}" ]
    }
    pattern_definitions => {
      "JSON" => "{.*$"
    }
  }
  if "_grokparsefailure" in [tags] {
    drop { }
  }

  # parse the extracted JSON into individual fields
  json {
    source => "payload_raw"
  }
  # remove events with malformed json
  if "_jsonparsefailure" in [tags] {
    drop { }
  }

  # replace the ingest timestamp with the actual audit log timestamp
  date {
    match => [ "[timestamp]", "yyyy-MM-dd HH:mm:ss" ]
  }

  # resolve the client IP to a location for the map visualization
  geoip {
    source => "[login][ip]"
    target => "geoip"
  }

  # remove the temporary "payload_raw" field (and other fields)
  mutate {
    remove_field => [ "payload_raw", "message", "port", "host", "@version", "timestamp" ]
  }
}
output {
  elasticsearch {
    hosts => [ "[elasticsearch_server_IP]:9200" ]
  }
}
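Once Filebeat and Logstash are running, a quick way to confirm that audit events are reaching Elasticsearch is to query the index directly. This assumes the default logstash-* index pattern created by the elasticsearch output above; replace the host placeholder with your Elasticsearch server.

#check that audit connection events are arriving in Elasticsearch
curl 'http://elasticsearch_server_IP:9200/logstash-*/_search?q=class:connection&size=1&pretty'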