ELSA: an enterprise log search and archive system

ELSA (Enterprise Log Search and Archive) is an open-source enterprise log search tool built on syslog-ng (a next-generation log collector, though most Linux distributions do not ship it by default), MySQL, and Sphinx. The combination supports full-text indexing, so you can search for any string across hundreds of millions of logs as easily as searching the web (provided your server hardware is up to it). The working-principle diagram of a single-node ELSA log collection system is as follows:

The architecture diagram above shows that ELSA is divided into three layers:

1. The log receiver, implemented by syslog-ng, which accepts logs from local sources, the network, and imported log files.
2. Log storage and indexing: storage is handled by the MySQL database, and indexing by Sphinx.
3. The web front end.
ELSA uses syslog-ng's pattern-db parser for efficient log normalization and the Sphinx full-text index for log search. The system's internal API aggregates query results and sends them to the client. The whole system runs asynchronously, so multiple queries can execute at once. The receiver, syslog-ng, does not normalize logs at ingestion time (contrast this with the OSSIM-Agent plug-ins), so the per-log regular-expression workload is small and syslog-ng can sustain a high receive rate. Most of the glue code consists of Perl scripts, and MySQL can insert on the order of 100K rows per second. Sphinx indexes newly inserted rows incrementally and rebuilds a permanent index every two hours. At peak efficiency the whole system can process about 100K logs per second.
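To make the ingestion side concrete, the sketch below sends a minimal RFC 3164-style syslog message over UDP, which is the kind of datagram the syslog-ng receiver accepts. This is a generic illustration, not ELSA-specific code; the host and port are assumptions (a syslog receiver conventionally listens on UDP port 514).

```python
import socket

def send_syslog(message, host="127.0.0.1", port=514,
                facility=16, severity=6, tag="myapp"):
    """Send a minimal RFC 3164-style syslog message over UDP.

    facility 16 = local0, severity 6 = informational;
    the PRI value is facility * 8 + severity.
    """
    pri = facility * 8 + severity
    packet = "<%d>%s: %s" % (pri, tag, message)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet.encode("utf-8"), (host, port))
    sock.close()
    return packet

# Hypothetical usage against an ELSA node:
# send_syslog("user login ok", host="elsa.example.com")
```

For local0.info the PRI works out to 16*8 + 6 = 134, so the wire format is `<134>myapp: user login ok`.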

If you have hands-on ELK experience, you can think of ELSA as a simplified ELK: simpler in structure and fast. Installation is straightforward and not covered here (interested readers can test on a Debian- or Ubuntu-based OS; the installation tarball is available on the ELSA Google Code homepage), so let's get straight to the topic.

1. Collect Windows server logs

We can use the Eventlog-to-Syslog tool to forward Windows event logs to the ELSA server.

Method: copy evtsys.exe and evtsys.dll into the system directory and run the following command:

evtsys.exe -i -h <ELSA server IP>

The Windows logs will then be sent to your ELSA server over the syslog protocol, where they are parsed as the "WINDOWS" log type.

2. Collect logs from Linux systems and related services

Linux/Unix systems run an rsyslog or syslogd process; just add the following line (all facilities, all priorities) to its configuration file:

*.* @<ELSA server IP>
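For reference, a fuller sketch of the forwarding entry as it would look in rsyslog (192.168.1.10 is a hypothetical address standing in for your ELSA server; a single @ selects UDP, a double @@ selects TCP):

```
# /etc/rsyslog.conf -- forward all facilities and priorities to ELSA
*.* @192.168.1.10:514      # UDP on the standard syslog port
#*.* @@192.168.1.10:514    # or TCP, if the receiver accepts it
```

After editing the file, restart the daemon (e.g. service rsyslog restart) so the change takes effect.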

3. Configuration files

The main node configuration file of ELSA is /etc/elsa_node.conf:

{

    // Local database connection information

    "database" : {
            "db": "syslog",
            "data_db": "syslog_data",
            "dsn" : "dbi:mysql:database=syslog",
            "username" : "elsa",
            "password" : "biglog"
    },

    // The directory for the system's coordination lock files
    "lockfile_dir": "/opt/elsa/node/tmp/locks",

    "num_indexes": 200,

    // If you want to archive logs, keep this section
    "archive": {

            # Uncomment to establish a retention period in days for archived logs
            #"days": 90,
            "percentage": 33,
            "table_size": 10000000
    },
    // Log size limit plus index size. Set this to 90-95% of total disk space.
    "log_size_limit" : 8000000000,
    "sphinx" : {

            "indexer": "/usr/bin/indexer",

            "allowed_temp_percent" : 40,

            "allowed_mem_percent": 25
            "host" : "127.0.0.1",
            "port" : 9312,
    "mysql_port" : 9306,

            "config_file" : "/etc/sphinxsearch/sphinx.conf",

            "index_path" : "/nsm/elsa/data/sphinx",

            "index_interval" : 60,

            "perm_index_size" : 10000000,
            # Where the optional stopwords file is
            "stopwords": {
                    "file": "/etc/sphinxsearch/sphinx_stopwords.txt",
                    "top_n": 0,
                    "interval": 0,
                    "whitelist": []
            },

            "pid_file": "/var/run/sphinxsearch/searchd.pid"
    },

    "logdir" : "/nsm/elsa/data/elsa/log",
"mysql_dir": "/nsm/elsa/data/elsa/mysql",

    "num_log_readers" : 1,
    # Debug trace level
    "debug_level" : "TRACE",

    "buffer_dir" : "/nsm/elsa/data/elsa/tmp/buffers/",

    "log_parse_errors": 1,

    "stats" : {
            "retention_days": 365
    },

    "min_expected_hosts": 2

}
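Note that elsa_node.conf is JSON extended with #- and //-style comment lines, which ELSA removes before parsing. A quick way to sanity-check your edits is to strip the comments yourself and run the result through a JSON parser, as in this sketch (the comment-stripping regex is a simplification: it only handles comments that start a line, and would mangle a value containing a leading # or //):

```python
import json
import re

def check_elsa_conf(text):
    """Strip #- and //-style comment lines, then parse as JSON.

    Returns the parsed dict, or raises ValueError on bad syntax
    (a missing comma, an unbalanced brace, and so on).
    """
    stripped = re.sub(r"^\s*(#|//).*$", "", text, flags=re.MULTILINE)
    return json.loads(stripped)

# A tiny sample in the same style as elsa_node.conf:
sample = """
{
    # Local database connection information
    "database": {"db": "syslog", "username": "elsa"},
    // The directory of the system coordination lock
    "lockfile_dir": "/opt/elsa/node/tmp/locks"
}
"""
conf = check_elsa_conf(sample)
```

This catches exactly the kind of error shown earlier in this section, such as a missing comma between two keys.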
ELSA's web configuration file is /etc/elsa_web.conf:

{
# Define API keys
"apikeys": {
"elsa": "b7292980d34c99e2581d36681831667b"
},
"version": {
"Author": "mcholste",
"Date": "2014-07-17 15:12:58 -0700 (Thu, 17 Jul 2014)",
"Rev": "1205",
"Sphinx": "Sphinx 2.1.9"
},
"peers": {
"127.0.0.1": {
"url": "http://127.0.0.1:3154/",
"username": "elsa",
"apikey": "b7292980d34c99e2581d36681831667b"
}
},
"admin_email_address": "root@localhost",
"connectors": {
},
"dashboards": {
},
"datasources": {
},
"transforms": {
"whois": {
"known_subnets": {
"10.0.0.0": {
"end": "10.255.255.255",
"org": "MyOrg"
},
"192.168.0.0": {
"end": "192.168.255.255",
"org": "MyOrg"
},
"172.16.0.0": {
"end": "172.31.255.255",
"org": "MyOrg"
}
},
"known_orgs": {
"MyOrg": {
"name": "MyOrg",
"org": "MyOrg",
"descr": "MyOrg",
"cc": "US",
"country": "United States",
"city": "Anytown",
"state": "Somestate"
}
}
},
"parse": {
"tld": [
{
"field": "domain",
"pattern": "\.([a-zA-Z]+)$",
"extractions": [
"tld"
]
},
{
"field": "site",
"pattern": "\.([a-zA-Z]+)$",
"extractions": [
"tld"
]
},
{
"field": "uri",
"pattern": "\.([a-zA-Z]+)(:|/|$)",
"extractions": [
"tld"
]
}
],
"url": [
{
"field": "uri",
"pattern": "(?:(?<proto>[a-zA-Z]+)://)?(?:(?<username>[^/]+):(?<password>[^/]+)@)?(?<domain>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[^/]+\.(?<tld>[a-zA-Z]+))(?::(?<port>\d+))?(?<resource>/[^?])?(?:\?(?<query_string>.))?$",
"extractions": [
"proto",
"username",
"password",
"domain",
"tld",
"port",
"resource",
"querystring"
]
}
],
"mimetype": [
{
"field": "msg",
"pattern": "[\"'\(\[\s\|;:](?<mime>(?<type>application|audio|chemical|image|message|model|multipart|text|video)/(?<subtype>[\w-
]+))[\"'\)\]\s\|;:]",
"extractions": [
"mime",
"type",
"subtype"
]
}
]
}
},
"plugins": {
"SNORT": "Info::Snort",
"WINDOWS": "Info::Windows",
"URL": "Info::Url",
"BRO_NOTICE": "Info::Bro"
},
"info": {
"snort": {
"url_templates": [
"http://doc.emergingthreats.net/bin/view/Main/%d"
]
},
"url": {
"url_templates": [
"http://whois.domaintools.com/%s"
]
},
"windows": {
"url_templates": [
"http://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventid=%d"
]
}
},
"max_concurrent_archive_queries": 4,
"schedule_interval": 60,
"node_info_cache_timeout": 60,
"email": {
"display_address": "[email protected]",
"base_url": "http://elsa/",
"subject": "ELSA Alert"
},
"link_key": "secret",
"yui": {
"local": "inc"
},
"data_db": {
"db": "syslog",
"username": "elsa",
"password": "biglog"
},
"meta_db": {
"dsn": "dbi:mysql:database=elsa_web",
"username": "elsa",
"password": "biglog"
},
"auth": {
"method": "security_onion"
},
"admin_groups": [
"system",
"admin"
],
"auth_db": {
"dsn": "dbi:mysql:database=securityonion_db",
"username": "root",
"password": "",
"auth_statement": "SELECT PASSWORD(password) FROM user_info WHERE username=?",
"email_statement": "SELECT email FROM user_info WHERE username=?"
},
"peer_id_multiplier": 1000000000000,
"query_timeout": 55,
"pcap_url": "/capme",
"logdir": "/nsm/elsa/data/elsa/log",
"buffer_dir": "/nsm/elsa/data/elsa/tmp/buffers",
"debug_level": "TRACE",
"default_start_time_offset": 2,
"livetail": {
"poll_interval": 5,
"time_limit": 3600
}
}
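The "apikeys" section above is what enables ELSA's HTTP API. As a sketch of how a client might build the Authorization header, assuming the scheme described in ELSA's API documentation (an `ApiKey user:epoch:hash` header, where the hash is the SHA-512 hex digest of the epoch timestamp concatenated with the API key; verify against your version's docs):

```python
import hashlib
import time

def elsa_auth_header(username, apikey, epoch=None):
    """Build an Authorization header value for ELSA's HTTP API.

    Assumption: the hash is the SHA-512 hex digest of the epoch
    timestamp concatenated with the API key, and the header has
    the form 'ApiKey user:epoch:hash'.
    """
    if epoch is None:
        epoch = int(time.time())
    digest = hashlib.sha512(("%d%s" % (epoch, apikey)).encode()).hexdigest()
    return "ApiKey %s:%d:%s" % (username, epoch, digest)

# Hypothetical usage with the "elsa" key from elsa_web.conf above:
# headers = {"Authorization": elsa_auth_header(
#     "elsa", "b7292980d34c99e2581d36681831667b")}
```

The matching "apikey" value must also appear in the "peers" section for node-to-node queries, as shown in the configuration above.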

4. Typical application scenarios (screenshots)

The screenshots below highlight several key functions of ELSA.

1. Top N connection counts

2. Dynamic dashboard display

The dashboard dynamically displays the number of logs processed per unit of time, query volume, the addresses of collecting hosts, log types, and other parameters.

3. Query log details

In the Field Summary we find that these logs have 15 fields (host IP, process name, source address, source port, destination address, destination port, protocol type, input bytes, service type, duration, output bytes, input packet count, output packet count, country code, etc.). Each field is followed by its number of occurrences, and fields are separated by the "|" symbol.
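As a toy illustration of that layout, a "|"-separated field summary can be split apart as below. The field names and counts here are made up for the example, not taken from a real ELSA screen:

```python
def parse_field_summary(line):
    """Parse a '|'-separated field summary such as 'srcip (120)|dstport (45)'.

    Returns a list of (field_name, count) tuples.
    """
    entries = []
    for part in line.split("|"):
        part = part.strip()
        if not part:
            continue
        # Split on the last space: 'srcip (120)' -> ('srcip', '(120)')
        name, _, count = part.rpartition(" ")
        entries.append((name, int(count.strip("()"))))
    return entries

# Hypothetical summary line in the format described above:
summary = "srcip (120)|dstip (120)|dstport (45)|proto (3)"
fields = parse_field_summary(summary)
```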

4. Query OSSEC log information

5. Alert logs from a detected scan of MySQL port 3306

6. Port-scan alert log information

7. Ping alert log information

For related topics on log analysis, see the best-selling book "Unix/Linux Network Log Analysis and Traffic Monitoring".


Origin blog.51cto.com/chenguang/2562538