Thoughts on Attack Source Tracing

Table of Contents

Background

Ideas

Web System

Host System

Other Commonly Used Systems

Summary


Background

Consider a topical analogy from the current environment: during an epidemic, once a virus is detected, various screening methods are used to trace it back to its source so that it can be controlled or eliminated.

Similarly, in network warfare, attack source tracing is an important part of the post-incident response to security events. By analyzing the compromised assets and intranet traffic, the attacker's path and methods can be reconstructed to a certain extent, which helps to fix vulnerabilities and risks and avoid repeat incidents. Knowledge of the attack can be converted into a defensive advantage: if you can be proactive and predictive, you can better control the consequences.

 

Ideas

In the tracing process, beyond the specific technical means, you need to settle on an overall approach: analyze the anomaly as a whole and propose several possible explanations based on the actual environment. As the saying goes, forewarned is forearmed. Frequently occurring problems and likely points of failure should be analyzed in depth for continuous design and improvement, along with the deployment of detection equipment. Examples of anomalies that routinely appear and are easily noticed by users include:

  1. Web pages have been tampered with, hidden "black links" have been inserted, web files are missing, etc.

  2. The database has been tampered with, abnormal operation of the web system affects availability, web user passwords have been changed, etc.

  3. The host responds abnormally, files are encrypted, unknown users appear on the host system, etc.

  4. A large volume of abnormal traffic appears at the host network layer.

Depending on the situation at the user's site, some information gathering is usually needed first, for example: the time of the anomaly (very important); the main business running on the affected server; a rough network topology (is the server in the DMZ? is it reachable from the public internet?); which ports are open; whether patches have been applied; which web technology stack is used; whether there have been recent changes; and whether any security devices are deployed.

Based on the information collected, several hypotheses can usually be formed. There are many kinds of vulnerabilities, and some are not patched in time, so consider which risks apply and which strategies to adopt. For example: if a publicly accessible web server running a common framework is found with a black link, a command-execution vulnerability can be initially suspected; if a public-facing server has no patches installed, no firewall protection, and an administrator password of P@ssword, there is a high probability it was successfully brute-forced. The subsequent work is mainly to collect data that proves or disproves these conjectures.

 

Web System

Most web security incidents leave traces in the web logs. After all, not every attacker remembers (or manages) to clear them.

The log locations for several common middleware are as follows:

  1. Apache: the log path is generally configured in httpd.conf; by default it is often /var/log/httpd (or /var/log/apache2 on Debian-based systems).

  2. IIS: logs are stored by default under the LogFiles folder in the system directory (%windir%\System32\LogFiles for IIS 6; IIS 7 and later default to %SystemDrive%\inetpub\logs\LogFiles).

  3. Tomcat: logs are generally under the logs folder in the Tomcat installation directory.

  4. Nginx: the log path is generally configured in nginx.conf or a vhost conf file; the default is commonly /var/log/nginx.

Logs are generally named by date, which makes subsequent audits and analysis by security personnel easier.

"A workman who wants to do his job well must first sharpen his tools." Log volumes are generally large, and there are plenty of log-analysis tools online, but I mainly use Notepad++ or Sublime Text, guided by the information collected earlier, such as the incident time point. Analyzing the request logs just before and after that time point will usually surface something abnormal.
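As a minimal sketch of this time-window triage, the snippet below keeps only requests within a window around the incident time; the log lines, timestamps, and window size are fabricated for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical sample lines in Apache combined-log style (illustrative only).
SAMPLE_LOG = [
    '10.0.0.5 - - [12/Jan/2021:10:01:22 +0800] "GET /index.php HTTP/1.1" 200 512',
    '10.0.0.9 - - [12/Jan/2021:10:02:10 +0800] "POST /upload.php HTTP/1.1" 200 43',
    '10.0.0.9 - - [12/Jan/2021:11:30:00 +0800] "GET /shell.php HTTP/1.1" 200 88',
]

def entries_around(lines, incident, window_minutes=30):
    """Keep only requests within +/- window_minutes of the incident time."""
    delta = timedelta(minutes=window_minutes)
    hits = []
    for line in lines:
        # The timestamp sits between the first '[' and the first ']'.
        stamp = line.split('[', 1)[1].split(']', 1)[0]
        ts = datetime.strptime(stamp, '%d/%b/%Y:%H:%M:%S %z')
        if abs(ts - incident) <= delta:
            hits.append(line)
    return hits

incident = datetime.strptime('12/Jan/2021:10:00:00 +0800', '%d/%b/%Y:%H:%M:%S %z')
for line in entries_around(SAMPLE_LOG, incident):
    print(line)
```

In practice you would read the lines from the real access log file rather than a list, and widen the window gradually if nothing turns up.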

To make suspicious entries easier to spot, there are also many open-source projects on GitHub that specifically search logs for security-related attacks or produce statistics. Because scanners are so common nowadays, a single pass often turns up a lot of ineffective attack noise, which makes triage more tedious.

One small tool worth recommending: web-log-parser, an open-source web log analysis tool written in Python with flexible log-format configuration. There are many excellent projects of this kind; if none of them fits, just define your own rules and build one.

The link is as follows: https://github.com/JeffXue/web-log-parser

When processing the visits, information such as web page changes, upload paths, and source IPs can be collected more systematically. By recognizing certain critical paths and combining them with other evidence, the entry point can often be located.
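A hedged sketch of such critical-path screening follows; the patterns are common red flags chosen for illustration, not a complete or authoritative ruleset:

```python
import re

# Illustrative indicators only; tune these to the actual environment.
SUSPICIOUS = [
    re.compile(r'\.(php|jsp|asp|aspx)\b', re.I),  # script files in unexpected places
    re.compile(r'/uploads?/', re.I),              # upload directories
    re.compile(r'(\.\./)+'),                      # path traversal attempts
    re.compile(r'(cmd|exec|eval)=', re.I),        # command-execution parameters
]

def flag_requests(paths):
    """Return (path, matched_pattern) pairs worth a manual look."""
    flagged = []
    for path in paths:
        for pat in SUSPICIOUS:
            if pat.search(path):
                flagged.append((path, pat.pattern))
                break
    return flagged

requests = ['/index.html', '/uploads/avatar.php', '/view?cmd=whoami']
for path, why in flag_requests(requests):
    print(path, '->', why)
```

Feeding the request paths extracted from the access log through a filter like this shrinks the haystack before manual review.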

Examples of some common entry points are as follows:

  1. Known CMS exploits, such as command execution in Discuz!, EmpireCMS, or Spring-based systems, permission bypasses, logic flaws, etc. Because these products are widespread and many exploits are public, the affected surface is broad.

  2. Editor upload vulnerabilities, such as the well-known FCKeditor, UEditor, etc.

  3. Lax filtering in upload features, such as upload vulnerabilities caused by insufficient filtering in an avatar-upload interface.

  4. Weak web-system passwords: the admin account, the Tomcat manager user, Axis2 users, Openfire, etc.

Web systems are also prone to webshells, which are often found in upload directories. A page that is obviously JSP but contains a PHP one-liner (or vice versa) generally deserves close attention. It is recommended to scan the web system's directory with D-Shield.

The upload time, file creation time, and file modification time reported for a scanned webshell are often quite accurate; attackers rarely change these timestamps, so they are relatively easy to correlate with the logs.
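The timestamp cross-check described above can be sketched as follows; the directory and file name are illustrative, and the demo writes into a throwaway temp directory so the snippet is self-contained:

```python
import os
import tempfile
import time

def file_times(directory):
    """Yield (path, mtime) for every regular file under directory,
    so the times can be matched against upload entries in the access log."""
    for root, _dirs, files in os.walk(directory):
        for name in files:
            full = os.path.join(root, name)
            st = os.stat(full)
            stamp = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(st.st_mtime))
            yield full, stamp

# Demo against a temporary directory standing in for the web root.
demo = tempfile.mkdtemp()
open(os.path.join(demo, 'shell.php'), 'w').close()
for path, mtime in file_times(demo):
    print(path, mtime)
```

In a real investigation you would point this at the upload directory and compare each mtime against POST requests around the same moment in the web log.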

 

Host System

I used to find it almost funny that many worms spread only through brute-force cracking and vulnerabilities such as MS17-010, and assumed their reach would be limited; then I discovered that this simple, crude approach is in fact the most effective.

The Linux platform has comparatively higher baseline security. Several common malware families such as XorDDoS, DDG, and the XNote series generally spread by brute-force cracking, so brute force is also an important factor in the tracing process.

Examples of some commonly used logs are as follows:

/var/log/auth.log: system authorization information, including user logins and the privilege mechanisms used
/var/log/lastlog: each user's last login; view with the lastlog command
/var/log/secure: the accounts and passwords submitted to most applications, and whether logins succeeded
/var/log/cron: whether crontab jobs were executed correctly

Flexible use of the grep, sed, sort, and awk commands, with attention to special keywords such as "Accepted", "Failed password", and "invalid", will generally turn up clues easily.
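For readers who prefer scripting over command pipelines, the same keyword filtering can be sketched in Python; the sshd-style lines below are fabricated samples:

```python
import re
from collections import Counter

# Hypothetical /var/log/secure-style entries (format is illustrative).
AUTH_LINES = [
    'Jan 12 10:00:01 host sshd[1]: Failed password for root from 203.0.113.7 port 4000 ssh2',
    'Jan 12 10:00:02 host sshd[2]: Failed password for root from 203.0.113.7 port 4001 ssh2',
    'Jan 12 10:05:00 host sshd[3]: Accepted password for root from 203.0.113.7 port 4002 ssh2',
    'Jan 12 10:06:00 host sshd[4]: Failed password for invalid user test from 198.51.100.9 port 22 ssh2',
]

FAILED = re.compile(r'Failed password .* from (\S+)')
ACCEPTED = re.compile(r'Accepted \w+ for (\S+) from (\S+)')

# Count failed attempts per source IP: a brute-force fingerprint.
failures = Counter(m.group(1) for line in AUTH_LINES if (m := FAILED.search(line)))
print('Failed attempts per source IP:', dict(failures))

# Flag the successful logins, especially any that follow a burst of failures.
for line in AUTH_LINES:
    m = ACCEPTED.search(line)
    if m:
        print('Successful login:', m.group(1), 'from', m.group(2))
```

A burst of failures from one IP followed shortly by an "Accepted" entry from the same IP is the classic successful-brute-force pattern.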

Attackers often forget to clear the logs, which makes reviewing the details very convenient; a single history command can lay the hacker's operations out at a glance. Of course, some scripts clear the logs when they finish executing, which raises the difficulty, but a cleared log is itself an anomaly worth noting. In that case, focus on whatever logs remain, or check whether security devices at the network level captured traffic that can be analyzed instead.

Linux's "everything is a file" philosophy and open-source nature cut both ways during tracing. Rootkits are the most troublesome case: once the binaries of commonly used system commands have been modified or replaced, the system can no longer be trusted at all, and this is often hard to detect, so it places higher technical demands on the security personnel doing the tracing.

Tracing on the Windows platform is relatively easier and relies mainly on the Windows event logs. Generally, open Event Viewer with the eventvwr command. By default the logs are divided into three categories, Application, Security, and System, stored as event log files in the %systemroot%\system32\config directory. Sensible use of filters helps investigate the logs more efficiently: for example, to screen for a suspected brute-force intrusion, filter for Event ID 4625 (audit failure) and then analyze the time, source IP address, logon type, and request frequency to determine whether a brute-force attack came from the intranet. Whether the activity is malicious can be judged from the system's own logs and the state of running processes, and the LogonType value shows which protocol the successful brute force used; the relevant values are as follows:

local WINDOWS_RDP_INTERACTIVE = "2"
local WINDOWS_RDP_UNLOCK = "7"
local WINDOWS_RDP_REMOTEINTERACTIVE = "10"
local WINDOWS_SMB_NETWORK = "3"
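The Event ID 4625 screening described above can be sketched as follows; the record layout and field names are assumptions about an exported event log, not an actual Event Viewer API:

```python
from collections import Counter

# Illustrative records as they might look after export; field names are assumed.
EVENTS = [
    {'EventID': 4625, 'IpAddress': '192.168.1.50', 'LogonType': 3},
    {'EventID': 4625, 'IpAddress': '192.168.1.50', 'LogonType': 3},
    {'EventID': 4625, 'IpAddress': '192.168.1.50', 'LogonType': 3},
    {'EventID': 4624, 'IpAddress': '192.168.1.50', 'LogonType': 10},
]

# LogonType values matching the list above.
LOGON_TYPES = {2: 'interactive', 3: 'network (SMB)', 7: 'unlock', 10: 'RemoteInteractive (RDP)'}

def brute_force_sources(events, threshold=3):
    """Source IPs with at least `threshold` failed logons (Event ID 4625)."""
    fails = Counter(e['IpAddress'] for e in events if e['EventID'] == 4625)
    return {ip: n for ip, n in fails.items() if n >= threshold}

for ip, n in brute_force_sources(EVENTS).items():
    print('suspected brute force from', ip, 'with', n, 'failures')

# A following 4624 (successful logon) shows which protocol finally worked.
for e in EVENTS:
    if e['EventID'] == 4624:
        print('success via', LOGON_TYPES.get(e['LogonType'], 'unknown'))
```

The threshold and time bucketing would of course need tuning against real event volumes.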

Patches matter a great deal on Windows: if key patches are missing, attacks succeed easily. Focus on well-known security patches such as MS17-010, MS08-067, and MS16-032, which correspond to exploit kits commonly used in intranet penetration. The patches installed on the current system can be listed with the systeminfo command. Windows also produces extensive domain-controller security logs, which are too large a topic to expand on here. The main goal of tracing is to restore the attack path: through the Windows logs you can understand the attacker's kill chain and give the user an explanation.
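Checking systeminfo output for critical hotfixes can be sketched like this; note that the KB numbers below are illustrative placeholders, since the KB that delivers a given bulletin (e.g. MS17-010) differs per OS build:

```python
# Map of critical hotfix IDs to the bulletin they address.
# These KB-to-bulletin pairings are EXAMPLES, not an authoritative mapping.
CRITICAL_KBS = {
    'KB4012212': 'MS17-010 (example mapping)',
    'KB958644': 'MS08-067 (example mapping)',
}

# A trimmed, fabricated excerpt of `systeminfo` output.
SYSTEMINFO_OUTPUT = """\
Hotfix(s): 2 Hotfix(s) Installed.
           [01]: KB958644
           [02]: KB2999226
"""

def missing_patches(systeminfo_text, critical=CRITICAL_KBS):
    """Return the critical KBs that do not appear in the systeminfo dump."""
    installed = {tok for tok in systeminfo_text.split() if tok.startswith('KB')}
    return {kb: name for kb, name in critical.items() if kb not in installed}

for kb, name in missing_patches(SYSTEMINFO_OUTPUT).items():
    print('missing', kb, '->', name)
```

In practice, run systeminfo on the host, capture its output, and feed it to a checker like this with a KB list verified against Microsoft's advisories for that exact OS version.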

Other Commonly Used Systems

Database systems are another hard-hit area for attacker entry points. For example, MSSQL Server often runs with high privileges after installation in a Windows environment, and users frequently fail to harden the database afterwards. Despite the principle of separating the web server from the database, many MSSQL instances are directly reachable from the public internet; access-control policies are weak, and weak passwords are an especially prominent problem.

For example, MSSQL logs brute-force attempts against the sa user, including the client's IP address. If the password is not strong enough and no lockout policy is configured, the account is easy to compromise. After a successful brute force, the attacker can enable xp_cmdshell to execute system commands with high privileges; with a Windows shell in hand, he can do almost anything.

On the Linux platform, Redis is similarly popular, and unauthorized access on default installations has been a widespread problem for years. Recently prevalent malware such as the DDG and WatchDog mining families mainly exploits unauthorized Redis access to execute commands, pull mining programs from the internet, and write SSH public keys. Whenever you see port 6379 open locally, this problem deserves attention: ask the user how the instance is used and check whether it is still running the default configuration.

Other common entry points include MySQL brute force and privilege escalation, unauthorized-access vulnerabilities, phishing emails, backdoored cracked software, malicious Office macros, Office code-execution vulnerabilities, mailbox weaknesses, VPN configuration defects, and so on. The attacker's actual entry point must be determined in light of the user's specific situation.
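The port-6379 check mentioned above can be sketched with a raw-socket probe; a +PONG reply to an unauthenticated PING means anyone who can reach the port can run Redis commands:

```python
import socket

def redis_unauthenticated(host, port=6379, timeout=3):
    """Return True if the Redis instance answers PING without authentication."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b'PING\r\n')  # inline-command form of the Redis protocol
            return s.recv(64).startswith(b'+PONG')
    except OSError:
        return False  # closed port, refused connection, or timeout

# Probe the local instance; only run this against hosts you are authorized to test.
print('unauthenticated access:', redis_unauthenticated('127.0.0.1'))
```

If the probe returns True, check requirepass and bind settings in redis.conf, and look for injected SSH keys or cron entries as described above.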

 

Summary

It is often said that the essence of security is a contest between people. In recent years, the rise of honeypot products has shifted the defender from a passive to an active position. Honeypot technology is essentially deception: by deploying hosts, network services, or information as decoys, it induces attackers to attack them, so that the attack behavior can be captured and analyzed, the attacker's tools and methods understood, and the attack intent and motivation inferred. This gives defenders a clear picture of the security threats they face and lets them strengthen the protection of real systems through technical and management measures. A well-designed set of defenses should consume as much of the attacker's limited resources as possible, forcing the attacker to expose a real IP, so that the business can be defended and the attacker's identity traced. From an offense-and-defense perspective, think through as many of the attacker's possible approaches as you can, eliminating the defender's information-asymmetry disadvantage and turning information asymmetry into an advantage over the attacker. Frequently used techniques, vulnerabilities, and attack methods should then be verified with data, rather than focusing only on known vulnerabilities while letting other problems slide. If you can be proactive and predictive, you can better control the consequences.


Origin blog.csdn.net/weixin_43650289/article/details/112982616