Alibaba Cloud ECS server memory stays high: dealing with a planted trojan

A project my company is responsible for runs on an Alibaba Cloud ECS server with 4 cores and 8 GB of RAM. Alibaba Cloud recently raised a security alert for it. After logging in to the Alibaba Cloud console, I deleted the offending files as the alert instructed. Strangely, though, the server's memory usage kept climbing, at one point exceeding 80%. At first I suspected a surge in users and did not take it too seriously.
However, the situation kept recurring for several days, and some of the scheduled task processes were being killed for no apparent reason, which finally caught my attention. Even then I did not treat it as serious: I simply restarted the server and restarted the scheduled tasks. The ECS memory usage dropped back below 10% and the scheduled tasks ran normally, but after barely a minute memory began climbing again and the scheduled tasks were killed once more.
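For reference, a quick way to confirm where the memory is actually going is with standard Linux tools (nothing specific to this incident), roughly:

    free -h                            # overall memory and swap usage
    ps aux --sort=-%mem | head -n 10   # top 10 processes by memory usage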
I logged in to the ECS remote console and checked the processes with the top command. Sure enough, there was a process consuming an abnormal amount of resources, around 200% CPU, so I killed it without hesitation. But after a short while the process reappeared, just under a different PID. By this point it was clear the server had been compromised and the machine I manage was being used as a "broiler" (a zombie host controlled by someone else).
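For readers who want the exact commands, a minimal inspection-and-kill sequence looks roughly like this (replace <PID> with the PID reported by top):

    top -c                     # '-c' shows full command lines
    ps -ef | grep <PID>        # map the PID to a command
    ls -l /proc/<PID>/exe      # path of the binary actually running
    ls -l /proc/<PID>/cwd      # its working directory
    kill -9 <PID>              # force-kill it (in my case it came straight back under a new PID)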

Run the command crontab -l to check the scheduled tasks:
(screenshot: output of crontab -l)
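Besides crontab -l, it is worth sweeping the other places where cron entries can hide; a rough checklist:

    crontab -l                        # current user's (here: root's) crontab
    ls /var/spool/cron/               # per-user crontab files (on Debian/Ubuntu: /var/spool/cron/crontabs)
    cat /etc/crontab
    ls /etc/cron.d/ /etc/cron.hourly/ /etc/cron.daily/ /etc/cron.weekly/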
/root/.systemd-service.sh is almost certainly the malicious script. Go to the /root directory and view it; the content is base64 encoded:
(screenshot: base64-encoded content of .systemd-service.sh)
After decoding it with an online base64 decoder and reading through the commands, it turns out the script works out of the directory /tmp/.X11-unix/.
(screenshot: decoded script contents)
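You do not need an online tool for this; base64 decoding can be done locally. A sketch (the string and file name below are placeholders, the real payload is only in the screenshot):

    # decode a base64 string copied out of the script
    echo 'BASE64_STRING_FROM_THE_SCRIPT' | base64 -d
    # or decode a whole file saved from it
    base64 -d payload.b64 > decoded.sh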
Enter this directory:
(screenshot: listing of /tmp/.X11-unix/)
Now look at the contents of these files one by one:
(screenshot: contents of the files in /tmp/.X11-unix/)
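The exact file names are only visible in the screenshot, so the commands below use a placeholder; the general idea is:

    ls -la /tmp/.X11-unix/
    file /tmp/.X11-unix/*            # sockets named X0, X1... are legitimate X11 sockets; scripts, ELF binaries and pid files are not
    cat /tmp/.X11-unix/<pid-file>    # in my case, small text files here held the PIDs of the two malicious processes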
From the screenshot it is easy to pick out the PIDs of the two abnormal processes, but killing them alone will not solve the problem completely.
To be safe, I also checked whether the /etc/hosts file was normal. Fortunately, it was.
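A quick way to spot anything odd in /etc/hosts:

    cat /etc/hosts
    grep -vE '^\s*#|localhost' /etc/hosts   # anything left over deserves a closer look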
Continuing the check in /etc/cron.d/, I found an abnormal file: 0systemd-service. Open it and have a look:
cat /etc/cron.d/0systemd-service
It references a script named systemd-service.sh in the /opt directory. So this is where you have been hiding!

I opened this nasty file and could not make sense of it at first glance; it was encoded as well, so off to decode it again.

With that, all the pieces of the problem were finally found:

There are two malicious process PIDs: 1847 and 4719.
Two malicious files: /root/.systemd-service.sh and /opt/systemd-service.sh.
The two scheduled tasks that keep relaunching them live in the root crontab and in /etc/cron.d/0systemd-service.

My final cleanup steps were (the corresponding commands are sketched after this list):
1. First clean up the root crontab
2. Delete /etc/cron.d/0systemd-service
3. Delete the two scripts: /root/.systemd-service.sh and /opt/systemd-service.sh
4. Kill the two processes 1847 and 4719
5. Finally, restart the server and check whether memory usage has come down (don't forget to restart your own scheduled tasks)
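Translated into commands, the cleanup looks roughly like this sketch (order matters: remove the cron entries and scripts before killing the processes, or they will simply respawn; the PIDs are from this incident and will differ on another machine):

    # 1. clean the malicious entry out of root's crontab (crontab -e, or crontab -r if you intend to rebuild it)
    crontab -e
    # 2. remove the dropped cron file
    rm -f /etc/cron.d/0systemd-service
    # 3. remove the two scripts
    rm -f /root/.systemd-service.sh /opt/systemd-service.sh
    # 4. kill the two malicious processes
    kill -9 1847 4719
    # 5. reboot and keep an eye on memory afterwards
    reboot
    free -h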

Is it really over? Honestly, this only fixes things on the surface. Take it as a reminder to go back and check how the server was compromised in the first place and patch that vulnerability. I hope this helps you all!

Origin blog.csdn.net/u010991531/article/details/114079824