Top 10 Mistakes That Hinder Fast and Effective Cybercrime Investigations

During the night, money was withdrawn from the company's account. In the morning, panic sets in, and panic leads to more problems. For example, IT departments reinstall compromised systems, either from scratch or from backups. In the first case, the intruder's traces are wiped away, leaving the visiting incident investigation team facing a long, painstaking search for artifacts on other systems. In the second case, there is a risk of restoring an image that was already compromised. In this post, we discuss the main pitfalls that hinder a competent and quick response to hackers.

Mistake #1 - Lack of a Response Plan

A company's money was stolen. All it had was antivirus software. Management wouldn't let us shut down servers and accounts because they didn't know what impact that would have on the business. We spent days figuring out who owned the hacked computers and what role they played in the company, while the hackers could have been withdrawing money from ATMs or stealing data. Beyond the response itself, there are a host of organizational issues. It's a different story when a company has logging, basic asset management, and an understanding of its business processes. That allows employees to spot hackers on the network mid-attack, and allows us, if an incident does occur, to deal with the problem immediately without wasting valuable time. This approach greatly improves the effectiveness of incident investigations and can prevent serious repercussions.

The purpose of the response plan is to eliminate surprises when an incident occurs and to lay out in advance the measures that minimize losses. It is important to assess whether the technical means in place actually collect and store the data required for incident investigation. It is also necessary to understand whether internal teams have the expertise to investigate incidents; if not, external experts should be engaged.

The response plan also specifies the roles of those responsible within the company in the event of an incident. The IT department collects data related to the incident. Management, legal, PR, or all of them together are responsible for dealing with the outside world. They need to know which events require notification (customer data stolen, money withdrawn, etc.) and whom to notify (customers, regulators, law enforcement).

In certain sectors (banking, critical infrastructure), information security incidents must be reported to regulators. Other companies have to contact their insurers, which also needs to be spelled out in the plan: delays may cost victims the chance to file an insurance claim. All of this should be discussed with an attorney and an incident investigator beforehand.

To avoid mobilizing half the company for every detected virus, incidents should be categorized by threat level and by the importance of the affected nodes. For example, spyware found on an accountant's computer is a top-priority alarm. The checklist should cover everything an attacker could do from that machine: misappropriation of funds, credential theft, logging into remote banking, installation of hidden remote-administration tools, copying of the user's electronic signature keys, and so on. What follows is a full investigation to understand how far the hacker has gotten, which in turn determines what countermeasures are possible at each stage. In the accountant's case, these could be: reissuing keys for the remote banking system, blocking account transactions, replacing the accountant's computer with a known clean one, or blocking network access. Similar plans should exist for the workstations of CEOs, senior managers, rank-and-file employees, and other asset classes.
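The triage-by-asset-class idea above can be sketched as a small lookup structure. The severity levels, asset classes, and response steps here are illustrative assumptions, not a real playbook; an actual one would be agreed with business, legal, and security teams in advance.

```python
# A minimal sketch of incident triage by asset class.
# Severity 1 is the highest priority (direct access to money).
PLAYBOOK = {
    "accountant_workstation": {
        "severity": 1,
        "steps": [
            "reissue remote banking keys",
            "block account transactions",
            "swap in a known-clean computer",
            "block network access for the host",
        ],
    },
    "rank_and_file_workstation": {
        "severity": 3,
        "steps": ["isolate host", "collect triage image", "reset credentials"],
    },
}

def response_steps(asset_class):
    """Return the pre-agreed steps for an asset class; unknown
    assets get the most cautious treatment: escalate first."""
    entry = PLAYBOOK.get(asset_class)
    return entry["steps"] if entry else ["escalate: unclassified asset"]
```

Even a structure this simple removes the mid-incident debate about what may be done with a given machine: the answer was written down ahead of time.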

Investigators who spot a system with suspicious activity often spend days or even weeks figuring out where the computer physically is, who owns it, what role it plays in the business process, and whether it can be disabled or have data collected from it. It is the job of the IT and IS teams to clarify all of this up front. Sometimes a system sits in the accounting department and no one will take the workstation offline because payments run through it and work would stop. For critical systems without which the company cannot function, the IT department should have spare computers ready. Often they are not available, which delays the investigation, and during that time attackers can withdraw funds or steal data.

The plan's assumptions need to be checked regularly through penetration testing. Suppose the testers infiltrate the accountant's computer: we look at the threats they could carry out and write them into the plan, and then, when a real incident occurs, we check whether the intruder carried out those threats. You can also rehearse a simulated attack scenario in a tabletop exercise. Specify a plausible scenario: a computer is hacked, a ransomware infection occurs, or money is stolen. The next step is to test the response plan itself: were all the system owners located, was the sequence of actions for the asset worked out, were all the right people involved in the process? It plays out like real life, but this approach avoids serious consequences and catches weaknesses early, while they can still be fixed.

Step-by-step instructions may exist for known incident types, such as phishing attacks or ransomware infections. However, when suspicious activity cannot be classified precisely, you generally need to know:

a) where in the network the affected asset is,

b) what can be done with that asset,

c) who is responsible for it,

d) the communication plans within the company and with the outside world,

e) how to isolate the threat.

The importance of planning cannot be overstated: incidents always test the strength of a company, and those that are unprepared pay a high price, up to and including loss of the business. The reality is that no business today is completely impenetrable, so incidents will happen to everyone sooner or later, and you had better be prepared. A well-thought-out plan turns chaos and crisis into a clear and precise algorithm of action.

Mistake #2 - Incidents That Go Uninvestigated

A company reinstalled the operating system on an infected computer, ticked the "incident closed" box, and a week later lost a large sum of money from its account. This happens quite often: you cannot limit yourself to half measures. You need to understand how the attackers infiltrated the network, reconstruct the timeline of events, and determine measures to contain and neutralize the threat. Otherwise, infected nodes may remain through which the attack will continue.

Mistake #3 - Lack of Event Collection Infrastructure

A corporate network may have been infiltrated long ago without leaving obvious traces. Like any operating system, Windows logs events, but it stores the data locally and for a limited time, sometimes only until the system is rebooted. The situation is even worse with network devices, which usually have very little memory for storing events. This can make it impossible to establish whether a particular computer connected to a malicious server three months ago.

In our experience, only one or two out of ten companies have an event collection infrastructure, and the SIEM is often just for show. No one inside these companies checks that the SIEM collects data in the correct format: the integrators implemented it and left it at that. Data may be stored for a day or a week and never used for anything.

IT teams already use log collection and monitoring systems in their day-to-day work. Such systems are also useful in investigations.

The minimum requirement for event collection is to gather data from operating systems and network devices (firewalls, etc.) and store it for at least one year. This makes it possible to reconstruct which nodes connected where.

Storing this information is also useful for detecting incidents based on newly obtained intelligence. With retrospective analysis tools, you can discover whether the company was attacked in the past. This allows action to be taken now, rather than a year from now when sensitive information suddenly surfaces on the dark web at the most inopportune moment, such as just before the company goes public.
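The retrospective analysis described above can be sketched in a few lines: when a new indicator of compromise arrives, archived events are rescanned for it. The log format and the indicator list here are illustrative assumptions; a real deployment would query a SIEM or log archive, not a Python list.

```python
# A minimal sketch of retrospective IOC matching over archived events.
KNOWN_BAD = {"203.0.113.7", "malicious.example.net"}  # newly received threat intel

def find_historic_hits(log_lines):
    """Return (timestamp, host, indicator) for every archived event that
    mentions an indicator we only learned about today."""
    hits = []
    for line in log_lines:
        # assumed record format: "<timestamp> <host> connect <destination>"
        ts, host, _action, dest = line.split()
        if dest in KNOWN_BAD:
            hits.append((ts, host, dest))
    return hits

archive = [
    "2023-03-14T02:11:09 acct-pc-07 connect 203.0.113.7",
    "2023-03-14T02:12:41 acct-pc-07 connect updates.example.com",
]
```

The point is not the matching logic, which is trivial, but the archive: without a year of stored events there is nothing to rescan when new intelligence arrives.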

It is impossible to obtain data about the past without an event collection infrastructure. It also saves time: you don't have to analyze dozens, hundreds, or sometimes thousands of computers one by one. Best of all, of course, is a properly configured, dedicated IS solution. When event collection tools such as a properly configured SIEM are available, they help identify incidents, reconstruct timelines, and find the attackers' entry points. With such a system, individual microscopic events can be pieced together into a mosaic.

Mistake #4 -  Lack of Asset Information

In the best cases, we find descriptions of which components make up a business system, who is responsible for it, and what tasks it performs. More often, it's worse: in most companies, asset management either doesn't exist at all or the asset information is badly outdated. You look at documents from three years ago describing one network configuration when in fact the network has doubled in size. In such cases, quick and effective incident response is impossible.

At a basic level, asset management does not require a large investment or cumbersome systems. Keep in mind that attackers often end up knowing a company's infrastructure better than its own IT department does: they spend time studying it, figuring out what runs where, how it works, what processes exist, who connects to whom, and how it is monitored.

There are technical solutions that make this easier, but fundamentally it is a matter of process. A company needs to understand its business processes; whether it tracks its computers in Excel or automatically with a SIEM is a matter of choice.
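Even the Excel-level version of asset management can be sketched as a flat table keyed by hostname. The fields and sample rows here are illustrative assumptions; the value is that it answers an investigator's first questions (where is this machine, who owns it, how critical is it?) in seconds rather than days.

```python
import csv
import io

# A minimal sketch of an asset register: one row per host.
ASSET_CSV = """hostname,owner,department,location,criticality
acct-pc-07,I. Petrova,Accounting,HQ floor 2,high
dev-ws-31,A. Smith,Development,Remote,medium
"""

def load_assets(text):
    """Index the register by hostname for instant lookup."""
    return {row["hostname"]: row for row in csv.DictReader(io.StringIO(text))}

def describe(assets, hostname):
    """One-line answer to 'what is this machine?' for a responder."""
    a = assets.get(hostname)
    if a is None:
        return f"{hostname}: UNKNOWN asset - establish ownership first"
    return (f"{hostname}: {a['owner']} ({a['department']}, {a['location']}), "
            f"criticality={a['criticality']}")

assets = load_assets(ASSET_CSV)
```

The unknown-host branch matters as much as the lookup: a machine missing from the register is itself a finding worth escalating.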

Mistake #5 - Lack of Documentation

Documentation here means a description of how departments normally interact: whether, for example, an accountant ever logs into an IT department computer, or whether employees transfer files via personal email. Technical documentation describing how systems interact is just as important. For example, the documentation may say a component works with only one module when in fact it works with three. Without it, especially if the IT person who knew everything has long since quit, investigators end up chasing clues and suspicious events that have nothing to do with the incident. These interactions, formed or evolved historically, have to be pieced together from conversations with employees (for instance, that staff have no other tool for sharing files), which consumes a great deal of time. It's even worse when there are no employees left who can help make sense of these established relationships.

Mistake #6 -  Tampering with Evidence

Attempting to investigate an incident without the necessary expertise is a common mistake. Sometimes, while trying to image a disk, someone slips up and wipes the evidence drive. Sometimes we find that at a critical point in the attack's evolution the owner reinstalled the system, destroying artifacts critical to the investigation. In some cases, a computer must not be powered off before a memory dump is taken, or the information will be lost. At the same time, a SIEM doesn't always contain all the data needed, and without a copy of the history it is often impossible to trace the attack chain further. It is therefore imperative to have a rigorous plan for handling compromised hosts, and above all to coordinate any changes to them with dedicated experts from the very start of the investigation. In such urgent situations, the PT Expert Security Center tries to join the investigation quickly, often even before a formal agreement is signed.
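One basic habit that protects evidence is hashing every acquired image immediately, so later tampering or corruption is detectable. The sketch below shows only the hashing step and assumes a plain image file; real acquisitions use write blockers and forensic formats (e.g. E01), not a bare script.

```python
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash a (possibly very large) evidence file in chunks,
    so the whole image never has to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Record the hash in the case notes alongside the image at acquisition time;
# re-hash and compare before every analysis session.
```

The hash establishes a chain of custody: any analyst, at any later point, can prove the image they are examining is the one that was acquired.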

Mistake #7 - Taunting Intruders

Employees without the necessary expertise may start blocking everything indiscriminately, or even taunt the hackers in communications, without realizing how serious the situation is. The hackers may then start "burning bridges": not only covering their tracks but actively harming the company, for example by deploying ransomware across the entire infrastructure out of spite, or shutting down a critical service. You need to anticipate what attackers will do if they are discovered, and be prepared for it.

Mistake #8 - Lessons Not Learned

Some companies learn nothing from an incident. From time to time we investigate an attack, establish a timeline, make recommendations, and the client, having worked with us, does nothing: monitoring isn't set up, the vulnerability isn't fixed. Over the summer, we helped one organization eliminate the aftermath of a hack, removed malware, reset passwords, and cleaned up the network. In the fall, the situation repeated itself, and the hackers walked back into the network as if they were at home. In one case, a government organization was hacked twice this way. Most companies do implement protections after an incident, if not immediately. But when a network is under constant attack, the security department doesn't have another month to patch the first critical vulnerability being exploited: repeat incidents happen faster than that.

Mistake #9 - Alerting the Attacker

If the email system is compromised and the information security service discusses countermeasures over that same email, the hacker can monitor every step of the response. Either he will lie low, remove his traces, and make the investigation harder, or he will take destructive action (see mistake #7), leaving the company no time to investigate at all. This happened to us once: during an investigation, an administrator communicated with us from his work computer via the desktop version of Telegram. The administrator's computer turned out to be compromised, so the hackers could read the chats and strike preemptively, complicating an already difficult investigation until we changed the communication channel.

In the event of an incident, you need backup communication channels that are not connected to the company's infrastructure: the same Telegram or WhatsApp will do, but only from trusted devices. Be clear about one thing: if your entire network is compromised, you are no longer in control, and you need to think through every step of your investigation.

Mistake #10 -  Zombie Events

Even after investigation and remediation, the problem may not be closed. In one case, a six-month-old compromised image was restored from backup, handing the attackers an unexpected gift: their access to the internal network was restored with it. This is rare, but it hits companies hard. So you need to pay attention not only to your current infrastructure but also to your archiving and backup systems. For some time after the incident has been investigated, it's wise to keep watching for signs of compromise. Zombie incidents arise not only from backups but also from assets that were absent at the time of the investigation. For example, an employee goes on vacation and their computer isn't checked, or a machine is away for repair; a backdoor on it is later activated and no one cleans it up. In such cases, a timely response is possible only if a sound monitoring system is in place and all post-investigation recommendations have been implemented.
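One practical mitigation for zombie incidents is to vet a restored backup or a returning laptop against the indicators found during the investigation before it rejoins the network. The sketch below checks file hashes only; the hash value is an illustrative placeholder (it is sha256 of the string "test"), not a real malware signature, and a real sweep would also cover persistence mechanisms and scheduled tasks.

```python
import hashlib
import os

# Hashes of backdoor binaries identified during the investigation (illustrative).
BACKDOOR_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_known_bad(root):
    """Walk a restored filesystem and return paths of files whose
    SHA-256 matches an indicator from the investigation."""
    flagged = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in BACKDOOR_HASHES:
                flagged.append(path)
    return flagged
```

Run against the mounted backup image or the returning machine's drive; a single hit means the asset goes back to remediation, not back onto the network.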

The main task of an incident investigation sounds simple: get to the bottom of it, reconstruct the chronology of events, find the source, and stop the threat. But if a company lacks even the minimum infrastructure for identifying and handling incidents, or begins responding without the right expertise, even outside experts won't necessarily be able to help. It is better to prepare in advance for the encounter with cybercriminals.


Origin blog.csdn.net/ptsecurity/article/details/131332395