Analysis of the technical principles behind SEO trace-leaving and search-result flooding

The trace-leaving effect

Typically, you search for a certain keyword and a large number of results appear, all of which funnel visitors off to some other platform or website. The site hosting the trace-leaving page has no affiliation with the platform or website it sends people to.

Example image: (screenshot omitted)

Principle analysis

First, the mechanism is easy to understand from the page content and its link: the indexed page is essentially a site-search results page. Because the website echoes the visitor's search terms back into the page's TDK (title, description, keywords), a carefully crafted query turns that search result into an advertisement.
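To make the TDK point concrete, here is a minimal sketch (assuming a Flask-style site search; the route and parameter names are hypothetical) of a results page that echoes the query straight into its title and meta tags:

```python
# Minimal sketch of a site-search page that reflects the query into its TDK.
# Flask is assumed; the route and parameter names are illustrative only.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search")
def search():
    # Whatever appears in ?q= -- including an attacker-crafted advertisement --
    # is echoed into the <title>, description and keywords of the page.
    q = escape(request.args.get("q", ""))
    return f"""<!doctype html>
<html>
<head>
  <title>{q} - Site search</title>
  <meta name="description" content="Search results for {q}">
  <meta name="keywords" content="{q}">
</head>
<body><h1>Results for {q}</h1><!-- result list omitted --></body>
</html>"""

if __name__ == "__main__":
    app.run()
```

A URL such as /search?q=some+advertising+text therefore produces a page whose TDK is exactly that advertising text.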

There is still an important obstacle, however: search engines generally do not index internal search-result pages, especially those on low-weight (low-authority) websites.

In practice, spider-pool technology is used here to get around this, exploiting imperfections in the search engine's own algorithms.

To state the conclusion directly: for high-weight websites, when other sites link to one of their internal search pages, there is a certain probability that the search engine will index that linked search-result page and surface it as a result.

The trick is therefore to use a spider pool: splice the advertising text into the search URLs of a large number of collected high-weight websites, plant those spliced links across the pool's pages, and let search-engine spiders follow them to the high-weight sites, so that the search pages carrying the advertisement end up indexed.
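Purely for illustration, and assuming a hand-collected list of internal-search endpoints (the domains and query-parameter names below are made up), the spliced links that get planted across the spider pool look roughly like this:

```python
# Illustrative only: splicing advertising text into the internal-search URLs
# of collected high-weight sites. Shown so webmasters can recognize the
# pattern in their access logs; domains and parameters are fictitious.
from urllib.parse import quote

ad_text = "XX platform, contact 12345"          # text the abuser wants indexed
search_url_templates = [                        # collected search endpoints
    "https://forum.example.com/search?q={}",
    "https://mall.example.com/s?keyword={}",
    "https://news.example.com/so?wd={}",
]

spliced_links = [tpl.format(quote(ad_text)) for tpl in search_url_templates]

# These URLs are then embedded in spider-pool pages for crawlers to follow.
for link in spliced_links:
    print(link)
```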

At present, most of the offerings promoted through trace-leaving are grey- or black-market products.

Spider pool

A spider pool is a website, or group of websites, that search-engine spiders crawl very frequently. Because a large number of spiders appear to be gathered on these sites at any given time, they are called a spider pool.

The essence of a spider pool is to use programs to automatically generate large volumes of content tuned to search-engine preferences; the sites themselves exist mainly to embed links that steer the spiders toward whatever pages the operator wants crawled.
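Conceptually, a spider-pool page amounts to little more than the sketch below (the keyword list, link target and page structure are all made up); it is included only so the pattern is recognizable, not as a working tool:

```python
# Conceptual sketch of a spider-pool page: keyword-stuffed filler whose only
# real purpose is the embedded link the spider is meant to follow.
# The vocabulary and target URL are placeholders.
import random

FILLER_WORDS = ["seo", "ranking", "crawler", "index", "keyword", "optimize"]
TARGET_LINK = "https://high-weight-site.example/search?q=advertising+text"

def generate_page(n_paragraphs: int = 5) -> str:
    paragraphs = [
        "<p>" + " ".join(random.choices(FILLER_WORDS, k=30)) + "</p>"
        for _ in range(n_paragraphs)
    ]
    link = f'<a href="{TARGET_LINK}">{TARGET_LINK}</a>'
    return "<html><body>" + "".join(paragraphs) + link + "</body></html>"

if __name__ == "__main__":
    print(generate_page())
```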

Mitigation options for webmasters

The options I can think of so far are as follows:

Set a search threshold

For example, allow searching only after login, or require a CAPTCHA for every search. Such checks are easy to simulate in principle, but a trace-leaving operation works at scale and usually cannot afford to write custom rules for one specific website.
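A minimal sketch of the login gate, again assuming Flask (the session key and route are hypothetical):

```python
# Minimal sketch: refuse to render search results for anonymous visitors,
# so spliced URLs hit a 403 instead of a keyword-stuffed page.
from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

@app.route("/search")
def search():
    if not session.get("user_id"):      # no login session: block the request
        abort(403)
    q = request.args.get("q", "")
    return f"Results for {q}"           # real result rendering omitted
```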

Set up keyword filtering and text moderation

This is a good approach. You can import a blacklist vocabulary or connect the search box directly to a third-party text-moderation service.
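A minimal sketch of the blacklist check, with a purely illustrative word list; a real deployment would load the vocabulary from a maintained file or call a third-party moderation API instead:

```python
# Minimal sketch: reject search terms containing blacklisted words before
# they are rendered or echoed into the page's TDK. The word list is illustrative.
BLACKLIST = {"gambling", "invoice", "contact qq", "vx"}

def is_clean(query: str) -> bool:
    lowered = query.lower()
    return not any(word in lowered for word in BLACKLIST)

if __name__ == "__main__":
    print(is_clean("best running shoes"))        # True  -> render results
    print(is_clean("gambling site contact qq"))  # False -> block or return empty
```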

Set a search-term length limit

Trace-leaving generally relies on a long piece of text that repeats the target words several times, so limiting the length of the search term also mitigates the problem to some extent. The trade-off is that it may cost some long-tail keyword traffic.
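A minimal sketch of the length cap; the 32-character limit is an arbitrary illustrative value that a real site would tune against its own query logs:

```python
# Minimal sketch: reject over-long search terms before they reach the TDK.
# The 32-character cap is an arbitrary example value.
from typing import Optional

MAX_QUERY_LENGTH = 32

def sanitize_query(query: str) -> Optional[str]:
    query = query.strip()
    if len(query) > MAX_QUERY_LENGTH:
        return None                     # or truncate: query[:MAX_QUERY_LENGTH]
    return query

if __name__ == "__main__":
    print(sanitize_query("running shoes"))                           # "running shoes"
    print(sanitize_query("a long repeated advertising blurb " * 4))  # None
```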

Origin: blog.csdn.net/qq_20051535/article/details/131225425