How to use meg to discover as many URLs on a target host as possible

About meg

meg is a powerful URL information-gathering tool. With its help, researchers can collect as many URLs related to a target host as possible without disrupting the target host or its servers.

The tool can fetch multiple URL paths from multiple hosts at the same time; it requests the same path across all hosts before moving on to the next path, and repeats until the path list is exhausted.

The tool runs quickly, and because requests to any single host are spaced out, it will not flood the target with traffic or affect its normal operation.
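The path-major ordering described above can be sketched with a small shell loop. This is purely illustrative (it is not meg's implementation, and the paths and hosts are placeholders):

```shell
# One path is tried against every host before the next path is
# touched, which spreads the load across hosts.
for p in /robots.txt /package.json; do
  for h in http://example.com http://example.net; do
    echo "$h$p"   # meg would issue a request here; we just print the URL
  done
done
```

The first two lines printed are `http://example.com/robots.txt` and `http://example.net/robots.txt`: both hosts are visited for `/robots.txt` before `/package.json` is considered.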

Tool installation

meg is written in Go and has no other runtime dependencies, so we first need to install and configure Go v1.9+ on the local machine. Then install meg with the following command:

▶ go install github.com/tomnomnom/meg@latest

Alternatively, we can visit the project's [Releases page] to download a precompiled build and place the binary somewhere in $PATH (e.g. /usr/bin/).

If you encounter installation errors like the ones below, your Go version is likely too old; the symbols in these errors only exist in newer Go releases, so upgrading Go should resolve them:

# github.com/tomnomnom/rawhttp
/root/go/src/github.com/tomnomnom/rawhttp/request.go:102: u.Hostname undefined (type *url.URL has no field or method Hostname)
/root/go/src/github.com/tomnomnom/rawhttp/request.go:103: u.Port undefined (type *url.URL has no field or method Port)
/root/go/src/github.com/tomnomnom/rawhttp/request.go:259: undefined: x509.SystemCertPool

Basic usage

We can provide the tool with a list file containing paths:

/robots.txt

/.well-known/security.txt

/package.json

Or provide a list file containing host addresses:

http://example.com

https://example.com

http://example.net

meg will then send a request for each path to every host:

▶ meg --verbose paths hosts

out/example.com/45ed6f717d44385c5e9c539b0ad8dc71771780e0 http://example.com/robots.txt (404 Not Found)

out/example.com/61ac5fbb9d3dd054006ae82630b045ba730d8618 https://example.com/robots.txt (404 Not Found)

out/example.net/1432c16b671043271eab84111242b1fe2a28eb98 http://example.net/robots.txt (404 Not Found)

out/example.net/61deaa4fa10a6f601adb74519a900f1f0eca38b7 http://example.net/.well-known/security.txt (404 Not Found)

out/example.com/20bc94a296f17ce7a4e2daa2946d0dc12128b3f1 http://example.com/.well-known/security.txt (404 Not Found)

...

The tool will store all data output results in a directory named ./out:

▶ head -n 20 ./out/example.com/45ed6f717d44385c5e9c539b0ad8dc71771780e0

http://example.com/robots.txt

> GET /robots.txt HTTP/1.1

> Host: example.com

< HTTP/1.1 404 Not Found

< Expires: Sat, 06 Jan 2018 01:05:38 GMT

< Server: ECS (lga/13A2)

< Accept-Ranges: bytes

< Cache-Control: max-age=604800

< Content-Type: text/*

< Content-Length: 1270

< Date: Sat, 30 Dec 2017 01:05:38 GMT

< Last-Modified: Sun, 24 Dec 2017 06:53:36 GMT

< X-Cache: 404-HIT

<!doctype html>

<html>

<head>

If no arguments are provided, meg reads paths from a file named ./paths and target hosts from a file named ./hosts, printing nothing to the terminal:

▶ meg

The results, however, are still recorded in an index file named ./out/index:

▶ head -n 2 ./out/index

out/example.com/538565d7ab544bc3bec5b2f0296783aaec25e756 http://example.com/package.json (404 Not Found)

out/example.com/20bc94a296f17ce7a4e2daa2946d0dc12128b3f1 http://example.com/.well-known/security.txt (404 Not Found)

We can use this index file to look up where each response is stored, or simply grep the output directory directly:

▶ grep -Hnri '< Server:' out/

out/example.com/61ac5fbb9d3dd054006ae82630b045ba730d8618:14:< Server: ECS (lga/13A2)

out/example.com/bd8d9f4c470ffa0e6ec8cfa8ba1c51d62289b6dd:16:< Server: ECS (lga/13A3)
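Since each index line follows the pattern `<saved file> <url> (<status>)`, standard text tools can slice it. Here is a minimal sketch using fabricated sample data (the file names and URLs below are made up for illustration, not real meg output):

```shell
# Build a throwaway index in the same three-column format meg writes,
# then extract the saved-response paths for 200 responses only.
tmp="$(mktemp -d)"
cat > "$tmp/index" <<'EOF'
out/example.com/aaa http://example.com/robots.txt (200 OK)
out/example.com/bbb http://example.com/admin.php (404 Not Found)
EOF
awk '/\(200 /{print $1}' "$tmp/index"   # prints out/example.com/aaa
```

The same pattern works for any status code of interest, e.g. `/\(30[12] /` to pull out redirects.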

To request just a single path, specify it directly as an argument:

▶ meg /admin.php

Detailed usage

meg's full help output is shown below:

▶ meg --help

Request many paths for many hosts

Usage:
  meg [options] [path|pathsFile] [hostsFile] [outputDir]

Options:
  -c, --concurrency <val>    Set the concurrency level (default: 20)
  -d, --delay <val>          Milliseconds between requests to the same host (default: 5000)
  -H, --header <header>      Send a custom HTTP header
  -r, --rawhttp              Use the rawhttp library to send requests
  -s, --savestatus <status>  Save only responses with the specified status code
  -v, --verbose              Verbose mode
  -X, --method <method>      HTTP method to use (default: GET)

Defaults:
  pathsFile: ./paths
  hostsFile: ./hosts
  outputDir: ./out

Paths file format:
  /robots.txt
  /package.json
  /security.txt

Hosts file format:
  http://example.com
  https://example.edu
  https://example.net

Examples:
  meg /robots.txt
  meg -s 200 -X HEAD
  meg -c 30 /
  meg hosts.txt paths.txt output

Concurrency settings:

▶ meg --concurrency 5

Delay settings:

▶ meg --delay 10000

Adding a header:

▶ meg --header "Origin: https://evil.com"

▶ grep -h '^>' out/example.com/*

> GET /.well-known/security.txt HTTP/1.1

> Origin: https://evil.com

> Host: example.com

...

Saving only responses with a specified status code:

▶ meg --savestatus 200 /robots.txt

Specify the HTTP method:

▶ meg --method TRACE

▶ grep -nri 'TRACE' out/

out/example.com/61ac5fbb9d3dd054006ae82630b045ba730d8618:3:> TRACE /robots.txt HTTP/1.1

out/example.com/bd8d9f4c470ffa0e6ec8cfa8ba1c51d62289b6dd:3:> TRACE /.well-known/security.txt HTTP/1.1

...
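To then single out hosts where TRACE was actually accepted, grep the saved responses for a success status line. The sketch below runs against fabricated response files (the file names and contents are invented to mimic meg's `>` request / `<` response format):

```shell
# Recreate two minimal saved responses, then list only the files
# whose response status line shows a 200.
tmp="$(mktemp -d)"
mkdir -p "$tmp/out/example.com"
printf '> TRACE /robots.txt HTTP/1.1\n< HTTP/1.1 200 OK\n' > "$tmp/out/example.com/aaa"
printf '> TRACE /robots.txt HTTP/1.1\n< HTTP/1.1 405 Method Not Allowed\n' > "$tmp/out/example.com/bbb"
cd "$tmp"
grep -l '^< HTTP/1.1 200' out/example.com/*   # prints out/example.com/aaa
```

`grep -l` prints only matching file names, which is convenient when the interesting result is "which responses", not "which lines".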


Origin blog.csdn.net/m0_60571990/article/details/129565829