A brief introduction to using Linux curl

 

 

Source: http://www.linuxidc.com/Linux/2008-01/10891.htm

 


curl is a very powerful HTTP command-line tool on Linux, with a wide range of functions.

1) Without further ado, let's start here!

$ curl http://www.linuxidc.com

Press Enter, and the HTML of www.linuxidc.com is printed to the screen.

2) Hmm, what if you want to save the page you just fetched? Do you have to do this?

$ curl http://www.linuxidc.com > page.html

Sure, but there's no need to go to that trouble!

curl has a built-in option for saving the HTTP result to a file: -o

$ curl -o page.html http://www.linuxidc.com

This way, a progress meter appears on the screen while the page downloads; when it reaches 100%, you're done.

3) What?! Can't access it? Then your proxy probably isn't set up.

When using curl, the -x option specifies the proxy server and port to use for HTTP access:

$ curl -x 123.45.67.89:1080 -o page.html http://www.linuxidc.com
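If the proxy also requires a login, curl can pass the credentials with -U; a small sketch, with placeholder user name and password:

$ curl -U proxyuser:proxypass -x 123.45.67.89:1080 -o page.html http://www.linuxidc.com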

4) Some websites are annoying: they use cookies to record your session information.

Browsers like IE and Netscape handle cookie information easily, of course, but what about our curl?

Meet this option: -D, which saves the cookie information from the HTTP response into a file of its own.

$ curl -x 123.45.67.89:1080 -o page.html -D cookie0001.txt http://www.linuxidc.com

This way, while the page is saved into page.html, the cookie information is also saved into cookie0001.txt.
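A related option worth knowing is -c, which writes the received cookies into a cookie-jar file in curl's own format; that file can be fed straight back with -b on the next run. A small sketch, reusing the proxy and file names from above:

$ curl -x 123.45.67.89:1080 -c cookie0001.txt -o page.html http://www.linuxidc.com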

5) So how do we keep using the cookie information left over from last time on the next visit? You know, many websites rely on watching your cookies to decide whether you are browsing them by the rules.

This time we use the option that attaches the previously saved cookie information to the HTTP request: -b

$ curl -x 123.45.67.89:1080 -o page1.html -D cookie0002.txt -b cookie0001.txt http://www.linuxidc.com

In this way, we can simulate almost all IE operations to visit web pages!

6) Wait a moment, I seem to have forgotten something. Right, the browser information!

Some nasty websites insist that we visit them with certain specific browsers, and sometimes even with specific versions. Who has the time to hunt down all those weird browsers for them?!

Fortunately, curl provides a useful option that lets us arbitrarily specify the browser information reported for a visit: -A

$ curl -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" -x 123.45.67.89:1080 -o page.html -D cookie0001.txt http://www.linuxidc.com

In this way, when the server receives the request, it will think you are IE 6.0 running on Windows 2000, when in fact you might be on a Mac!

And "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" tells the other side you are running Netscape 4.73 on a Linux PC, hehehe.

7) Another commonly used server-side restriction is to check the Referer of the HTTP request. For example, you are supposed to visit the home page first and then a download page linked from it, so the Referer of that second request should be the address of the home page you just visited successfully. If the server finds that a request for the download page carries a Referer that is not the home page address, it can conclude the link was hot-linked.

Hmph, I hate that. I just want to hot-link!

Fortunately, curl provides us with the option to set the referer: -e

$ curl -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" -x 123.45.67.89:1080 -e "mail.linuxidc.com" -o page.html -D cookie0001.txt http://www.linuxidc.com

In this way, you can fool the server into believing you clicked a link from mail.linuxidc.com, hehe.

8) I just realized I missed something important: using curl to download files!

As mentioned earlier, -o downloads a page into a file, and downloading a single file works the same way. For example:

$ curl -o 1.jpg http://cgi2.tky.3web.ne.jp/~zzh/screen1.JPG

Here comes a new option: -O (capital O), used like this:

$ curl -O http://cgi2.tky.3web.ne.jp/~zzh/screen1.JPG

In this way, the file is automatically saved locally under the name it has on the server!

Here is an even better one.

What if, besides screen1.JPG, you also need to download screen2.JPG, screen3.JPG, ..., screen10.JPG? Do we have to write a script for that?

No need!

In curl, just write like this:

$ curl -O http://cgi2.tky.3web.ne.jp/~zzh/screen[1-10].JPG

Hehehe, great, right?!
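Depending on your curl version, the numeric range can also take a step. For instance, to grab only the odd-numbered screenshots (a sketch, using the same URL as above):

$ curl -O http://cgi2.tky.3web.ne.jp/~zzh/screen[1-10:2].JPG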

9) Moving on, let's keep talking about downloads!

$ curl -O http://cgi2.tky.3web.ne.jp/~{zzh,nick}/[001-201].JPG

The resulting downloads are:

~zzh/001.JPG

~zzh/002.JPG

...

~zzh/201.JPG

~nick/001.JPG

~nick/002.JPG

...

~nick/201.JPG

Convenient enough? Hahaha.

Huh? Don't get happy too soon.

Since the file names under both ~zzh and ~nick are 001, 002, ..., 201, the downloads end up with the same names, and the later ones overwrite the earlier ones.

No matter, we have an even more ruthless trick!

$ curl -o "#2_#1.jpg" http://cgi2.tky.3web.ne.jp/~{zzh,nick}/[001-201].JPG

What's this... a download with custom file names? That's right, hehe!

With this, the downloaded files get custom names:

original: ~zzh/001.JPG → saved as: 001_zzh.jpg
original: ~nick/001.JPG → saved as: 001_nick.jpg

This way, you no longer have to worry about duplicate file names, hehe.

10) Continuing with downloads.

On the Windows platform we usually use a tool like FlashGet to download in parallel chunks and to resume broken downloads; curl doesn't lose to anyone in these areas either, hehe.

For example, if the connection suddenly drops while we are downloading screen1.JPG, we can resume like this:

$ curl -C - -O http://cgi2.tky.3web.ne.jp/~zzh/screen1.JPG

Of course, don't try to fool it with a half-finished file from FlashGet; partial files from other download tools may not be usable this way.

To download in chunks, we can use the -r option.

For example, suppose we want to download http://cgi2.tky.3web.ne.jp/~zzh/zhao1.MP3 (Teacher Zhao's phone recital :D). We can use these commands:

$ curl -r 0-10240 -o "zhao.part1" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.MP3 &

$ curl -r 10241-20480 -o "zhao.part2" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.MP3 &

$ curl -r 20481-40960 -o "zhao.part3" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.MP3 &

$ curl -r 40961- -o "zhao.part4" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.MP3

This downloads the file in chunks, but you have to merge the pieces yourself afterwards. On UNIX or macOS you can use cat zhao.part* > zhao.MP3; on Windows, use copy /b.
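If you'd rather script the whole thing, here is a minimal sketch that runs the four range downloads above in parallel and then merges the pieces (the byte ranges are the same illustrative ones used above):

url=http://cgi2.tky.3web.ne.jp/~zzh/zhao1.MP3
curl -r 0-10240 -o zhao.part1 "$url" &
curl -r 10241-20480 -o zhao.part2 "$url" &
curl -r 20481-40960 -o zhao.part3 "$url" &
curl -r 40961- -o zhao.part4 "$url" &
wait    # wait for all four background downloads to finish
cat zhao.part1 zhao.part2 zhao.part3 zhao.part4 > zhao.MP3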

That was downloading over the HTTP protocol; in fact, FTP works too. Usage:

$ curl -u name:passwd ftp://ip:port/path/file

or the familiar

$ curl ftp://name:passwd@ip:port/path/file

11) After downloading, the natural next step is uploading, and the upload option is -T.

For example, to upload a file to an FTP server:

$ curl -T localfile -u name:passwd ftp://upload_site:port/path/

Of course, you can also upload a file to an HTTP server:

$ curl -T localfile http://cgi2.tky.3web.ne.jp/~zzh/abc.cgi

Note that in this case, the HTTP method used is PUT.
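If you want to confirm what curl actually sends, add -v and the request headers are printed; you should see a PUT request line (same example URL as above):

$ curl -v -T localfile http://cgi2.tky.3web.ne.jp/~zzh/abc.cgi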

Having just mentioned PUT, hehe, I'm naturally reminded of several other methods I haven't covered yet. Don't forget GET and POST!

When HTTP submits a form, the two most commonly used methods are POST and GET.

GET needs no extra option; just put the variables into the URL. For example:

$ curl "http://www.linuxidc.com/login.cgi?user=nickwolfe&password=12345"

The option for POST mode is -d. For example:

$ curl -d "user=nickwolfe&password=12345" http://www.linuxidc.com/login.cgi

is equivalent to sending a login request to this site.

Whether to use GET or POST depends on how the program on the remote server is set up.
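If the values contain special characters, newer versions of curl also offer --data-urlencode, which URL-encodes each field for you; a sketch against the same hypothetical login script:

$ curl --data-urlencode "user=nickwolfe" --data-urlencode "password=p@ss word&123" http://www.linuxidc.com/login.cgi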

One thing to note is file upload in POST mode, for example with a form like:

<form method="POST" enctype="multipart/form-data" action="http://cgi2.tky.3web.ne.jp/~zzh/up_file.cgi">

<input type=file name=upload>

<input type=submit name=nick value="go">

</form>

To simulate such a form submission with curl, the command would be:

$ curl -F upload=@localfile -F nick=go http://cgi2.tky.3web.ne.jp/~zzh/up_file.cgi
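-F has a few more tricks; for example, you can override the file name the server sees (a hypothetical variation on the form above):

$ curl -F "upload=@localfile;filename=report.txt" -F nick=go http://cgi2.tky.3web.ne.jp/~zzh/up_file.cgi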

We've talked about a lot, but in fact curl has many, many more tricks. For example, to use a local certificate over HTTPS, you can do this:

$ curl -E localcert.pem https://remote_server

For another example, you can also use curl to look up words through the dict protocol:

$ curl dict://dict.org/d:computer
