20190613_day03 getting to know web crawlers

Today's class covered a lot of material.

The main points:

1 The remainder of functions:

  Empty functions and the role of pass; function objects: the function name points to the function's memory address

  Nested function definitions and calls

  Distinguishing between different namespaces

2 An overview of packages and modules:

  How to import modules and packages

  The time module

  The os and sys modules

  The json module

3 Crawler basics:

  Using Chrome's debug mode

  How to find the links to the content you want

  Basic usage of requests

  Trying out a crawler

The class notes follow:

Morning

# Today's contents:

# the remainder of functions
# built-in modules
# modules and packages
# crawler basics
# the requests module

# Morning

# empty functions
# their role: pass means "do nothing" and acts as a placeholder

'''
An empty function's body is just pass; it reserves a name for
functionality to be implemented later.
'''

# function object
# the function name refers to (points to)
# the memory address of the function
# def func1():
#     pass
# print(func1)
# def func2():
#     pass
# dict1 = {
#     '1': func1,
#     '2': func2
# }
# '''
# When you need to choose which function to run,
# binding functions to keys in a dictionary is much better
# than writing a pile of if/else branches.
# '''
# choice = input('please enter the function number: ').strip()
# if choice in dict1:
#     dict1[choice]()
#
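A runnable version of the dictionary-dispatch idea above (the function names and menu choices are illustrative, not from the original notes):

```python
def start():
    return 'starting...'

def stop():
    return 'stopping...'

# map menu choices to function objects instead of chaining if/else
dispatch = {
    '1': start,
    '2': stop,
}

def run(choice):
    # look the choice up and call the bound function object
    if choice in dispatch:
        return dispatch[choice]()
    return 'unknown choice'

print(run('1'))  # starting...
print(run('9'))  # unknown choice
```

Adding a new action only requires one new dictionary entry, not another elif branch.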
# # nested functions
# # nested function definitions
# def func1():
#     print('func1...')
#
#     def func2():
#         print('func2...')
#
#         def func3():
#             print('func3...')

# # nested function calls
def func1():
    print('func1...')

    def func2():
        print('func2...')

        def func3():
            print('func3...')
        return func3

    return func2
res1 = func1()

res2 = res1()
res2()
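Returning an inner function like this is also the basis of closures; a small sketch under illustrative names (not part of the original notes):

```python
def make_counter():
    count = 0

    def step():
        # the inner function closes over count from the enclosing scope
        nonlocal count
        count += 1
        return count

    return step

counter = make_counter()
print(counter(), counter(), counter())  # 1 2 3
```

Each call to make_counter() produces an independent counter with its own enclosed count.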
# #
# # def func1():
# #     print('func1...')
# #
# #     def func2():
# #         print('func2...')
# #
# #         def func3():
# #             print('func3...')
# #         func3()
# #     func2()
# # func1()
#
# '''
# function namespaces
# '''
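The notes only name this topic; a minimal sketch of how Python resolves a name through the local, enclosing, global, and built-in namespaces (the variable names are illustrative):

```python
x = 'global x'

def outer():
    x = 'enclosing x'

    def inner():
        x = 'local x'
        return x  # found in the local namespace first

    # inner() sees its local x; outer still sees the enclosing x
    return inner(), x

print(outer())     # ('local x', 'enclosing x')
print(x)           # global x
print(len('abc'))  # len is found in the built-in namespace
```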
#
# '''
# packages and modules
# '''
# from test import a
# a
#
# # in test.py:
#
# if __name__ == '__main__':
#     print("test")
#     # test code is written here
# When test.py is run directly, __name__ is '__main__'; when the file is
# imported, __name__ is the module's name, not '__main__', so the guarded
# test code does not run automatically on import.
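A runnable sketch of that guard: the script below writes a throwaway module into a temporary directory, imports it, and shows that only the top-level code runs on import (the module name and variables are made up for the demo):

```python
import importlib
import os
import sys
import tempfile

source = """
executed_on_import = True
ran_guarded_block = False
if __name__ == '__main__':
    ran_guarded_block = True
"""

with tempfile.TemporaryDirectory() as tmp:
    # create demo_mod.py and make it importable
    with open(os.path.join(tmp, 'demo_mod.py'), 'w', encoding='utf-8') as f:
        f.write(source)
    sys.path.insert(0, tmp)
    demo_mod = importlib.import_module('demo_mod')
    sys.path.remove(tmp)

print(demo_mod.executed_on_import)  # True: top-level code runs on import
print(demo_mod.ran_guarded_block)   # False: the guarded block does not
```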

# # commonly used built-in modules
# # time
#
# import time
# # get a timestamp
# print(time.time())

# os module
# sys module

# import os   # interact with files in the operating system
# import sys  # interact with the Python interpreter
#
# print(os.path.dirname(__file__))
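A few commonly used calls from these modules, runnable as-is (the path components are illustrative):

```python
import os
import sys
import time

# time: timestamps and formatted local times
stamp = time.time()                             # seconds since the epoch, as a float
formatted = time.strftime('%Y-%m-%d %H:%M:%S')  # human-readable local time
print(stamp, formatted)

# os / os.path: build and split filesystem paths
joined = os.path.join(os.getcwd(), 'data', 'notes.txt')
print(os.path.dirname(joined))   # the directory part of the path
print(os.path.basename(joined))  # notes.txt

# sys: information about the Python interpreter
print(sys.platform)
print(sys.version_info.major)
```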

Afternoon
# import json
#
# user_info = {
#     'name': 'tank',
#     'pwd': '123'
# }
# # serialize: convert the dictionary into json-format data,
# # i.e. turn it into a json string
# res = json.dumps(user_info)
# print(res)
# with open('json1.json', 'wt', encoding='utf-8') as f:
#     f.write(res)
#
# # dump triggers f.write automatically
# user_info = {
#     'name': 'tank',
#     'pwd': '123'
# }
# with open('json1.json', 'w', encoding='utf-8') as f:
#     json.dump(user_info, f)
#
#
# # deserialize with loads
# with open('json1.json', 'rt', encoding='utf-8') as f:
#     res = f.read()
#     user_dict = json.loads(res)
#     print(user_dict)
#     print(type(user_dict))
#
# # load triggers f.read automatically
# with open('json1.json', 'rt', encoding='utf-8') as f:
#     user_dict = json.load(f)
#     print(user_dict)
#     print(type(user_dict))
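The dump/load round trip above, condensed into a runnable sketch that writes into a temporary directory instead of the working directory (the file name matches the one in the notes):

```python
import json
import os
import tempfile

user_info = {'name': 'tank', 'pwd': '123'}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'json1.json')

    # dump: serialize straight into the open file
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(user_info, f)

    # load: deserialize straight from the open file
    with open(path, 'r', encoding='utf-8') as f:
        user_dict = json.load(f)

print(user_dict)               # {'name': 'tank', 'pwd': '123'}
print(user_dict == user_info)  # True: the round trip preserves the dict
```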

# web crawlers
'''
HTTP protocol
request url
    https://www.baidu.com
request method
    GET
request headers:
    cookie: may need special attention
    User-Agent: proves that you are a browser
        Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.80 Safari/537.36
    Host:
        www.baidu.com
'''

# # using the requests module
#
# import requests
# respn = requests.get(url='https://www.baidu.com')
# respn.encoding = 'utf-8'
# print(respn)
# # the response status code
# print(respn.status_code)
# # the response text
# print(respn.text)
#
# with open('baidu.html', 'w', encoding='utf-8') as f:
#     f.write(respn.text)

The following code crawls a picture from a wallpaper site; I swapped in a hand-drawn picture of my favorite EVA instead.
import requests
res = requests.get('http://pic1.win4000.com/wallpaper/e/58e5aa2ea9215.jpg')

with open('shipin.jpg','wb') as f:
    f.write(res.content)

Origin www.cnblogs.com/badtimeandheroesaround/p/11020060.html