[Python Web Crawler] 150-Lecture Python Web Crawler Course Notes, Chapter 6, Data Storage: MySQL

The web crawler course has now reached the MySQL storage part; keep going!

This post covers how to use MySQL during crawling. It does not go into installing MySQL; the focus is on operating MySQL from Python.

Basic MySQL operations: https://blog.csdn.net/weixin_44566432/article/details/106025116


1. MySQL drivers

To operate MySQL from Python you need a driver, a piece of middleware between Python and the MySQL server. Common choices are:

  • MySQLdb: Python 2 only; maintenance has stopped
  • mysqlclient: a fork of MySQLdb with Python 3 support
  • PyMySQL: a pure-Python driver; this course uses pymysql

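Since the course picks pymysql, it is worth knowing that pymysql can also masquerade as the legacy MySQLdb driver, so old Python 2-era code that does `import MySQLdb` keeps working. A minimal sketch, guarded so it also runs where pymysql is not installed:

```python
# pymysql ships a compatibility shim that registers it under the name
# MySQLdb, so legacy code importing MySQLdb resolves to pymysql instead.
try:
    import pymysql
    pymysql.install_as_MySQLdb()
    import MySQLdb  # now served by pymysql
    print("MySQLdb name served by:", MySQLdb.__name__)
except ImportError:
    print("pymysql not installed; run: pip install pymysql")
```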
2. MySQL connection

3. MySQL insert

4. MySQL query

import pymysql

# 1. Connect to the database with pymysql.connect()
# To store Chinese text, charset must be 'utf8' (or 'utf8mb4'), not 'utf-8'
db = pymysql.connect(host='localhost', user='root', password='123456',
                     database='csdn_crawler', charset='utf8')

# 2. Operate the database through a cursor
cursor = db.cursor()

# 3. Query data: execute the SELECT, then fetch rows from the cursor.
# The cursor is consumed as you fetch: fetchone() returns the next row,
# fetchall() returns all remaining rows, fetchmany(n) returns up to n of them.
cursor.execute("select * from article")
result = cursor.fetchone()      # first row as a tuple, or None if empty
# print(result)
result2 = cursor.fetchall()     # all remaining rows
# print(result2)
result3 = cursor.fetchmany(4)   # up to 4 of the remaining rows
print(result3)

# 4. Insert data (commented out; uncomment to run)
# sql = "insert into article(id, title, content) values (null, %s, %s)"
# cursor.execute(sql, ('hi', 'hello'))
# db.commit()   # INSERT/UPDATE/DELETE take effect only after commit

db.close()
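The commented-out insert above uses %s placeholders; passing the values as a separate tuple lets the driver escape them, which avoids SQL injection from crawled text. A minimal sketch, reusing the `article` table from the code above (the helper name `insert_article` is my own):

```python
def insert_article(cursor, title, content):
    """Insert one row into `article`, letting the driver escape the values."""
    # Never build SQL by string formatting on crawled text; the %s
    # placeholders are filled in by cursor.execute(), which escapes
    # quotes and other special characters for us.
    sql = "insert into article(id, title, content) values (null, %s, %s)"
    cursor.execute(sql, (title, content))
```

Usage against a live connection, as opened earlier: `insert_article(cursor, "it's a title", "hello")` followed by `db.commit()`, since writes are not visible until committed.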

 

Origin blog.csdn.net/weixin_44566432/article/details/108748762