[python crawler] 13. What to eat without getting fat (crawler practical exercise)

Preface

"What to eat without getting fat" is a topic I paid close attention to when I was working out a while ago.

I believe many people, even those who don't work out, pay attention to a healthy diet like I do and care about the calories in the food they eat every day.

Still, very few people actually count their daily food intake; it is simply too much trouble. You would have to download a dedicated calorie-checking app, type in the names of the foods, and look them up one by one.

With a crawler, however, we can easily scrape the calorie information for these foods and know how many calories we have consumed with almost no effort.

There are a huge number of foods out there, so if we want to crawl food calories, the amount of data will be large.

You may be thinking of using multiple coroutines for the crawl. Indeed, using coroutines is a very reasonable choice when crawling a large amount of data.

We already covered how to use coroutines in the previous level, so here is just a brief review.

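As a quick refresher, here is a minimal skeleton of the gevent approach from the previous level (a sketch only; the two URLs are placeholders rather than the pages we will crawl later):

    from gevent import monkey
    monkey.patch_all()
    # Patch blocking I/O so gevent can switch between tasks while waiting on the network.
    import gevent
    import requests
    from gevent.queue import Queue
    
    work = Queue()
    for url in ['https://www.example.com/', 'https://www.example.org/']:
        work.put_nowait(url)  # fill the queue with the URLs to crawl
    
    def crawler():
        while not work.empty():
            url = work.get_nowait()      # take one URL out of the queue
            res = requests.get(url)
            print(url, res.status_code)  # placeholder work: just report the status code
    
    tasks_list = [gevent.spawn(crawler) for x in range(2)]  # create 2 crawler tasks
    gevent.joinall(tasks_list)  # start the coroutines and wait for all tasks to finish

This is exactly the structure the project below follows: a queue of URLs, a crawler function, and gevent.spawn() plus gevent.joinall() to run it.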

Project practice

To crawl food calories, we first have to choose a website that stores food calorie information and crawl the data from it.

I know of exactly such a website: Boohee (boohee.com, in Chinese 薄荷网, literally "Mint"). It is a fitness and weight-loss site where you can look up food data.


If we crawl food calories from this website, we can put the coroutine knowledge from the previous level into practice and end up with a food calorie table, which is the best of both worlds.

So the project for this level can be defined as: use multiple coroutines to crawl food calorie data from Boohee.

As you know, when we work on a project we don't write code right away. The first thing to do is clarify the goal.

Clarify the goal

Now, open the Boohee food page in your browser:

http://www.boohee.com/food/

Is it open? Be sure to really open it!

Browse the site briefly and you will find 11 common food categories.

Click the category [Grains, Potatoes, Miscellaneous Beans, Staple Foods] and you will see 10 pages of food records to the right of the category list, showing the names of the foods in this category and their calorie information. Clicking a food's name takes you to its details page.

At this point, our project goal can be set as: use multiple coroutines to crawl the food information (food name, calories, and link to the food details page) in the 11 common food categories on Boohee.

Analysis process

Once the goal is clear, we move on to the [analysis process]. This step plays a key role in whether the project succeeds.

We can start from the four steps of a crawler (acquire data → parse data → extract data → store data) and analyze them one by one.

To obtain the food calorie data, we must first determine where the data actually lives.


In Level 7, we talked about how to determine where data is stored. Open http://www.boohee.com/food/group/1, right-click and choose "Inspect", click the Network tab, and then refresh the page. Click the 0th request (named 1) and look at its Response.

We can find the food information in the Response, which means the data we want lives in the HTML.

Looking at the Headers of that same request, we can see that Boohee's pages are requested with the GET method.

Since the request method is GET, we know we can use requests.get() to fetch the data.
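
As a quick sanity check, here is a minimal sketch that fetches the first category page with requests.get() (the user-agent string is the same one used in the reference code later; the exact value is not critical as long as it looks like a normal browser):

    import requests
    
    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36'
    }
    # Fetch page 1 of the first food category and confirm the request succeeded.
    res = requests.get('http://www.boohee.com/food/group/1', headers=headers)
    print(res.status_code)  # 200 means the HTML we saw in the Response panel came back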

Close the "Inspect" tool for now. Next, let's look at the patterns in the URLs of the common food categories and of each page of food records.

Click the first category, [Grains, Potatoes, Miscellaneous Beans, Staple Foods], and the URL shows:

http://www.boohee.com/food/group/1

Click the second category, [Eggs, Meat and Products], and the URL becomes:

http://www.boohee.com/food/group/2

We can make a guess: the group part of the URL stands for the common food categories, and the number after it indicates which category it is.

Click through a few more categories to verify this guess.

Sure enough, the category URLs follow a regular pattern. The first 10 common food category URLs are:

http://www.boohee.com/food/group/ + a number from 1 to 10

Only the URL of the last common food category, [Dishes], is different:

http://www.boohee.com/food/view_menu

We have found the pattern for the category URLs. Now go back to the category [Grains, Potatoes, Miscellaneous Beans, Staple Foods], click through to page 2 of its food records, and see how the URL changes.

The URL has changed from http://www.boohee.com/food/group/1 to:

http://www.boohee.com/food/group/1?page=2

The URL has gained a page parameter. Does the number 2 mean page 2? Let's turn to the next couple of pages to check.

It turns out that ?page=number really does indicate the page number. As long as you change the number after page, you can turn the pages.

But then why is the URL of page 1 simply http://www.boohee.com/food/group/1, without ?page=1?

Is it just not shown by default? Let's try adding ?page=1 to http://www.boohee.com/food/group/1 and see what happens.

http://www.boohee.com/food/group/1?page=1

You will find that with ?page=1 added, page 1 of the food records still opens.

Based on the observations above, we can summarize the URL rules for each page of food records in each food category on Boohee:

For the first 10 categories: http://www.boohee.com/food/group/{category number}?page={page number}

For the 11th category, [Dishes]: http://www.boohee.com/food/view_menu?page={page number}
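
As a small illustration of how these rules turn into concrete URLs (just a sketch; the full loop version appears in the reference code below), str.format() fills in the category and page numbers:

    url_1 = 'http://www.boohee.com/food/group/{type}?page={page}'
    print(url_1.format(type=1, page=2))
    # -> http://www.boohee.com/food/group/1?page=2
    
    url_2 = 'http://www.boohee.com/food/view_menu?page={page}'
    print(url_2.format(page=3))
    # -> http://www.boohee.com/food/view_menu?page=3
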
Next, let's analyze how to parse and extract data.

We established earlier that Boohee's food calorie data lives in the HTML, so we can parse it with the BeautifulSoup module.

As for how to extract the data, we first have to figure out the structure of the HTML.

Right-click to open the "Inspect" tool, switch to Elements, click the element-picker cursor, and move the mouse over the food [Easy Fun Purple Sweet Potato Nutritional Porridge]. You will find that the food's information sits inside a <li class="item clearfix"> element, including the link to the food details page, the food name, and its calories.

If you click the href="/shiwu/fdd9b123" link, you will jump to the details page of [Easy Fun Purple Sweet Potato Nutritional Porridge].

If you move the mouse over other foods, you will find that each food's information is tucked inside a <li class="item clearfix">…</li> tag. Each page of food records lists 10 foods, which correspond exactly to the 10 <li class="item clearfix">…</li> tags in the page source.

In this way, we can use find_all and find to extract the food details link, the food name, and the calories under each <li class="item clearfix"> tag.

After extracting the data, we can store it with either the csv module or the openpyxl module, and the project is complete.
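
If you would rather use openpyxl than csv, a minimal storage sketch might look like the following (the workbook file name and sheet title here are just examples, not part of the original project):

    from openpyxl import Workbook
    
    wb = Workbook()        # create a new workbook
    sheet = wb.active      # grab the default worksheet
    sheet.title = 'food'   # example sheet title
    sheet.append(['食物', '热量', '链接'])  # header row: food, calories, link
    # Inside the crawl loop you would call, for each food:
    # sheet.append([food_name, food_calorie, food_url])
    wb.save('boohee.xlsx')  # example output file name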

To summarize the ideas we have just analyzed: construct the URL of every page of food records we want and put them all into a queue; let several coroutines take URLs from the queue and fetch each page with requests.get(); parse each page with BeautifulSoup and, from every <li class="item clearfix"> tag, extract the food name, calories, and details page link; finally, store the data with the csv (or openpyxl) module.

Code

Next comes the step you probably look forward to most when working on a project: code implementation.

Based on the [analysis process] above, we already have a plan for the project. As long as we turn these ideas into code, we can complete it: use multiple coroutines to crawl Boohee's food calorie data.

Now let's officially start writing the code~

    # Import the libraries and modules we need:
    from gevent import monkey
    monkey.patch_all()
    # Patch the standard library so the program runs in asynchronous mode.
    import gevent, requests, bs4, csv
    from gevent.queue import Queue
    

The first thing to do when writing code is to import the libraries and modules we need.

From the project goal and the analysis above, we know we need the gevent library (with its queue and monkey modules) to implement coroutines, plus the requests, BeautifulSoup (bs4), and csv modules.

The following code is for you to write. Please try it yourself first, following the requirements; I will show you my code afterwards.

Code requirements: import the required modules, then use for loops to build the URLs of the first 3 pages of food records for the first 3 common food categories, and of the first 3 pages for the 11th category, based on the URL patterns from the analysis above. Put these URLs into a queue and print the queue.


The reference code is here:

    # Import the libraries and modules we need:
    from gevent import monkey
    monkey.patch_all()
    import gevent, requests, bs4, csv
    from gevent.queue import Queue
    
    work = Queue()
    # Create a queue object and assign it to work.
    
    # URLs of the first 3 pages of food records for the first 3 common food categories:
    url_1 = 'http://www.boohee.com/food/group/{type}?page={page}'
    for x in range(1, 4):
        for y in range(1, 4):
            real_url = url_1.format(type=x, page=y)
            work.put_nowait(real_url)
    # The two for loops set the category number and the page number.
    # Each constructed URL is added to the queue with put_nowait.
    
    # URLs of the first 3 pages of food records for the 11th common food category:
    url_2 = 'http://www.boohee.com/food/view_menu?page={page}'
    for x in range(1, 4):
        real_url = url_2.format(page=x)
        work.put_nowait(real_url)
    # This for loop sets the page number for the 11th category.
    # Each constructed URL is added to the queue with put_nowait.
    
    print(work)
    # Print the queue.
    

Create an empty queue with Queue(). Two nested for loops then construct the URLs of the first 3 pages of food records for the first 3 common food categories.

Since the URL of the 11th common food category is special, it has to be constructed separately. Each constructed URL is put into the queue with the put_nowait method.

You can run this code and print the queue to see what is inside.

The printed result:

    <Queue queue=deque(['http://www.boohee.com/food/group/1?page=1', 'http://www.boohee.com/food/group/1?page=2', 'http://www.boohee.com/food/group/1?page=3', 'http://www.boohee.com/food/group/2?page=1', 'http://www.boohee.com/food/group/2?page=2', 'http://www.boohee.com/food/group/2?page=3', 'http://www.boohee.com/food/group/3?page=1', 'http://www.boohee.com/food/group/3?page=2', 'http://www.boohee.com/food/group/3?page=3', 'http://www.boohee.com/food/view_menu?page=1', 'http://www.boohee.com/food/view_menu?page=2', 'http://www.boohee.com/food/view_menu?page=3'])>
    

A total of 12 URLs were printed: the first 3 pages of food records for [Grains, Potatoes, Miscellaneous Beans, Staple Foods], for [Eggs, Meat and Products], and for [Milk and Products], plus the first 3 pages of food records for the last common food category, [Dishes].

As a teaching demonstration, we will not crawl every food page in all 11 common food categories on Boohee. Doing so would put extra load on Boohee's servers, which is not a considerate thing to do, so I don't recommend it.
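
If you ever do crawl more pages, one simple way to go easier on the server (my suggestion; it is not part of the reference code below, and the helper name is made up) is to pause briefly after every request:

    import gevent
    import requests
    
    def polite_get(url, headers=None):
        # Fetch one page, then yield for about a second before the caller moves on,
        # so requests are not fired at the server back to back.
        res = requests.get(url, headers=headers)
        gevent.sleep(1)
        return res

Inside the crawler you would call polite_get(url, headers=headers) instead of requests.get(url, headers=headers).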

Next, what we have to write is the core crawling code: using gevent to crawl the data for us.

Do you remember the key points of implementing multiple coroutines with gevent?

We have to define a crawling function first. Please read the code below carefully; in the exercise that follows, you will need to write all of this code yourself.

    def crawler():
    # Define the crawler function.
        headers = {
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36'
        }
        # Add request headers.
        while not work.empty():
        # As long as the queue is not empty, keep running the code below.
            url = work.get_nowait()
            # Use get_nowait() to take one of the URLs we just put in out of the queue.
            res = requests.get(url, headers=headers)
            # Use requests.get to fetch the page source.
            bs_res = bs4.BeautifulSoup(res.text, 'html.parser')
            # Parse the page source with BeautifulSoup.
            foods = bs_res.find_all('li', class_='item clearfix')
            # Use find_all to extract the <li class="item clearfix"> tags.
            for food in foods:
            # Loop over foods.
                food_name = food.find_all('a')[1]['title']
                # Under each <li class="item clearfix"> tag, take the title attribute of the 2nd <a> element: the food name.
                food_url = 'http://www.boohee.com' + food.find_all('a')[1]['href']
                # Take the href attribute of the 2nd <a> element and join it with 'http://www.boohee.com' to get the food details link.
                food_calorie = food.find('p').text
                # Use find to get the <p> element and .text to keep only its text: the food's calories.
                print(food_name)
                # Print the food name.
    

In the crawler function defined above, the part that extracts the data may leave you with some doubts.

Looking at the HTML structure, however, should clear them up. The food details link and the food name we want are in the second <a> element under the <li class="item clearfix"> tag and can be extracted with find_all. The food's calories are in the <p> element, which we can extract with find.
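
To make that structure concrete, here is a self-contained sketch; the HTML fragment is a simplified, made-up version of one <li class="item clearfix"> block, modeled on the layout described above rather than copied from the page:

    import bs4
    
    html = '''
    <li class="item clearfix">
      <a href="/shiwu/fdd9b123"><img src="thumb.jpg"></a>
      <a href="/shiwu/fdd9b123" title="Easy Fun 紫薯营养粥">Easy Fun 紫薯营养粥</a>
      <p>47 大卡(千卡)/100克</p>
    </li>
    '''
    
    food = bs4.BeautifulSoup(html, 'html.parser').find('li', class_='item clearfix')
    food_name = food.find_all('a')[1]['title']  # 2nd <a>: the food name sits in its title attribute
    food_url = 'http://www.boohee.com' + food.find_all('a')[1]['href']  # 2nd <a>: relative link to the details page
    food_calorie = food.find('p').text          # <p>: the calorie text
    print(food_name, food_calorie, food_url)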

After defining the crawler function, the rest of the core code only needs gevent.spawn() to create the tasks and gevent.joinall() to run them. Once the coroutines start, the crawler begins collecting the data we want.

I hope you can finish this final core part yourself. Based on the code above, please write the crawler function and the code that starts the coroutines, and complete the crawling task.

Did you manage to write it? If it didn't go smoothly, I hope you will go back and rewrite it after reading the complete code below.

Reference code:

    # Import the libraries and modules we need:
    from gevent import monkey
    monkey.patch_all()
    import gevent, requests, bs4, csv
    from gevent.queue import Queue
    
    work = Queue()
    # Create a queue object and assign it to work.
    
    # URLs of the first 3 pages of food records for the first 3 common food categories:
    url_1 = 'http://www.boohee.com/food/group/{type}?page={page}'
    for x in range(1, 4):
        for y in range(1, 4):
            real_url = url_1.format(type=x, page=y)
            work.put_nowait(real_url)
    # The two for loops set the category number and the page number.
    # Each constructed URL is added to the queue with put_nowait.
    
    # URLs of the first 3 pages of food records for the 11th common food category:
    url_2 = 'http://www.boohee.com/food/view_menu?page={page}'
    for x in range(1, 4):
        real_url = url_2.format(page=x)
        work.put_nowait(real_url)
    # This for loop sets the page number for the 11th category.
    # Each constructed URL is added to the queue with put_nowait.
    
    def crawler():
    # Define the crawler function.
        headers = {
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36'
        }
        # Add request headers.
        while not work.empty():
        # As long as the queue is not empty, keep running the code below.
            url = work.get_nowait()
            # Use get_nowait() to take a URL out of the queue.
            res = requests.get(url, headers=headers)
            # Use requests.get to fetch the page source.
            bs_res = bs4.BeautifulSoup(res.text, 'html.parser')
            # Parse the page source with BeautifulSoup.
            foods = bs_res.find_all('li', class_='item clearfix')
            # Use find_all to extract the <li class="item clearfix"> tags.
            for food in foods:
            # Loop over foods.
                food_name = food.find_all('a')[1]['title']
                # The title attribute of the 2nd <a> element: the food name.
                food_url = 'http://www.boohee.com' + food.find_all('a')[1]['href']
                # The href of the 2nd <a> element, joined with 'http://www.boohee.com': the food details link.
                food_calorie = food.find('p').text
                # The text of the <p> element: the food's calories.
                print(food_name)
                # Print the food name.
    
    tasks_list = []
    # Create an empty task list.
    for x in range(5):
    # This effectively creates 5 crawlers.
        task = gevent.spawn(crawler)
        # Use gevent.spawn() to create a task that runs the crawler() function.
        tasks_list.append(task)
        # Add the task to the task list.
    gevent.joinall(tasks_list)
    # Use gevent.joinall to start the coroutines and run every task in the list, so the crawler starts crawling the site.
    

You can run this code and see whether it successfully crawls the food data.

The output from my run was:

    Easy Fun 营养粉丝(香菇炖鸡),又叫Easy Fun 营养粉丝(香菇炖鸡味)
    白粥,又叫白粥(粳米),稀饭,大米粥,白米粥,米粥,大米汤汤
    Easy Fun 营养粉丝(番茄鸡蛋),又叫Easy Fun 营养粉丝(番茄鸡蛋味)
    Easy Fun 低脂咖喱鸡饭
    Easy Fun 抹茶红豆麦片
    Easy Fun 高蛋白微波蛋糕预拌粉(香浓可可味)
    Easy Fun 红枣黑米圈,又叫红枣黑米、Easy Fun 薄荷健康红枣黑米圈
    Easy Fun 山药紫薯圈
    稀饭,又叫白粥(籼米),大米粥,白米粥
    鲜玉米,又叫玉米(鲜)、苞谷、珍珠米、棒子、玉蜀黍、苞米、六谷、
    虾,又叫对虾、鲜虾仁、虾仁
    鸭肉,又叫鸭子、鹜肉、家凫肉
    猪蹄,又叫猪脚、猪手、猪蹄爪
    猪肉(),又叫猪精肉,瘦肉
    鸡蛋白(鸡蛋清),又叫鸡蛋白、鸡蛋清、蛋清、蛋白
    火腿肠
    鸡胸肉,又叫鸡柳肉、鸡里脊肉、鸡胸、鸡胸脯肉
    荷包蛋(油煎),又叫荷包蛋、煎蛋、煎荷包蛋、煎鸡蛋
    咸鸭蛋,又叫盐蛋、腌蛋、味蛋
    猪肉(肥瘦),又叫豕肉、彘肉
    Easy Fun 高纤奇亚籽苏打饼干,又叫Easy Fun 高纤 奇亚籽苏打饼干、奇亚籽苏打咸味饼干、苏打饼干、EASY FUN 苏打饼干、Easy Ace 高纤奇亚籽苏打饼干
    白薯,又叫山芋、红皮山芋,地瓜、甘薯、红皮山芋
    大米,又叫稻米、米、生米
    全麦面包,又叫全麦面包、全麦吐司、全麦面包片、全麦土司
    烙饼
    花卷,又叫花之卷、大花卷、小花卷
    油条,又叫小油条
    曼可顿 全麦高纤维面包
    嘉顿 生命面包 450g
    包子(三鲜馅)
    燕麦片,又叫燕麦
    面条(),又叫面
    煮面条,又叫面、水煮面、面条(煮)
    籼米粉,又叫米线、米粉、粉、排米粉
    面包
    红薯,又叫地瓜、番薯、甘薯、山芋、红薯
    小米粥
    马铃薯,又叫土豆、洋芋、地蛋、山药蛋、洋番薯、土豆、洋芋
    包子(猪肉馅)
    米饭,又叫大米饭,饭,蒸米、锅巴饭、煮米饭
    Easy Fun 高蛋白小酥鱼(藤椒味)
    鸡蛋,又叫鸡子、鸡卵、蛋
    Easy Fun 低脂鸡胸肉肠(香辣味),又叫Easy Fun easy fun 低脂鸡胸肉肠、鸡胸肉肠
    Easy Fun 鸡胸肉丝(原味)
    Easy Fun 高蛋白小酥鱼(海苔味),又叫Easy Fun 高蛋白海苔鱼酥
    Easy Fun 低脂鸡胸肉肠(原味),又叫Easy Fun 低脂鸡胸肉肠、鸡胸肉肠、easyfun 低脂鸡胸肉肠
    猪小排,又叫排骨、猪排、猪脊骨(土鸡,家养)(母鸡,一年内)(肉鸡,肥)
    瓦罐鸡汤(含料),又叫瓦罐汤
    瓦罐鸡汤(无料)
    猪小排(良杂猪)
    猪肉(奶脯),又叫软五花、奶脯、五花肉
    猪大排,又叫猪排
    牛肉(腑肋),又叫牛腩
    Easy Fun 低脂鸡胸肉肠(原味),又叫Easy Fun 低脂鸡胸肉肠(原味)、鸡胸肉肠
    Easy Fun 低脂鸡蛋干(五香味)
    Easy Fun 低脂蛋清鸡肉饼(原味),又叫Easy Fun 低脂蛋清鸡肉饼
    草鱼,又叫鲩鱼、混子、草鲩、草包鱼、草根鱼、草青、白鲩
    酸奶
    牛奶,又叫纯牛奶、牛乳、全脂牛奶
    无糖全脂拿铁,又叫拿铁咖啡、拿铁(全脂,无糖)
    奶酪,又叫乳酪、芝士、起司、计司
    酸奶(中脂)
    脱脂奶粉
    酸奶(调味)
    酸奶(果料),又叫果料酸奶
    酸奶(果粒),又叫果粒酸奶
    蒙牛 高钙牛奶,又叫蒙牛袋装高钙牛奶
    光明 0脂肪 鲜牛奶,又叫光明 0脂肪鲜牛奶
    牛奶(强化VA,VD),又叫牛乳(强化VA,VD)
    光明 低脂牛奶
    蒙牛 木糖醇酸牛奶,又叫蒙牛木糖醇酸奶
    低脂奶酪
    伊利 无蔗糖酸牛奶(利乐包)150g
    蒙牛 酸牛奶(草莓+树莓)100g (小盒装)
    光明减脂90%脱脂鲜牛奶
    伊利优品嘉人优酪乳(原味)
    光明 畅优红枣燕麦低脂酸奶
    炒上海青,又叫炒青菜
    番茄炒蛋,又叫番茄炒鸡蛋、西红柿炒蛋、柿子炒鸡蛋、番茄炒鸡蛋、西红柿炒鸡蛋、西虹市炒鸡蛋、番茄炒蛋
    鸡蛋羹,又叫蒸蛋
    绿豆汤
    素炒小白菜,又叫小青菜
    烧茄子
    绿豆粥,又叫绿豆稀饭
    菜包子,又叫香菇菜包、菜包子、素包子、素包、香菇青菜包、素菜包、香菇青菜包、香菇包子
    蛋炒饭,又叫黄金炒饭、蛋炒饭
    红烧鳓鱼
    光明 e+益生菌酸牛奶(原味)220ml (袋装)
    早餐奶
    酸奶(高蛋白)
    奶片
    全脂牛奶粉
    光明 纯牛奶,又叫光明牛奶
    光明 优倍 高品质鲜牛奶,又叫光明 优倍高品质鲜牛奶
    光明 优倍 0脂肪 高品质脱脂鲜牛奶
    光明 优倍 0乳糖 巴士杀菌调制乳
    光明 致优 全鲜乳,又叫光明 致优全鲜乳
    盐水虾,又叫焖鲜虾
    清炒绿豆芽,又叫有机活体豆苗、炒绿豆芽
    葱油饼,又叫葱花饼、葱油饼
    清炒西葫芦,又叫炒西葫、西葫芦丝
    西红柿鸡蛋面,又叫番茄蛋面、番茄鸡蛋面
    酸辣土豆丝
    红烧肉
    韭菜包子
    卤蛋,又叫卤鸡蛋
    清炒土豆丝
    烧麦,又叫烧卖、糯米烧卖
    炒大白菜,又叫大白菜
    西红柿鸡蛋汤,又叫西红柿蛋汤、西红柿蛋花汤
    大饼,又叫饼,家常饼,死面饼
    清蒸鱼,又叫清蒸鱼、蒸鱼、鱼、蒸洄鱼
    酸菜鱼,又叫酸汤鱼、酸辣鱼、酸菜鱼、酸辣鱼汤
    寿司 自制1,又叫寿司卷
    麻婆豆腐,又叫麻婆豆腐
    牛肉面,又叫兰州拉面、牛腩面、牛肉拌面
    烧包菜丝
    

At this point, the core code of the project is done. Once we add the code that stores the data, the [code implementation] step of the whole project is complete.

I chose the csv module to demonstrate storing the data.

    from gevent import monkey
    monkey.patch_all()
    import gevent, requests, bs4, csv
    from gevent.queue import Queue
    
    work = Queue()
    url_1 = 'http://www.boohee.com/food/group/{type}?page={page}'
    for x in range(1, 4):
        for y in range(1, 4):
            real_url = url_1.format(type=x, page=y)
            work.put_nowait(real_url)
    
    url_2 = 'http://www.boohee.com/food/view_menu?page={page}'
    for x in range(1, 4):
        real_url = url_2.format(page=x)
        work.put_nowait(real_url)
    
    def crawler():
        headers = {
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36'
        }
        while not work.empty():
            url = work.get_nowait()
            res = requests.get(url, headers=headers)
            bs_res = bs4.BeautifulSoup(res.text, 'html.parser')
            foods = bs_res.find_all('li', class_='item clearfix')
            for food in foods:
                food_name = food.find_all('a')[1]['title']
                food_url = 'http://www.boohee.com' + food.find_all('a')[1]['href']
                food_calorie = food.find('p').text
                writer.writerow([food_name, food_calorie, food_url])
                # Use writerow() to write the extracted data (food name, calories, details link) into the csv file.
                print(food_name)
    
    csv_file = open('boohee.csv', 'w', newline='')
    # Open the csv file with open(), passing the file name 'boohee.csv', write mode 'w', and newline=''.
    writer = csv.writer(csv_file)
    # Create a writer object with csv.writer().
    writer.writerow(['食物', '热量', '链接'])
    # Write the header row: food, calories, link.
    
    tasks_list = []
    for x in range(5):
        task = gevent.spawn(crawler)
        tasks_list.append(task)
    gevent.joinall(tasks_list)
    csv_file.close()
    # Close the file so all buffered rows are flushed to boohee.csv.
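
After the crawl finishes, you can read boohee.csv back to make sure the rows were written; this quick check is my addition and not part of the original lesson:

    import csv
    
    with open('boohee.csv', newline='') as f:
        reader = csv.reader(f)
        header = next(reader)        # the header row: 食物, 热量, 链接
        for row in reader:
            print(row[0], row[1])    # each food's name and calorie text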
    

Phew~ The project for this level is finally complete!

I wonder how you felt while working on it. Did you agonize over a line of code you couldn't understand, and then feel delighted when the code finally ran?

I still vividly remember how I felt when I first encountered programming and wrote the first program of my life: a joy so wonderful that it lingered in my heart.

It is no exaggeration to say that when I typed the last line of that program, clicked run, and saw the terminal print out the data I wanted, I almost jumped with excitement.

I always feel that it was at that moment that programming, with its charm, changed me and gave me the chance to become who I am today.

If you get the chance, I would also love to hear how you feel each time you finish a project. If you don't mind, share it in the comments.

See you in the next level~


Origin: blog.csdn.net/qq_41308872/article/details/132662867