Using Claude to write Python code for scatter plots with labeled points, suitable for scientific research papers

First of all, we need a dataset for the scatter plot. Here we ask Claude to fabricate one: the average elevation and the elevation standard deviation of 30 prefecture-level cities. The former reflects the region's average altitude, and the latter reflects how dispersed the altitudes are. The results are as follows:

[Screenshot: the elevation data fabricated by Claude]
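To make the two columns concrete, here is a minimal sketch (using hypothetical sample points, not the data above) of how a city's average elevation and elevation standard deviation relate to raw elevation samples:

import numpy as np

# Hypothetical elevation samples (meters) for a single city
sampled_elevations = np.array([18, 22, 19, 25, 16])
print(sampled_elevations.mean())  # average elevation: 20.0 m
print(sampled_elevations.std())   # elevation standard deviation: about 3.16 m (degree of dispersion)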
Ask Claude to write the scatter plot code, as follows:
[Screenshot: the scatter plot code generated by Claude]
Running the code in VS Code, the Chinese labels fail to display, as follows:
[Screenshot: the plot with missing Chinese characters]
Since matplotlib does not support Chinese out of the box, you need to bring in a Chinese font and have Claude modify the code, as follows:
[Screenshot: the modified code with font settings]
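For reference, a minimal sketch of this kind of fix, assuming a SimHei TTF file is available locally (the file path below is a placeholder, not taken from Claude's actual answer):

from matplotlib import font_manager
import matplotlib.pyplot as plt

font_path = 'SimHei.ttf'  # hypothetical path to a local Chinese font file
font_manager.fontManager.addfont(font_path)          # register the font with matplotlib
prop = font_manager.FontProperties(fname=font_path)
plt.rcParams['font.sans-serif'] = [prop.get_name()]  # use this font for Chinese text
plt.rcParams['axes.unicode_minus'] = False           # avoid broken minus signs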

Running the modified code gives the following result:
[Screenshot: the final scatter plot with Chinese labels]

Compared with ChatGPT, I found that Claude is smarter at writing code. When it comes to displaying Chinese, ChatGPT often fails to solve the problem, but Claude got it right in one pass. In fact, the two share a common origin: Claude comes from Anthropic, a company founded by a former OpenAI vice president and a group of colleagues, so they are broadly similar. Fortunately, Claude supports file uploads, and its limits on input length and number of uses also compare favorably with ChatGPT's.

The complete code is as follows:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

plt.rcParams['font.sans-serif'] = ['SimHei']  # use SimHei so Chinese labels render
plt.rcParams['axes.unicode_minus'] = False    # keep minus signs displaying correctly

# Fabricated data: average elevation and elevation standard deviation for 30 cities
data = pd.DataFrame({
    '市名称': ['南京市', '无锡市', '徐州市', '常州市', '苏州市', '南通市',  
               '连云港市', '淮安市', '盐城市', '扬州市', '镇江市', '泰州市',
               '宿迁市', '济南市', '青岛市', '淄博市', '枣庄市', '东营市',
               '烟台市', '潍坊市', '济宁市', '泰安市', '威海市', '日照市',
               '临沂市', '德州市', '聊城市', '滨州市', '菏泽市', '郑州市'],
    '平均高程(米)': [20, 10, 60, 15, 12, 80, 55, 18, 35, 10, 40, 90, 100,  
                      400, 50, 100, 150, 200, 80, 120, 140, 180, 60, 40, 130,
                      90, 220, 160, 190, 110], 
    '高程标准差(米)': [5, 3, 8, 4, 3, 10, 7, 5, 6, 2, 5, 12, 15, 20, 8, 10, 12,
                        15, 8, 10, 12, 15, 6, 5, 10, 8, 18, 12, 15, 10]
})

# Scatter plot of average elevation vs. elevation standard deviation
plt.scatter(data['平均高程(米)'], data['高程标准差(米)'], s=80, label='数据点')

# Label each point with its city name
for i, txt in enumerate(data['市名称']):
    plt.annotate(txt, (data['平均高程(米)'][i], data['高程标准差(米)'][i]), fontsize=8)
    
# Linear least-squares fit for the trend line
fit = np.polyfit(data['平均高程(米)'], data['高程标准差(米)'], 1)
fit_fn = np.poly1d(fit)
plt.plot(data['平均高程(米)'], fit_fn(data['平均高程(米)']), '--k', label='拟合线')

plt.xlabel('平均高程(米)')
plt.ylabel('高程标准差(米)')
plt.title('地级市高程数据散点图')

# Legend in the lower-right corner
plt.legend(loc='lower right')

# Annotate the fit equation (placed at data coordinates near the left of the plot)
plt.text(20, 6, '拟合公式:\ny={:.2f}x+{:.2f}'.format(fit[0], fit[1]), fontsize=10)

plt.show()
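Since the goal is a figure for a research paper, one common addition (not part of the code above) is to export a high-resolution image before calling plt.show(); the filename and DPI here are just example choices:

# Export a print-quality figure; call this before plt.show()
plt.savefig('elevation_scatter.png', dpi=300, bbox_inches='tight')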

Origin: blog.csdn.net/weixin_42464154/article/details/131870206