How to write an experiment report for a Python idiom fill-in-the-blank program [practice report]

Hello everyone, the editor is here to answer the following questions for you.

1. How to write the conclusion of an internship report

Ideas for writing the conclusion of an internship report: like the opening, the ending of an article is very important. A successful ending helps readers understand the content more deeply and thoroughly and grasp the article's central idea; a brilliant ending provokes thought and resonance and strengthens the article's appeal. The ending should ring like a bell, leaving a lingering echo.

Example of an internship report closing statement:

1. During my internship at the company, I truly realized the power of a team. I just received notice that I passed the company's interview, and I am really happy. Before joining, I thought very highly of myself and hoped to take on a great deal of work; I felt I had too much undiscovered potential and was simply waiting for someone to discover this piece of gold.

After joining, I realized how small I actually am. One should not be self-righteous: there are always people far better than you, and you are nothing special, so you must stay down-to-earth. Relying on one's own strength alone is certainly not enough without the joint efforts of colleagues, especially in a department like sales, which depends on everyone working together.

An internship is an experience every college graduate should have. It lets us understand society through practice, teaches us much that cannot be learned in class, and lays a solid foundation for entering society later. It is also an attempt to apply the theoretical knowledge we have learned in practice.

2. My internship brought both the joy of harvest and some regrets. Perhaps because the internship was short, my understanding of some secretarial work remains superficial: I only watched others do it and listened to explanations of how it is done.

But through the internship I deepened my understanding of the basics of secretarial work, enriched my practical management knowledge, and gained both a perceptual and a rational understanding of daily secretarial management. I recognized that doing daily corporate secretarial management well requires attention to the study of management theory and, more importantly, closely combining theory with practice.

After working for more than a month, I deeply feel my own shortcomings. In future work and study I will work harder, learn from others' strengths, and humbly ask for advice.

3. I am a management student. I have learned many classic management theories from books; they seem easy to understand, but I have never put them into practice. Perhaps I will not realize how difficult it is until I actually manage a company.

We have seen many brilliant negotiation cases from teachers and in books, and they all seem easy; perhaps only by experiencing negotiation in person can we realize our lack of ability and knowledge.

During the two-month internship I gained knowledge and experienced the cruelty of social competition, and I hope to accumulate experience in my work to prepare for the road of entrepreneurship in the future.

2. How to write the experiment report of filling in the blanks with python idioms

# Python idiom fill-in-the-blank experiment report — relevant code:

from random import choice  # the original also imported randint, which was never used

while True:
    stringa = "问耳骑铃@揠草养长@一叶问目@淡竽填数@指鹿为马@胶羊唥犯@夏郎自大@隐渡成库"
    lista = stringa.split("@")            # split into a list of idioms
    listb = choice(lista)                 # pick one idiom at random
    listc = [j for j in listb]            # its individual characters
    space_letter = choice(listc)          # the character to blank out
    select_letters = listb.replace(space_letter, "()")
    print(select_letters)
    insert_letter = input("Fill:")
    result = insert_letter == space_letter
    print(f'Your answer: {result}')
    if result:
        break

A sample run (the loop repeats until the blank is filled correctly):

夏()自大
Fill:郎
Your answer: True
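The same fill-in-the-blank logic can also be packaged into a small testable function. The sketch below is my own restructuring, not part of the original report; the function names and the short idiom list are illustrative assumptions:

```python
from random import choice

def make_puzzle(idiom):
    """Blank out one randomly chosen character of an idiom.

    Returns the puzzle string and the character that was removed.
    Note: like the original code, replace() blanks every occurrence
    of that character if the idiom repeats it.
    """
    blank = choice(list(idiom))
    return idiom.replace(blank, "()"), blank

def check(answer, blank):
    """True when the player's answer matches the removed character."""
    return answer == blank

# Hypothetical idiom list, for illustration only
idioms = ["指鹿为马", "掩耳盗铃"]
puzzle, blank = make_puzzle(choice(idioms))
print(puzzle)               # e.g. 指()为马
print(check(blank, blank))  # filling in the removed character is always correct
```

Separating puzzle generation from answer checking makes each round easy to unit-test, which a `while True:` loop with `input()` inside is not.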

3. Python data analysis and application-Python data analysis and application PDF internal full data version

This is an e-book resource on Python data analysis. The book, written by Huang Hongmei and Zhang Liangjun and published by People's Posts and Telecommunications Publishing House, is a PDF of 281 MB. It has a composite rating of 7.8 across Amazon, Dangdang, JD.com, and other e-book stores.

Introduction

Table of contents

Chapter 1 Overview of Python Data Analysis 1

Task 1.1 Recognize data analysis 1

1.1.1 Master the concept of data analysis 2

1.1.2 Master the process of data analysis 2

1.1.3 Understand the application scenarios of data analysis 4

Task 1.2 Familiarize yourself with Python data analysis tools 5

1.2.1 Understand common tools for data analysis 6

1.2.2 Understand the advantages of Python data analysis 7

1.2.3 Understand the commonly used class libraries for Python data analysis 7

Task 1.3 Install the Anaconda distribution of Python 9

1.3.1 Understanding Python's Anaconda distribution 9

1.3.2 Installing Anaconda on Windows 9

1.3.3 Installing Anaconda on Linux 12

Task 1.4 Master common functions of Jupyter Notebook 14

1.4.1 Master the basic functions of Jupyter Notebook 14

1.4.2 Master the advanced functions of Jupyter Notebook 16

Summary 19

Exercises after class 19

Chapter 2 NumPy Numerical Computing Fundamentals 21

Task 2.1 Mastering the NumPy array object ndarray 21

2.1.1 Creating an array object 21

2.1.2 Generating random numbers 27

2.1.3 Accessing arrays by index 29

2.1.4 Transforming the shape of an array 31

Task 2.2 Mastering NumPy matrices and general functions 34

2.2.1 Creating NumPy matrices 34

2.2.2 Mastering the ufunc function 37

Task 2.3 Statistical Analysis Using NumPy 41

2.3.1 Reading/writing files 41

2.3.2 Simple statistical analysis using functions 44

2.3.3 Task realization 48

Summary 50

Training 50

Practice 1 Create an array and perform operations 50

Training 2 Create a chess board 50

After-school exercises 51

Chapter 3 Matplotlib Data Visualization Fundamentals 52

Task 3.1 Master the basic grammar and common parameters of drawing 52

3.1.1 Master the basic grammar of pyplot 53

3.1.2 Setting the dynamic rc parameters of pyplot 56

Task 3.2 Analyzing Relationships Between Features 59

3.2.1 Drawing a scatter plot 59

3.2.2 Drawing a Line Chart 62

3.2.3 Task realization 65

Task 3.3 Analyze the internal data distribution and dispersion of features 68

3.3.1 Drawing a histogram 68

3.3.2 Drawing a pie chart 70

3.3.3 Draw a boxplot 71

3.3.4 Task realization 73

Summary 77

Training 78

Training 1 Analysis of the relationship between 1996 and 2015 population data characteristics 78

Training 2 Analyze the distribution and dispersion of various characteristics of population data from 1996 to 2015 78

Homework Exercises 79

Chapter 4 Fundamentals of Pandas Statistical Analysis 80

Task 4.1 Read/Write Data from Different Data Sources 80

4.1.1 Reading/writing database data 80

4.1.2 Reading/writing text files 83

4.1.3 Reading/writing Excel files 87

4.1.4 Task realization 88

Task 4.2 Master Common Operations of DataFrame 89

4.2.1 Check the common properties of DataFrame 89

4.2.2 Check, modify, add and delete DataFrame data 91

4.2.3 Describing and analyzing DataFrame data 101

4.2.4 Task realization 104

Task 4.3 Transform and process time series data 107

4.3.1 Convert string time to standard time 107

4.3.2 Extract time series data information 109

4.3.3 Addition and subtraction of time data 110

4.3.4 Task realization 111

Task 4.4 Computing Within Groups Using Group-by-Aggregate 113

4.4.1 Using the groupby method to split data 114

4.4.2 Aggregating data using the agg method 116

4.4.3 Aggregating data using the apply method 119

4.4.4 Aggregating data using the transform method 121

4.4.5 Task realization 121

Task 4.5 Create pivot tables and crosstabs 123

4.5.1 Using the pivot_table function to create a pivot table 123

4.5.2 Create a crosstab using the crosstab function 127

4.5.3 Task realization 128

Summary 130

Training 130

Training 1 Read and view the basic information of the P2P network loan data master table 130

Training 2 Extract time information from user information update table and login information table 130

Training 3 Use the group aggregation method to further analyze the user information update table and login information table 131

Training 4 Convert the user information update table and login information table between long and wide formats 131

Homework Exercises 131

Chapter 5 Data Preprocessing with pandas 133

Task 5.1 Merging Data 133

5.1.1 Stacking merged data 133

5.1.2 Primary key merge data 136

5.1.3 Overlapping merged data 139

5.1.4 Task realization 140

Task 5.2 Cleaning the Data 141

5.2.1 Detecting and handling duplicate values 141

5.2.2 Detecting and handling missing values 146

5.2.3 Detecting and handling outliers 149

5.2.4 Task realization 152

Task 5.3 Standardize Data 154

5.3.1 Standardized data for dispersion 154

5.3.2 Normalizing data by standard deviation 155

5.3.3 Decimal scaling normalized data 156

5.3.4 Task realization 157

Task 5.4 Transforming Data 158

5.4.1 Dummy variables for categorical data 158

5.4.2 Discretizing continuous data 160

5.4.3 Task realization 162

Summary 163

Training 164

Training 1 Interpolation of missing values of user electricity consumption data 164

Training 2 Combine line loss, power consumption trend and line alarm data 164

Training 3 Standardized Modeling Expert Sample Data 164

Homework Exercises 165

Chapter 6 Building Models with scikit-learn 167

Task 6.1 Process data using sklearn transformers 167

6.1.1 Loading datasets in the datasets module 167

6.1.2 Dividing the dataset into training and testing sets 170

6.1.3 Data preprocessing and dimensionality reduction using sklearn converter 172

6.1.4 Task realization 174

Task 6.2 Construct and evaluate a clustering model 176

6.2.1 Building a clustering model using sklearn estimators 176

6.2.2 Evaluating cluster models 179

6.2.3 Task realization 182

Task 6.3 Build and evaluate a classification model 183

6.3.1 Building classification models using sklearn estimators 183

6.3.2 Evaluating classification models 186

6.3.3 Task realization 188

Task 6.4 Construct and evaluate regression models 190

6.4.1 Building linear regression models using sklearn estimators 190

6.4.2 Evaluating regression models 193

6.4.3 Task realization 194

Summary 196

Training 196

Training 1 Use sklearn to process wine and wine_quality data sets 196

Training 2 Constructing a K-Means clustering model based on wine dataset 196

Training 3 Build an SVM classification model based on wine dataset 197

Training 4 Build a regression model based on wine_quality data set 197

Homework Exercises 198

Chapter 7 Airline Customer Value Analysis 199

Task 7.1 Understanding Airline Status and Customer Value Analysis 199

7.1.1 Understand the status quo of airline companies 200

7.1.2 Understanding Customer Value Analysis 201

7.1.3 Familiar with the steps and process of aviation customer value analysis 201

Task 7.2 Preprocessing Airline Customer Data 202

7.2.1 Dealing with missing data and outliers 202

7.2.2 Construct the key characteristics of aviation customer value analysis 202

7.2.3 Five characteristics of the normalized LRFMC model 206

7.2.4 Task realization 207

Task 7.3 Segmenting Customers Using the K-Means Algorithm 209

7.3.1 Understanding the K-Means clustering algorithm 209

7.3.2 Analyzing clustering results 210

7.3.3 Model application 213

7.3.4 Task realization 214

Summary 215

Training 215

Training 1 Processing credit card data outliers 215

Training 2 Construct the key features of credit card customer risk assessment 217

Training 3 Building K-Means Clustering Model 218

Homework Exercises 218

Chapter 8 Revenue Forecast Analysis 220

Task 8.1 Understand the Background and Methodology of Fiscal Revenue Forecasting 220

8.1.1 Analyzing the fiscal revenue forecast background 220

8.1.2 Understanding methods for revenue forecasting 222

8.1.3 Familiar with the steps and process of fiscal revenue forecasting 223

Task 8.2 Analyzing the Correlation of Characteristics of Fiscal Revenue Data 223

8.2.1 Understanding correlation analysis 223

8.2.2 Analyzing calculation results 224

8.2.3 Task realization 225

Task 8.3 Using Lasso Regression to Select Key Features for Fiscal Revenue Forecasting 225

8.3.1 Understanding the Lasso regression method 226

8.3.2 Analyzing Lasso regression results 227

8.3.3 Task realization 227

Task 8.4 Building a Fiscal Revenue Forecasting Model Using Gray Forecasting and SVR 228

8.4.1 Understanding Gray Prediction Algorithms 228

8.4.2 Understanding the SVR Algorithm 229

8.4.3 Analyzing forecast results 232

8.4.4 Task realization 234

Summary 236

Training 236

Practice 1 Find the correlation coefficient between the characteristics of corporate income tax 236

Training 2 Select key features of corporate income tax forecasting 237

Training 3 Building a corporate income tax forecasting model 237

Homework Exercises 237

Chapter 9 Analysis of Household Water Heater User Behavior and Event Recognition 239

Task 9.1 Understand the background and steps of user behavior analysis of domestic water heaters 239

9.1.1 Analysis of the status quo of domestic water heater industry 240

9.1.2 Understand the basic situation of data collected by water heaters 240

9.1.3 Familiar with the steps and process of household water heater user behavior analysis 241

Task 9.2 Preprocessing water heater user data 242

9.2.1 Removing redundant features 242

9.2.2 Classification of water use events 243

9.2.3 Determining the duration threshold of a single water use event 244

9.2.4 Task realization 246

Task 9.3 Construct water use behavior signatures and screen water use events 247

9.3.1 Construct water use duration and frequency characteristics 248

9.3.2 Construction of water consumption and fluctuation characteristics 249

9.3.3 Screening Candidate Bathing Events 250

9.3.4 Task realization 251

Task 9.4 Build a BP Neural Network Model for Behavioral Event Analysis 255

9.4.1 Understand the principle of BP neural network algorithm 255

9.4.2 Building the model 259

9.4.3 Evaluating the model 260

9.4.4 Task realization 260

Summary 263

Training 263

Training 1 Cleaning operator customer data 263

Training 2 Screening Customer Carrier Data 264

Training 3 Building a Neural Network Prediction Model 265

Homework Exercises 265

Appendix A 267

Appendix B 270

References 295

study notes

Jupyter Notebook (formerly IPython Notebook) is an interactive notebook that supports over 40 programming languages. In essence, Jupyter Notebook is a web application for creating and sharing literate-programming documents that combine live code, mathematical equations, visualizations, and Markdown. Typical uses include data cleaning and transformation, numerical simulation, statistical modeling, and machine learning. Users can share notebooks via email, Dropbox, GitHub, and the Jupyter Notebook Viewer. Within a notebook, code can generate images, video, LaTeX, and JavaScript in real time; datasets on Kaggle, the most popular data-mining competition platform, are often distributed in Jupyter format. Architecture: Jupyter consists of the following components: Jupyter Notebook and...

This article describes a WeChat friend data-analysis function implemented in Python; it is shared for your reference. Python is used to analyze a personal WeChat friend list and write the results to an HTML document. The main packages used are itchat, pandas, and pyecharts. 1. Install itchat, a WeChat Python SDK used to obtain personal friend data. The code is as follows:

import itchat
import pandas as pd
from pyecharts import Geo, Bar

itchat.login()
friends = itchat.get_friends(update=True)[0:]

def User2dict(User):
    User_dict = {}
    User_dict["NickName"] = User["NickName"] if User["NickName"] else "NaN"
    User_dict["City"] = User["City"] if User["City"] else "NaN"
    User_dict["Sex"] = User["Sex"] if User["Sex"] else 0
    User_dict["Signature"] = User["Signature"] if User["Signature"] else "NaN"
    ...
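Once each friend has been reduced to a plain dict by a User2dict-style helper, the analysis itself is ordinary counting work. Below is a minimal sketch; the sample records are made up, and the 0/1/2 gender codes are an assumption based on itchat's conventions (1 = male, 2 = female, 0 = unknown):

```python
from collections import Counter

# Hypothetical friend records, as a User2dict-style helper would produce them
friends = [
    {"NickName": "A", "City": "Beijing", "Sex": 1},
    {"NickName": "B", "City": "Shanghai", "Sex": 2},
    {"NickName": "C", "City": "Beijing", "Sex": 1},
    {"NickName": "D", "City": "NaN", "Sex": 0},
]

# Assumed itchat encoding: 1 = male, 2 = female, 0 = unknown
sex_labels = {1: "male", 2: "female", 0: "unknown"}

sex_counts = Counter(sex_labels[f["Sex"]] for f in friends)
city_counts = Counter(f["City"] for f in friends)

print(sex_counts)   # gender distribution
print(city_counts)  # friends per city
```

The same Counter dictionaries can then be fed to pyecharts charts (Bar for gender, Geo for cities) in place of raw itchat output.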

Based on itchat, the Python library for WeChat's open personal-account interface, this script fetches WeChat friends and analyzes them by province, gender, and WeChat signature. To run it: create three empty text files (stopwords.txt, newdit.txt, unionWords.txt), download the font simhei.ttf (or delete the font-related code), and then run it directly.

# wxfriends.py 2018-07-09
import itchat
import sys
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese in plots
plt.rcParams['axes.unicode_minus'] = False    # render minus signs correctly
import jieba
import jieba.posseg as pseg
from scipy.misc import imread
from wordcloud import WordCloud
from os import path

# Work around characters beyond the Basic Multilingual Plane
non_bmp_map = dict.fromkeys(range(0x10000, sys.maxunicode + 1), 0xfffd)

# Get friend information
def getFriends():
    ...

An example of Python data analysis based on the linear regression algorithm to predict the winning result of the next period

This article presents an example of Python data analysis of the two-color ball (a Chinese lottery) that uses linear regression to predict the next winning numbers; it is shared for your reference. Various algorithms for the two-color ball were described earlier; here the next numbers are predicted, which is a little exciting to think about. The code uses the linear regression algorithm; in this scenario its predictive performance is mediocre, and you may want to try other algorithms and compare results. I found that much of the earlier code was repetitive, so to make it more elegant I defined a function and called it.

#!/usr/bin/python
# -*- coding: UTF-8 -*-
# Import required packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import operator
from sklearn import datasets, linear_model
from sklearn.linear_model import LogisticRegression
# Read file
d...
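As a minimal, self-contained illustration of the linear-regression technique the article relies on (the data here is synthetic and perfectly linear, not real lottery history, so the fit is exact):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic example: draw index -> one drawn number (illustrative only)
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])  # exactly y = 2x + 1

model = LinearRegression()
model.fit(X, y)

# Extrapolate to the "next" draw
pred = model.predict([[6]])[0]
print(round(pred, 2))  # → 13.0
```

On real lottery data the numbers are independent random draws, which is exactly why the article warns that the prediction effect is only average: a linear trend fitted to noise extrapolates no better than chance.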

The above is all the content related to the Python data e-book introduced this time. I hope the resources we have compiled can help you; thank you for your support of Guigui.

4. What are the simpler projects that Python can train?

The first stage: Python language and application
Course content: Python language fundamentals, object-oriented design, multi-threaded programming, database interaction, front-end effects, web frameworks, crawler frameworks, network programming

The second stage: machine learning and data analysis
Course content: overview of machine learning, supervised learning, unsupervised learning, data processing, model tuning, data analysis, visualization, hands-on projects

The third stage: deep learning
Course content: overview of deep learning, TensorFlow basics and applications, neural networks, multilayer LSTM, autoencoders, generative adversarial networks, few-shot learning, hands-on projects

The fourth stage: image processing
Course content: image fundamentals, image operations, geometric transformations, morphology, contours, image statistics, image filtering, hands-on projects

5. Experimental purpose and requirements of python financial analysis

The purpose and requirements of the Python financial analysis experiment: Python is well suited to data analysis and has many mature data-analysis frameworks, such as pandas and NumPy, which are taught in the course. These frameworks make it easy to complete data-analysis tasks.
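As a minimal sketch of the kind of financial calculation those frameworks make easy (the price figures below are made up for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical daily closing prices of one instrument
prices = pd.Series([10.0, 10.5, 10.2, 10.8, 11.0])

# Daily returns: (p_t - p_{t-1}) / p_{t-1}
returns = prices.pct_change().dropna()

mean_return = returns.mean()          # average daily return
volatility = float(np.std(returns))   # dispersion of returns

print(round(mean_return, 4))
print(round(volatility, 4))
```

Two lines of pandas replace the explicit loop one would otherwise write to compute returns, which is the point the course makes about these frameworks.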

In Python, an object is essentially a pointer to a data structure that has attributes and methods. Objects are often referred to as variables. In object-oriented (OO) terms, an object is an instance of a class. In Python this is very simple: objects are variables.

class A:
    myname = "class a"

The above is a class.

High speed: the core of Python is written in C, and many standard and third-party libraries are also written in C, so they run very fast.

Free and open source: Python is FLOSS (Free/Libre and Open Source Software). Users are free to distribute copies of the software, read its source code, modify it, and use parts of it in new free software. FLOSS is based on the idea of a community sharing knowledge.


Origin blog.csdn.net/chatgpt001/article/details/129141799