Table of contents
1. Create a project on GitHub
- Go to the GitHub website and click the New button.
- Enter the project name, select the options you want, and click Create.
- The project is created successfully with three initial files.
- README.md: describes the project and is written in Markdown syntax. Some common Markdown syntax:
A second-level heading can also be written Setext-style:

```
Second-level heading
--------------------
```

Headings and emphasis:

```
# Heading 1
## Heading 2
### Heading 3
#### Heading 4
*emphasis*
```

Code block:

```
@Override
protected void onDestroy() {
    EventBus.getDefault().unregister(this); // code
    super.onDestroy();
}
```

Table:

```
Header | Header | Header
------ | ------ | ------
Cell   | Cell   | Cell
Cell   | Cell   | Cell
```

Image:

```
![image name](https://www.baidu.com/img/bd_logo1.png)
```

Ordered list:

```
1. Item 1
2. Item 2
3. Item 3
```

Unordered list (note: the `*` must be followed by a space to render as a bullet; otherwise the line is shown literally as `*Item 1`):

```
* Item 1
* Item 2
```

Blockquote:

```
> First quoted line
>> Nested quoted line
```
2. Upload the code and check its quality
2.1 CodeFactor
This is a very neat tool for checking the quality of your code. It is free for all public repositories and one private repository.
First, go to the official website, codefactor.io, and log in with GitHub to create an account; the sign-in button is in the upper right corner of the home page.
After signing up, click the plus sign in the upper right corner to add a repository to your dashboard.
Select the repository you want to analyze from the list and click Import at the bottom of the page.
And that's it! You should now be taken to a dashboard listing all the problems found. Great!
2.2 wemake-python-styleguide
Let's move on to the second tool, which checks that the code conforms to the official Python style guide.
This time we don't need an account with any web service. Instead, we'll set up a GitHub Actions workflow that is triggered whenever a pull request is created and adds review comments when potential issues are found.
For those who don't know GitHub Actions yet: it is a GitHub feature that automates many tasks and is often used as a CI/CD (Continuous Integration/Continuous Deployment) tool to test, QA, and then deploy code. But that's not its only purpose.
To get started, create a .github/workflows folder in the root of your project; your workflow definitions will live there.
Then create a new file there named workflow-pr.yaml.
```yaml
name: Python Pull Request Workflow
on: [pull_request]

jobs:
  qa:
    name: Quality check
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python
        uses: actions/setup-python@master
        with:
          python-version: 3.8
      - name: Run unit tests
        run: |
          pip install pytest
          pytest
      - name: Wemake Python Styleguide
        uses: wemake-services/wemake-python-styleguide@0.13.4
        continue-on-error: true
        with:
          reporter: 'github-pr-review'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
This is a very simple workflow whose full name is Python Pull Request Workflow. It is triggered by every pull request, so it runs whenever a new pull request is created or an existing one is updated.
The workflow contains a single qa job, which consists of four steps:
- actions/checkout@v1 tells the GitHub Actions workflow that it may use the code in the repository.
- Set up Python uses actions/setup-python@master to configure a Python version; in this example, python-version: 3.8.
- Run unit tests runs all unit tests in the project. I'm using pytest for this, so the step first runs pip install pytest and then pytest. If any test fails at this step, the following steps will not run.
- The Wemake Python Styleguide step is the one of most interest to us. It uses wemake-services/wemake-python-styleguide@0.13.4, an action, which is the basic reusable unit of a workflow. You can find actions on the GitHub Marketplace (https://github.com/marketplace), including this one (https://github.com/marketplace/actions/wemake-python-styleguide). Configuring it (via the with block) to use the github-pr-review reporter enables inline comments in code review; other supported reporters are listed in the official documentation. Finally, the action needs your GITHUB_TOKEN, which is why the env section was added.
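The "Run unit tests" step simply invokes pytest, which discovers files named `test_*.py` and runs the `test_*` functions inside them. A minimal sketch of such a test module (the `slugify` function and the file name are hypothetical, not from the article's project):

```python
# test_slugify.py -- a minimal pytest module. pytest auto-discovers
# files named test_*.py and runs every function named test_*.
# slugify() is a hypothetical example function, not from the project.

def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_extra_spaces():
    assert slugify("  Python   Styleguide ") == "python-styleguide"
```

Running `pytest` in the repository root picks this file up automatically; no registration or configuration is needed.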
To test it, create a new branch, commit some changes, and push it to GitHub. Then create a pull request to trigger the workflow. To check the run, go to the "Actions" tab of your project; if all goes well it should look like the image below:
If you click "Run unit tests", you will see the test report in the console log:
If you go back to the pull request, you should see the added comments. Like here: algorithms-python
2.3 Codecov
Finally, we want a test coverage report. For this we again use pytest, which generates the report for us; we then upload it to Codecov, which handles the visualization.
Before defining a new workflow, you first need a Codecov account: go to https://about.codecov.io/ and click the "Sign Up" button in the upper right corner.
Then select GitHub Sign Up.
You will be taken to the dashboard of your GitHub projects; click the Add new repository button.
A page with a token will then appear. Save the token; it will be used in the next step.
Now go back to your GitHub project and open its "Settings". Click "Secrets" and add a new secret holding the token you generated on the Codecov website (name it CODECOV_TOKEN), then click Add secret.
Ok, everything is set up, and we can move on to defining the GitHub workflow.
```yaml
name: Python Master Workflow
on:
  push:
    branches:
      - 'master'

jobs:
  codecov:
    name: Codecov Workflow
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python
        uses: actions/setup-python@master
        with:
          python-version: 3.8
      - name: Generate coverage report
        run: |
          pip install pytest
          pip install pytest-cov
          pytest --cov=./ --cov-report=xml
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          file: ./coverage.xml
          flags: unittests
```
Again, we create a separate file, this time named workflow-master.yaml, because we don't want this workflow to be triggered by pull requests. It runs only when new commits are pushed to the master branch.
The jobs section contains a single job called codecov, which consists of four steps:
- uses: actions/checkout@v1: as before, this tells GitHub Actions that we want to use the files in the repository.
- uses: actions/setup-python@master: also covered before; here we set the Python version to 3.8.
- Generate coverage report is a new step: it runs a series of commands that install pytest (pip install pytest) and pytest-cov (pip install pytest-cov) and then run the tests while collecting coverage (pytest --cov=./ --cov-report=xml).
- Finally, Upload coverage to Codecov uploads the generated report using codecov/codecov-action@v1 (https://github.com/marketplace/actions/codecov). We pass it three parameters: token: ${{ secrets.CODECOV_TOKEN }}, which reads the value from the GitHub Secrets vault; file: ./coverage.xml, the location of the coverage report generated in the previous step; and flags: unittests, a flag used to group our unit tests.
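The coverage.xml file produced by pytest-cov uses the Cobertura XML format, whose root `<coverage>` element carries an overall `line-rate` attribute; this is the number Codecov visualizes. A quick sketch of inspecting it locally (the sample XML string below is handwritten for illustration, not real project output):

```python
# Inspect the overall line coverage stored in a Cobertura-style
# coverage.xml, the format produced by `pytest --cov=./ --cov-report=xml`.
import xml.etree.ElementTree as ET

# A minimal hand-written sample standing in for a real report file.
sample = '<coverage line-rate="0.87" branch-rate="0.75"></coverage>'

root = ET.fromstring(sample)
line_rate = float(root.get("line-rate"))
print(f"line coverage: {line_rate:.0%}")  # line coverage: 87%
```

With a real report, you would parse the file instead: `ET.parse("coverage.xml").getroot()`.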
To test it, push some commits to the master branch, either directly from your local repository or by merging a pull request. If all went well, the page should look like this:
Now, if you go back to your project dashboard on Codecov, you should see something similar to:
Before wrapping up, note that CodeFactor, Codecov, and wemake-python-styleguide are not the only tools that can help you write higher-quality code. There are many others, such as SonarCloud, Pylint, Coveralls, and DeepSource. Some of them are available on the GitHub Marketplace; if the tools proposed here aren't to your taste, that's a good place to start looking.
3. Use the PyCharm IDE to review code
3.1 View code running time
Run -> Concurrency Diagram for 'xx.py'
3.2 Project code coverage
Run -> Run 'xx.py' with Coverage
If you only need coverage analysis for a single file, see the article on Coverage, the Python code-coverage analysis tool.
Coverage is only a reference metric; don't obsess over reaching 100%.
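A quick illustration of why full coverage is only a reference metric: the hypothetical function below is 100% line-covered by its test, yet still wrong for inputs the test never tries:

```python
# Full line coverage does not prove correctness. Every line of
# absolute_value() runs under the test below, yet the function is
# wrong for negative numbers. (Hypothetical example for illustration.)

def absolute_value(x: int) -> int:
    return x  # bug: should return -x when x < 0, but the line is still "covered"

def test_absolute_value_positive():
    assert absolute_value(5) == 5  # passes, and covers 100% of the function

# The bug only shows up with an input the test never exercises:
print(absolute_value(-5))  # prints -5, not 5
```

High coverage tells you the code *ran* under tests, not that the tests checked the right things.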
3.3 Call relationships in the code
Run -> Profile 'xx.py'
The resulting diagram shows the call relationships between functions and the time spent in each.
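PyCharm's profiler view is built on the same data Python's standard cProfile module collects; a command-line sketch of the same measurement (the functions here are hypothetical examples):

```python
# Collect call relationships and per-function timings with the stdlib
# cProfile/pstats modules -- the same kind of data PyCharm's
# "Profile 'xx.py'" call graph visualizes.
import cProfile
import io
import pstats

def slow_helper(n: int) -> int:
    # Deliberately does some work so it shows up in the profile.
    return sum(i * i for i in range(n))

def main() -> int:
    return sum(slow_helper(10_000) for _ in range(50))

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Print the top functions sorted by cumulative time spent.
out = io.StringIO()
stats = pstats.Stats(profiler, stream=out).sort_stats("cumulative")
stats.print_stats(5)
print(out.getvalue())
```

The printed table lists each function with its call count and cumulative time, which is the textual counterpart of the call diagram shown in the IDE.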