Project Summary - Version control across multiple environments in agile testing

  In the software development process, test environments are indispensable. The questions involved include how many environments are needed and what each one is used for. Once the environments exist, you also have to consider how to deploy to them and how to implement version control during deployment, so that testers can test effectively and the number of test cases that are partially blocked or cannot be tested at all is reduced. I believe nobody wants to see their work blocked.
  Try not to set up too many environments. With too many, the people running them cannot keep track of them all, and pushing a version takes a long time; too few is not good either, because a mixed environment leaves testers unable to test and developers unable to debug, which only causes trouble. In addition, the test environment needs to be clean, on two levels: first, the test data must be clean and consistent with the business logic; second, only testers should touch the environment, and developers should not debug on it. Strictly speaking, in the first project I went through, the environment was well controlled: developers could not debug in any test environment, because everyone had their own debugging environment. You may ask, what about problems caused by specific data that need to be debugged against the test environment? In that case the databases should be separated: developers have their own databases, the test environment is fixed at one or two databases, and when a data-specific issue needs to be reproduced, the developer switches their own service to point at the test-environment database. This seems like the more reasonable approach.
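A minimal sketch of this idea, not taken from the original project: the service decides which database to connect to based on an environment switch, so developers normally use their private database and only point at the shared test database when reproducing a data-specific issue. The variable name `APP_DB_ENV`, the JDBC URLs, and the credentials are illustrative assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DataSourceSelector {

    public static Connection open() throws SQLException {
        // APP_DB_ENV is a hypothetical switch: "dev" (default) or "test"
        String env = System.getenv().getOrDefault("APP_DB_ENV", "dev");

        String url;
        if ("test".equals(env)) {
            // Shared test-environment database, normally reserved for testers
            url = "jdbc:mysql://test-db.internal:3306/app_test";
        } else {
            // Each developer's private database for day-to-day debugging
            url = "jdbc:mysql://localhost:3306/app_dev";
        }
        return DriverManager.getConnection(url, "app_user", "app_password");
    }
}
```

The point of the design is simply that switching databases is a deliberate, visible act rather than developers quietly debugging against the testers' data.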
  Different projects also require different considerations. For a project developed by a single party, there is basically a HEAD version and a BRANCH version; if there are other testing purposes, additional environments such as a Performance environment can be set up, since the needs differ. For a project developed jointly by multiple parties, alpha and beta environments may be needed, and not only each party's own alpha and beta but also joint alpha (beta) environments for integration (I have reservations here, because a joint alpha environment for integration does not make much sense: the functions that can actually be connected end to end at that stage are very limited). From the testing perspective, though, whether it is alpha, beta, HEAD or Branch, the basic requirement is that developers should not touch it. Why? Because development should have its own environment where it can start a service and debug against its own build. There is also another question: what does the customer require? So in addition to meeting these basic principles, it is necessary to reach agreement with the client before the project starts, so that these matters do not interfere with the project later.
  Once how many environments are needed and what they are used for has been negotiated, the next question is how to quickly push a version to a given environment for testing. This also has two levels of meaning: one is pushing the build to a given environment, the other is ensuring its quality. If the project has continuous integration configured, the first problem is easy to solve: just click Run on the job. Solving the second problem is the key, because only after quality is assured can the build move on to regression. In essence this comes down to guaranteed code quality: complete unit tests and automated tests, every bug fix or feature change committed only after pair review, and developers verifying their own functions in their own development environment before committing. Only then can the code be deployed as fast as possible, smoke tested, and then deployed to the next environment for regression testing.
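As an illustration of the smoke-testing step, here is a minimal JUnit 4 sketch that checks whether a freshly deployed build responds at all. It assumes the service exposes a health-check page; the base URL, the `/health` path, and the `smoke.baseUrl` system property are hypothetical, not taken from the original project.

```java
import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;

public class SmokeTest {

    // Hypothetical base URL of the environment that was just deployed to
    private static final String BASE_URL =
            System.getProperty("smoke.baseUrl", "http://alpha.example.com");

    @Test
    public void healthCheckReturns200() throws Exception {
        URL url = new URL(BASE_URL + "/health");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        // The deployment is only considered usable if the service answers
        assertEquals(200, conn.getResponseCode());
        conn.disconnect();
    }
}
```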
  Let's briefly talk about Continuous Integration (CI). The two tools I have come into contact with are CruiseControl and Hudson (Jenkins). The idea of continuous integration is that each developer commits code to the repository (for example, SVN) while developing the functions they are responsible for, and these commits should be as small as possible rather than made only after all functions are finished. A configured job then runs the unit tests and the code-style checks (when these pass, the job status is generally a blue light) and packages the build; for a Java project this may be a war or jar package. If there is nothing unusual, deployment can proceed. Before this there are usually environment-related jobs, for example installing whatever dependent packages are needed. The deployment and related steps may be performed by Maven, Ant, make and so on; this choice also depends on the project itself. Through continuous integration you can manage the project well, so that it reaches testing with good quality. Of course, this depends on a test-first mindset from the beginning of the project, and every function must have a unit test. For unit tests to be useful, don't write them just for the sake of having them, otherwise it is a waste of time. As for the CI part itself, I may share more about it later if I can.
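To make "every function must have a unit test" concrete, here is a small JUnit 4 example of the kind of test such a CI job would run on every commit. The `DiscountCalculator` class is hypothetical and inlined only to keep the sketch self-contained; the point is that the test checks an actual business rule rather than existing for its own sake.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountCalculatorTest {

    // Hypothetical production class, inlined here only so the example compiles
    // on its own; in a real project it would live in src/main/java.
    static class DiscountCalculator {
        double finalPrice(double amount) {
            // Business rule: orders of 1000 or more receive a 10% discount
            return amount >= 1000.0 ? amount * 0.9 : amount;
        }
    }

    @Test
    public void ordersOverThresholdGetTenPercentOff() {
        assertEquals(900.0, new DiscountCalculator().finalPrice(1000.0), 0.001);
    }

    @Test
    public void smallOrdersPayFullPrice() {
        assertEquals(500.0, new DiscountCalculator().finalPrice(500.0), 0.001);
    }
}
```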
  With CI, unit tests and pair review in place, these layers should in theory guarantee code quality and make it possible to deploy a testable version quickly. The rest depends on how the testing itself is run: for example, how to test on the HEAD or alpha environment, when to move to the next environment, and so on. Projects I have worked on before defined a code freeze date; at that point a branch had to be cut and deployed to the branch environment for testing. But this way of working requires a relatively well-run project that can keep to that date. If the project is tight and requirements keep growing, this approach no longer applies and things have to be handled more flexibly. For example, the version in the HEAD or alpha environment is basically still under development; in most cases the main end-to-end flows cannot be exercised, so regression testing is not possible. To verify whether the version on alpha is usable, smoke testing can be carried out, and if a very bad build shows up it can be rejected for testing. There is a contradiction here, though: if testing stops as soon as one major problem is found, other major problems may remain hidden and unfixed and will still be there in the next build, which can create repeated work. So my suggestion is still to finish the smoke test and then feed the overall picture back to development. Regression testing is performed after the corresponding module flows are fixed and the functionality becomes stable. Stability can be judged from two aspects: the rate at which new bugs appear, and how well P1 and P2 bugs are being closed. This also involves the criteria defined before testing begins, such as Entry criteria and Exit criteria. Only after the defined criteria are met can testing enter the next stage, and likewise the next environment or stage should have its own criteria that must be satisfied before entering it.
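A minimal sketch of how such Entry/Exit criteria could be expressed and checked, assuming the kind of metrics mentioned above (open P1/P2 bugs and the new-bug rate). The thresholds are illustrative assumptions; the real values belong in the test plan agreed before testing starts.

```java
public class ExitCriteria {

    // Example criteria: no open P1 bugs, at most 2 open P2 bugs, and fewer
    // than 3 new bugs found per day over the last test cycle.
    public static boolean canEnterNextStage(int openP1, int openP2,
                                            int newBugsLastCycle, int cycleDays) {
        double newBugRate = (double) newBugsLastCycle / Math.max(cycleDays, 1);
        return openP1 == 0 && openP2 <= 2 && newBugRate < 3.0;
    }

    public static void main(String[] args) {
        // e.g. 0 open P1s, 1 open P2, 8 new bugs over a 5-day cycle -> allowed
        System.out.println(canEnterNextStage(0, 1, 8, 5));
    }
}
```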
  From this perspective, I think that if the PM cannot see this situation, or when the project schedule is very tight, the test team should have the power to say that under the current conditions we cannot start developing new features, or cannot move to the next environment; otherwise the project will only get worse and will certainly not be delivered on time. This again comes back to defining standards and criteria, which should really be written into the test plan: before each stage, or before each major round of testing, a corresponding plan should be made to measure whether the test results meet expectations and whether subsequent development can proceed.
