[Soft Exam - Notes on Essential Knowledge Points for Software Designers] Chapter 6 System Development and Operation

Preface

Since the notes copied to CSDN lose their original styles, I do not have the energy to re-check and re-style everything. Readers with download points can get the Word, PDF, and Youdao Cloud Note versions.
Note that the downloadable content is identical to what is shared in this article; the only difference is the styling [for example, key memorization points and frequently tested content have colors, font sizes, and weights; the directory structure is more complete; tables are real tables rather than pictures; etc.].

Download address of this chapter:
https://download.csdn.net/download/chengsw1993/85598023

If you notice reading problems, abnormal display, etc., please let us know in the comment area so that we can fix them; these are likely caused by CSDN's Markdown rendering.

Series of articles

Previous article: [Soft Exam - Notes on Essential Knowledge Points for Software Designers] Chapter 5 Basic Knowledge of Software Engineering

Next article: [Soft Exam - Notes on Essential Knowledge Points for Software Designers] Chapter 7 Object-Oriented Technology

System Analysis and Design

What does system analysis do?

Purpose and tasks: to investigate the current system in further detail, collect and organize the documents obtained from the investigation, analyze the overall management status and information processing flow within the organization, provide the information needed for system development, and deliver the system proposal description.

Steps: Based on the current system physical model, derive the logical model, analyze and optimize the logical model of the target system, and then concretely establish the physical model of the target system.

system design

Basic principles of system design:

  • Abstraction (focus on essential aspects and ignore non-essential ones);
  • Modularity (units that can be combined, decomposed, and replaced);
  • Information hiding (hiding or encapsulating the implementation details of each program component within a single design module);
  • Module independence (each module completes a relatively independent sub-function and has simple connections with other modules).

The design of a module requires high independence, which requires high cohesion and low coupling. Cohesion refers to the correlation between the internal functions of a module, and coupling refers to the connection between multiple modules.

cohesion


Temporal cohesion: the functions completed by the module must be executed at the same time; these functions are related only by timing.

Procedural cohesion: the processing elements within a module are related and must be executed in a specific order.

Communicational (informational) cohesion: the module is written as multiple functions, each operating on the same data structure, and each with a unique entry point.

Functional cohesion: A module only includes all components necessary to complete a certain function (all components of a module work together to complete a function and are indispensable).
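As an illustration of the contrast, here is a minimal Python sketch (the function names and data are hypothetical) comparing temporal cohesion with functional cohesion:

```python
# Temporal cohesion: these actions are grouped only because they all
# happen "at startup" -- they are otherwise unrelated to each other.
def on_startup(state):
    state["log"] = []                     # initialize logging
    state["cache"] = {}                   # initialize cache
    state["counters"] = {"requests": 0}   # reset counters
    return state

# Functional cohesion: every statement contributes to one single
# function -- computing an average -- and none can be removed.
def average(values):
    total = sum(values)
    count = len(values)
    return total / count if count else 0.0

print(average([1, 2, 3]))  # 2.0
```

The second module is easier to reuse and test precisely because it does exactly one thing.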

coupling


Data coupling: When one module accesses another module, input and output information are exchanged through simple data parameters (not control parameters, public data structures or external variables).

Stamp coupling: a group of modules passes a record through parameter lists. The record is a substructure of a data structure rather than a simple variable; in effect, the address of the data structure is passed.

Control coupling: If one module clearly controls the selection of the function of another module by transmitting control information such as switches, flags, names, etc., it is control coupling.

Content coupling: occurs when one module directly modifies or operates on another module's data, or directly transfers control into another module. The modified module is then completely dependent on the modifying module.
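A minimal, hypothetical Python sketch contrasting three of these coupling levels (the functions and data are invented for illustration):

```python
# Data coupling: only simple data values cross the interface.
def net_price(price, tax_rate):
    return price * (1 + tax_rate)

# Stamp coupling: a whole record (data structure) is passed,
# even though the callee uses only part of it.
def shipping_label(order):          # `order` is a dict acting as a record
    return f"{order['name']}, {order['address']}"

# Control coupling: a flag tells the callee which function to perform.
def format_amount(amount, as_cents):
    if as_cents:                    # control flag selects the behavior
        return int(round(amount * 100))
    return round(amount, 2)

print(format_amount(1.25, True))   # 125
```

The first style is preferred: the caller knows nothing about the callee's internals and passes only the data actually needed.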

Overall system structure design

System structure design principles: decomposition-coordination principle, top-down principle, information hiding and abstraction principle, consistency principle, clarity principle, high cohesion and low coupling between modules, reasonable module fan-in and fan-out, and appropriate module size.

Principles of subsystem division: subsystems should be relatively independent; data dependence between subsystems should be minimized; the division should result in low data redundancy; the division should consider the needs of future management development; the division should facilitate phased implementation of the system; and the division should make full use of various resources.

WebApp Analysis and Design

WebApp is a web-based system and application. Most WebApps are developed using the agile development process model.

Features of a WebApp: network intensiveness (serving the needs of many different customers), concurrency (large numbers of users accessing at the same time), unpredictable load (user numbers vary widely), performance (long response times drive users away), availability (users expect 7×24×365 availability), and data-driven (interaction centered on user data).

WebApp five demand models:

1. Content model: gives the full range of content provided by the WebApp, including text, graphics, images, audio, and video. It contains structural elements that provide an important view of the WebApp's content requirements. These structural elements include content objects and all analysis classes, i.e., user-visible entities that are generated or manipulated as the user interacts with the WebApp.
Content development may occur before the WebApp is implemented, during construction, or after it is put into operation (the entire process).
Content objects: text descriptions of products, news articles, photos, videos, etc.
Data tree: any content composed of multiple content objects and data items can be represented as a data tree. It is the basis of content design: it defines a hierarchical relationship and provides a way to audit content, so that omissions and inconsistencies are caught before design begins.

2. Interaction model: describes the interaction method used by users with WebApp. It consists of one or more elements, including use cases, sequence diagrams, state diagrams, user interface prototypes, etc.

  • Use cases are the main tool for interaction analysis to facilitate customers to understand the functions of the system.
  • Sequence diagrams are the way in which users interact with the system in interaction analysis. Users use the system in a predetermined order and complete corresponding functions, such as the login process.
  • State diagram is a dynamic description of the system in interaction analysis. Such as changes in status.
  • The user interface prototype shows the layout, content, main navigation links, implemented interaction mechanisms, and overall aesthetics of the WebApp.

3. Function model: many WebApps provide numerous computation and manipulation functions that are directly related to content (they can both use and generate content, e.g., statistical reports) and often have user interaction as their main goal.
The functional model defines the operations applied to WebApp content and describes other processing functions that are independent of content but required by end users.

4. Navigation model: Define all navigation strategies for WebApp. Considers how each type of user navigates from one WebApp element (such as a content object) to another.

5. Configuration model: Describes the environment and infrastructure where WebApp is located. In situations where complex configuration architectures must be considered, UML deployment diagrams can be used.

WebApp design

1. Architecture design: use a multi-layer architecture, including a user interface or presentation layer; a controller, based on a set of business rules, that guides the information interaction with the client browser; and a content or model layer, which can contain the WebApp's business rules and describes how user interaction is managed, internal processing tasks are performed, and navigation and content are presented.
The MVC (Model-View-Controller) structure is one of the basic structure models of WebApp, which separates WebApp functions and information content.
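A minimal MVC sketch in Python, using an invented list-keeping example; it only illustrates the separation of the model, view, and controller roles named above:

```python
class Model:                        # content/model layer: holds the data
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)

class View:                         # presentation layer: renders the data
    def render(self, items):
        return "\n".join(f"- {i}" for i in items)

class Controller:                   # applies business rules, guides interaction
    def __init__(self, model, view):
        self.model, self.view = model, view
    def handle_add(self, item):
        if item:                    # a trivial "business rule": skip empty input
            self.model.add(item)
        return self.view.render(self.model.items)

c = Controller(Model(), View())
print(c.handle_add("home"))         # - home
```

Because the view never touches the model directly, either layer can be replaced (e.g., an HTML view) without changing the others, which is the separation of function and information content the MVC structure provides.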

2. Component design
WebApp component: either a well-defined aggregate function that processes content or provides computation or data processing for end users, or an aggregate package of content and functions that provides the functionality end users need. WebApp component design therefore usually includes both content design elements and functional design elements.
Component-level content design: Focus on content objects and the way they are packaged and presented to end users, which should be suitable for the characteristics of the WebApp created.
Component-level functional design: Deliver WebApp as a series of components that are developed in parallel with the information architecture to ensure consistency.

3. Content design: focus on the presentation of content objects and the organization of navigation, usually using four structures and their combinations: linear, grid, hierarchical, and network.

4. Navigation design: Define the navigation path so that users can access the content and functions of the WebApp.

Software Requirements

Classification by requirement content:

Business requirements: macro-level functional requirements proposed by the customer.

User requirements: the specific needs of each user involved, as investigated by the designers.

System requirements: formed after integration as the final system requirements, covering three aspects: function, performance, and design constraints.

Classification from the customer's perspective:

Basic requirements: functions the customer explicitly requires.

Expected requirements: beyond the basic requirements, other features the customer assumes will be included.

Excitement requirements: additional functions the customer did not request; implementing them wastes project development time and cost.

requirements analysis

Software requirements classification:

Functional requirements: The basic actions that the software must complete.

Performance requirements: Describe the static or dynamic numerical requirements for software or human interaction with software, such as system response speed, processing speed, etc.

Design constraints: limitations imposed by other standards, hardware, etc.

Attributes: availability, security, maintainability, portability/transferability.

External interface requirements: user interface, hardware interface, software interface, communication interface.

requirements engineering

Six stages of requirements engineering

Requirements acquisition: Obtain requirements through data collection, joint seminars (JRP), user interviews, written surveys, on-site observations, participation in business practices, reading historical documents, and sampling surveys.

Requirements analysis and negotiation: analyze and reconcile the relationships among all requirements raised by different stakeholders.

System modeling: establishing an abstract model of the system.

Requirements specification: also known as requirements definition; the purpose is to write the requirements specification (the SRS) and reach consensus between both parties.

Requirements verification: a review step in the requirements development stage. After requirements verification passes, users must sign off as one of the acceptance criteria; at that point, the requirements specification becomes the requirements baseline.

Requirements management: planning and controlling all processes involved in requirements engineering.

Requirements management

Define the requirements baseline: the requirements specification that has passed review is the requirements baseline. Any later requirements change must follow the change process step by step.

Handling requirements changes: mainly concerns requirements risk management during change. Risky practices include: insufficient user participation, neglecting user classification, creeping user requirements, ambiguous requirements, unnecessary features, an oversimplified SRS, and inaccurate estimates.

Requirements tracing: two-way tracing at two levels. Forward tracing checks whether the users' original requirements are realized; backward tracing checks whether the software implements exactly the users' requirements, no more and no less.
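Forward and backward tracing can be pictured with a small, hypothetical traceability matrix; the requirement and feature IDs here are invented:

```python
# Hypothetical traceability matrix: user requirements -> implementing features.
matrix = {"UR1": ["F1"], "UR2": ["F2", "F3"], "UR3": []}
features = {"F1", "F2", "F3", "F4"}   # everything the software actually builds

# Forward tracing: which original requirements are not yet realized?
unrealized = [r for r, fs in matrix.items() if not fs]

# Backward tracing: which features trace back to no requirement
# (i.e., "more" than the users asked for)?
traced = {f for fs in matrix.values() for f in fs}
orphans = sorted(features - traced)

print(unrealized, orphans)   # ['UR3'] ['F4']
```

Both lists should be empty in a fully traced project: no requirement unimplemented, no feature without a requirement.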

Structured Analysis and Design

The structured analysis method (SA) works top-down by stepwise decomposition. It is data-oriented and emphasizes the data flow of the analyzed object. It requires building: a functional model (data flow diagram), a behavioral model (state transition diagram), a data model (E-R diagram), and a data dictionary (data elements, data structures, data flows, data stores, processing logic, external entities).

The data flow diagram describes how data is transmitted or transformed in the system and which functions or sub-functions transform the data flow; it is used for functional modeling.

data flow diagram

Data Flow Diagram: DFD for short

A data flow diagram can be layered, from the top level (the context diagram) down to layer 0, layer 1, and so on. The top-level diagram contains only one process, representing the entire management information system; it describes the system's inputs and outputs and its data interactions with external entities.

Basic design principles of data flow diagrams [when completing data flow diagrams in the afternoon exam questions, you must follow these principles]:

(1) Data conservation principle: For any processing (operation), the data in all its output data streams must be directly obtained from the input data stream of the processing, or the data that can be generated through the processing.

(2) Conservation processing principle: For the same processing, the names of the input and output must be different, even if their components are the same.

(3) For each process, there must be at least one input data flow and one output data flow.

(4) There is no data flow between one external entity and another.

(5) There is no data flow between an external entity and a data store.

(6) There is no data flow between one data store and another.

(7) The balance principle between parent and child diagrams: the input and output data flows of a child diagram must be consistent with the input and output data flows of the corresponding process in the parent diagram; this is parent-child balance. The balance principle does not apply within a single diagram.

(8) Every data flow must be connected to a process: it must either start from or end at a process.
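Rules (3) to (6) are mechanical enough to check automatically. Here is a hypothetical sketch, assuming a DFD encoded as node kinds plus (source, target) flows; the node names are invented:

```python
# Hypothetical machine-readable DFD.
# Node kinds: "E" external entity, "P" process, "S" data store.
kinds = {"customer": "E", "check_order": "P", "orders": "S", "ship": "P"}
flows = [("customer", "check_order"),
         ("check_order", "orders"),
         ("orders", "ship"),
         ("ship", "customer")]

def violations(kinds, flows):
    errs = []
    for src, dst in flows:
        # Rules (4)-(6): E-E, E-S, and S-S flows are all illegal,
        # i.e., every data flow must touch at least one process.
        if "P" not in (kinds[src], kinds[dst]):
            errs.append((src, dst))
    # Rule (3): every process needs at least one input and one output.
    for node, kind in kinds.items():
        if kind == "P":
            if not any(dst == node for _, dst in flows):
                errs.append((node, "missing input"))
            if not any(src == node for src, _ in flows):
                errs.append((node, "missing output"))
    return errs

print(violations(kinds, flows))   # [] -- this small diagram is legal
```

Rules (1), (2), and (7) also need the data-item contents and the parent diagram, so they cannot be checked from this structure alone.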

Data Dictionary

The data dictionary defines the meaning of the symbols and names that appear in the data flow diagram. Every store, process, and entity in the data flow diagram must be defined in the data dictionary, and these names must be the same in the parent and child diagrams.

Testing Basics

System testing is the process of executing programs in order to find errors. A successful test is a test that discovers errors that have not yet been discovered.

Testing Principles

Testing should be done early and continuously;

Testing work should avoid being undertaken by the person or group that originally developed the software;

When designing a test plan, not only the input data must be determined, but also the expected output results must be determined based on the system functions;

Test cases should include both valid and reasonable inputs and unreasonable and invalid inputs;

Test whether the program does what it is supposed to do and whether it does what it is not supposed to do;

Strictly follow the test plan;

Properly save test plans and test cases;

Test cases can be reused or appended to tests.

testing phase

Unit testing: test a single module. The programmer tests the module's interface, information, and internal functions, based on the software's detailed design specification. In unit testing, a driver module (upper layer) is written to call the module under test; in top-down unit testing, no additional driver modules are needed. A stub module (lower layer) simulates the submodules called by the module under test.
Integration testing: combine modules and test them together. It is divided into one-time (big-bang) assembly (simple, saves time, finds fewer errors, suitable only for small projects) and incremental assembly (finds more errors but is more time-consuming; subdivided into top-down, bottom-up, and hybrid).

Confirmation testing: functional testing of the completed software, divided into internal confirmation testing (no users involved), alpha testing (users test in the development environment), beta testing (users test in actual use), and acceptance testing (users accept the project against the SRS).

System testing: performance testing of the software, mainly covering three aspects: load testing (system performance indicators under extreme conditions), stress testing (behavior when system resources are particularly scarce), and capacity testing (concurrency testing: the maximum number of simultaneous users the system can handle). There are other performance tests as well, such as reliability testing. System testing uses the black-box testing method.

Regression testing: after a bug fix or other change, regression testing verifies that previously correct code has not had new bugs introduced.

Test Methods

Dynamic testing: testing when the program is running, divided into:

Black box testing method: Functional testing, which does not understand the software code structure, designs use cases based on the functions, and tests the software functions.

White box testing method: Structural testing, clarifying the code flow, designing use cases based on code logic, and performing use case coverage.

Gray-box testing method: combines black-box and white-box approaches.

Static testing: When the program is static, the code is manually reviewed, divided into:

  • Desk checking: programmers check their own programs after compilation and before unit testing.

  • Code review: A review team composed of several programmers and testers conducts review by holding program review meetings.

  • Code walkthrough: Meetings are also used to review the code, but instead of simply checking the code, testers provide test cases, allowing programmers to play the role of computers, manually run test cases, and check code logic.

testing strategy

Bottom-up: start testing from the lowest-level modules, which requires writing drivers, then merge modules one by one until the whole system is tested. The advantage is that the lower-level modules are verified early.

Top-down: first test the whole system, which requires writing stubs, then work downward step by step until the lowest-level modules are tested. The advantage is that the system's main control and decision points are verified early.

Sandwich: combines bottom-up and top-down testing and has the advantages of both; the disadvantage is the heavier testing workload.

Test case design

Black-box test cases: treat the program as a black box, knowing only the inputs and outputs, not the internal code, and design test cases accordingly. They fall into the following categories:

  • Equivalence class partitioning: classify all input data by certain characteristics, then select one datum from each class. Design principles for equivalence class test cases: design a new test case that covers as many uncovered valid equivalence classes as possible, repeating until all valid equivalence classes are covered; then design a new test case that covers exactly one uncovered invalid equivalence class, repeating until all invalid equivalence classes are covered.

  • Boundary value analysis: use the boundary values of each class as test cases. The boundary values are generally the two endpoints of the range plus the two values just outside it; for example, if the age range is 0-150, the boundary values are 0, 150, -1, and 151.

  • Error speculation: There is no fixed method, based on experience, to speculate where problems may occur and test them as test cases.

  • Cause-effect diagram: infer causes from results, analyzing specific outcomes in detail; there is no fixed method.
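Applying the equivalence class and boundary value ideas above to the age example (a hypothetical validator with valid range 0-150):

```python
# Hypothetical validator for the age-range example (valid: 0-150).
def is_valid_age(age):
    return 0 <= age <= 150

# Equivalence classes: one representative datum per class.
assert is_valid_age(30)        # valid class: 0..150
assert not is_valid_age(-20)   # invalid class: below the range
assert not is_valid_age(200)   # invalid class: above the range

# Boundary values: the endpoints and the values just outside them.
for age, expected in [(0, True), (150, True), (-1, False), (151, False)]:
    assert is_valid_age(age) == expected

print("all cases pass")
```

Three equivalence-class cases plus four boundary cases cover this input far more efficiently than sampling ages at random.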

White-box test cases: the program's code logic is known, and test cases are designed from the code statements to cover the code's branches. The coverage levels, from low to high, are the following six:

  • Statement coverage: every statement in the logic code must execute at least once. This is the lowest coverage level, because executing all statements does not mean all condition outcomes have been exercised.
  • Decision (judgment) coverage: the true and false branches of every decision statement in the code must each be covered once.
  • Condition coverage: a decision may be a compound condition, such as a>0 && b<0. Decision coverage needs only two test cases, for the true and false outcomes of the whole compound decision, whereas condition coverage requires each individual condition to take both true and false values, which can require up to 4 test cases here; it is a higher level. Note the difference: condition coverage targets the true/false values of each individual condition, while decision coverage targets only the outcome of the decision statement as a whole.
  • Decision/condition coverage: every possible value (true/false) of each condition in a decision appears at least once, and the true/false outcome of each decision itself also appears at least once; this combines the two preceding coverage types.
  • Condition combination coverage: every combination of possible condition values within each decision occurs at least once.
  • Path coverage: all feasible paths through the logic code are covered; the highest coverage level.
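The difference between the levels can be made concrete with the a>0 && b<0 example (a hypothetical function; Python's `and` stands in for `&&`):

```python
# Hypothetical function containing the compound decision discussed above.
def classify(a, b):
    if a > 0 and b < 0:        # compound decision with two conditions
        return "mixed"
    return "other"

# Decision coverage: the whole decision takes true and false once each.
assert classify(1, -1) == "mixed"   # decision true
assert classify(-1, 1) == "other"   # decision false

# Condition combination coverage: all 4 combinations of (a>0, b<0).
cases = [(1, -1, "mixed"),   # T, T
         (1, 1, "other"),    # T, F
         (-1, -1, "other"),  # F, T
         (-1, 1, "other")]   # F, F
for a, b, expected in cases:
    assert classify(a, b) == expected

print("coverage cases pass")
```

Note that the two decision-coverage cases also happen to give condition coverage here (each condition is true once and false once), while condition combination coverage needs all four cases.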

debugging

Testing is to find errors, debugging is to find the error code and its cause.

Debugging requires determining the exact location of the error; determining the cause of the problem and trying to correct it; and performing regression testing after correction.

Debugging methods include: brute force method, backtracking method (starting from the place where the error occurred and looking back), and cause elimination method (finding all possible causes and eliminating them one by one, including deduction method, induction method, and dichotomy method).

System operation and maintenance

system conversion

System conversion is the process by which a newly developed system is put into operation and replaces the existing system. Many issues must be considered to achieve handover from the old system. The three conversion approaches are as follows:

Direct conversion: the existing system is replaced by the new system at once. This is very risky and is suitable when the new system is not complex or the existing system can no longer be used. The advantage is cost savings.

Parallel conversion: the new and old systems run in parallel for a period, and the new system takes over after trial operation. If the new system has problems during trial operation, the existing system keeps running, so the risk is minimal, and the performance of the new and old systems can be compared during the trial. It is suitable for large systems. The disadvantages are the cost in manpower and time, and the difficulty of controlling data conversion between the two systems.

Phased (segmented) conversion: convert gradually, in stages and batches; a combination of direct and parallel conversion. A large system is divided into multiple subsystems, each trial-run in turn; as each subsystem matures, it is converted. This also suits large projects but takes longer, and the mixed operation of the existing and new systems requires coordinating interfaces and other issues.

Data conversion and migration: migrate data from the old database to the new one. Reasonable data structures in the old system should be preserved in the new system as far as possible to reduce migration difficulty. There are three methods: migration with tools before system switchover, manual entry before switchover, and generation by the new system after switchover.

The conversion process is called ETL and has three steps: extraction (of the old database's data), transformation (the three conversion methods), and loading (loading into the new database and checking the data).
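A minimal, hypothetical sketch of the three ETL steps, with an invented old-schema record set and a simple data check during transformation:

```python
# Hypothetical old-database rows (old schema: "uname" as padded text, "age" as text).
old_rows = [{"uname": " Alice ", "age": "30"},
            {"uname": "Bob",    "age": "x"}]   # one bad record

def extract(rows):                 # 1) extract from the old database
    return list(rows)

def transform(rows):               # 2) convert to the new schema
    out, rejected = [], []
    for r in rows:
        try:
            out.append({"name": r["uname"].strip(), "age": int(r["age"])})
        except ValueError:
            rejected.append(r)     # data check: set invalid rows aside
    return out, rejected

def load(rows, new_db):            # 3) load into the new database
    new_db.extend(rows)
    return len(rows)

new_db = []
good, bad = transform(extract(old_rows))
print(load(good, new_db), len(bad))   # 1 1
```

Keeping a rejected-row list makes the "check data" part of loading auditable: every old record is either migrated or explicitly accounted for.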

system maintenance

Software maintenance is the last stage in the software life cycle and does not belong to the system development process. It is the process of modifying the software in order to correct errors or meet new requirements after the software has been delivered for use, that is, all changes made to the software after the software has been delivered for use.

The maintainability of a system can be defined as the ease with which maintenance personnel can understand, correct, change and improve the software. Its evaluation indicators are as follows:

  • Testability: software attributes related to the effort required to validate the modified software;
  • Analyzability: refers to the software attributes related to the effort required to diagnose defects or failures, or to determine the parts to be modified;
  • Changeability: A property of software related to the effort required to modify, debug, or adapt to changing circumstances;
  • Stability: A property of software related to the risk of modifications having unanticipated effects.

System maintenance includes hardware maintenance, software maintenance, and data maintenance. The types of software maintenance are as follows:

Corrective maintenance: modifications made when bugs are discovered.

Adaptive maintenance: passive modification and upgrade of software due to changes in the external environment.

Perfective maintenance: at users' own request for more capabilities, modify the software and add functions, making it more functional and complete than before.

Preventive maintenance: Make preventive modifications to fix bugs that may occur in the future.

system evaluation

Classification of system evaluations

Project approval evaluation: Pre-evaluation before system development, analyzing whether to approve project development, and conducting feasibility evaluation.

Mid-term evaluation: stage reviews at each stage in the middle of project development; if the project encounters major changes during development, an evaluation should also be conducted.

Project completion evaluation: After the system is put into formal operation, a comprehensive evaluation of the system is conducted to understand whether the system has achieved the expected purposes and requirements.

Indicators for systematic evaluation

(1) Starting from the components of the information system: an information system is a system composed of humans and machines, so indicators can be built along two lines: operating effect and user needs (the human side), and system quality and technical conditions (the machine side).

(2) Starting from the evaluation object of the information system: developers care about system quality and technical level; users care about user needs and operation quality; the system's external environment is mainly reflected through social benefit indicators.

(3) From an economic perspective, establish indicators based on three clues: system cost, system benefit and financial indicators.


Origin blog.csdn.net/chengsw1993/article/details/125215089