System analyst exam: daily review of the knowledge points behind missed questions

computer network:

  1. A weakness of the RIP protocol is that when the network fails, it takes a long time for the change to propagate to all routers. During this interval a routing loop can form; when it does, routing tables change frequently, one or more tables fail to converge, and the network ends up paralyzed or semi-paralyzed. The solutions for routing loops are as follows:
    1. Define a maximum metric (16 in RIP) and stop accumulating once it is reached, treating the route as unreachable
    2. Split horizon
      1. Simple split horizon: routing entries learned from an interface are never advertised back out of that interface (this is the method the question describes)
      2. Split horizon with poison reverse: routing entries learned from an interface are still advertised back out of it, but marked as unreachable
    3. Hold-down timer: the router delays acting on a possible network failure and only updates after the failure is confirmed
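The two split-horizon variants can be sketched as update filters. This is a minimal illustration, not a real RIP implementation; the table layout and field names are invented:

```python
# Sketch of RIP update filtering under split horizon (hypothetical data
# structures; a real RIP implementation is considerably more involved).

INFINITY = 16  # RIP treats a metric of 16 hops as "unreachable"

def build_update(routing_table, out_interface, poison_reverse=False):
    """Build the list of routes to advertise out of one interface."""
    update = []
    for route in routing_table:
        if route["learned_on"] == out_interface:
            if poison_reverse:
                # Poison reverse: advertise the route back, but as unreachable.
                update.append({"dest": route["dest"], "metric": INFINITY})
            # Simple split horizon: silently omit the route.
        else:
            update.append({"dest": route["dest"], "metric": route["metric"]})
    return update

table = [
    {"dest": "10.0.0.0/8", "metric": 2, "learned_on": "eth0"},
    {"dest": "172.16.0.0/16", "metric": 1, "learned_on": "eth1"},
]

print(build_update(table, "eth0"))                       # 10.0.0.0/8 omitted
print(build_update(table, "eth0", poison_reverse=True))  # 10.0.0.0/8 at metric 16
```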

Multimedia foundation, basic concept of multimedia technology:

Rich Text Format (RTF) is a document format for text and graphics that can easily be viewed across different devices and systems.

  1. WAV is a sound file format developed by Microsoft. It conforms to the RIFF file specification and is used to store audio information on the Windows platform, where it is widely supported by the platform and its applications.
  2. JPG is short for JPEG. JPEG stores a single bitmap in 24-bit color. It is a platform-independent format that supports the highest levels of compression; however, the compression is lossy. Progressive JPEG files support interlaced display.
  3. MPEG is the international standard family for moving-picture compression, now supported by almost all computer platforms. It includes MPEG-1, MPEG-2 and MPEG-4. MPEG-1 is widely used in VCD (Video CD), while MPEG-2 is used in DVD, HDTV (high-definition television broadcasting) and demanding video editing and processing. The file extension of MPEG-format video is usually .mpeg or .mpg.

Requirements Engineering---UML

UML describes a system's analysis and design models graphically from multiple perspectives, through use case diagrams, static diagrams, behavior diagrams and implementation diagrams.

Behavior diagram:

  1. Interaction diagrams, state diagrams and activity diagrams describe the dynamic behavior of the system from different aspects.

Interaction diagrams:

  1. Describe message passing between objects; they take two forms: sequence diagrams and collaboration diagrams

Embedded system --- multiprocessor system

Massively parallel processing computer (Massive Parallel Processor, MPP): a multiprocessor system composed of a large number of general-purpose microprocessors, suited to processing multiple instruction streams and multiple data streams. Its characteristics include:

  1. Most MPP systems use standard CPUs as their processors
  2. The MPP system uses a high-performance, customized high-speed interconnect and network interface that can deliver messages with low latency and high bandwidth
  3. MPP is an asynchronous MIMD system with a distributed storage structure. A program consists of multiple processes distributed across the microprocessors; each process has its own independent address space, and processes communicate with one another through message passing
  4. A special problem in MPP is fault tolerance: with thousands of CPUs in use, a few CPU failures per week are unavoidable, so large-scale MPP systems always use special hardware and software to monitor the system, detect errors and recover from them smoothly
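The "independent address spaces communicating only by messages" idea can be mimicked with a toy mailbox model. The `Node` class and its methods are invented for illustration; a real MPP uses a hardware interconnect and a message-passing library such as MPI:

```python
# Toy model of MPP-style message passing: each node has private memory and
# communicates only through messages (all names here are illustrative).
class Node:
    def __init__(self, rank):
        self.rank = rank
        self.memory = {}      # private address space, never shared
        self.mailbox = []     # incoming messages from other nodes

    def send(self, other, payload):
        other.mailbox.append((self.rank, payload))

    def compute_partial_sum(self):
        self.memory["acc"] = sum(p for _, p in self.mailbox)
        return self.memory["acc"]

master, w1, w2 = Node(0), Node(1), Node(2)
for x in [1, 2, 3]:
    master.send(w1, x)        # distribute work as messages
for x in [4, 5, 6]:
    master.send(w2, x)

partials = [w.compute_partial_sum() for w in (w1, w2)]
for w, s in zip((w1, w2), partials):
    w.send(master, s)         # send results back as messages
total = sum(p for _, p in master.mailbox)
print(partials, total)        # [6, 15] 21
```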

System Design --- Structural Design

The coupling degree between software modules is sorted from low to high as:

  1. Indirect coupling: there is no direct relationship between two modules; any connection between them is realized entirely through the control and calls of the main module
  2. Data coupling: a group of modules pass simple data via parameter lists
  3. Stamp coupling: a group of modules pass record information (data structures) via parameter lists
  4. Control coupling: the information passed between modules includes information used to control a module's internal logic
  5. External coupling: a group of modules all access the same global simple variable (rather than the same global data structure), and the variable's information is not passed through parameter lists
  6. Common coupling: multiple modules all access the same common data environment, which can be a global data structure, a shared communication area, a common overlay area of memory, etc.
  7. Content coupling: one module directly accesses another module's internal data; one module enters another without going through its normal entry point; part of the program code of two modules overlaps; or one module has multiple entry points
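A few of these coupling levels can be contrasted with toy functions (all names and values are illustrative):

```python
# Contrasting coupling levels with tiny illustrative functions.

# Data coupling: only simple values cross the interface.
def net_price(price, tax_rate):
    return round(price * (1 + tax_rate), 2)

# Control coupling: a flag passed in steers the callee's internal logic.
def format_amount(amount, as_cents):
    if as_cents:                      # the caller controls our branch
        return f"{int(amount * 100)}c"
    return f"${amount:.2f}"

# Common coupling: both functions read/write the same global environment.
CONFIG = {"currency": "USD"}

def set_currency(cur):
    CONFIG["currency"] = cur          # writes shared global state

def price_tag(amount):
    return f"{CONFIG['currency']} {amount:.2f}"  # reads shared global state

print(net_price(100, 0.07))          # 107.0
print(format_amount(1.5, True))      # 150c
set_currency("EUR")                  # a hidden side effect on price_tag
print(price_tag(9.99))               # EUR 9.99
```

Data coupling is the loosest of the three: `net_price` can be tested and reused in isolation, while `price_tag`'s behavior silently depends on whoever last touched `CONFIG`.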

Information Security---Others

Database disaster recovery falls under both system security and application security: on the one hand, the database management system is system software; on the other, the database stores application-level data.

System Design --- Design Patterns

Object patterns and class patterns:

Class-scope behavioral patterns:

  1. Interpreter pattern
  2. Template Method pattern

Object-scope behavioral patterns include:

  1. Iterator pattern
  2. Memento pattern
  3. Visitor pattern

Object-scope structural patterns include:

  1. Facade pattern
  2. Flyweight pattern
  3. Proxy pattern

From the figure, very few patterns belong to the class scope: only the Factory Method among the creational patterns, the class form of the Adapter among the structural patterns, and the Interpreter and Template Method among the behavioral patterns; all the others are object patterns.
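As a small illustration of one class-scope behavioral pattern named above, here is a minimal Template Method sketch (the class and step names are invented):

```python
# Template Method (a class-scope behavioral pattern): the base class fixes
# the algorithm skeleton; subclasses override individual steps via inheritance.
from abc import ABC, abstractmethod

class ReportGenerator(ABC):
    def generate(self):               # the template method: fixed skeleton
        return f"{self.header()}\n{self.body()}\n-- end --"

    def header(self):                 # default step, may be overridden
        return "REPORT"

    @abstractmethod
    def body(self):                   # step each subclass must supply
        ...

class SalesReport(ReportGenerator):
    def body(self):
        return "sales: 42 units"

print(SalesReport().generate())
```

The pattern is class-scope because the variation point is bound at inheritance time, not by composing objects at runtime.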

Enterprise Informatization Strategy and Implementation----Data Warehouse and Data Mining

As the name suggests, data cleaning "washes out the dirt": it is the final procedure for finding and correcting identifiable errors in data files, and includes checking data consistency and handling invalid and missing values. Because the data in a data warehouse is a subject-oriented collection extracted from multiple business systems and includes historical data, some of it will inevitably be erroneous and some records will conflict with one another. This erroneous or conflicting data is obviously not what we want; it is called "dirty data". Washing away dirty data according to defined rules is data cleaning. The task of data cleaning is to filter out data that does not meet requirements and hand the filtered results to the responsible business department, which confirms whether each item should be discarded or corrected before extraction. Data that does not meet requirements mainly comprises incomplete data, erroneous data, and duplicate data. Data cleaning differs from questionnaire review: cleaning after input is usually done by computer rather than by hand.

Computer Network --- Open Systems Interconnection Reference Model:

Because electromagnetic signals attenuate as they travel through the network medium, and are further degraded by electromagnetic noise and interference, the connection distance of a LAN is limited. To remove this limitation and extend the transmission range, a network repeater (Repeater) can be used to connect two cable segments and forward signals bidirectionally between them. After detecting the signal on one cable, the repeater reshapes and amplifies it, then forwards it onto the network connected by the other cable. It works on the same principle as the beacon towers once used to relay military intelligence: when the soldiers on a beacon tower saw the fire and smoke of a distant tower, they lit their own fire and smoke, passing the message on tower by tower.

Enterprise informatization strategy and implementation---enterprise portal:

With the rapid development of Internet technology, enterprise portals have become an important means for enterprises to optimize business models, expand market channels, improve customer service, and enhance corporate image and cohesion. According to the actual application types, enterprise portals can be divided into four categories, namely enterprise websites, enterprise information portals, enterprise knowledge portals and enterprise application portals. In order to support the workflow across multiple application systems, enterprise portals mainly use application integration technology to integrate the processing logic of existing application systems.

Embedded system---microkernel operating system

The microkernel architecture is shown in the figure below. The basic idea is to extract the part of the operating system that interacts directly with the hardware into a common layer, the hardware abstraction layer (HAL). This layer is in effect a virtual machine: through an API it provides a set of standard services to all the layers above it. Only a few components, such as processor scheduling, storage management and message communication, remain in the microkernel. Components of a traditional operating system kernel are implemented outside it: the file management system, process management, device management, virtual memory and networking are placed outside the kernel as independent subsystems. As a result, most of the operating system's code only needs to be designed against a unified hardware architecture.

The main features of the microkernel architecture are:

  1. The kernel is very small: many operating system services do not belong to the kernel but run on top of it, so the kernel need not be recompiled when higher-level modules are updated
  2. With a hardware abstraction layer, the kernel is easily ported to other hardware architectures: when moving to a new software or hardware environment, only the hardware-related parts need slight modification to embed the microkernel, and in most cases the external servers and client applications need not be ported at all
  3. Flexibility and scalability: these are among the microkernel's biggest advantages. To provide another system view (personality), add an external server; to extend functionality, add or extend internal servers

Software Engineering---Development Model

The spiral model combines the waterfall model with the rapid prototyping model and emphasizes the risk analysis that other models ignore; it is especially suitable for large, complex systems. The model iterates several times along the spiral, and the four quadrants in the diagram represent the following activities:

  1. Make a plan: determine the software goal, select the implementation plan, and clarify the constraints of project development;
  2. Risk analysis: analyze and evaluate the selected options, consider how to identify and eliminate risks
  3. Engineering implementation: develop and verify the software;
  4. Customer evaluation: Evaluate the development work, make suggestions for revisions, and formulate plans for the next step

The spiral model is risk-driven, emphasizes alternatives and constraints to support software reuse, and helps incorporate software quality as a specific goal of product development. However, the spiral model also has limitations, as follows:

  1. The spiral model emphasizes risk analysis, but many customers find it hard to accept and believe this analysis and respond accordingly, so the model is often suited to in-house development of large-scale software
  2. Performing risk analysis is pointless if it would significantly cut into the project's profit; for this reason too, the spiral model only suits large-scale software projects
  3. Software developers must be good at spotting possible risks and analyzing them accurately, otherwise the risks only grow
  4. Each cycle first sets the objectives for the stage, together with the options and constraints for meeting them; it then analyzes the chosen plan from a risk perspective and tries to eliminate the potential risks, sometimes by building a prototype. If some risk cannot be eliminated, the plan is terminated immediately; otherwise the next development step begins. Finally, the results of the stage are evaluated and the next stage is planned.

Software Engineering---Clean Room Software Engineering

Clean room software engineering is a formalized method of software development that can develop higher quality software. It uses the box structure specification for analysis and modeling, and uses correctness verification as the main mechanism to find and eliminate errors, and uses statistical tests to obtain the information needed to verify the reliability of software. Cleanroom software engineering emphasizes rigor in specification and design, as well as formal verification of each element of the design model using mathematically based proofs of correctness.

Operating system --- process status

Resources shared by threads within the same process include:

  1. Heap: Since the heap is created in the process space, it is shared
  2. Global variable: It has nothing to do with a specific function, so it has nothing to do with threads, so it is also shared
  3. Static variables: although a static local variable is written inside some function, it is stored like a global variable, in the .data or .bss segment rather than on any thread's stack, so it too is shared
  4. Common resources such as files: threads using these common resources must synchronize. Win32 provides several synchronization mechanisms, including semaphores, critical sections, events and mutexes

Exclusive resources include:

  1. Stack: The stack in each thread is exclusive to the thread itself.
  2. Registers: Registers are used when each thread executes instructions, and registers between threads are not shared
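The shared-versus-private split can be demonstrated directly with Python's threading module (the thread count and loop size below are arbitrary):

```python
# Threads share globals/heap objects, while each thread's local variables
# live on its own stack and are private to it.
import threading

counter = 0                 # shared global: all threads see the same object
lock = threading.Lock()

def worker(n):
    global counter
    local_total = 0         # stack-local: private to this thread
    for _ in range(n):
        local_total += 1
    with lock:              # synchronize access to the shared resource
        counter += local_total

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every thread's increment of the shared global survives
```

Without the lock, the read-modify-write of `counter` could interleave and lose updates, which is exactly why shared resources require the synchronization mechanisms listed above.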

Computer composition and architecture --- multi-level storage structure

  1. Access by content is the most basic feature of associative storage; the cache is a classic example of an associative memory

Multimedia Basics---Common Multimedia Standards

The MPEG-1 standard covers the coding of moving pictures and their accompanying audio on digital storage media, at a bit rate of about 1.5 Mbit/s. To achieve its compression ratio, intra-frame and inter-frame image compression techniques must be used together.

The intra-frame compression algorithm is roughly the same as JPEG's: DCT-based transform coding reduces spatial redundancy. The inter-frame compression algorithm uses prediction and interpolation, and the prediction error can be further compressed with DCT transform coding. Inter-frame coding reduces redundancy along the time axis.

Enterprise Informatization Strategy and Implementation --- System Modeling

IDEF (integrated definition) is a general term for a family of modeling, analysis and simulation methods, each of which captures a particular type of information through modeling. Among them, IDEF0 can be used to model business processes, and IDEF4 can be used for object-oriented design

Mathematics and Economic Management---Linear Programming

The feasible region of a linear program is formed by a set of linear constraints; geometrically it is a region bounded by straight lines. Since the objective function is also linear, its equal-value sets (level lines) are straight lines as well. If the optimal value were attained at an interior point of the feasible region, then moving the level line through that point until it meets the boundary shows that the same value is also attained on the boundary. Therefore:

The first conclusion is that the optimal solution must be attained on the boundary of the feasible region. The level lines of the objective are parallel to one another, and the objective value increases, decreases, or stays constant as a level line is translated in a given direction. If the optimum were attained at a non-vertex point of the boundary, then translating the level line in some direction would either change the objective value (contradicting optimality) or leave it unchanged (in which case the whole boundary segment, endpoints included, is optimal). Either way, the optimum is still attained at some vertex of the feasible region.

Since the feasible region is bounded by the lines corresponding to a set of linear constraints, adding one more constraint either shrinks the feasible region (the new constraint cuts through it) or leaves it unchanged (the new constraint does not intersect the original region).

If the feasible region is unbounded, then translating the objective's level line in some direction (with the objective changing linearly along the way) may increase or decrease it without bound, so an optimal solution may not exist. Of course, even with an unbounded feasible region an optimal solution sometimes still exists; it is simply not guaranteed.

Since the feasible region of a linear program is convex, for any two points in the region every point on the segment joining them also belongs to the region. If a linear programming problem attains the same optimal value at two points of the feasible region, then every point on the segment between them is also optimal (if a level set of the objective contains two points, it contains the whole segment joining them). Hence a linear programming problem has either no optimal solution, exactly one, or infinitely many.
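The "optimum at a vertex" conclusion can be checked by brute force on a small example: enumerate every intersection of constraint boundaries, keep the feasible ones, and evaluate the objective there. The particular LP below is made up for illustration:

```python
# Brute-force check that the optimum of a 2-variable LP lies at a vertex.
from itertools import combinations

# maximize 3x + 2y  subject to  x + y <= 4, x <= 3, y <= 3, x >= 0, y >= 0
constraints = [  # each row (a1, a2, b) means a1*x + a2*y <= b
    (1, 1, 4),
    (1, 0, 3),
    (0, 1, 3),
    (-1, 0, 0),
    (0, -1, 0),
]

def intersect(c1, c2):
    """Intersection point of two constraint boundary lines, or None."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                      # parallel boundaries
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= r + 1e-9 for a, b, r in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])   # optimum at the vertex (3, 1), value 11
```

The simplex method exploits exactly this fact: instead of searching the whole region, it walks from vertex to vertex.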

Software Engineering---Information System Development Method

When requirements are unclear, adopting the structured-method waterfall model is very risky, and formal methods are demanding because they must be based on mathematical modeling. That leaves extreme programming as the best fit: as an agile method it emphasizes small, fast steps and continually releases small versions, which copes well with unclear requirements

Embedded system---bus and interface

A shared bus may carry multiple master and slave devices (a master is a device that initiates operations), and several masters may require the bus at the same time. To prevent bus contention, only one master may use the bus at any moment, which requires bus arbitration. In centralized arbitration, a central bus arbiter (bus controller) decides which of the simultaneously requesting masters gets the right to use the bus. There are three main methods: the daisy-chain query method, counter-based (timed) polling, and the independent request method.

  1. In the daisy-chain query method, the order in which devices are connected along the chain determines their priority, so equal opportunity is impossible
  2. The counter timing query (polling) method can make the chances of each master device to get the right to use the bus basically the same
  3. The independent request method can also achieve that the chances of each master device to obtain the right to use the bus are basically the same
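The difference between fixed-priority daisy chaining and rotating polling can be seen in a toy arbiter (the device numbering and the 4-device bus are assumptions made for the example):

```python
# Toy comparison of daisy-chain vs. counter-based polling arbitration.

def daisy_chain(requests):
    """Grant the requester nearest the arbiter in the chain (fixed priority)."""
    return min(requests)  # lower number = earlier in the chain = higher priority

def polling(requests, start, n=4):
    """Counter-based polling: scan from `start`, so priority rotates."""
    for i in range(n):
        dev = (start + i) % n
        if dev in requests:
            return dev

reqs = {1, 3}
print(daisy_chain(reqs))       # always device 1: unequal opportunity
print(polling(reqs, start=2))  # device 3 wins this round: chances rotate
```

Restarting the polling counter at the last winner plus one is what gives each master a roughly equal chance over time.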

Embedded system --- multiprocessor system:

Broadly speaking, any computer system that uses multiple computers working together to accomplish a required task is a multiprocessor system. The traditional, narrow sense refers to a system that uses multiple CPUs to execute multiple user programs in parallel to improve throughput, or to perform redundant operations to improve reliability. Program-level parallelism lies at the job and task level.

Parallelism can be divided into coarse-grained and fine-grained. Coarse-grained parallelism runs multiple processes on multiple processors, which cooperate to complete one program. Fine-grained parallelism means operation- or instruction-level parallelism within a single process. The two granularities can be used simultaneously in one computer system, with fine-grained parallelism exploited on each individual processor.

Database system---ER model

  1. Requirements analysis: data flow diagram, data dictionary, requirements specification
  2. Conceptual structure design: ER model
  3. Logical structure design: conversion rules, normalization theory
  4. Physical Design: Views, Integrity Constraints, and Application Processing Specifications

Multimedia basics --- basic concepts of multimedia technology:

The basic characteristics of computer multimedia technology are digitization, integration, interactivity, and being organized around and controlled by the computer. Computer and multimedia technology are both rooted in digitization.

Requirements Engineering---UML

UML uses five system views to describe the organizational structure of the system, including the components of the system decomposition, as well as their associations, interaction mechanisms and guiding principles, etc. to provide information for system design.

  1. The use case view is the most basic requirement analysis model.
  2. The logical view represents the architecturally significant parts of the design model, i.e., subsets of classes, subsystems, packages, and use case realizations.
  3. The process view models executable threads and processes as active classes.
  4. The implementation view models the files and components that make up the system's physical code.
  5. The deployment view deploys components onto a set of physical nodes, representing the mapping of software to hardware and its distribution structure

Computer network---TCP, IP protocol family

During name resolution, a DNS server first checks its local cache. If the cache has no record for the domain name, it consults the zone records of the region's primary domain name server, then queries the forwarding domain name server, and finally the root domain name server. The correct order is therefore: local cache records → zone records → forwarding domain name server → root domain name server
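That lookup order can be sketched as a fall-through over record sources (the tables and addresses below are invented, using documentation-reserved IP ranges):

```python
# Sketch of the DNS lookup order: cache -> zone -> forwarder -> root.
def resolve(name, cache, zone_records, forwarder, root):
    """Try each record source in order; return (answer, source) on first hit."""
    for source, table in [("cache", cache), ("zone", zone_records),
                          ("forwarder", forwarder), ("root", root)]:
        if name in table:
            return table[name], source
    return None, "unresolved"

cache = {}
zone = {"www.example.com": "192.0.2.10"}
forwarder = {"www.example.org": "198.51.100.5"}
root = {"www.example.net": "203.0.113.7"}

print(resolve("www.example.com", cache, zone, forwarder, root))
# the cache misses, so the zone record supplies the answer
```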

Information Security---Firewall Technology

DMZ is the abbreviation of "demilitarized zone" (in Chinese, "isolation zone"). It solves the problem that, once a firewall is installed, external networks cannot reach the organization's internal servers, by setting up a buffer between the non-secure and secure systems. This buffer sits in a small network area between the enterprise's internal and external networks, where server facilities that must be publicly accessible can be placed, such as the corporate Web server, FTP server and forums. At the same time, a DMZ protects the internal network more effectively, because this deployment gives an attacker one more checkpoint to pass than an ordinary firewall solution. The network structure is shown in the figure below

Enterprise Informationization Strategy---Information System Strategic Planning

Information System Strategic Planning (ISSP) starts from strategy: it builds the enterprise's basic information architecture, carries out unified planning, management and use of the enterprise's internal and external information resources, uses information to govern enterprise behavior, supports enterprise decision-making, and helps the enterprise achieve its strategic goals.

The ISSP method has gone through three main stages, and the methods used in each stage are different.

The first stage focused on data processing, planning information systems around the needs of individual functional departments. The main methods included the enterprise system planning method, the critical success factors method and the strategy set transformation method;

The second stage mainly focuses on the internal management information system of the enterprise, and conducts information system planning around the overall needs of the enterprise. The main methods include strategic planning method, information engineering method and strategic grid method;

In the third stage, considering the internal and external environment of the enterprise, the integration is the core, and the information system planning is carried out around the strategic needs of the enterprise. The main methods include the value chain analysis method and the strategic consistency model.

Strategy set transformation method: treats the whole process as an information set and transforms the organization's strategic goals into the strategic goals of the management information system.

Enterprise system planning method: By identifying enterprise goals, enterprise processes and data from top to bottom, and then analyzing the data, design information systems from bottom to top.

Enterprise informatization strategy and implementation---Information system strategic planning

Information Resource Planning (IRP) is the foundational engineering of informatization: the comprehensive planning of how the information needed for an enterprise's production and operation activities is generated, acquired, processed, stored, transmitted and used.

IRP emphasizes the close combination of requirements analysis and system modeling. Requirements analysis is the preparation for system modeling, and system modeling is the finalized and planned expression of user needs. The main process of IRP is as follows:

  1. Business requirement analysis: functional domain analysis, business domain definition, business process sorting
  2. Data requirements analysis: user view collection, user view grouping, analysis, data element analysis
  3. System function modeling: subsystem definition, function module definition, program unit definition
  4. System data modeling: subject database definition, basic table definition, extended table definition

Enterprise informatization strategy and implementation---concept and type of information system

An enterprise information system is a large system with both business complexity and technical complexity. For the target system to not only provide the current system's basic functions but also improve and extend them, system analysts must first understand and describe the actual current system, then improve on it to create a target system that is based on, yet better than, the current one.

The purpose of system development is to transform the physical model of the existing system into the physical model of the target system.

Enterprise Informatization Strategy and Implementation --- System Modeling

IDEF is a general term for a series of modeling, analysis and simulation methods. There are 16 sets of methods from IDEF0 to IDEF14. Each set of methods obtains a specific type of information through a modeling program.

  1. IDEF0: Functional Modeling
  2. IDEF1: Information Modeling
  3. IDEF1X: Data Modeling
  4. IDEF2: Simulation Modeling Design
  5. IDEF3: process description acquisition
  6. IDEF4: Object-Oriented Design
  7. IDEF5: Ontology description acquisition
  8. IDEF6: Acquisition of Design Principles
  9. IDEF7: Information System Audit
  10. IDEF8: User Interface Modeling
  11. IDEF9: Scenario-driven information system design
  12. IDEF10: Implementation Framework Modeling
  13. IDEF11: Information Artifact Modeling
  14. IDEF12: Organizational Modeling
  15. IDEF13: Three-mode mapping design
  16. IDEF14: Network Planning

The modeling features of IDEF0 can be used to describe an enterprise's business processes, and its hierarchical (ladder) levels can describe the hierarchical structure of those processes. Viewed at a high level, IDEF0's functional activities correspond to business processes; viewed at a low level, they correspond to the individual business activities of a process. Using IDEF0's activity descriptions and the connections between activities, the business process architecture can be described well. The IDEF0 model is visual, intuitive, and easy to understand and analyze. However, this graphical model does not deeply reveal the internal structural characteristics and laws of a business process, and when the process is complex, the corresponding directed graph becomes a crisscrossed, chaotic network that hinders analysis of the process's characteristics.

Enterprise Informatization Strategy and Implementation---Government Informatization and E-government

E-government is divided into the following types:

  1. Government to Government (G2G, Government To Government): interactions between governments and between a government and its civil servants — collection, processing and use of basic information such as population data, and decision support for governments at all levels. G2G in principle also includes Government to Employee (G2E, Government To Employee): internal management information systems
  2. Government to Business (G2B, Government To Business): The policy environment provided by the government for businesses. Various business licenses, permits, certificates of conformity, and quality certification issued to business units.
  3. Business to Government (B2G, Business To Government): Enterprises pay taxes and enterprises provide services to the government, enterprises participate in bidding for various government projects, supply various goods and services to the government, enterprises make suggestions and appeal to the government
  4. Government to Citizen (G2C, Government To Citizen): The services provided by the government to citizens. Community public security and information related to public safety such as water, fire, and natural disasters. Management of accounts, various certificates and licenses.
  5. Citizen to Government (C2G, Citizen To Government): individuals pay taxes, fees and fines to the government, and citizens feed information back — channels for understanding public opinion and soliciting views from the public, plus alarm services (burglary, medical, ambulance, fire, etc.)

Enterprise Informatization Strategy and Implementation --- Data Warehouse and Data Mining

  1. Association analysis: association analysis is mainly used to discover correlations between different events, i.e., when one event occurs, another often occurs as well. The focus is on quickly discovering associations of practical value; the main basis is that the probability and conditional probability of the events should be statistically significant. Association analysis requires two parameters: the minimum confidence (credibility), which states the minimum reliability a rule must meet and filters out rules of too little likelihood, and the minimum support, which states the minimum degree to which a rule must hold in the statistical sense.
  2. Sequence analysis: Sequence analysis is mainly used to discover events that occur successively within a certain time interval. These events constitute a sequence, and the discovered sequence should have universal significance. The basis is not only statistical probability, but also time constraints. When performing sequence analysis, confidence and support should also be calculated.
  3. Classification analysis: Classification analysis studies the characteristics of labeled samples to obtain rules or methods for determining which category a sample belongs to. When these rules and methods are used to classify samples of unknown category, they should achieve a certain accuracy. The main methods are the statistics-based Bayesian method, neural networks, decision trees, and so on. Classification analysis first assigns a label (one of a group of categories with different characteristics) to each record, that is, classifies the records according to the label, and then examines these labeled records to describe their characteristics. These descriptions may be explicit, e.g., a set of rules, or implicit, e.g., a mathematical model or formula.
  4. Cluster analysis: Cluster analysis aggregates unlabeled samples into different groups according to the principle of "birds of a feather flock together" and describes each such group. The main basis is that samples gathered in the same group should be similar to each other, while samples in different groups should be sufficiently dissimilar. Cluster analysis is the inverse process of classification analysis: its input is a set of unlabeled records. That is, the input records carry no category labels; the goal is to partition the record set reasonably according to certain rules and to describe the resulting categories explicitly or implicitly.
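The minimum support and minimum confidence thresholds described under correlation analysis can be sketched with a toy transaction set (the item names, thresholds, and figures below are invented for illustration, not taken from the text):

```python
# A minimal sketch of association-rule filtering by support and confidence.
# Hypothetical shopping baskets:
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent) over the transactions."""
    return support(antecedent | consequent) / support(antecedent)

# Rule {bread} -> {milk} is kept only if it clears both thresholds.
min_support, min_confidence = 0.5, 0.6
s = support({"bread", "milk"})       # 2 of 4 baskets -> 0.5
c = confidence({"bread"}, {"milk"})  # 2 of 3 bread baskets -> ~0.67
print(s >= min_support and c >= min_confidence)  # True: rule is retained
```

The two thresholds play exactly the roles the text describes: support discards rules too rare to matter statistically, and confidence discards rules whose consequent is too uncertain.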

Information technology knowledge --- application integration technology --- database and data warehouse technology

The realization of business intelligence has three levels: data report, multi-dimensional data analysis and data mining

Enterprise Informatization Strategy and Implementation---Business Intelligence

The processing process of the business intelligence system includes four stages: data preprocessing, data warehouse establishment, data analysis and data presentation:

  1. Data preprocessing: the first step in integrating an enterprise's raw data; it includes the three processes of data extraction, transformation, and loading.
  2. Establish a data warehouse: the basis for processing massive data.
  3. Data analysis: the key to reflect the intelligence of the system, generally using OLAP and data mining technology.
    1. OLAP (On-Line Analytical Processing): it not only aggregates and summarizes data, but also provides analysis functions such as slicing, dicing, drill-down, roll-up, and rotation, so users can easily perform multi-dimensional analysis on massive data.
    2. Data mining: the goal is to mine the hidden knowledge behind the data, establish analysis models through methods such as correlation analysis, clustering and classification, and predict the future development trend of the enterprise and the problems it will face.
  4. Data presentation: with massive data and increasingly rich analysis methods, data presentation mainly ensures that the system's analysis results are visualized.
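The slice and roll-up operations mentioned above can be sketched over a plain list of fact records, a stand-in for a data-warehouse fact table (the dimension names and sales figures are hypothetical):

```python
from collections import defaultdict

# Hypothetical fact table: (year, region, product, sales).
facts = [
    (2022, "North", "A", 100),
    (2022, "South", "A", 150),
    (2023, "North", "B", 200),
    (2023, "South", "A", 120),
]

def slice_cube(facts, year):
    """Slice: fix one dimension (here, year) to a single value."""
    return [f for f in facts if f[0] == year]

def roll_up(facts, dim):
    """Roll-up: aggregate sales along one dimension
    (dim index: 0 = year, 1 = region, 2 = product)."""
    totals = defaultdict(int)
    for year, region, product, sales in facts:
        totals[(year, region, product)[dim]] += sales
    return dict(totals)

print(slice_cube(facts, 2023))  # only the 2023 rows
print(roll_up(facts, 1))        # sales summed per region
```

Drill-down is simply the reverse direction: moving from the rolled-up totals back toward the detailed fact rows.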

System Planning---Cost-Benefit Analysis

According to the classification of cost behavior, it can be divided into fixed cost, variable cost and mixed cost.

  1. Fixed costs: costs whose total remains constant within a certain period and a certain range of business volume, unaffected by changes in business volume, such as management salaries, office expenses, depreciation of fixed assets, and employee training fees. Fixed costs can be divided into discretionary fixed costs and binding fixed costs. Binding fixed costs are those whose amount management cannot change, that is, costs that must be incurred, such as depreciation of office space and machines, rent for buildings and equipment, and management staff salaries; discretionary fixed costs, such as employee training fees, can be adjusted by management decisions.
  2. Variable costs: costs whose total changes in proportion to business volume within a certain period and a certain range of business volume, for example direct material costs, product packaging costs, outsourcing fees, and development bonuses. Variable costs can also be divided into discretionary variable costs and binding variable costs. Development bonuses and outsourcing fees can be regarded as discretionary variable costs; binding variable costs usually appear as the direct material consumption of system construction, of which direct material costs are the most typical.
  3. Mixed costs: costs that mix fixed and variable components, such as water and electricity charges and telephone charges. Such costs usually have a base amount; beyond a certain business volume they increase as business volume increases. For example, the salaries of quality assurance personnel and equipment power costs are constant within a certain business volume and grow with volume beyond it. Sometimes employee wages can also be treated as mixed costs: regular wages are generally fixed, but overtime pay is directly proportional to the overtime hours worked.
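The three cost behaviors above can be expressed as simple functions of business volume; a minimal sketch (all rates, bases, and thresholds are hypothetical numbers chosen for illustration):

```python
def fixed_cost(volume):
    """Fixed cost: constant regardless of volume (e.g., depreciation)."""
    return 10_000

def variable_cost(volume, rate=5.0):
    """Variable cost: proportional to volume (e.g., direct materials)."""
    return rate * volume

def mixed_cost(volume, base=2_000, rate=1.5, threshold=1_000):
    """Mixed cost: a fixed base, plus a variable part once volume
    exceeds the threshold (e.g., utilities with a base fee)."""
    extra = max(0, volume - threshold) * rate
    return base + extra

# Below the threshold only the base of the mixed cost applies:
total = fixed_cost(800) + variable_cost(800) + mixed_cost(800)
print(total)  # 10000 + 4000 + 2000 = 16000
```

Note how `mixed_cost` reproduces the behavior described in the text: flat up to the threshold, then growing in proportion to the volume beyond it.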

Software Engineering---Information System Development Method

  1. Top-down development method: first define, design, program, and test at the highest level, deferring unsolved problems to the next level down as subtasks;
  2. Bottom-up development method: According to the functional requirements of the system, start from specific devices, logic components or similar systems, and form the required system by interconnecting, modifying and expanding them;
  3. Formal method: It is a method with a solid mathematical foundation, which allows strict processing and demonstration of the system and development process, and is suitable for software development that requires extremely high system security levels
  4. Informal method: it does not make rigor its main focus, and it is embodied in the form of various development models.
  5. Holistic method: From the perspective of scope of application, it can be divided into holistic method and partial method. The method applicable to the whole process of software development is called the holistic method;
  6. Local approach: A software approach that applies to a specific stage of the development process is called a local approach.

Software Engineering---Development Model:

RUP includes four phases: the inception phase, the elaboration phase, the construction phase, and the delivery (transition) phase.

  1. Inception phase: the tasks of the inception phase are to establish the business model and define the project boundaries. All external entities that interact with the system must be identified, and the nature of the system's interaction with those entities defined. The focus in this phase is on the main risks to the business and to the requirements of the overall project.
  2. Elaboration phase: The task of the elaboration phase is to analyze the problem domain, establish a sound structure, and eliminate the elements with the highest risk in the project. During the Elaboration phase, decisions on the architecture must be made based on an understanding of the entire system, including its scope, major functions, and non-functional requirements such as performance, while establishing a supporting environment for the project.
  3. Construction phase: all remaining components and application functionality are developed, integrated into a product, and tested in detail. In a sense, the construction phase is a manufacturing process that focuses on managing resources and controlling operations to optimize cost, schedule, and quality. The main tasks of the construction phase are to minimize development cost by optimizing resources and avoiding unnecessary scrap and rework; to complete the analysis, development, and testing of all required functions and quickly produce a usable version; and to determine whether the software, the sites, and the users are ready for the software to be deployed.
  4. Delivery phase: when the baseline is complete enough to be installed in the end user's actual environment, the project enters the delivery phase. The focus of the delivery phase is to ensure that the software is available to end users. Its main tasks are to test the product and create release versions; finalize end-user support documents; validate the new system against user requirements; train users and maintainers; and obtain user feedback on the current version, adjusting the product accordingly, e.g., debugging and performance or usability enhancements.
  5. Each phase in RUP can be further broken down into iterations. An iteration is a complete development cycle that produces an executable version of the product: a subset of the final product that grows incrementally from iteration to iteration until it becomes the final system. The traditional project organization passes through each workflow in sequence, and each workflow only once, which is the familiar waterfall life cycle. The result is that when the product is complete and testing begins at the end of the implementation period, hidden problems left over from the analysis, design, and implementation phases emerge in large numbers, the project may stall, and a long bug-fixing cycle begins.
  6. A more flexible and less risky approach is to pass through the different development workflows multiple times, which allows the team to understand the requirements better, construct a robust architecture, and ultimately deliver a series of incrementally completed releases. This is called the iterative life cycle. Each sequential pass through the workflows is called an iteration. The software life cycle is a succession of iterations through which the software is developed incrementally. An iteration includes the development activities that produce an executable release, as well as the other auxiliary components necessary to use that release, such as release notes and user documentation. A development iteration is therefore, in a sense, a complete pass through all the workflows, including at least the requirements workflow, the analysis and design workflow, the implementation workflow, and the testing workflow; it is itself like a small waterfall project.

Origin blog.csdn.net/qq_25580555/article/details/129669634