Pitfalls and lessons from non-functional testing of a financial data middle platform

Background (problem description) 

Recently I took on non-functional testing tasks for the data middle platforms of three banks in a row, and that work has now come to an end. Looking back over this period, I stepped into quite a few pitfalls. To sum up: the essence of a data middle platform is the reuse and integration of enterprise capabilities. Sharing those capabilities requires an agile response to rapidly changing business needs, which in turn requires support from a variety of platforms and components: not only the extraction and consolidation of high-quality data, but also reusable, shareable, and elastically scalable service capabilities. Over the whole non-functional testing cycle, the following work was therefore carried out around the customer's pain points and needs:

Testing strategy

Requirements research stage: I was responsible for the preliminary requirements research, clarifying what each financial business scenario demands of the platform's data capabilities. First, become familiar with the platform's own data assets and indicator system across its business lines, its data lineage map, and how the various components are used in data collection, processing, storage, and presentation. Second, for each typical business scenario (customer marketing, financial performance, risk compliance, decision support, regulatory reporting, branch applications, product innovation, intelligent applications, channel services, etc.), clarify the access methods the platform provides (data services, algorithm services, computing services, search services, model services, image recognition, etc.): whether access is via API, a visual interface, a message-subscription model, or something else, plus the access frequency and peak load in different time intervals, the time range covered by a single query, the number of records a single query returns, and the detailed data distribution curve. A mind map was used to sort these into test points and a requirements survey form. The final survey form covers the typical application scenarios and indicators, the non-functional test plan explains the test strategy designed for each component, and every non-functional scenario is quantified.
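
As an illustration, each row of such a survey form can be captured as a structured record. The field names below are hypothetical and only sketch the kind of quantitative targets collected during the research:

```python
from dataclasses import dataclass

@dataclass
class ScenarioSurvey:
    """One row of a (hypothetical) non-functional requirements survey form."""
    scenario: str           # e.g. "customer marketing", "regulatory reporting"
    access_method: str      # "API", "visual interface", "message subscription", ...
    peak_tps: float         # peak access frequency, requests per second
    query_window_days: int  # time range covered by a single query
    records_per_query: int  # number of records returned by a single query

# Illustrative entries, not real customer figures
survey = [
    ScenarioSurvey("customer marketing", "API", 200, 30, 5_000),
    ScenarioSurvey("regulatory reporting", "visual interface", 5, 365, 100_000),
]
```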

Test preparation phase: prepare test scripts for each scenario, and prepare test data that satisfies both the data-source diversity (ES, Hive, HBase, MySQL, etc.) and the data-volume requirements.
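
A minimal sketch of test data preparation for one of those sources, assuming a reachable MySQL instance, the pymysql driver, and a hypothetical table t_txn; the same generator could feed the Hive, HBase, or ES loaders:

```python
import datetime
import random

import pymysql  # assumed installed; connection details below are placeholders

def gen_rows(n):
    """Generate n synthetic transaction rows to meet the data-volume requirement."""
    base = datetime.date(2023, 1, 1)
    for i in range(n):
        yield (
            f"ACC{i % 100_000:06d}",                # account id
            round(random.uniform(1, 10_000), 2),    # amount
            (base + datetime.timedelta(days=random.randrange(365))).isoformat(),
        )

conn = pymysql.connect(host="127.0.0.1", user="test", password="test", database="nft")
with conn.cursor() as cur:
    cur.executemany(
        "INSERT INTO t_txn (acct_id, amount, txn_date) VALUES (%s, %s, %s)",
        list(gen_rows(100_000)),
    )
conn.commit()
```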

Test implementation phase:

Non-functional testing proceeds on the principle of components first, then applications, and verifies three aspects: high availability, high performance, and high scalability. On the performance side, the focus is whether the data middle platform provides general-purpose, reliable platform support for data analysis, data product development, data lake ingestion, data integration, and so on, covering data integration capability, data migration capability, data computing capability, data storage capability, and data access capability across the various data forms.
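
For the data access capability, a minimal load sketch against a data API can look like the following; the endpoint URL and query parameters are hypothetical, and dedicated load tooling would normally be used, but it shows how per-request latency and percentiles are captured:

```python
import concurrent.futures
import statistics
import time

import requests

URL = "http://data-gateway.example.com/api/v1/query"  # hypothetical endpoint

def one_call(_):
    """Issue one query and return its wall-clock latency in seconds."""
    t0 = time.perf_counter()
    r = requests.get(URL, params={"acct_id": "ACC000001", "days": 30}, timeout=10)
    r.raise_for_status()
    return time.perf_counter() - t0

with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    latencies = sorted(pool.map(one_call, range(1000)))

print("p50:", statistics.median(latencies))
print("p95:", latencies[int(len(latencies) * 0.95)])
```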

On the reliability side: the data middle platform is a fundamental platform for the bank's business systems; it serves the entire front office and supports internal and external business systems alike, and because its operation depends on many underlying big-data technology stacks, its business continuity faces unprecedented challenges. Business continuity therefore deserves special attention, and since the system borrows the component-heavy architecture of Internet companies, reliability issues are all the more prominent. Reliability testing accordingly verified the following: whether gateway rate limiting and degradation take effect; the effectiveness of rate limiting at the global, application, and service dimensions; the validity of clusters such as web service cluster nodes, data API gateway clusters, data API services, and data query and visualization services; the impact of fault conditions such as service downtime, traffic surges, and cache penetration on other services' computing resources and on important business; whether the microservice component Hystrix effectively performs circuit breaking and degradation to stop chain reactions of failures; and whether data integration jobs support rerunning.
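
Rate limiting and degradation can be checked from the outside by deliberately exceeding the configured limit and asserting that overflow requests are rejected quickly instead of hanging. The endpoint, the limit, and the use of HTTP 429 as the throttling response are all assumptions in this sketch:

```python
import requests

URL = "http://data-gateway.example.com/api/v1/query"  # hypothetical endpoint
CONFIGURED_LIMIT = 100  # requests per window, per the assumed gateway config

codes = []
for _ in range(CONFIGURED_LIMIT * 2):  # deliberately exceed the limit
    codes.append(requests.get(URL, timeout=5).status_code)

rejected = codes.count(429)
assert rejected > 0, "rate limit did not trigger"
# With circuit breaking and degradation in place, throttled or failed calls
# should still receive a fast fallback response rather than time out.
print(f"{rejected} of {len(codes)} requests were throttled")
```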

On the scalability side: the big-data base platform components of the data middle platform need superior scalability, including but not limited to physical expansion of large-scale cluster servers; node and capacity expansion of distributed storage; elastically scalable compute power for distributed, batch, and streaming computing; distributed database scalability; and intelligent data management, data federation, and multi-level resource scheduling and control, so that highly available, highly concurrent, massive-data application scenarios can be supported. We therefore verified scalability on several fronts, including elastic scaling of service cluster processing nodes and scalability of compute nodes.
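
One way to quantify elastic scaling is scaling efficiency: measured throughput after adding nodes divided by the ideal linear throughput. A small helper with illustrative figures:

```python
def scaling_efficiency(tps_before, nodes_before, tps_after, nodes_after):
    """Ratio of measured speedup to ideal linear speedup when nodes are added."""
    ideal_tps = tps_before * nodes_after / nodes_before
    return tps_after / ideal_tps

# Illustrative: scaling compute nodes from 4 to 8 raises throughput 1200 -> 2100 TPS
print(f"{scaling_efficiency(1200, 4, 2100, 8):.0%}")  # 88% of linear scaling
```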

On test progress control: during the tests, the non-functional test implementation strategy was adjusted in time based on test results, monitoring data, and schedule deviation, working with team partners and using data-driven methods supplemented by experience-based judgment; parallelism was increased to keep the schedule deviation within 5%. As the test cycle advanced, the defect count showed a converging trend as the stress tests progressed.
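
The schedule deviation held within 5% above is simply the gap between planned and actual completed work, relative to plan; a worked check with made-up numbers:

```python
def schedule_deviation(planned_done, actual_done):
    """Relative deviation from plan (positive means behind schedule)."""
    return (planned_done - actual_done) / planned_done

# e.g. 80 scenario runs planned to date, 77 actually executed
print(f"{schedule_deviation(80, 77):.1%}")  # 3.8%, within the 5% threshold
```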

Test report stage: test monitoring and result data were collected to form the test report, the number and source distribution of defects were analyzed, and project data assets were extracted. Through this round of non-functional testing we drew on advanced industry practice and also became familiar with the data middle platform standards system (basic standards, technical standards, security standards, application and service standards).
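
The source distribution of defects can be tallied straight from the defect log; the component names below are illustrative:

```python
from collections import Counter

# (component, severity) pairs as they might appear in a defect log; illustrative only
defects = [
    ("data API gateway", "major"), ("Impala", "minor"), ("HBase", "major"),
    ("data API gateway", "minor"), ("Impala", "major"), ("Kafka", "minor"),
]

by_source = Counter(component for component, _ in defects)
for component, count in by_source.most_common():
    print(f"{component}: {count}")
```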

  The data middle platform involves these common technical components:

  1) Big data base platform built on the CDP big data platform, which acts as the data provider within the data middle platform

  2) Front-end development framework of the big data base platform: Vue

  3) Back-end development framework of the big data base platform: Spring Boot

  4) Unified portal of the data middle platform: Spring Cloud framework

  5) Ad hoc multidimensional OLAP analysis engine: Kylin

  6) MPP SQL query engine: Impala

  7) Log management system: ELK

  8) Log collection and analysis: Filebeat

  9) Authentication: Kerberos

  10) Offline data warehouse: Hive

  11) Distributed file system: HDFS

  12) Ad hoc query KV storage: HBase

  13) Offline computing platform: Spark

  14) Real-time computing platform: Storm

  15) Quasi-real-time computing platform: Spark Streaming

  16) Reverse proxy / load balancing for the data platform: F5, Nginx

  17) Storage and retrieval of log and analysis data: Elasticsearch

  18) Hot data cache service: Redis

  19) Microservice registration and discovery: Eureka

  20) File cache: Alluxio, with files partitioned by day

  21) Message middleware: Kafka

  22) Job resource scheduling: YARN

  23) Database (configuration information, monitoring information, etc.): MySQL

  24) Configuration center: Apollo

  25) Service circuit breaker: Hystrix

  26) Monitoring suite: Prometheus

  27) APM distributed tracing: SkyWalking

  28) Container orchestration and resource scheduling: Kubernetes

  29) Deployment mode: virtual machine + container cloud high-availability deployment

Finally, I would like to thank everyone who has read this article carefully.

