2023-2024 Cloud Track Simulation Question Bank

The 2023-2024 Cloud Track Simulation Question Bank is now online. It fully covers the cloud computing, cloud service, big data, and artificial intelligence test points, every question comes with an analysis, and it is updated in real time and available permanently.

Participants and requirements:

Eligible candidates: Current Huawei ICT Academy students, and undergraduate and higher vocational college students from institutions that intend to become Huawei ICT Academies in the future.

Participation requirements:

1. There is no limit on the number of contestants a Huawei ICT Academy may register; a non-Huawei ICT Academy institution may register a maximum of 30 contestants;

2. Teams participating in the National Finals need an available PC and Internet access;

3. A finals team must consist of 3 students and 1 instructor, all from the same university;

4. Each finalist may participate in only one track (registration for the practice competition and the innovation competition is mutually exclusive).

Schedule introduction:

Three stages: regional competition; national competition; global finals.

The competition is divided into a practical competition and an innovation competition. The practical competition includes two tracks, network and cloud, and mainly tests students' ICT theoretical knowledge, hands-on computing skills, and teamwork. The innovation competition focuses on application innovation in new technology directions such as the Internet of Things, big data, and artificial intelligence, and mainly examines students' innovation, entrepreneurship, and collaborative development capabilities.

Award setting:

Special prize, first prize, second prize, third prize.



Cloud computing test points

1. In Huawei Cloud Stack, to help administrators manage cloud resources in line with the enterprise's IT projects, enterprise projects can be created, and each department's usage of the cloud resource pool can be controlled through enterprise project quotas. Which of the following roles cannot create enterprise projects? (Cloud computing test points)

A. Level-2 VDC administrator

B. Agent maintenance administrator

C. Operations administrator

D. Level-1 VDC administrator

Answer: B

Analysis: In Huawei Cloud Stack, enterprise projects are created by the operations administrator or a level-1 VDC administrator. The agent maintenance administrator is responsible for maintaining cloud platform equipment and systems and does not have the authority to create enterprise projects, so the answer is B, agent maintenance administrator.

2. Which of the following descriptions of Huawei Cloud Stack as a commercial solution is incorrect? (Cloud computing test points)

A. Reliability, including overall reliability, data reliability and single device reliability

B. Security, follow industry security standards, design security protection, and ensure the safety of customer data centers

C. Scalability. The resources supporting the data center need to be elastically scalable according to business application workload requirements. The IT infrastructure should be loosely coupled with the business system.

D. Advancedness: the open-source OpenStack release is synchronized twice a year, ensuring that the latest open-source OpenStack features reach the customer's live network environment.

Answer: D

Analysis: As a commercial solution, Huawei Cloud Stack does offer reliability, security, and scalability. However, it does not synchronize with the open-source OpenStack release twice a year, so that description is wrong. The incorrect option is therefore D, advancedness.

14. When restoring from the backup server, you need to select the backup file of the corresponding component. What is the principle for selecting backup files? (Cloud computing test points)

A. First check the time of the failure alarm for the corresponding component, and select the closest backup file before the failure time for recovery.

B. First check the time of the failure alarm for the corresponding component, and select the closest backup file after the failure time for recovery.

C. First check the time of the failure alarm for the corresponding component, and select the largest backup file before the failure time for recovery.

D. First check the time of the failure alarm for the corresponding component, and select the largest backup file after the failure time for recovery.

Answer: A

Analysis: The principle is to first check the time of the failure alarm for the corresponding component, then select the closest backup file before the failure time for recovery. The backup server stores the backup files of each component; when a component fails and must be restored, the appropriate backup file has to be chosen. To ensure data integrity and accuracy, the backup taken closest to, but before, the failure time is normally used: this restores the component to its pre-failure state as closely as possible and minimizes data loss. Option A is therefore correct. Option B selects the closest backup after the failure time, which violates the recovery principle. Options C and D select by file size, which is unrelated to the failure time and also violates the recovery principle.
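The selection rule in option A amounts to a simple timestamp comparison. Below is an illustrative Python sketch, not any Huawei tool; the file names and times are made up:

```python
from datetime import datetime

def pick_backup(backups, failure_time):
    """Select the backup taken closest to, but not after, the failure time."""
    candidates = [b for b in backups if b["time"] <= failure_time]
    if not candidates:
        return None  # no usable backup exists before the failure
    return max(candidates, key=lambda b: b["time"])

# Hypothetical backup list for one component
backups = [
    {"name": "comp_0100.bak", "time": datetime(2024, 1, 5, 1, 0)},
    {"name": "comp_0300.bak", "time": datetime(2024, 1, 5, 3, 0)},
    {"name": "comp_0500.bak", "time": datetime(2024, 1, 5, 5, 0)},
]
failure = datetime(2024, 1, 5, 4, 30)
print(pick_backup(backups, failure)["name"])  # comp_0300.bak
```

Note that the 05:00 backup, although closest in absolute terms, is rejected because it was taken after the failure, matching why option B is wrong.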

15. If the "Users" permission group was not configured when the template was created, which permission group can be set when adding a virtual machine user? (Cloud computing test points)

A. Users

B. Auditors

C. Operators

D. Administrators

Answer: D

Analysis: If the "Users" permission group was not configured when the template was created, the permission group can be set to "Administrators" when adding a virtual machine user. Permission groups define the permission levels and access rights of virtual machine users. Without the "Users" group, "Administrators" is the highest-privilege group available, with full management and access permissions, so option D is correct. The "Users" group in option A is unavailable in this scenario, and the "Auditors" and "Operators" groups in options B and C have lower permission levels that do not meet the requirement.

Cloud service test points

73. Which operations can only be performed when the elastic cloud server is shut down? (Cloud service test points)

A. Reset password

B. Create an image

C. Change security group

D. Migrate cloud server

Answer: D

Analysis: While an elastic cloud server is running, operations such as restarting, adjusting the configuration, and changing security groups can all be performed, but some operations require the server to be shut down. Option by option: A. Reset password: the password can be reset without shutting down the server. B. Create an image: an image can be created whether the server is running or shut down. C. Change security group: the security group can be changed whether the server is running or shut down. D. Migrate cloud server: migration can only be performed when the server is shut down, because the running server must first be stopped, its data copied to the new location, and the server then restarted. Option D is therefore the operation that can only be performed while the elastic cloud server is shut down.

74. If only one scaling policy is enabled in the scaling group, and the execution action in that policy is "add two instances", how many instances are there in the scaling group? (Cloud service test points)

A. 1

B. 2

C. 3

D. Not sure

Answer: D

Analysis: According to the question, only one scaling policy is enabled in the scaling group, and its execution action is to add two instances. From the information given, the number of instances in the scaling group cannot be determined, because the number of instances before the policy executes is unknown. If the group holds N instances before the policy runs, it holds N + 2 afterwards; since N is not provided, the total cannot be calculated. Option D, not sure, is therefore the correct answer.
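The indeterminacy comes down to simple arithmetic: the policy computes N + 2, but N is unknown. A tiny illustrative sketch (the function name is mine, not any cloud API):

```python
def after_scale_out(n_before, delta=2):
    """The policy adds `delta` instances to whatever is running now."""
    return n_before + delta

# Different starting counts give different results, so the final
# count is undetermined unless N is known:
for n in (1, 3, 5):
    print(n, "->", after_scale_out(n))  # 1 -> 3, 3 -> 5, 5 -> 7
```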

75. By creating (), the system can automatically back up the cloud disk at a set time point. (Cloud service test points)

A. Backup

B. Tag

C. Backup policy

D. Share

Answer: C

Analysis: By creating a backup policy, the system can automatically back up the cloud disk at the set time points. A backup policy is a setting for automatically backing up cloud disks on a regular schedule: it specifies parameters such as the backup time point, the backup cycle, and the number of backups to retain, and the system then backs up the disk according to those rules. Option C, backup policy, is therefore the correct answer.
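How a policy's parameters (start time, cycle, retention count) translate into concrete backups can be sketched in a few lines of Python. This is a toy model, not the actual cloud API; all parameter values are made up:

```python
from datetime import datetime, timedelta

def next_backups(start, period_days, count):
    """Time points at which the policy fires: start, start+period, ..."""
    return [start + timedelta(days=period_days * i) for i in range(count)]

def apply_retention(backups, keep):
    """Keep only the most recent `keep` backups, as a retention count would."""
    return sorted(backups)[-keep:]

# Weekly backups at 02:00 starting Jan 1, retaining the latest 3
points = next_backups(datetime(2024, 1, 1, 2, 0), period_days=7, count=5)
kept = apply_retention(points, keep=3)
print(kept[0])  # 2024-01-15 02:00:00 — oldest backup still retained
```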

76. If you want to keep multiple versions of an object, which of the following methods can be used? (Cloud service test points)

A. Modify the storage class

B. Enable log management

C. Configure lifecycle rules

D. Enable multi-version control

Answer: D

Analysis: To keep multiple versions of an object, enable multi-version control (versioning). Versioning is a feature of the object storage service that allows multiple versions of the same object to be stored in the same bucket. With versioning enabled, each time an object is overwritten or deleted, the system automatically retains the previous version, providing multi-version management and retention. Option D, enabling multi-version control, is therefore the correct answer.
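The retain-on-overwrite behavior of versioning can be modeled with a toy in-memory bucket. This is an illustrative sketch, not the object storage service's real API:

```python
class VersionedBucket:
    """Toy model of object versioning: every put keeps earlier versions."""
    def __init__(self):
        self._store = {}

    def put(self, key, data):
        # Overwriting appends a new version instead of replacing the old one
        self._store.setdefault(key, []).append(data)

    def get(self, key, version=-1):
        """Default returns the latest version; older ones stay retrievable."""
        return self._store[key][version]

bucket = VersionedBucket()
bucket.put("report.txt", "v1 contents")
bucket.put("report.txt", "v2 contents")   # would replace v1 without versioning
print(bucket.get("report.txt"))           # v2 contents  (latest)
print(bucket.get("report.txt", 0))        # v1 contents  (older version retained)
```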

Artificial intelligence (AI) test points

238. Which of the following are subfields of AI? (AI test points)

A. Machine learning

B. Computer vision

C. Speech recognition

D. Natural language processing

Answer: ABCD

Analysis: Machine learning lets computers imitate human learning, improving an algorithm's performance by learning rules and patterns from data. Computer vision is the ability of computers to understand and analyze images and videos as humans do, covering applications such as image recognition, object detection, and face recognition. Speech recognition converts voice signals into text or commands and is one of the key technologies behind voice interaction. Natural language processing is the ability of computers to handle natural language, covering tasks such as text classification, information extraction, and semantic understanding, and underpins applications such as intelligent dialogue and voice assistants. In summary, all four options are subfields of AI.

239. What elements does artificial intelligence include? (AI test points)

A. Algorithms

B. Scenarios

C. Computing power

D. Data

Answer: ABCD

Analysis: Artificial intelligence is the discipline of enabling machines to simulate and realize human intelligence. Realizing it involves many elements, the most important being algorithms, scenarios, computing power, and data. Algorithms are the AI models designed, optimized, and implemented with mathematics, logic, and related methods, and are the foundation of AI. Scenarios are the concrete settings in which AI is applied, such as face recognition and intelligent voice. Computing power is the processing capability needed to run AI technology, including hardware devices and the related software tools. Data is the essential raw material of AI, in terms of its quality, quantity, and type. The correct answer is therefore ABCD: algorithms, scenarios, computing power, and data.

240. Which of the following are part of Huawei's full-stack AI solution? (AI test points)

A. Ascend

B. CANN

C. ModelArts

D. MindSpore

Answer: ABCD

Analysis: Huawei's full-stack AI solution consists of four main parts: the Ascend chips, CANN, the ModelArts platform, and the MindSpore framework. Ascend is Huawei's self-developed line of AI chips, supporting scenarios such as high-performance computing and deep learning inference and training. CANN (Compute Architecture for Neural Networks) is the heterogeneous computing architecture for the Ascend chips and greatly improves hardware acceleration. ModelArts is Huawei Cloud's one-stop AI development platform, providing a rich AI development and runtime environment. MindSpore is Huawei's self-developed AI framework, designed to be efficient, flexible, and easy to use. The correct answer is therefore ABCD.

241. Which of the following are application areas of AI? (AI test points)

A. Smart education

B. Smart city

C. Smart home

D. Smart medical care

Answer: ABCD

Analysis: AI is applied across virtually every industry; smart education, smart cities, smart homes, and smart medical care are all application areas. Specifically: A. Smart education: intelligent recommendation of educational resources, intelligent tutoring, and teaching management, helping teachers and students work more efficiently. B. Smart cities: urban transportation, environmental protection, security, and governance, using data analysis and intelligent decision-making to make cities smarter. C. Smart homes: smart home control, voice recognition, and smart security, making homes more convenient, comfortable, and safe. D. Smart medical care: medical imaging, diagnostic assistance, and intelligent health management, improving the accuracy and efficiency of diagnosis and treatment. Options ABCD are therefore all AI application areas.

Big data test points

243. As the core object of Spark, which of the following characteristics does an RDD have? (Big data test points)

A. Read-only

B. Partitioned

C. Fault-tolerant

D. Efficient

Answer: ABCD

Analysis: An RDD (Resilient Distributed Dataset), the core object of Spark, has the following characteristics: A. Read-only (immutable): an RDD cannot be modified once created; transforming or operating on it produces a new RDD. B. Partitioned: an RDD divides its data into multiple partitions for parallel processing; each partition is a subset of the data, computed on a different node in the cluster. C. Fault-tolerant: an RDD records the series of transformations that produced it (its lineage); if a partition is lost, Spark replays the lineage to recompute it. D. Efficient: an RDD supports in-memory computing, keeping data in memory for high-speed access, and achieves efficient computation through partitioning and parallelism. The correct answer is therefore ABCD.
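The read-only, partitioned, and lineage-based fault-tolerance properties can be mimicked in plain Python. This is a toy stand-in for illustration, not Spark's actual RDD implementation:

```python
class ToyRDD:
    """Toy stand-in for an RDD: immutable partitions plus a recorded lineage."""
    def __init__(self, partitions, lineage=()):
        self.partitions = tuple(tuple(p) for p in partitions)  # read-only
        self.lineage = lineage

    def map(self, fn):
        """Transformations never mutate; they return a new ToyRDD."""
        new_parts = [[fn(x) for x in part] for part in self.partitions]
        return ToyRDD(new_parts, self.lineage + (fn,))

    def recompute(self, source_partition):
        """Fault tolerance: replay the lineage over the original data."""
        data = list(source_partition)
        for fn in self.lineage:
            data = [fn(x) for x in data]
        return data

base = ToyRDD([[1, 2], [3, 4]])            # two partitions
doubled = base.map(lambda x: x * 2)        # new RDD; base is unchanged
print(doubled.partitions)                  # ((2, 4), (6, 8))
print(doubled.recompute([3, 4]))           # [6, 8] — "lost" partition rebuilt
```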

244. In FusionInsight HD, which computing frameworks can be used in real-time processing scenarios? (Big data test points)

A. Spark Streaming

B. Streaming

C. MapReduce

D. HDFS

Answer: AB

Analysis: For real-time processing scenarios in FusionInsight HD, two computing frameworks are typically used: A. Spark Streaming: a stream-computing framework built on the Spark engine that processes and analyzes real-time data streams with high performance and scalability, offering fault tolerance, exactly-once semantics, and elasticity. B. Streaming (based on Storm): an open-source real-time stream-processing system for distributed computation over real-time data streams; it is fault-tolerant, scalable, and high-throughput, making it suitable for low-latency, high-throughput real-time scenarios. C. MapReduce is a batch computing model and is not suitable for real-time processing. D. HDFS is a distributed file system that provides large-scale data storage and access; it is not a real-time computing framework.

245. Which of the following will cause the HDFS NameNode to enter safe mode (read-only mode)? (Big data test points)

A. The disk space holding the metadata of the active and standby NameNodes is insufficient.

B. The number of lost blocks exceeds the threshold.

C. The number of lost replicas exceeds the threshold.

D. The number of damaged replicas exceeds the threshold.

Answer: AB

Analysis: The NameNode is the key node of an HDFS cluster: it maintains the metadata for the entire file system namespace and its data blocks. Under certain conditions the NameNode enters safe mode, in which the cluster is read-only and no new data blocks can be written. The typical triggers are: A. the disk space holding the metadata of the active and standby NameNodes is insufficient; and B. the number of lost blocks exceeds the threshold (by default, a block is considered lost when its number of available replicas falls below the configured minimum). The conditions in options C and D concern individual replicas; as long as at least one healthy replica of each block remains, the blocks are still readable, so lost or damaged replicas alone do not put the NameNode into safe mode. The correct answers are therefore A and B.
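The "enough blocks reported" condition that keeps a NameNode in safe mode can be sketched as a simple ratio test. This is a toy model loosely based on HDFS's block-threshold behavior (the 0.999 default mirrors `dfs.namenode.safemode.threshold-pct`); it is not the actual NameNode code:

```python
def should_enter_safemode(reported_blocks, total_blocks, threshold_pct=0.999):
    """Stay read-only until enough blocks have been reported as available."""
    if total_blocks == 0:
        return False  # empty namespace: nothing can be missing
    return reported_blocks / total_blocks < threshold_pct

print(should_enter_safemode(990, 1000))    # True  — too many blocks missing
print(should_enter_safemode(1000, 1000))   # False — cluster healthy
```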



Origin blog.csdn.net/weixin_52808317/article/details/133699158