Database cloud capacity management

Greene said that, like many people, he has been working remotely from home during the coronavirus pandemic. The IT team he leads manages database cloud capacity in a converged VMware environment. He noted that the capacity management issues faced by public cloud providers are similar to the ones his own team is addressing, so his team members have attended various online vendor sessions and taken online training to learn whether the cloud computing world has run into the same problems, and which technologies and experiences can be borrowed to improve their analysis and processes.


Drawing on the cloud providers' view of their customers and on his own extensive experience, Greene began to identify the challenges of capacity management. These problems show up differently depending on the deployment model: in a private cloud they tend to appear as insufficient capacity on a particular machine, while in a public cloud environment they appear as rising costs.

The key topics Greene identified for capacity management in cloud computing environments are:

  • A capacity model is needed that all stakeholders can understand from their own perspective.
  • Application teams may not really know what they need when provisioning capacity.
  • More demanding applications must be handled differently.
  • Cleanup does not happen naturally, and this wastes capacity.
  • There are many different views on what IaaS, PaaS, and other service models really provide.

The sections below take an in-depth look at each of these key topics.

Capacity model

The first key theme is the need to build a capacity model that all stakeholders can understand. Whether you hand a finance officer or an application system administrator a list of 150 servers or 200 containers to review for usage, the exercise rarely produces useful results. Why? Because few people can make sense of a host name or a container (or service instance) name. After trying this, the IT team Greene leads enhanced the capacity model derived from the server and container list by merging in data from the configuration management database (CMDB), the databases themselves, and operating system monitoring tools, pulling information from across the network to check capacity usage. When talking with an application team, this makes it possible to identify which databases sit on which servers, which database versions are in use (so the team can see which legacy databases still need to be migrated to meet risk requirements), and the costs involved. When talking with the people who pay, it helps to start from the resources that generate the bill (disk, CPU, memory, and so on) and map them to the application teams involved and the versions they use.

With such a model in place, the organization's IT team can take the issues each group cares about and map them to an application or user community, which helps those groups assess whether they still need a resource and see where changes are required, such as migrating off an original Windows 2000 deployment. Essentially, it comes down to a model that can produce a set of tailored reports that help people understand what they have, rather than an itemized list of the resources used for billing.
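To make the idea concrete, here is a minimal sketch of the kind of join described above: enriching a raw server list with CMDB ownership data and OS monitoring figures, then rolling billable resources up by application team. All of the field names, sample hosts, and records are hypothetical illustrations, not a real CMDB schema.

```python
# Sketch: merge inventory, CMDB, and monitoring data into a per-team report.
# Every record below is a made-up example, not real data.
from collections import defaultdict

# Raw inventory: the "list of 150 servers" that means little on its own.
inventory = [
    {"host": "dbsrv-0042", "cpu_cores": 16, "mem_gb": 128, "disk_gb": 2000},
    {"host": "dbsrv-0043", "cpu_cores": 8, "mem_gb": 64, "disk_gb": 1000},
]

# CMDB records mapping hosts to owners and database versions (hypothetical).
cmdb = {
    "dbsrv-0042": {"app_team": "payments", "db": "Oracle 19c"},
    "dbsrv-0043": {"app_team": "reporting", "db": "SQL Server 2008"},  # legacy
}

# Average utilization figures from OS-level monitoring tools (hypothetical).
monitoring = {
    "dbsrv-0042": {"cpu_pct": 55.0, "mem_pct": 70.0},
    "dbsrv-0043": {"cpu_pct": 5.0, "mem_pct": 20.0},
}

def capacity_report(inventory, cmdb, monitoring):
    """Roll billable resources up to the application teams that own them."""
    report = defaultdict(
        lambda: {"hosts": [], "cpu_cores": 0, "mem_gb": 0, "disk_gb": 0}
    )
    for server in inventory:
        host = server["host"]
        meta = cmdb.get(host, {"app_team": "unmapped", "db": "unknown"})
        team = report[meta["app_team"]]
        team["hosts"].append((host, meta["db"], monitoring.get(host, {})))
        team["cpu_cores"] += server["cpu_cores"]
        team["mem_gb"] += server["mem_gb"]
        team["disk_gb"] += server["disk_gb"]
    return dict(report)

for team, usage in capacity_report(inventory, cmdb, monitoring).items():
    print(team, usage)
```

The output is a per-team view of hosts, database versions, and billable resources, which is much closer to the tailored report stakeholders can actually read than a bare host list.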

Assess needs

Greene said the next theme they discovered is that application teams may not know what they really need when they first migrate to a cloud environment or build a new application. They usually have excellent features and ideas that impress users, but when asked how many CPUs and how much memory they need, they tend to repeat whatever the vendor recommends to make the product run at its best, while the infrastructure department is under pressure to cut costs and raise utilization. The challenge is that estimates of how the application will be adopted, and of what features might come next, rest on many assumptions and even guesses. This often leads to a situation where the application must be migrated to a different operating environment to meet its performance requirements, which costs the application and infrastructure teams a great deal of time and effort.

One assumption many teams make is that a single architecture can be built to fit all applications, but most large companies have a broad portfolio that usually follows an 80/20 or 90/10 rule: only a few applications drive the business, have a large user base, or demand higher performance. So while most applications fit comfortably into a cost-effective, high-density environment, it is important to offer a higher-performance environment, or at least options, for the few that need it, rather than adopting a one-size-fits-all solution.
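As a rough illustration of that 80/20 split, the sketch below sorts a portfolio into a high-density tier and a high-performance tier. The thresholds, field names, and sample applications are all assumptions made for the example, not vendor guidance.

```python
# Sketch: place the few demanding applications in a dedicated tier and let
# everything else share a cost-effective, high-density one. The cutoffs are
# illustrative; a real policy would come from measured utilization data.

apps = [
    {"name": "order-entry", "peak_cpu_pct": 92, "users": 40_000},
    {"name": "intranet-wiki", "peak_cpu_pct": 12, "users": 300},
    {"name": "reporting-etl", "peak_cpu_pct": 75, "users": 50},
    {"name": "hr-portal", "peak_cpu_pct": 20, "users": 1_200},
]

def assign_tier(app, cpu_threshold=70, user_threshold=10_000):
    """High peak load or a large user base earns the high-performance tier."""
    if app["peak_cpu_pct"] >= cpu_threshold or app["users"] >= user_threshold:
        return "high-performance"
    return "high-density"

for app in apps:
    print(f'{app["name"]:15s} -> {assign_tier(app)}')
```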

Cleanup does not happen naturally

Another theme is that cleanup does not happen naturally, and this wastes capacity. In a public cloud this usually shows up as rising costs, while in a private cloud it typically manifests as insufficient capacity or unexpected growth. In most environments, developers are allowed to provision systems for their tasks in an automated way, but when the capacity is no longer needed, no one cleans it up. So when a one-off development project that consumed resources finishes, or when teams migrate to the next version of a database, web server, or operating system to meet architecture or risk standards, no one wants to give up the old resources (perhaps they want to confirm the new ones really work first). If this goes unattended, the longer an organization operates on a cloud platform, the more unused resources accumulate. The key is to show the people who pay the bills, or who must justify their use of private cloud resources, exactly what they are holding and what it relates to, so they can make the right decisions.
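One way to surface that accumulating waste is to flag resources with no recent activity as reclamation candidates for their owners to review. The sketch below assumes a hypothetical 90-day idle window and a simple record format with a last-activity date; in practice the data would come from monitoring or billing tools.

```python
# Sketch: list resources idle past a stale window so owners can confirm
# whether they are still needed. Window, dates, and records are assumptions.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)
today = date(2021, 2, 1)  # fixed date so the example is reproducible

resources = [
    {"name": "dev-db-old", "owner": "payments", "last_activity": date(2020, 7, 3)},
    {"name": "prod-db-01", "owner": "payments", "last_activity": date(2021, 1, 30)},
    {"name": "poc-webserver", "owner": "marketing", "last_activity": date(2020, 9, 15)},
]

def reclamation_candidates(resources, as_of, stale_after=STALE_AFTER):
    """Return resources idle longer than the stale window."""
    return [r for r in resources if as_of - r["last_activity"] > stale_after]

for r in reclamation_candidates(resources, today):
    idle_days = (today - r["last_activity"]).days
    print(f'{r["name"]} (owner: {r["owner"]}) idle for {idle_days} days')
```

The point is not automated deletion but visibility: putting an idle-resource list in front of the people who pay for it is what prompts the cleanup decision.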

Handling demanding applications

Finally, many organizations began migrating to the cloud once they understood how low utilization was in their original data centers and how much waste inefficient IT equipment caused. Sensitive to this, cloud providers found ways to commit the same resources (CPU, memory, or I/O bandwidth) to multiple applications or virtual machines at the same time, on the assumption that applications sharing those resources are unlikely to all use them at once.

Normally this is a good fit for things like web servers, where requests arrive throughout the day and are answered quickly. It may not be good for database servers, however: a database server can take several seconds to process some queries, and database workloads tend to have pronounced peaks. The danger is that if every application demands its CPU or memory at the same time, the system starts swapping, spending all of its time moving processes in and out of memory, or is pushed to a point where demand simply cannot be met. So in this case, analyze each application or product individually (for example, a database typically allocates a large memory area at startup and never releases it, so if that memory is overcommitted it may get swapped out) and make the right decision for that application, rather than applying generic guidelines derived from the vendor's lab environment.
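A simple check along these lines is to compute a host's memory overcommit ratio before placing a database VM on it. The host size, VM reservations, and the 1.0 cutoff for memory-pinning workloads below are illustrative assumptions, not a vendor rule.

```python
# Sketch: memory promised to VMs divided by the memory the host really has.
# Numbers are made up for the example.

def memory_overcommit_ratio(host_mem_gb, vm_reservations_gb):
    """Total memory committed to VMs relative to physical host memory."""
    return sum(vm_reservations_gb) / host_mem_gb

host_mem_gb = 256
vm_reservations_gb = [64, 64, 64, 48]  # memory promised to each VM

ratio = memory_overcommit_ratio(host_mem_gb, vm_reservations_gb)
print(f"overcommit ratio: {ratio:.2f}")

# Databases pin large memory regions at startup, so overcommitting them risks
# swapping; bursty web workloads tolerate a higher ratio.
if ratio > 1.0:
    print("avoid placing database VMs here: risk of swapping under load")
else:
    print("headroom remains for memory-pinning workloads like databases")
```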

What exactly do IaaS, PaaS and XaaS provide?

The last theme is that people hold many different views of what IaaS, PaaS, and XaaS really provide. Application teams read plenty of articles about what cloud computing can do, and they assume that when they migrate to the cloud they somehow get more features and services. Greene said organizations get only what is actually built and designed into the system: everything from backups that meet organizational requirements, to failover automation, to firewall security has to be planned and implemented with the appropriate vendor tools, because none of it comes by default. Most cloud providers offer many options for operating systems, disk speeds, supported applications, and even individual settings. The challenge for the organization is to work through that large number of choices and turn them into a catalog of configurations that meets its needs and works well in the vendor's environment.
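One way to tame that option space is a small catalog of approved configurations that provisioning requests must be drawn from. Everything in the sketch below, including the offering names, options, and settings, is hypothetical.

```python
# Sketch: narrow the vendor's huge option space to a few named offerings the
# organization actually supports, and validate requests against that catalog.

CATALOG = {
    "standard-db": {
        "os": "RHEL 8",
        "disk": "ssd",
        "backup": "nightly, 30-day retention",
        "failover": "manual",
    },
    "critical-db": {
        "os": "RHEL 8",
        "disk": "nvme",
        "backup": "continuous, 90-day retention",
        "failover": "automated",
    },
}

def provision_request(app_name, offering):
    """Accept only configurations the organization has planned and tested."""
    if offering not in CATALOG:
        raise ValueError(f"{offering!r} is not an approved configuration")
    return {"app": app_name, **CATALOG[offering]}

print(provision_request("payments", "critical-db"))
```

This also makes the planning explicit: backup, failover, and security choices are decided once per offering instead of rediscovered on every request.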

Conclusion

From a capacity perspective, these are some of the themes that apply in both public and private clouds. I believe every operating environment needs to be studied and modeled so the organization can run analyses that show where its capacity problems are. Note that there are really two kinds of capacity problems. The first is performance: the organization finds that a given application has outgrown its current location and needs to move to a better operating environment. The second is overall capacity management: ensuring that the organization can provide enough resources for a given container or virtual machine. The analysis never ends, because as soon as one problem is solved, another appears. The model helps the organization identify problems, and the tools in the environment (moving a container, migrating to a new container, or perhaps moving to a new architecture) can then be used to make sure the operating environment is ready for future growth.
