Oracle + Intel: data processing efficiency improved more than a little

Autonomous driving, intelligent manufacturing, the Internet of Things... with the arrival of the IoT era, massive amounts of data are surging in like a tide. Just as ore only becomes gold after mining, cleaning, and refining, data must be governed before its value can be unlocked. And when it comes to data governance, one vendor that cannot go unmentioned is Oracle, the longtime leader in the database field.

 

Undeniably, Oracle has faced challenges over the past two years. Confronted with rapidly growing data volumes and an ever-accelerating data growth rate, Oracle, as a representative of traditional relational databases, has run into bottlenecks in database system scalability, data processing capability, and supported memory capacity.

 

But a leader does not stay troubled for long. Today, Oracle's latest Database 19c, paired with Intel's newest Xeon Scalable processors and Optane persistent memory, can already give users a data processing experience quite unlike before: not just higher performance, but also a lower total cost of ownership. Tempted?

 

Data explosion: Oracle's response

 

There is no doubt that we have entered the era of big data; the rapid increase in data volume is an indisputable fact. Data from the well-known analyst firm IDC shows that more than half of the world's data was created in the past two years.

 

Faced with such an epochal shift, the industry has given a consistent answer: IT infrastructure must change to meet the needs of the new era. In a study by the Enterprise Strategy Group (ESG), 81% of respondents believed that companies that do not embrace IT transformation will lose competitiveness.

 

How to change? The underlying architecture must change, and so must the way data is processed. To this end, Oracle Database 19c has invested heavily in the functionality and flexibility required for online analytical processing (OLAP) and online transaction processing (OLTP). For example:

 

In terms of multitenancy, Oracle's unique multitenant database architecture simplifies database consolidation and achieves high-density, schema-based consolidation without changes to existing applications.

 

In terms of performance, in addition to performance tuning and problem diagnosis, Oracle Database 19c also enhances SQL query and data optimization, delivering database-level performance for operational, analytical, and mixed workloads.

 

In addition, Oracle Database 19c brings many improvements in availability, security, data warehousing, and application development, the better to adapt to the era of data explosion.

 

In fact, Oracle Database 19c is not just a database but a complete platform and toolbox, suitable both for in-house business applications and for leading customer relationship management (CRM) and enterprise resource planning (ERP) solutions, including JD Edwards, PeopleSoft, and Oracle Financials.

 

So it is time to consider upgrading to Oracle Database 19c.

 

Paired with Intel: faster and more stable

 

At the same time, the underlying architecture, including processor and memory, should be upgraded as well; only then can the data processing efficiency of Oracle Database 19c be fully realized. The latest Intel Xeon Scalable processors and Optane persistent memory improve on the previous generation in many ways: higher-performance cores, larger memory capacity, lower total cost of ownership, and stronger performance optimizations.

 

Let's focus on memory. The higher the data processing efficiency you want, the faster the processor and the larger and faster the memory must be. Today, however, growing DRAM capacity means exponentially growing cost. The greatest value of Optane lies in delivering large memory capacity at an affordable price. For companies whose computations require large amounts of memory, Optane is therefore a real boon. To date, many forward-looking customers have deployed Optane persistent memory and reported good results.

 

As for Oracle itself, it does not yet support Optane's App Direct mode. In that mode, the memory stores data persistently; in other words, in-memory computation need not fear unexpected failures such as power loss, and data processing efficiency is higher. But there are already ways to let Oracle take advantage of App Direct: NetApp MAX Data, for example, can use this mode, and Oracle will also support App Direct natively in the future, so customers benefit either way.

 

Does it deliver? The benchmarks will tell

 

Seeing is believing. To prove the strength of the Oracle Database 19c + Intel Xeon Scalable processor + Optane persistent memory combination, Intel and Oracle ran a detailed set of comparison tests. Intel defined three reference configurations for the Oracle environment to simulate enterprises of different sizes and requirements. Among them:

 

  • Small: a cost-effective, modern platform for databases under 1.5 TB with throughput requirements up to 3 million transactions per minute (TPM)

  • Medium: a high-performance solution for databases under 1.5 TB with throughput requirements up to 5 million transactions per minute (TPM)

  • Large: a solution for extremely demanding data analysis workloads, for databases over 1.5 TB with throughput requirements up to 7 million transactions per minute (TPM)
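The sizing logic behind these tiers can be sketched as a small helper. The thresholds (1.5 TB, 3/5/7 million TPM) come from the reference configurations above; the function name and structure are this sketch's own, not Intel's tooling.

```python
# Illustrative sizing helper for the three reference configurations.
# Thresholds are taken from the article; everything else is hypothetical.

def pick_reference_config(db_size_tb: float, target_tpm: int) -> str:
    """Map a database size and throughput target to a reference tier."""
    if db_size_tb < 1.5 and target_tpm <= 3_000_000:
        return "small"
    if db_size_tb < 1.5 and target_tpm <= 5_000_000:
        return "medium"
    if target_tpm <= 7_000_000:
        return "large"
    raise ValueError("workload exceeds all reference configurations")

print(pick_reference_config(1.0, 2_500_000))  # small
print(pick_reference_config(1.2, 4_000_000))  # medium
print(pick_reference_config(2.0, 6_500_000))  # large
```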

 

The specific configurations are shown in the figure below. It should be emphasized that in all three configurations, Intel's comparisons covered not only CPU-to-CPU tests but also tests with Optane persistent memory introduced through NetApp MAX Data.

 

Tips:

 

NetApp MAX Data is a file-system solution that performs automatic tiering on the compute node, allowing Oracle software to make full use of the App Direct mode of Optane persistent memory. Unlike the 2LM mode used in the three reference configurations, in App Direct mode the data stored on Optane persistent memory persists across power cycles. There is thus no need to reload data from slower storage media into DRAM, which speeds things up.
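The automatic-tiering idea can be illustrated with a toy model: hot data is served from a fast tier while every write also lands in a persistent tier, so a restart does not require a cold reload from storage. This is a conceptual sketch only, not NetApp's actual design; all names here are invented for illustration.

```python
# Toy model of tiered placement with a persistent backing tier.
# "fast" stands in for the volatile hot tier, "persistent" for
# App Direct persistent memory. Purely illustrative.

class TieredStore:
    def __init__(self, fast_capacity: int):
        self.fast = {}                  # volatile hot tier
        self.persistent = {}            # survives power cycles
        self.fast_capacity = fast_capacity

    def write(self, key, value):
        self.persistent[key] = value    # every write is durable
        if key in self.fast or len(self.fast) < self.fast_capacity:
            self.fast[key] = value      # keep a hot copy if there is room

    def read(self, key):
        if key in self.fast:            # fast-path hit, no storage I/O
            return self.fast[key], "fast"
        return self.persistent[key], "persistent"

    def restart(self):
        self.fast.clear()               # hot tier is lost on power cycle
        # persistent tier survives: no reload from disk is needed

store = TieredStore(fast_capacity=1)
store.write("a", 1)
store.write("b", 2)                     # hot tier full, "b" stays cold
print(store.read("a"))                  # (1, 'fast')
store.restart()
print(store.read("a"))                  # (1, 'persistent') - data survived
```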

 

The results speak for themselves

 

In the actual tests, Intel used the industry-standard HammerDB benchmark to measure throughput (TPM) on the various processors, comparing current Intel processors with previous-generation products. Since the number of processor cores is a major factor in Oracle user licensing costs, the primary metric compared was throughput per core, not throughput per processor.
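The arithmetic behind the per-core metric is simple: because Oracle licenses are priced per core, a chip that delivers more TPM from the same core count lowers licensing cost per unit of throughput. The TPM figures below are hypothetical, chosen only to reproduce the 60% uplift quoted later for the medium configuration.

```python
# Why normalize by throughput per core rather than per processor:
# Oracle licensing scales with cores, so TPM/core tracks cost efficiency.
# The absolute TPM numbers here are made up for illustration.

def tpm_per_core(total_tpm: int, cores: int) -> float:
    return total_tpm / cores

older = tpm_per_core(total_tpm=2_400_000, cores=16)   # 150,000 TPM/core
newer = tpm_per_core(total_tpm=3_840_000, cores=16)   # 240,000 TPM/core
print(f"per-core gain: {newer / older - 1:.0%}")      # per-core gain: 60%
```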

 

In the small configuration, Intel compared the current small-configuration system with a series of older eight-core Intel systems. The results are shown below: each product generation brings a performance improvement of varying degree.

 

In the medium configuration, Intel compared the current medium-configuration system with a four-year-old Intel system. As the figure below shows, per-core performance of the medium configuration increased by 60% over the four-year-old system.

 

In the large configuration, Intel compared the current large-configuration system with a four-year-old Intel system. The results are shown below: per-core performance of the large configuration increased by 50% over the four-year-old system.

 

With Optane, performance is a different story

 

The results above were obtained without Optane persistent memory; processor improvements alone give Oracle's performance a qualitative leap. What happens when Optane persistent memory is added? Intel ran detailed comparison tests for that too.

 

In those tests, after adding 1.5 TB of Intel Optane persistent memory to a system running Oracle Database, performance improved significantly: as the figure below shows, by a factor of 12.

 

To tap the advantages of Intel Optane persistent memory further, Intel also used NetApp MAX Data in the tests.

 

Here, Intel compared the performance of Oracle Database 19c with and without NetApp MAX Data and Intel Optane persistent memory. The baseline system had 384 GB of DRAM and a Linux XFS file system; it was compared with a system using NetApp MAX Data plus an additional 1 TB of Intel Optane persistent memory, with HammerDB measuring the throughput of both bare-metal and virtualized Oracle Database instances.

 

The figure below shows the results for bare-metal and virtualized Oracle systems with 200 users. With NetApp MAX Data and Intel Optane persistent memory added, bare-metal performance increased by up to 1.9x, and virtualized performance by up to 3.16x.

 

 

Clearly, Intel's new architecture together with Oracle Database 19c can give users a new data processing experience, exactly what users urgently need in the era of big data. And not only Oracle: for many big data platforms, including Spark, MongoDB, Cassandra, and Aerospike, Intel Xeon Scalable processors and Optane persistent memory can deliver excellent performance gains.

 

No wonder Optane, though only recently launched, has seen such high market acceptance: after all, it improves performance and saves money. In the era of big data, then, upgrading the data processing platform alone is not enough; the underlying architecture should be upgraded as well.

 

Origin blog.csdn.net/ZPWhPdjl/article/details/108505018