Many shared-memory parallel applications do not scale beyond a few tens of cores. However, many may benefit from large amounts of memory:
- In-memory databases
- Scientific applications
Moreover, the memory in the nodes of current clusters is often over-provisioned in order to fit the requirements of "any" application, and it remains unused most of the time. One of the objectives we are trying to achieve with project Hecatonchire is to unleash memory-constrained applications by letting them use the memory of the other nodes. In this post we demonstrate how Hecatonchire enables users to have memory that grows with their business or applications, not ahead of it, while using high-volume components to build high-value systems and eliminating the physical limitations of the cloud, the datacenter, or individual servers.
The Application: SAP HANA
HANA DB takes advantage of the low cost of main memory (RAM), the data-processing abilities of multi-core processors, and the fast data access of solid-state drives relative to traditional hard drives to deliver better performance for analytical and transactional applications. It offers a multi-engine query-processing environment, which allows it to support relational data (with both row- and column-oriented physical representations in a hybrid engine) as well as graph and text processing for semi-structured and unstructured data management within the same system. HANA DB is 100% ACID compliant.
The Benchmark, Hardware and Methodology
- Application: SAP HANA (in-memory database)
- Workload: OLAP (TPC-H variant)
- Data size:
  - For Small and Medium instances: ~600 GB uncompressed (~30 GB compressed in RAM)
  - For Large: 300 GB of compressed data (~2 TB uncompressed)
- 18 different queries (TPC-H variant)
- 15 iterations of each query set
- Virtual machines:
  - Small: 64 GB RAM – 32 vCPU
  - Medium: 128 GB RAM – 40 vCPU
  - Large: 1 TB RAM – 40 vCPU
- Hypervisor: KVM
- Servers with Intel Xeon Westmere CPUs
  - 4 sockets
  - 1 TB or 512 GB RAM
- InfiniBand QDR 40 Gbps switch + Mellanox ConnectX-2 adapters
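The overhead figures reported below follow from this methodology: each query set is timed over its iterations, and the scaled-out mean is compared against the non-scaled-out baseline. A minimal sketch of that calculation, with invented timings (the function name and the sample numbers are illustrative, not from the benchmark):

```python
# Illustrative sketch of the overhead metric used in the results below.
# The timing values are made up; the real benchmark runs 18 TPC-H-variant
# queries, 15 iterations each.
from statistics import mean

def overhead_pct(baseline_runs, scaled_out_runs):
    """Relative slowdown of the scaled-out run vs. the baseline, in percent."""
    base = mean(baseline_runs)
    scaled = mean(scaled_out_runs)
    return (scaled - base) / base * 100.0

# Example: one query timed over a few iterations (seconds, hypothetical).
baseline = [10.0, 10.2, 9.8]
scaled_out = [10.3, 10.5, 10.1]
print(f"{overhead_pct(baseline, scaled_out):.1f}% overhead")  # prints "3.0% overhead"
```

Averaging this per-query overhead across the full query set gives the single headline number quoted for each instance size.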
The results demonstrate the scalability and performance of the Hecatonchire solution: for the small and large instances we observed only a 3% average overhead compared to the non-scaled-out benchmark.
|Hecatonchire Overhead vs. Standard Virtualized HANA (Small Instance)|
Moreover, the per-query breakdown shows that for very short (almost point) queries (13-14) the cost of accessing scaled-out memory is immediately perceptible. However, Hecatonchire demonstrated that it can smooth out the impact of scaling out for lengthier, memory-intensive queries.
|Per-Query Overhead Breakdown for the Small-Instance Benchmark|
We officially tested Hecatonchire with HANA only up to 1 TB and obtained results similar to those for the small and medium instances (3% overhead). We are currently running tests of 4 to 8 TB scale-out configurations in order to validate larger-scale scenarios, which require new features that are currently being added to the code. Stay tuned for a new and improved Heca!
|1 TB Virtualized SAP HANA scaled out with Hecatonchire|
Finally, we demonstrated that Hecatonchire scales very well when we spread the memory across multiple memory-provider nodes. We benefit not only from the increased bandwidth but also from the improved latency, with excellent performance results: 1.5% overhead when two thirds of the memory is externalized.
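The intuition behind that deployment choice can be sketched with simple arithmetic: when the externalized memory is split across several provider nodes, each remote page fault is served by whichever provider owns the page, so the RDMA traffic per provider (and with it the queueing delay on each link) shrinks as providers are added. The function below is a hypothetical illustration, not part of Hecatonchire:

```python
# Hypothetical sketch of the memory-spreading trade-off: each provider node
# serves only its slice of the externalized memory, so per-node RDMA load
# drops as providers are added. The figures are assumptions for illustration.

def per_provider_share(remote_fraction, n_providers):
    """Fraction of the VM's total memory that each provider node serves."""
    return remote_fraction / n_providers

# Medium instance with two thirds of memory externalized over two providers:
share = per_provider_share(2 / 3, 2)
print(f"each provider serves {share:.0%} of the VM's memory")  # prints "each provider serves 33% of the VM's memory"
```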
|Various Hecatonchire Deployment Scenarios for a Virtualized Medium-Instance HANA|
Note: running HANA on top of KVM typically adds a 5% to 8% overhead by default compared to a bare-metal instance. We did not take this into account in the results, as we are only comparing virtualized against scaled-out virtualized.