
Oracle Database In-Memory Implementation and Usage

A new Database In-Memory Implementation and Usage white paper is now available. It complements the Database In-Memory Technical White Paper and provides specific guidance to help you implement and use Database In-Memory. You can download the paper here.

Identify Analytic Workloads

Oracle Database In-Memory (Database In-Memory) is not a one-size-fits-all solution. It is specifically targeted at analytic workloads, which is why the IM column store is populated in a columnar format. Columnar formats are ideal for scanning and filtering a relatively small number of columns very efficiently. It is important to understand the fundamental difference between Database In-Memory and the traditional Oracle Database row-store format: the row format is excellent for OLTP-type workloads, but the columnar format is not, and Database In-Memory will not speed up OLTP workloads.
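As background, population of the IM column store is opt-in: you size the store with the INMEMORY_SIZE parameter and mark individual tables with the INMEMORY attribute. A minimal sketch (the `sales` table and the 4G size are illustrative, not recommendations):

```sql
-- INMEMORY_SIZE is a static parameter; it must be set (and the instance
-- restarted) before the IM column store can be used.
ALTER SYSTEM SET INMEMORY_SIZE = 4G SCOPE=SPFILE;

-- Mark a table for columnar population. PRIORITY HIGH asks the database to
-- populate the table at instance startup rather than waiting for the first
-- scan. Table name is hypothetical.
ALTER TABLE sales INMEMORY PRIORITY HIGH;
```

Only tables (or partitions) marked INMEMORY are populated; everything else continues to be served from the row store as before.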

The ideal workload for Database In-Memory is analytic queries that scan large amounts of data, access a limited number of columns, and use aggregation and filter criteria to return a small number of values. Queries that spend the majority of their time scanning and filtering data see the most benefit. Queries that spend the majority of their time on complex joins, sorting, or returning millions of rows to the client will see less benefit. A good way to think about it: the goal is to perform as much of the query as possible while scanning the data. Database In-Memory can push predicate filters directly into the scan, use Bloom filters to transform joins into filters applied as part of the scan of the larger table, and use vector group by to perform aggregations during the scan, so the majority of a query's processing completes while the data is being scanned. Other Database In-Memory features complement these primary attributes, including Single Instruction Multiple Data (SIMD) vector processing and In-Memory storage index pruning, which further speed up columnar scans.
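A hypothetical query of this shape (all table and column names invented for illustration) shows the pattern described above: a scan-heavy filter on the large table, a join to a small dimension that can become a Bloom filter, and an aggregation:

```sql
-- Analytic pattern suited to Database In-Memory: scan a large fact table,
-- filter on a few columns, join to a small dimension, and aggregate.
-- Names are hypothetical.
SELECT c.region,
       SUM(s.amount_sold) AS total_sold
FROM   sales s
JOIN   customers c ON c.cust_id = s.cust_id
WHERE  s.time_id >= DATE '2014-01-01'
AND    c.region = 'EMEA'
GROUP  BY c.region;
```

In a plan like this, the date predicate can be evaluated inside the columnar scan of `sales`, the join to `customers` can be converted into a Bloom filter applied during that same scan, and the aggregation can use vector group by, so little work remains after the scan finishes.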

Workloads that are pure OLTP, that is, inserts, updates, and deletes (DML) combined with queries that select a single row or just a few rows, will generally not benefit from Database In-Memory. Workloads that are mostly ETL, where the data is written and read only once, are also poor candidates. The exception can be when the source data is in the IM column store: in that case, the load time may be reduced because scanning the IM column store is faster than fetching the same data from the row store.
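One way that exception can arise, sketched with invented names: a load whose SELECT side reads a table already populated in the IM column store gets the benefit of the columnar scan even though the write itself goes through the row store.

```sql
-- If SALES is populated in the IM column store, the read side of this
-- load is a fast columnar scan; only the newly created table is written
-- through the row store. All names are illustrative.
CREATE TABLE sales_2014_extract AS
SELECT cust_id, time_id, amount_sold
FROM   sales
WHERE  time_id >= DATE '2014-01-01';
```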


ACARDIA LIMITED 12th Floor, Ocean House, The Ring, Bracknell, Berkshire RG12 1AX. United Kingdom.