# Optimal Cache-Oblivious Mesh Layouts

Computer Science > Data Structures and Algorithms


Abstract: A mesh is a graph that divides physical space into regularly-shaped regions. Mesh computations form the basis of many applications, e.g., finite-element methods, image rendering, and collision detection. In one important mesh primitive, called a mesh update, each mesh vertex stores a value and repeatedly updates this value based on the values stored in all neighboring vertices. The performance of a mesh update depends on the layout of the mesh in memory.

This paper shows how to find a memory layout that guarantees that the mesh update has asymptotically optimal memory performance for any set of memory parameters. Such a memory layout is called cache-oblivious. Formally, for a $d$-dimensional mesh $G$, block size $B$, and cache size $M$ (where $M=\Omega(B^d)$), the mesh update of $G$ uses $O(1+|G|/B)$ memory transfers. The paper also shows how the mesh-update performance degrades for smaller caches, where $M=o(B^d)$.

The paper then gives two algorithms for finding cache-oblivious mesh layouts. The first layout algorithm runs in time $O(|G|\log^2|G|)$ both in expectation and with high probability on a RAM. It uses $O(1+(|G|/B)\log^2(|G|/M))$ memory transfers in expectation and $O(1+(|G|/B)(\log^2(|G|/M) + \log|G|))$ memory transfers with high probability in the cache-oblivious and disk-access machine (DAM) models. The layout is obtained by finding a fully balanced decomposition tree of $G$ and then performing an in-order traversal of the leaves of the tree.

The second algorithm runs faster by almost a $\log|G|/\log\log|G|$ factor in all three memory models, both in expectation and with high probability. The layout is obtained by finding a relax-balanced decomposition tree of $G$ and then performing an in-order traversal of the leaves of the tree.
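The mesh-update primitive and the layout-by-decomposition idea described above can be sketched in a few lines. The sketch below is a hypothetical illustration, not the paper's construction: median bisection of a toy 2-D grid stands in for the fully balanced decomposition tree, and the layout is the in-order concatenation of the tree's leaves.

```python
# Hypothetical sketch (not the paper's algorithm): a mesh update driven by a
# memory layout, where the layout comes from simple geometric bisection as a
# toy stand-in for a balanced decomposition tree.

def bisection_layout(verts, coords, axis=0):
    """Recursively split the vertex set at the median coordinate and
    concatenate the halves -- an in-order traversal of the leaves of the
    implicit decomposition tree."""
    if len(verts) <= 1:
        return list(verts)
    verts = sorted(verts, key=lambda v: coords[v][axis])
    mid = len(verts) // 2
    nxt = (axis + 1) % 2  # alternate split axis at each level
    return (bisection_layout(verts[:mid], coords, nxt)
            + bisection_layout(verts[mid:], coords, nxt))

def mesh_update(order, adjacency, values):
    """One round of the mesh-update primitive: each vertex, visited in
    layout order, recomputes its value from its neighbors' old values
    (here, their average)."""
    new_values = dict(values)
    for v in order:
        nbrs = adjacency[v]
        if nbrs:
            new_values[v] = sum(values[u] for u in nbrs) / len(nbrs)
    return new_values

# Tiny 4x4 grid mesh: vertices are (i, j) pairs, edges join 4-neighbors.
n = 4
verts = [(i, j) for i in range(n) for j in range(n)]
coords = {v: v for v in verts}
adjacency = {
    (i, j): [(i + di, j + dj)
             for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= i + di < n and 0 <= j + dj < n]
    for i, j in verts
}

layout = bisection_layout(verts, coords)   # a permutation of the vertices
values = {(i, j): float(i * i) for i, j in verts}
new_values = mesh_update(layout, adjacency, values)
```

In the real setting the layout order determines where each vertex's value lives in memory, so vertices in the same decomposition-tree subtree end up in nearby blocks; the cache-efficiency analysis is what the paper supplies, not this toy.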

Authors: **Michael A. Bender, Bradley C. Kuszmaul, Shang-Hua Teng, Kebin Wang**

Source: https://arxiv.org/