Recent Releases of mmap-access-pattern-openmp
mmap-access-pattern-openmp - Design a fast parallel memory access pattern for a memory-mapped file with mmap().
Note: You can simply copy main.sh to your system and run it. For the code, refer to main.cxx.
```bash
$ ./main.sh
OMP_NUM_THREADS=64
Finding byte sum of file /home/graphwork/Data/indochina-2004.mtx ...
{adv=0, block=4096, mode=0} -> {0000532.8ms, sum=148985827434} byteSum
OMP_NUM_THREADS=64
Finding byte sum of file /home/graphwork/Data/uk-2002.mtx ...
{adv=0, block=4096, mode=0} -> {0000880.9ms, sum=244964049087} byteSum
...
```
I tried a simple file byte sum using mmap(), both sequential and parallel with OpenMP (64 threads on a DGX). I adjusted madvise(), mmap() flags, and the per-thread block size to see which access pattern performs best. It seems to me that an early madvise(MADV_WILLNEED) combined with a per-thread block size of 256K (dynamic schedule) is a good choice. Below is a plot showing the time taken with this configuration for each graph.
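The approach above can be sketched as follows. This is a minimal illustration of the configuration described (early MADV_WILLNEED, 256K per-thread blocks, dynamic schedule), not the exact main.cxx; the function name byteSum and error handling are assumptions.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

// Sum all bytes of a file via mmap(): advise the kernel early with
// MADV_WILLNEED, then process the mapping in 256K blocks, distributed
// across threads with a dynamic OpenMP schedule.
uint64_t byteSum(const char *path) {
  int fd = open(path, O_RDONLY);
  if (fd < 0) return 0;
  struct stat st;
  if (fstat(fd, &st) < 0) { close(fd); return 0; }
  size_t n = (size_t) st.st_size;
  uint8_t *data = (uint8_t*) mmap(nullptr, n, PROT_READ, MAP_PRIVATE, fd, 0);
  if (data == MAP_FAILED) { close(fd); return 0; }
  madvise(data, n, MADV_WILLNEED);   // hint the kernel to prefetch pages early
  const size_t BLOCK = 256 * 1024;   // per-thread block size: 256K
  uint64_t sum = 0;
  #pragma omp parallel for schedule(dynamic, 1) reduction(+:sum)
  for (size_t b = 0; b < n; b += BLOCK) {
    size_t end = b + BLOCK < n ? b + BLOCK : n;
    for (size_t i = b; i < end; ++i) sum += data[i];
  }
  munmap(data, n);
  close(fd);
  return sum;
}
```

Without -fopenmp the pragma is simply ignored and the loop runs sequentially, which makes it easy to compare the two modes.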
In parallel, the file byte sum takes ~100ms on the indochina-2004 graph. For comparison, PIGO takes ~650ms to load the same graph as CSR. Next, I measure the read bandwidth for each file, simply by dividing the size of the file by the time taken.
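The bandwidth computation is just this division, converted to GB/s; a small helper (the name bandwidthGBs is illustrative) makes the units explicit:

```cpp
#include <cstdint>

// Read bandwidth in GB/s, given file size in bytes and elapsed time in ms.
// E.g. 3.5e9 bytes read in 100 ms corresponds to 35 GB/s.
double bandwidthGBs(uint64_t bytes, double ms) {
  return (bytes / 1e9) / (ms / 1e3);
}
```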
We appear to be reaching a peak bandwidth of ~35GB/s. The KIOXIA KCM6DRUL7T68 7.68TB NVMe SSD installed on the DGX has a peak sequential read performance of 62GB/s, so we are close. The sequential approach can hit a max of only ~6GB/s.
There is a paper on this topic, "Efficient Memory Mapped File I/O for In-Memory File Systems", in which Choi et al., working at Samsung, argue that mmap() is a good interface for fast I/O (in contrast to file streams) and propose an async map-ahead based madvise() for low-latency NVM storage devices. They also have good slides on this, where their (modified) extended madvise() obtains ~2.5x better performance than default mmap() by minimizing the number of page faults.
References
- Efficient Memory Mapped File I/O for In-Memory File Systems
- Efficient Memory Mapped File I/O for In-Memory File Systems - Slides
- DI-MMAP—a scalable memory-map runtime for out-of-core data-intensive applications
- Kioxia CD6 7.68TB NVMe PCIe4x4 2.5" U.3 15mm SIE 1DWPD - KCD6XLUL7T68
- Kioxia KCD6XLUL7T68 - 7.68TB SSD NVMe 2.5-inch 15mm CD6-R Series, SIE, PCIe 4.0 6200 MB/sec Read, BiCS FLASH™ TLC, 1 DWPD
- Is mmap + madvise really a form of async I/O?
- Is there really no asynchronous block I/O on Linux?
- Linux mmap() with MAP_POPULATE, man page seems to give wrong info
- When would one use mmap MAP_FIXED?
- Overlapping pages with mmap (MAP_FIXED)
- Zero a large memory mapping with madvise
- convert string to size_t
- mmap(2) — Linux manual page
- madvise(2) — Linux manual page
- msync(2) — Linux manual page
- mincore(2) — Linux manual page
- Reference - cstdlib - atol
- Air gap (networking)
- Stuxnet
- C++
Published by wolfram77 over 2 years ago