HeteroSync is a benchmark suite used to test the performance of various types of fine-grained synchronization on tightly-coupled GPUs. The version in gem5-resources contains only the HIP code.
The original HeteroSync README, included below, details the various synchronization primitives and the other command-line arguments for use with HeteroSync.
To build the GCN3 binary:

```
cd src/gpu/heterosync
docker run --rm -v ${PWD}:${PWD} -w ${PWD} -u $UID:$GID gcr.io/gem5-test/gcn-gpu:v21-2 make release-gfx8
```
The release-gfx8 target builds for gfx801, a GCN3-based APU, and gfx803, a GCN3-based dGPU. There are other targets (release) that build for GPU types that are currently unsupported in gem5.
HeteroSync has multiple applications that can be run (see below). For example, to run sleepMutex with 10 ld/st per thread, 16 WGs, and 4 iterations of the critical section:
```
docker run -u $UID:$GID --volume $(pwd):$(pwd) -w $(pwd) gcr.io/gem5-test/gcn-gpu:v21-2 gem5/build/GCN3_X86/gem5.opt gem5/configs/example/apu_se.py -n 3 -c bin/allSyncPrims-1kernel --options="sleepMutex 10 16 4"
```
A pre-built binary (allSyncPrims-1kernel) can be found at:

http://dist.gem5.org/dist/v21-2/test-progs/heterosync/gcn3/allSyncPrims-1kernel
Information from the original HeteroSync README is included below:
These files are provided AS IS, and can be improved in many aspects. While we performed some performance optimization, there is more to be done. We do not claim that this is the most optimal implementation. The code is presented only as a representative case of CUDA and HIP implementations of these workloads. It is NOT meant to be interpreted as a definitive answer to how well this application can perform on GPUs, CUDA, or HIP. If any of you are interested in improving the performance of these benchmarks, please let us know or submit a pull request on GitHub.
Structure: All of the HeteroSync microbenchmarks are run from a single main function. Each of the microbenchmarks has a separate .cu (CUDA) file that contains the code for its lock and unlock functions. In the HIP version, these files are header files, because of HIP's requirements for compilation.
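As a rough illustration of that structure, the sketch below shows what a spin-mutex header in the HIP version could look like. This is a hypothetical sketch, not the actual HeteroSync code: the real file names, function names, and implementations differ, and, like the original IISWC '17 CUDA version, it relies on the ordering of the atomics themselves rather than explicit fences (see the HIP UVM note further below).

```cpp
// Illustrative header (hypothetical name: hipLocksMutexSpin.h). In the HIP
// version, lock/unlock device functions like these live in headers that the
// single main file includes. Typically one representative thread per
// workgroup calls lock/unlock, and the workgroup then performs the
// critical-section loads and stores.
#include <hip/hip_runtime.h>

// Acquire: repeatedly try to swing the mutex from 0 (free) to 1 (held).
__device__ void spinMutexLock(unsigned int *mutex) {
  while (atomicCAS(mutex, 0u, 1u) != 0u) {
    // spin until the compare-and-swap succeeds
  }
}

// Release: reset the mutex to 0 so another workgroup can acquire it.
__device__ void spinMutexUnlock(unsigned int *mutex) {
  atomicExch(mutex, 0u);
}
```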
Contents: The following Synchronization Primitives (SyncPrims) microbenchmarks are included in HeteroSync: tree barriers (atomic and lock-free, with and without local exchange), mutexes (spin lock, spin lock with exponential backoff, sleep/decentralized ticket, and fetch-and-add/centralized ticket), and semaphores (spin lock and spin lock with exponential backoff, in several sizes), each in global- and local-synchronization variants. The full list of <syncPrim> strings is given under Running below.
All microbenchmarks access shared data that requires synchronization.
A subsequent commit will add the Relaxed Atomics microbenchmarks discussed in our paper.
Compilation:
Since all of the microbenchmarks run from a single main function, users only need to compile the entire suite once in order to use any of the microbenchmarks. You will need to set CUDA_DIR in the Makefile in order to properly compile the code. To use HIP, you will need to set HIP_PATH for compilation to work correctly.
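If you are building outside the provided Docker image, one way to point the build at your HIP install is to override HIP_PATH on the make command line. This is only a sketch: /opt/rocm/hip is an assumption about a standard ROCm install, and it assumes the Makefile reads HIP_PATH as an ordinary make variable:

```
make HIP_PATH=/opt/rocm/hip release-gfx8
```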
Running:
The usage of the microbenchmarks is as follows:
```
./allSyncPrims-1kernel <syncPrim> <numLdSt> <numTBs> <numCSIters>
```
where <syncPrim> is a string that differs for each synchronization primitive to be run:

Barriers (these use hybrid local-global synchronization):
* atomicTreeBarrUniq: atomic tree barrier
* atomicTreeBarrUniqLocalExch: atomic tree barrier with local exchange
* lfTreeBarrUniq: lock-free tree barrier
* lfTreeBarrUniqLocalExch: lock-free tree barrier with local exchange

Global synchronization versions:
* spinMutex: spin lock mutex
* spinMutexEBO: spin lock mutex with exponential backoff
* sleepMutex: decentralized ticket lock
* faMutex: centralized ticket lock (aka fetch-and-add mutex)
* spinSem1: spin lock semaphore, semaphore size 1
* spinSem2: spin lock semaphore, semaphore size 2
* spinSem10: spin lock semaphore, semaphore size 10
* spinSem120: spin lock semaphore, semaphore size 120
* spinSemEBO1: spin lock semaphore with exponential backoff, semaphore size 1
* spinSemEBO2: spin lock semaphore with exponential backoff, semaphore size 2
* spinSemEBO10: spin lock semaphore with exponential backoff, semaphore size 10
* spinSemEBO120: spin lock semaphore with exponential backoff, semaphore size 120

Local synchronization versions:
* spinMutexUniq: local spin lock mutex
* spinMutexEBOUniq: local spin lock mutex with exponential backoff
* sleepMutexUniq: local decentralized ticket lock
* faMutexUniq: local centralized ticket lock
* spinSemUniq1: local spin lock semaphore, semaphore size 1
* spinSemUniq2: local spin lock semaphore, semaphore size 2
* spinSemUniq10: local spin lock semaphore, semaphore size 10
* spinSemUniq120: local spin lock semaphore, semaphore size 120
* spinSemEBOUniq1: local spin lock semaphore with exponential backoff, semaphore size 1
* spinSemEBOUniq2: local spin lock semaphore with exponential backoff, semaphore size 2
* spinSemEBOUniq10: local spin lock semaphore with exponential backoff, semaphore size 10
* spinSemEBOUniq120: local spin lock semaphore with exponential backoff, semaphore size 120
<numLdSt> is a positive integer representing how many loads and stores each thread will perform. For the mutexes and semaphores, these accesses are all performed in the critical section. For the barriers, these accesses use barriers to ensure that multiple threads are not accessing the same data.
<numTBs> is a positive integer representing the number of thread blocks (TBs) to execute. For many of the microbenchmarks (especially the barriers), this number needs to be divisible by the number of SMs on the GPU.
<numCSIters> is a positive integer representing the number of iterations of the critical section.
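For example, the following invocation mirrors the gem5 example above: it runs sleepMutex with 10 loads/stores per thread, 16 WGs (TBs), and 4 iterations of the critical section:

```
./allSyncPrims-1kernel sleepMutex 10 16 4
```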
The HIP UVM version is based on HIP 4.0 and uses HIP's unified virtual memory to avoid making explicit copies of some of the arrays and structures. Unlike the IISWC '17 version, this version does not make any assumptions about the ordering the atomics provide, nor does it require epilogues. Instead, it adds the appropriate HIP fence commands around atomic accesses to ensure the SC-for-DRF ordering is provided. This version has been tested on a Vega 20 GPU, but has not been tested as rigorously as the IISWC '17 version.
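For instance, under the same caveats as the sketch above (hypothetical code, not the actual HeteroSync implementation), the fence placement described here amounts to something like:

```cpp
#include <hip/hip_runtime.h>

// Acquire: the device-wide fence after the winning CAS ensures the previous
// holder's critical-section writes are visible before this workgroup's accesses.
__device__ void fencedSpinMutexLock(unsigned int *mutex) {
  while (atomicCAS(mutex, 0u, 1u) != 0u) {
    // spin
  }
  __threadfence();
}

// Release: the fence before the releasing exchange pushes out this workgroup's
// critical-section writes first, providing the SC-for-DRF ordering noted above.
__device__ void fencedSpinMutexUnlock(unsigned int *mutex) {
  __threadfence();
  atomicExch(mutex, 0u);
}
```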
If you publish work that uses these benchmarks, please cite the following papers:
M. D. Sinclair, J. Alsop, and S. V. Adve, “HeteroSync: A Benchmark Suite for Fine-Grained Synchronization on Tightly Coupled GPUs,” in the IEEE International Symposium on Workload Characterization (IISWC), October 2017
J. A. Stuart and J. D. Owens, “Efficient Synchronization Primitives for GPUs,” CoRR, vol. abs/1110.4623, 2011
This work was supported in part by a Qualcomm Innovation Fellowship for Sinclair, the National Science Foundation under grants CCF 13-02641 and CCF 16-19245, the Center for Future Architectures Research (C-FAR), a Semiconductor Research Corporation program sponsored by MARCO and DARPA, and the Center for Applications Driving Architectures (ADA), one of six centers of JUMP, a Semiconductor Research Corporation program co-sponsored by DARPA.