PPoPP 2022
Sat 2 - Wed 6 April 2022
Mon 4 Apr 2022, 12:50 - 13:05 - Session 3 - Chair(s): Bin Ren

In recent years, quantized graph neural networks (QGNNs) have attracted significant research and industry attention due to their high robustness and low computation and memory overhead. Unfortunately, the performance gains of QGNNs have never been realized on modern GPU platforms. To this end, we propose QGTC, the first Tensor Core (TC) based computing framework to support any-bitwidth computation for QGNNs on GPUs. We introduce a novel quantized low-bit arithmetic design based on low-bit data representation and bit-decomposed computation. We craft a TC-tailored CUDA kernel design that incorporates 3D-stacked bit compression, zero-tile jumping, and non-zero tile reuse techniques to systematically improve performance. We incorporate an effective bandwidth-optimized subgraph packing strategy to maximize transfer efficiency between the CPU host and the GPU device. We integrate QGTC with PyTorch for better programmability and extensibility. Extensive experiments demonstrate that QGTC achieves evident inference speedup (2.7× on average) compared with the state-of-the-art DGL framework across diverse settings.
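
The any-bitwidth arithmetic rests on a simple identity: an unsigned n-bit operand can be split into binary bit planes, so a low-bit matrix product reduces to a weighted sum of 1-bit matrix products, which is the kind of operation the GPU's 1-bit Tensor Core (BMMA) instructions evaluate. The PyTorch sketch below only illustrates that identity on the CPU for unsigned quantized values; the function names are illustrative, not part of QGTC's API, and the actual framework carries out the binary products with TC BMMA instructions inside hand-written CUDA kernels.

```python
import torch

def bit_planes(x, nbits):
    # Decompose an unsigned-integer tensor into {0,1} bit planes, LSB first.
    return [(x >> i) & 1 for i in range(nbits)]

def bit_decomposed_matmul(a, b, a_bits, b_bits):
    # a @ b == sum over bit positions i, j of 2^(i+j) * (a_i @ b_j),
    # where a_i and b_j are 1-bit matrices. Each binary product corresponds
    # to what a 1-bit Tensor Core operation would compute on the GPU.
    acc = torch.zeros(a.shape[0], b.shape[1], dtype=torch.int64)
    for i, a_i in enumerate(bit_planes(a, a_bits)):
        for j, b_j in enumerate(bit_planes(b, b_bits)):
            acc += (1 << (i + j)) * (a_i @ b_j)
    return acc

# Sanity check against a direct integer matmul on the CPU.
torch.manual_seed(0)
a = torch.randint(0, 8, (16, 32), dtype=torch.int64)  # 3-bit operand
b = torch.randint(0, 4, (32, 16), dtype=torch.int64)  # 2-bit operand
assert torch.equal(bit_decomposed_matmul(a, b, 3, 2), a @ b)
```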

Mon 4 Apr

Displayed time zone: Eastern Time (US & Canada)

12:50 - 13:35
Session 3: Main Conference
Chair(s): Bin Ren (Pacific Northwest National Laboratory)
12:50
15m
Talk
QGTC: Accelerating Quantized Graph Neural Networks via GPU Tensor Core
Main Conference
Yuke Wang (UC Santa Barbara), Boyuan Feng (University of California, Santa Barbara), Yufei Ding (University of California, Santa Barbara)
13:05
15m
Talk
FasterMoE: Modeling and Optimizing Training of Large-Scale Dynamic Pre-Trained Models
Main Conference
Jiaao He (Tsinghua University, China), Jidong Zhai (Tsinghua University), Tiago Antunes (Tsinghua University), Haojie Wang (Tsinghua University), Fuwen Luo (Tsinghua University), Shangfeng Shi (Tsinghua University), Qin Li (Tsinghua University)
13:20
15m
Talk
Near-Optimal Sparse Allreduce for Distributed Deep Learning
Main Conference
Shigang Li (ETH Zurich), Torsten Hoefler (ETH Zurich)