Recent IEEE Computer Architecture Letters Publications (2022 Issue1)

In this blog post, we highlight recent publications from IEEE Computer Architecture Letters (CAL), Vol. 21, Issue 1 (Jan-Jun 2022). We include a short summary if provided by the authors.

IEEE CAL Editorial Board

Accelerating Graph Processing With Lightweight Learning-Based Data Reordering

Mo Zou, Mingzhe Zhang, Rujia Wang, Xian-He Sun, Xiaochun Ye, Dongrui Fan, Zhimin Tang

MPU-Sim: A Simulator for In-DRAM Near-Bank Processing Architectures

Xinfeng Xie, Peng Gu, Jiayi Huang, Yufei Ding, Yuan Xie

MPU-Sim is an end-to-end simulator for near-bank in-DRAM processing-in-memory, spanning from parallel programs down to hardware architectures. With calibrated hardware simulation models and well-defined programming interfaces, MPU-Sim can help facilitate future research and development of near-bank in-DRAM processing systems and hardware architectures.

A Pre-Silicon Approach to Discovering Microarchitectural Vulnerabilities in Security Critical Applications

Kristin Barber, Moein Ghaniyoun, Yinqian Zhang, Radu Teodorescu

This paper introduces a promising new direction for detecting microarchitectural vulnerabilities: using pre-silicon simulation, tracing infrastructure, and differential analysis techniques to search hardware designs for exploitable behavior during the execution of an application of interest.

MQSim-E: An Enterprise SSD Simulator

Dusol Lee, Duwon Hong, Wonil Choi, Jihong Kim

Lightweight Hardware Implementation of Binary Ring-LWE PQC Accelerator

Benjamin J. Lucas, Ali Alwan, Marion Murzello, Yazheng Tu, Pengzhou He, Andrew J. Schwartz, David Guevara, Ujjwal Guin, Kyle Juretus, Jiafeng Xie

This paper focuses on developing an efficient PQC hardware accelerator for the binary Ring-learning-with-errors (BRLWE)-based encryption scheme, a promising lightweight PQC scheme suitable for resource-constrained applications.

Characterizing and Understanding Distributed GNN Training on GPUs

Haiyang Lin, Mingyu Yan, Xiaocheng Yang, Mo Zou, Wenming Li, Xiaochun Ye, Dongrui Fan

The paper presents an in-depth analysis of distributed GNN training by profiling end-to-end execution with the state-of-the-art framework PyTorch Geometric (PyG), revealing several significant observations and providing useful guidelines for both software and hardware optimization.

LSim: Fine-Grained Simulation Framework for Large-Scale Performance Evaluation

Hamin Jang, Taehun Kang, Joonsung Kim, Jaeyong Cho, Jae-Eon Jo, Seungwook Lee, Wooseok Chang, Jangwoo Kim, Hanhwi Jang

Performance modeling of a large-scale workload without access to a large-scale system is extremely challenging. To this end, we present LSim, an accurate framework for large-scale performance modeling. Based on workload behavior captured in small-scale traces, LSim predicts performance at large scale.

LINAC: A Spatially Linear Accelerator for Convolutional Neural Networks

Hang Xiao, Haobo Xu, Ying Wang, Yujie Wang, Yinhe Han

This paper proposes a linear regression-based method to exploit the spatially linear correlation between the activations and weights of CNN models. Stronger bit-sparsity is extracted, enabling further acceleration of bit-sparse activation processing and reduced memory communication.
