Publications
Publications in reverse chronological order.
2024
- [USENIX Security] ABACuS: All-Bank Activation Counters for Scalable and Low Overhead RowHammer Mitigation. Ataberk Olgun, Yahya Can Tugrul, Nisa Bostanci, Ismail Emir Yuksel, Haocong Luo, Steve Rhyner, Abdullah Giray Yaglikci, Geraldo F. Oliveira, and Onur Mutlu. In USENIX Security, 2024
We introduce ABACuS, a new low-cost hardware-counter-based RowHammer mitigation technique that performance-, energy-, and area-efficiently scales with worsening RowHammer vulnerability. We observe that both benign workloads and RowHammer attacks tend to access DRAM rows with the same row address in multiple DRAM banks at around the same time. Based on this observation, ABACuS’s key idea is to use a single shared row activation counter to track activations to the rows with the same row address in all DRAM banks. Unlike state-of-the-art RowHammer mitigation mechanisms that implement a separate row activation counter for each DRAM bank, ABACuS implements fewer counters (e.g., only one) to track an equal number of aggressor rows. Our evaluations show that ABACuS securely prevents RowHammer bitflips at low performance/energy overhead and low area cost. We compare ABACuS to four state-of-the-art mitigation mechanisms. At a near-future RowHammer threshold of 1000, ABACuS incurs only 0.58% (0.77%) performance and 1.66% (2.12%) DRAM energy overheads, averaged across 62 single-core (8-core) workloads, requiring only 9.47 KiB of storage per DRAM rank. At the RowHammer threshold of 1000, the best prior low-area-cost mitigation mechanism incurs 1.80% higher average performance overhead than ABACuS, while ABACuS requires 2.50X smaller chip area to implement. At a future RowHammer threshold of 125, ABACuS performs very similarly to (within 0.38% of the performance of) the best prior performance- and energy-efficient RowHammer mitigation mechanism while requiring 22.72X smaller chip area.
@inproceedings{olgun2024abacus, title = {{ABACuS: All-Bank Activation Counters for Scalable and Low Overhead RowHammer Mitigation}}, author = {Olgun, Ataberk and Tugrul, Yahya Can and Bostanci, Nisa and Yuksel, Ismail Emir and Luo, Haocong and Rhyner, Steve and Yaglikci, Abdullah Giray and Oliveira, Geraldo F. and Mutlu, Onur}, booktitle = {USENIX Security}, year = {2024} }
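To make the counter organization described above concrete, here is a minimal Python sketch of the key bookkeeping idea: a single activation counter shared by all rows that have the same row address across banks, with a preventive refresh of neighboring rows once the shared count crosses a threshold. The class name, threshold value, and refresh policy are illustrative assumptions, not the paper's exact hardware design.

```python
from collections import defaultdict

NUM_BANKS = 16
PREVENTIVE_REFRESH_THRESHOLD = 1000  # hypothetical RowHammer threshold

class SharedRowActivationCounters:
    """One counter per row *address*, shared across all banks (illustrative)."""

    def __init__(self):
        self.counters = defaultdict(int)  # row address -> shared activation count

    def on_activate(self, bank, row_address):
        # `bank` is intentionally unused: the point of the shared counter is that
        # same-addressed rows in all banks update a single entry.
        self.counters[row_address] += 1
        if self.counters[row_address] >= PREVENTIVE_REFRESH_THRESHOLD:
            self.preventive_refresh(row_address)
            self.counters[row_address] = 0

    def preventive_refresh(self, row_address):
        # Refresh the neighbors of this row address in every bank, since the
        # single counter stands in for all of them.
        for bank in range(NUM_BANKS):
            print(f"preventively refresh neighbors of row {row_address:#x} in bank {bank}")

# A pattern that activates the same row address across many banks increments
# one shared counter instead of NUM_BANKS per-bank counters.
tracker = SharedRowActivationCounters()
for i in range(1200):
    tracker.on_activate(bank=i % NUM_BANKS, row_address=0x1A2B)
```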
- [HPCA] MIMDRAM: An End-to-End Processing-using-DRAM System for Energy-Efficient and Programmer-Transparent MIMD Computing. Geraldo F. Oliveira, Ataberk Olgun, Giray Yaglikci, Nisa Bostanci, Juan Gomez Luna, Saugata Ghose, and Onur Mutlu. In HPCA, 2024
Processing-using-DRAM (PuD) is a processing-in-memory (PIM) approach that uses DRAM’s massive parallelism to execute ultra-wide SIMD operations. However, DRAM rows’ large and rigid granularity limits the effectiveness and applicability of PuD in three ways. First, since applications display varying degrees of SIMD parallelism, PuD execution frequently leads to underutilization, throughput loss, and energy waste. Second, due to a lack of interconnecting networks that connect columns in a DRAM array, most PuD architectures are limited to the execution of map operations. Third, due to the lack of compiler support for PuD, users need to manually extract SIMD parallelism from an application and map computation to the underlying hardware. Our goal is to design a flexible PuD system that overcomes the limitations caused by the large and rigid granularity of PuD. To do so, we propose MIMDRAM, a hardware/software co-designed PuD system that introduces new mechanisms to allocate and control only the needed computing resources for PuD computation. The key idea of MIMDRAM is to leverage fine-grained DRAM (i.e., the ability to access portions of a DRAM row) for PuD. By exploiting this key idea, MIMDRAM enables a MIMD execution model on a single DRAM array (and SIMD execution within each segment). On the hardware side, MIMDRAM makes simple changes to (i) DRAM’s row access circuitry to enable concurrent execution of PuD operations in segments of the DRAM row; and (ii) the DRAM I/O circuitry to allow data to move across DRAM columns (enabling native support for PuD reduction operations). On the software side, MIMDRAM implements compiler passes to (i) identify and generate PuD operations with the appropriate single-instruction multiple-data (SIMD) granularity; and (ii) schedule the concurrent execution of independent PuD operations. We evaluate MIMDRAM using several real-world applications. Our evaluation shows that MIMDRAM achieves 18.6× the utilization, 14.3× the energy efficiency, 1.7× the throughput, and 1.3× the fairness of a state-of-the-art PuD framework; and 30.6×/6.8× the energy efficiency of a high-end CPU/GPU. MIMDRAM adds minimal area cost on top of a DRAM chip (1.11%) and CPU die (0.6%).
- [HPCA] Functionally-Complete Boolean Logic in DRAM: An Experimental Characterization and Analysis of Real DRAM Chips. Ismail Emir Yuksel, Yahya Can Tugrul, Ataberk Olgun, Nisa Bostanci, Giray Yaglikci, Geraldo F. Oliveira, Haocong Luo, Juan Gomez Luna, Mohammad Sadrosadati, and Onur Mutlu. In HPCA, 2024
- [HPCA] CoMeT: Count-Min Sketch-based Row Tracking to Mitigate RowHammer with Low Cost. Nisa Bostanci, Ismail Emir Yuksel, Ataberk Olgun, Konstantinos Kanellopoulos, Yahya Can Tugrul, Giray Yaglikci, Mohammad Sadrosadati, and Onur Mutlu. In HPCA, 2024
- [HPCA] Spatial Variation-Aware Read Disturbance Defenses: Experimental Analysis of Real DRAM Chips and Implications on Future Solutions. Giray Yaglikci, Geraldo F. Oliveira, Yahya Can Tugrul, Ismail Emir Yuksel, Ataberk Olgun, Haocong Luo, and Onur Mutlu. In HPCA, 2024
2023
- [ASP-DAC] Fundamentally Understanding and Solving RowHammer. Onur Mutlu, Ataberk Olgun, and A Giray Yağlıkcı. In ASP-DAC, 2023
We provide an overview of recent developments and future directions in the RowHammer vulnerability that plagues modern DRAM (Dynamic Random Access Memory) chips, which are used in almost all computing systems as main memory. RowHammer is the phenomenon in which repeatedly accessing a row in a real DRAM chip causes bitflips (i.e., data corruption) in physically nearby rows. This phenomenon leads to a serious and widespread system security vulnerability, as many works since the original RowHammer paper in 2014 have shown. Recent analysis of the RowHammer phenomenon reveals that the problem is getting much worse as DRAM technology scaling continues: newer DRAM chips are fundamentally more vulnerable to RowHammer at the device and circuit levels. Deeper analysis of RowHammer shows that there are many dimensions to the problem as the vulnerability is sensitive to many variables, including environmental conditions (temperature & voltage), process variation, stored data patterns, as well as memory access patterns and memory control policies. As such, it has proven difficult to devise fully-secure and very efficient (i.e., low-overhead in performance, energy, area) protection mechanisms against RowHammer and attempts made by DRAM manufacturers have been shown to lack security guarantees. After reviewing various recent developments in exploiting, understanding, and mitigating RowHammer, we discuss future directions that we believe are critical for solving the RowHammer problem. We argue for two major directions to amplify research and development efforts in: 1) building a much deeper understanding of the problem and its many dimensions, in both cutting-edge DRAM chips and computing systems deployed in the field, and 2) the design and development of extremely efficient and fully-secure solutions via system-memory cooperation.
@inproceedings{mutlu2023fundamentally, title = {{Fundamentally Understanding and Solving RowHammer}}, author = {Mutlu, Onur and Olgun, Ataberk and Ya{\u{g}}l{\i}kc{\i}, A Giray}, booktitle = {ASP-DAC}, year = {2023} }
- [ACM TACO] PiDRAM: A Holistic End-to-end FPGA-based Framework for Processing-in-DRAM. Ataberk Olgun, Juan Gómez Luna, Konstantinos Kanellopoulos, Behzad Salami, Hasan Hassan, Oguz Ergin, and Onur Mutlu. ACM TACO, 2023
Processing-using-memory (PuM) techniques leverage the analog operation of memory cells to perform computation. Several recent works have demonstrated PuM techniques in off-the-shelf DRAM devices. Since DRAM is the dominant memory technology as main memory in current computing systems, these PuM techniques represent an opportunity for alleviating the data movement bottleneck at very low cost. However, system integration of PuM techniques imposes non-trivial challenges that are yet to be solved. Design space exploration of potential solutions to the PuM integration challenges requires appropriate tools to develop necessary hardware and software components. Unfortunately, current specialized DRAM-testing platforms or system simulators do not provide the flexibility and/or the holistic system view that is necessary to deal with PuM integration challenges. We design and develop PiDRAM, the first flexible end-to-end framework that enables system integration studies and evaluation of real PuM techniques. PiDRAM provides software and hardware components to rapidly integrate PuM techniques across the whole system software and hardware stack (e.g., necessary modifications in the operating system, memory controller). We implement PiDRAM on an FPGA-based platform along with an open-source RISC-V system. Using PiDRAM, we implement and evaluate two state-of-the-art PuM techniques: in-DRAM (i) copy and initialization, (ii) true random number generation. Our results show that the in-memory copy and initialization techniques can improve the performance of bulk copy operations by 12.6x and bulk initialization operations by 14.6x on a real system. Implementing the true random number generator requires only 190 lines of Verilog and 74 lines of C code using PiDRAM’s software and hardware components.
@article{olgun2023pidram, title = {{PiDRAM: A Holistic End-to-end FPGA-based Framework for Processing-in-DRAM}}, author = {Olgun, Ataberk and Luna, Juan G{\'o}mez and Kanellopoulos, Konstantinos and Salami, Behzad and Hassan, Hasan and Ergin, Oguz and Mutlu, Onur}, journal = {ACM TACO}, year = {2023} }
- [DSN] An Experimental Analysis of RowHammer in HBM2 DRAM Chips. Ataberk Olgun, Majd Osseiran, Yahya Can Tuğrul, Haocong Luo, Steve Rhyner, Behzad Salami, Juan Gomez Luna, and Onur Mutlu. In DSN, 2023
RowHammer (RH) is a significant and worsening security, safety, and reliability issue of modern DRAM chips that can be exploited to break memory isolation. Therefore, it is important to understand real DRAM chips’ RH characteristics. Unfortunately, no prior work extensively studies the RH vulnerability of modern 3D-stacked high-bandwidth memory (HBM) chips, which are commonly used in modern GPUs. In this work, we experimentally characterize the RH vulnerability of a real HBM2 DRAM chip. We show that 1) different 3D-stacked channels of HBM2 memory exhibit significantly different levels of RH vulnerability (up to 79% difference in bit error rate), 2) the DRAM rows at the end of a DRAM bank (rows with the highest addresses) exhibit significantly fewer RH bitflips than other rows, and 3) a modern HBM2 DRAM chip implements undisclosed RH defenses that are triggered by periodic refresh operations. We describe the implications of our observations on future RH attacks and defenses and discuss future work for understanding RH in 3D-stacked memories.
@inproceedings{olgun2023experimental, title = {{An Experimental Analysis of RowHammer in HBM2 DRAM Chips}}, author = {Olgun, Ataberk and Osseiran, Majd and Tu{\u{g}}rul, Yahya Can and Luo, Haocong and Rhyner, Steve and Salami, Behzad and Luna, Juan Gomez and Mutlu, Onur}, booktitle = {DSN}, year = {2023}, }
- [IEEE TCAD] DRAM Bender: An Extensible and Versatile FPGA-based Infrastructure to Easily Test State-of-the-art DRAM Chips. Ataberk Olgun, Hasan Hassan, A Giray Yağlıkçı, Yahya Can Tuğrul, Lois Orosa, Haocong Luo, Minesh Patel, Oğuz Ergin, and Onur Mutlu. IEEE TCAD, 2023
To understand and improve DRAM performance, reliability, security, and energy efficiency, prior works study characteristics of commodity DRAM chips. Unfortunately, state-of-the-art open source infrastructures capable of conducting such studies are obsolete, poorly supported, or difficult to use, or their inflexibility limits the types of studies they can conduct. We propose DRAM Bender, a new FPGA-based infrastructure that enables experimental studies on state-of-the-art DRAM chips. DRAM Bender offers three key features at the same time. First, DRAM Bender enables directly interfacing with a DRAM chip through its low-level interface. This allows users to issue DRAM commands in arbitrary order and with finer-grained time intervals compared to other open source infrastructures. Second, DRAM Bender exposes easy-to-use C++ and Python programming interfaces, allowing users to quickly and easily develop different types of DRAM experiments. Third, DRAM Bender is easily extensible. The modular design of DRAM Bender allows extending it to (i) support existing and emerging DRAM interfaces, and (ii) run on new commercial or custom FPGA boards with little effort. To demonstrate that DRAM Bender is a versatile infrastructure, we conduct three case studies, two of which lead to new observations about the DRAM RowHammer vulnerability. In particular, we show that data patterns supported by DRAM Bender uncover a larger set of bit-flips on a victim row compared to the data patterns commonly used by prior work. We demonstrate the extensibility of DRAM Bender by implementing it on five different FPGAs with DDR4 and DDR3 support.
@article{olgun2023dram, title = {{DRAM Bender: An Extensible and Versatile FPGA-based Infrastructure to Easily Test State-of-the-art DRAM Chips}}, author = {Olgun, Ataberk and Hassan, Hasan and Ya{\u{g}}l{\i}k{\c{c}}{\i}, A Giray and Tu{\u{g}}rul, Yahya Can and Orosa, Lois and Luo, Haocong and Patel, Minesh and Ergin, O{\u{g}}uz and Mutlu, Onur}, journal = {IEEE TCAD}, year = {2023} }
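As a rough illustration of the kind of fine-grained command sequences the abstract above refers to, the following self-contained Python sketch builds a command trace for a double-sided RowHammer test with a chosen data pattern. It deliberately avoids DRAM Bender's real API; the command names and trace format are hypothetical stand-ins, not the infrastructure's actual programming interface.

```python
def hammer_trace(victim_row, hammer_count, victim_pattern=0x55):
    """Build a (command, arguments) list for one double-sided RowHammer round."""
    aggressor_pattern = victim_pattern ^ 0xFF  # inverse data in the aggressors
    top, bottom = victim_row - 1, victim_row + 1
    trace = []
    # 1) Initialize the victim and aggressor rows with the chosen data patterns.
    for row, pattern in ((top, aggressor_pattern),
                         (victim_row, victim_pattern),
                         (bottom, aggressor_pattern)):
        trace += [("ACT", {"row": row}),
                  ("WR", {"row": row, "pattern": pattern}),
                  ("PRE", {})]
    # 2) Hammer: repeatedly activate and precharge the two aggressor rows.
    for _ in range(hammer_count):
        for row in (top, bottom):
            trace += [("ACT", {"row": row}), ("PRE", {})]
    # 3) Read the victim row back to check which cells flipped.
    trace += [("ACT", {"row": victim_row}), ("RD", {"row": victim_row}), ("PRE", {})]
    return trace

trace = hammer_trace(victim_row=0x100, hammer_count=150_000)
print(len(trace), "commands in the trace")
```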
- [ISCA] RowPress: Amplifying Read Disturbance in Modern DRAM Chips. Haocong Luo, Ataberk Olgun, Abdullah Giray Yağlıkçı, Yahya Can Tuğrul, Steve Rhyner, Meryem Banu Cavlak, Joël Lindegger, Mohammad Sadrosadati, and Onur Mutlu. In ISCA, 2023
Memory isolation is critical for system reliability, security, and safety. Unfortunately, read disturbance can break memory isolation in modern DRAM chips. For example, RowHammer is a well-studied read-disturb phenomenon where repeatedly opening and closing (i.e., hammering) a DRAM row many times causes bitflips in physically nearby rows. This paper experimentally demonstrates and analyzes another widespread read-disturb phenomenon, RowPress, in real DDR4 DRAM chips. RowPress breaks memory isolation by keeping a DRAM row open for a long period of time, which disturbs physically nearby rows enough to cause bitflips. We show that RowPress amplifies DRAM’s vulnerability to read-disturb attacks by significantly reducing the number of row activations needed to induce a bitflip by one to two orders of magnitude under realistic conditions. In extreme cases, RowPress induces bitflips in a DRAM row when an adjacent row is activated only once. Our detailed characterization of 164 real DDR4 DRAM chips shows that RowPress 1) affects chips from all three major DRAM manufacturers, 2) gets worse as DRAM technology scales down to smaller node sizes, and 3) affects a different set of DRAM cells from RowHammer and behaves differently from RowHammer as temperature and access pattern change. We demonstrate in a real DDR4-based system with RowHammer protection that 1) a user-level program induces bitflips by leveraging RowPress while conventional RowHammer cannot do so, and 2) a memory controller that adaptively keeps the DRAM row open for a longer period of time based on access pattern can facilitate RowPress-based attacks. To prevent bitflips due to RowPress, we describe and evaluate a new methodology that adapts existing RowHammer mitigation techniques to also mitigate RowPress with low additional performance overhead. We open source all our code and data to facilitate future research on RowPress.
@inproceedings{luo2023rowpress, title = {{RowPress: Amplifying Read Disturbance in Modern DRAM Chips}}, author = {Luo, Haocong and Olgun, Ataberk and Ya{\u{g}}l{\i}k{\c{c}}{\i}, Abdullah Giray and Tu{\u{g}}rul, Yahya Can and Rhyner, Steve and Cavlak, Meryem Banu and Lindegger, Jo{\"e}l and Sadrosadati, Mohammad and Mutlu, Onur}, booktitle = {ISCA}, year = {2023} }
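The sketch below contrasts, in Python, the shape of a RowHammer-style access pattern (many brief activations) with a RowPress-style pattern (far fewer activations, each keeping the aggressor row open much longer). The timing constants and activation counts are placeholder values for illustration, not real DDR4 parameters or the paper's measured thresholds.

```python
def rowhammer_pattern(aggressor_row, num_activations, row_open_ns=36):
    """Many activations, each keeping the aggressor row open only briefly."""
    return [(aggressor_row, row_open_ns)] * num_activations

def rowpress_pattern(aggressor_row, num_activations, row_open_ns=7_800):
    """Far fewer activations, each keeping the row open much longer (up to the
    refresh interval), which amplifies read disturbance in real chips."""
    return [(aggressor_row, row_open_ns)] * num_activations

for name, pattern in (("RowHammer", rowhammer_pattern(0x200, 100_000)),
                      ("RowPress", rowpress_pattern(0x200, 1_000))):
    total_open_ns = sum(t for _, t in pattern)
    print(f"{name}: {len(pattern)} ACTs, aggressor row kept open {total_open_ns} ns in total")
```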
- [arXiv] Understanding Read Disturbance in High Bandwidth Memory: An Experimental Analysis of Real HBM2 DRAM Chips. Ataberk Olgun, Majd Osseiran, Yahya Can Tuğrul, Haocong Luo, Steve Rhyner, Behzad Salami, Juan Gomez Luna, and Onur Mutlu. arXiv, 2023
DRAM read disturbance is a significant and worsening safety, security, and reliability issue of modern DRAM chips that can be exploited to break memory isolation. Two prominent examples of read-disturb phenomena are RowHammer and RowPress. However, no prior work extensively studies read-disturb phenomena in modern high-bandwidth memory (HBM) chips. In this work, we experimentally demonstrate the effects of read disturbance and uncover the inner workings of undocumented in-DRAM read disturbance mitigation mechanisms in HBM. Our characterization of six real HBM2 DRAM chips shows that (1) the number of read disturbance bitflips and the number of row activations needed to induce the first read disturbance bitflip significantly varies between different HBM2 chips and different 3D-stacked channels, pseudo channels, banks, and rows inside an HBM2 chip. (2) The DRAM rows at the end and in the middle of a DRAM bank exhibit significantly fewer read disturbance bitflips than the rest of the rows. (3) It takes fewer additional activations to induce more read disturbance bitflips in a DRAM row if the row exhibits the first bitflip already at a relatively high activation count. (4) HBM2 chips exhibit read disturbance bitflips with only two row activations when rows are kept active for an extremely long time. We show that a modern HBM2 DRAM chip implements undocumented read disturbance defenses that can track potential aggressor rows based on how many times they are activated, and refresh their victim rows with every 17 periodic refresh operations. We draw key takeaways from our observations and discuss their implications for future read disturbance attacks and defenses. We explain how our findings could be leveraged to develop both i) more powerful read disturbance attacks and ii) more efficient read disturbance defense mechanisms.
@misc{olgun2023understanding, title = {{Understanding Read Disturbance in High Bandwidth Memory: An Experimental Analysis of Real HBM2 DRAM Chips}}, author = {Olgun, Ataberk and Osseiran, Majd and Tu{\u{g}}rul, Yahya Can and Luo, Haocong and Rhyner, Steve and Salami, Behzad and Luna, Juan Gomez and Mutlu, Onur}, howpublished = {arXiv:2310.14665}, year = {2023} }
2022
- [HPCA] DR-STRaNGe: End-to-End System Design for DRAM-Based True Random Number Generators. F Nisa Bostancı, Ataberk Olgun, Lois Orosa, A Giray Yağlıkçı, Jeremie S Kim, Hasan Hassan, Oğuz Ergin, and Onur Mutlu. In HPCA, 2022
Random number generation is an important task in a wide variety of critical applications including cryptographic algorithms, scientific simulations, and industrial testing tools. True Random Number Generators (TRNGs) produce truly random data by sampling a physical entropy source that typically requires custom hardware and suffers from long latency. To enable high-bandwidth and low-latency TRNGs on commodity devices, recent works propose TRNGs that use DRAM as an entropy source. Although prior works demonstrate promising DRAM-based TRNGs, integration of such mechanisms into real systems poses challenges. We identify three challenges for using DRAM-based TRNGs in current systems: (1) generating random numbers can degrade system performance by slowing down concurrently-running applications due to the interference between RNG and regular memory operations in the memory controller (i.e., RNG interference), (2) this RNG interference can degrade system fairness by unfairly prioritizing applications that intensively use random numbers (i.e., RNG applications), and (3) RNG applications can experience significant slowdowns due to the high RNG latency. We propose DR-STRaNGe, an end-to-end system design for DRAM-based TRNGs that (1) reduces the RNG interference by separating RNG requests from regular requests in the memory controller, (2) improves the system fairness with an RNG-aware memory request scheduler, and (3) hides the large TRNG latencies using a random number buffering mechanism with a new DRAM idleness predictor that accurately identifies idle DRAM periods. We evaluate DR-STRaNGe using a set of 186 multiprogrammed workloads. Compared to an RNG-oblivious baseline system, DR-STRaNGe improves the average performance of non-RNG and RNG applications by 17.9% and 25.1%, respectively. DR-STRaNGe improves average system fairness by 32.1% and reduces average energy consumption by 21%.
@inproceedings{bostanci2022dr, title = {{DR-STRaNGe: End-to-End System Design for DRAM-Based True Random Number Generators}}, author = {Bostanc{\i}, F Nisa and Olgun, Ataberk and Orosa, Lois and Ya{\u{g}}l{\i}k{\c{c}}{\i}, A Giray and Kim, Jeremie S and Hassan, Hasan and Ergin, O{\u{g}}uz and Mutlu, Onur}, booktitle = {HPCA}, year = {2022} }
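The buffering idea from the abstract above can be illustrated with a small Python sketch: random numbers are generated into a buffer only when DRAM is predicted to be idle, so RNG applications usually read from the buffer instead of issuing high-latency TRNG requests. The buffer size, the idleness signal, and the fallback path are simplified assumptions, not DR-STRaNGe's actual design.

```python
from collections import deque
import random

class RngBuffer:
    """Serve RNG reads from a pre-filled buffer; refill only when DRAM is idle."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.buffer = deque()

    def refill_if_idle(self, dram_idle_predicted):
        # Spend DRAM bandwidth on TRNG requests only during predicted-idle
        # periods, so regular memory requests are not delayed by RNG interference.
        if dram_idle_predicted and len(self.buffer) < self.capacity:
            self.buffer.append(random.getrandbits(64))  # stand-in for a DRAM TRNG read

    def read(self):
        if self.buffer:
            return self.buffer.popleft()  # common case: low-latency buffered number
        return random.getrandbits(64)     # fallback: blocking, high-latency TRNG request

rng = RngBuffer()
for cycle in range(1000):
    rng.refill_if_idle(dram_idle_predicted=(cycle % 10 == 0))  # toy idleness signal
print(rng.read())
```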
- [ASPLOS] GenStore: a High-Performance In-Storage Processing System for Genome Sequence Analysis. Nika Mansouri Ghiasi, Jisung Park, Harun Mustafa, Jeremie Kim, Ataberk Olgun, Arvid Gollwitzer, Damla Senol Cali, Can Firtina, Haiyu Mao, Nour Almadhoun Alserr, and others. In ASPLOS, 2022
Read mapping is a fundamental, yet computationally-expensive step in many genomics applications. It is used to identify potential matches and differences between fragments (called reads) of a sequenced genome and an already known genome (called a reference genome). To address the computational challenges in genome analysis, many prior works propose various approaches such as filters that select the reads that must undergo expensive computation, efficient heuristics, and hardware acceleration. While effective at reducing the computation overhead, all such approaches still require the costly movement of a large amount of data from storage to the rest of the system, which can significantly lower the end-to-end performance of read mapping in conventional and emerging genomics systems. We propose GenStore, the first in-storage processing system designed for genome sequence analysis that greatly reduces both data movement and computational overheads of genome sequence analysis by exploiting low-cost and accurate in-storage filters. GenStore leverages hardware/software co-design to address the challenges of in-storage processing, supporting reads with 1) different read lengths and error rates, and 2) different degrees of genetic variation. Through rigorous analysis of read mapping processes, we meticulously design low-cost hardware accelerators and data/computation flows inside a NAND flash-based SSD. Our evaluation using a wide range of real genomic datasets shows that GenStore, when implemented in three modern SSDs, significantly improves the read mapping performance of state-of-the-art software (hardware) baselines by 2.07-6.05× (1.52-3.32×) for read sets with high similarity to the reference genome and 1.45-33.63× (2.70-19.2×) for read sets with low similarity to the reference genome.
@inproceedings{mansouri2022genstore, title = {{GenStore: a High-Performance In-Storage Processing System for Genome Sequence Analysis}}, author = {Mansouri Ghiasi, Nika and Park, Jisung and Mustafa, Harun and Kim, Jeremie and Olgun, Ataberk and Gollwitzer, Arvid and Senol Cali, Damla and Firtina, Can and Mao, Haiyu and Almadhoun Alserr, Nour and others}, booktitle = {ASPLOS}, year = {2022}, }
- [ACM TACO] MetaSys: A Practical Open-Source Metadata Management System to Implement and Evaluate Cross-Layer Optimizations. Nandita Vijaykumar, Ataberk Olgun, Konstantinos Kanellopoulos, F Nisa Bostanci, Hasan Hassan, Mehrshad Lotfi, Phillip B Gibbons, and Onur Mutlu. ACM TACO, 2022
This paper introduces the first open-source FPGA-based infrastructure, MetaSys, with a prototype in a RISC-V core, to enable the rapid implementation and evaluation of a wide range of cross-layer techniques in real hardware. Hardware-software cooperative techniques are powerful approaches to improve the performance, quality of service, and security of general-purpose processors. They are however typically challenging to rapidly implement and evaluate in real hardware as they require full-stack changes to the hardware, OS, system software, and instruction-set architecture (ISA). MetaSys implements a rich hardware-software interface and lightweight metadata support that can be used as a common basis to rapidly implement and evaluate new cross-layer techniques. We demonstrate MetaSys’s versatility and ease-of-use by implementing and evaluating three cross-layer techniques for: (i) prefetching for graph analytics; (ii) bounds checking in memory unsafe languages, and (iii) return address protection in stack frames; each technique only requiring 100 lines of Chisel code over MetaSys. Using MetaSys, we perform the first detailed experimental study to quantify the performance overheads of using a single metadata management system to enable multiple cross-layer optimizations in CPUs. We identify the key sources of bottlenecks and system inefficiency of a general metadata management system. We design MetaSys to minimize these inefficiencies and provide increased versatility compared to previously-proposed metadata systems. Using three use cases and a detailed characterization, we demonstrate that a common metadata management system can be used to efficiently support diverse cross-layer techniques in CPUs.
@article{vijaykumar2022metasys, title = {{MetaSys: A Practical Open-Source Metadata Management System to Implement and Evaluate Cross-Layer Optimizations}}, author = {Vijaykumar, Nandita and Olgun, Ataberk and Kanellopoulos, Konstantinos and Bostanci, F Nisa and Hassan, Hasan and Lotfi, Mehrshad and Gibbons, Phillip B and Mutlu, Onur}, journal = {ACM TACO}, year = {2022} }
- [arXiv] A Case for Transparent Reliability in DRAM Systems. Minesh Patel, Taha Shahroodi, Aditya Manglik, A Giray Yaglikci, Ataberk Olgun, Haocong Luo, and Onur Mutlu. arXiv, 2022
Today’s systems have diverse needs that are difficult to address using one-size-fits-all commodity DRAM. Unfortunately, although system designers can theoretically adapt commodity DRAM chips to meet their particular design goals (e.g., by reducing access timings to improve performance, implementing system-level RowHammer mitigations), we observe that designers today lack sufficient insight into commodity DRAM chips’ reliability characteristics to implement these techniques in practice. In this work, we make a case for DRAM manufacturers to provide increased transparency into key aspects of DRAM reliability (e.g., basic chip design properties, testing strategies). Doing so enables system designers to make informed decisions to better adapt commodity DRAM to meet modern systems’ needs while preserving its cost advantages. To support our argument, we study four ways that system designers can adapt commodity DRAM chips to system-specific design goals: (1) improving DRAM reliability; (2) reducing DRAM refresh overheads; (3) reducing DRAM access latency; and (4) mitigating RowHammer attacks. We observe that adopting solutions for any of the four goals requires system designers to make assumptions about a DRAM chip’s reliability characteristics. These assumptions discourage system designers from using such solutions in practice due to the difficulty of both making and relying upon the assumption. We identify DRAM standards as the root of the problem: current standards rigidly enforce a fixed operating point with no specifications for how a system designer might explore alternative operating points. To overcome this problem, we introduce a two-step approach that reevaluates DRAM standards with a focus on transparency of DRAM reliability so that system designers are encouraged to make the most of commodity DRAM technology for both current and future DRAM chips.
@misc{patel2022case, title = {{A Case for Transparent Reliability in DRAM Systems}}, author = {Patel, Minesh and Shahroodi, Taha and Manglik, Aditya and Yaglikci, A Giray and Olgun, Ataberk and Luo, Haocong and Mutlu, Onur}, howpublished = {arXiv:2204.10378}, year = {2022} }
- [DSN] Understanding RowHammer Under Reduced Wordline Voltage: An Experimental Study Using Real DRAM Devices. A Giray Yağlıkçı, Haocong Luo, Geraldo F De Oliviera, Ataberk Olgun, Minesh Patel, Jisung Park, Hasan Hassan, Jeremie S Kim, Lois Orosa, and Onur Mutlu. In DSN, 2022
RowHammer is a circuit-level DRAM vulnerability, where repeatedly activating and precharging a DRAM row, and thus alternating the voltage of a row’s wordline between low and high voltage levels, can cause bit flips in physically nearby rows. Recent DRAM chips are more vulnerable to RowHammer: with technology node scaling, the minimum number of activate-precharge cycles to induce a RowHammer bit flip reduces and the RowHammer bit error rate increases. Therefore, it is critical to develop effective and scalable approaches to protect modern DRAM systems against RowHammer. To enable such solutions, it is essential to develop a deeper understanding of the RowHammer vulnerability of modern DRAM chips. However, even though the voltage toggling on a wordline is a key determinant of RowHammer vulnerability, no prior work experimentally demonstrates the effect of wordline voltage (VPP) on the RowHammer vulnerability. Our work closes this gap in understanding. This is the first work to experimentally demonstrate on 272 real DRAM chips that lowering VPP reduces a DRAM chip’s RowHammer vulnerability. We show that lowering VPP 1) increases the number of activate-precharge cycles needed to induce a RowHammer bit flip by up to 85.8% with an average of 7.4% across all tested chips and 2) decreases the RowHammer bit error rate by up to 66.9% with an average of 15.2% across all tested chips. At the same time, reducing VPP marginally worsens a DRAM cell’s access latency, charge restoration, and data retention time within the guardbands of system-level nominal timing parameters for 208 out of 272 tested chips. We conclude that reducing VPP is a promising strategy for reducing a DRAM chip’s RowHammer vulnerability without requiring modifications to DRAM chips.
@inproceedings{yaglikci2022understanding, title = {{Understanding RowHammer Under Reduced Wordline Voltage: An Experimental Study Using Real DRAM Devices}}, author = {Ya{\u{g}}l{\i}k{\c{c}}{\i}, A Giray and Luo, Haocong and De Oliviera, Geraldo F and Olgun, Ataberk and Patel, Minesh and Park, Jisung and Hassan, Hasan and Kim, Jeremie S and Orosa, Lois and Mutlu, Onur}, booktitle = {DSN}, year = {2022} }
- [DSN] ERIC: An Efficient and Practical Software Obfuscation Framework. Alperen Bolat, Seyyid Hikmet Celik, Ataberk Olgun, Oğuz Ergin, and Marco Ottavi. In DSN, 2022
Modern cloud computing systems distribute software executables over a network to keep the software sources, which are typically compiled in a security-critical cluster, secret. We develop ERIC, a new, efficient, and general software obfuscation framework. ERIC protects software against (i) static analysis, by making only an encrypted version of software executables available to the human eye, no matter how the software is distributed, and (ii) dynamic analysis, by guaranteeing that an encrypted executable can only be correctly decrypted and executed by a single authenticated device. ERIC comprises key hardware and software components to provide efficient software obfuscation support: (i) a hardware decryption engine (HDE) enables efficient decryption of encrypted software in the target device, (ii) the compiler can seamlessly encrypt software executables given only a unique device identifier. Both the hardware and software components are ISA-independent, making ERIC general. The key idea of ERIC is to use physical unclonable functions (PUFs), unique device identifiers, as secret keys in encrypting software executables. Malicious parties that cannot access the PUF in the target device cannot perform static or dynamic analyses on the encrypted binary. We develop ERIC’s prototype on an FPGA to evaluate it end-to-end. Our prototype extends RISC-V Rocket Chip with the hardware decryption engine (HDE) to minimize the overheads of software decryption. We augment the custom LLVM-based compiler to enable partial/full encryption of RISC-V executables. The HDE incurs minor FPGA resource overheads: it requires 2.63% more LUTs and 3.83% more flip-flops compared to the Rocket Chip baseline. LLVM-based software encryption increases compile time by 15.22% and the executable size by 1.59%.
@inproceedings{bolat2022eric, title = {{ERIC: An Efficient and Practical Software Obfuscation Framework}}, author = {Bolat, Alperen and Celik, Seyyid Hikmet and Olgun, Ataberk and Ergin, O{\u{g}}uz and Ottavi, Marco}, booktitle = {DSN}, year = {2022} }
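The following Python sketch illustrates the key idea above: encrypting an executable with a key derived from a device-unique identifier (a PUF response in ERIC) so that only the intended device can decrypt and run it. The SHA-256-based keystream cipher and the hard-coded PUF response are stand-ins chosen for illustration; they are not the scheme ERIC implements.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Expand a device-unique key into a pseudorandom keystream (illustrative)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "little")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_executable(binary: bytes, puf_response: bytes) -> bytes:
    ks = keystream(puf_response, len(binary))
    return bytes(b ^ k for b, k in zip(binary, ks))  # XOR is its own inverse

# The compiler-side tool would encrypt with the target device's identifier;
# the on-chip decryption engine would reverse it with the same PUF response.
puf_response = bytes.fromhex("1f8e33a0c4d2b7e6")   # hypothetical device secret
plaintext = b"\x7fELF...code bytes..."
ciphertext = encrypt_executable(plaintext, puf_response)
assert encrypt_executable(ciphertext, puf_response) == plaintext
```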
- [arXiv] A Case for Self-Managing DRAM Chips: Improving Performance, Efficiency, Reliability, and Security via Autonomous in-DRAM Maintenance Operations. Hasan Hassan, Ataberk Olgun, A Giray Yaglikci, Haocong Luo, and Onur Mutlu. arXiv, 2022
The memory controller is in charge of managing DRAM maintenance operations (e.g., refresh, RowHammer protection, memory scrubbing) in current DRAM chips. Implementing new maintenance operations often necessitates modifications in the DRAM interface, memory controller, and potentially other system components. Such modifications are only possible with a new DRAM standard, which takes a long time to develop, leading to slow progress in DRAM systems. In this paper, our goal is to 1) ease, and thus accelerate, the process of enabling new DRAM maintenance operations and 2) enable more efficient in-DRAM maintenance operations. Our idea is to set the memory controller free from managing DRAM maintenance. To this end, we propose Self-Managing DRAM (SMD), a new low-cost DRAM architecture that enables implementing new in-DRAM maintenance mechanisms (or modifying old ones) with no further changes in the DRAM interface, memory controller, or other system components. We use SMD to implement new in-DRAM maintenance mechanisms for three use cases: 1) periodic refresh, 2) RowHammer protection, and 3) memory scrubbing. We show that SMD enables easy adoption of efficient maintenance mechanisms that significantly improve the system performance and energy efficiency while providing higher reliability compared to conventional DDR4 DRAM. A combination of SMD-based maintenance mechanisms that perform refresh, RowHammer protection, and memory scrubbing achieve 7.6% speedup and consume 5.2% less DRAM energy on average across 20 memory-intensive four-core workloads.
@article{hassan2022case, title = {{A Case for Self-Managing DRAM Chips: Improving Performance, Efficiency, Reliability, and Security via Autonomous in-DRAM Maintenance Operations}}, author = {Hassan, Hasan and Olgun, Ataberk and Yaglikci, A Giray and Luo, Haocong and Mutlu, Onur}, howpublished = {arXiv:2207.13358}, year = {2022} }
- [arXiv] Sectored DRAM: An Energy-Efficient High-Throughput and Practical Fine-Grained DRAM Architecture. Ataberk Olgun, F Bostanci, Geraldo F Oliveira, Yahya Can Tugrul, Rahul Bera, A Giray Yaglikci, Hasan Hassan, Oguz Ergin, and Onur Mutlu. arXiv, 2022
There are two major sources of inefficiency in computing systems that use modern DRAM devices as main memory. First, due to coarse-grained data transfers (size of a cache block, usually 64B) between the DRAM and the memory controller, systems waste energy on transferring data that is not used. Second, due to coarse-grained DRAM row activation, systems waste energy by activating DRAM cells that are unused in many workloads where spatial locality is lower than the large row size (usually 8-16KB). We propose Sectored DRAM, a new, low-overhead DRAM substrate that alleviates the two inefficiencies, by enabling fine-grained DRAM access and activation. To efficiently retrieve only the useful data from DRAM, Sectored DRAM exploits the observation that many cache blocks are not fully utilized in many workloads due to poor spatial locality. Sectored DRAM predicts the words in a cache block that will likely be accessed during the cache block’s cache residency and: (i) transfers only the predicted words on the memory channel, as opposed to transferring the entire cache block, by dynamically tailoring the DRAM data transfer size for the workload and (ii) activates a smaller set of cells that contain the predicted words, as opposed to activating the entire DRAM row, by carefully operating physically isolated portions of DRAM rows (MATs). Compared to prior work in fine-grained DRAM, Sectored DRAM greatly reduces DRAM energy consumption, does not reduce DRAM throughput, and can be implemented with low hardware cost. We evaluate Sectored DRAM using 41 workloads from widely-used benchmark suites. Sectored DRAM reduces the DRAM energy consumption of highly-memory-intensive workloads by up to (on average) 33% (20%) while improving their performance by 17% on average. Sectored DRAM’s DRAM energy savings, combined with its system performance improvement, allows system-wide energy savings of up to 23%.
@misc{olgun2022sectored, title = {{Sectored DRAM: An Energy-Efficient High-Throughput and Practical Fine-Grained DRAM Architecture}}, author = {Olgun, Ataberk and Bostanci, F and Oliveira, Geraldo F and Tugrul, Yahya Can and Bera, Rahul and Yaglikci, A Giray and Hassan, Hasan and Ergin, Oguz and Mutlu, Onur}, howpublished = {arXiv:2207.13795}, year = {2022} }
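A minimal Python sketch of the word-usage idea described above: remember which 8-byte words of a 64-byte cache block a load PC touched during the block's last cache residency, and transfer only those words (plus the demanded word) on the next miss from that PC. The PC-indexed table and the default-to-all-words policy are illustrative simplifications, not Sectored DRAM's exact predictor.

```python
WORDS_PER_BLOCK = 8  # a 64-byte cache block holds eight 8-byte words

class WordUsagePredictor:
    """Remember which words of a block a load PC used; fetch only those next time."""

    def __init__(self):
        self.table = {}  # load PC -> bitmask of words touched during residency

    def predict(self, pc, critical_word):
        mask = self.table.get(pc, (1 << WORDS_PER_BLOCK) - 1)  # unknown PC: fetch all
        return mask | (1 << critical_word)  # always include the demanded word

    def update(self, pc, used_mask):
        # Called when the block is evicted, with the words actually touched.
        self.table[pc] = used_mask

pred = WordUsagePredictor()
pred.update(pc=0x401200, used_mask=0b00000011)            # words 0 and 1 were used
print(bin(pred.predict(pc=0x401200, critical_word=1)))    # -> 0b11: transfer 2 of 8 words
```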
- [MICRO] Hermes: Accelerating Long-Latency Load Requests via Perceptron-Based Off-Chip Load Prediction. Rahul Bera, Konstantinos Kanellopoulos, Shankar Balachandran, David Novo, Ataberk Olgun, Mohammad Sadrosadat, and Onur Mutlu. In MICRO, 2022
Long-latency load requests continue to limit the performance of high-performance processors. To increase the latency tolerance of a processor, architects have primarily relied on two key techniques: sophisticated data prefetchers and large on-chip caches. In this work, we show that: 1) even a sophisticated state-of-the-art prefetcher can only predict half of the off-chip load requests on average across a wide range of workloads, and 2) due to the increasing size and complexity of on-chip caches, a large fraction of the latency of an off-chip load request is spent accessing the on-chip cache hierarchy. The goal of this work is to accelerate off-chip load requests by removing the on-chip cache access latency from their critical path. To this end, we propose a new technique called Hermes, whose key idea is to: 1) accurately predict which load requests might go off-chip, and 2) speculatively fetch the data required by the predicted off-chip loads directly from the main memory, while also concurrently accessing the cache hierarchy for such loads. To enable Hermes, we develop a new lightweight, perceptron-based off-chip load prediction technique that learns to identify off-chip load requests using multiple program features (e.g., sequence of program counters). For every load request, the predictor observes a set of program features to predict whether or not the load would go off-chip. If the load is predicted to go off-chip, Hermes issues a speculative request directly to the memory controller once the load’s physical address is generated. If the prediction is correct, the load eventually misses the cache hierarchy and waits for the ongoing speculative request to finish, thus hiding the on-chip cache hierarchy access latency from the critical path of the off-chip load. Our evaluation shows that Hermes significantly improves performance of a state-of-the-art baseline. We open-source Hermes.
@inproceedings{bera2022hermes, title = {{Hermes: Accelerating Long-Latency Load Requests via Perceptron-Based Off-Chip Load Prediction}}, author = {Bera, Rahul and Kanellopoulos, Konstantinos and Balachandran, Shankar and Novo, David and Olgun, Ataberk and Sadrosadat, Mohammad and Mutlu, Onur}, booktitle = {MICRO}, year = {2022} }
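The perceptron-based predictor described above can be sketched in a few lines of Python: several program features index small weight tables, the selected weights are summed, and the sum is compared against a threshold; training nudges the weights on mispredictions or low-confidence correct predictions. The feature choice, table sizes, and thresholds below are illustrative assumptions, not Hermes's tuned configuration.

```python
class OffChipPredictor:
    TABLE_SIZE = 1024
    THRESHOLD = 0          # predict "off-chip" if the weight sum exceeds this
    TRAIN_MARGIN = 8       # keep training until the sum is confidently correct
    W_MAX, W_MIN = 31, -32 # saturating weight range

    def __init__(self, num_features=3):
        self.tables = [[0] * self.TABLE_SIZE for _ in range(num_features)]

    def _indices(self, features):
        return [hash(f) % self.TABLE_SIZE for f in features]

    def predict(self, features):
        s = sum(t[i] for t, i in zip(self.tables, self._indices(features)))
        return s > self.THRESHOLD, s

    def train(self, features, went_off_chip):
        predicted, s = self.predict(features)
        # Train on a misprediction or a low-confidence correct prediction.
        if predicted != went_off_chip or abs(s) < self.TRAIN_MARGIN:
            delta = 1 if went_off_chip else -1
            for t, i in zip(self.tables, self._indices(features)):
                t[i] = max(self.W_MIN, min(self.W_MAX, t[i] + delta))

# Example features for one load: (PC, a hash of the recent PC sequence, cacheline offset).
pred = OffChipPredictor()
features = (0x400AB0, 0x91F3, 16)
print(pred.predict(features))
pred.train(features, went_off_chip=True)
```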
- [MICRO] HiRA: Hidden Row Activation for Reducing Refresh Latency of Off-the-Shelf DRAM Chips. A Giray Yağlikçi, Ataberk Olgun, Minesh Patel, Haocong Luo, Hasan Hassan, Lois Orosa, Oğuz Ergin, and Onur Mutlu. In MICRO, 2022
DRAM is the building block of modern main memory systems. DRAM cells must be periodically refreshed to prevent data loss. Refresh operations degrade system performance by interfering with memory accesses. As DRAM chip density increases with technology node scaling, refresh operations also increase because: 1) the number of DRAM rows in a chip increases; and 2) DRAM cells need additional refresh operations to mitigate bit failures caused by RowHammer, a failure mechanism that becomes worse with technology node scaling. Thus, it is critical to enable refresh operations at low performance overhead. To this end, we propose a new operation, Hidden Row Activation (HiRA), and the HiRA Memory Controller (HiRA-MC). HiRA hides a refresh operation’s latency by refreshing a row concurrently with accessing or refreshing another row within the same bank. Unlike prior works, HiRA achieves this parallelism without any modifications to off-the-shelf DRAM chips. To do so, it leverages the new observation that two rows in the same bank can be activated without data loss if the rows are connected to different charge restoration circuitry. We experimentally demonstrate on 56 real off-the-shelf DRAM chips that HiRA can reliably parallelize a DRAM row’s refresh operation with refresh or activation of any of the 32% of the rows within the same bank. By doing so, HiRA reduces the overall latency of two refresh operations by 51.4%. HiRA-MC modifies the memory request scheduler to perform HiRA when a refresh operation can be performed concurrently with a memory access or another refresh. Our system-level evaluations show that HiRA-MC increases system performance by 12.6% and 3.73x as it reduces the performance degradation due to periodic refreshes and refreshes for RowHammer protection (preventive refreshes), respectively, for future DRAM chips with increased density and RowHammer vulnerability.
@inproceedings{yaglikci2022hira, title = {HiRA: Hidden Row Activation for Reducing Refresh Latency of Off-the-Shelf DRAM Chips}, author = {Ya{\u{g}}lik{\c{c}}i, A Giray and Olgun, Ataberk and Patel, Minesh and Luo, Haocong and Hassan, Hasan and Orosa, Lois and Ergin, O{\u{g}}uz and Mutlu, Onur}, booktitle = {MICRO}, year = {2022} }
- [arXiv] Spyhammer: Using Rowhammer to Remotely Spy on Temperature. Lois Orosa, Ulrich Rührmair, A Giray Yaglikci, Haocong Luo, Ataberk Olgun, Patrick Jattke, Minesh Patel, Jeremie Kim, Kaveh Razavi, and Onur Mutlu. arXiv, 2022
RowHammer is a DRAM vulnerability that can cause bit errors in a victim DRAM row by just accessing its neighboring DRAM rows at a high-enough rate. Recent studies demonstrate that new DRAM devices are becoming increasingly more vulnerable to RowHammer, and many works demonstrate system-level attacks for privilege escalation or information leakage. In this work, we leverage two key observations about RowHammer characteristics to spy on DRAM temperature: 1) RowHammer-induced bit error rate consistently increases (or decreases) as the temperature increases, and 2) some DRAM cells that are vulnerable to RowHammer cause bit errors only at a particular temperature. Based on these observations, we propose a new RowHammer attack, called SpyHammer, that spies on the temperature of critical systems such as industrial production lines, vehicles, and medical systems. SpyHammer is the first practical attack that can spy on DRAM temperature. SpyHammer can spy on absolute temperature with an error of less than 2.5 °C at the 90th percentile of tested temperature points, for 12 real DRAM modules from 4 main manufacturers.
@misc{orosa2022spyhammer, title = {{Spyhammer: Using Rowhammer to Remotely Spy on Temperature}}, author = {Orosa, Lois and R{\"u}hrmair, Ulrich and Yaglikci, A Giray and Luo, Haocong and Olgun, Ataberk and Jattke, Patrick and Patel, Minesh and Kim, Jeremie and Razavi, Kaveh and Mutlu, Onur}, howpublished = {arXiv:2210.04084}, year = {2022} }
- [arXiv] TuRaN: True Random Number Generation Using Supply Voltage Underscaling in SRAMs. İsmail Emir Yüksel, Ataberk Olgun, Behzad Salami, F Bostancı, Yahya Can Tuğrul, A Giray Yağlıkçı, Nika Mansouri Ghiasi, Onur Mutlu, and Oğuz Ergin. arXiv, 2022
Prior works propose SRAM-based TRNGs that extract entropy from SRAM arrays. SRAM arrays are widely used to store data inside a majority of specialized or general-purpose chips that perform computation. Thus, SRAM-based TRNGs present a low-cost alternative to dedicated hardware TRNGs. However, existing SRAM-based TRNGs suffer from 1) low TRNG throughput, 2) high energy consumption, 3) high TRNG latency, and 4) the inability to generate true random numbers continuously, which limits the application space of SRAM-based TRNGs. Our goal in this paper is to design an SRAM-based TRNG that overcomes these four key limitations and thus, extends the application space of SRAM-based TRNGs. To this end, we propose TuRaN, a new high-throughput, energy-efficient, and low-latency SRAM-based TRNG that can sustain continuous operation. TuRaN leverages the key observation that accessing SRAM cells results in random access failures when the supply voltage is reduced below the manufacturer-recommended supply voltage. TuRaN generates random numbers at high throughput by repeatedly accessing SRAM cells with reduced supply voltage and post-processing the resulting random faults using the SHA-256 hash function. To demonstrate the feasibility of TuRaN, we conduct SPICE simulations on different process nodes and analyze the potential of access failure for use as an entropy source. We verify and support our simulation results by conducting real-world experiments on two commercial off-the-shelf FPGA boards. We evaluate the quality of the random numbers generated by TuRaN using the widely-adopted NIST standard randomness tests and observe that TuRaN passes all tests. TuRaN generates true random numbers with (i) an average (maximum) throughput of 1.6Gbps (1.812Gbps), (ii) 0.11nJ/bit energy consumption, and (iii) 278.46us latency.
@misc{yuksel2022turan, title = {{TuRaN: True Random Number Generation Using Supply Voltage Underscaling in SRAMs}}, author = {Y{\"u}ksel, {\.I}smail Emir and Olgun, Ataberk and Salami, Behzad and Bostanc{\i}, F and Tu{\u{g}}rul, Yahya Can and Ya{\u{g}}l{\i}k{\c{c}}{\i}, A Giray and Ghiasi, Nika Mansouri and Mutlu, Onur and Ergin, O{\u{g}}uz}, howpublished = {arXiv:2211.10894}, year = {2022} }
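The post-processing step described above can be illustrated with a short Python sketch: raw, biased bits harvested from SRAM access failures are condensed into uniform output with SHA-256. The raw-bit source here is simulated; on real hardware it would come from SRAM reads under a reduced supply voltage, and the block sizes are arbitrary illustrative choices.

```python
import hashlib
import random

def raw_failure_bits(num_bytes):
    # Stand-in for reading SRAM cells under voltage underscaling: some cells
    # fail unpredictably, yielding noisy (biased, correlated) raw bytes.
    return bytes(random.getrandbits(8) for _ in range(num_bytes))

def turan_like_output(blocks=4, block_size=64):
    out = b""
    for _ in range(blocks):
        raw = raw_failure_bits(block_size)
        out += hashlib.sha256(raw).digest()  # 256 post-processed bits per block
    return out

print(turan_like_output().hex())
```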
2021
- [Master's Thesis] Gerçek DRAM Aygıtlarında Dörtlü Etkinleştirme ile Yüksek Hızda Gerçek Rastgele Sayı Üretilmesi (True Random Number Generation at High Speed Using Quadruple Activation in Real DRAM Devices). Ataberk Olgun. TOBB ETÜ, 2021
@mastersthesis{olgun2021gerccek, title = {{Ger{\c{c}}ek DRAM Ayg{\i}tlar{\i}nda D{\"o}rtl{\"u} Etkinle{\c{s}}tirme ile Y{\"u}ksek H{\i}zda Ger{\c{c}}ek Rastgele Say{\i} {\"U}retilmesi}}, author = {Olgun, Ataberk}, year = {2021}, school = {TOBB ET{\"U}}, dimensions = {true} }
- [HPCA] Blockhammer: Preventing RowHammer at Low Cost by Blacklisting Rapidly-Accessed DRAM Rows. A Giray Yağlikçi, Minesh Patel, Jeremie S Kim, Roknoddin Azizi, Ataberk Olgun, Lois Orosa, Hasan Hassan, Jisung Park, Konstantinos Kanellopoulos, Taha Shahroodi, and others. In HPCA, 2021
Aggressive memory density scaling causes modern DRAM devices to suffer from RowHammer, a phenomenon where rapidly activating a DRAM row can cause bit-flips in physically-nearby rows. Recent studies demonstrate that modern DRAM chips, including chips previously marketed as RowHammer-safe, are even more vulnerable to RowHammer than older chips. Many works show that attackers can exploit RowHammer bit-flips to reliably mount system-level attacks to escalate privilege and leak private data. Therefore, it is critical to ensure RowHammer-safe operation on all DRAM-based systems. Unfortunately, state-of-the-art RowHammer mitigation mechanisms face two major challenges. First, they incur increasingly higher performance and/or area overheads when applied to more vulnerable DRAM chips. Second, they require either proprietary information about or modifications to the DRAM chip design. In this paper, we show that it is possible to efficiently and scalably prevent RowHammer bit-flips without knowledge of or modification to DRAM internals. We introduce BlockHammer, a low-cost, effective, and easy-to-adopt RowHammer mitigation mechanism that overcomes the two key challenges by selectively throttling memory accesses that could otherwise cause RowHammer bit-flips. The key idea of BlockHammer is to (1) track row activation rates using area-efficient Bloom filters and (2) use the tracking data to ensure that no row is ever activated rapidly enough to induce RowHammer bit-flips. By doing so, BlockHammer (1) makes it impossible for a RowHammer bit-flip to occur and (2) greatly reduces a RowHammer attack’s impact on the performance of co-running benign applications. Compared to state-of-the-art RowHammer mitigation mechanisms, BlockHammer provides competitive performance and energy when the system is not under a RowHammer attack and significantly better performance and energy when the system is under attack.
@inproceedings{yaglikci2021blockhammer, title = {{Blockhammer: Preventing RowHammer at Low Cost by Blacklisting Rapidly-Accessed DRAM Rows}}, author = {Ya{\u{g}}lik{\c{c}}i, A Giray and Patel, Minesh and Kim, Jeremie S and Azizi, Roknoddin and Olgun, Ataberk and Orosa, Lois and Hassan, Hasan and Park, Jisung and Kanellopoulos, Konstantinos and Shahroodi, Taha and others}, booktitle = {HPCA}, year = {2021}, dimensions = {true} }
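A minimal Python sketch of the tracking idea above: a counting-Bloom-filter-style structure over-approximates per-row activation counts in a small, fixed amount of state, and any row whose estimated count crosses a threshold is throttled. The sizes, hash count, threshold, and single-filter reset below are simplifications of BlockHammer's actual dual-filter design.

```python
NUM_COUNTERS = 4096
NUM_HASHES = 4
BLACKLIST_THRESHOLD = 512  # hypothetical activations per tracking epoch

counters = [0] * NUM_COUNTERS

def _indices(row_address):
    return [hash((h, row_address)) % NUM_COUNTERS for h in range(NUM_HASHES)]

def on_activate(row_address):
    """Update the filter; return True if this row should now be throttled."""
    idxs = _indices(row_address)
    for i in idxs:
        counters[i] += 1
    # The minimum over the hashed counters is a count-min-style estimate that
    # never undercounts, so throttling based on it never misses an aggressor
    # (it may only delay some benign rows spuriously).
    return min(counters[i] for i in idxs) >= BLACKLIST_THRESHOLD

def end_of_epoch():
    # BlockHammer swaps between a pair of filters across epochs; a plain
    # periodic reset is a simplification here.
    for i in range(NUM_COUNTERS):
        counters[i] = 0

throttled = False
for _ in range(600):
    throttled = on_activate(0xBEEF)
print("throttle row 0xBEEF now?", throttled)
```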
- [ISCA] QUAC-TRNG: High-Throughput True Random Number Generation Using Quadruple Row Activation in Commodity DRAM Chips. Ataberk Olgun, Minesh Patel, A Giray Yağlıkçı, Haocong Luo, Jeremie S Kim, F Nisa Bostancı, Nandita Vijaykumar, Oğuz Ergin, and Onur Mutlu. In ISCA, 2021
True random number generators (TRNG) sample random physical processes to create large amounts of random numbers for various use cases, including security-critical cryptographic primitives, scientific simulations, machine learning applications, and even recreational entertainment. Unfortunately, not every computing system is equipped with dedicated TRNG hardware, limiting the application space and security guarantees for such systems. To open the application space and enable security guarantees for the overwhelming majority of computing systems that do not necessarily have dedicated TRNG hardware, we develop QUAC-TRNG. QUAC-TRNG exploits the new observation that a carefully-engineered sequence of DRAM commands activates four consecutive DRAM rows in rapid succession. This QUadruple ACtivation (QUAC) causes the bitline sense amplifiers to non-deterministically converge to random values when we activate four rows that store conflicting data because the net deviation in bitline voltage fails to meet reliable sensing margins. We experimentally demonstrate that QUAC reliably generates random values across 136 commodity DDR4 DRAM chips from one major DRAM manufacturer. We describe how to develop an effective TRNG (QUAC-TRNG) based on QUAC. We evaluate the quality of our TRNG using NIST STS and find that QUAC-TRNG successfully passes each test. Our experimental evaluations show that QUAC-TRNG generates true random numbers with a throughput of 3.44 Gb/s (per DRAM channel), outperforming the state-of-the-art DRAM-based TRNG by 15.08x and 1.41x for basic and throughput-optimized versions, respectively. We show that QUAC-TRNG utilizes DRAM bandwidth better than the state-of-the-art, achieving up to 2.03x the throughput of a throughput-optimized baseline when scaling bus frequencies to 12 GT/s.
@inproceedings{olgun2021quac, title = {{QUAC-TRNG: High-Throughput True Random Number Generation Using Quadruple Row Activation in Commodity DRAM Chips}}, author = {Olgun, Ataberk and Patel, Minesh and Ya{\u{g}}l{\i}k{\c{c}}{\i}, A Giray and Luo, Haocong and Kim, Jeremie S and Bostanc{\i}, F Nisa and Vijaykumar, Nandita and Ergin, O{\u{g}}uz and Mutlu, Onur}, booktitle = {ISCA}, year = {2021}, dimensions = {true} }
- [MICRO] A Deeper Look into Rowhammer’s Sensitivities: Experimental Analysis of Real DRAM Chips and Implications on Future Attacks and Defenses. Lois Orosa, Abdullah Giray Yaglikci, Haocong Luo, Ataberk Olgun, Jisung Park, Hasan Hassan, Minesh Patel, Jeremie S Kim, and Onur Mutlu. In MICRO, 2021
RowHammer is a circuit-level DRAM vulnerability where repeatedly accessing (i.e., hammering) a DRAM row can cause bit flips in physically nearby rows. The RowHammer vulnerability worsens as DRAM cell size and cell-to-cell spacing shrink. Recent studies demonstrate that modern DRAM chips, including chips previously marketed as RowHammer-safe, are even more vulnerable to RowHammer than older chips such that the required hammer count to cause a bit flip has reduced by more than 10X in the last decade. Therefore, it is essential to develop a better understanding and in-depth insights into the RowHammer vulnerability of modern DRAM chips to more effectively secure current and future systems. Our goal in this paper is to provide insights into fundamental properties of the RowHammer vulnerability that are not yet rigorously studied by prior works, but can potentially be i) exploited to develop more effective RowHammer attacks or ii) leveraged to design more effective and efficient defense mechanisms. To this end, we present an experimental characterization using 248 DDR4 and 24 DDR3 modern DRAM chips from four major DRAM manufacturers demonstrating how the RowHammer effects vary with three fundamental properties: 1) DRAM chip temperature, 2) aggressor row active time, and 3) victim DRAM cell’s physical location. Among our 16 new observations, we highlight that a RowHammer bit flip 1) is very likely to occur in a bounded range, specific to each DRAM cell (e.g., 5.4% of the vulnerable DRAM cells exhibit errors in the range 70°C to 90°C), 2) is more likely to occur if the aggressor row is active for longer time (e.g., RowHammer vulnerability increases by 36% if we keep a DRAM row active for 15 column accesses), and 3) is more likely to occur in certain physical regions of the DRAM module under attack (e.g., 5% of the rows are 2x more vulnerable than the remaining 95% of the rows).
@inproceedings{orosa2021deeper, title = {{A Deeper Look into Rowhammer’s Sensitivities: Experimental Analysis of Real DRAM Chips and Implications on Future Attacks and Defenses}}, author = {Orosa, Lois and Yaglikci, Abdullah Giray and Luo, Haocong and Olgun, Ataberk and Park, Jisung and Hassan, Hasan and Patel, Minesh and Kim, Jeremie S and Mutlu, Onur}, booktitle = {MICRO}, year = {2021}, dimensions = {true} }