In this paper, we present a survey of techniques proposed for cache locking. We categorize the techniques into several groups to underscore their similarities and differences. We also discuss the opportunities and obstacles in using cache locking. We hope that this paper will help researchers gain insight into cache locking schemes and will also stimulate further work in this area.

Cache memories have been extensively used to bridge the gap between high-speed processors and relatively slower main memories. However, they are a source of predictability problems because of their dynamic and adaptive behavior, and thus need special attention to be used in hard real-time systems. A lot of progress has been achieved in the last ten years in statically predicting worst-case execution times (WCETs) of tasks on architectures with caches. However, cache-aware WCET analysis techniques are not always applicable or may be too pessimistic. An alternative approach that allows caches to be used in real-time systems is to lock their contents (i.e., disable cache replacement) so that memory access times and cache-related preemption times are predictable. In this paper, we compare the performance of two algorithms for static locking of instruction caches, one using a genetic algorithm for cache contents selection (A.M.
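The genetic approach to cache contents selection can be sketched in a few lines. This is a minimal illustrative model, not the algorithm evaluated in the papers above: the block names, access frequencies, and 1/10-cycle hit/miss latencies are invented, and fitness is simply the estimated access time under the pessimistic assumption that every unlocked block misses (replacement being disabled).

```python
import random

def estimated_access_time(locked, freqs, hit_time=1, miss_time=10):
    # Locked blocks always hit; with replacement disabled, every other
    # block is pessimistically counted as a miss.
    return sum(f * (hit_time if b in locked else miss_time)
               for b, f in freqs.items())

def select_locked_blocks(freqs, capacity, pop_size=30, generations=50, seed=0):
    # Evolve subsets of at most `capacity` blocks, minimising the
    # estimated access time of the resulting locked configuration.
    rng = random.Random(seed)
    blocks = sorted(freqs)

    def repair(ind):
        ind = set(ind)
        while len(ind) > capacity:          # trim oversized individuals
            ind.remove(rng.choice(sorted(ind)))
        return ind

    def crossover(a, b):
        union = sorted(a | b)
        return set(rng.sample(union, min(capacity, len(union))))

    def mutate(ind):
        ind = set(ind)
        if rng.random() < 0.3:              # occasionally toggle one block
            ind.symmetric_difference_update({rng.choice(blocks)})
        return repair(ind)

    pop = [set(rng.sample(blocks, min(capacity, len(blocks))))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: estimated_access_time(ind, freqs))
        survivors = pop[:pop_size // 2]     # elitist selection
        pop = survivors + [mutate(crossover(rng.choice(survivors),
                                            rng.choice(survivors)))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda ind: estimated_access_time(ind, freqs))

hot = {'b0': 120, 'b1': 90, 'b2': 7, 'b3': 2}   # hypothetical profile data
locked = select_locked_blocks(hot, capacity=2)  # expected to converge on the two hottest blocks
```

Real selection algorithms work on profiled worst-case paths and per-set cache conflicts rather than a flat frequency table, but the shape of the search is the same: a capacity-constrained subset selection driven by an access-time fitness function.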
The contents of on-chip memory, although selected off-line, are changed at run-time for the sake of scalability with respect to task size. The algorithm allows a quantitative comparison of the worst-case performance of applications using these two kinds of on-chip memories. Experimental results show that the algorithm yields good ratios of on-chip memory accesses on the worst-case execution path, with a tolerable reload overhead, for both types of on-chip memories. Furthermore, we highlight the circumstances under which one type of on-chip memory is more appropriate than the other, depending on architectural parameters (cache block size) and application characteristics (basic block size).

Cache memory, although important for boosting application performance, is also a source of execution time variability, and this makes its use difficult in systems requiring worst-case execution time (WCET) guarantees. Cache locking is a promising approach for simplifying WCET estimation and providing predictability; hence, several commercial processors provide the ability to lock the cache. However, cache locking also has several disadvantages (e.g., extra misses for unlocked blocks, complex algorithms required for selection of locking contents, etc.), and careful management is therefore required to realize its full potential.
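The interplay between cache block size and basic block size can be illustrated with a toy model. Everything here is hypothetical (the block sizes, the fetch counts, and a simple greedy selection standing in for the papers' algorithms): the point is that a locked cache allocates whole cache lines per basic block and so suffers internal fragmentation, whereas a scratchpad stores a basic block at its exact size.

```python
import math

def on_chip_access_ratio(basic_blocks, capacity, cache_line=None):
    # basic_blocks: list of (size_in_bytes, fetches_on_worst_case_path).
    # With a locked cache (cache_line set), each selected basic block
    # occupies whole cache lines; a scratchpad (cache_line=None) only
    # consumes the block's exact size.
    def footprint(size):
        if cache_line is None:
            return size
        return math.ceil(size / cache_line) * cache_line

    total = sum(f for _, f in basic_blocks)
    # Greedy selection by fetch density (fetches per byte of on-chip space).
    ranked = sorted(basic_blocks,
                    key=lambda b: b[1] / footprint(b[0]), reverse=True)
    used = on_chip = 0
    for size, fetches in ranked:
        fp = footprint(size)
        if used + fp <= capacity:
            used += fp
            on_chip += fetches
    return on_chip / total

blocks = [(8, 100)] * 8                                           # eight hot 8-byte basic blocks
spm   = on_chip_access_ratio(blocks, capacity=64)                 # 1.0  -- all blocks fit
cache = on_chip_access_ratio(blocks, capacity=64, cache_line=32)  # 0.25 -- only 2 of 8 fit
```

With basic blocks much smaller than the cache line, the scratchpad serves every worst-case-path fetch on-chip while the locked cache serves only a quarter of them; when basic blocks approach or exceed the line size, the fragmentation penalty shrinks and the two memories converge.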
Hard real-time tasks must meet their deadlines in all situations, including the worst case; otherwise the safety of the controlled system is jeopardized. In addition to this stringent demand for predictability, an increasing number of hard real-time applications need to be fast as well. As a consequence, architectures with caches and/or on-chip static RAM (scratchpad memories) are of interest for such applications. Compared with unlocked caches, which may raise predictability issues for some cache replacement policies (HLTW03), locked caches and software-controlled on-chip static RAM are more easily amenable to timing analysis. We propose in this paper an algorithm for off-line selection of the contents of on-chip memories. The algorithm supports two types of on-chip memories, namely locked caches and scratchpad memories.
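Why locked contents are amenable to timing analysis can be shown concretely. A sketch under invented latencies (1-cycle hit, 10-cycle miss): once the locked set is fixed, every fetch on the worst-case path is statically either a hit or a miss, independent of execution history, so no replacement-policy modelling is needed; a scratchpad admits the same static classification by address range.

```python
def worst_case_fetch_cycles(path_blocks, locked, hit=1, miss=10):
    # With locked contents, each fetch is a hit iff its memory block is
    # in the locked set -- an exact, history-independent classification.
    return sum(hit if b in locked else miss for b in path_blocks)

# Hypothetical worst-case path over memory blocks 'a', 'b', 'c':
cost = worst_case_fetch_cycles(['a', 'b', 'a', 'c', 'a'], locked={'a'})
# 3 hits + 2 misses = 3*1 + 2*10 = 23 cycles, exactly
```

An unlocked cache, by contrast, forces the analysis to track possible cache states along all paths (must/may analysis) and to fall back on pessimistic classifications whenever the replacement policy makes a reference's outcome uncertain.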