
Processing the cache pipelines

 

Source files: src/MemSys/l1cache.c, src/MemSys/l2cache.c, src/MemSys/pipeline.c, src/MemSys/cachehelp.c

Header files: incl/MemSys/cache.h, incl/MemSys/pipeline.h

For each cycle in which there are accesses in the cache pipelines, the functions L1CacheOutSim and L2CacheOutSim are called.
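
As a rough illustration of this per-cycle flow, the sketch below shows a driver that invokes the output simulators only for caches whose pipelines currently hold accesses. The Cache structure, its occupancy fields, and the stub function bodies are assumptions made for illustration; they are not taken from the RSIM sources.

#include <stdio.h>

typedef struct Cache {
    int l1_pipe_occupancy;   /* accesses currently in the L1 cache pipelines (assumed field) */
    int l2_pipe_occupancy;   /* accesses currently in the L2 cache pipelines (assumed field) */
} Cache;

/* Placeholder stubs standing in for the real L1CacheOutSim/L2CacheOutSim. */
static void L1CacheOutSim(Cache *c) { (void)c; printf("L1 pipeline pass\n"); }
static void L2CacheOutSim(Cache *c) { (void)c; printf("L2 pipeline pass\n"); }

/* Per-cycle driver: call each cache's output simulator only if its
 * pipelines currently hold accesses. */
static void SimulateCacheCycle(Cache *c)
{
    if (c->l1_pipe_occupancy > 0)
        L1CacheOutSim(c);
    if (c->l2_pipe_occupancy > 0)
        L2CacheOutSim(c);
}

int main(void)
{
    Cache c = { 1, 0 };      /* one access pending in the L1 pipelines */
    SimulateCacheCycle(&c);
    return 0;
}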

These functions start by checking what the system calls its smart MSHR list. The smart MSHR list is an abstraction used for simulator efficiency: in a real system, it would correspond to state held at the cache resources (MSHRs or write-back buffer entries). Each entry in the smart MSHR list corresponds to a message being held in one of these resources while it waits to be sent on one of the cache output ports. Messages can be held in their previously allocated cache resources to prevent deadlock, since the cache must always accept replies in a finite amount of time. If any such messages are held in their resources, the cache attempts to send one to its output port. If the message is sent successfully, the corresponding resource may, in some cases, be freed.
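
The sketch below illustrates one way such a list might be processed: each held message is offered to the output port, and its resource is freed when that is allowed. The HeldMsg type and the helpers TrySendToOutputPort and FreeResource are hypothetical stand-ins, not the simulator's actual data structures or functions.

#include <stdbool.h>
#include <stddef.h>

typedef struct HeldMsg {
    struct HeldMsg *next;
    void           *msg;            /* message waiting for an output port */
    void           *resource;       /* MSHR or write-back buffer entry holding it */
    bool            free_on_send;   /* whether the resource may be released once sent */
} HeldMsg;

/* Hypothetical helpers: try to place a message on the cache output port,
 * and release a cache resource.  These are not RSIM functions. */
static bool TrySendToOutputPort(void *msg) { (void)msg; return true; }
static void FreeResource(void *resource)   { (void)resource; }

/* Walk the list of held messages; send what the port will accept this
 * cycle, freeing the owning resource where that is permitted.  (The real
 * code may attempt only a limited number of sends per cycle.) */
HeldMsg *ProcessSmartMSHRList(HeldMsg *head)
{
    HeldMsg **link = &head;
    while (*link != NULL) {
        HeldMsg *entry = *link;
        if (TrySendToOutputPort(entry->msg)) {
            if (entry->free_on_send)
                FreeResource(entry->resource);
            *link = entry->next;     /* sent: unlink the entry */
        } else {
            link = &entry->next;     /* port busy: keep holding the message */
        }
    }
    return head;
}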

After attempting to process the smart MSHR list, the cache considers the current state of its pipelines. If a message has reached the head of its pipeline (in other words, it has experienced all of its expected latency), the cache calls one of the message-processing functions: L1ProcessTagReq, L2ProcessTagReq, or L2ProcessDataReq. If the corresponding function returns successfully, the element is removed from the pipe. After the heads of the pipelines have been processed, the cache advances the remaining elements by calling CyclePipe. The following sections describe L1ProcessTagReq, L2ProcessTagReq, and L2ProcessDataReq in detail.
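
The sketch below illustrates this head-of-pipe processing under simplified assumptions: a fixed-depth array stands in for the pipeline, a function pointer stands in for L1ProcessTagReq, L2ProcessTagReq, and L2ProcessDataReq, and the CyclePipe shown here is only an assumed approximation of the real advancing logic.

#include <stdbool.h>
#include <stddef.h>

#define PIPE_DEPTH 4

typedef struct Pipe {
    void *stage[PIPE_DEPTH];   /* stage[0] is the head of the pipeline */
} Pipe;

/* A function of this shape stands in for L1ProcessTagReq, L2ProcessTagReq,
 * or L2ProcessDataReq: it returns true if the request was handled. */
typedef bool (*ProcessFn)(void *req);

/* If an element has reached the head of the pipe (i.e. has seen all of its
 * expected latency), hand it to the processing function and remove it from
 * the pipe only if that function succeeds. */
void ProcessPipeHead(Pipe *p, ProcessFn process)
{
    if (p->stage[0] != NULL && process(p->stage[0]))
        p->stage[0] = NULL;
}

/* Stand-in for CyclePipe (behavior assumed): advance elements one stage
 * toward the head, moving only into empty stages so that an unprocessed
 * head stalls everything behind it. */
void CyclePipe(Pipe *p)
{
    for (int i = 0; i + 1 < PIPE_DEPTH; i++) {
        if (p->stage[i] == NULL) {
            p->stage[i]     = p->stage[i + 1];
            p->stage[i + 1] = NULL;
        }
    }
}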


