website: Fix code formatting and links for learning gem5 part 3
Change-Id: Idb91c492f7176eaaf9ffe9a1ab8dbfc3bd812a36
diff --git a/_pages/documentation/learning_gem5/part3/MSIbuilding.md b/_pages/documentation/learning_gem5/part3/MSIbuilding.md
index aff2054..b4bf80e 100644
--- a/_pages/documentation/learning_gem5/part3/MSIbuilding.md
+++ b/_pages/documentation/learning_gem5/part3/MSIbuilding.md
@@ -17,9 +17,9 @@
Now that we have finished implementing the protocol, we need to compile
it. You can download the complete SLICC files below:
-- MSI-cache.sm \<../../\_static/scripts/part3/MSI\_protocol/MSI-cache.sm\>
-- MSI-dir.sm \<../../\_static/scripts/part3/MSI\_protocol/MSI-dir.sm\>
-- MSI-msg.sm \<../../\_static/scripts/part3/MSI\_protocol/MSI-msg.sm\>
+- [MSI-cache.sm](/_pages/static/scripts/part3/MSI_protocol/MSI-cache.sm)
+- [MSI-dir.sm](/_pages/static/scripts/part3/MSI_protocol/MSI-dir.sm)
+- [MSI-msg.sm](/_pages/static/scripts/part3/MSI_protocol/MSI-msg.sm)
Before building the protocol, we need to create one more file:
`MSI.slicc`. This file tells the SLICC compiler which state machine
@@ -35,7 +35,7 @@
come before the files they are used in (e.g., `MSI-msg.sm` must come
before `MSI-cache.sm` since `MSI-cache.sm` uses the `RequestMsg` type).
-``` {.sourceCode .c++}
+```cpp
protocol "MSI";
include "RubySlicc_interfaces.slicc";
include "MSI-msg.sm";
@@ -44,7 +44,7 @@
```
-You can download the fill file
+You can download the full file
-here \<../../\_static/scripts/part3/MSI\_protocol/MSI.slicc\>
+[here](/_pages/static/scripts/part3/MSI_protocol/MSI.slicc).
Compiling a protocol with SCons
-------------------------------
@@ -55,7 +55,7 @@
to specify the SCons options on the command line. The command line below
will build our new protocol with the X86 ISA.
-``` {.sourceCode .sh}
+```sh
scons build/X86_MSI/gem5.opt --default=X86 PROTOCOL=MSI SLICC_HTML=True
```
diff --git a/_pages/documentation/learning_gem5/part3/MSIdebugging.md b/_pages/documentation/learning_gem5/part3/MSIdebugging.md
index 9c093c0..7ff437d 100644
--- a/_pages/documentation/learning_gem5/part3/MSIdebugging.md
+++ b/_pages/documentation/learning_gem5/part3/MSIdebugging.md
@@ -61,9 +61,9 @@
than when using normal CPUs. Thus, we need to use a different
`MyCacheSystem` than before. You can download this different cache
system file
-here \<../../\_static/scripts/part3/configs/test\_caches.py\> and you
+[here](/_pages/static/scripts/part3/configs/test_caches.py) and you
can download the modified run script
-here \<../../\_static/scripts/part3/configs/ruby\_test.py\>. The test
+[here](/_pages/static/scripts/part3/configs/ruby_test.py). The test
run script is mostly the same as the simple run script, but creates the
`RubyRandomTester` instead of CPUs.
@@ -148,7 +148,7 @@
use this extra field to pass additional information to the trace --
such as identifying the request as a load or store. For SLICC
-transitions, `APPEND_TRANSITION_COMMENT` often use this, as we
+transitions, `APPEND_TRANSITION_COMMENT` is often used for this, as we
- discussed previously \<MSI-actions-section\>.
+ [discussed previously](../cache-actions/).
-Generally, spaces are used to separate each of these fields (the space
-between the fields are added implicitly, you do not need to add them).
+Generally, spaces are used to separate each of these fields (the spaces
+between the fields are added implicitly; you do not need to add them).
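This space-separated layout also makes the protocol trace easy to post-process with a script. As a sketch (the field layout is assumed from the example trace lines in this section, and `parse_transition` is an illustrative helper, not part of gem5):

```python
def parse_transition(line):
    """Split one ProtocolTrace line into its space-separated fields.

    Assumed layout: <tick> <version> <machine> <event> <from>><to> [addr ...]
    """
    tick, version, machine, event, states, rest = line.split(maxsplit=5)
    src, dst = states.split(">", 1)
    return {"tick": int(tick), "machine": machine, "event": event,
            "from": src, "to": dst, "addr": rest}

line = "5332 0 L1Cache Inv SM_AD>I [0x4ac0, line 0x4ac0]"
print(parse_transition(line)["event"])  # -> Inv
```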
@@ -215,8 +215,8 @@
requestor is the owner do we include the number of sharers. Otherwise,
it doesn't matter at all and we just set the sharers to 0.
-::
-: panic: Invalid transition system.caches.controllers0 time: 5332
+
+ panic: Invalid transition system.caches.controllers0 time: 5332
addr: 0x4ac0 event: Inv state: SM\_AD
First, let's look at where Inv is triggered. If you get an invalidate...
@@ -224,7 +224,7 @@
We can use protocol trace and grep to find what's going on.
-``` {.sourceCode .sh}
+```sh
build/MSI/gem5.opt --debug-flags=ProtocolTrace configs/learning_gem5/part6/ruby_test.py | grep 0x4ac0
```
@@ -245,7 +245,7 @@
Maybe there is a sharer in the sharers list when there shouldn't be? We
can add a defensive assert in clearOwner and setOwner.
-``` {.sourceCode .c++}
+```cpp
action(setOwner, "sO", desc="Set the owner") {
assert(getDirectoryEntry(address).Sharers.count() == 0);
peek(request_in, RequestMsg) {
@@ -357,7 +357,7 @@
Now, let's try with two CPUs. First thing I run into is an assert
failure. I'm seeing the first assert in setState fail.
-``` {.sourceCode .c++}
+```cpp
void setState(Addr addr, State state) {
if (directory.isPresent(addr)) {
if (state == State:M) {
@@ -378,7 +378,7 @@
the assert. Note that you are required to use the RubySlicc debug flag.
This is the only debug flag included in the generated SLICC files.
-``` {.sourceCode .c++}
+```cpp
DPRINTF(RubySlicc, "Owner %s\n", getDirectoryEntry(addr).Owner);
```
@@ -449,8 +449,7 @@
corrupted in the ancient past. I believe the address is the last one in
the protocol trace.
-::
-: panic: Action/check failure: proc: 0 address: 19688 data: 0x779e6d0
+ panic: Action/check failure: proc: 0 address: 19688 data: 0x779e6d0
byte\_number: 0 m\_value+byte\_number: 53 byte: 0 [19688, value: 53,
status: Check\_Pending, initiating node: 0, store\_count: 4]Time:
5843
@@ -460,7 +459,7 @@
trace with the ack information. To do this, we can add comments to the
transition with APPEND\_TRANSITION\_COMMENT.
-``` {.sourceCode .c++}
+```cpp
action(decrAcks, "da", desc="Decrement the number of acks") {
assert(is_valid(tbe));
tbe.Acks := tbe.Acks - 1;
@@ -480,22 +479,22 @@
5382 1 L1Cache Inv S>I [0x4cc0, line 0x4cc0]
-> 5383: PerfectSwitch-1: Message: [ResponseMsg: addr = [0x4cc0, line
-> 0x4cc0] Type = InvAck Sender = L1Cache-1 Destination = [NetDest (16) 1
-> 0 - - - 0 - - - - - - - - - - - - - ] DataBlk = [ 0x0 0x0 0x0 0x0 0x0
-> 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
-> 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
-> 0x0 0x35 0x36 0x37 0x61 0x6d 0x6e 0x6f 0x70 0x0 0x0 0x0 0x0 0x0 0x0
-> 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 ] MessageSize = Control Acks =
-> 0 ] ... ... ... 5389 0 Directory MemData M\_M\>M [0x4cc0, line 0x4cc0]
-> 5390: PerfectSwitch-2: incoming: 0 5390: PerfectSwitch-2: Message:
-> [ResponseMsg: addr = [0x4cc0, line 0x4cc0] Type = Data Sender =
-> Directory-0 Destination = [NetDest (16) 1 0 - - - 0 - - - - - - - - -
-> - - - - ] DataBlk = [ 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
-> 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
-> 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
-> 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
-> 0x0 ] MessageSize = Data Acks = 1 ]
+ > 5383: PerfectSwitch-1: Message: [ResponseMsg: addr = [0x4cc0, line
+ > 0x4cc0] Type = InvAck Sender = L1Cache-1 Destination = [NetDest (16) 1
+ > 0 - - - 0 - - - - - - - - - - - - - ] DataBlk = [ 0x0 0x0 0x0 0x0 0x0
+ > 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
+ > 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
+ > 0x0 0x35 0x36 0x37 0x61 0x6d 0x6e 0x6f 0x70 0x0 0x0 0x0 0x0 0x0 0x0
+ > 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 ] MessageSize = Control Acks =
+ > 0 ] ... ... ... 5389 0 Directory MemData M\_M\>M [0x4cc0, line 0x4cc0]
+ > 5390: PerfectSwitch-2: incoming: 0 5390: PerfectSwitch-2: Message:
+ > [ResponseMsg: addr = [0x4cc0, line 0x4cc0] Type = Data Sender =
+ > Directory-0 Destination = [NetDest (16) 1 0 - - - 0 - - - - - - - - -
+ > - - - - ] DataBlk = [ 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
+ > 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
+ > 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
+ > 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
+ > 0x0 ] MessageSize = Data Acks = 1 ]
It seems that memory is not being updated correctly on the M-\>S
transition. After lots of digging and using the MemoryAccess debug flag
diff --git a/_pages/documentation/learning_gem5/part3/cache-actions.md b/_pages/documentation/learning_gem5/part3/cache-actions.md
index 48b711b..e782a77 100644
--- a/_pages/documentation/learning_gem5/part3/cache-actions.md
+++ b/_pages/documentation/learning_gem5/part3/cache-actions.md
@@ -29,7 +29,7 @@
into the `trigger` function, and `tbe` is the TBE passed into the
`trigger` function.
-``` {.sourceCode .c++}
+```cpp
action(sendGetS, 'gS', desc="Send GetS to the directory") {
enqueue(request_out, RequestMsg, 1) {
out_msg.addr := address;
@@ -87,7 +87,7 @@
-represent requests where we downgrading or evicting our copy of the
+represent requests where we are downgrading or evicting our copy of the
data.
-``` {.sourceCode .c++}
+```cpp
action(sendGetM, "gM", desc="Send GetM to the directory") {
enqueue(request_out, RequestMsg, 1) {
out_msg.addr := address;
@@ -133,7 +133,7 @@
Additionally, in this action we use the `cache_entry` variable to get
the data to send to the other cache.
-``` {.sourceCode .c++}
+```cpp
action(sendCacheDataToReq, "cdR", desc="Send cache data to requestor") {
assert(is_valid(cache_entry));
peek(forward_in, RequestMsg) {
@@ -153,7 +153,7 @@
an invalidation ack to the original requestor on a forward request when
this cache does not have the data.
-``` {.sourceCode .c++}
+```cpp
action(sendCacheDataToDir, "cdD", desc="Send the cache data to the dir") {
enqueue(response_out, ResponseMsg, 1) {
out_msg.addr := address;
@@ -190,12 +190,12 @@
protocols: `APPEND_TRANSITION_COMMENT`. This function takes a string, or
something that can easily be converted to a string (e.g., `int`) as a
parameter. It modifies the *protocol trace* output, which we will
-discuss in the debugging section \<MSI-debugging-section\>. On each
+discuss in the [debugging section](../MSIdebugging). On each
protocol trace line that executes this action it will print the total
number of acks this cache is still waiting on. This is useful since the
number of remaining acks is part of the cache block state.
-``` {.sourceCode .c++}
+```cpp
action(decrAcks, "da", desc="Decrement the number of acks") {
assert(is_valid(tbe));
tbe.AcksOutstanding := tbe.AcksOutstanding - 1;
@@ -209,7 +209,7 @@
directory's response message to get the number of acks and store them in
the (required to be valid) TBE.
-``` {.sourceCode .c++}
+```cpp
action(storeAcks, "sa", desc="Store the needed acks to the TBE") {
assert(is_valid(tbe));
peek(response_in, ResponseMsg) {
@@ -225,7 +225,7 @@
case of a store, we give the sequencer a pointer to the data block and
the sequencer updates the data in-place.
-``` {.sourceCode .c++}
+```cpp
action(loadHit, "Lh", desc="Load hit") {
assert(is_valid(cache_entry));
cacheMemory.setMRU(cache_entry);
@@ -307,7 +307,7 @@
-the cache block until we are sure that this cache no longer are
+the cache block until we are sure that this cache is no longer
responsible for the data.
-``` {.sourceCode .c++}
+```cpp
action(allocateCacheBlock, "a", desc="Allocate a cache block") {
assert(is_invalid(cache_entry));
assert(cacheMemory.cacheAvail(address));
@@ -356,7 +356,7 @@
prevents the `in_port` logic from consuming another message from the
same message buffer in a single cycle.
-``` {.sourceCode .c++}
+```cpp
action(popMandatoryQueue, "pQ", desc="Pop the mandatory queue") {
mandatory_in.dequeue(clockEdge());
}
@@ -378,7 +378,7 @@
usually simpler, but lower performance since a stall on a high priority
buffer can stall many requests that may not need to be stalled.
-``` {.sourceCode .c++}
+```cpp
action(stall, "z", desc="Stall the incoming request") {
// z_stall
}
diff --git a/_pages/documentation/learning_gem5/part3/cache-declarations.md b/_pages/documentation/learning_gem5/part3/cache-declarations.md
index e32b927..986a790 100644
--- a/_pages/documentation/learning_gem5/part3/cache-declarations.md
+++ b/_pages/documentation/learning_gem5/part3/cache-declarations.md
@@ -17,7 +17,7 @@
Create a file called `MSI-cache.sm` and the following code declares the
state machine.
-``` {.sourceCode .c++}
+```cpp
machine(MachineType:L1Cache, "MSI cache")
: <parameters>
{
@@ -60,7 +60,7 @@
For our MSI L1 cache, we have the following parameters:
-``` {.sourceCode .c++}
+```cpp
machine(MachineType:L1Cache, "MSI cache")
: Sequencer *sequencer;
CacheMemory *cacheMemory;
@@ -80,7 +80,7 @@
-master port) and converts the gem5 the packet into a `RubyRequest`.
+master port) and converts the gem5 packet into a `RubyRequest`.
Finally, the `RubyRequest` is pushed onto the `mandatoryQueue` of the
state machine. We will revisit the `mandatoryQueue` in
-in port section \<MSI-in-ports-section\>.
+the [in-port section](../cache-in-ports).
Next, there is a `CacheMemory` object. This is what holds the cache data
(i.e., cache entries). The exact implementation, size, etc. is
@@ -109,7 +109,7 @@
The following code declares all of the needed message buffers.
-``` {.sourceCode .c++}
+```cpp
machine(MachineType:L1Cache, "MSI cache")
: Sequencer *sequencer;
CacheMemory *cacheMemory;
@@ -163,7 +163,7 @@
look very familiar. Note: This is a generated file and you should never
modify generated files directly!
-``` {.sourceCode .python}
+```python
from m5.params import *
from m5.SimObject import SimObject
from Controller import RubyController
@@ -191,7 +191,7 @@
Invalid to Modified waiting on acks and data. These states come directly
from the left column of Table 8.3 in Sorin et al.
-``` {.sourceCode .c++}
+```cpp
state_declaration(State, desc="Cache states") {
I, AccessPermission:Invalid,
desc="Not present/Invalid";
@@ -247,7 +247,7 @@
incoming messages for this cache controller. These events come directly
from the first row in Table 8.3 in Sorin et al.
-``` {.sourceCode .c++}
+```cpp
enumeration(Event, desc="Cache events") {
// From the processor/sequencer/mandatory queue
Load, desc="Load from processor";
@@ -290,7 +290,7 @@
use any of the member functions of `AbstractCacheEntry`, you need to
declare them here (this isn't used in this protocol).
-``` {.sourceCode .c++}
+```cpp
structure(Entry, desc="Cache entry", interface="AbstractCacheEntry") {
State CacheState, desc="cache state";
DataBlock DataBlk, desc="Data in the block";
@@ -306,7 +306,7 @@
used for the transitions where other controllers send acks instead of
the data.
-``` {.sourceCode .c++}
+```cpp
structure(TBE, desc="Entry for transient requests") {
State TBEState, desc="State of block";
DataBlock DataBlk, desc="Data for the block. Needed for MI_A";
@@ -322,7 +322,7 @@
the TBE structure defined above, which gets a little confusing, as we
will see.
-``` {.sourceCode .c++}
+```cpp
structure(TBETable, external="yes") {
TBE lookup(Addr);
void allocate(Addr);
@@ -356,7 +356,7 @@
thus we need to use the C++ name for the variable since it doesn't have
a SLICC name.
-``` {.sourceCode .c++}
+```cpp
TBETable TBEs, template="<L1Cache_TBE>", constructor="m_number_of_TBEs";
```
@@ -365,9 +365,9 @@
Next, any functions that are part of AbstractController need to be
declared, if we are going to use them in the rest of the file. In this
-case, we are only going to use `clockEdge()`
+case, we are only going to use `clockEdge()`:
-``` {.sourceCode .c++}
+```cpp
Tick clockEdge();
```
@@ -377,7 +377,7 @@
-detail in the action section \<MSI-actions-section\>. These may be
+detail in the [action section](../cache-actions/). These may be
needed when a transition has many actions.
-``` {.sourceCode .c++}
+```cpp
void set_cache_entry(AbstractCacheEntry a);
void unset_cache_entry();
void set_tbe(TBE b);
@@ -388,7 +388,7 @@
change the address mappings for banked directories or caches at runtime
so we don't have to hardcode them in the SLICC file.
-``` {.sourceCode .c++}
+```cpp
MachineID mapAddressToMachine(Addr addr, MachineType mtype);
```
@@ -401,7 +401,7 @@
defined a specific `Entry` type in the file, but the `CacheMemory` holds
the abstract type.
-``` {.sourceCode .c++}
+```cpp
// Convenience function to look up the cache entry.
// Needs a pointer so it will be a reference and can be updated in actions
Entry getCacheEntry(Addr address), return_by_pointer="yes" {
@@ -442,7 +442,7 @@
: Functionally write the data. Similarly, you may need to update the
data in both the TBE and the cache entry.
-``` {.sourceCode .c++}
+```cpp
State getState(TBE tbe, Entry cache_entry, Addr addr) {
// The TBE state will override the state in cache memory, if valid
if (is_valid(tbe)) { return tbe.TBEState; }
diff --git a/_pages/documentation/learning_gem5/part3/cache-in-ports.md b/_pages/documentation/learning_gem5/part3/cache-in-ports.md
index e3c2c99..5f5d1e3 100644
--- a/_pages/documentation/learning_gem5/part3/cache-in-ports.md
+++ b/_pages/documentation/learning_gem5/part3/cache-in-ports.md
@@ -17,7 +17,7 @@
However, before we get to the in ports, we must declare our out ports.
-``` {.sourceCode .c++}
+```cpp
out_port(request_out, RequestMsg, requestToDir);
out_port(response_out, ResponseMsg, responseToDirOrSibling);
```
@@ -66,7 +66,7 @@
other caches. Next we will break the code block down to explain each
section.
-``` {.sourceCode .c++}
+```cpp
in_port(response_in, ResponseMsg, responseFromDirOrSibling) {
if (response_in.isReady(clockEdge())) {
peek(response_in, ResponseMsg) {
@@ -118,7 +118,7 @@
not, then this `in_port` code block is skipped and the next one is
executed.
-``` {.sourceCode .c++}
+```cpp
in_port(response_in, ResponseMsg, responseFromDirOrSibling) {
if (response_in.isReady(clockEdge())) {
. . .
@@ -146,7 +146,7 @@
debugging of cache coherence protocols. It is encouraged to use asserts
liberally to make debugging easier.
-``` {.sourceCode .c++}
+```cpp
peek(response_in, ResponseMsg) {
Entry cache_entry := getCacheEntry(in_msg.addr);
TBE tbe := TBEs[in_msg.addr];
@@ -168,7 +168,7 @@
In the `MSI-msg.sm` file, add the following code block:
-``` {.sourceCode .c++}
+```cpp
structure(ResponseMsg, desc="Used for Dir->Cache and Fwd message responses",
interface="Message") {
Addr addr, desc="Physical address for this response";
@@ -209,7 +209,7 @@
`CoherenceResponseType`, to use it in this message. Add the following
code *before* the `ResponseMsg` declaration in the same file.
-``` {.sourceCode .c++}
+```cpp
enumeration(CoherenceResponseType, desc="Types of response messages") {
Data, desc="Contains the most up-to-date data";
InvAck, desc="Message from another cache that they have inv. the blk";
@@ -239,14 +239,14 @@
there are messages in-flight for an address that is functionally read or
written the functional access may fail.
-You can download the complete file `MSI-msg.sm`
-here \<../../\_static/scripts/part3/MSI\_protocol/MSI-msg.sm\>.
+You can download the complete `MSI-msg.sm` file
+[here](/_pages/static/scripts/part3/MSI_protocol/MSI-msg.sm).
Now that we have defined the data in the response message, we can look
at how we choose which action to trigger in the `in_port` for response
to the cache.
-``` {.sourceCode .c++}
+```cpp
// If it's from the directory...
if (machineIDToMachineType(in_msg.Sender) ==
MachineType:Directory) {
@@ -355,7 +355,7 @@
type enumerations, one for forward and one for normal requests, but it
simplifies the code to use a single message and type.
-``` {.sourceCode .c++}
+```cpp
enumeration(CoherenceRequestType, desc="Types of request messages") {
GetS, desc="Request from cache for a block with read permission";
GetM, desc="Request from cache for a block with write permission";
@@ -368,7 +368,7 @@
}
```
-``` {.sourceCode .c++}
+```cpp
structure(RequestMsg, desc="Used for Cache->Dir and Fwd messages", interface="Message") {
Addr addr, desc="Physical address for this request";
CoherenceRequestType Type, desc="Type of request";
@@ -390,14 +390,11 @@
}
```
-You can download the complete file `MSI-msg.sm`
-here \<../../\_static/scripts/part3/MSI\_protocol/MSI-msg.sm\>.
-
Now, we can specify the logic for the forward network `in_port`. This
logic is straightforward and triggers a different event for each request
type.
-``` {.sourceCode .c++}
+```cpp
in_port(forward_in, RequestMsg, forwardFromDir) {
if (forward_in.isReady(clockEdge())) {
peek(forward_in, RequestMsg) {
@@ -431,7 +428,7 @@
in some protocols. However, for this simple protocol we only need the
`LineAddress`.
-``` {.sourceCode .c++}
+```cpp
in_port(mandatory_in, RubyRequest, mandatoryQueue) {
if (mandatory_in.isReady(clockEdge())) {
peek(mandatory_in, RubyRequest, block_on="LineAddress") {
diff --git a/_pages/documentation/learning_gem5/part3/cache-intro.md b/_pages/documentation/learning_gem5/part3/cache-intro.md
index 5734efa..f94f6c1 100644
--- a/_pages/documentation/learning_gem5/part3/cache-intro.md
+++ b/_pages/documentation/learning_gem5/part3/cache-intro.md
@@ -16,32 +16,29 @@
the great book *A Primer on Memory Consistency and Cache Coherence* by
Daniel J. Sorin, Mark D. Hill, and David A. Wood which was published as
part of the Synthesis Lectures on Computer Architecture in 2011
-([DOI:\`10.2200/S00346ED1V01Y201104CAC016](DOI:`10.2200/S00346ED1V01Y201104CAC016)
-\<<https://doi.org/10.2200/S00346ED1V01Y201104CAC016>\>\_).
+([DOI:10.2200/S00346ED1V01Y201104CAC016](https://doi.org/10.2200/S00346ED1V01Y201104CAC016)).
If you are unfamiliar with cache coherence, I strongly advise reading that book before continuing.
In this chapter, we will be implementing an MSI protocol.
(An MSI protocol has three stable states, modified with read-write permission, shared with read-only permission, and invalid with no permissions.)
We will implement this as a three-hop directory protocol (i.e., caches can send data directly to other caches without going through the directory).
-Details for the protocol can be found in Section 8.2 of the \*Primer on Memory Consistency and Cache Coherence\* (pages 141-149).
+Details for the protocol can be found in Section 8.2 of *A Primer on Memory Consistency and Cache Coherence* (pages 141-149).
It will be helpful to print out Section 8.2 to reference as you are implementing the protocol.
-You can download an exceprt of Sorin et al. that contains Section 8.2 :download:here
-\<../../\_static/external/Sorin\_et-al\_Excerpt\_8.2.pdf\>.
+You can download an excerpt of Sorin et al. that contains Section 8.2 [here](/_pages/static/external/Sorin_et-al_Excerpt_8.2.pdf).
-First steps to writing a protocol
-\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~
+## First steps to writing a protocol
-Let's start by creating a new directory for our protocol: src/learning\_gem5/MSI\_protocol.
+Let's start by creating a new directory for our protocol at `src/learning_gem5/MSI_protocol`.
In this directory, like in all gem5 source directories, we need to create a file for SCons to know what to compile.
-However, this time, instead of creating a SConscript\` file, we are
+However, this time, instead of creating a `SConscript` file, we are
going to create a `SConsopts` file. (The `SConsopts` files are processed
before the `SConscript` files and we need to run the SLICC compiler
before SCons executes.)
We need to create a `SConsopts` file with the following:
-``` {.sourceCode .python}
+```python
Import('*')
all_protocols.extend([
@@ -60,7 +57,7 @@
the SLICC compiler.
You can download the `SConsopts` file
-here \<../../\_static/scripts/part3/MSI\_protocol/SConsopts\>.
+[here](/_pages/static/scripts/part3/MSI_protocol/SConsopts).
Writing a state machine file
----------------------------
diff --git a/_pages/documentation/learning_gem5/part3/cache-transitions.md b/_pages/documentation/learning_gem5/part3/cache-transitions.md
index 9c12e7a..9cc1e9c 100644
--- a/_pages/documentation/learning_gem5/part3/cache-transitions.md
+++ b/_pages/documentation/learning_gem5/part3/cache-transitions.md
@@ -31,7 +31,7 @@
this transition. For instance, a simple transition in the MSI protocol
is transitioning out of Invalid on a Load.
-``` {.sourceCode .c++}
+```cpp
transition(I, Load, IS_D) {
allocateCacheBlock;
allocateTBE;
@@ -55,7 +55,7 @@
send a GetS request to the directory, and finally we pop the head entry
off of the mandatory queue since we have fully handled it.
-``` {.sourceCode .c++}
+```cpp
transition(IS_D, {Load, Store, Replacement, Inv}) {
stall;
}
@@ -77,7 +77,7 @@
Below is the rest of the transitions needed to implement the L1 cache
from the MSI protocol.
-``` {.sourceCode .c++}
+```cpp
transition(IS_D, {DataDirNoAcks, DataOwner}, S) {
writeDataToCache;
deallocateTBE;
@@ -201,4 +201,4 @@
```
You can download the complete `MSI-cache.sm` file
-here \<../../\_static/scripts/part3/MSI\_protocol/MSI-cache.sm\>.
+[here](/_pages/static/scripts/part3/MSI_protocol/MSI-cache.sm).
diff --git a/_pages/documentation/learning_gem5/part3/configuration.md b/_pages/documentation/learning_gem5/part3/configuration.md
index 1db4eda..275c08d 100644
--- a/_pages/documentation/learning_gem5/part3/configuration.md
+++ b/_pages/documentation/learning_gem5/part3/configuration.md
@@ -22,7 +22,7 @@
First, so we can test our *coherence* protocol, let's use two CPUs.
-``` {.sourceCode .python}
+```python
system.cpu = [TimingSimpleCPU(), TimingSimpleCPU()]
```
@@ -31,7 +31,7 @@
following lines *after the CPU interrupts have been created, but before
instantiating the system*.
-``` {.sourceCode .python}
+```python
system.caches = MyCacheSystem()
system.caches.setup(system, system.cpu, [system.mem_ctrl])
```
@@ -43,7 +43,7 @@
system and the memory controllers.
You can download the complete run script
-here \<../../\_static/scripts/part3/configs/simple\_ruby.py\>
+[here](/_pages/static/scripts/part3/configs/simple_ruby.py).
Cache system configuration
--------------------------
@@ -65,7 +65,7 @@
the individual machines. (Hopefully, in the future this requirement will
be removed.)
-``` {.sourceCode .python}
+```python
class L1Cache(L1Cache_Controller):
_version = 0
@@ -77,7 +77,7 @@
Next, we implement the constructor for the class.
-``` {.sourceCode .python}
+```python
def __init__(self, system, ruby_system, cpu):
super(L1Cache, self).__init__()
@@ -104,7 +104,7 @@
to send eviction notices to the CPU. Only if we are using the
out-of-order CPU and using x86 or ARM ISA should we forward evictions.
-``` {.sourceCode .python}
+```python
def getBlockSizeBits(self, system):
bits = int(math.log(system.cache_line_size, 2))
if 2**bits != system.cache_line_size.value:
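The power-of-two check in `getBlockSizeBits` can be exercised on its own. A minimal standalone sketch (the function name and the `ValueError` here are illustrative; the real method reads the line size from the gem5 system object):

```python
import math

def block_size_bits(line_size):
    # Number of block-offset bits for a cache line.
    # line_size must be a power of two, mirroring the check in getBlockSizeBits.
    bits = int(math.log(line_size, 2))
    if 2 ** bits != line_size:
        raise ValueError("cache line size %d is not a power of 2" % line_size)
    return bits

print(block_size_bits(64))  # -> 6
```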
@@ -136,7 +136,7 @@
implemented as gem5 ports*. In this protocol, we are assuming the
message buffers are ordered for simplicity.
-``` {.sourceCode .python}
+```python
def connectQueues(self, ruby_system):
self.mandatoryQueue = MessageBuffer()
@@ -165,7 +165,7 @@
`connectQueues` we need to instantiate the special message buffer
`responseFromMemory` like the `mandatoryQueue` in the L1 cache.
-``` {.sourceCode .python}
+```python
class DirController(Directory_Controller):
_version = 0
@@ -213,7 +213,7 @@
-SimObject hierarchy which will cause infinite recursion in when the
-system in instantiated with `m5.instantiate`.
+SimObject hierarchy which will cause infinite recursion when the
+system is instantiated with `m5.instantiate`.
-``` {.sourceCode .python}
+```python
class MyCacheSystem(RubySystem):
def __init__(self):
@@ -255,7 +255,7 @@
connected to the first sequencer, and so on. We also have to connect the
TLBs and interrupt ports (if we are using x86).
-``` {.sourceCode .python}
+```python
def setup(self, system, cpus, mem_ctrls):
self.network = MyNetwork(self)
@@ -315,7 +315,7 @@
of the "internal" links. Each router is connected to all other routers
to make the point-to-point network.
-``` {.sourceCode .python}
+```python
class MyNetwork(SimpleNetwork):
def __init__(self, ruby_system):
@@ -342,4 +342,4 @@
```
You can download the complete `msi_caches.py` file
-here \<../../\_static/scripts/part3/configs/msi\_caches.py\>.
+[here](/_pages/static/scripts/part3/configs/msi_caches.py).
diff --git a/_pages/documentation/learning_gem5/part3/directory.md b/_pages/documentation/learning_gem5/part3/directory.md
index 5a50117..4a3d70b 100644
--- a/_pages/documentation/learning_gem5/part3/directory.md
+++ b/_pages/documentation/learning_gem5/part3/directory.md
@@ -19,7 +19,7 @@
directory controllers and cache controllers. Let's dive straight in and
start modifying a new file `MSI-dir.sm`.
-``` {.sourceCode .c++}
+```cpp
machine(MachineType:Directory, "Directory protocol")
:
DirectoryMemory * directory;
@@ -74,7 +74,7 @@
After the parameters and message buffers, we need to declare all of the
states, events, and other local structures.
-``` {.sourceCode .c++}
+```cpp
state_declaration(State, desc="Directory states",
default="Directory_State_I") {
// Stable states.
@@ -151,7 +151,7 @@
Implementing it this way may save some host memory since this is lazily
populated.
-``` {.sourceCode .c++}
+```cpp
Tick clockEdge();
Entry getDirectoryEntry(Addr addr), return_by_pointer = "yes" {
@@ -220,7 +220,7 @@
directory does not have a TBE or cache entry. Thus, we do not pass
either into the `trigger` function.
-``` {.sourceCode .c++}
+```cpp
out_port(forward_out, RequestMsg, forwardToCache);
out_port(response_out, ResponseMsg, responseToCache);
@@ -294,7 +294,7 @@
since there are two different message buffers (virtual networks) that
data might arrive on.
-``` {.sourceCode .c++}
+```cpp
action(sendMemRead, "r", desc="Send a memory read request") {
peek(request_in, RequestMsg) {
queueMemoryRead(in_msg.Requestor, address, toMemLatency);
@@ -327,7 +327,7 @@
Next, we specify actions to update the sharers and owner of a particular
block.
-``` {.sourceCode .c++}
+```cpp
action(addReqToSharers, "aS", desc="Add requestor to sharer list") {
peek(request_in, RequestMsg) {
getDirectoryEntry(address).Sharers.add(in_msg.Requestor);
@@ -364,7 +364,7 @@
The next set of actions send invalidates and forward requests to caches
that the directory cannot deal with alone.
-``` {.sourceCode .c++}
+```cpp
action(sendInvToSharers, "i", desc="Send invalidate to all sharers") {
peek(request_in, RequestMsg) {
enqueue(forward_out, RequestMsg, 1) {
@@ -408,7 +408,7 @@
special buffer `responseFromMemory`. You can find the definition of
`MemoryMsg` in `src/mem/protocol/RubySlicc_MemControl.sm`.
-``` {.sourceCode .c++}
+```cpp
action(sendDataToReq, "d", desc="Send data from memory to requestor. May need to send sharer number, too") {
peek(memQueue_in, MemoryMsg) {
enqueue(response_out, ResponseMsg, 1) {
@@ -445,7 +445,7 @@
Then, we have the queue management and stall actions.
-``` {.sourceCode .c++}
+```cpp
action(popResponseQueue, "pR", desc="Pop the response queue") {
response_in.dequeue(clockEdge());
}
@@ -467,7 +467,7 @@
mostly come from Table 8.2 in Sorin et al., but there are some extra
transitions to deal with the unknown memory latency.
-``` {.sourceCode .c++}
+```cpp
transition({I, S}, GetS, S_m) {
sendMemRead;
addReqToSharers;
@@ -574,4 +574,4 @@
```
You can download the complete `MSI-dir.sm` file
-here \<../../\_static/scripts/part3/MSI\_protocol/MSI-dir.sm\>.
+[here](/_pages/static/scripts/part3/MSI_protocol/MSI-dir.sm).
diff --git a/_pages/documentation/learning_gem5/part3/running.md b/_pages/documentation/learning_gem5/part3/running.md
index 7664c3f..50f89f1 100644
--- a/_pages/documentation/learning_gem5/part3/running.md
+++ b/_pages/documentation/learning_gem5/part3/running.md
@@ -17,7 +17,7 @@
as of this writing there is a bug in gem5 preventing this code from
executing).
-``` {.sourceCode .c++}
+```cpp
#include <iostream>
#include <thread>
@@ -112,7 +112,7 @@
With the above code compiled as `threads`, we can run gem5!
-``` {.sourceCode .sh}
+```sh
build/MSI/gem5.opt configs/learning_gem5/part6/simple_ruby.py
```
diff --git a/_pages/documentation/learning_gem5/part3/simple-MI_example.md b/_pages/documentation/learning_gem5/part3/simple-MI_example.md
index 1f5ec7e..a00905b 100644
--- a/_pages/documentation/learning_gem5/part3/simple-MI_example.md
+++ b/_pages/documentation/learning_gem5/part3/simple-MI_example.md
@@ -26,9 +26,9 @@
the classes needed for `MI_example`. There are only a couple of changes
from `MSI`, mostly due to different naming schemes. You can download the
file
-here \<../\_static/scripts/part3/configs/ruby\_caches\_MI\_example.py\>
+[here](/_pages/static/scripts/part3/configs/ruby_caches_MI_example.py).
-``` {.sourceCode .python}
+```python
class MyCacheSystem(RubySystem):
def __init__(self):