resources: Add instructions to run PARSEC with the gem5 stdlib.

This change updates the README.md for PARSEC, providing instructions
on how to use the gem5 stdlib to simulate PARSEC. It also removes
the contents of the gem5-resources/src/parsec/configs and
gem5-resources/src/parsec/configs-mesi-two-level directories.

Signed-off-by: Kaustav Goswami <kggoswami@ucdavis.edu>
Change-Id: I71efd6045c9d205c7f43cd58ceaa69e88fa4b2c0
Reviewed-on: https://gem5-review.googlesource.com/c/public/gem5-resources/+/53444
Reviewed-by: Bobby Bruce <bbruce@ucdavis.edu>
Maintainer: Bobby Bruce <bbruce@ucdavis.edu>
Tested-by: Bobby Bruce <bbruce@ucdavis.edu>
diff --git a/src/gapbs/README.md b/src/gapbs/README.md
index 991dfd2..3994046 100644
--- a/src/gapbs/README.md
+++ b/src/gapbs/README.md
@@ -73,6 +73,11 @@
 --size <size_or_graph_name>
 ```
 
+The three arguments provided in the above command are described below:
+* **--benchmark** refers to one of the 5 benchmark programs provided in the GAP Benchmark Suite: `cc`, `bc`, `tc`, `pr` and `bfs`. More information on the workloads can be found at <http://gap.cs.berkeley.edu/benchmark.html>.
+* **--synthetic** indicates whether to use a synthetic or a real graph. It accepts a boolean value.
+* **--size** refers to either the size of a synthetic graph, from 1 to 16 nodes, or the name of a real graph. The real graph included in the pre-built disk-image is `USA-road-d.NY.gr`. Note that the two options cannot be mixed: a synthetic size requires `--synthetic True`, while a real-graph name such as `USA-road-d.NY.gr` requires `--synthetic False`.
+
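+For example, `--benchmark bfs --synthetic True --size 1` runs the `bfs` kernel on a synthetic graph of size 1, while `--benchmark pr --synthetic False --size USA-road-d.NY.gr` runs `pr` on the real graph bundled in the disk-image.
+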
 ## Working Status
 
 Working status of these tests for gem5-20 can be found [here](https://www.gem5.org/documentation/benchmark_status/gem5-20#gapbs-tests).
diff --git a/src/parsec/README.md b/src/parsec/README.md
index 0665fd2..1bbbd71 100644
--- a/src/parsec/README.md
+++ b/src/parsec/README.md
@@ -11,7 +11,7 @@
 license: BSD-3-Clause
 ---
 
-This document includes instructions on how to create an Ubuntu 18.04 disk-image with PARSEC benchmark installed. The disk-image will be compatible with the gem5 simulator.
+This document includes instructions on how to create an Ubuntu 18.04 disk-image with the PARSEC benchmark installed. The disk-image will be compatible with the gem5 simulator. It also demonstrates how to simulate these workloads using an example gem5 script with a pre-configured system. The script uses a pre-built disk-image.
 
 This is how the `src/parsec-tests/` directory will look like if all the artifacts are created correctly.
 
@@ -30,19 +30,8 @@
   |             |___ runscript.sh              # script to run each workload
   |             |___ parsec-benchmark          # the parsec benchmark suite
   |
-  |___ configs
-  |      |___ system                           # system config directory
-  |      |___ run_parsec.py                    # gem5 run script
-  |
-  |___ configs-mesi-two-level
-  |      |___ system                           # system config directory
-  |      |___ run_parsec_mesi_two_level.py     # gem5 run script
-  |
   |___ README.md
 ```
-
-Notice that there are two sets of system configuration directories and run scripts. For further detail on the config files look [here](#gem5-run-scripts).
-
 ## Building the disk image
 
 In order to build the disk-image for PARSEC tests with gem5, build the m5 utility in `src/parsec-tests/` using the following:
@@ -75,29 +64,41 @@
 
 You can find the disk-image in `parsec/parsec-image/parsec`.
 
-## gem5 run scripts
+## Simulating PARSEC using an example script
 
-There are two sets of run scripts and system configuration files in the directory. The scripts found in `configs` use the classic memory system while the scripts in `configs-mesi-two-level` use the ruby memory system with MESI_Two_Level cache coherency protocol. The parameters used in the both sets of experiments are explained below:
+An example script with a pre-configured system is available at the following path within the gem5 repository:
 
-* **kernel**: The path to the linux kernel. We have verified capatibility with kernel version 4.19.83 which you can download at <http://dist.gem5.org/dist/v21-1/kernels/x86/static/vmlinux-4.19.83>. More information on building kernels for gem5 can be around in `src/linux-kernel`.
-* **disk**: The path to the PARSEC disk-image. This can be downloaded, gzipped, from <http://dist.gem5.org/dist/v21-1/images/x86/ubuntu-18-04/parsec.img.gz>.
-* **cpu**: The type of cpu to use. There are two supported options: `kvm` (KvmCPU) and `timing` (TimingSimpleCPU).
-* **benchmark**: The PARSEC workload to run. They include `blackscholes`, `bodytrack`, `canneal`, `dedup`, `facesim`, `ferret`, `fluidanimate`, `freqmine`, `raytrace`, `streamcluster`, `swaptions`, `vips`, `x264`. For more information on the workloads can be found at <https://parsec.cs.princeton.edu/>.
-* **size**: The size of the chosen workload. Valid sizes are `simsmall`, `simmedium`, and `simlarge`.
-* **num_cpus**: The number of cpus to simulate. When using `configs`, the only valid option is `1`. When using `configs-mesi-two-level` the number of supported cpus is show in the table below:
+```
+gem5/configs/example/gem5_library/x86-parsec-benchmarks.py
+```
 
+The example script specifies a system with the following parameters:
 
-| CPU Model       | Core Counts |
-|-----------------|-------------|
-| KvmCPU          | 1,2,8       |
-| TimingSimpleCPU | 1,2         |
+* A `SimpleSwitchableProcessor` (`KVM` cores to boot the system and `TIMING` cores to run the ROI) with 2 CPU cores, each clocked at 3 GHz.
+* A two-level `MESI_Two_Level` cache hierarchy with 32 kB L1I and L1D caches and a 256 kB L2 cache. The L1 caches are 8-way set associative, the L2 cache is 16-way set associative, and there are 2 L2 cache banks.
+* 3 GB of `SingleChannelDDR4_2400` memory.
+* The `x86-linux-kernel-4.19.83` kernel and the `x86-parsec` disk image, i.e. the image created by following the instructions in this `README.md`.
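+
+The example script assembles this system from gem5 standard library (stdlib) components. The snippet below is a condensed, illustrative sketch of that configuration, not a copy of the script; module paths and constructor signatures may differ between gem5 versions, so treat `x86-parsec-benchmarks.py` itself as authoritative.
+
+```python
+# Illustrative sketch of the pre-configured system (not the full script).
+from gem5.components.boards.x86_board import X86Board
+from gem5.components.cachehierarchies.ruby.mesi_two_level_cache_hierarchy import (
+    MESITwoLevelCacheHierarchy,
+)
+from gem5.components.memory import SingleChannelDDR4_2400
+from gem5.components.processors.cpu_types import CPUTypes
+from gem5.components.processors.simple_switchable_processor import (
+    SimpleSwitchableProcessor,
+)
+
+# KVM cores boot the OS quickly; TIMING cores are switched in for the ROI.
+processor = SimpleSwitchableProcessor(
+    starting_core_type=CPUTypes.KVM,
+    switch_core_type=CPUTypes.TIMING,
+    num_cores=2,
+)
+
+# Two-level MESI_Two_Level hierarchy: 32 kB L1I/L1D (8-way), 256 kB L2 (16-way).
+cache_hierarchy = MESITwoLevelCacheHierarchy(
+    l1i_size="32kB",
+    l1i_assoc=8,
+    l1d_size="32kB",
+    l1d_assoc=8,
+    l2_size="256kB",
+    l2_assoc=16,
+    num_l2_banks=2,
+)
+
+memory = SingleChannelDDR4_2400(size="3GB")
+
+board = X86Board(
+    clk_freq="3GHz",
+    processor=processor,
+    memory=memory,
+    cache_hierarchy=cache_hierarchy,
+)
+```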
 
-Below are the examples of running an experiment with the two configurations.
+The example script must be run with an `X86` gem5 binary that includes the `MESI_Two_Level` Ruby protocol (the default for the `X86` build). To build it:
 
 ```sh
-<gem5 X86 binary> configs/run_parsec.py <kernel> <disk> <cpu> <benchmark> <size> <num_cpus>
-<gem5 X86_MESI_Two_Level binary> configs-mesi-two-level/run_parsec.py <kernel> <disk> <cpu> <benchmark> <size> <num_cpus>
+git clone https://gem5.googlesource.com/public/gem5
+cd gem5
+scons build/X86/gem5.opt -j<proc>
 ```
+Once compiled, you may use the example script to run a PARSEC benchmark program with the following command:
+
+```sh
+# In the gem5 directory
+build/X86/gem5.opt \
+configs/example/gem5_library/x86-parsec-benchmarks.py \
+--benchmark <benchmark_program> \
+--size <size>
+```
+
+The two arguments provided in the above command are described below:
+* **--benchmark** refers to one of the 13 benchmark programs provided in the PARSEC benchmark suite: `blackscholes`, `bodytrack`, `canneal`, `dedup`, `facesim`, `ferret`, `fluidanimate`, `freqmine`, `raytrace`, `streamcluster`, `swaptions`, `vips` and `x264`. More information on the workloads can be found at <https://parsec.cs.princeton.edu/>.
+* **--size** refers to the input size of the workload to simulate. The three valid choices are `simsmall`, `simmedium` and `simlarge`.
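+
+Internally, the script translates these two arguments into a command that the simulated system runs after booting: the same `parsecmgmt` invocation used by the run scripts removed in this change. The snippet below is a simplified, illustrative sketch of that step; `board` refers to the `X86Board` from the sketch above, and the literal `benchmark`/`size` values stand in for the parsed command-line arguments.
+
+```python
+# Illustrative sketch -- the real logic lives in x86-parsec-benchmarks.py.
+from gem5.resources.resource import Resource
+
+benchmark = "blackscholes"  # stands in for the parsed --benchmark argument
+size = "simsmall"           # stands in for the parsed --size argument
+
+# Command the guest executes after boot: run the workload, then exit gem5.
+command = (
+    "cd /home/gem5/parsec-benchmark;"
+    + "source env.sh;"
+    + f"parsecmgmt -a run -p {benchmark} -c gcc-hooks -i {size} -n 2;"
+    + "sleep 5;"
+    + "m5 exit;"
+)
+
+board.set_kernel_disk_workload(
+    kernel=Resource("x86-linux-kernel-4.19.83"),
+    disk_image=Resource("x86-parsec"),
+    readfile_contents=command,
+)
+```
+
+The example script follows the same flow as the removed run scripts: it boots with the `KVM` cores, switches to the `TIMING` cores when the guest signals the start of the ROI, and collects statistics over the ROI.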
 
 ## Working Status
 
diff --git a/src/parsec/configs-mesi-two-level/run_parsec_mesi_two_level.py b/src/parsec/configs-mesi-two-level/run_parsec_mesi_two_level.py
deleted file mode 100755
index 311d8ca..0000000
--- a/src/parsec/configs-mesi-two-level/run_parsec_mesi_two_level.py
+++ /dev/null
@@ -1,225 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2019 The Regents of the University of California.
-# All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-
-""" Script to run PARSEC benchmarks with gem5. The memory model used
-    in the experiments is Ruby and uses MESEI_Two_Level protocolx.
-    The script expects kernel, diskimage, cpu (kvm or timing),
-    benchmark, benchmark size, and number of cpu cores as arguments.
-    This script is best used if your disk-image has workloads tha have
-    ROI annotations compliant with m5 utility. You can use the script in
-    ../disk-images/parsec/ with the parsec-benchmark repo at
-    https://github.com/darchr/parsec-benchmark.git to create a working
-    disk-image for this script.
-
-"""
-import argparse
-import time
-import m5
-import m5.ticks
-from m5.objects import *
-
-from system import *
-
-supported_cpu_types = ["kvm", "timing"]
-benchmark_choices = ["blackscholes", "bodytrack", "canneal", "dedup",
-                     "facesim", "ferret", "fluidanimate", "freqmine",
-                     "raytrace", "streamcluster", "swaptions", "vips", "x264"]
-size_choices=["simsmall", "simmedium", "simlarge"]
-
-
-def parse_options():
-
-    parser = argparse.ArgumentParser(description='For use with gem5. This '
-                'runs a NAS Parallel Benchmark application. This only works '
-                'with x86 ISA.')
-
-    parser.add_argument("kernel", type=str,
-                        help="Path to the kernel binary to boot")
-    parser.add_argument("disk", type=str, help="Path to the PARSEC disk image")
-    parser.add_argument("cpu", type=str, choices=supported_cpu_types,
-                        help="The type of CPU to use in the system")
-    parser.add_argument("benchmark", type=str, choices=benchmark_choices,
-                        help="The PARSEC benchmark application to run")
-    parser.add_argument("size", type=str, choices=size_choices,
-                        help="The input size to the PARSEC benchmark "
-                             "application")
-    parser.add_argument("num_cpus", type=int, choices=[1,2,8],
-                        help="The number of CPU cores")
-
-    return parser.parse_args()
-
-def writeBenchScript(dir, bench, size, num_cpus):
-    """
-    This method creates a script in dir which will be eventually
-    passed to the simulated system (to run a specific benchmark
-    at bootup).
-    """
-    file_name = '{}/run_{}'.format(dir, bench)
-    bench_file = open(file_name, 'w+')
-    bench_file.write('cd /home/gem5/parsec-benchmark\n')
-    bench_file.write('source env.sh\n')
-    bench_file.write('parsecmgmt -a run -p \
-            {} -c gcc-hooks -i {} -n {}\n'.format(bench, size, num_cpus))
-
-    # sleeping for sometime makes sure
-    # that the benchmark's output has been
-    # printed to the console
-    bench_file.write('sleep 5 \n')
-    bench_file.write('m5 exit \n')
-    bench_file.close()
-    return file_name
-
-if __name__ == "__m5_main__":
-
-    args = parse_options()
-
-    # create the system we are going to simulate
-    system = MyRubySystem(args.kernel, args.disk, args.num_cpus, args)
-
-    # Exit from guest on workbegin/workend
-    system.exit_on_work_items = True
-
-    # Create and pass a script to the simulated system to run the reuired
-    # benchmark
-    system.readfile = writeBenchScript(m5.options.outdir, args.benchmark,
-                                      args.size, args.num_cpus)
-
-    # set up the root SimObject and start the simulation
-    root = Root(full_system = True, system = system)
-
-    if system.getHostParallel():
-        # Required for running kvm on multiple host cores.
-        # Uses gem5's parallel event queue feature
-        # Note: The simulator is quite picky about this number!
-        root.sim_quantum = int(1e9) # 1 ms
-
-    #needed for long running jobs
-    m5.disableAllListeners()
-
-    # instantiate all of the objects we've created above
-    m5.instantiate()
-
-    globalStart = time.time()
-
-    print("Running the simulation")
-    print("Using cpu: {}".format(args.cpu))
-
-    start_tick = m5.curTick()
-    end_tick = m5.curTick()
-    start_insts = system.totalInsts()
-    end_insts = system.totalInsts()
-    m5.stats.reset()
-
-    exit_event = m5.simulate()
-
-    if exit_event.getCause() == "workbegin":
-        print("Done booting Linux")
-        # Reached the start of ROI
-        # start of ROI is marked by an
-        # m5_work_begin() call
-        print("Resetting stats at the start of ROI!")
-        m5.stats.reset()
-        start_tick = m5.curTick()
-        start_insts = system.totalInsts()
-        # switching to timing cpu if argument cpu == timing
-        if args.cpu == 'timing':
-            system.switchCpus(system.cpu, system.timingCpu)
-    else:
-        print("Unexpected termination of simulation!")
-        print()
-        m5.stats.dump()
-        end_tick = m5.curTick()
-        end_insts = system.totalInsts()
-        m5.stats.reset()
-        print("Performance statistics:")
-
-        print("Simulated time: %.2fs" % ((end_tick-start_tick)/1e12))
-        print("Instructions executed: %d" % ((end_insts-start_insts)))
-        print("Ran a total of", m5.curTick()/1e12, "simulated seconds")
-        print("Total wallclock time: %.2fs, %.2f min" % \
-                    (time.time()-globalStart, (time.time()-globalStart)/60))
-        exit()
-
-    # Simulate the ROI
-    exit_event = m5.simulate()
-
-    # Reached the end of ROI
-    # Finish executing the benchmark with kvm cpu
-    if exit_event.getCause() == "workend":
-        # Reached the end of ROI
-        # end of ROI is marked by an
-        # m5_work_end() call
-        print("Dump stats at the end of the ROI!")
-        m5.stats.dump()
-        end_tick = m5.curTick()
-        end_insts = system.totalInsts()
-        m5.stats.reset()
-        # switching to timing cpu if argument cpu == timing
-        if args.cpu == 'timing':
-            # This line is commented due to an unimplemented
-            # flush request in MESI_Two_Level that results in
-            # the crashing of simulation. There will be a patch
-            # fixing this issue but the line is commented out
-            # for now.
-            # system.switchCpus(system.timingCpu, system.cpu)
-            print("Performance statistics:")
-
-            print("Simulated time: %.2fs" % ((end_tick-start_tick)/1e12))
-            print("Instructions executed: %d" % ((end_insts-start_insts)))
-            print("Ran a total of", m5.curTick()/1e12, "simulated seconds")
-            print("Total wallclock time: %.2fs, %.2f min" % \
-                    (time.time()-globalStart, (time.time()-globalStart)/60))
-            exit()
-    else:
-        print("Unexpected termination of simulation!")
-        print()
-        m5.stats.dump()
-        end_tick = m5.curTick()
-        end_insts = system.totalInsts()
-        m5.stats.reset()
-        print("Performance statistics:")
-
-        print("Simulated time: %.2fs" % ((end_tick-start_tick)/1e12))
-        print("Instructions executed: %d" % ((end_insts-start_insts)))
-        print("Ran a total of", m5.curTick()/1e12, "simulated seconds")
-        print("Total wallclock time: %.2fs, %.2f min" % \
-                    (time.time()-globalStart, (time.time()-globalStart)/60))
-        exit()
-
-    # Simulate the remaning part of the benchmark
-    exit_event = m5.simulate()
-
-    print("Done with the simulation")
-    print()
-    print("Performance statistics:")
-
-    print("Simulated time in ROI: %.2fs" % ((end_tick-start_tick)/1e12))
-    print("Instructions executed in ROI: %d" % ((end_insts-start_insts)))
-    print("Ran a total of", m5.curTick()/1e12, "simulated seconds")
-    print("Total wallclock time: %.2fs, %.2f min" % \
-                (time.time()-globalStart, (time.time()-globalStart)/60))
diff --git a/src/parsec/configs-mesi-two-level/system/MESI_Two_Level.py b/src/parsec/configs-mesi-two-level/system/MESI_Two_Level.py
deleted file mode 100644
index 4dfdf39..0000000
--- a/src/parsec/configs-mesi-two-level/system/MESI_Two_Level.py
+++ /dev/null
@@ -1,339 +0,0 @@
-#Copyright (c) 2020 The Regents of the University of California.
-#All Rights Reserved
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-
-""" This file creates a set of Ruby caches for the MESI TWO Level protocol
-This protocol models two level cache hierarchy. The L1 cache is split into
-instruction and data cache.
-This system support the memory size of up to 3GB.
-"""
-
-
-
-
-import math
-
-from m5.defines import buildEnv
-from m5.util import fatal, panic
-
-from m5.objects import *
-
-class MESITwoLevelCache(RubySystem):
-
-    def __init__(self):
-        if buildEnv['PROTOCOL'] != 'MESI_Two_Level':
-            fatal("This system assumes MESI_Two_Level!")
-
-        super(MESITwoLevelCache, self).__init__()
-
-        self._numL2Caches = 8
-
-    def setup(self, system, cpus, mem_ctrls, dma_ports, iobus):
-        """Set up the Ruby cache subsystem. Note: This can't be done in the
-           constructor because many of these items require a pointer to the
-           ruby system (self). This causes infinite recursion in initialize()
-           if we do this in the __init__.
-        """
-        # Ruby's global network.
-        self.network = MyNetwork(self)
-
-        # MESI_Two_Level example uses 5 virtual networks
-        self.number_of_virtual_networks = 5
-        self.network.number_of_virtual_networks = 5
-
-        # There is a single global list of all of the controllers to make it
-        # easier to connect everything to the global network. This can be
-        # customized depending on the topology/network requirements.
-        # L1 caches are private to a core, hence there are one L1 cache per CPU core.
-        # The number of L2 caches are dependent to the architecture.
-        self.controllers = \
-            [L1Cache(system, self, cpu, self._numL2Caches) for cpu in cpus] + \
-            [L2Cache(system, self, self._numL2Caches) for num in range(self._numL2Caches)] + \
-            [DirController(self, system.mem_ranges, mem_ctrls)] + \
-            [DMAController(self) for i in range(len(dma_ports))]
-
-        # Create one sequencer per CPU and dma controller.
-        # Sequencers for other controllers can be here here.
-        self.sequencers = [RubySequencer(version = i,
-                                # Grab dcache from ctrl
-                                dcache = self.controllers[i].L1Dcache,
-                                clk_domain = self.controllers[i].clk_domain,
-                                pio_request_port = iobus.cpu_side_ports,
-                                mem_request_port = iobus.cpu_side_ports,
-                                pio_response_port = iobus.mem_side_ports
-                                ) for i in range(len(cpus))] + \
-                          [DMASequencer(version = i,
-                                        in_ports = port)
-                            for i,port in enumerate(dma_ports)
-                          ]
-
-        for i,c in enumerate(self.controllers[:len(cpus)]):
-            c.sequencer = self.sequencers[i]
-
-        #Connecting the DMA sequencer to DMA controller
-        for i,d in enumerate(self.controllers[-len(dma_ports):]):
-            i += len(cpus)
-            d.dma_sequencer = self.sequencers[i]
-
-        self.num_of_sequencers = len(self.sequencers)
-
-        # Create the network and connect the controllers.
-        # NOTE: This is quite different if using Garnet!
-        self.network.connectControllers(self.controllers)
-        self.network.setup_buffers()
-
-        # Set up a proxy port for the system_port. Used for load binaries and
-        # other functional-only things.
-        self.sys_port_proxy = RubyPortProxy()
-        system.system_port = self.sys_port_proxy.in_ports
-        self.sys_port_proxy.pio_request_port = iobus.cpu_side_ports
-
-        # Connect the cpu's cache, interrupt, and TLB ports to Ruby
-        for i,cpu in enumerate(cpus):
-            cpu.icache_port = self.sequencers[i].in_ports
-            cpu.dcache_port = self.sequencers[i].in_ports
-            cpu.createInterruptController()
-            isa = buildEnv['TARGET_ISA']
-            if isa == 'x86':
-                cpu.interrupts[0].pio = self.sequencers[i].interrupt_out_port
-                cpu.interrupts[0].int_requestor = self.sequencers[i].in_ports
-                cpu.interrupts[0].int_responder = \
-                    self.sequencers[i].interrupt_out_port
-            if isa == 'x86' or isa == 'arm':
-                cpu.mmu.connectWalkerPorts(
-                    self.sequencers[i].in_ports, self.sequencers[i].in_ports)
-
-
-class L1Cache(L1Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, cpu, num_l2Caches):
-        """Creating L1 cache controller. Consist of both instruction
-           and data cache. The size of data cache is 512KB and
-           8-way set associative. The instruction cache is 32KB,
-           2-way set associative.
-        """
-        super(L1Cache, self).__init__()
-
-        self.version = self.versionCount()
-        block_size_bits = int(math.log(system.cache_line_size, 2))
-        l1i_size = '32kB'
-        l1i_assoc = '2'
-        l1d_size = '512kB'
-        l1d_assoc = '8'
-        # This is the cache memory object that stores the cache data and tags
-        self.L1Icache = RubyCache(size = l1i_size,
-                                assoc = l1i_assoc,
-                                start_index_bit = block_size_bits ,
-                                is_icache = True)
-        self.L1Dcache = RubyCache(size = l1d_size,
-                            assoc = l1d_assoc,
-                            start_index_bit = block_size_bits,
-                            is_icache = False)
-        self.l2_select_num_bits = int(math.log(num_l2Caches , 2))
-        self.clk_domain = cpu.clk_domain
-        self.prefetcher = RubyPrefetcher()
-        self.send_evictions = self.sendEvicts(cpu)
-        self.transitions_per_cycle = 4
-        self.enable_prefetch = False
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getBlockSizeBits(self, system):
-        bits = int(math.log(system.cache_line_size, 2))
-        if 2**bits != system.cache_line_size.value:
-            panic("Cache line size not a power of 2!")
-        return bits
-
-    def sendEvicts(self, cpu):
-        """True if the CPU model or ISA requires sending evictions from caches
-           to the CPU. Two scenarios warrant forwarding evictions to the CPU:
-           1. The O3 model must keep the LSQ coherent with the caches
-           2. The x86 mwait instruction is built on top of coherence
-           3. The local exclusive monitor in ARM systems
-        """
-        if type(cpu) is DerivO3CPU or \
-           buildEnv['TARGET_ISA'] in ('x86', 'arm'):
-            return True
-        return False
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.mandatoryQueue = MessageBuffer()
-        self.requestFromL1Cache = MessageBuffer()
-        self.requestFromL1Cache.out_port = ruby_system.network.in_port
-        self.responseFromL1Cache = MessageBuffer()
-        self.responseFromL1Cache.out_port = ruby_system.network.in_port
-        self.unblockFromL1Cache = MessageBuffer()
-        self.unblockFromL1Cache.out_port = ruby_system.network.in_port
-
-        self.optionalQueue = MessageBuffer()
-
-        self.requestToL1Cache = MessageBuffer()
-        self.requestToL1Cache.in_port = ruby_system.network.out_port
-        self.responseToL1Cache = MessageBuffer()
-        self.responseToL1Cache.in_port = ruby_system.network.out_port
-
-class L2Cache(L2Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, num_l2Caches):
-
-        super(L2Cache, self).__init__()
-
-        self.version = self.versionCount()
-        # This is the cache memory object that stores the cache data and tags
-        self.L2cache = RubyCache(size = '1 MB',
-                                assoc = 16,
-                                start_index_bit = self.getBlockSizeBits(system, num_l2Caches))
-
-        self.transitions_per_cycle = '4'
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getBlockSizeBits(self, system, num_l2caches):
-        l2_bits = int(math.log(num_l2caches, 2))
-        bits = int(math.log(system.cache_line_size, 2)) + l2_bits
-        return bits
-
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.DirRequestFromL2Cache = MessageBuffer()
-        self.DirRequestFromL2Cache.out_port = ruby_system.network.in_port
-        self.L1RequestFromL2Cache = MessageBuffer()
-        self.L1RequestFromL2Cache.out_port = ruby_system.network.in_port
-        self.responseFromL2Cache = MessageBuffer()
-        self.responseFromL2Cache.out_port = ruby_system.network.in_port
-        self.unblockToL2Cache = MessageBuffer()
-        self.unblockToL2Cache.in_port = ruby_system.network.out_port
-        self.L1RequestToL2Cache = MessageBuffer()
-        self.L1RequestToL2Cache.in_port = ruby_system.network.out_port
-        self.responseToL2Cache = MessageBuffer()
-        self.responseToL2Cache.in_port = ruby_system.network.out_port
-
-
-
-class DirController(Directory_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system, ranges, mem_ctrls):
-        """ranges are the memory ranges assigned to this controller.
-        """
-        if len(mem_ctrls) > 1:
-            panic("This cache system can only be connected to one mem ctrl")
-        super(DirController, self).__init__()
-        self.version = self.versionCount()
-        self.addr_ranges = ranges
-        self.ruby_system = ruby_system
-        self.directory = RubyDirectoryMemory()
-        # Connect this directory to the memory side.
-        self.memory = mem_ctrls[0].port
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.requestToDir = MessageBuffer()
-        self.requestToDir.in_port = ruby_system.network.out_port
-        self.responseToDir = MessageBuffer()
-        self.responseToDir.in_port = ruby_system.network.out_port
-        self.responseFromDir = MessageBuffer()
-        self.responseFromDir.out_port = ruby_system.network.in_port
-        self.requestToMemory = MessageBuffer()
-        self.responseFromMemory = MessageBuffer()
-
-class DMAController(DMA_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system):
-        super(DMAController, self).__init__()
-        self.version = self.versionCount()
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.mandatoryQueue = MessageBuffer()
-        self.responseFromDir = MessageBuffer(ordered = True)
-        self.responseFromDir.in_port = ruby_system.network.out_port
-        self.requestToDir = MessageBuffer()
-        self.requestToDir.out_port = ruby_system.network.in_port
-
-
-class MyNetwork(SimpleNetwork):
-    """A simple point-to-point network. This doesn't not use garnet.
-    """
-
-    def __init__(self, ruby_system):
-        super(MyNetwork, self).__init__()
-        self.netifs = []
-        self.ruby_system = ruby_system
-
-    def connectControllers(self, controllers):
-        """Connect all of the controllers to routers and connec the routers
-           together in a point-to-point network.
-        """
-        # Create one router/switch per controller in the system
-        self.routers = [Switch(router_id = i) for i in range(len(controllers))]
-
-        # Make a link from each controller to the router. The link goes
-        # externally to the network.
-        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,
-                                        int_node=self.routers[i])
-                          for i, c in enumerate(controllers)]
-
-        # Make an "internal" link (internal to the network) between every pair
-        # of routers.
-        link_count = 0
-        self.int_links = []
-        for ri in self.routers:
-            for rj in self.routers:
-                if ri == rj: continue # Don't connect a router to itself!
-                link_count += 1
-                self.int_links.append(SimpleIntLink(link_id = link_count,
-                                                    src_node = ri,
-                                                    dst_node = rj))
diff --git a/src/parsec/configs-mesi-two-level/system/__init__.py b/src/parsec/configs-mesi-two-level/system/__init__.py
deleted file mode 100755
index fcc7c95..0000000
--- a/src/parsec/configs-mesi-two-level/system/__init__.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2016 Jason Lowe-Power
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-from .ruby_system import MyRubySystem
\ No newline at end of file
diff --git a/src/parsec/configs-mesi-two-level/system/fs_tools.py b/src/parsec/configs-mesi-two-level/system/fs_tools.py
deleted file mode 100755
index 9c02722..0000000
--- a/src/parsec/configs-mesi-two-level/system/fs_tools.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2016 Jason Lowe-Power
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-from m5.objects import IdeDisk, CowDiskImage, RawDiskImage
-import errno
-import os
-import sys
-class CowDisk(IdeDisk):
-
-    def __init__(self, filename):
-        super(CowDisk, self).__init__()
-        self.driveID = 'device0'
-        self.image = CowDiskImage(child=RawDiskImage(read_only=True),
-                                  read_only=False)
-        self.image.child.image_file = filename
diff --git a/src/parsec/configs-mesi-two-level/system/ruby_system.py b/src/parsec/configs-mesi-two-level/system/ruby_system.py
deleted file mode 100755
index e4e9d1f..0000000
--- a/src/parsec/configs-mesi-two-level/system/ruby_system.py
+++ /dev/null
@@ -1,242 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2016 Jason Lowe-Power
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-import m5
-import math
-from m5.objects import *
-from .fs_tools import *
-
-class MyRubySystem(System):
-
-    def __init__(self, kernel, disk, num_cpus, opts):
-        super(MyRubySystem, self).__init__()
-        self._opts = opts
-
-        self._host_parallel = True
-
-        # Set up the clock domain and the voltage domain
-        self.clk_domain = SrcClockDomain()
-        self.clk_domain.clock = '3GHz'
-        self.clk_domain.voltage_domain = VoltageDomain()
-
-        self.mem_ranges = [AddrRange(Addr('3GB')), # All data
-                           AddrRange(0xC0000000, size=0x100000), # For I/0
-                           ]
-
-        self.initFS(num_cpus)
-
-        # Replace these paths with the path to your disk images.
-        # The first disk is the root disk. The second could be used for swap
-        # or anything else.
-        self.setDiskImages(disk, disk)
-
-        # Change this path to point to the kernel you want to use
-        self.workload.object_file = kernel
-        # Options specified on the kernel command line
-        boot_options = ['earlyprintk=ttyS0', 'console=ttyS0', 'lpj=7999923',
-                         'root=/dev/hda1']
-
-        self.workload.command_line = ' '.join(boot_options)
-
-        # Create the CPUs for our system.
-        self.createCPU(num_cpus)
-
-        self.createMemoryControllersDDR3()
-
-        # Create the cache hierarchy for the system.
-
-        from .MESI_Two_Level import MESITwoLevelCache
-        self.caches = MESITwoLevelCache()
-
-        self.caches.setup(self, self.cpu, self.mem_cntrls,
-                          [self.pc.south_bridge.ide.dma,
-                           self.iobus.mem_side_ports],
-                          self.iobus)
-
-        if self._host_parallel:
-            # To get the KVM CPUs to run on different host CPUs
-            # Specify a different event queue for each CPU
-            for i,cpu in enumerate(self.cpu):
-                for obj in cpu.descendants():
-                    obj.eventq_index = 0
-
-                # the number of eventqs are set based
-                # on experiments with few benchmarks
-
-                cpu.eventq_index = i + 1
-
-    def getHostParallel(self):
-        return self._host_parallel
-
-    def totalInsts(self):
-        return sum([cpu.totalInsts() for cpu in self.cpu])
-
-    def createCPUThreads(self, cpu):
-        for c in cpu:
-            c.createThreads()
-
-    def createCPU(self, num_cpus):
-
-        # Note KVM needs a VM and atomic_noncaching
-        self.cpu = [X86KvmCPU(cpu_id = i)
-                    for i in range(num_cpus)]
-        self.kvm_vm = KvmVM()
-        self.mem_mode = 'atomic_noncaching'
-        self.createCPUThreads(self.cpu)
-
-        self.atomicCpu = [AtomicSimpleCPU(cpu_id = i,
-                                            switched_out = True)
-                            for i in range(num_cpus)]
-        self.createCPUThreads(self.atomicCpu)
-
-        self.timingCpu = [TimingSimpleCPU(cpu_id = i,
-                                     switched_out = True)
-				   for i in range(num_cpus)]
-        self.createCPUThreads(self.timingCpu)
-
-    def switchCpus(self, old, new):
-        assert(new[0].switchedOut())
-        m5.switchCpus(self, list(zip(old, new)))
-
-    def setDiskImages(self, img_path_1, img_path_2):
-        disk0 = CowDisk(img_path_1)
-        disk2 = CowDisk(img_path_2)
-        self.pc.south_bridge.ide.disks = [disk0, disk2]
-
-    def createMemoryControllersDDR3(self):
-        self._createMemoryControllers(1, DDR3_1600_8x8)
-
-    def _createMemoryControllers(self, num, cls):
-        intlv_bits = int(math.log(num, 2))
-        mem_ctrls = []
-        for i in range(num):
-            interface = cls()
-            interface.range = AddrRange(self.mem_ranges[0].start,
-                            size = self.mem_ranges[0].size(),
-                            intlvHighBit = 7,
-                            xorHighBit = 20,
-                            intlvBits = intlv_bits,
-                            intlvMatch = i)
-            ctrl = MemCtrl()
-            ctrl.dram = interface
-            mem_ctrls.append(ctrl)
-        self.mem_cntrls = mem_ctrls
-
-    def initFS(self, cpus):
-        self.pc = Pc()
-
-        self.workload = X86FsLinux()
-
-        # North Bridge
-        self.iobus = IOXBar()
-
-        # connect the io bus
-        # Note: pass in a reference to where Ruby will connect to in the future
-        # so the port isn't connected twice.
-        self.pc.attachIO(self.iobus, [self.pc.south_bridge.ide.dma])
-
-        ###############################################
-
-        # Add in a Bios information structure.
-        self.workload.smbios_table.structures = [X86SMBiosBiosInformation()]
-
-        # Set up the Intel MP table
-        base_entries = []
-        ext_entries = []
-        for i in range(cpus):
-            bp = X86IntelMPProcessor(
-                    local_apic_id = i,
-                    local_apic_version = 0x14,
-                    enable = True,
-                    bootstrap = (i ==0))
-            base_entries.append(bp)
-        io_apic = X86IntelMPIOAPIC(
-                id = cpus,
-                version = 0x11,
-                enable = True,
-                address = 0xfec00000)
-        self.pc.south_bridge.io_apic.apic_id = io_apic.id
-        base_entries.append(io_apic)
-        pci_bus = X86IntelMPBus(bus_id = 0, bus_type='PCI   ')
-        base_entries.append(pci_bus)
-        isa_bus = X86IntelMPBus(bus_id = 1, bus_type='ISA   ')
-        base_entries.append(isa_bus)
-        connect_busses = X86IntelMPBusHierarchy(bus_id=1,
-                subtractive_decode=True, parent_bus=0)
-        ext_entries.append(connect_busses)
-        pci_dev4_inta = X86IntelMPIOIntAssignment(
-                interrupt_type = 'INT',
-                polarity = 'ConformPolarity',
-                trigger = 'ConformTrigger',
-                source_bus_id = 0,
-                source_bus_irq = 0 + (4 << 2),
-                dest_io_apic_id = io_apic.id,
-                dest_io_apic_intin = 16)
-        base_entries.append(pci_dev4_inta)
-        def assignISAInt(irq, apicPin):
-            assign_8259_to_apic = X86IntelMPIOIntAssignment(
-                    interrupt_type = 'ExtInt',
-                    polarity = 'ConformPolarity',
-                    trigger = 'ConformTrigger',
-                    source_bus_id = 1,
-                    source_bus_irq = irq,
-                    dest_io_apic_id = io_apic.id,
-                    dest_io_apic_intin = 0)
-            base_entries.append(assign_8259_to_apic)
-            assign_to_apic = X86IntelMPIOIntAssignment(
-                    interrupt_type = 'INT',
-                    polarity = 'ConformPolarity',
-                    trigger = 'ConformTrigger',
-                    source_bus_id = 1,
-                    source_bus_irq = irq,
-                    dest_io_apic_id = io_apic.id,
-                    dest_io_apic_intin = apicPin)
-            base_entries.append(assign_to_apic)
-        assignISAInt(0, 2)
-        assignISAInt(1, 1)
-        for i in range(3, 15):
-            assignISAInt(i, i)
-        self.workload.intel_mp_table.base_entries = base_entries
-        self.workload.intel_mp_table.ext_entries = ext_entries
-
-        entries = \
-           [
-            # Mark the first megabyte of memory as reserved
-            X86E820Entry(addr = 0, size = '639kB', range_type = 1),
-            X86E820Entry(addr = 0x9fc00, size = '385kB', range_type = 2),
-            # Mark the rest of physical memory as available
-            X86E820Entry(addr = 0x100000,
-                    size = '%dB' % (self.mem_ranges[0].size() - 0x100000),
-                    range_type = 1),
-            ]
-
-        # Reserve the last 16kB of the 32-bit address space for m5ops
-        entries.append(X86E820Entry(addr = 0xFFFF0000, size = '64kB',
-                                    range_type=2))
-
-        self.workload.e820_table.entries = entries
diff --git a/src/parsec/configs/run_parsec.py b/src/parsec/configs/run_parsec.py
deleted file mode 100644
index 5dac66d..0000000
--- a/src/parsec/configs/run_parsec.py
+++ /dev/null
@@ -1,209 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2019 The Regents of the University of California.
-# All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-""" Script to run PARSEC benchmarks with gem5.
-    The script expects kernel, diskimage, cpu (kvm or timing),
-    benchmark, benchmark size, and number of cpu cores as arguments.
-    This script is best used if your disk-image has workloads tha have
-    ROI annotations compliant with m5 utility. You can use the script in
-    ../disk-images/parsec/ with the parsec-benchmark repo at
-    https://github.com/darchr/parsec-benchmark.git to create a working
-    disk-image for this script.
-"""
-import argparse
-import time
-import m5
-import m5.ticks
-from m5.objects import *
-
-from system import *
-
-supported_cpu_types = ["kvm", "timing"]
-benchmark_choices = ["blackscholes", "bodytrack", "canneal", "dedup",
-                     "facesim", "ferret", "fluidanimate", "freqmine",
-                     "raytrace", "streamcluster", "swaptions", "vips", "x264"]
-size_choices=["simsmall", "simmedium", "simlarge"]
-
-
-def parse_options():
-
-    parser = argparse.ArgumentParser(description='For use with gem5. This '
-                'runs a NAS Parallel Benchmark application. This only works '
-                'with x86 ISA.')
-
-    parser.add_argument("kernel", type=str,
-                        help="Path to the kernel binary to boot")
-    parser.add_argument("disk", type=str, help="Path to the PARSEC disk image")
-    parser.add_argument("cpu", type=str, choices=supported_cpu_types,
-                        help="The type of CPU to use in the system")
-    parser.add_argument("benchmark", type=str, choices=benchmark_choices,
-                        help="The PARSEC benchmark application to run")
-    parser.add_argument("size", type=str, choices=size_choices,
-                        help="The input size to the PARSEC benchmark "
-                             "application")
-    parser.add_argument("num_cpus", type=int, choices=[1,2,8],
-                        help="The number of CPU cores")
-
-    return parser.parse_args()
-
-def writeBenchScript(dir, bench, size, num_cpus):
-    """
-    This method creates a script in dir which will be eventually
-    passed to the simulated system (to run a specific benchmark
-    at bootup).
-    """
-    file_name = '{}/run_{}'.format(dir, bench)
-    bench_file = open(file_name, 'w+')
-    bench_file.write('cd /home/gem5/parsec-benchmark\n')
-    bench_file.write('source env.sh\n')
-    bench_file.write('parsecmgmt -a run -p \
-            {} -c gcc-hooks -i {} -n {}\n'.format(bench, size, num_cpus))
-
-    # sleeping for sometime makes sure
-    # that the benchmark's output has been
-    # printed to the console
-    bench_file.write('sleep 5 \n')
-    bench_file.write('m5 exit \n')
-    bench_file.close()
-    return file_name
-
-if __name__ == "__m5_main__":
-
-    args = parse_options()
-
-    # create the system
-    system = MySystem(args.kernel, args.disk, args.cpu, args.num_cpus)
-
-    # Exit from guest on workbegin/workend
-    system.exit_on_work_items = True
-
-    # Create and pass a script to the simulated system to run the reuired
-    # benchmark
-    system.readfile = writeBenchScript(m5.options.outdir, args.benchmark,
-                                      args.size, args.num_cpus)
-
-    # set up the root SimObject and start the simulation
-    root = Root(full_system = True, system = system)
-
-    if system.getHostParallel():
-        # Required for running kvm on multiple host cores.
-        # Uses gem5's parallel event queue feature
-        # Note: The simulator is quite picky about this number!
-        root.sim_quantum = int(1e9) # 1 ms
-
-    #needed for long running jobs
-    m5.disableAllListeners()
-
-    # instantiate all of the objects we've created above
-    m5.instantiate()
-
-    globalStart = time.time()
-
-    print("Running the simulation")
-    print("Using cpu: {}".format(args.cpu))
-
-    start_tick = m5.curTick()
-    end_tick = m5.curTick()
-    start_insts = system.totalInsts()
-    end_insts = system.totalInsts()
-    m5.stats.reset()
-
-    exit_event = m5.simulate()
-
-    if exit_event.getCause() == "workbegin":
-        print("Done booting Linux")
-        # Reached the start of ROI
-        # start of ROI is marked by an
-        # m5_work_begin() call
-        print("Resetting stats at the start of ROI!")
-        m5.stats.reset()
-        start_tick = m5.curTick()
-        start_insts = system.totalInsts()
-        # switching to timing cpu if argument cpu == timing
-        if args.cpu == 'timing':
-            system.switchCpus(system.cpu, system.detailedCpu)
-    else:
-        print("Unexpected termination of simulation!")
-        print()
-        m5.stats.dump()
-        end_tick = m5.curTick()
-        end_insts = system.totalInsts()
-        m5.stats.reset()
-        print("Performance statistics:")
-
-        print("Simulated time: %.2fs" % ((end_tick-start_tick)/1e12))
-        print("Instructions executed: %d" % ((end_insts-start_insts)))
-        print("Ran a total of", m5.curTick()/1e12, "simulated seconds")
-        print("Total wallclock time: %.2fs, %.2f min" % \
-                    (time.time()-globalStart, (time.time()-globalStart)/60))
-        exit()
-
-    # Simulate the ROI
-    exit_event = m5.simulate()
-
-    # Reached the end of ROI
-    # Finish executing the benchmark with kvm cpu
-    if exit_event.getCause() == "workend":
-        # Reached the end of ROI
-        # end of ROI is marked by an
-        # m5_work_end() call
-        print("Dump stats at the end of the ROI!")
-        m5.stats.dump()
-        end_tick = m5.curTick()
-        end_insts = system.totalInsts()
-        m5.stats.reset()
-        # switching to timing cpu if argument cpu == timing
-        if args.cpu == 'timing':
-            system.switchCpus(system.timingCpu, system.cpu)
-    else:
-        print("Unexpected termination of simulation!")
-        print()
-        m5.stats.dump()
-        end_tick = m5.curTick()
-        end_insts = system.totalInsts()
-        m5.stats.reset()
-        print("Performance statistics:")
-
-        print("Simulated time: %.2fs" % ((end_tick-start_tick)/1e12))
-        print("Instructions executed: %d" % ((end_insts-start_insts)))
-        print("Ran a total of", m5.curTick()/1e12, "simulated seconds")
-        print("Total wallclock time: %.2fs, %.2f min" % \
-                    (time.time()-globalStart, (time.time()-globalStart)/60))
-        exit()
-
-    # Simulate the remaning part of the benchmark
-    exit_event = m5.simulate()
-
-    print("Done with the simulation")
-    print()
-    print("Performance statistics:")
-
-    print("Simulated time in ROI: %.2fs" % ((end_tick-start_tick)/1e12))
-    print("Instructions executed in ROI: %d" % ((end_insts-start_insts)))
-    print("Ran a total of", m5.curTick()/1e12, "simulated seconds")
-    print("Total wallclock time: %.2fs, %.2f min" % \
-                (time.time()-globalStart, (time.time()-globalStart)/60))
diff --git a/src/parsec/configs/system/MESI_Two_Level.py b/src/parsec/configs/system/MESI_Two_Level.py
deleted file mode 100644
index 314f640..0000000
--- a/src/parsec/configs/system/MESI_Two_Level.py
+++ /dev/null
@@ -1,341 +0,0 @@
-#Copyright (c) 2020 The Regents of the University of California.
-#All Rights Reserved
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-
-""" This file creates a set of Ruby caches for the MESI TWO Level protocol
-This protocol models two level cache hierarchy. The L1 cache is split into
-instruction and data cache.
-
-This system support the memory size of up to 3GB.
-
-"""
-
-from __future__ import print_function
-from __future__ import absolute_import
-
-import math
-
-from m5.defines import buildEnv
-from m5.util import fatal, panic
-
-from m5.objects import *
-
-class MESITwoLevelCache(RubySystem):
-
-    def __init__(self):
-        if buildEnv['PROTOCOL'] != 'MESI_Two_Level':
-            fatal("This system assumes MESI_Two_Level!")
-
-        super(MESITwoLevelCache, self).__init__()
-
-        self._numL2Caches = 8
-
-    def setup(self, system, cpus, mem_ctrls, dma_ports, iobus):
-        """Set up the Ruby cache subsystem. Note: This can't be done in the
-           constructor because many of these items require a pointer to the
-           ruby system (self). This causes infinite recursion in initialize()
-           if we do this in the __init__.
-        """
-        # Ruby's global network.
-        self.network = MyNetwork(self)
-
-        # MESI_Two_Level example uses 5 virtual networks
-        self.number_of_virtual_networks = 5
-        self.network.number_of_virtual_networks = 5
-
-        # There is a single global list of all of the controllers to make it
-        # easier to connect everything to the global network. This can be
-        # customized depending on the topology/network requirements.
-        # L1 caches are private to a core, hence there is one L1 cache per CPU
-        # core. The number of L2 caches depends on the architecture.
-        self.controllers = \
-            [L1Cache(system, self, cpu, self._numL2Caches) for cpu in cpus] + \
-            [L2Cache(system, self, self._numL2Caches) for num in \
-            range(self._numL2Caches)] + [DirController(self, \
-            system.mem_ranges, mem_ctrls)] + [DMAController(self) for i \
-            in range(len(dma_ports))]
-
-        # Create one sequencer per CPU and DMA controller.
-        # Sequencers for other controllers can be added here.
-        self.sequencers = [RubySequencer(version = i,
-                                # Grab the split I and D caches from the ctrl
-                                icache = self.controllers[i].L1Icache,
-                                dcache = self.controllers[i].L1Dcache,
-                                clk_domain = self.controllers[i].clk_domain,
-                                pio_request_port = iobus.cpu_side_ports,
-                                mem_request_port = iobus.cpu_side_ports,
-                                pio_response_port = iobus.mem_side_ports
-                                ) for i in range(len(cpus))] + \
-                          [DMASequencer(version = i,
-                                        in_ports = port)
-                            for i,port in enumerate(dma_ports)
-                          ]
-
-        for i,c in enumerate(self.controllers[:len(cpus)]):
-            c.sequencer = self.sequencers[i]
-
-        # Connect the DMA sequencer to the DMA controller
-        for i,d in enumerate(self.controllers[-len(dma_ports):]):
-            i += len(cpus)
-            d.dma_sequencer = self.sequencers[i]
-
-        self.num_of_sequencers = len(self.sequencers)
-
-        # Create the network and connect the controllers.
-        # NOTE: This is quite different if using Garnet!
-        self.network.connectControllers(self.controllers)
-        self.network.setup_buffers()
-
-        # Set up a proxy port for the system_port. Used for loading binaries and
-        # other functional-only things.
-        self.sys_port_proxy = RubyPortProxy()
-        system.system_port = self.sys_port_proxy.in_ports
-        self.sys_port_proxy.pio_request_port = iobus.cpu_side_ports
-
-        # Connect the cpu's cache, interrupt, and TLB ports to Ruby
-        for i,cpu in enumerate(cpus):
-            cpu.icache_port = self.sequencers[i].in_ports
-            cpu.dcache_port = self.sequencers[i].in_ports
-            isa = buildEnv['TARGET_ISA']
-            if isa == 'x86':
-                cpu.interrupts[0].pio = self.sequencers[i].interrupt_out_port
-                cpu.interrupts[0].int_requestor = self.sequencers[i].in_ports
-                cpu.interrupts[0].int_responder = \
-                                        self.sequencers[i].interrupt_out_port
-            if isa == 'x86' or isa == 'arm':
-                cpu.itb.walker.port = self.sequencers[i].in_ports
-                cpu.dtb.walker.port = self.sequencers[i].in_ports
-
-class L1Cache(L1Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, cpu, num_l2Caches):
-        """Creating L1 cache controller. Consist of both instruction
-           and data cache. The size of data cache is 512KB and
-           8-way set associative. The instruction cache is 32KB,
-           2-way set associative.
-        """
-        super(L1Cache, self).__init__()
-
-        self.version = self.versionCount()
-        block_size_bits = int(math.log(system.cache_line_size, 2))
-        l1i_size = '32kB'
-        l1i_assoc = '2'
-        l1d_size = '512kB'
-        l1d_assoc = '8'
-        # This is the cache memory object that stores the cache data and tags
-        self.L1Icache = RubyCache(size = l1i_size,
-                                assoc = l1i_assoc,
-                                start_index_bit = block_size_bits ,
-                                is_icache = True)
-        self.L1Dcache = RubyCache(size = l1d_size,
-                            assoc = l1d_assoc,
-                            start_index_bit = block_size_bits,
-                            is_icache = False)
-        self.l2_select_num_bits = int(math.log(num_l2Caches , 2))
-        self.clk_domain = cpu.clk_domain
-        self.prefetcher = RubyPrefetcher()
-        self.send_evictions = self.sendEvicts(cpu)
-        self.transitions_per_cycle = 4
-        self.enable_prefetch = False
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getBlockSizeBits(self, system):
-        bits = int(math.log(system.cache_line_size, 2))
-        if 2**bits != system.cache_line_size.value:
-            panic("Cache line size not a power of 2!")
-        return bits
-
-    def sendEvicts(self, cpu):
-        """True if the CPU model or ISA requires sending evictions from caches
-           to the CPU. Three scenarios warrant forwarding evictions to the CPU:
-           1. The O3 model must keep the LSQ coherent with the caches
-           2. The x86 mwait instruction is built on top of coherence
-           3. The local exclusive monitor in ARM systems
-        """
-        if type(cpu) is DerivO3CPU or \
-           buildEnv['TARGET_ISA'] in ('x86', 'arm'):
-            return True
-        return False
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.mandatoryQueue = MessageBuffer()
-        self.requestFromL1Cache = MessageBuffer()
-        self.requestFromL1Cache.out_port = ruby_system.network.in_port
-        self.responseFromL1Cache = MessageBuffer()
-        self.responseFromL1Cache.out_port = ruby_system.network.in_port
-        self.unblockFromL1Cache = MessageBuffer()
-        self.unblockFromL1Cache.out_port = ruby_system.network.in_port
-
-        self.optionalQueue = MessageBuffer()
-
-        self.requestToL1Cache = MessageBuffer()
-        self.requestToL1Cache.in_port = ruby_system.network.out_port
-        self.responseToL1Cache = MessageBuffer()
-        self.responseToL1Cache.in_port = ruby_system.network.out_port
-
-class L2Cache(L2Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, num_l2Caches):
-
-        super(L2Cache, self).__init__()
-
-        self.version = self.versionCount()
-        # This is the cache memory object that stores the cache data and tags
-        self.L2cache = RubyCache(size = '1 MB',
-                                assoc = 16,
-                                start_index_bit = self.getBlockSizeBits(system,
-                                num_l2Caches))
-
-        self.transitions_per_cycle = '4'
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getBlockSizeBits(self, system, num_l2caches):
-        l2_bits = int(math.log(num_l2caches, 2))
-        bits = int(math.log(system.cache_line_size, 2)) + l2_bits
-        return bits
-
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.DirRequestFromL2Cache = MessageBuffer()
-        self.DirRequestFromL2Cache.out_port = ruby_system.network.in_port
-        self.L1RequestFromL2Cache = MessageBuffer()
-        self.L1RequestFromL2Cache.out_port = ruby_system.network.in_port
-        self.responseFromL2Cache = MessageBuffer()
-        self.responseFromL2Cache.out_port = ruby_system.network.in_port
-        self.unblockToL2Cache = MessageBuffer()
-        self.unblockToL2Cache.in_port = ruby_system.network.out_port
-        self.L1RequestToL2Cache = MessageBuffer()
-        self.L1RequestToL2Cache.in_port = ruby_system.network.out_port
-        self.responseToL2Cache = MessageBuffer()
-        self.responseToL2Cache.in_port = ruby_system.network.out_port
-
-
-class DirController(Directory_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system, ranges, mem_ctrls):
-        """ranges are the memory ranges assigned to this controller.
-        """
-        if len(mem_ctrls) > 1:
-            panic("This cache system can only be connected to one mem ctrl")
-        super(DirController, self).__init__()
-        self.version = self.versionCount()
-        self.addr_ranges = ranges
-        self.ruby_system = ruby_system
-        self.directory = RubyDirectoryMemory()
-        # Connect this directory to the memory side.
-        self.memory_out_port = mem_ctrls[0].port
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.requestToDir = MessageBuffer()
-        self.requestToDir.in_port = ruby_system.network.out_port
-        self.responseToDir = MessageBuffer()
-        self.responseToDir.in_port = ruby_system.network.out_port
-        self.responseFromDir = MessageBuffer()
-        self.responseFromDir.out_port = ruby_system.network.in_port
-        self.requestToMemory = MessageBuffer()
-        self.responseFromMemory = MessageBuffer()
-
-class DMAController(DMA_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system):
-        super(DMAController, self).__init__()
-        self.version = self.versionCount()
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.mandatoryQueue = MessageBuffer()
-        self.responseFromDir = MessageBuffer(ordered = True)
-        self.responseFromDir.in_port = ruby_system.network.out_port
-        self.requestToDir = MessageBuffer()
-        self.requestToDir.out_port = ruby_system.network.in_port
-
-
-class MyNetwork(SimpleNetwork):
-    """A simple point-to-point network. This doesn't not use garnet.
-    """
-
-    def __init__(self, ruby_system):
-        super(MyNetwork, self).__init__()
-        self.netifs = []
-        self.ruby_system = ruby_system
-
-    def connectControllers(self, controllers):
-        """Connect all of the controllers to routers and connec the routers
-           together in a point-to-point network.
-        """
-        # Create one router/switch per controller in the system
-        self.routers = [Switch(router_id = i) for i in range(len(controllers))]
-
-        # Make a link from each controller to the router. The link goes
-        # externally to the network.
-        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,
-                                        int_node=self.routers[i])
-                          for i, c in enumerate(controllers)]
-
-        # Make an "internal" link (internal to the network) between every pair
-        # of routers.
-        link_count = 0
-        self.int_links = []
-        for ri in self.routers:
-            for rj in self.routers:
-                if ri == rj: continue # Don't connect a router to itself!
-                link_count += 1
-                self.int_links.append(SimpleIntLink(link_id = link_count,
-                                                    src_node = ri,
-                                                    dst_node = rj))
diff --git a/src/parsec/configs/system/MI_example_caches.py b/src/parsec/configs/system/MI_example_caches.py
deleted file mode 100644
index a9a171c..0000000
--- a/src/parsec/configs/system/MI_example_caches.py
+++ /dev/null
@@ -1,277 +0,0 @@
-# Copyright (c) 2021 The Regents of the University of California
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-""" This file creates a set of Ruby caches, the Ruby network, and a simple
-point-to-point topology.
-See Part 3 in the Learning gem5 book: learning.gem5.org/book/part3
-You can change simple_ruby to import from this file instead of from msi_caches
-to use the MI_example protocol instead of MSI.
-
-IMPORTANT: If you modify this file, it's likely that the Learning gem5 book
-           also needs to be updated. For now, email Jason <jason@lowepower.com>
-
-"""
-
-from __future__ import print_function
-from __future__ import absolute_import
-
-import math
-
-from m5.defines import buildEnv
-from m5.util import fatal, panic
-
-from m5.objects import *
-
-class MIExampleSystem(RubySystem):
-
-    def __init__(self):
-        if buildEnv['PROTOCOL'] != 'MI_example':
-            fatal("This system assumes MI_example!")
-
-        super(MIExampleSystem, self).__init__()
-
-    def setup(self, system, cpus, mem_ctrls, dma_ports, iobus):
-        """Set up the Ruby cache subsystem. Note: This can't be done in the
-           constructor because many of these items require a pointer to the
-           ruby system (self). This causes infinite recursion in initialize()
-           if we do this in the __init__.
-        """
-        # Ruby's global network.
-        self.network = MyNetwork(self)
-
-        # MI example uses 5 virtual networks
-        self.number_of_virtual_networks = 5
-        self.network.number_of_virtual_networks = 5
-
-        # There is a single global list of all of the controllers to make it
-        # easier to connect everything to the global network. This can be
-        # customized depending on the topology/network requirements.
-        # Create one controller for each L1 cache (and the cache mem obj.)
-        # Create a single directory controller (Really the memory cntrl)
-        self.controllers = \
-            [L1Cache(system, self, cpu) for cpu in cpus] + \
-            [DirController(self, system.mem_ranges, mem_ctrls)] + \
-            [DMAController(self) for i in range(len(dma_ports))]
-
-        # Create one sequencer per CPU. In many systems this is more
-        # complicated since you have to create sequencers for DMA controllers
-        # and other controllers, too.
-        self.sequencers = [RubySequencer(version = i,
-                                # The I/D cache is combined; grab it from the ctrl
-                                icache = self.controllers[i].cacheMemory,
-                                dcache = self.controllers[i].cacheMemory,
-                                clk_domain = self.controllers[i].clk_domain,
-                                pio_request_port = iobus.cpu_side_ports,
-                                mem_request_port = iobus.cpu_side_ports,
-                                pio_response_port = iobus.mem_side_ports
-                                ) for i in range(len(cpus))] + \
-                          [DMASequencer(version = i,
-                                        in_ports = port)
-                            for i,port in enumerate(dma_ports)
-                          ]
-
-        for i,c in enumerate(self.controllers[0:len(cpus)]):
-            c.sequencer = self.sequencers[i]
-
-        for i,d in enumerate(self.controllers[-len(dma_ports):]):
-            i += len(cpus)
-            d.dma_sequencer = self.sequencers[i]
-
-        self.num_of_sequencers = len(self.sequencers)
-
-        # Create the network and connect the controllers.
-        # NOTE: This is quite different if using Garnet!
-        self.network.connectControllers(self.controllers)
-        self.network.setup_buffers()
-
-        # Set up a proxy port for the system_port. Used for loading binaries and
-        # other functional-only things.
-        self.sys_port_proxy = RubyPortProxy()
-        system.system_port = self.sys_port_proxy.in_ports
-        self.sys_port_proxy.pio_request_port = iobus.cpu_side_ports
-
-        # Connect the cpu's cache, interrupt, and TLB ports to Ruby
-        for i,cpu in enumerate(cpus):
-            cpu.icache_port = self.sequencers[i].in_ports
-            cpu.dcache_port = self.sequencers[i].in_ports
-            isa = buildEnv['TARGET_ISA']
-            if isa == 'x86':
-                cpu.interrupts[0].pio = self.sequencers[i].interrupt_out_port
-                cpu.interrupts[0].int_requestor = self.sequencers[i].in_ports
-                cpu.interrupts[0].int_responder = \
-                                        self.sequencers[i].interrupt_out_port
-            if isa == 'x86' or isa == 'arm':
-                cpu.itb.walker.port = self.sequencers[i].in_ports
-                cpu.dtb.walker.port = self.sequencers[i].in_ports
-
-
-class L1Cache(L1Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, cpu):
-        """CPUs are needed to grab the clock domain and system is needed for
-           the cache block size.
-        """
-        super(L1Cache, self).__init__()
-
-        self.version = self.versionCount()
-        # This is the cache memory object that stores the cache data and tags
-        self.cacheMemory = RubyCache(size = '16kB',
-                               assoc = 8,
-                               start_index_bit = self.getBlockSizeBits(system))
-        self.clk_domain = cpu.clk_domain
-        self.send_evictions = self.sendEvicts(cpu)
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getBlockSizeBits(self, system):
-        bits = int(math.log(system.cache_line_size, 2))
-        if 2**bits != system.cache_line_size.value:
-            panic("Cache line size not a power of 2!")
-        return bits
-
-    def sendEvicts(self, cpu):
-        """True if the CPU model or ISA requires sending evictions from caches
-           to the CPU. Three scenarios warrant forwarding evictions to the CPU:
-           1. The O3 model must keep the LSQ coherent with the caches
-           2. The x86 mwait instruction is built on top of coherence
-           3. The local exclusive monitor in ARM systems
-        """
-        if type(cpu) is DerivO3CPU or \
-           buildEnv['TARGET_ISA'] in ('x86', 'arm'):
-            return True
-        return False
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.mandatoryQueue = MessageBuffer()
-        self.requestFromCache = MessageBuffer(ordered = True)
-        self.requestFromCache.out_port = ruby_system.network.in_port
-        self.responseFromCache = MessageBuffer(ordered = True)
-        self.responseFromCache.out_port = ruby_system.network.in_port
-        self.forwardToCache = MessageBuffer(ordered = True)
-        self.forwardToCache.in_port = ruby_system.network.out_port
-        self.responseToCache = MessageBuffer(ordered = True)
-        self.responseToCache.in_port = ruby_system.network.out_port
-
-class DirController(Directory_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system, ranges, mem_ctrls):
-        """ranges are the memory ranges assigned to this controller.
-        """
-        if len(mem_ctrls) > 1:
-            panic("This cache system can only be connected to one mem ctrl")
-        super(DirController, self).__init__()
-        self.version = self.versionCount()
-        self.addr_ranges = ranges
-        self.ruby_system = ruby_system
-        self.directory = RubyDirectoryMemory()
-        # Connect this directory to the memory side.
-        self.memory_out_port = mem_ctrls[0].port
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.requestToDir = MessageBuffer(ordered = True)
-        self.requestToDir.in_port = ruby_system.network.out_port
-        self.dmaRequestToDir = MessageBuffer(ordered = True)
-        self.dmaRequestToDir.in_port = ruby_system.network.out_port
-
-        self.responseFromDir = MessageBuffer()
-        self.responseFromDir.out_port = ruby_system.network.in_port
-        self.dmaResponseFromDir = MessageBuffer(ordered = True)
-        self.dmaResponseFromDir.out_port = ruby_system.network.in_port
-        self.forwardFromDir = MessageBuffer()
-        self.forwardFromDir.out_port = ruby_system.network.in_port
-        self.requestToMemory = MessageBuffer()
-        self.responseFromMemory = MessageBuffer()
-
-class DMAController(DMA_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system):
-        super(DMAController, self).__init__()
-        self.version = self.versionCount()
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.mandatoryQueue = MessageBuffer()
-        self.requestToDir = MessageBuffer()
-        self.requestToDir.out_port = ruby_system.network.in_port
-        self.responseFromDir = MessageBuffer(ordered = True)
-        self.responseFromDir.in_port = ruby_system.network.out_port
-
-
-class MyNetwork(SimpleNetwork):
-    """A simple point-to-point network. This doesn't not use garnet.
-    """
-
-    def __init__(self, ruby_system):
-        super(MyNetwork, self).__init__()
-        self.netifs = []
-        self.ruby_system = ruby_system
-
-    def connectControllers(self, controllers):
-        """Connect all of the controllers to routers and connec the routers
-           together in a point-to-point network.
-        """
-        # Create one router/switch per controller in the system
-        self.routers = [Switch(router_id = i) for i in range(len(controllers))]
-
-        # Make a link from each controller to the router. The link goes
-        # externally to the network.
-        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,
-                                        int_node=self.routers[i])
-                          for i, c in enumerate(controllers)]
-
-        # Make an "internal" link (internal to the network) between every pair
-        # of routers.
-        link_count = 0
-        self.int_links = []
-        for ri in self.routers:
-            for rj in self.routers:
-                if ri == rj: continue # Don't connect a router to itself!
-                link_count += 1
-                self.int_links.append(SimpleIntLink(link_id = link_count,
-                                                    src_node = ri,
-                                                    dst_node = rj))
diff --git a/src/parsec/configs/system/MOESI_CMP_directory.py b/src/parsec/configs/system/MOESI_CMP_directory.py
deleted file mode 100644
index f24022a..0000000
--- a/src/parsec/configs/system/MOESI_CMP_directory.py
+++ /dev/null
@@ -1,351 +0,0 @@
-#Copyright (c) 2020 The Regents of the University of California.
-#All Rights Reserved
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-
-""" This file creates a set of Ruby caches for the MOESI CMP directory
-protocol.
-This protocol models a two-level cache hierarchy. The L1 cache is split into
-instruction and data caches.
-
-This system supports a memory size of up to 3GB.
-
-"""
-
-from __future__ import print_function
-from __future__ import absolute_import
-
-import math
-
-from m5.defines import buildEnv
-from m5.util import fatal, panic
-
-from m5.objects import *
-
-class MOESICMPDirCache(RubySystem):
-
-    def __init__(self):
-        if buildEnv['PROTOCOL'] != 'MOESI_CMP_directory':
-            fatal("This system assumes MOESI_CMP_directory!")
-
-        super(MOESICMPDirCache, self).__init__()
-
-        self._numL2Caches = 8
-
-    def setup(self, system, cpus, mem_ctrls, dma_ports, iobus):
-        """Set up the Ruby cache subsystem. Note: This can't be done in the
-           constructor because many of these items require a pointer to the
-           ruby system (self). This causes infinite recursion in initialize()
-           if we do this in the __init__.
-        """
-        # Ruby's global network.
-        self.network = MyNetwork(self)
-
-        # MOESI_CMP_directory example uses 3 virtual networks
-        self.number_of_virtual_networks = 3
-        self.network.number_of_virtual_networks = 3
-
-        # There is a single global list of all of the controllers to make it
-        # easier to connect everything to the global network. This can be
-        # customized depending on the topology/network requirements.
-        # L1 caches are private to a core, hence there is one L1 cache per CPU
-        # core. The number of L2 caches depends on the architecture.
-        self.controllers = \
-            [L1Cache(system, self, cpu, self._numL2Caches) for cpu in cpus] + \
-            [L2Cache(system, self, self._numL2Caches) for num in \
-            range(self._numL2Caches)] + [DirController(self, \
-            system.mem_ranges, mem_ctrls)] + [DMAController(self) for i \
-            in range(len(dma_ports))]
-
-        # Create one sequencer per CPU and DMA controller.
-        # Sequencers for other controllers can be added here.
-        self.sequencers = [RubySequencer(version = i,
-                                # Grab the split I and D caches from the ctrl
-                                icache = self.controllers[i].L1Icache,
-                                dcache = self.controllers[i].L1Dcache,
-                                clk_domain = self.controllers[i].clk_domain,
-                                pio_request_port = iobus.cpu_side_ports,
-                                mem_request_port = iobus.cpu_side_ports,
-                                pio_response_port = iobus.mem_side_ports
-                                ) for i in range(len(cpus))] + \
-                          [DMASequencer(version = i,
-                                        in_ports = port)
-                            for i,port in enumerate(dma_ports)
-                          ]
-
-        for i,c in enumerate(self.controllers[:len(cpus)]):
-            c.sequencer = self.sequencers[i]
-
-        # Connect the DMA sequencer to the DMA controller
-        for i,d in enumerate(self.controllers[-len(dma_ports):]):
-            i += len(cpus)
-            d.dma_sequencer = self.sequencers[i]
-
-        self.num_of_sequencers = len(self.sequencers)
-
-        # Create the network and connect the controllers.
-        # NOTE: This is quite different if using Garnet!
-        self.network.connectControllers(self.controllers)
-        self.network.setup_buffers()
-
-        # Set up a proxy port for the system_port. Used for loading binaries and
-        # other functional-only things.
-        self.sys_port_proxy = RubyPortProxy()
-        system.system_port = self.sys_port_proxy.in_ports
-        self.sys_port_proxy.pio_request_port = iobus.cpu_side_ports
-
-        # Connect the cpu's cache, interrupt, and TLB ports to Ruby
-        for i,cpu in enumerate(cpus):
-            cpu.icache_port = self.sequencers[i].in_ports
-            cpu.dcache_port = self.sequencers[i].in_ports
-            isa = buildEnv['TARGET_ISA']
-            if isa == 'x86':
-                cpu.interrupts[0].pio = self.sequencers[i].interrupt_out_port
-                cpu.interrupts[0].int_requestor = self.sequencers[i].in_ports
-                cpu.interrupts[0].int_responder = self.sequencers[i].interrupt_out_port
-            if isa == 'x86' or isa == 'arm':
-                cpu.itb.walker.port = self.sequencers[i].in_ports
-                cpu.dtb.walker.port = self.sequencers[i].in_ports
-
-
-class L1Cache(L1Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, cpu, num_l2Caches):
-        """Creating L1 cache controller. Consist of both instruction
-           and data cache. The size of data cache is 512KB and
-           8-way set associative. The instruction cache is 32KB,
-           2-way set associative.
-        """
-        super(L1Cache, self).__init__()
-
-        self.version = self.versionCount()
-        block_size_bits = int(math.log(system.cache_line_size, 2))
-        l1i_size = '32kB'
-        l1i_assoc = '2'
-        l1d_size = '512kB'
-        l1d_assoc = '8'
-        # This is the cache memory object that stores the cache data and tags
-        self.L1Icache = RubyCache(size = l1i_size,
-                                assoc = l1i_assoc,
-                                start_index_bit = block_size_bits ,
-                                is_icache = True,
-                                dataAccessLatency = 1,
-                                tagAccessLatency = 1)
-        self.L1Dcache = RubyCache(size = l1d_size,
-                            assoc = l1d_assoc,
-                            start_index_bit = block_size_bits,
-                            is_icache = False,
-                            dataAccessLatency = 1,
-                            tagAccessLatency = 1)
-        self.clk_domain = cpu.clk_domain
-        self.prefetcher = RubyPrefetcher()
-        self.send_evictions = self.sendEvicts(cpu)
-        self.transitions_per_cycle = 4
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getBlockSizeBits(self, system):
-        bits = int(math.log(system.cache_line_size, 2))
-        if 2**bits != system.cache_line_size.value:
-            panic("Cache line size not a power of 2!")
-        return bits
-
-    def sendEvicts(self, cpu):
-        """True if the CPU model or ISA requires sending evictions from caches
-           to the CPU. Three scenarios warrant forwarding evictions to the CPU:
-           1. The O3 model must keep the LSQ coherent with the caches
-           2. The x86 mwait instruction is built on top of coherence
-           3. The local exclusive monitor in ARM systems
-        """
-        if type(cpu) is DerivO3CPU or \
-           buildEnv['TARGET_ISA'] in ('x86', 'arm'):
-            return True
-        return False
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.mandatoryQueue = MessageBuffer()
-        self.requestFromL1Cache = MessageBuffer()
-        self.requestFromL1Cache.out_port = ruby_system.network.in_port
-        self.responseFromL1Cache = MessageBuffer()
-        self.responseFromL1Cache.out_port = ruby_system.network.in_port
-        self.requestToL1Cache = MessageBuffer()
-        self.requestToL1Cache.in_port = ruby_system.network.out_port
-        self.responseToL1Cache = MessageBuffer()
-        self.responseToL1Cache.in_port = ruby_system.network.out_port
-        self.triggerQueue = MessageBuffer(ordered = True)
-
-class L2Cache(L2Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, num_l2Caches):
-
-        super(L2Cache, self).__init__()
-
-        self.version = self.versionCount()
-        # This is the cache memory object that stores the cache data and tags
-        self.L2cache = RubyCache(size = '1 MB',
-                                assoc = 16,
-                                start_index_bit = self.getL2StartIdx(system,
-                                num_l2Caches),
-                                dataAccessLatency = 20,
-                                tagAccessLatency = 20)
-
-        self.transitions_per_cycle = '4'
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getL2StartIdx(self, system, num_l2caches):
-        l2_bits = int(math.log(num_l2caches, 2))
-        bits = int(math.log(system.cache_line_size, 2)) + l2_bits
-        return bits
-
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.GlobalRequestFromL2Cache = MessageBuffer()
-        self.GlobalRequestFromL2Cache.out_port = ruby_system.network.in_port
-        self.L1RequestFromL2Cache = MessageBuffer()
-        self.L1RequestFromL2Cache.out_port = ruby_system.network.in_port
-        self.responseFromL2Cache = MessageBuffer()
-        self.responseFromL2Cache.out_port = ruby_system.network.in_port
-
-        self.GlobalRequestToL2Cache = MessageBuffer()
-        self.GlobalRequestToL2Cache.in_port = ruby_system.network.out_port
-        self.L1RequestToL2Cache = MessageBuffer()
-        self.L1RequestToL2Cache.in_port = ruby_system.network.out_port
-        self.responseToL2Cache = MessageBuffer()
-        self.responseToL2Cache.in_port = ruby_system.network.out_port
-        self.triggerQueue = MessageBuffer(ordered = True)
-
-
-
-class DirController(Directory_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system, ranges, mem_ctrls):
-        """ranges are the memory ranges assigned to this controller.
-        """
-        if len(mem_ctrls) > 1:
-            panic("This cache system can only be connected to one mem ctrl")
-        super(DirController, self).__init__()
-        self.version = self.versionCount()
-        self.addr_ranges = ranges
-        self.ruby_system = ruby_system
-        self.directory = RubyDirectoryMemory()
-        # Connect this directory to the memory side.
-        self.memory_out_port = mem_ctrls[0].port
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.requestToDir = MessageBuffer()
-        self.requestToDir.in_port = ruby_system.network.out_port
-        self.responseToDir = MessageBuffer()
-        self.responseToDir.in_port = ruby_system.network.out_port
-        self.responseFromDir = MessageBuffer()
-        self.responseFromDir.out_port = ruby_system.network.in_port
-        self.forwardFromDir = MessageBuffer()
-        self.forwardFromDir.out_port = ruby_system.network.in_port
-        self.requestToMemory = MessageBuffer()
-        self.responseFromMemory = MessageBuffer()
-        self.triggerQueue = MessageBuffer(ordered = True)
-
-class DMAController(DMA_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system):
-        super(DMAController, self).__init__()
-        self.version = self.versionCount()
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.mandatoryQueue = MessageBuffer()
-        self.responseFromDir = MessageBuffer()
-        self.responseFromDir.in_port = ruby_system.network.out_port
-        self.reqToDir = MessageBuffer()
-        self.reqToDir.out_port = ruby_system.network.in_port
-        self.respToDir = MessageBuffer()
-        self.respToDir.out_port = ruby_system.network.in_port
-        self.triggerQueue = MessageBuffer(ordered = True)
-
-
-class MyNetwork(SimpleNetwork):
-    """A simple point-to-point network. This doesn't not use garnet.
-    """
-
-    def __init__(self, ruby_system):
-        super(MyNetwork, self).__init__()
-        self.netifs = []
-        self.ruby_system = ruby_system
-
-    def connectControllers(self, controllers):
-        """Connect all of the controllers to routers and connec the routers
-           together in a point-to-point network.
-        """
-        # Create one router/switch per controller in the system
-        self.routers = [Switch(router_id = i) for i in range(len(controllers))]
-
-        # Make a link from each controller to the router. The link goes
-        # externally to the network.
-        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,
-                                        int_node=self.routers[i])
-                          for i, c in enumerate(controllers)]
-
-        # Make an "internal" link (internal to the network) between every pair
-        # of routers.
-        link_count = 0
-        self.int_links = []
-        for ri in self.routers:
-            for rj in self.routers:
-                if ri == rj: continue # Don't connect a router to itself!
-                link_count += 1
-                self.int_links.append(SimpleIntLink(link_id = link_count,
-                                                    src_node = ri,
-                                                    dst_node = rj))
diff --git a/src/parsec/configs/system/__init__.py b/src/parsec/configs/system/__init__.py
deleted file mode 100644
index 5b02b9a..0000000
--- a/src/parsec/configs/system/__init__.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright (c) 2021 The Regents of the University of California
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-from .system import MySystem
-from .ruby_system import MyRubySystem
-
diff --git a/src/parsec/configs/system/caches.py b/src/parsec/configs/system/caches.py
deleted file mode 100644
index 7d60733..0000000
--- a/src/parsec/configs/system/caches.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# Copyright (c) 2021 The Regents of the University of California
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-""" Caches with options for a simple gem5 configuration script
-
-This file contains L1 I/D and L2 caches to be used in the simple
-gem5 configuration script.
-"""
-
-from m5.objects import Cache, L2XBar, StridePrefetcher
-
-# Some specific options for caches
-# For all options see src/mem/cache/BaseCache.py
-
-class PrefetchCache(Cache):
-
-    def __init__(self):
-        super(PrefetchCache, self).__init__()
-        self.prefetcher = StridePrefetcher()
-
-class L1Cache(PrefetchCache):
-    """Simple L1 Cache with default values"""
-
-    assoc = 8
-    tag_latency = 1
-    data_latency = 1
-    response_latency = 1
-    mshrs = 16
-    tgts_per_mshr = 20
-    writeback_clean = True
-
-    def __init__(self):
-        super(L1Cache, self).__init__()
-
-    def connectBus(self, bus):
-        """Connect this cache to a memory-side bus"""
-        self.mem_side = bus.cpu_side_ports
-
-    def connectCPU(self, cpu):
-        """Connect this cache's port to a CPU-side port
-           This must be defined in a subclass"""
-        raise NotImplementedError
-
-class L1ICache(L1Cache):
-    """Simple L1 instruction cache with default values"""
-
-    # Set the default size
-    size = '32kB'
-
-    def __init__(self):
-        super(L1ICache, self).__init__()
-
-    def connectCPU(self, cpu):
-        """Connect this cache's port to a CPU icache port"""
-        self.cpu_side = cpu.icache_port
-
-class L1DCache(L1Cache):
-    """Simple L1 data cache with default values"""
-
-    # Set the default size
-    size = '32kB'
-
-    def __init__(self):
-        super(L1DCache, self).__init__()
-
-    def connectCPU(self, cpu):
-        """Connect this cache's port to a CPU dcache port"""
-        self.cpu_side = cpu.dcache_port
-
-class MMUCache(Cache):
-    # Default parameters
-    size = '8kB'
-    assoc = 4
-    tag_latency = 1
-    data_latency = 1
-    response_latency = 1
-    mshrs = 20
-    tgts_per_mshr = 12
-    writeback_clean = True
-
-    def __init__(self):
-        super(MMUCache, self).__init__()
-
-    def connectCPU(self, cpu):
-        """Connect the CPU itb and dtb to the cache
-           Note: This creates a new crossbar
-        """
-        self.mmubus = L2XBar()
-        self.cpu_side = self.mmubus.mem_side_ports
-        cpu.mmu.connectWalkerPorts(
-            self.mmubus.cpu_side_ports, self.mmubus.cpu_side_ports)
-
-    def connectBus(self, bus):
-        """Connect this cache to a memory-side bus"""
-        self.mem_side = bus.cpu_side_ports
-
-class L2Cache(PrefetchCache):
-    """Simple L2 Cache with default values"""
-
-    # Default parameters
-    size = '256kB'
-    assoc = 16
-    tag_latency = 10
-    data_latency = 10
-    response_latency = 1
-    mshrs = 20
-    tgts_per_mshr = 12
-    writeback_clean = True
-
-    def __init__(self):
-        super(L2Cache, self).__init__()
-
-    def connectCPUSideBus(self, bus):
-        self.cpu_side = bus.mem_side_ports
-
-    def connectMemSideBus(self, bus):
-        self.mem_side = bus.cpu_side_ports
diff --git a/src/parsec/configs/system/fs_tools.py b/src/parsec/configs/system/fs_tools.py
deleted file mode 100644
index 9e49ce7..0000000
--- a/src/parsec/configs/system/fs_tools.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright (c) 2021 The Regents of the University of California
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-from m5.objects import IdeDisk, CowDiskImage, RawDiskImage
-
-class CowDisk(IdeDisk):
-
-    def __init__(self, filename):
-        super(CowDisk, self).__init__()
-        self.driveID = 'device0'
-        self.image = CowDiskImage(child=RawDiskImage(read_only=True),
-                                  read_only=False)
-        self.image.child.image_file = filename
diff --git a/src/parsec/configs/system/ruby_system.py b/src/parsec/configs/system/ruby_system.py
deleted file mode 100644
index 3959a71..0000000
--- a/src/parsec/configs/system/ruby_system.py
+++ /dev/null
@@ -1,228 +0,0 @@
-# Copyright (c) 2021 The Regents of the University of California
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-import m5
-from m5.objects import *
-from .fs_tools import *
-
-
-class MyRubySystem(System):
-
-    def __init__(self, kernel, disk, cpu_type, mem_sys, num_cpus):
-        super(MyRubySystem, self).__init__()
-
-        self._host_parallel = cpu_type == "kvm"
-
-        # Set up the clock domain and the voltage domain
-        self.clk_domain = SrcClockDomain()
-        self.clk_domain.clock = '3GHz'
-        self.clk_domain.voltage_domain = VoltageDomain()
-
-        self.mem_ranges = [AddrRange(Addr('3GB')), # All data
-                           AddrRange(0xC0000000, size=0x100000), # For I/O
-                           ]
-
-        self.initFS(num_cpus)
-
-        # Replace these paths with the path to your disk images.
-        # The first disk is the root disk. The second could be used for swap
-        # or anything else.
-        self.setDiskImages(disk, disk)
-
-        # Change this path to point to the kernel you want to use
-        self.workload.object_file = kernel
-        # Options specified on the kernel command line
-        boot_options = ['earlyprintk=ttyS0', 'console=ttyS0', 'lpj=7999923',
-                         'root=/dev/hda1']
-
-        self.workload.command_line = ' '.join(boot_options)
-
-        # Create the CPUs for our system.
-        self.createCPU(cpu_type, num_cpus)
-
-        self.createMemoryControllersDDR3()
-
-        # Create the cache hierarchy for the system.
-        if mem_sys == 'MI_example':
-            from .MI_example_caches import MIExampleSystem
-            self.caches = MIExampleSystem()
-        elif mem_sys == 'MESI_Two_Level':
-            from .MESI_Two_Level import MESITwoLevelCache
-            self.caches = MESITwoLevelCache()
-        elif mem_sys == 'MOESI_CMP_directory':
-            from .MOESI_CMP_directory import MOESICMPDirCache
-            self.caches = MOESICMPDirCache()
-        self.caches.setup(self, self.cpu, self.mem_cntrls,
-                          [self.pc.south_bridge.ide.dma, self.iobus.mem_side_ports],
-                          self.iobus)
-
-        if self._host_parallel:
-            # To get the KVM CPUs to run on different host CPUs
-            # Specify a different event queue for each CPU
-            for i,cpu in enumerate(self.cpu):
-                for obj in cpu.descendants():
-                    obj.eventq_index = 0
-                cpu.eventq_index = i + 1
-
-    def getHostParallel(self):
-        return self._host_parallel
-
-    def totalInsts(self):
-        return sum([cpu.totalInsts() for cpu in self.cpu])
-
-    def createCPU(self, cpu_type, num_cpus):
-        if cpu_type == "atomic":
-            self.cpu = [AtomicSimpleCPU(cpu_id = i)
-                              for i in range(num_cpus)]
-            self.mem_mode = 'atomic'
-        elif cpu_type == "kvm":
-            # Note KVM needs a VM and atomic_noncaching
-            self.cpu = [X86KvmCPU(cpu_id = i)
-                        for i in range(num_cpus)]
-            self.kvm_vm = KvmVM()
-            self.mem_mode = 'atomic_noncaching'
-        elif cpu_type == "o3":
-            self.cpu = [DerivO3CPU(cpu_id = i)
-                        for i in range(num_cpus)]
-            self.mem_mode = 'timing'
-        elif cpu_type == "simple":
-            self.cpu = [TimingSimpleCPU(cpu_id = i)
-                        for i in range(num_cpus)]
-            self.mem_mode = 'timing'
-        else:
-            m5.fatal("No CPU type {}".format(cpu_type))
-
-        for cpu in self.cpu:
-            cpu.createThreads()
-            cpu.createInterruptController()
-
-    def setDiskImages(self, img_path_1, img_path_2):
-        disk0 = CowDisk(img_path_1)
-        disk2 = CowDisk(img_path_2)
-        self.pc.south_bridge.ide.disks = [disk0, disk2]
-
-    def createMemoryControllersDDR3(self):
-        self._createMemoryControllers(1, DDR3_1600_8x8)
-
-    def _createMemoryControllers(self, num, cls):
-        self.mem_cntrls = [
-            MemCtrl(dram = cls(range = self.mem_ranges[0]))
-            for i in range(num)
-        ]
-
-    def initFS(self, cpus):
-        self.pc = Pc()
-
-        self.workload = X86FsLinux()
-
-        # North Bridge
-        self.iobus = IOXBar()
-
-        # Connect the I/O bus
-        # Note: pass in a reference to where Ruby will connect to in the future
-        # so the port isn't connected twice.
-        self.pc.attachIO(self.iobus, [self.pc.south_bridge.ide.dma])
-
-        ###############################################
-
-        # Add in a Bios information structure.
-        self.workload.smbios_table.structures = [X86SMBiosBiosInformation()]
-
-        # Set up the Intel MP table
-        base_entries = []
-        ext_entries = []
-        for i in range(cpus):
-            bp = X86IntelMPProcessor(
-                    local_apic_id = i,
-                    local_apic_version = 0x14,
-                    enable = True,
-                    bootstrap = (i == 0))
-            base_entries.append(bp)
-        io_apic = X86IntelMPIOAPIC(
-                id = cpus,
-                version = 0x11,
-                enable = True,
-                address = 0xfec00000)
-        self.pc.south_bridge.io_apic.apic_id = io_apic.id
-        base_entries.append(io_apic)
-        pci_bus = X86IntelMPBus(bus_id = 0, bus_type='PCI   ')
-        base_entries.append(pci_bus)
-        isa_bus = X86IntelMPBus(bus_id = 1, bus_type='ISA   ')
-        base_entries.append(isa_bus)
-        connect_busses = X86IntelMPBusHierarchy(bus_id=1,
-                subtractive_decode=True, parent_bus=0)
-        ext_entries.append(connect_busses)
-        pci_dev4_inta = X86IntelMPIOIntAssignment(
-                interrupt_type = 'INT',
-                polarity = 'ConformPolarity',
-                trigger = 'ConformTrigger',
-                source_bus_id = 0,
-                source_bus_irq = 0 + (4 << 2),
-                dest_io_apic_id = io_apic.id,
-                dest_io_apic_intin = 16)
-        base_entries.append(pci_dev4_inta)
-        def assignISAInt(irq, apicPin):
-            assign_8259_to_apic = X86IntelMPIOIntAssignment(
-                    interrupt_type = 'ExtInt',
-                    polarity = 'ConformPolarity',
-                    trigger = 'ConformTrigger',
-                    source_bus_id = 1,
-                    source_bus_irq = irq,
-                    dest_io_apic_id = io_apic.id,
-                    dest_io_apic_intin = 0)
-            base_entries.append(assign_8259_to_apic)
-            assign_to_apic = X86IntelMPIOIntAssignment(
-                    interrupt_type = 'INT',
-                    polarity = 'ConformPolarity',
-                    trigger = 'ConformTrigger',
-                    source_bus_id = 1,
-                    source_bus_irq = irq,
-                    dest_io_apic_id = io_apic.id,
-                    dest_io_apic_intin = apicPin)
-            base_entries.append(assign_to_apic)
-        assignISAInt(0, 2)
-        assignISAInt(1, 1)
-        for i in range(3, 15):
-            assignISAInt(i, i)
-        self.workload.intel_mp_table.base_entries = base_entries
-        self.workload.intel_mp_table.ext_entries = ext_entries
-
-        entries = \
-           [
-            # Mark the first megabyte of memory as reserved
-            X86E820Entry(addr = 0, size = '639kB', range_type = 1),
-            X86E820Entry(addr = 0x9fc00, size = '385kB', range_type = 2),
-            # Mark the rest of physical memory as available
-            X86E820Entry(addr = 0x100000,
-                    size = '%dB' % (self.mem_ranges[0].size() - 0x100000),
-                    range_type = 1),
-            ]
-
-        # Reserve the last 64kB of the 32-bit address space for m5ops
-        entries.append(X86E820Entry(addr = 0xFFFF0000, size = '64kB',
-                                    range_type=2))
-
-        self.workload.e820_table.entries = entries
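The Ruby-based system deleted above no longer needs to be maintained by hand: the gem5 standard library ships a ready-made MESI_Two_Level cache hierarchy that does the same controller and network wiring internally. Below is a minimal sketch of that replacement; the class and module names follow the gem5 standard library of this era (v21.2), while the cache sizes, associativities, and bank count are illustrative values, not ones taken from the deleted config.

```python
# Sketch only: stdlib replacement for the hand-built Ruby MESI_Two_Level system.
# Sizes/associativities below are illustrative, not copied from the deleted file.
from gem5.utils.requires import requires
from gem5.isas import ISA
from gem5.coherence_protocol import CoherenceProtocol
from gem5.components.cachehierarchies.ruby.mesi_two_level_cache_hierarchy import (
    MESITwoLevelCacheHierarchy,
)

# gem5 must be compiled with the MESI_Two_Level Ruby protocol for this to run.
requires(
    isa_required=ISA.X86,
    coherence_protocol_required=CoherenceProtocol.MESI_TWO_LEVEL,
)

# One object replaces the manual controller/crossbar setup done by the
# deleted system configuration above.
cache_hierarchy = MESITwoLevelCacheHierarchy(
    l1d_size="32kB",
    l1d_assoc=8,
    l1i_size="32kB",
    l1i_assoc=8,
    l2_size="256kB",
    l2_assoc=16,
    num_l2_banks=1,
)
```

The resulting `cache_hierarchy` is then handed to an `X86Board`, as sketched after the next deleted file.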
diff --git a/src/parsec/configs/system/system.py b/src/parsec/configs/system/system.py
deleted file mode 100644
index 09030c2..0000000
--- a/src/parsec/configs/system/system.py
+++ /dev/null
@@ -1,331 +0,0 @@
-# Copyright (c) 2021 The Regents of the University of California
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-import m5
-from m5.objects import *
-from .fs_tools import *
-from .caches import *
-
-class MySystem(System):
-
-    def __init__(self, kernel, disk, cpu_type, num_cpus, no_kvm = False):
-        super(MySystem, self).__init__()
-
-        self._no_kvm = no_kvm
-        self._host_parallel = cpu_type == "kvm"
-
-        # Set up the clock domain and the voltage domain
-        self.clk_domain = SrcClockDomain()
-        self.clk_domain.clock = '3GHz'
-        self.clk_domain.voltage_domain = VoltageDomain()
-
-        self.mem_ranges = [AddrRange(Addr('3GB')), # All data
-                           AddrRange(0xC0000000, size=0x100000), # For I/O
-                           ]
-
-        # Create the main memory bus
-        # This connects to main memory
-        self.membus = SystemXBar(width = 64) # 64-byte width
-        self.membus.badaddr_responder = BadAddr()
-        self.membus.default = Self.badaddr_responder.pio
-
-        # Set up the system port for functional access from the simulator
-        self.system_port = self.membus.cpu_side_ports
-
-        self.initFS(self.membus, num_cpus)
-
-        # Replace these paths with the path to your disk images.
-        # The first disk is the root disk. The second could be used for swap
-        # or anything else.
-        self.setDiskImages(disk, disk)
-
-        # Change this path to point to the kernel you want to use
-        self.workload.object_file = kernel
-        # Options specified on the kernel command line
-        boot_options = ['earlyprintk=ttyS0', 'console=ttyS0', 'lpj=7999923',
-                         'root=/dev/hda1']
-
-        self.workload.command_line = ' '.join(boot_options)
-
-        # Create the CPUs for our system.
-        self.createCPU(cpu_type, num_cpus)
-
-        # Create the cache hierarchy for the system.
-        self.createCacheHierarchy()
-
-        # Set up the interrupt controllers for the system (x86 specific)
-        self.setupInterrupts()
-
-        self.createMemoryControllersDDR3()
-
-        if self._host_parallel:
-            # To get the KVM CPUs to run on different host CPUs
-            # Specify a different event queue for each CPU
-            for i,cpu in enumerate(self.cpu):
-                for obj in cpu.descendants():
-                    obj.eventq_index = 0
-                cpu.eventq_index = i + 1
-
-    def getHostParallel(self):
-        return self._host_parallel
-
-    def totalInsts(self):
-        return sum([cpu.totalInsts() for cpu in self.cpu])
-
-    def createCPU(self, cpu_type, num_cpus):
-        # set up a kvm core or an atomic core to boot
-        if self._no_kvm:
-            self.cpu = [AtomicSimpleCPU(cpu_id = i, switched_out = False)
-                              for i in range(num_cpus)]
-            self.mem_mode = 'atomic'
-        else:
-            # Note KVM needs a VM and atomic_noncaching
-            self.cpu = [X86KvmCPU(cpu_id = i, switched_out = False)
-                        for i in range(num_cpus)]
-            self.kvm_vm = KvmVM()
-            self.mem_mode = 'atomic_noncaching'
-
-        for cpu in self.cpu:
-            cpu.createThreads()
-
-        # set up the detailed cpu or a kvm model with more cores
-        if cpu_type == "atomic":
-            self.detailedCpu = [AtomicSimpleCPU(cpu_id = i, switched_out = True)
-                                 for i in range(num_cpus)]
-        elif cpu_type == "kvm":
-            # Note KVM needs a VM and atomic_noncaching
-            self.detailedCpu = [X86KvmCPU(cpu_id = i, switched_out = True)
-                                 for i in range(num_cpus)]
-            self.kvm_vm = KvmVM()
-        elif cpu_type == "o3":
-            self.detailedCpu = [DerivO3CPU(cpu_id = i, switched_out = True)
-                                 for i in range(num_cpus)]
-        elif cpu_type == "simple" or cpu_type == "timing":
-            self.detailedCpu = [TimingSimpleCPU(cpu_id = i, switched_out = True)
-                                 for i in range(num_cpus)]
-        else:
-            m5.fatal("No CPU type {}".format(cpu_type))
-
-        for cpu in self.detailedCpu:
-            cpu.createThreads()
-
-    def switchCpus(self, old, new):
-        assert(new[0].switchedOut())
-        m5.switchCpus(self, list(zip(old, new)))
-
-    def setDiskImages(self, img_path_1, img_path_2):
-        disk0 = CowDisk(img_path_1)
-        disk2 = CowDisk(img_path_2)
-        self.pc.south_bridge.ide.disks = [disk0, disk2]
-
-    def createCacheHierarchy(self):
-        for cpu in self.cpu:
-            # Create a memory bus, a coherent crossbar, in this case
-            cpu.l2bus = L2XBar()
-
-            # Create an L1 instruction and data cache
-            cpu.icache = L1ICache()
-            cpu.dcache = L1DCache()
-            cpu.mmucache = MMUCache()
-
-            # Connect the instruction and data caches to the CPU
-            cpu.icache.connectCPU(cpu)
-            cpu.dcache.connectCPU(cpu)
-            cpu.mmucache.connectCPU(cpu)
-
-            # Hook the CPU ports up to the l2bus
-            cpu.icache.connectBus(cpu.l2bus)
-            cpu.dcache.connectBus(cpu.l2bus)
-            cpu.mmucache.connectBus(cpu.l2bus)
-
-            # Create an L2 cache and connect it to the l2bus
-            cpu.l2cache = L2Cache()
-            cpu.l2cache.connectCPUSideBus(cpu.l2bus)
-
-            # Connect the L2 cache to the memory bus
-            cpu.l2cache.connectMemSideBus(self.membus)
-
-    def setupInterrupts(self):
-        for cpu in self.cpu:
-            # create the interrupt controller CPU and connect to the membus
-            cpu.createInterruptController()
-
-            # For x86 only, connect interrupts to the memory
-            # Note: these are directly connected to the memory bus and
-            #       not cached
-            cpu.interrupts[0].pio = self.membus.mem_side_ports
-            cpu.interrupts[0].int_requestor = self.membus.cpu_side_ports
-            cpu.interrupts[0].int_responder = self.membus.mem_side_ports
-
-
-    def createMemoryControllersDDR3(self):
-        self._createMemoryControllers(1, DDR3_1600_8x8)
-
-    def _createMemoryControllers(self, num, cls):
-        self.mem_cntrls = [
-            MemCtrl(dram = cls(range = self.mem_ranges[0]),
-                    port = self.membus.mem_side_ports)
-            for i in range(num)
-        ]
-
-    def initFS(self, membus, cpus):
-        self.pc = Pc()
-
-        self.workload = X86FsLinux()
-
-        # Constants similar to x86_traits.hh
-        IO_address_space_base = 0x8000000000000000
-        pci_config_address_space_base = 0xc000000000000000
-        interrupts_address_space_base = 0xa000000000000000
-        APIC_range_size = 1 << 12
-
-        # North Bridge
-        self.iobus = IOXBar()
-        self.bridge = Bridge(delay='50ns')
-        self.bridge.mem_side_port = self.iobus.cpu_side_ports
-        self.bridge.cpu_side_port = membus.mem_side_ports
-        # Allow the bridge to pass through:
-        #  1) kernel configured PCI device memory map address: address range
-        #  [0xC0000000, 0xFFFF0000). (The upper 64kB are reserved for m5ops.)
-        #  2) the bridge to pass through the IO APIC (two pages, already
-        #     contained in 1),
-        #  3) everything in the IO address range up to the local APIC, and
-        #  4) then the entire PCI address space and beyond.
-        self.bridge.ranges = \
-            [
-            AddrRange(0xC0000000, 0xFFFF0000),
-            AddrRange(IO_address_space_base,
-                      interrupts_address_space_base - 1),
-            AddrRange(pci_config_address_space_base,
-                      Addr.max)
-            ]
-
-        # Create a bridge from the IO bus to the memory bus to allow access
-        # to the local APIC (two pages)
-        self.apicbridge = Bridge(delay='50ns')
-        self.apicbridge.cpu_side_port = self.iobus.mem_side_ports
-        self.apicbridge.mem_side_port = membus.cpu_side_ports
-        self.apicbridge.ranges = [AddrRange(interrupts_address_space_base,
-                                            interrupts_address_space_base +
-                                            cpus * APIC_range_size
-                                            - 1)]
-
-        # connect the io bus
-        self.pc.attachIO(self.iobus)
-
-        # Add a tiny cache to the IO bus.
-        # This cache is required for the classic memory model for coherence
-        self.iocache = Cache(assoc=8,
-                            tag_latency = 50,
-                            data_latency = 50,
-                            response_latency = 50,
-                            mshrs = 20,
-                            size = '1kB',
-                            tgts_per_mshr = 12,
-                            addr_ranges = self.mem_ranges)
-        self.iocache.cpu_side = self.iobus.mem_side_ports
-        self.iocache.mem_side = self.membus.cpu_side_ports
-
-        ###############################################
-
-        # Add in a Bios information structure.
-        self.workload.smbios_table.structures = [X86SMBiosBiosInformation()]
-
-        # Set up the Intel MP table
-        base_entries = []
-        ext_entries = []
-        for i in range(cpus):
-            bp = X86IntelMPProcessor(
-                    local_apic_id = i,
-                    local_apic_version = 0x14,
-                    enable = True,
-                    bootstrap = (i ==0))
-            base_entries.append(bp)
-        io_apic = X86IntelMPIOAPIC(
-                id = cpus,
-                version = 0x11,
-                enable = True,
-                address = 0xfec00000)
-        self.pc.south_bridge.io_apic.apic_id = io_apic.id
-        base_entries.append(io_apic)
-        pci_bus = X86IntelMPBus(bus_id = 0, bus_type='PCI   ')
-        base_entries.append(pci_bus)
-        isa_bus = X86IntelMPBus(bus_id = 1, bus_type='ISA   ')
-        base_entries.append(isa_bus)
-        connect_busses = X86IntelMPBusHierarchy(bus_id=1,
-                subtractive_decode=True, parent_bus=0)
-        ext_entries.append(connect_busses)
-        pci_dev4_inta = X86IntelMPIOIntAssignment(
-                interrupt_type = 'INT',
-                polarity = 'ConformPolarity',
-                trigger = 'ConformTrigger',
-                source_bus_id = 0,
-                source_bus_irq = 0 + (4 << 2),
-                dest_io_apic_id = io_apic.id,
-                dest_io_apic_intin = 16)
-        base_entries.append(pci_dev4_inta)
-        def assignISAInt(irq, apicPin):
-            assign_8259_to_apic = X86IntelMPIOIntAssignment(
-                    interrupt_type = 'ExtInt',
-                    polarity = 'ConformPolarity',
-                    trigger = 'ConformTrigger',
-                    source_bus_id = 1,
-                    source_bus_irq = irq,
-                    dest_io_apic_id = io_apic.id,
-                    dest_io_apic_intin = 0)
-            base_entries.append(assign_8259_to_apic)
-            assign_to_apic = X86IntelMPIOIntAssignment(
-                    interrupt_type = 'INT',
-                    polarity = 'ConformPolarity',
-                    trigger = 'ConformTrigger',
-                    source_bus_id = 1,
-                    source_bus_irq = irq,
-                    dest_io_apic_id = io_apic.id,
-                    dest_io_apic_intin = apicPin)
-            base_entries.append(assign_to_apic)
-        assignISAInt(0, 2)
-        assignISAInt(1, 1)
-        for i in range(3, 15):
-            assignISAInt(i, i)
-        self.workload.intel_mp_table.base_entries = base_entries
-        self.workload.intel_mp_table.ext_entries = ext_entries
-
-        entries = \
-           [
-            # Mark the first megabyte of memory as reserved
-            X86E820Entry(addr = 0, size = '639kB', range_type = 1),
-            X86E820Entry(addr = 0x9fc00, size = '385kB', range_type = 2),
-            # Mark the rest of physical memory as available
-            X86E820Entry(addr = 0x100000,
-                    size = '%dB' % (self.mem_ranges[0].size() - 0x100000),
-                    range_type = 1),
-            ]
-
-        # Reserve the last 64kB of the 32-bit address space for m5ops
-        entries.append(X86E820Entry(addr = 0xFFFF0000, size = '64kB',
-                                    range_type=2))
-
-        self.workload.e820_table.entries = entries
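The classic-memory-system configuration deleted above is superseded in the same way: in the standard library the board owns the memory map, IO and interrupt wiring, and the Intel MP and E820 tables that `initFS()` assembled manually. A minimal sketch of a stdlib run script follows; it assumes the pre-built `x86-parsec` disk image and `x86-linux-kernel-5.4.49` kernel resources, and the benchmark command passed to the guest is a placeholder, so treat the README's example script as authoritative.

```python
# Sketch only: a stdlib full-system PARSEC run replacing the deleted configs.
# Resource names and the guest command below are assumptions for illustration.
from gem5.components.boards.x86_board import X86Board
from gem5.components.memory.single_channel import SingleChannelDDR3_1600
from gem5.components.processors.simple_switchable_processor import (
    SimpleSwitchableProcessor,
)
from gem5.components.processors.cpu_types import CPUTypes
from gem5.components.cachehierarchies.classic.private_l1_private_l2_cache_hierarchy import (
    PrivateL1PrivateL2CacheHierarchy,
)
from gem5.resources.resource import Resource
from gem5.simulate.simulator import Simulator

# Boot under KVM, then switch to a timing CPU for the region of interest,
# mirroring the kvm/detailed-CPU split in the deleted MySystem class.
processor = SimpleSwitchableProcessor(
    starting_core_type=CPUTypes.KVM,
    switch_core_type=CPUTypes.TIMING,
    num_cores=2,
)

board = X86Board(
    clk_freq="3GHz",
    processor=processor,
    memory=SingleChannelDDR3_1600(size="3GB"),
    cache_hierarchy=PrivateL1PrivateL2CacheHierarchy(
        l1d_size="32kB", l1i_size="32kB", l2_size="256kB"
    ),
)

# The readfile contents run inside the guest after boot; "blackscholes" and
# "simsmall" stand in for the benchmark/size arguments described in the README.
board.set_kernel_disk_workload(
    kernel=Resource("x86-linux-kernel-5.4.49"),
    disk_image=Resource("x86-parsec"),
    readfile_contents=(
        "cd /home/gem5/parsec-benchmark; source env.sh; "
        "parsecmgmt -a run -p blackscholes -c gcc-hooks -i simsmall -n 2; "
        "m5 exit;"
    ),
)

Simulator(board=board).run()
```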