resources: Add instructions to run NPB with the gem5 stdlib.

This change updates the README.md for NPB, providing instructions
on how to use the gem5 stdlib to simulate NPB. It also removes
the contents of the gem5-resources/src/npb/configs directory.

Signed-off-by: Kaustav Goswami <kggoswami@ucdavis.edu>
Change-Id: Icb60e79b9f786455b7944e8e5b53427fdef892b4
Reviewed-on: https://gem5-review.googlesource.com/c/public/gem5-resources/+/53443
Reviewed-by: Bobby Bruce <bbruce@ucdavis.edu>
Maintainer: Bobby Bruce <bbruce@ucdavis.edu>
Tested-by: Bobby Bruce <bbruce@ucdavis.edu>
diff --git a/src/npb/README.md b/src/npb/README.md
index a4a71c2..269ffd3 100644
--- a/src/npb/README.md
+++ b/src/npb/README.md
@@ -5,12 +5,13 @@
     - fullsystem
 permalink: resources/npb
 shortdoc: >
-    Disk images and gem5 configurations to run the [NAS parallel benchmarks](https://www.nas.nasa.gov/).
+    Disk image and a gem5 configuration script to run the [NAS parallel benchmarks](https://www.nas.nasa.gov/).
 author: ["Ayaz Akram"]
 license: BSD-3-Clause
 ---
 
-This document provides instructions to create a disk image needed to run the NPB tests with gem5 and points to the gem5 configuration files needed to run these tests.
+This document provides instructions to create the disk image needed to run the NPB tests with gem5 and points to an example gem5 configuration script that runs these tests. The example script uses a pre-built disk image.
+
 The NAS parallel benchmarks ([NPB](https://www.nas.nasa.gov/)) are high performance computing (HPC) workloads consisting of different kernels and pseudo applications:
 
 Kernels:
@@ -47,10 +48,6 @@
   |            |___ npb-install.sh         # Compiles NPB inside the generated disk image
   |            |___ npb-hooks              # The NPB source (modified to function better with gem5).
   |
-  |___ configs
-  |      |___ system                       # gem5 system config files
-  |      |___ run_npb.py                   # gem5 run script to run NPB tests
-  |
   |___ linux                               # Linux source and binary will live here
   |
   |___ README.md                           # This README file
@@ -83,32 +80,49 @@
 Once this process succeeds, the created disk image can be found on `npb/npb-image/npb`.
 A disk image already created following the above instructions can be found, gzipped, [here](http://dist.gem5.org/dist/v21-1/images/x86/ubuntu-18-04/npb.img.gz).
 
-## gem5 Run Scripts
+## Simulating NPB using an example script
 
-The gem5 scripts which configure the system and run simulation are available in configs-npb-tests/.
-The main script `run_npb.py` expects following arguments:
+An example script with a pre-configured system is available at the following path within the gem5 repository:
 
-**kernel:** path to the Linux kernel. This disk image has been tested with version 4.19.83, available at <http://dist.gem5.org/dist/v21-1/kernels/x86/static/vmlinux-4.19.83>. More info on building Linux Kernels can be found in the `src/linux-kernels` directory.
+```
+gem5/configs/example/gem5_library/x86-npb-benchmarks.py
+```
 
-**disk:** path to the npb disk image.
+The example script specifies a system with the following parameters:
 
-**cpu:** CPU model (`kvm`, `atomic`, `timing`).
+* A `SimpleSwitchableProcessor` (`KVM` for startup and `TIMING` for ROI execution) with 2 CPU cores, each clocked at 3 GHz.
+* A two-level `MESI_Two_Level` cache hierarchy with 32 kB L1 instruction and data caches and a 256 kB L2 cache. The L1 caches are 8-way set associative, and the L2 cache is 16-way set associative, split into 2 banks.
+* 3 GB of `SingleChannelDDR4_2400` memory.
+* The `x86-linux-kernel-4.19.83` kernel and the `x86-npb` disk image, which is created by following the instructions in this `README.md`.
 
-**mem_sys:** memory system (`classic`, `MI_example`, `MESI_Two_Level`, or `MOESI_CMP_directory`).
-
-**benchmark:** NPB benchmark to execute (`bt.A.x`, `cg.A.x`, `ep.A.x`, `ft.A.x`, `is.A.x`, `lu.A.x`, `mg.A.x`,  `sp.A.x`).
-
-**Note:**
-We have only tested class `A` of the NPB suite, though `A`,`B`,`C` and `D` of NPB are available in the disk image
-For example, for build class `F` of the `bt` benchmark `bt.F.x` can be specified (replacinv `A` with `F` from above).
-
-**num_cpus:** number of CPU cores.
-
-An example of how to use these scripts:
+The example script must be run with a gem5 binary compiled with the `MESI_Two_Level` coherence protocol, which is included in the default `X86` build. To build it:
 
 ```sh
-gem5/build/X86/gem5.opt configs/run_npb.py <kernel> <disk> <cpu> <mem_sys> <benchmark> <num_cpus>
+git clone https://gem5.googlesource.com/public/gem5
+cd gem5
+scons build/X86/gem5.opt -j<proc>
 ```
+Once compiled, you may use the example configuration script to run the NPB benchmark programs. Specify the benchmark program (`bt`, `cg`, `ep`, `ft`, `is`, `lu`, `mg`, `sp`) and the class (`A`, `B`, `C`) separately, using the following command:
+
+```sh
+# In the gem5 directory
+build/X86/gem5.opt \
+configs/example/gem5_library/x86-npb-benchmarks.py \
+--benchmark <benchmark_program> \
+--size <class_of_the_benchmark>
+```
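+
+For example, to simulate the `bt` benchmark with class `A` (assuming the `X86` build above succeeded and gem5 can fetch the kernel and disk-image resources), one could run:
+
+```sh
+# In the gem5 directory
+build/X86/gem5.opt \
+configs/example/gem5_library/x86-npb-benchmarks.py \
+--benchmark bt \
+--size A
+```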
+
+The two arguments in the above command are:
+
+* **--benchmark**: one of the 8 benchmark programs provided in the NAS parallel benchmark suite: `bt`, `cg`, `ep`, `ft`, `is`, `lu`, `mg` and `sp`. More information on these workloads can be found at <https://www.nas.nasa.gov/>.
+* **--size**: the workload class to simulate. The classes present in the pre-built disk image are `A`, `B`, `C` and `D`. More information regarding these classes is given in the notes below.
+
+A few important notes to keep in mind while simulating NPB using the disk image from gem5 resources:
+
+* The pre-built disk image has NPB executables for classes `A`, `B`, `C` and `D`.
+* Classes `D` and `F` require more than 3 GB of main memory. Therefore, most of the class `D` benchmark programs fail to execute properly, as the simulated system has only 3 GB of main memory; the `X86Board` from the gem5 stdlib is currently limited to 3 GB of memory.
+* Only the `ep` benchmark with class `D` works in the aforementioned configuration.
+* The configuration script `x86-npb-benchmarks.py` accepts a class of `A`, `B` or `C`.
+* More information on the memory footprint of NPB is available in the paper by [Akram et al.](https://arxiv.org/abs/2010.13216).
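+
+Putting the notes above together, a sweep over all 8 benchmark programs with class `A` can be scripted as follows (a sketch, assuming the `X86` gem5 binary built as described above; each run writes its own `m5out` unless redirected with `--outdir`):
+
+```sh
+# In the gem5 directory: run every NPB benchmark program with class A,
+# giving each run a separate output directory.
+for bench in bt cg ep ft is lu mg sp; do
+    build/X86/gem5.opt --outdir="m5out-${bench}-A" \
+        configs/example/gem5_library/x86-npb-benchmarks.py \
+        --benchmark "$bench" --size A
+done
+```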
 
 ## Working Status
 
diff --git a/src/npb/configs/run_npb.py b/src/npb/configs/run_npb.py
deleted file mode 100755
index 7aa3dca..0000000
--- a/src/npb/configs/run_npb.py
+++ /dev/null
@@ -1,215 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2019 The Regents of the University of California.
-# All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# Authors: Jason Lowe-Power, Ayaz Akram
-
-""" Script to run NAS parallel benchmarks with gem5.
-    The script expects kernel, diskimage, mem_sys,
-    cpu (kvm, atomic, or timing), benchmark to run
-    and number of cpus as arguments.
-
-    If your application has ROI annotations, this script will count the total
-    number of instructions executed in the ROI. It also tracks how much
-    wallclock and simulated time.
-"""
-import argparse
-import time
-import m5
-import m5.ticks
-from m5.objects import *
-
-from system import *
-
-def writeBenchScript(dir, bench):
-    """
-    This method creates a script in dir which will be eventually
-    passed to the simulated system (to run a specific benchmark
-    at bootup).
-    """
-    file_name = '{}/run_{}'.format(dir, bench)
-    bench_file = open(file_name,"w+")
-    bench_file.write('/home/gem5/NPB3.3-OMP/bin/{} \n'.format(bench))
-
-    # sleeping for sometime (5 seconds here) makes sure
-    # that the benchmark's output has been
-    # printed to the console
-    bench_file.write('sleep 5 \n')
-    bench_file.write('m5 exit \n')
-    bench_file.close()
-    return file_name
-
-supported_protocols = ["classic", "MI_example", "MESI_Two_Level",
-                        "MOESI_CMP_directory"]
-supported_cpu_types = ['kvm', 'atomic', 'timing']
-benchmark_choices = ['bt.A.x', 'cg.A.x', 'ep.A.x', 'ft.A.x',
-                     'is.A.x', 'lu.A.x', 'mg.A.x', 'sp.A.x',
-                     'bt.B.x', 'cg.B.x', 'ep.B.x', 'ft.B.x',
-                     'is.B.x', 'lu.B.x', 'mg.B.x', 'sp.B.x',
-                     'bt.C.x', 'cg.C.x', 'ep.C.x', 'ft.C.x',
-                     'is.C.x', 'lu.C.x', 'mg.C.x', 'sp.C.x',
-                     'bt.D.x', 'cg.D.x', 'ep.D.x', 'ft.D.x',
-                     'is.D.x', 'lu.D.x', 'mg.D.x', 'sp.D.x',
-                     'bt.F.x', 'cg.F.x', 'ep.F.x', 'ft.F.x',
-                     'is.F.x', 'lu.F.x', 'mg.F.x', 'sp.F.x']
-
-def parse_options():
-
-    parser = argparse.ArgumentParser(description='For use with gem5. This '
-                'runs a NAS Parallel Benchmark application. This only works '
-                'with x86 ISA.')
-
-    # The manditry position arguments.
-    parser.add_argument("kernel", type=str,
-                        help="Path to the kernel binary to boot")
-    parser.add_argument("disk", type=str,
-                        help="Path to the disk image to boot")
-    parser.add_argument("cpu", type=str, choices=supported_cpu_types,
-                        help="The type of CPU to use in the system")
-    parser.add_argument("mem_sys", type=str, choices=supported_protocols,
-                        help="Type of memory system or coherence protocol")
-    parser.add_argument("benchmark", type=str, choices=benchmark_choices,
-                        help="The NPB application to run")
-    parser.add_argument("num_cpus", type=int, help="Number of CPU cores")
-
-    # The optional arguments.
-    parser.add_argument("--no_host_parallel", action="store_true",
-                        help="Do NOT run gem5 on multiple host threads "
-                              "(kvm only)")
-    parser.add_argument("--second_disk", type=str,
-                        help="The second disk image to mount (/dev/hdb)")
-    parser.add_argument("--no_prefetchers", action="store_true",
-                        help="Enable prefectchers on the caches")
-    parser.add_argument("--l1i_size", type=str, default='32kB',
-                        help="L1 instruction cache size. Default: 32kB")
-    parser.add_argument("--l1d_size", type=str, default='32kB',
-                        help="L1 data cache size. Default: 32kB")
-    parser.add_argument("--l2_size", type=str, default = "256kB",
-                        help="L2 cache size. Default: 256kB")
-    parser.add_argument("--l3_size", type=str, default = "4MB",
-                        help="L2 cache size. Default: 4MB")
-
-    return parser.parse_args()
-
-if __name__ == "__m5_main__":
-    args = parse_options()
-
-
-    # create the system we are going to simulate
-    system = MySystem(args.kernel, args.disk, args.num_cpus, args,
-                      no_kvm=False)
-
-
-    if args.mem_sys == "classic":
-        system = MySystem(args.kernel, args.disk, args.num_cpus, args,
-                          no_kvm=False)
-    else:
-        system = MyRubySystem(args.kernel, args.disk, args.mem_sys,
-                              args.num_cpus, args)
-
-    # Exit from guest on workbegin/workend
-    system.exit_on_work_items = True
-
-    # Create and pass a script to the simulated system to run the reuired
-    # benchmark
-    system.readfile = writeBenchScript(m5.options.outdir, args.benchmark)
-
-    # set up the root SimObject and start the simulation
-    root = Root(full_system = True, system = system)
-
-    if system.getHostParallel():
-        # Required for running kvm on multiple host cores.
-        # Uses gem5's parallel event queue feature
-        # Note: The simulator is quite picky about this number!
-        root.sim_quantum = int(1e9) # 1 ms
-
-    #needed for long running jobs
-    m5.disableAllListeners()
-
-    # instantiate all of the objects we've created above
-    m5.instantiate()
-
-    globalStart = time.time()
-
-    print("Running the simulation")
-    print("Using cpu: {}".format(args.cpu))
-    exit_event = m5.simulate()
-
-    if exit_event.getCause() == "workbegin":
-        print("Done booting Linux")
-        # Reached the start of ROI
-        # start of ROI is marked by an
-        # m5_work_begin() call
-        print("Resetting stats at the start of ROI!")
-        m5.stats.reset()
-        start_tick = m5.curTick()
-        start_insts = system.totalInsts()
-        # switching cpu if argument cpu == atomic or timing
-        if args.cpu == 'atomic':
-            system.switchCpus(system.cpu, system.atomicCpu)
-        if args.cpu == 'timing':
-            system.switchCpus(system.cpu, system.timingCpu)
-    else:
-        print("Unexpected termination of simulation !")
-        exit()
-
-    # Simulate the ROI
-    exit_event = m5.simulate()
-
-    # Reached the end of ROI
-    # Finish executing the benchmark
-
-    print("Dump stats at the end of the ROI!")
-    m5.stats.dump()
-    end_tick = m5.curTick()
-    end_insts = system.totalInsts()
-    m5.stats.reset()
-
-    # Switching back to KVM does not work
-    # with Ruby mem protocols, so not
-    # switching back to simulate the remaining
-    # part
-
-    if args.mem_sys == 'classic':
-        # switch cpu back to kvm if atomic/timing was used for ROI
-        if args.cpu == 'atomic':
-            system.switchCpus(system.atomicCpu, system.cpu)
-        if args.cpu == 'timing':
-            system.switchCpus(system.timingCpu, system.cpu)
-
-        # Simulate the remaning part of the benchmark
-        exit_event = m5.simulate()
-    else:
-        print("Ruby Mem: Not Switching back to KVM!")
-
-    print("Done with the simulation")
-    print()
-    print("Performance statistics:")
-
-    print("Simulated time in ROI: %.2fs" % ((end_tick-start_tick)/1e12))
-    print("Instructions executed in ROI: %d" % ((end_insts-start_insts)))
-    print("Ran a total of", m5.curTick()/1e12, "simulated seconds")
-    print("Total wallclock time: %.2fs, %.2f min" % \
-                (time.time()-globalStart, (time.time()-globalStart)/60))
diff --git a/src/npb/configs/system/MESI_Two_Level.py b/src/npb/configs/system/MESI_Two_Level.py
deleted file mode 100755
index 6fd9b4c..0000000
--- a/src/npb/configs/system/MESI_Two_Level.py
+++ /dev/null
@@ -1,336 +0,0 @@
-#Copyright (c) 2020 The Regents of the University of California.
-#All Rights Reserved
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-
-""" This file creates a set of Ruby caches for the MESI TWO Level protocol
-This protocol models two level cache hierarchy. The L1 cache is split into
-instruction and data cache.
-
-This system support the memory size of up to 3GB.
-
-"""
-
-import math
-
-from m5.defines import buildEnv
-from m5.util import fatal, panic
-
-from m5.objects import *
-
-class MESITwoLevelCache(RubySystem):
-
-    def __init__(self):
-        if buildEnv['PROTOCOL'] != 'MESI_Two_Level':
-            fatal("This system assumes MESI_Two_Level!")
-
-        super(MESITwoLevelCache, self).__init__()
-
-        self._numL2Caches = 8
-
-    def setup(self, system, cpus, mem_ctrls, dma_ports, iobus):
-        """Set up the Ruby cache subsystem. Note: This can't be done in the
-           constructor because many of these items require a pointer to the
-           ruby system (self). This causes infinite recursion in initialize()
-           if we do this in the __init__.
-        """
-        # Ruby's global network.
-        self.network = MyNetwork(self)
-
-        # MESI_Two_Level example uses 5 virtual networks
-        self.number_of_virtual_networks = 5
-        self.network.number_of_virtual_networks = 5
-
-        # There is a single global list of all of the controllers to make it
-        # easier to connect everything to the global network. This can be
-        # customized depending on the topology/network requirements.
-        # L1 caches are private to a core, hence there are one L1 cache per CPU
-        # core. The number of L2 caches are dependent to the architecture.
-        self.controllers = \
-            [L1Cache(system, self, cpu, self._numL2Caches) for cpu in cpus] + \
-            [L2Cache(system, self, self._numL2Caches) for num in \
-            range(self._numL2Caches)] + \
-            [DirController(self, system.mem_ranges, mem_ctrls)] + \
-            [DMAController(self) for i in range(len(dma_ports))]
-
-        # Create one sequencer per CPU and dma controller.
-        # Sequencers for other controllers can be here here.
-        self.sequencers = [RubySequencer(version = i,
-                                # Grab dcache from ctrl
-                                dcache = self.controllers[i].L1Dcache,
-                                clk_domain = self.controllers[i].clk_domain,
-                                pio_request_port = iobus.cpu_side_ports,
-                                mem_request_port = iobus.cpu_side_ports,
-                                pio_response_port = iobus.mem_side_ports
-                                ) for i in range(len(cpus))] + \
-                          [DMASequencer(version = i,
-                                        in_ports = port)
-                            for i,port in enumerate(dma_ports)
-                          ]
-
-        for i,c in enumerate(self.controllers[:len(cpus)]):
-            c.sequencer = self.sequencers[i]
-
-        #Connecting the DMA sequencer to DMA controller
-        for i,d in enumerate(self.controllers[-len(dma_ports):]):
-            i += len(cpus)
-            d.dma_sequencer = self.sequencers[i]
-
-        self.num_of_sequencers = len(self.sequencers)
-
-        # Create the network and connect the controllers.
-        # NOTE: This is quite different if using Garnet!
-        self.network.connectControllers(self.controllers)
-        self.network.setup_buffers()
-
-        # Set up a proxy port for the system_port. Used for load binaries and
-        # other functional-only things.
-        self.sys_port_proxy = RubyPortProxy()
-        system.system_port = self.sys_port_proxy.in_ports
-        self.sys_port_proxy.pio_request_port = iobus.cpu_side_ports
-
-        # Connect the cpu's cache, interrupt, and TLB ports to Ruby
-        for i,cpu in enumerate(cpus):
-            cpu.icache_port = self.sequencers[i].in_ports
-            cpu.dcache_port = self.sequencers[i].in_ports
-            cpu.createInterruptController()
-            isa = buildEnv['TARGET_ISA']
-            if isa == 'x86':
-                cpu.interrupts[0].pio = self.sequencers[i].interrupt_out_port
-                cpu.interrupts[0].int_requestor = self.sequencers[i].in_ports
-                cpu.interrupts[0].int_responder = self.sequencers[i].interrupt_out_port
-            if isa == 'x86' or isa == 'arm':
-                cpu.mmu.connectWalkerPorts(
-                    self.sequencers[i].in_ports, self.sequencers[i].in_ports)
-class L1Cache(L1Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, cpu, num_l2Caches):
-        """Creating L1 cache controller. Consist of both instruction
-           and data cache. The size of data cache is 512KB and
-           8-way set associative. The instruction cache is 32KB,
-           2-way set associative.
-        """
-        super(L1Cache, self).__init__()
-
-        self.version = self.versionCount()
-        block_size_bits = int(math.log(system.cache_line_size, 2))
-        l1i_size = '32kB'
-        l1i_assoc = '2'
-        l1d_size = '512kB'
-        l1d_assoc = '8'
-        # This is the cache memory object that stores the cache data and tags
-        self.L1Icache = RubyCache(size = l1i_size,
-                                assoc = l1i_assoc,
-                                start_index_bit = block_size_bits ,
-                                is_icache = True)
-        self.L1Dcache = RubyCache(size = l1d_size,
-                            assoc = l1d_assoc,
-                            start_index_bit = block_size_bits,
-                            is_icache = False)
-        self.l2_select_num_bits = int(math.log(num_l2Caches , 2))
-        self.clk_domain = cpu.clk_domain
-        self.prefetcher = RubyPrefetcher()
-        self.send_evictions = self.sendEvicts(cpu)
-        self.transitions_per_cycle = 4
-        self.enable_prefetch = False
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getBlockSizeBits(self, system):
-        bits = int(math.log(system.cache_line_size, 2))
-        if 2**bits != system.cache_line_size.value:
-            panic("Cache line size not a power of 2!")
-        return bits
-
-    def sendEvicts(self, cpu):
-        """True if the CPU model or ISA requires sending evictions from caches
-           to the CPU. Two scenarios warrant forwarding evictions to the CPU:
-           1. The O3 model must keep the LSQ coherent with the caches
-           2. The x86 mwait instruction is built on top of coherence
-           3. The local exclusive monitor in ARM systems
-        """
-        if type(cpu) is DerivO3CPU or \
-           buildEnv['TARGET_ISA'] in ('x86', 'arm'):
-            return True
-        return False
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.mandatoryQueue = MessageBuffer()
-        self.requestFromL1Cache = MessageBuffer()
-        self.requestFromL1Cache.out_port = ruby_system.network.in_port
-        self.responseFromL1Cache = MessageBuffer()
-        self.responseFromL1Cache.out_port = ruby_system.network.in_port
-        self.unblockFromL1Cache = MessageBuffer()
-        self.unblockFromL1Cache.out_port = ruby_system.network.in_port
-
-        self.optionalQueue = MessageBuffer()
-
-        self.requestToL1Cache = MessageBuffer()
-        self.requestToL1Cache.in_port = ruby_system.network.out_port
-        self.responseToL1Cache = MessageBuffer()
-        self.responseToL1Cache.in_port = ruby_system.network.out_port
-
-class L2Cache(L2Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, num_l2Caches):
-
-        super(L2Cache, self).__init__()
-
-        self.version = self.versionCount()
-        # This is the cache memory object that stores the cache data and tags
-        self.L2cache = RubyCache(size = '1 MB',
-                                assoc = 16,
-                                start_index_bit = self.getBlockSizeBits(system,
-                                num_l2Caches))
-
-        self.transitions_per_cycle = '4'
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getBlockSizeBits(self, system, num_l2caches):
-        l2_bits = int(math.log(num_l2caches, 2))
-        bits = int(math.log(system.cache_line_size, 2)) + l2_bits
-        return bits
-
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.DirRequestFromL2Cache = MessageBuffer()
-        self.DirRequestFromL2Cache.out_port = ruby_system.network.in_port
-        self.L1RequestFromL2Cache = MessageBuffer()
-        self.L1RequestFromL2Cache.out_port = ruby_system.network.in_port
-        self.responseFromL2Cache = MessageBuffer()
-        self.responseFromL2Cache.out_port = ruby_system.network.in_port
-        self.unblockToL2Cache = MessageBuffer()
-        self.unblockToL2Cache.in_port = ruby_system.network.out_port
-        self.L1RequestToL2Cache = MessageBuffer()
-        self.L1RequestToL2Cache.in_port = ruby_system.network.out_port
-        self.responseToL2Cache = MessageBuffer()
-        self.responseToL2Cache.in_port = ruby_system.network.out_port
-
-
-class DirController(Directory_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system, ranges, mem_ctrls):
-        """ranges are the memory ranges assigned to this controller.
-        """
-        if len(mem_ctrls) > 1:
-            panic("This cache system can only be connected to one mem ctrl")
-        super(DirController, self).__init__()
-        self.version = self.versionCount()
-        self.addr_ranges = ranges
-        self.ruby_system = ruby_system
-        self.directory = RubyDirectoryMemory()
-        # Connect this directory to the memory side.
-        self.memory_out_port = mem_ctrls[0].port
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.requestToDir = MessageBuffer()
-        self.requestToDir.in_port = ruby_system.network.out_port
-        self.responseToDir = MessageBuffer()
-        self.responseToDir.in_port = ruby_system.network.out_port
-        self.responseFromDir = MessageBuffer()
-        self.responseFromDir.out_port = ruby_system.network.in_port
-        self.requestToMemory = MessageBuffer()
-        self.responseFromMemory = MessageBuffer()
-
-class DMAController(DMA_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system):
-        super(DMAController, self).__init__()
-        self.version = self.versionCount()
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.mandatoryQueue = MessageBuffer()
-        self.responseFromDir = MessageBuffer(ordered = True)
-        self.responseFromDir.in_port = ruby_system.network.out_port
-        self.requestToDir = MessageBuffer()
-        self.requestToDir.out_port = ruby_system.network.in_port
-
-
-class MyNetwork(SimpleNetwork):
-    """A simple point-to-point network. This doesn't not use garnet.
-    """
-
-    def __init__(self, ruby_system):
-        super(MyNetwork, self).__init__()
-        self.netifs = []
-        self.ruby_system = ruby_system
-
-    def connectControllers(self, controllers):
-        """Connect all of the controllers to routers and connec the routers
-           together in a point-to-point network.
-        """
-        # Create one router/switch per controller in the system
-        self.routers = [Switch(router_id = i) for i in range(len(controllers))]
-
-        # Make a link from each controller to the router. The link goes
-        # externally to the network.
-        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,
-                                        int_node=self.routers[i])
-                          for i, c in enumerate(controllers)]
-
-        # Make an "internal" link (internal to the network) between every pair
-        # of routers.
-        link_count = 0
-        self.int_links = []
-        for ri in self.routers:
-            for rj in self.routers:
-                if ri == rj: continue # Don't connect a router to itself!
-                link_count += 1
-                self.int_links.append(SimpleIntLink(link_id = link_count,
-                                                    src_node = ri,
-                                                    dst_node = rj))
diff --git a/src/npb/configs/system/MI_example_caches.py b/src/npb/configs/system/MI_example_caches.py
deleted file mode 100755
index 3c7a71d..0000000
--- a/src/npb/configs/system/MI_example_caches.py
+++ /dev/null
@@ -1,275 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2015 Jason Power
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# Authors: Jason Power
-
-""" This file creates a set of Ruby caches, the Ruby network, and a simple
-point-to-point topology.
-See Part 3 in the Learning gem5 book: learning.gem5.org/book/part3
-You can change simple_ruby to import from this file instead of from msi_caches
-to use the MI_example protocol instead of MSI.
-
-IMPORTANT: If you modify this file, it's likely that the Learning gem5 book
-           also needs to be updated. For now, email Jason <jason@lowepower.com>
-
-"""
-
-import math
-
-from m5.defines import buildEnv
-from m5.util import fatal, panic
-
-from m5.objects import *
-
-class MIExampleSystem(RubySystem):
-
-    def __init__(self):
-        if buildEnv['PROTOCOL'] != 'MI_example':
-            fatal("This system assumes MI_example!")
-
-        super(MIExampleSystem, self).__init__()
-
-    def setup(self, system, cpus, mem_ctrls, dma_ports, iobus):
-        """Set up the Ruby cache subsystem. Note: This can't be done in the
-           constructor because many of these items require a pointer to the
-           ruby system (self). This causes infinite recursion in initialize()
-           if we do this in the __init__.
-        """
-        # Ruby's global network.
-        self.network = MyNetwork(self)
-
-        # MI example uses 5 virtual networks
-        self.number_of_virtual_networks = 5
-        self.network.number_of_virtual_networks = 5
-
-        # There is a single global list of all of the controllers to make it
-        # easier to connect everything to the global network. This can be
-        # customized depending on the topology/network requirements.
-        # Create one controller for each L1 cache (and the cache mem obj.)
-        # Create a single directory controller (really the memory controller)
-        self.controllers = \
-            [L1Cache(system, self, cpu) for cpu in cpus] + \
-            [DirController(self, system.mem_ranges, mem_ctrls)] + \
-            [DMAController(self) for i in range(len(dma_ports))]
-
-        # Create one sequencer per CPU. In many systems this is more
-        # complicated since you have to create sequencers for DMA controllers
-        # and other controllers, too.
-        self.sequencers = [RubySequencer(version = i,
-                                # Grab dcache from ctrl
-                                dcache = self.controllers[i].cacheMemory,
-                                clk_domain = self.controllers[i].clk_domain,
-                                pio_request_port = iobus.cpu_side_ports,
-                                mem_request_port = iobus.cpu_side_ports,
-                                pio_response_port = iobus.mem_side_ports
-                                ) for i in range(len(cpus))] + \
-                          [DMASequencer(version = i,
-                                        in_ports = port)
-                            for i,port in enumerate(dma_ports)
-                          ]
-
-        for i,c in enumerate(self.controllers[0:len(cpus)]):
-            c.sequencer = self.sequencers[i]
-
-        for i,d in enumerate(self.controllers[-len(dma_ports):]):
-            i += len(cpus)
-            d.dma_sequencer = self.sequencers[i]
-
-        self.num_of_sequencers = len(self.sequencers)
-
-        # Create the network and connect the controllers.
-        # NOTE: This is quite different if using Garnet!
-        self.network.connectControllers(self.controllers)
-        self.network.setup_buffers()
-
-        # Set up a proxy port for the system_port. Used for loading binaries
-        # and other functional-only accesses.
-        self.sys_port_proxy = RubyPortProxy()
-        system.system_port = self.sys_port_proxy.in_ports
-        self.sys_port_proxy.pio_request_port = iobus.cpu_side_ports
-
-        # Connect the cpu's cache, interrupt, and TLB ports to Ruby
-        for i,cpu in enumerate(cpus):
-            cpu.icache_port = self.sequencers[i].in_ports
-            cpu.dcache_port = self.sequencers[i].in_ports
-            cpu.createInterruptController()
-            isa = buildEnv['TARGET_ISA']
-            if isa == 'x86':
-                cpu.interrupts[0].pio = self.sequencers[i].interrupt_out_port
-                cpu.interrupts[0].int_requestor = self.sequencers[i].in_ports
-                cpu.interrupts[0].int_responder = self.sequencers[i].interrupt_out_port
-            if isa == 'x86' or isa == 'arm':
-                cpu.mmu.connectWalkerPorts(
-                    self.sequencers[i].in_ports, self.sequencers[i].in_ports)
-
-class L1Cache(L1Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, cpu):
-        """CPUs are needed to grab the clock domain and system is needed for
-           the cache block size.
-        """
-        super(L1Cache, self).__init__()
-
-        self.version = self.versionCount()
-        # This is the cache memory object that stores the cache data and tags
-        self.cacheMemory = RubyCache(size = '16kB',
-                               assoc = 8,
-                               start_index_bit = self.getBlockSizeBits(system))
-        self.clk_domain = cpu.clk_domain
-        self.send_evictions = self.sendEvicts(cpu)
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getBlockSizeBits(self, system):
-        bits = int(math.log(system.cache_line_size, 2))
-        if 2**bits != system.cache_line_size.value:
-            panic("Cache line size not a power of 2!")
-        return bits
-
-    def sendEvicts(self, cpu):
-        """True if the CPU model or ISA requires sending evictions from caches
-           to the CPU. Three scenarios warrant forwarding evictions to the CPU:
-           1. The O3 model must keep the LSQ coherent with the caches
-           2. The x86 mwait instruction is built on top of coherence
-           3. The local exclusive monitor in ARM systems
-        """
-        if type(cpu) is DerivO3CPU or \
-           buildEnv['TARGET_ISA'] in ('x86', 'arm'):
-            return True
-        return False
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.mandatoryQueue = MessageBuffer()
-        self.requestFromCache = MessageBuffer(ordered = True)
-        self.requestFromCache.out_port = ruby_system.network.in_port
-        self.responseFromCache = MessageBuffer(ordered = True)
-        self.responseFromCache.out_port = ruby_system.network.in_port
-        self.forwardToCache = MessageBuffer(ordered = True)
-        self.forwardToCache.in_port = ruby_system.network.out_port
-        self.responseToCache = MessageBuffer(ordered = True)
-        self.responseToCache.in_port = ruby_system.network.out_port
-
-class DirController(Directory_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system, ranges, mem_ctrls):
-        """ranges are the memory ranges assigned to this controller.
-        """
-        if len(mem_ctrls) > 1:
-            panic("This cache system can only be connected to one mem ctrl")
-        super(DirController, self).__init__()
-        self.version = self.versionCount()
-        self.addr_ranges = ranges
-        self.ruby_system = ruby_system
-        self.directory = RubyDirectoryMemory()
-        # Connect this directory to the memory side.
-        self.memory_out_port = mem_ctrls[0].port
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.requestToDir = MessageBuffer(ordered = True)
-        self.requestToDir.in_port = ruby_system.network.out_port
-        self.dmaRequestToDir = MessageBuffer(ordered = True)
-        self.dmaRequestToDir.in_port = ruby_system.network.out_port
-
-        self.responseFromDir = MessageBuffer()
-        self.responseFromDir.out_port = ruby_system.network.in_port
-        self.dmaResponseFromDir = MessageBuffer(ordered = True)
-        self.dmaResponseFromDir.out_port = ruby_system.network.in_port
-        self.forwardFromDir = MessageBuffer()
-        self.forwardFromDir.out_port = ruby_system.network.in_port
-        self.requestToMemory = MessageBuffer()
-        self.responseFromMemory = MessageBuffer()
-
-class DMAController(DMA_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system):
-        super(DMAController, self).__init__()
-        self.version = self.versionCount()
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.mandatoryQueue = MessageBuffer()
-        self.requestToDir = MessageBuffer()
-        self.requestToDir.out_port = ruby_system.network.in_port
-        self.responseFromDir = MessageBuffer(ordered = True)
-        self.responseFromDir.in_port = ruby_system.network.out_port
-
-
-class MyNetwork(SimpleNetwork):
-    """A simple point-to-point network. This does not use Garnet.
-    """
-
-    def __init__(self, ruby_system):
-        super(MyNetwork, self).__init__()
-        self.netifs = []
-        self.ruby_system = ruby_system
-
-    def connectControllers(self, controllers):
-        """Connect all of the controllers to routers and connect the routers
-           together in a point-to-point network.
-        """
-        # Create one router/switch per controller in the system
-        self.routers = [Switch(router_id = i) for i in range(len(controllers))]
-
-        # Make a link from each controller to the router. The link goes
-        # externally to the network.
-        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,
-                                        int_node=self.routers[i])
-                          for i, c in enumerate(controllers)]
-
-        # Make an "internal" link (internal to the network) between every pair
-        # of routers.
-        link_count = 0
-        self.int_links = []
-        for ri in self.routers:
-            for rj in self.routers:
-                if ri == rj: continue # Don't connect a router to itself!
-                link_count += 1
-                self.int_links.append(SimpleIntLink(link_id = link_count,
-                                                    src_node = ri,
-                                                    dst_node = rj))
diff --git a/src/npb/configs/system/MOESI_CMP_directory.py b/src/npb/configs/system/MOESI_CMP_directory.py
deleted file mode 100755
index 33f9f47..0000000
--- a/src/npb/configs/system/MOESI_CMP_directory.py
+++ /dev/null
@@ -1,350 +0,0 @@
-#Copyright (c) 2020 The Regents of the University of California.
-#All Rights Reserved
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-
-""" This file creates a set of Ruby caches for the MOESI CMP directory
-protocol.
-This protocol models a two-level cache hierarchy. The L1 cache is split into
-instruction and data caches.
-
-This system supports memory sizes of up to 3 GB.
-
-"""
-
-from __future__ import print_function
-from __future__ import absolute_import
-
-import math
-
-from m5.defines import buildEnv
-from m5.util import fatal, panic
-
-from m5.objects import *
-
-class MOESICMPDirCache(RubySystem):
-
-    def __init__(self):
-        if buildEnv['PROTOCOL'] != 'MOESI_CMP_directory':
-            fatal("This system assumes MOESI_CMP_directory!")
-
-        super(MOESICMPDirCache, self).__init__()
-
-        self._numL2Caches = 8
-
-    def setup(self, system, cpus, mem_ctrls, dma_ports, iobus):
-        """Set up the Ruby cache subsystem. Note: This can't be done in the
-           constructor because many of these items require a pointer to the
-           ruby system (self). This causes infinite recursion in initialize()
-           if we do this in the __init__.
-        """
-        # Ruby's global network.
-        self.network = MyNetwork(self)
-
-        # MOESI_CMP_directory example uses 3 virtual networks
-        self.number_of_virtual_networks = 3
-        self.network.number_of_virtual_networks = 3
-
-        # There is a single global list of all of the controllers to make it
-        # easier to connect everything to the global network. This can be
-        # customized depending on the topology/network requirements.
-        # L1 caches are private to a core, so there is one L1 cache per CPU
-        # core. The number of L2 caches depends on the architecture.
-        self.controllers = \
-            [L1Cache(system, self, cpu, self._numL2Caches) for cpu in cpus] + \
-            [L2Cache(system, self, self._numL2Caches) for num in \
-            range(self._numL2Caches)] + [DirController(self, \
-            system.mem_ranges, mem_ctrls)] + [DMAController(self) for i \
-            in range(len(dma_ports))]
-
-        # Create one sequencer per CPU and DMA controller.
-        # Sequencers for other controllers can be added here.
-        self.sequencers = [RubySequencer(version = i,
-                                # Grab dcache from ctrl
-                                dcache = self.controllers[i].L1Dcache,
-                                clk_domain = self.controllers[i].clk_domain,
-                                pio_request_port = iobus.cpu_side_ports,
-                                mem_request_port = iobus.cpu_side_ports,
-                                pio_response_port = iobus.mem_side_ports
-                                ) for i in range(len(cpus))] + \
-                          [DMASequencer(version = i,
-                                        in_ports = port)
-                            for i,port in enumerate(dma_ports)
-                          ]
-
-        for i,c in enumerate(self.controllers[:len(cpus)]):
-            c.sequencer = self.sequencers[i]
-
-        #Connecting the DMA sequencer to DMA controller
-        for i,d in enumerate(self.controllers[-len(dma_ports):]):
-            i += len(cpus)
-            d.dma_sequencer = self.sequencers[i]
-
-        self.num_of_sequencers = len(self.sequencers)
-
-        # Create the network and connect the controllers.
-        # NOTE: This is quite different if using Garnet!
-        self.network.connectControllers(self.controllers)
-        self.network.setup_buffers()
-
-        # Set up a proxy port for the system_port. Used for loading binaries
-        # and other functional-only accesses.
-        self.sys_port_proxy = RubyPortProxy()
-        system.system_port = self.sys_port_proxy.in_ports
-        self.sys_port_proxy.pio_request_port = iobus.cpu_side_ports
-
-        # Connect the cpu's cache, interrupt, and TLB ports to Ruby
-        for i,cpu in enumerate(cpus):
-            cpu.icache_port = self.sequencers[i].in_ports
-            cpu.dcache_port = self.sequencers[i].in_ports
-            cpu.createInterruptController()
-            isa = buildEnv['TARGET_ISA']
-            if isa == 'x86':
-                cpu.interrupts[0].pio = self.sequencers[i].interrupt_out_port
-                cpu.interrupts[0].int_requestor = self.sequencers[i].in_ports
-                cpu.interrupts[0].int_responder = self.sequencers[i].interrupt_out_port
-            if isa == 'x86' or isa == 'arm':
-                cpu.mmu.connectWalkerPorts(
-                    self.sequencers[i].in_ports, self.sequencers[i].in_ports)
-
-class L1Cache(L1Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, cpu, num_l2Caches):
-        """Create the L1 cache controller. It consists of both an
-           instruction and a data cache. The data cache is 512KB and
-           8-way set associative; the instruction cache is 32KB and
-           2-way set associative.
-        """
-        super(L1Cache, self).__init__()
-
-        self.version = self.versionCount()
-        block_size_bits = int(math.log(system.cache_line_size, 2))
-        l1i_size = '32kB'
-        l1i_assoc = '2'
-        l1d_size = '512kB'
-        l1d_assoc = '8'
-        # This is the cache memory object that stores the cache data and tags
-        self.L1Icache = RubyCache(size = l1i_size,
-                                assoc = l1i_assoc,
-                                start_index_bit = block_size_bits ,
-                                is_icache = True,
-                                dataAccessLatency = 1,
-                                tagAccessLatency = 1)
-        self.L1Dcache = RubyCache(size = l1d_size,
-                            assoc = l1d_assoc,
-                            start_index_bit = block_size_bits,
-                            is_icache = False,
-                            dataAccessLatency = 1,
-                            tagAccessLatency = 1)
-        self.clk_domain = cpu.clk_domain
-        self.prefetcher = RubyPrefetcher()
-        self.send_evictions = self.sendEvicts(cpu)
-        self.transitions_per_cycle = 4
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getBlockSizeBits(self, system):
-        bits = int(math.log(system.cache_line_size, 2))
-        if 2**bits != system.cache_line_size.value:
-            panic("Cache line size not a power of 2!")
-        return bits
-
-    def sendEvicts(self, cpu):
-        """True if the CPU model or ISA requires sending evictions from caches
-           to the CPU. Three scenarios warrant forwarding evictions to the CPU:
-           1. The O3 model must keep the LSQ coherent with the caches
-           2. The x86 mwait instruction is built on top of coherence
-           3. The local exclusive monitor in ARM systems
-        """
-        if type(cpu) is DerivO3CPU or \
-           buildEnv['TARGET_ISA'] in ('x86', 'arm'):
-            return True
-        return False
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.mandatoryQueue = MessageBuffer()
-        self.requestFromL1Cache = MessageBuffer()
-        self.requestFromL1Cache.out_port = ruby_system.network.in_port
-        self.responseFromL1Cache = MessageBuffer()
-        self.responseFromL1Cache.out_port = ruby_system.network.in_port
-        self.requestToL1Cache = MessageBuffer()
-        self.requestToL1Cache.in_port = ruby_system.network.out_port
-        self.responseToL1Cache = MessageBuffer()
-        self.responseToL1Cache.in_port = ruby_system.network.out_port
-        self.triggerQueue = MessageBuffer(ordered = True)
-
-class L2Cache(L2Cache_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, system, ruby_system, num_l2Caches):
-
-        super(L2Cache, self).__init__()
-
-        self.version = self.versionCount()
-        # This is the cache memory object that stores the cache data and tags
-        self.L2cache = RubyCache(size = '1 MB',
-                                assoc = 16,
-                                start_index_bit = self.getL2StartIdx(system,
-                                num_l2Caches),
-                                dataAccessLatency = 20,
-                                tagAccessLatency = 20)
-
-        self.transitions_per_cycle = '4'
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def getL2StartIdx(self, system, num_l2caches):
-        l2_bits = int(math.log(num_l2caches, 2))
-        bits = int(math.log(system.cache_line_size, 2)) + l2_bits
-        return bits
-
-
-    def connectQueues(self, ruby_system):
-        """Connect all of the queues for this controller.
-        """
-        self.GlobalRequestFromL2Cache = MessageBuffer()
-        self.GlobalRequestFromL2Cache.out_port = ruby_system.network.in_port
-        self.L1RequestFromL2Cache = MessageBuffer()
-        self.L1RequestFromL2Cache.out_port = ruby_system.network.in_port
-        self.responseFromL2Cache = MessageBuffer()
-        self.responseFromL2Cache.out_port = ruby_system.network.in_port
-
-        self.GlobalRequestToL2Cache = MessageBuffer()
-        self.GlobalRequestToL2Cache.in_port = ruby_system.network.out_port
-        self.L1RequestToL2Cache = MessageBuffer()
-        self.L1RequestToL2Cache.in_port = ruby_system.network.out_port
-        self.responseToL2Cache = MessageBuffer()
-        self.responseToL2Cache.in_port = ruby_system.network.out_port
-        self.triggerQueue = MessageBuffer(ordered = True)
-
-
-
-class DirController(Directory_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system, ranges, mem_ctrls):
-        """ranges are the memory ranges assigned to this controller.
-        """
-        if len(mem_ctrls) > 1:
-            panic("This cache system can only be connected to one mem ctrl")
-        super(DirController, self).__init__()
-        self.version = self.versionCount()
-        self.addr_ranges = ranges
-        self.ruby_system = ruby_system
-        self.directory = RubyDirectoryMemory()
-        # Connect this directory to the memory side.
-        self.memory_out_port = mem_ctrls[0].port
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.requestToDir = MessageBuffer()
-        self.requestToDir.in_port = ruby_system.network.out_port
-        self.responseToDir = MessageBuffer()
-        self.responseToDir.in_port = ruby_system.network.out_port
-        self.responseFromDir = MessageBuffer()
-        self.responseFromDir.out_port = ruby_system.network.in_port
-        self.forwardFromDir = MessageBuffer()
-        self.forwardFromDir.out_port = ruby_system.network.in_port
-        self.requestToMemory = MessageBuffer()
-        self.responseFromMemory = MessageBuffer()
-        self.triggerQueue = MessageBuffer(ordered = True)
-
-class DMAController(DMA_Controller):
-
-    _version = 0
-    @classmethod
-    def versionCount(cls):
-        cls._version += 1 # Use count for this particular type
-        return cls._version - 1
-
-    def __init__(self, ruby_system):
-        super(DMAController, self).__init__()
-        self.version = self.versionCount()
-        self.ruby_system = ruby_system
-        self.connectQueues(ruby_system)
-
-    def connectQueues(self, ruby_system):
-        self.mandatoryQueue = MessageBuffer()
-        self.responseFromDir = MessageBuffer()
-        self.responseFromDir.in_port = ruby_system.network.out_port
-        self.reqToDir = MessageBuffer()
-        self.reqToDir.out_port = ruby_system.network.in_port
-        self.respToDir = MessageBuffer()
-        self.respToDir.out_port = ruby_system.network.in_port
-        self.triggerQueue = MessageBuffer(ordered = True)
-
-
-class MyNetwork(SimpleNetwork):
-    """A simple point-to-point network. This does not use Garnet.
-    """
-
-    def __init__(self, ruby_system):
-        super(MyNetwork, self).__init__()
-        self.netifs = []
-        self.ruby_system = ruby_system
-
-    def connectControllers(self, controllers):
-        """Connect all of the controllers to routers and connect the routers
-           together in a point-to-point network.
-        """
-        # Create one router/switch per controller in the system
-        self.routers = [Switch(router_id = i) for i in range(len(controllers))]
-
-        # Make a link from each controller to the router. The link goes
-        # externally to the network.
-        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,
-                                        int_node=self.routers[i])
-                          for i, c in enumerate(controllers)]
-
-        # Make an "internal" link (internal to the network) between every pair
-        # of routers.
-        link_count = 0
-        self.int_links = []
-        for ri in self.routers:
-            for rj in self.routers:
-                if ri == rj: continue # Don't connect a router to itself!
-                link_count += 1
-                self.int_links.append(SimpleIntLink(link_id = link_count,
-                                                    src_node = ri,
-                                                    dst_node = rj))
diff --git a/src/npb/configs/system/__init__.py b/src/npb/configs/system/__init__.py
deleted file mode 100755
index 94e676f..0000000
--- a/src/npb/configs/system/__init__.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2016 Jason Lowe-Power
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# Authors: Jason Lowe-Power
-
-from .system import MySystem
-from .ruby_system import MyRubySystem
diff --git a/src/npb/configs/system/caches.py b/src/npb/configs/system/caches.py
deleted file mode 100755
index 9e44211..0000000
--- a/src/npb/configs/system/caches.py
+++ /dev/null
@@ -1,173 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2016 Jason Lowe-Power
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# Authors: Jason Lowe-Power
-
-""" Caches with options for a simple gem5 configuration script
-
-This file contains L1 I/D and L2 caches to be used in the simple
-gem5 configuration script.
-"""
-
-from m5.objects import Cache, L2XBar, StridePrefetcher
-
-# Some specific options for caches
-# For all options see src/mem/cache/BaseCache.py
-
-class PrefetchCache(Cache):
-
-    def __init__(self, options):
-        super(PrefetchCache, self).__init__()
-        if not options or options.no_prefetchers:
-            return
-        self.prefetcher = StridePrefetcher()
-
-class L1Cache(PrefetchCache):
-    """Simple L1 Cache with default values"""
-
-    assoc = 8
-    tag_latency = 1
-    data_latency = 1
-    response_latency = 1
-    mshrs = 16
-    tgts_per_mshr = 20
-    writeback_clean = True
-
-    def __init__(self, options=None):
-        super(L1Cache, self).__init__(options)
-        pass
-
-    def connectBus(self, bus):
-        """Connect this cache to a memory-side bus"""
-        self.mem_side = bus.cpu_side_ports
-
-    def connectCPU(self, cpu):
-        """Connect this cache's port to a CPU-side port
-           This must be defined in a subclass"""
-        raise NotImplementedError
-
-class L1ICache(L1Cache):
-    """Simple L1 instruction cache with default values"""
-
-    def __init__(self, opts=None):
-        super(L1ICache, self).__init__(opts)
-        if not opts or not opts.l1i_size:
-            return
-        self.size = opts.l1i_size
-
-    def connectCPU(self, cpu):
-        """Connect this cache's port to a CPU icache port"""
-        self.cpu_side = cpu.icache_port
-
-class L1DCache(L1Cache):
-    """Simple L1 data cache with default values"""
-
-    def __init__(self, opts=None):
-        super(L1DCache, self).__init__(opts)
-        if not opts or not opts.l1d_size:
-            return
-        self.size = opts.l1d_size
-
-    def connectCPU(self, cpu):
-        """Connect this cache's port to a CPU dcache port"""
-        self.cpu_side = cpu.dcache_port
-
-class MMUCache(Cache):
-    # Default parameters
-    size = '8kB'
-    assoc = 4
-    tag_latency = 1
-    data_latency = 1
-    response_latency = 1
-    mshrs = 20
-    tgts_per_mshr = 12
-    writeback_clean = True
-
-    def __init__(self):
-        super(MMUCache, self).__init__()
-
-    def connectCPU(self, cpu):
-        """Connect the CPU itb and dtb to the cache
-           Note: This creates a new crossbar
-        """
-        self.mmubus = L2XBar()
-        self.cpu_side = self.mmubus.mem_side_ports
-        cpu.mmu.connectWalkerPorts(
-            self.mmubus.cpu_side_ports, self.mmubus.cpu_side_ports)
-
-    def connectBus(self, bus):
-        """Connect this cache to a memory-side bus"""
-        self.mem_side = bus.cpu_side_ports
-
-class L2Cache(PrefetchCache):
-    """Simple L2 Cache with default values"""
-
-    # Default parameters
-    assoc = 16
-    tag_latency = 10
-    data_latency = 10
-    response_latency = 1
-    mshrs = 20
-    tgts_per_mshr = 12
-    writeback_clean = True
-
-    def __init__(self, opts=None):
-        super(L2Cache, self).__init__(opts)
-        if not opts or not opts.l2_size:
-            return
-        self.size = opts.l2_size
-
-    def connectCPUSideBus(self, bus):
-        self.cpu_side = bus.mem_side_ports
-
-    def connectMemSideBus(self, bus):
-        self.mem_side = bus.cpu_side_ports
-
-class L3Cache(Cache):
-    """Simple L3 Cache bank with default values
-       This assumes that the L3 is made up of multiple banks. This cannot
-       be used as a standalone L3 cache.
-    """
-
-    # Default parameters
-    assoc = 32
-    tag_latency = 40
-    data_latency = 40
-    response_latency = 10
-    mshrs = 256
-    tgts_per_mshr = 12
-    clusivity = 'mostly_excl'
-
-    def __init__(self, opts):
-        super(L3Cache, self).__init__()
-        self.size = (opts.l3_size)
-
-    def connectCPUSideBus(self, bus):
-        self.cpu_side = bus.mem_side_ports
-
-    def connectMemSideBus(self, bus):
-        self.mem_side = bus.cpu_side_ports
diff --git a/src/npb/configs/system/fs_tools.py b/src/npb/configs/system/fs_tools.py
deleted file mode 100755
index 5e5e2df..0000000
--- a/src/npb/configs/system/fs_tools.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2016 Jason Lowe-Power
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# Authors: Jason Lowe-Power
-
-from m5.objects import IdeDisk, CowDiskImage, RawDiskImage
-
-class CowDisk(IdeDisk):
-
-    def __init__(self, filename):
-        super(CowDisk, self).__init__()
-        self.driveID = 'device0'
-        self.image = CowDiskImage(child=RawDiskImage(read_only=True),
-                                  read_only=False)
-        self.image.child.image_file = filename
diff --git a/src/npb/configs/system/ruby_system.py b/src/npb/configs/system/ruby_system.py
deleted file mode 100755
index a6d7fcb..0000000
--- a/src/npb/configs/system/ruby_system.py
+++ /dev/null
@@ -1,238 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2016 Jason Lowe-Power
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# Authors: Jason Lowe-Power
-
-import m5
-from m5.objects import *
-from .fs_tools import *
-
-
-class MyRubySystem(System):
-
-    def __init__(self, kernel, disk, mem_sys, num_cpus, opts):
-        super(MyRubySystem, self).__init__()
-        self._opts = opts
-
-        self._host_parallel = True
-
-        # Set up the clock domain and the voltage domain
-        self.clk_domain = SrcClockDomain()
-        self.clk_domain.clock = '3GHz'
-        self.clk_domain.voltage_domain = VoltageDomain()
-
-        self.mem_ranges = [AddrRange(Addr('3GB')), # All data
-                           AddrRange(0xC0000000, size=0x100000), # For I/0
-                           ]
-
-        self.initFS(num_cpus)
-
-        # Replace these paths with the path to your disk images.
-        # The first disk is the root disk. The second could be used for swap
-        # or anything else.
-        self.setDiskImages(disk, disk)
-
-        # Change this path to point to the kernel you want to use
-        self.workload.object_file = kernel
-        # Options specified on the kernel command line
-        boot_options = ['earlyprintk=ttyS0', 'console=ttyS0', 'lpj=7999923',
-                         'root=/dev/hda1']
-
-        self.workload.command_line = ' '.join(boot_options)
-
-        # Create the CPUs for our system.
-        self.createCPU(num_cpus)
-
-        self.createMemoryControllersDDR3()
-
-        # Create the cache hierarchy for the system.
-        if mem_sys == 'MI_example':
-            from .MI_example_caches import MIExampleSystem
-            self.caches = MIExampleSystem()
-        elif mem_sys == 'MESI_Two_Level':
-            from .MESI_Two_Level import MESITwoLevelCache
-            self.caches = MESITwoLevelCache()
-        elif mem_sys == 'MOESI_CMP_directory':
-            from .MOESI_CMP_directory import MOESICMPDirCache
-            self.caches = MOESICMPDirCache()
-        self.caches.setup(self, self.cpu, self.mem_cntrls,
-                          [self.pc.south_bridge.ide.dma,
-                          self.iobus.mem_side_ports],
-                          self.iobus)
-
-        if self._host_parallel:
-            # To get the KVM CPUs to run on different host CPUs
-            # Specify a different event queue for each CPU
-            for i,cpu in enumerate(self.cpu):
-                for obj in cpu.descendants():
-                    obj.eventq_index = 0
-
-                # the number of eventqs are set based
-                # on experiments with few benchmarks
-
-                cpu.eventq_index = i + 1
-
-    def getHostParallel(self):
-        return self._host_parallel
-
-    def totalInsts(self):
-        return sum([cpu.totalInsts() for cpu in self.cpu])
-
-    def createCPUThreads(self, cpu):
-        for c in cpu:
-            c.createThreads()
-
-    def createCPU(self, num_cpus):
-
-        # Note KVM needs a VM and atomic_noncaching
-        self.cpu = [X86KvmCPU(cpu_id = i)
-                    for i in range(num_cpus)]
-        self.kvm_vm = KvmVM()
-        self.mem_mode = 'atomic_noncaching'
-        self.createCPUThreads(self.cpu)
-
-        self.atomicCpu = [AtomicSimpleCPU(cpu_id = i,
-                                            switched_out = True)
-                            for i in range(num_cpus)]
-        self.createCPUThreads(self.atomicCpu)
-
-        self.timingCpu = [TimingSimpleCPU(cpu_id = i,
-                                     switched_out = True)
-				   for i in range(num_cpus)]
-        self.createCPUThreads(self.timingCpu)
-
-    def switchCpus(self, old, new):
-        assert(new[0].switchedOut())
-        m5.switchCpus(self, list(zip(old, new)))
-
-    def setDiskImages(self, img_path_1, img_path_2):
-        disk0 = CowDisk(img_path_1)
-        disk2 = CowDisk(img_path_2)
-        self.pc.south_bridge.ide.disks = [disk0, disk2]
-
-    def createMemoryControllersDDR3(self):
-        self._createMemoryControllers(1, DDR3_1600_8x8)
-
-    def _createMemoryControllers(self, num, cls):
-        self.mem_cntrls = [
-            MemCtrl(dram = cls(range = self.mem_ranges[0]))
-            for i in range(num)
-            ]
-
-    def initFS(self, cpus):
-        self.pc = Pc()
-
-        self.workload = X86FsLinux()
-
-        # North Bridge
-        self.iobus = IOXBar()
-
-        # connect the io bus
-        # Note: pass in a reference to where Ruby will connect to in the future
-        # so the port isn't connected twice.
-        self.pc.attachIO(self.iobus, [self.pc.south_bridge.ide.dma])
-
-        ###############################################
-
-        # Add in a Bios information structure.
-        self.workload.smbios_table.structures = [X86SMBiosBiosInformation()]
-
-        # Set up the Intel MP table
-        base_entries = []
-        ext_entries = []
-        for i in range(cpus):
-            bp = X86IntelMPProcessor(
-                    local_apic_id = i,
-                    local_apic_version = 0x14,
-                    enable = True,
-                    bootstrap = (i ==0))
-            base_entries.append(bp)
-        io_apic = X86IntelMPIOAPIC(
-                id = cpus,
-                version = 0x11,
-                enable = True,
-                address = 0xfec00000)
-        self.pc.south_bridge.io_apic.apic_id = io_apic.id
-        base_entries.append(io_apic)
-        pci_bus = X86IntelMPBus(bus_id = 0, bus_type='PCI   ')
-        base_entries.append(pci_bus)
-        isa_bus = X86IntelMPBus(bus_id = 1, bus_type='ISA   ')
-        base_entries.append(isa_bus)
-        connect_busses = X86IntelMPBusHierarchy(bus_id=1,
-                subtractive_decode=True, parent_bus=0)
-        ext_entries.append(connect_busses)
-        pci_dev4_inta = X86IntelMPIOIntAssignment(
-                interrupt_type = 'INT',
-                polarity = 'ConformPolarity',
-                trigger = 'ConformTrigger',
-                source_bus_id = 0,
-                source_bus_irq = 0 + (4 << 2),
-                dest_io_apic_id = io_apic.id,
-                dest_io_apic_intin = 16)
-        base_entries.append(pci_dev4_inta)
-        def assignISAInt(irq, apicPin):
-            assign_8259_to_apic = X86IntelMPIOIntAssignment(
-                    interrupt_type = 'ExtInt',
-                    polarity = 'ConformPolarity',
-                    trigger = 'ConformTrigger',
-                    source_bus_id = 1,
-                    source_bus_irq = irq,
-                    dest_io_apic_id = io_apic.id,
-                    dest_io_apic_intin = 0)
-            base_entries.append(assign_8259_to_apic)
-            assign_to_apic = X86IntelMPIOIntAssignment(
-                    interrupt_type = 'INT',
-                    polarity = 'ConformPolarity',
-                    trigger = 'ConformTrigger',
-                    source_bus_id = 1,
-                    source_bus_irq = irq,
-                    dest_io_apic_id = io_apic.id,
-                    dest_io_apic_intin = apicPin)
-            base_entries.append(assign_to_apic)
-        assignISAInt(0, 2)
-        assignISAInt(1, 1)
-        for i in range(3, 15):
-            assignISAInt(i, i)
-        self.workload.intel_mp_table.base_entries = base_entries
-        self.workload.intel_mp_table.ext_entries = ext_entries
-
-        entries = \
-           [
-            # Mark the first megabyte of memory as reserved
-            X86E820Entry(addr = 0, size = '639kB', range_type = 1),
-            X86E820Entry(addr = 0x9fc00, size = '385kB', range_type = 2),
-            # Mark the rest of physical memory as available
-            X86E820Entry(addr = 0x100000,
-                    size = '%dB' % (self.mem_ranges[0].size() - 0x100000),
-                    range_type = 1),
-            ]
-
-        # Reserve the last 16kB of the 32-bit address space for m5ops
-        entries.append(X86E820Entry(addr = 0xFFFF0000, size = '64kB',
-                                    range_type=2))
-
-        self.workload.e820_table.entries = entries
diff --git a/src/npb/configs/system/system.py b/src/npb/configs/system/system.py
deleted file mode 100755
index f0e71c2..0000000
--- a/src/npb/configs/system/system.py
+++ /dev/null
@@ -1,392 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) 2018 The Regents of the University of California
-# All Rights Reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met: redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer;
-# redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution;
-# neither the name of the copyright holders nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# Authors: Jason Lowe-Power
-
-import m5
-from m5.objects import *
-from .fs_tools import *
-from .caches import *
-
-
-class MySystem(System):
-
-    def __init__(self, kernel, disk, num_cpus, opts, no_kvm=False):
-        super(MySystem, self).__init__()
-        self._opts = opts
-        self._no_kvm = no_kvm
-
-        self._host_parallel = not self._opts.no_host_parallel
-
-        # Set up the clock domain and the voltage domain
-        self.clk_domain = SrcClockDomain()
-        self.clk_domain.clock = '2.3GHz'
-        self.clk_domain.voltage_domain = VoltageDomain()
-
-        mem_size = '32GB'
-        self.mem_ranges = [AddrRange('100MB'), # For kernel
-                           AddrRange(0xC0000000, size=0x100000), # For I/0
-                           AddrRange(Addr('4GB'), size = mem_size) # All data
-                           ]
-
-        # Create the main memory bus
-        # This connects to main memory
-        self.membus = SystemXBar(width = 64) # 64-byte width
-        self.membus.badaddr_responder = BadAddr()
-        self.membus.default = Self.badaddr_responder.pio
-
-        # Set up the system port for functional access from the simulator
-        self.system_port = self.membus.cpu_side_ports
-
-        self.initFS(self.membus, num_cpus)
-
-
-        # Replace these paths with the path to your disk images.
-        # The first disk is the root disk. The second could be used for swap
-        # or anything else.
-
-        self.setDiskImages(disk, disk)
-
-        if opts.second_disk:
-            self.setDiskImages(disk, opts.second_disk)
-        else:
-            self.setDiskImages(disk, disk)
-
-        # Change this path to point to the kernel you want to use
-        self.workload.object_file = kernel
-        # Options specified on the kernel command line
-        boot_options = ['earlyprintk=ttyS0', 'console=ttyS0', 'lpj=7999923',
-                         'root=/dev/hda1']
-
-        self.workload.command_line = ' '.join(boot_options)
-
-        # Create the CPUs for our system.
-        self.createCPU(num_cpus)
-
-        # Create the cache heirarchy for the system.
-        self.createCacheHierarchy()
-
-        # Set up the interrupt controllers for the system (x86 specific)
-        self.setupInterrupts()
-
-        self.createMemoryControllersDDR4()
-
-        if self._host_parallel:
-            # To get the KVM CPUs to run on different host CPUs
-            # Specify a different event queue for each CPU
-            for i,cpu in enumerate(self.cpu):
-                for obj in cpu.descendants():
-                    obj.eventq_index = 0
-
-                # the number of eventqs are set based
-                # on experiments with few benchmarks
-
-                if len(self.cpu) > 16:
-                    cpu.eventq_index = (i/4) + 1
-                else:
-                    cpu.eventq_index = (i/2) + 1
-    def getHostParallel(self):
-        return self._host_parallel
-
-    def totalInsts(self):
-        return sum([cpu.totalInsts() for cpu in self.cpu])
-
-    def createCPUThreads(self, cpu):
-        for c in cpu:
-            c.createThreads()
-
-    def createCPU(self, num_cpus):
-        if self._no_kvm:
-            self.cpu = [AtomicSimpleCPU(cpu_id = i, switched_out = False)
-                              for i in range(num_cpus)]
-            self.createCPUThreads(self.cpu)
-            self.mem_mode = 'timing'
-
-        else:
-            # Note KVM needs a VM and atomic_noncaching
-            self.cpu = [X86KvmCPU(cpu_id = i)
-                        for i in range(num_cpus)]
-            self.createCPUThreads(self.cpu)
-            self.kvm_vm = KvmVM()
-            self.mem_mode = 'atomic_noncaching'
-
-            self.atomicCpu = [AtomicSimpleCPU(cpu_id = i,
-                                              switched_out = True)
-                              for i in range(num_cpus)]
-            self.createCPUThreads(self.atomicCpu)
-
-        self.timingCpu = [TimingSimpleCPU(cpu_id = i,
-                                     switched_out = True)
-                          for i in range(num_cpus)]
-
-        self.createCPUThreads(self.timingCpu)
-
-    def switchCpus(self, old, new):
-        assert(new[0].switchedOut())
-        m5.switchCpus(self, list(zip(old, new)))
-
-    def setDiskImages(self, img_path_1, img_path_2):
-        disk0 = CowDisk(img_path_1)
-        disk2 = CowDisk(img_path_2)
-        self.pc.south_bridge.ide.disks = [disk0, disk2]
-
-    def createCacheHierarchy(self):
-        # Create an L3 cache (with crossbar)
-        self.l3bus = L2XBar(width = 64,
-                            snoop_filter = SnoopFilter(max_capacity='32MB'))
-
-        for cpu in self.cpu:
-            # Create a memory bus, a coherent crossbar, in this case
-            cpu.l2bus = L2XBar()
-
-            # Create an L1 instruction and data cache
-            cpu.icache = L1ICache(self._opts)
-            cpu.dcache = L1DCache(self._opts)
-            cpu.mmucache = MMUCache()
-
-            # Connect the instruction and data caches to the CPU
-            cpu.icache.connectCPU(cpu)
-            cpu.dcache.connectCPU(cpu)
-            cpu.mmucache.connectCPU(cpu)
-
-            # Hook the CPU ports up to the l2bus
-            cpu.icache.connectBus(cpu.l2bus)
-            cpu.dcache.connectBus(cpu.l2bus)
-            cpu.mmucache.connectBus(cpu.l2bus)
-
-            # Create an L2 cache and connect it to the l2bus
-            cpu.l2cache = L2Cache(self._opts)
-            cpu.l2cache.connectCPUSideBus(cpu.l2bus)
-
-            # Connect the L2 cache to the L3 bus
-            cpu.l2cache.connectMemSideBus(self.l3bus)
-
-        self.l3cache = L3Cache(self._opts)
-        self.l3cache.connectCPUSideBus(self.l3bus)
-
-        # Connect the L3 cache to the membus
-        self.l3cache.connectMemSideBus(self.membus)
-
-    def setupInterrupts(self):
-        for cpu in self.cpu:
-            # create the interrupt controller CPU and connect to the membus
-            cpu.createInterruptController()
-
-            # For x86 only, connect interrupts to the memory
-            # Note: these are directly connected to the memory bus and
-            #       not cached
-            cpu.interrupts[0].pio = self.membus.mem_side_ports
-            cpu.interrupts[0].int_requestor = self.membus.cpu_side_ports
-            cpu.interrupts[0].int_responder = self.membus.mem_side_ports
-
-    # Memory latency: Using the smaller number from [3]: 96ns
-    def createMemoryControllersDDR4(self):
-        self._createMemoryControllers(8, DDR4_2400_16x4)
-
-    def _createMemoryControllers(self, num, cls):
-        kernel_controller = self._createKernelMemoryController(cls)
-
-        ranges = self._getInterleaveRanges(self.mem_ranges[-1], num, 7, 20)
-
-        self.mem_cntrls = [
-            MemCtrl(dram = cls(range = ranges[i]),
-                    port = self.membus.mem_side_ports)
-            for i in range(num)
-        ] + [kernel_controller]
-
-    def _createKernelMemoryController(self, cls):
-        return MemCtrl(dram = cls(range = self.mem_ranges[0]),
-                       port = self.membus.mem_side_ports)
-
-    def _getInterleaveRanges(self, rng, num, intlv_low_bit, xor_low_bit):
-        from math import log
-        bits = int(log(num, 2))
-        if 2**bits != num:
-            m5.fatal("Non-power of two number of memory controllers")
-
-        intlv_bits = bits
-        ranges = [
-            AddrRange(start=rng.start,
-                      end=rng.end,
-                      intlvHighBit = intlv_low_bit + intlv_bits - 1,
-                      xorHighBit = xor_low_bit + intlv_bits - 1,
-                      intlvBits = intlv_bits,
-                      intlvMatch = i)
-                for i in range(num)
-            ]
-
-        return ranges
-
-    def initFS(self, membus, cpus):
-        self.pc = Pc()
-        self.workload = X86FsLinux()
-
-        # Constants similar to x86_traits.hh
-        IO_address_space_base = 0x8000000000000000
-        pci_config_address_space_base = 0xc000000000000000
-        interrupts_address_space_base = 0xa000000000000000
-        APIC_range_size = 1 << 12;
-
-        # North Bridge
-        self.iobus = IOXBar()
-        self.bridge = Bridge(delay='50ns')
-        self.bridge.mem_side_port = self.iobus.cpu_side_ports
-        self.bridge.cpu_side_port = membus.mem_side_ports
-        # Allow the bridge to pass through:
-        #  1) kernel configured PCI device memory map address: address range
-        #  [0xC0000000, 0xFFFF0000). (The upper 64kB are reserved for m5ops.)
-        #  2) the bridge to pass through the IO APIC (two pages, already
-        #     contained in 1),
-        #  3) everything in the IO address range up to the local APIC, and
-        #  4) then the entire PCI address space and beyond.
-        self.bridge.ranges = \
-            [
-            AddrRange(0xC0000000, 0xFFFF0000),
-            AddrRange(IO_address_space_base,
-                      interrupts_address_space_base - 1),
-            AddrRange(pci_config_address_space_base,
-                      Addr.max)
-            ]
-
-        # Create a bridge from the IO bus to the memory bus to allow access
-        # to the local APIC (two pages)
-        self.apicbridge = Bridge(delay='50ns')
-        self.apicbridge.cpu_side_port = self.iobus.mem_side_ports
-        self.apicbridge.mem_side_port = membus.cpu_side_ports
-        self.apicbridge.ranges = [AddrRange(interrupts_address_space_base,
-                                            interrupts_address_space_base +
-                                            cpus * APIC_range_size
-                                            - 1)]
-
-        # connect the io bus
-        self.pc.attachIO(self.iobus)
-
-        # Add a tiny cache to the IO bus.
-        # This cache is required for the classic memory model for coherence
-        self.iocache = Cache(assoc=8,
-                            tag_latency = 50,
-                            data_latency = 50,
-                            response_latency = 50,
-                            mshrs = 20,
-                            size = '1kB',
-                            tgts_per_mshr = 12,
-                            addr_ranges = self.mem_ranges)
-        self.iocache.cpu_side = self.iobus.mem_side_ports
-        self.iocache.mem_side = self.membus.cpu_side_ports
-
-        ###############################################
-
-        # Add in a Bios information structure.
-        self.workload.smbios_table.structures = [X86SMBiosBiosInformation()]
-
-        # Set up the Intel MP table
-        base_entries = []
-        ext_entries = []
-        for i in range(cpus):
-            bp = X86IntelMPProcessor(
-                    local_apic_id = i,
-                    local_apic_version = 0x14,
-                    enable = True,
-                    bootstrap = (i ==0))
-            base_entries.append(bp)
-        io_apic = X86IntelMPIOAPIC(
-                id = cpus,
-                version = 0x11,
-                enable = True,
-                address = 0xfec00000)
-        self.pc.south_bridge.io_apic.apic_id = io_apic.id
-        base_entries.append(io_apic)
-        pci_bus = X86IntelMPBus(bus_id = 0, bus_type='PCI   ')
-        base_entries.append(pci_bus)
-        isa_bus = X86IntelMPBus(bus_id = 1, bus_type='ISA   ')
-        base_entries.append(isa_bus)
-        connect_busses = X86IntelMPBusHierarchy(bus_id=1,
-                subtractive_decode=True, parent_bus=0)
-        ext_entries.append(connect_busses)
-        pci_dev4_inta = X86IntelMPIOIntAssignment(
-                interrupt_type = 'INT',
-                polarity = 'ConformPolarity',
-                trigger = 'ConformTrigger',
-                source_bus_id = 0,
-                source_bus_irq = 0 + (4 << 2),
-                dest_io_apic_id = io_apic.id,
-                dest_io_apic_intin = 16)
-        base_entries.append(pci_dev4_inta)
-        def assignISAInt(irq, apicPin):
-            assign_8259_to_apic = X86IntelMPIOIntAssignment(
-                    interrupt_type = 'ExtInt',
-                    polarity = 'ConformPolarity',
-                    trigger = 'ConformTrigger',
-                    source_bus_id = 1,
-                    source_bus_irq = irq,
-                    dest_io_apic_id = io_apic.id,
-                    dest_io_apic_intin = 0)
-            base_entries.append(assign_8259_to_apic)
-            assign_to_apic = X86IntelMPIOIntAssignment(
-                    interrupt_type = 'INT',
-                    polarity = 'ConformPolarity',
-                    trigger = 'ConformTrigger',
-                    source_bus_id = 1,
-                    source_bus_irq = irq,
-                    dest_io_apic_id = io_apic.id,
-                    dest_io_apic_intin = apicPin)
-            base_entries.append(assign_to_apic)
-        assignISAInt(0, 2)
-        assignISAInt(1, 1)
-        for i in range(3, 15):
-            assignISAInt(i, i)
-        self.workload.intel_mp_table.base_entries = base_entries
-        self.workload.intel_mp_table.ext_entries = ext_entries
-
-        entries = \
-           [
-            # Mark the first megabyte of memory as reserved
-            X86E820Entry(addr = 0, size = '639kB', range_type = 1),
-            X86E820Entry(addr = 0x9fc00, size = '385kB', range_type = 2),
-            # Mark the rest of physical memory as available
-            X86E820Entry(addr = 0x100000,
-                    size = '%dB' % (self.mem_ranges[0].size() - 0x100000),
-                    range_type = 1),
-            ]
-        # Mark [mem_size, 3GB) as reserved if memory less than 3GB, which
-        # force IO devices to be mapped to [0xC0000000, 0xFFFF0000). Requests
-        # to this specific range can pass though bridge to iobus.
-        entries.append(X86E820Entry(addr = self.mem_ranges[0].size(),
-            size='%dB' % (0xC0000000 - self.mem_ranges[0].size()),
-            range_type=2))
-
-        # Reserve the last 16kB of the 32-bit address space for m5ops
-        entries.append(X86E820Entry(addr = 0xFFFF0000, size = '64kB',
-                                    range_type=2))
-
-        # Add the rest of memory. This is where all the actual data is
-        entries.append(X86E820Entry(addr = self.mem_ranges[-1].start,
-            size='%dB' % (self.mem_ranges[-1].size()),
-            range_type=1))
-
-        self.workload.e820_table.entries = entries
-