Building a Retro Gaming Rig – Part 3

Welcome to the third installment of my Building a Retro Gaming Rig series. Today, I’ll be taking a look at another motherboard and CPU combo that I picked up from eBay on a bit of a whim.

In Part 1 of this series, I took an in-depth look at some Slot-1 gear, including the popular Asus P2B and some CPU options. As I was thinking ahead in the build, I got frustrated with the lack of simple and classic-looking ATX tower cases available these days. Everything looks far too modern, has too much bling or is just plain gigantic. Used tower cases from twenty years ago tend to be badly yellowed and just don’t look good anymore. On the other hand, there are lots of small, simple and affordable micro ATX cases available.

Micro ATX – or mATX – motherboards were actually pretty uncommon twenty-odd years ago. PC tower cases were large, and in those days people genuinely used lots of expansion cards and needed the extra space. Only very compact systems and OEM builds seemed to use the mATX form factor at the time. Many of these boards were heavily integrated, lacked expansion slots and stuck you with some fairly weak onboard video.

MSI MS-6160 Motherboard

In an interesting twist, I came across an MSI MS-6160 mATX board based on the Intel 440LX chipset that seemed to tick many of the right boxes. The combo included a Celeron 400MHz processor and 512MB of SDRAM for only $35 CDN.

Continue reading “Building a Retro Gaming Rig – Part 3”

VM Network Performance and CPU Scheduling

Over the years, I’ve worked on quite a few network performance cases and have seen many different causes of trouble. One that is often overlooked is the impact of CPU contention and a VM’s inability to get CPU time scheduled effectively.

Today, I’ll be taking a quick look at the actual impact CPU scheduling can have on network throughput.

Testing Setup

To demonstrate, I’ll be using my dual-socket management host. As I did in my recent VMXNET3 ring buffer exhaustion post, I’ll be testing with VMs on the same host and port group to eliminate any bottlenecks created by physical networking components. The VMs should be able to communicate as quickly as their compute resources will allow.

Physical Host:

  • 2x Intel Xeon E5-2670 Processors (16 cores total at 2.6GHz, 3.3GHz Turbo)
  • 96GB PC3-12800R Memory
  • ESXi 6.0 U3 Build 5224934

VM Configuration:

  • 1x vCPU
  • 1024MB RAM
  • VMXNET3 Adapter (1.1.29 driver with default ring sizes – see the quick check below)
  • Debian Linux 7.4 x86 PAE
  • iperf 2.0.5
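
The adapter driver version and ring sizes noted above are easy to confirm from inside the guest. A minimal check, assuming the interface is named eth0 (which may differ on other systems):

  # Show the vmxnet3 driver version loaded in the guest
  modinfo vmxnet3 | grep -i ^version

  # Show pre-set maximum and current RX/TX ring sizes for the adapter
  # (eth0 is an assumption; substitute the actual interface name)
  ethtool -g eth0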

The VMs I used for this test are quite small with only a single vCPU and 1GB of RAM. This was done intentionally so that CPU contention could be more easily simulated. Much higher throughput would be possible with multiple vCPUs and additional RX queues.

The CPUs in my physical host are Xeon E5-2670 processors clocked at 2.6GHz per core. Because this processor supports Intel Turbo Boost, the maximum frequency of each core will vary depending on several factors and can be as high as 3.3GHz at times. To take this into consideration, I will test with a CPU limit of 2600MHz, as well as with no limit at all to show the benefit Turbo Boost provides.

To measure throughput, I’ll be using a pair of Debian Linux VMs running iperf 2.0.5. One will be the sending side and the other the receiving side. I’ll be running four simultaneous threads to maximize throughput and load.
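
For reference, here is a rough sketch of the iperf 2.x commands involved. The receiver address 192.168.1.20, the 60-second run time and the 5-second reporting interval are placeholders of my own, not values from the actual test:

  # On the receiving VM: start iperf in server mode
  iperf -s

  # On the sending VM: run four parallel client threads for 60 seconds,
  # reporting every 5 seconds. 192.168.1.20 stands in for the
  # receiver's IP address.
  iperf -c 192.168.1.20 -P 4 -t 60 -i 5

The -P 4 option is what generates the four simultaneous threads mentioned above.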

I should note that my testing is far from precise and is not being done with the usual controls and safeguards to ensure accurate results. That said, my aim isn’t accuracy, but rather to illustrate some higher-level patterns and trends.

Continue reading “VM Network Performance and CPU Scheduling”