So I've been thinking about building a computer for a few days, and as a result I came up with a few design ideas... and then some crazy ones I probably could never do myself, but figured I'd post.
In the process of putting together a rough idea of a computer I hit on ram disks, and decided I liked them. A lot. They're really, really fast; they sound like just the sort of thing you'd want to run a virtual machine on. Unfortunately they have a few downsides: one large enough to run a VM on would eat up a significant portion of your available ram, and the software for emulating a hard drive in ram consumes ram and processor resources of its own. There are a few hardware ram drives out there, but the newest and best one I know of only takes DDR2 ram and is accessed through a single SATA II port.
There's no reason I couldn't just use a software ram disk; in a few months there will probably be non-server 16gb ram sticks that run at a decent speed. However, my curiosity has been aroused. Surely there exists a better way. Perhaps a person could build their own sort of hardware ram disk. Buy a small, cheap barebones system. Install a processor capable of using DDR3 1066 at full speed, with two memory channels, and add two 8gb sticks of DDR3 1066. Put in a cheap HDD, and install a lightweight linux distribution on it. Install ram disk software, and allocate ram to the ram disk, leaving just enough for the OS and the ram disk software. Unfortunately, I then hit a wall. How do you make an internal hard drive accessible to another computer? Surely there is a way.
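For the "ram disk software" step, the simplest thing I know of on linux is a tmpfs mount. Here's a minimal sketch of what the allocation might look like, assuming the 2x8gb config above and run as root; the mount point and the 14g cap are just my guesses at "leaving just enough for the OS".

[code]
# Minimal ram disk setup sketch (linux, run as root).
# Assumes ~16gb of ram; caps the ram disk at 14g to leave room for the OS.
import os
import subprocess

MOUNT_POINT = "/mnt/ramdisk"   # hypothetical mount point
SIZE = "14g"                   # tmpfs "size" is a cap, not an up-front reservation

os.makedirs(MOUNT_POINT, exist_ok=True)
subprocess.run(
    ["mount", "-t", "tmpfs", "-o", f"size={SIZE}", "tmpfs", MOUNT_POINT],
    check=True,
)
print(f"ram disk mounted at {MOUNT_POINT}, up to {SIZE}")
[/code]

Note that tmpfs gives you a filesystem rather than a raw block device; if whatever export scheme ends up needing a real /dev node, linux's brd module (modprobe brd) creates /dev/ramN block devices instead.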
Making an internal hard drive accessible to another system is critical to the idea of homemade external ram disks. Unfortunately, I've so far been unable to find a way. I'm considering two possibilities. If there are others I'm missing, please speak up.
One: Some way to output over SATA ports, i.e. have the box present itself as a drive on a SATA port. I don't know if this is possible, either with the default SATA drivers or with custom software. If it isn't, oh well. If it is, we're still limited to the 6 Gbit/s of SATA III (roughly 600 MB/s of actual data once encoding overhead is taken out), which is far slower than the speeds a ram disk is capable of. If the system were implemented this way, the best setup would be multiple barebones-computer ram drives hooked up to a hardware RAID controller configured for RAID 0, with the card connected to the system that needs to access the ram disks. (Some rough numbers are sketched after option two.)
Two: Some way to make a hard drive accessible over PCIe. I know it's possible to access an emulated hard drive over PCIe; RAID controller cards do it all the time, including the ones built into PCIe SSDs. However, I don't know whether there is software out there that can make a hard drive accessible over PCIe, or, if there isn't, how hard it would be to write some (borrowing as much as possible from other sources). If the system were implemented this way, the best (and most expensive) solution would be to build a single larger computer to act as a ram drive, around a processor with four memory channels. Install four sticks of 8gb ram for a desktop processor and motherboard, or, if the processor is a Xeon or some other server chip that caps out at DDR3 1066, four sticks of 16gb (Kingston makes 16gb DDR3 1066 ECC Registered ram).
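To put rough numbers on "slower than the speeds a ram disk is capable of" for both options, here's a quick back-of-the-envelope comparison. The memory figures are theoretical peaks (a real ram disk won't hit them), the link figures are per direction, and everything is in decimal units; the "option 1 box" / "option 2 box" labels just refer to the configurations described above.

[code]
# Back-of-the-envelope bandwidth comparison, all per direction, decimal MB/s.
# Memory numbers are theoretical peaks; link numbers ignore protocol overhead
# beyond the line coding.

DDR3_1066_PER_CHANNEL = 1066.67 * 8   # ~8533 MB/s: 1066.67 MT/s x 8 bytes/transfer
SATA_III_USABLE = 600                  # 6 Gbit/s line rate minus 8b/10b encoding

candidates = {
    "DDR3-1066, dual channel (option 1 box)": 2 * DDR3_1066_PER_CHANNEL,
    "DDR3-1066, quad channel (option 2 box)": 4 * DDR3_1066_PER_CHANNEL,
    "single SATA III port":                   SATA_III_USABLE,
    "4x SATA III in RAID 0":                  4 * SATA_III_USABLE,
    "PCIe 2.0 x16":                           8000,
    "PCIe 3.0 x16":                           15750,  # usually rounded to 16 GB/s
}

for name, mb_per_s in candidates.items():
    print(f"{name:42s} {mb_per_s / 1000:5.1f} GB/s")
[/code]

In other words, whichever option you pick, the link out of the box becomes the bottleneck long before the ram does.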
I don't know if these things are possible, but I'm interested in learning. I could be missing something obvious; I am by no means an expert at any of this. However, I figured I'd post this idea and see if others had any useful or interesting input.
EDIT: Some reference numbers from various places (Wikipedia, IBM Redbooks, et cetera)
16-lane PCIe, each way:
v1.x: 4 GB/s (40 GT/s) | I *think* that's 16 lanes at 250 MB each, per direction, but the standards are rather hard to be sure about. After comparing the Wikipedia numbers to the Redbook numbers for the original PCIe, it looks like Wikipedia is counting gigabytes as 1000 megabytes: each lane runs at 2.5 GT/s, 8b/10b encoding leaves 2 Gbit/s of actual data, which is 250 x 10^6 bytes per second, so sixteen lanes come to 4 x 10^9 bytes per second each way. The Redbooks, on the other hand, aren't assuming any sort of connection with reality or any standard I've heard of. E.g.: PCIe, Lane width x1, Throughput (duplex, bits): 5 Gbps | Throughput (duplex, bytes): 400 MBps. Of course, this could all be my fault and I'm having a brain failure, but I don't think so. Someone help me out here. (The arithmetic I'm assuming is worked out after the SATA numbers below.)
v2.x: 8 GB/s (80 GT/s)
v3.0: 16 GB/s (128 GT/s)
GbE: 1 Gigabit per second, each way, or 125 MB (Megabytes) per second
10 GbE: 10 Gigabits per second, each way, or 1250 MB (Megabytes) per second
SATA I: 1.5 Gigabits per second (1500 Megabits per second), or 187.5 Megabytes per second (raw line rate)
SATA II: 3 Gigabits per second, or 375 Megabytes per second (raw line rate)
SATA III: 6 Gigabits per second, or 750 Megabytes per second (raw line rate)
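As a sanity check on the numbers above (and on the Wikipedia-versus-Redbook mess), this is the arithmetic I'm assuming: transfer rates are decimal, PCIe 1.x/2.x and SATA use 8b/10b encoding, PCIe 3.0 uses 128b/130b, and the resulting GB/s are decimal gigabytes (4 x 10^9 B/s is only about 3.73 GiB/s in powers of two, which may be where some of the confusion comes from).

[code]
# Deriving the reference numbers from per-lane line rates and encoding overhead.
# 1 GT/s = 1e9 transfers per second, so the results are in decimal GB/s.

def data_gb_per_s(gt_per_s, encoding, lanes=1):
    """Usable data rate per direction, in decimal GB/s."""
    payload_fraction = {"8b/10b": 8 / 10, "128b/130b": 128 / 130}[encoding]
    bits_per_s = gt_per_s * 1e9 * payload_fraction * lanes
    return bits_per_s / 8 / 1e9

print("PCIe 1.x x16:", data_gb_per_s(2.5, "8b/10b", 16), "GB/s")              # 4.0
print("PCIe 2.x x16:", data_gb_per_s(5.0, "8b/10b", 16), "GB/s")              # 8.0
print("PCIe 3.0 x16:", round(data_gb_per_s(8.0, "128b/130b", 16), 2), "GB/s") # 15.75, rounded to 16 above

# SATA is 8b/10b encoded too, so actual data moves slower than raw-bits/8:
for gbit in (1.5, 3.0, 6.0):
    raw = gbit * 1e9 / 8 / 1e6
    data = gbit * 1e9 * 0.8 / 8 / 1e6
    print(f"SATA {gbit} Gbit/s: {data:.0f} MB/s of data ({raw:.1f} MB/s raw)")
[/code]

That reproduces the Wikipedia figures, but it still doesn't explain the Redbook's 400 MBps-per-lane number, so the question stands.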