Help: Home server

Moriath

I am a FH squatter
Joined
Dec 23, 2003
Messages
16,209
I want to build a machine that will run Linux:

4 or so drive bays
i3 processor
8GB RAM at least, maybe 16
64GB SSD for the OS drive
Low power
24/7 server

Want to make it headless, stick it in a corner, admin it from my PC, and run Minecraft and other game servers plus some backup storage.

Anyone got suggestions as to hardware for this?

cheers
 

caLLous

I am a FH squatter
FH Subscriber
Joined
Dec 23, 2003
Messages
18,518
I don't know how demanding game servers are (especially if you're running a few together) but those specs look OK for the server part. Maybe up the CPU if you're going to be running the game servers and streaming media or whatever simultaneously.
 

soze

I am a FH squatter
Joined
Jan 22, 2004
Messages
12,508
It's not the perfect spec, I know, but the HP MicroServer has 4 drive bays, plus you can mount an SSD in the DVD drive bay (you need to update the BIOS to get full speed from it). It can take up to 16GB of RAM. Best part is, with the current cashback it's only about £80. That gets you an AMD dual core; I sold the HP original memory from mine for £15 and the HP original 250GB disk for £25, so the whole server cost me £40.

Look around for the best price. Mine was £180 on the 1st of Feb, but now the cashback has been announced it's gone up £30. http://www.ballicom.co.uk/704941421...-250-gb-gigabit-lan-monitor-none.p882339.html

Sorry if it's no good.
 

dysfunction

FH is my second home
Joined
Dec 22, 2003
Messages
9,709
I was going to suggest the HP MicroServer but wasn't sure how well it would cope with running game servers...
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,925
I completed a build yesterday to use for uni research and as a disk provider for myself. It's basically my old computer, so way overpowered. Anything I got new, I'll state the price:
  • i7 quad core with HT, 2.6GHz
  • 12GB RAM
  • Asus P6T mobo
  • some cheap and cheerful Nvidia card I got for 25 euros; passively cooled, so that is nice as well
  • a quad gigabit Ethernet server card (I blagged two of these off someone for 100 euros)
  • Intel 100GB SSD for the OS
  • physical host OS is Debian. No real reason other than me liking Debian. I virtualize other OS's.

Originally I had two 750GB Samsung disks in it as the backing store, which I had lying around from a previous build. I had an absolutely TERRIBLE run with these disks: I had 5, and 3 failed within half a year. These two actually lasted, but after some performance tests they BOTH failed. I took them out and replaced them with a single 2TB WD Red for 89 euros at the MediaMarkt. I could have got it cheaper off the interwebs but I wanted something in my hands right away. On the whole I am pleased with the Red: it does what it says, is fast(ish) and is very quiet. I will get two more to complete a fault tolerant setup.

As to the server, it's decent. I could have gone for a more energy efficient or quieter setup, but that would have cost more money than I am willing to spend. Bottom line is that a) I get 250MB/s sequential reads off the Windows iSCSI-initiated disks and b) I can virtualize all the shit I want to.
 

DaGaffer

Down With That Sorta Thing
Joined
Dec 22, 2003
Messages
18,498

One of my WD Reds has failed in my Synology after less than a year. Which is irritating.
 

MYstIC G

Official Licensed Lump of Coal™ Distributor
Staff member
Moderator
FH Subscriber
Joined
Dec 22, 2003
Messages
12,442
One of my WD Reds has failed in my Synology after less than a year. Which is irritating.
Reds come with a server-class RMA though, don't they?
 

MYstIC G

Official Licensed Lump of Coal™ Distributor
Staff member
Moderator
FH Subscriber
Joined
Dec 22, 2003
Messages
12,442
Do they? I'd better check.
I believe that's the difference. I've got Greens, as Reds were not out at the time. My understanding is Blacks are Performance, Greens are Eco, and Reds are the cross between Eco and server-grade.
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,925
I have Greens in my NAS, as that thing will never be high performance anyway. Both have worked fine, and they're a mirror, so at least it's fault tolerant to a single unit failure. The performance(ish) set will be three disks in a RAIDZ. With the quad gigabit card I already get nice sequential speeds on one disk, as I've found that the iSCSI target software's one-connection limit can be worked around by offering up several LUNs to the initiator: a stripe across them lets me use as many gigabit connections as I have, which is nice (sketch below). In a week or two I will buy more disks and let you know how it goes :)
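
To spell out the ZFS side of that trick: it's just one zvol per LUN, so the initiator can open a separate connection to each. Something like this, where the pool name, volume names and sizes are only examples, and the export step depends on whichever iSCSI target software you run:

Code:
# one zvol per iSCSI LUN; tank/lunN are placeholder names
zfs create -V 500G tank/lun0
zfs create -V 500G tank/lun1
zfs create -V 500G tank/lun2
zfs create -V 500G tank/lun3
# each appears as /dev/zvol/tank/lunN - export each one as its own LUN/target,
# then build the stripe across them on the initiator (Windows) side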
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,925
mmmkay disks are done:

Code:
mypoolz                                        163G  5.28T     11    380  93.3K  2.48M
  raidz1                                       163G  5.28T     11    380  93.3K  2.48M
    ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M1627145      -      -      8     35  34.7K  1.80M
    ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M1641253      -      -      7     38  29.3K  1.79M
    ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M1641261      -      -      7     39  29.3K  1.79M

I had to do some data juggling to get this set up without losing the data I had. This is how I did it:

The scenario:
On a disk, let's call it A, I had some data in two volumes. This disk was on its own in a zpool, and my goal was to add two disks, B and C, and end up with a RAIDZ (without losing data).

The problem:
You can't make a RAIDZ with fewer than 3 disks, and disk A was already in a pool. There is no way to add two disks to the pool and somehow automagicfaff it into a RAIDZ.

What I did:
Well, I used the power of UNIX, and the awesome robustness of ZFS of course! First I made a 2TB sparse file (my WD Red disks are 2TB and I figured the sizes needed to match up for the RAIDZ). A sparse file is one whose size is known to the OS, but whose contents aren't actually allocated on disk. I had to do this because I certainly do not have 2TB lying about on my ickle OS SSD. So, while in reality the file takes up almost no space, when the OS queries the filesystem it looks muuuuuch larger.

Code:
# write one 1k block at an offset of 2,000,000,000k, i.e. a ~2TB sparse file
dd if=/dev/zero of=/root/myballs bs=1k count=1 seek=2000000000

Then I told the OS to attach this file to a loop device like so:
Code:
losetup /dev/loop0 /root/myballs

I could now introduce this file to ZFS, so I made a zpool with disks B and C and the loop device like so:

Code:
# pool options like ashift go before the pool name; the disk IDs are the /dev/disk/by-id/ names
zpool create -o ashift=12 mypoolz raidz /dev/disk/by-id/<diskB's ID> /dev/disk/by-id/<diskC's ID> /dev/loop0

Then I offlined the loop device so I didn't fuck anything up:

Code:
# the pool will show as DEGRADED from here on, which is expected
zpool offline mypoolz loop0

I made ZFS snapshots of the volumes I had created in the old pool mypool, games0 and games1, and sent them over to the new pool mypoolz. Note: the pool mypoolz was showing as DEGRADED in zpool status, but this is only natural as I had made a RAIDZ with three devices and then taken one away, i.e. the loop device.

Code:
zfs snapshot mypool/games0@snap_games0
zfs snapshot mypool/games1@snap_games1
zfs send mypool/games0@snap_games0 | zfs receive mypoolz/games0@snap_games0
zfs send mypool/games1@snap_games1 | zfs receive mypoolz/games1@snap_games1

note: the above took about 40 mins for 1TB to be sent over.

After this I tidied up (the receive had already created the volumes games0 and games1 on the new pool) and made some new volumes like so:

Code:
# games0 and games1 already exist on mypoolz courtesy of the receive; drop the snapshots
zfs destroy mypoolz/games0@snap_games0
zfs destroy mypoolz/games1@snap_games1
zfs create -V 500G mypoolz/games3
zfs create -V 500G mypoolz/games4

After this was all done I destroyed the original pool mypool, and added disk A, which had become free, into the pool mypoolz as a replacement for the loop device:

Code:
zpool destroy mypool
# disk A takes the loop device's slot and resilvers, completing the RAIDZ
zpool replace mypoolz loop0 /dev/disk/by-id/<diskA's ID>

The new volumes then went into my iSCSI setup, and I set them up on my Windows box. Suddenly I realized I had been in error: although my Win7 install recognized the iSCSI LUNs it had known before, i.e. games0 and games1, I did not have a way to add the new LUNs games3 and games4 to the existing stripe without destroying it first. Ho-hum, so all my creative mucking about with ZFS snapshots was for naught: I simply copied everything off the drive made up of games0 & games1 onto a handy USB3 disk I had, destroyed the existing stripe, made a new one consisting of all four LUNs and gave it a whirl in ye olde ATTO disk benchmark tool.

[attached image: disks.png (ATTO benchmark results)]

I haven't really tuned anything, and I actually ran this while updating some Steam games and Diablo 3 on the stripe at the same time, but hey ho, on the whole I am happy with the speeds, all things considered. So, what are we looking at? I have two quad 1Gb network cards, and I've bonded their interfaces together, meaning that, for example, in Win7 the quad card is seen as a single network interface with a speed of 4Gb. I wanted 4 iSCSI LUNs in a stripe so Win7 would have 4 connections to the server via its built-in iSCSI initiator (which is pretty nifty btw). I wanted 4 connections because the iSCSI target I chose to run on the server only allows a single connection to each LUN it provides, so with one LUN I would never max out the quad card. This also ties into the difference (imo) between the max write (~115MB/s) and max read (~390MB/s): writing to the server is always done over a single interface within the bond, so 115MB/s is not really bad for a single 1Gb connection, and by not really bad I mean it's about perfect tbh. Read speeds are also about right for 4x1Gb, but could possibly be ever-so-slightly faster if I tweak some things or maybe get better cables.
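
The bonding itself is nothing exotic: on Debian it's an ifenslave-style stanza in /etc/network/interfaces, roughly like the sketch below. The interface names, address and bonding mode here are placeholders, not a copy of the actual config.

Code:
# /etc/network/interfaces - sketch only; names, address and mode are placeholders
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth1 eth2 eth3 eth4
    bond-mode balance-rr
    bond-miimon 100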

In terms of real world performance, I can drag a 1GB zip file from the stripe to my desktop and it's basically there before I let go of my mouse, so to speak. I'm happy with the flexibility the iSCSI offers me, and the extra space ofc. This setup is going to be used for lots of things, and the Linux server makes it all possible. Only one thing: the choice to stick all three WD Reds in a single RAIDZ setup is a little bit inflexible: you can't really add disks to a RAIDZ after setting it up without doing all kinds of data juggling, and that sucks. If I had *really* wanted to be optimally flexible, and have redundancy on top of that, I would have gone for multiple mirrors or sets of RAIDZ disks, with a ZFS stripe on top of that. Still, choices, choices eh :)
 

caLLous

I am a FH squatter
FH Subscriber
Joined
Dec 23, 2003
Messages
18,518
Nerd.

:)

I have 8 of the 4TB Reds in mine at the moment. At some point I will be buying 8 more. I have to do it in one lump because of how my array is set up right now. I have the 8 disks in a raidz2, but what I'm going to aim for is a 10-disk raidz3 and a 6-disk raidz2.

So I'm going to have to bite the bullet and use 6 of the new disks in either a striped array or just as separate disks, and copy all of the data from my existing 8-disk raidz2 to them (as you said in the other thread, ZFS steals a small percentage of the array space for its own gubbins). That will then free up the 8 disks in the current raidz2, which I can destroy and then use (with the other 2 new ones) to make the 10-disk raidz3, which I will then use to take the data from the 6 standalone disks. I will then be able to take those disks and make the 6-disk raidz2 and hey presto, I'll have my 10-disk raidz3 and my 6-disk raidz2 with (touch wood) no loss of data.

It's the only way I can think of to get everything how I want. I'd like to not have to buy all the new disks together, but there's no way of doing it where I can introduce them 4 at a time or whatever.
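
Roughly, the shuffle would look like this in zpool/zfs terms. Pool and device names are placeholders, and the temporary pool is a plain stripe with no redundancy, so fingers crossed during the copy:

Code:
# temporary pool striped across the 6 new disks - no redundancy, just for the move
zpool create scratch <new1> <new2> <new3> <new4> <new5> <new6>
zfs snapshot -r tank@move
zfs send -R tank@move | zfs receive -dF scratch

# rebuild the freed 8 disks plus the 2 remaining new ones as the 10-disk raidz3
zpool destroy tank
zpool create tank raidz3 <10 disk IDs>
zfs snapshot -r scratch@moveback
zfs send -R scratch@moveback | zfs receive -dF tank

# finally, reuse the 6 scratch disks as the 6-disk raidz2
zpool destroy scratch
zpool create tank2 raidz2 <6 disk IDs>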
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,925
Yeah dude, that's the flexibility hit you get as a penalty for all the sweetness that ZFS gives you :)

Also... do you really need three disks' worth of parity? I know it pays to be prepared, but if you're getting the 4TB models, woo man, that's some expensive redundancy. Ofc I realize you have your business on the drives as well as your pr0n stash, no? In that case it may be worth it hehe :)
 

caLLous

I am a FH squatter
FH Subscriber
Joined
Dec 23, 2003
Messages
18,518
You're probably right tbh. Maybe two raidz2s: one with 9 disks and one with 7. I read somewhere (it may have been the Solaris ZFS best practices page) to go with z2 for 6+ disks and z3 for 9+. Probably overkill though.
 

caLLous

I am a FH squatter
FH Subscriber
Joined
Dec 23, 2003
Messages
18,518
For the number of disks in the storage pool, use the "power of two plus parity" recommendation. This is for storage space efficiency and hitting the "sweet spot" in performance. So, for a RAIDZ-1 VDEV, use three (2+1), five (4+1), or nine (8+1) disks. For a RAIDZ-2 VDEV, use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks. For a RAIDZ-3 VDEV, use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks. For pools larger than this, consider striping across mirrored VDEVs.
I don't know how important this is, but it's bugged me since I learned that my 6+2 array wasn't a "recommended" config. I might indeed be better off going for a 4+2 and an 8+2 with 16 disks, for conformity's sake, rather than having an extra disk's worth of space (which is always welcome).
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,925
Yes, tbh they are playing the risks and laws of averages and stuff, as well as performance (there's only so much you can shove through a single VDEV). Here's how we do it at workies: we have shelves with 40 disks, some SSDs and two RAM cards. You know what the latter are for, but the former become a striped mirror, i.e. RAID 1+0, which we view as the best possible performance/resilience tradeoff. Which is fun, because workies views disks as cheap as chips, and doesn't care that they have to buy for redundancy. I, on the other hand, am poor, and want the maximum possible amount of usable disk acreage rather than mirrors, mirrors everywhere. Frankly I doubt that you'll notice a performance hit with your terribad config ;-)
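
In ZFS terms that layout is just a stripe across mirror VDEVs, and the nice bit is you can grow it a pair at a time. A quick sketch with placeholder names:

Code:
# RAID 1+0 the ZFS way: the pool stripes across mirror VDEVs (device names are placeholders)
zpool create fastpool mirror <disk1> <disk2> mirror <disk3> <disk4>
# later on, grow the stripe by adding another mirror pair
zpool add fastpool mirror <disk5> <disk6>

That's also the flexible alternative mentioned above: unlike a RAIDZ, you can keep adding mirror pairs whenever you like.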
 

MYstIC G

Official Licensed Lump of Coal™ Distributor
Staff member
Moderator
FH Subscriber
Joined
Dec 22, 2003
Messages
12,442
ZFS is worth its weight in gold though. One of the drives in my NAS is starting to show signs of issues, so I scrubbed the pool the other day. It only has about 1MB of data errors on it, but you never know what files could have been affected. Now the files are sorted, and when I get the 5 hours I can run Western Digital's RMA diagnostic program on the drive and probably get it replaced without issue.
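
For reference, the scrub-and-check routine is just two commands (pool name is a placeholder):

Code:
zpool scrub tank      # re-reads every block in the pool and verifies it against its checksum
zpool status -v tank  # once the scrub finishes, -v lists any files with unrecoverable errors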
 
