Posted: 5/18/2016 11:04:15 AM EDT
So, I'm in the process of building a home server. I have one desktop that will be transferring large amounts of data to and from the server, and I thought it'd be nice to have a 10Gb connection between the two. I've been looking on eBay, and 10Gb RJ45 cards can be found all day long for around $100-$200, which I think is really affordable.
Basically, I'm thinking about having the server and desktop on a 10Gb subnet with a direct connection, but the server also needs to be on the regular 1Gb network so everything else can access it. So, I did some Google searching to see what others have done and how they set it up. But instead of finding answers, I found a bunch of people (in several posts on different websites) questioning the original poster about why he needs a 10Gb connection. Who really cares why he wants it? When did "because I want it" and "that's what I want to spend my money on" stop being good enough? Guess that goes for more than computers and networking.
Get a switch with dual-fabric GBE and XGE Ethernet. You typically see them in 24GBE/4XGE or 48GBE/4XGE formats. The copper-style RJ-45 XGE ports are not as common as the SFP+ ports for XGE, and so may be harder to find and/or more expensive. I'm not real familiar with the used market, so YMMV. With SFP+ you'd need DACs, or transceivers and MM fiber patches, to link the NICs to the XGE switch ports. No special configuration is needed for those ports to work at XGE or to communicate with the GBE ports.
You can pretty much ignore the fact that you have two different bandwidth rates. There's no need to think of the network as one subnet for XGE and one for GBE; it can be a flat network. A PC with an XGE NIC will communicate with a server with an XGE NIC at that rate (assuming both connect via the switch's XGE ports), and with everything else at GBE. It's just a switch with some extra high-bandwidth ports.
Quoted:
Get a switch with dual fabric GBE and XGE Ethernet. [snip] It's just a switch with some extra high bandwidth ports.
I would do this... Much less hassle. Just curious, what constitutes large amounts of data?
Yep, what you describe is good to go. Alternatively, you can get a dual-port 10Gb NIC, or a 10Gb NIC plus a 1Gb NIC, and bridge the ports together so everything is on the same network.
I wish I had hardware fast enough to keep up with even a 1 gig link.
What are you doing on a home network that you're saturating a 1Gb link?
The cheapest way would be to do 10Gb copper NICs in the server and desktop with a patch cable between them on a private network, plus a <10Gb switched/routed network for everything else.
Server:
e0: 10.10.10.1 255.255.255.252 (10Gb)
e1: 10.10.11.10 255.255.255.0, route 0.0.0.0 10.10.11.1
Workstation:
e0: 10.10.10.2 255.255.255.252 (10Gb)
e1: 10.10.11.11 255.255.255.0, route 0.0.0.0 10.10.11.1
With hosts file entries on both machines pointing at the 10.10.10.x IPs.
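That addressing plan can be sanity-checked with a few lines of Python (an illustrative sketch using the stdlib `ipaddress` module; the IPs are the ones from the post above):

```python
import ipaddress

# The point-to-point 10Gb link lives on its own tiny /30,
# separate from the everyday /24 the rest of the LAN uses.
link_10g = ipaddress.ip_network("10.10.10.0/30")
lan_1g = ipaddress.ip_network("10.10.11.0/24")

server = ipaddress.ip_address("10.10.10.1")
workstation = ipaddress.ip_address("10.10.10.2")

# Both 10Gb endpoints sit inside the /30, so they reach each
# other directly over the patch cable...
assert server in link_10g and workstation in link_10g

# ...and the /30 doesn't overlap the 1Gb LAN, so ordinary
# traffic (default route via 10.10.11.1) never touches it.
assert not link_10g.overlaps(lan_1g)

# A /30 has exactly two usable hosts: perfect for a direct cable.
print(len(list(link_10g.hosts())))  # → 2
```

The key property is the non-overlap: bulk transfers named via the hosts file ride the /30, while everything that needs a gateway stays on the /24.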
Quoted:
Quoted:
Well, the bottleneck won't be at layer 0. LOL
Somehow I just don't think that bottlenecks are a real concern for the home server guy that wants to play with some 10 GbE connectivity on a limited basis.
Sorry, I'm just a network admin at a place that throws raw HD video all over the place all day and night. What do I know about these things?
Quoted:
What are you doing on a home network, that you are saturating a 1Gb link? [snip]
THIS is the right answer. The fabric switch solution above is WAY too expensive. I have the exact scenario already set up in my lab: two servers that need to transfer a LARGE amount of data very quickly between them, with two 10GBase-T cards and a direct patch cable between them. I do product demos of Live Migration, a Hyper-V feature, and I regularly have to demonstrate moving 96GB worth of live VMs from one server to another. https://blogs.technet.microsoft.com/kevinholman/2013/07/01/hyper-v-live-migration-and-the-upgrade-to-10-gigabit-ethernet/
Then each server has its internet/network-facing NIC for normal stuff. No gateway is configured on the 10GbE NICs, and a hosts file ensures they use the private network for peer-to-peer communication. I bought my 10GbE NICs for $25 each. DON'T get a NIC with a fan; when the fan dies, the NIC shuts down. Get an Intel X540-based NIC with a large heatsink, like most of them come with. What models are you looking at? Also, make sure you have a PCI Express slot that is wired for this type of card. These need a slot wired for 8-lane traffic (x8), a feature often found only in servers. I use Dell Precision T7500s.
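Two quick back-of-envelope checks on the numbers in that post, at raw line rate with no protocol overhead (so real-world times will run somewhat longer; the ~4 Gb/s usable-per-lane PCIe 2.0 figure is an approximation, not from the thread):

```python
# Moving the 96GB of live VMs mentioned above, at line rate:
payload_bits = 96 * 10**9 * 8

for name, gbps in {"1GbE": 1, "10GbE": 10}.items():
    seconds = payload_bits / (gbps * 10**9)
    print(f"{name}: {seconds / 60:.1f} min")  # 12.8 min vs 1.3 min

# Why the card wants an x8 slot: a dual-port 10GbE NIC can move
# 20 Gb/s, PCIe 2.0 carries roughly 4 Gb/s usable per lane, so
# x8 (~32 Gb/s) has headroom while x4 (~16 Gb/s) would choke.
assert 8 * 4 > 20
assert 4 * 4 < 20
```

That order-of-magnitude gap (minutes versus tens of minutes) is the whole case for the 10Gb link.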
Quoted:
THIS is the right answer. The fabric switch solution above is WAY too expensive. [snip]
Awesome. Thanks.
I have just been looking at cards like these on eBay:
http://www.ebay.com/itm/171979516430?_trksid=p2060353.m1438.l2649&ssPageName=STRK%3AMEBIDX%3AIT
http://www.ebay.com/itm/262113959205?_trksid=p2060353.m1438.l2649&ssPageName=STRK%3AMEBIDX%3AIT
http://www.ebay.com/itm/311602237942?_trksid=p2060353.m1438.l2649&ssPageName=STRK%3AMEBIDX%3AIT
I'm running an X99 Sabertooth in my desktop, so I have the 8-lane slot taken care of. I haven't started my server build yet.
Quoted:
Awesome. Thanks. I have just been looking at cards like this on eBay... [snip]
Yep, that's exactly the card I'd buy today. They were $350 when I bought my Broadcoms for crazy cheap. I just wish mine didn't have fans; they only last a few months and then puke. Luckily I have 10 of them. That card will ROCK, and being dual-port it offers you options down the road. No need for "crossover" cables... 10GbE is auto MDI/MDI-X. Just use a good quality Cat5e or better cable.
Quoted:
Yep that's exactly the card I'd buy today. [snip] No need for "crossover" cables.... 10GBE is auto MDI/MDIx. Just a good quality cat5e+ cable
Do those dual-port cards support channel bonding? If they do, I don't see the downside to enabling it if they're on a private network.
Quoted:
Do those dual-port cards support channel bonding? If they do, I don't see the downside to enabling it if they're on a private network.
Windows would handle that without anything at the network layer; it's a function of SMB3. But yes, I'm sure they do at the driver layer. However, you have to work hard to saturate a 10GbE link; there are typically other bottlenecks. But if you are in the "why not" game already... I guess "why not".
Now you need to Why Not the shit out of your storage.
http://www.neweggbusiness.com/Product/Product.aspx?Item=9B-20-167-363&nm_mc=KNC-GoogleBiz-PC&cm_mmc=KNC-GoogleBiz-PC-_-pla-_-Solid+State+Disk-_-9B-20-167-363&gclid=CjwKEAjw6_q5BRCOp-Hj-IfHwncSJABMtDaiFJwoJtwYp4jCPDM2EJhNCM3qfcAlU62vGG5lAYX-phoC_63w_wcB
Do it, you know you want to.
Quoted:
Windows would handle that, without anything at the network layer. [snip] But if you are in the "why not" game already..... I guess "why not".
That's kind of my thinking... for the cost of a second cable and a minute or so tweaking the OS, what's there to lose?
If you have NICs that use SFP/SFP+ ports, I think you can use these to do back-to-back connections. No need to buy fiber & SFP modules. (Please correct me if I'm wrong, as I'll be doing this soon.)
Monoprice 10G SFP+ cables. I'm pretty sure you can find better prices on lesser-brand stuff, but this is what I found first.
Quoted:
If you have NICS that use SFP/SFP+ ports, I think you can use these to do back-back connections. [snip]
I have always heard this, that you can find cheap $50 Mellanox cards and use an SFP+ cable to do a peer-to-peer connection, but I have never tried it, and it is hard to find good info on it, what to buy, etc.
This is interesting:
They were using these: http://www.ebay.com/itm/Genuine-INTEL-Ethernet-Server-Adapter-10-Gbps-Dual-Port-X520-DA2-E10G42BTDA-/282041524671?hash=item41aafc21bf:g:A8oAAOSwxehXO3l2
and buying his recommended cable for $50: http://www.cablesondemand.com/pcategory/91/category/SFP%2B+CBL/URvars/Catalog/Library/InfoManage/SFP+_CABLES_(DIRECT_ATTACH).htm
Or take a chance on some $10 cables on eBay. Interesting.
Also, this is CHEAP: https://www.youtube.com/watch?v=Gp_4apKCVMc
I don't like the copper SFP+ cables because they are stiff and can put strain on the card; they're hard to do cable management on. I stick to RJ45-style copper NICs for short runs (under 30 ft if Cat5e), and fiber otherwise.
Quoted:
I don't like the copper sfp+ cables because they are stiff and can put strain on the card. [snip]
Yeah, those are just poor man's direct-attached storage.
I'm not for telling a guy what he should and shouldn't buy, but I doubt you could saturate a LAG with two GBE connections from a workstation.
Quoted:
I'm not for telling a guy what he should and shouldn't buy, but I doubt you could saturate a LAG with two GBE connections from a workstation.
I can, easily. I saturate a 10GbE connection, as shown in my link above. However, that is primarily copying memory to memory; if it were disk to disk, I'd tend to agree.
I can get over 800 MB/s over NFS using Linux or Mac clients to a Linux/ZFS server, using Intel 10GbE NICs and the cheap Netgear switch, over copper.
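For scale, here's what 800 MB/s means against a 10GbE link's theoretical ceiling (plain arithmetic in decimal megabytes, before Ethernet/IP/NFS overhead; an illustration, not a measurement):

```python
# 10 Gb/s is 1250 MB/s of raw line rate, so 800 MB/s of NFS
# payload is using roughly two thirds of the pipe.
nfs_MBps = 800
line_rate_MBps = 10 * 1000 / 8  # 1250 MB/s
print(f"{nfs_MBps / line_rate_MBps:.0%}")  # → 64%
```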
Quoted:
I can get over 800 MB/s over NFS using Linux or Mac clients to a Linux/ZFS server using Intel 10GbE NICs and the cheap Netgear switch, over copper.
Wait one.... These words... "cheap", "10g", and "netgear"... generate a bit of cognitive dissonance when used in the same sentence. Netgear and cheap are OK, but I just can't fold 10g Ethernet into that image without making my brain divide by zero.
Quoted:
Wait one.... These words... "cheap", "10g", and "netgear" generate a bit of cognitive dissonance when used in the same sentence. [snip]
It's all relative. By cheap he means roughly $800 for an 8-port 10Gbps switch, and $800 is pretty cheap compared to, say, $5000.
Yes, that one. The XS708E. I found one used on ebay for around $500. Haven't had any problems with it.
The other issue with 10GbE is that some of the NICs (I have Intel single- and dual-port) run extremely hot and need either rackmount-server-style airflow or a PCI-slot exhaust fan right next to their heatsink in workstation-style machines. For anyone who wants to move data and get work done over network storage, and who has SSDs and/or a high-performance disk array, regular 1GbE quickly becomes the bottleneck.
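The bottleneck claim checks out with typical numbers (the ~550 MB/s figure is a generic SATA SSD spec assumed for illustration, not something measured in this thread):

```python
# A single SATA SSD streams around 550 MB/s sequentially, but
# 1GbE tops out at 125 MB/s before overhead; 10GbE at 1250 MB/s.
ssd_MBps = 550            # assumed typical SATA 6Gb/s drive
gbe_MBps = 1 * 1000 / 8   # 125 MB/s
xge_MBps = 10 * 1000 / 8  # 1250 MB/s

assert ssd_MBps > gbe_MBps   # on 1GbE the network is the limit
assert ssd_MBps < xge_MBps   # on 10GbE the drive is the limit
print(f"1GbE hides ~{1 - gbe_MBps / ssd_MBps:.0%} of the SSD's speed")
```

With a striped array or NVMe the gap only widens, which is the whole argument for the upgrade.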
Was moving 2.5GB/s @ 15k IOPS off an SSD Pure Storage SAN yesterday.
Rebooted several hundred VMs at once.
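As an aside, those two figures imply an average I/O size (straight division, assuming the throughput and IOPS numbers describe the same workload):

```python
# 2.5 GB/s spread across 15,000 IOPS works out to fairly large
# average I/Os, which fits a mass VM boot (lots of big reads).
throughput_Bps = 2.5 * 10**9
iops = 15_000
avg_io_KiB = throughput_Bps / iops / 1024
print(f"~{avg_io_KiB:.0f} KiB per I/O")  # → ~163 KiB
```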
Quoted:
It's all relative. By cheap he means roughly $800 bucks for an 8 port 10Gbps switch, and $800 is pretty cheap compared to say $5000.
What about just using a PC as a DHCP server after throwing 4 or 5 used Intel 10Gbps cards in it? I have never fooled around with 10Gbps stuff at all, and I have been wanting to play with it. I have always gone for the cheapest/most painful way to do stuff like this; well, that is usually the best way to learn.
It looks like there are several options for turning a spare PC into a switch/router. Just don't make assumptions and get bitten like I did yesterday: I had 3 PCIe x8 cards but only found 2 PCIe x8 slots on my MB... Oopsy!
pfSense, m0n0wall, Vyatta (VyOS), OpenWRT, Sophos, etc.:
http://vyos.net/wiki/Main_Page
https://openwrt.org/
http://www.practicallynetworked.com/networking/convert_old_pc_to_new_router.htm
https://www.pfsense.org/
Sophos Free Tools (including two different firewall-type programs)
Hmmm... This is starting to look REALLY interesting.... (edit: linkified things for the lazy)
Copyright © 1996-2024 AR15.COM LLC. All Rights Reserved.