Posted: 5/18/2016 11:04:15 AM EDT
So, I'm in the process of building a home server.  I have one desktop that will be transferring large amounts of data to and from the server, and I thought it'd be nice to have a 10Gb connection between the two.  I've been looking on eBay, and 10Gb RJ-45 cards can be found all day long for around $100-$200, which I think is really affordable.

Basically, I'm thinking about having the server and desktop on a 10Gb subnet with a direct connection, but the server also needs to be on the regular 1Gb network so everything else can access it.  So, I did some Google searching to see what others have done and how they set it up.  But instead of finding answers, I found a bunch of people (in several posts on different websites) questioning the original poster about why he needs a 10Gb connection.  Who really cares why he wants it?  When did "because I want it" and "that's what I want to spend my money on" stop being good enough?  Guess that goes for more than computers and networking.
Link Posted: 5/18/2016 11:16:21 AM EDT
[#1]
Get a switch with dual-fabric GbE and XGE Ethernet. You typically see them in 24GbE/4XGE or 48GbE/4XGE formats.  The copper-style RJ-45 XGE ports are not as common as the SFP+ ports for XGE, so they may be harder to find and/or more expensive.  I'm not really familiar with the used market, so YMMV. With SFP+ you'd need DACs, or transceivers and MM fiber patches, to link the NICs to the XGE switch ports.  No special configuration is needed for those ports to work at XGE or to communicate with the GbE ports.

You can pretty much ignore the fact that you have 2 different bandwidth rates. There is no need to think of the network as one subnet for XGE and one for GbE.  It can be a flat network.  A PC with an XGE NIC will communicate with a server with an XGE NIC at that rate (assuming both connect via the switch's XGE ports) and with everything else at GbE.  It's just a switch with some extra high-bandwidth ports.
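For illustration, that flat layout might look something like this (the addresses and port counts here are made up, not from the post above):

One 24GbE/4XGE switch, one flat 192.168.1.0/24 network:
  Server          -> XGE port,  192.168.1.10
  Desktop         -> XGE port,  192.168.1.20
  Everything else -> GbE ports, 192.168.1.x
Server-to-desktop traffic runs at 10Gb; everything else talks to the server at 1Gb, with no extra subnets or routes.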



Link Posted: 5/18/2016 12:52:12 PM EDT
[#2]
Well, the bottleneck won't be at layer 0. LOL
Link Posted: 5/18/2016 8:46:03 PM EDT
[#3]
Quoted:
Well, the bottleneck won't be at layer 0. LOL

Somehow I just don't think that bottlenecks are a real concern for the home server guy who wants to play with some 10 GbE connectivity on a limited basis.
Link Posted: 5/19/2016 11:14:11 AM EDT
[#4]
Quoted:
Get a switch with dual-fabric GbE and XGE Ethernet. [...] It's just a switch with some extra high-bandwidth ports.

I would do this... Much less hassle.

Just curious, what constitutes large amounts of data?
Link Posted: 5/19/2016 11:22:19 AM EDT
[#5]
Yep, what you say is good to go. Alternatively, you can get a dual-port 10Gb NIC, or a 10Gb NIC plus a 1Gb NIC, and bridge the ports together so everything is on the same network.
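For example, if the server runs Linux, the bridge might look something like this (the interface names are placeholders, not anything from this thread):

# create a bridge and attach the 10Gb and 1Gb NICs (names are hypothetical)
ip link add name br0 type bridge
ip link set enp3s0f0 master br0     # 10Gb port going to the desktop
ip link set enp5s0 master br0       # 1Gb port going to the LAN switch
ip link set br0 up
# put the server's LAN address on the bridge instead of the individual NICs
ip addr add 10.10.11.10/24 dev br0

On Windows the equivalent is selecting both adapters and choosing "Bridge Connections" in the adapter settings.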
Link Posted: 5/19/2016 11:13:15 PM EDT
[#6]
I wish I had hardware fast enough to keep up with even a 1 gig link.
Link Posted: 5/19/2016 11:16:37 PM EDT
[#7]
porn
Link Posted: 5/20/2016 12:20:22 AM EDT
[#8]
Quoted:
porn

The foundation of all advances in high technology.

Link Posted: 5/20/2016 1:40:28 AM EDT
[#9]
What are you doing on a home network that you're saturating a 1Gb link?

The cheapest way would be to do 10Gb copper NICs in the server and desktop with a patch cable between them on a private network, plus a <10Gb switched/routed network for everything else.

Server:
e0: 10.10.10.1 255.255.255.252 (10Gb)
e1: 10.10.11.10 255.255.255.0, route 0.0.0.0 10.10.11.1

Workstation:
e0: 10.10.10.2 255.255.255.252 (10Gb)
e1: 10.10.11.11 255.255.255.0, route 0.0.0.0 10.10.11.1

With hosts file entries on both machines pointing to the 10.10.10.x IPs.
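A minimal sketch of those hosts file entries, assuming the addresses above (the hostnames are made up; use whatever name you actually map or mount by, so the traffic rides the 10Gb link):

# on the workstation (/etc/hosts, or C:\Windows\System32\drivers\etc\hosts)
10.10.10.1   homeserver-10g

# on the server
10.10.10.2   desktop-10g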
Link Posted: 5/20/2016 2:16:47 AM EDT
[#10]
Quoted:
What are you doing on a home network that you're saturating a 1Gb link?


Why do you need 30-round magazines for an AR? Why does someone need a 500 HP car in their garage? Why does anyone need a shiny compressed piece of carbon on their finger/neck/earlobe/etc.?

Because he wants to - at least, that's what I read from the OP.

I have a box that can generate 400Gbps of traffic with two of these in a 4x bond. I moved up to those after building storage arrays that can saturate multichannel 40GbE and 56Gb FDR InfiniBand lines. Why? Because it's completely awesome. And that's a good enough reason for me.

OP - You can do a PTP link at 10GbE with a second connection at 1GbE, but I'd really suggest you go with a multifabric switch like brassburn suggests. The switch eliminates the need for multiple subnets, and it nets you additional ports if you want to add more 10GbE clients later. It also eliminates the need for an extra 1GbE line on the 10GbE boxes, since the switch does the rate matching for you.
Link Posted: 5/20/2016 9:51:27 AM EDT
[#11]
Quoted:
Quoted:
Well, the bottleneck won't be at layer 0. LOL

Somehow I just don't think that bottlenecks are a real concern for the home server guy who wants to play with some 10 GbE connectivity on a limited basis.
Sorry,  I'm just a network admin at a place that throws raw HD video all over the place all day and night. What do I know about these things?
Link Posted: 5/20/2016 9:56:54 AM EDT
[#12]
Quoted:
The cheapest way would be to do 10Gb copper NICs in the server and desktop with a patch cable between them on a private network, plus a <10Gb switched/routed network for everything else. [...]



THIS is the right answer.  The fabric switch solution above is WAY too expensive.

I have the exact scenario already set up in my lab.

I have two servers that need to transfer a LARGE amount of data very quickly between them.  I have two 10GbE Base-T cards with a direct patch cable between them.  I do product demos of Live Migration, a Hyper-V feature, and I regularly have to demonstrate moving 96GB worth of live VMs from one server to another.
https://blogs.technet.microsoft.com/kevinholman/2013/07/01/hyper-v-live-migration-and-the-upgrade-to-10-gigabit-ethernet/

Then each server has its internet/network-facing NIC for normal stuff.

No gateway configured on the 10GbE NICs, and hosts file entries to ensure they use the private network for peer-to-peer communication.


I bought my 10GbE NICs for $25 each.

DON'T get a NIC with a fan.  The fan dies, then the NIC shuts down.  Get an Intel X540-based NIC with a large heatsink on it, like most of them come with.

What models are you looking at?


Also - make sure you have a PCI Express slot that is wired for this type of card.  These need a slot wired for 8-lane traffic (x8), a feature often found only in servers.  I use Dell Precision T7500s.
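As a sketch, on Windows that 10GbE side might be configured like this (the interface alias and addresses are placeholders, not anything from the post above):

# PowerShell (admin): give the 10GbE NIC its point-to-point /30 address, with no default gateway
New-NetIPAddress -InterfaceAlias "10GbE" -IPAddress 10.10.10.1 -PrefixLength 30
# sanity check - no default gateway should be listed on this interface
Get-NetIPConfiguration -InterfaceAlias "10GbE"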
Link Posted: 5/20/2016 1:36:31 PM EDT
[#13]
Quoted:
THIS is the right answer.  The fabric switch solution above is WAY too expensive. [...] Also - make sure you have a PCI Express slot that is wired for this type of card.
Awesome.  Thanks.  I have just been looking at cards like this on eBay:

http://www.ebay.com/itm/171979516430?_trksid=p2060353.m1438.l2649&ssPageName=STRK%3AMEBIDX%3AIT

http://www.ebay.com/itm/262113959205?_trksid=p2060353.m1438.l2649&ssPageName=STRK%3AMEBIDX%3AIT

http://www.ebay.com/itm/311602237942?_trksid=p2060353.m1438.l2649&ssPageName=STRK%3AMEBIDX%3AIT

I'm running an X99 Sabertooth in my desktop, so I have the 8-lane slot taken care of.  I haven't started my server build yet.
Link Posted: 5/20/2016 2:54:44 PM EDT
[#14]
Quoted:
Awesome.  Thanks.  I have just been looking at cards like this on eBay: [...]


Yep, that's exactly the card I'd buy today.  They were $350 back when I bought my Broadcoms for crazy cheap.  I just wish mine didn't have fans; they only last a few months and then puke.  Luckily I have 10 of them.

That card will ROCK, and being dual-port it gives you options down the road.

No need for "crossover" cables... 10GbE is auto MDI/MDI-X.  Just use a good quality Cat5e-or-better cable.

Link Posted: 5/20/2016 4:33:07 PM EDT
[#15]
Quoted:
That card will ROCK, and being dual-port it gives you options down the road. [...]

Do those dual-port cards support channel bonding?  If they do, I don't see the downside to enabling it if they're on a private network.
Link Posted: 5/20/2016 4:47:52 PM EDT
[#16]
Quoted:
Do those dual-port cards support channel bonding?  If they do, I don't see the downside to enabling it if they're on a private network.


Windows would handle that without anything at the network layer; it's a function of SMB3 Multichannel.  But yes, I'm sure they do at the driver layer as well.

However, you have to work hard to saturate a 10GbE link; typically there are other bottlenecks.  But if you're in the "why not" game already... I guess "why not".
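If you do bond them, or just dual-home the boxes, a quick way to see whether SMB Multichannel is actually spreading traffic across the ports during a copy (PowerShell, Windows 8/Server 2012 or later):

# connections SMB currently has open, per local/remote interface pair
Get-SmbMultichannelConnection
# which local NICs the SMB client considers usable (link speed, RSS, RDMA)
Get-SmbClientNetworkInterface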
Link Posted: 5/20/2016 5:04:58 PM EDT
[#18]
Quoted:
However, you have to work hard to saturate a 10GbE link; typically there are other bottlenecks.  But if you're in the "why not" game already... I guess "why not". [...]

That's kind of my thinking... for the cost of a second cable and a minute or so of tweaking the OS, what's there to lose?
Link Posted: 5/21/2016 4:28:32 PM EDT
[#19]
If you have NICs that use SFP/SFP+ ports, I think you can use these to do back-to-back connections. No need to buy fiber & SFP modules. (Please correct me if I'm wrong, as I'll be doing this soon.)

Monoprice 10G SFP+ cables

I'm pretty sure you can find better prices on lower-brand stuff, but this is what I found first.
Link Posted: 5/21/2016 4:33:52 PM EDT
[#20]
Quoted:
If you have NICs that use SFP/SFP+ ports, I think you can use these to do back-to-back connections. [...]


I have always heard this - that you can find cheap $50 Mellanox cards and use an SFP+ DAC cable to do a peer-to-peer connection - but I have never tried it, and it is hard to find good info on it, what to buy, etc.
Link Posted: 5/21/2016 6:19:55 PM EDT
[#22]
I don't like the copper SFP+ cables because they are stiff and can put strain on the card. Hard to do cable management with them. I stick to RJ-45 style copper NICs for short runs (under 30 ft if Cat5e). Fiber otherwise.
Link Posted: 5/22/2016 12:23:31 AM EDT
[#23]

Quoted:
If you have NICs that use SFP/SFP+ ports, I think you can use these to do back-to-back connections. No need to buy fiber & SFP modules. [...]
Yep. We use all twinax connectors between our UCS chassis and interconnects in our data center.

 
Link Posted: 5/22/2016 8:58:31 PM EDT
[#24]
Quoted:
I don't like the copper SFP+ cables because they are stiff and can put strain on the card. [...]

Yeah, those are just the poor man's direct-attached storage.
Link Posted: 5/22/2016 8:59:22 PM EDT
[#25]
I'm not for telling a guy what he should and shouldn't buy, but I doubt you could saturate a LAG with two GBE connections from a workstation.




Link Posted: 5/22/2016 10:32:18 PM EDT
[#26]
Quoted:
I'm not for telling a guy what he should and shouldn't buy, but I doubt you could saturate a LAG with two GBE connections from a workstation.


I can, easily.  I saturate a 10GbE connection, as shown in my link above.  However, that is primarily copying memory to memory.  If it were disk to disk, I'd tend to agree.
Link Posted: 5/27/2016 1:06:32 AM EDT
[#27]
Link Posted: 5/27/2016 8:56:47 AM EDT
[#28]

Quoted:
I can get over 800 MB/s over NFS using Linux or Mac clients to a Linux/ZFS server using Intel 10GbE NICs and the cheap Netgear switch, over copper.

Wait one....

These words... "cheap", "10G", and "Netgear" generate a bit of cognitive dissonance when used in the same sentence.

Netgear and cheap are OK, but I just can't fold 10G Ethernet into that image without making my brain divide by zero.
Link Posted: 5/27/2016 9:32:12 AM EDT
[#29]
Quoted:
These words... "cheap", "10G", and "Netgear" generate a bit of cognitive dissonance when used in the same sentence. [...]

It's all relative.  By cheap he means roughly $800 for an 8-port 10Gbps switch, and $800 is pretty cheap compared to, say, $5,000.
Link Posted: 5/27/2016 10:51:57 AM EDT
[#30]
Link Posted: 5/27/2016 10:58:09 AM EDT
[#31]
Was moving 2.5GB/s @ 15k IOPS off an SSD Pure Storage SAN yesterday
Rebooted several hundred VMs at once.
Link Posted: 5/30/2016 9:02:51 PM EDT
[#32]
Quoted:
It's all relative.  By cheap he means roughly $800 for an 8-port 10Gbps switch, and $800 is pretty cheap compared to, say, $5,000.

What about just using a PC and using it as a DHCP server after throwing 4 or 5 used Intel 10Gbps cards in it?

I have never fooled around with 10Gbps stuff at all, and I have been wanting to play with it, but I have always gone for the cheapest/most painful way to do stuff like this. Well, that is usually the best way to learn.
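As a rough sketch of that idea on a Linux box - bridge the cards so the PC acts like a switch, then hand out leases itself (the interface names, addresses, and DHCP range below are all made up):

# bridge the 10Gb NICs so the PC behaves like a dumb switch
ip link add name br0 type bridge
ip link set enp1s0f0 master br0
ip link set enp1s0f1 master br0
ip link set enp2s0f0 master br0
ip link set br0 up
ip addr add 10.10.11.1/24 dev br0

# then serve DHCP on the bridge with dnsmasq (/etc/dnsmasq.conf):
#   interface=br0
#   dhcp-range=10.10.11.100,10.10.11.200,12h

Just be aware that a software bridge can lean on the CPU at 10Gb rates, which is part of why people keep pointing at a dedicated switch.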

Link Posted: 5/30/2016 10:47:44 PM EDT
[#33]
It looks like there are several options to turn a spare PC into a switch/router. Just don't make assumptions and get bitten like I did yesterday - I had 3 PCIe x8 cards but only found 2 PCIe x8 slots on my motherboard... Oopsy!

PFSense, Monowall, Vyatta (VyOS), OpenWRT, Sophos, etc.

http://vyos.net/wiki/Main_Page
https://openwrt.org/
http://www.practicallynetworked.com/networking/convert_old_pc_to_new_router.htm
https://www.pfsense.org/
Sophos Free Tools (including two different firewall type programs)

Hmmm... This is starting to look REALLY interesting....

(edit - linkified things for the lazy)


 