10GBase-T network switch and NICs

Arfa

I'm in the middle of laying ethernet cable around a new house and decided to use Cat6a cable, patch panel, and sockets to future-proof things for 10 Gigabit. I still only have a 1 Gigabit switch and NICs in all the computers in the house, and although I know it's all backward compatible, I can't help but wonder if there is any 10GbE kit that's worth checking out (new or second-hand) that won't totally break the bank. Last time I looked it was £400+ a NIC and £500-600+ for a switch with a few ports on it, or it was all SFP+ fibre transceivers for links within a cabinet.

I've two PCs with PCIe slots (one being my main file server) and two older MacBook Pros with Thunderbolt 2 sockets. The switch is a 3Com Baseline 2924, though I'm only using about a dozen of its ports so far, and only 3-4 really need 10GbE.

So what's worth checking out for copper 10GbE over Cat6a RJ45 connections that's not crazy money?
 
As @oneman says, you can get some cheap(ish) cards these days. However, I would say that they run very hot, hence they generally have big heat sinks. Asus also do one with a big heat sink on it.

The 10GBase-T SFP+ transceivers also run very hot, and manufacturers often advise not to put them in adjacent SFP+ ports.

If you are going 10 gig then you probably want to go with managed switches, but it's not essential I guess.

I use MikroTik switches and they tend to be good value for money, but I use SFP+ cards with DACs in the PC and server, as well as a couple of 10 gig ports on the servers' motherboards, as they are short enough cable runs. It depends on your needs and budget really.
 
 
Yeah, they can get pretty hot. Usually not an issue on servers, but for a home PC make sure you have sufficient airflow.
 
Cool, thanks for the pointers. As it happens, I've just twigged my server only has x1 PCIe slots left free (it's got a RAID card in its only larger PCIe slot), and it seems all these 10GbE cards need x4 PCIe slots at least. So it looks like I'm going to shelve this plan for a while until I upgrade the server motherboard etc.
 
They will run in an x1 PCIe slot, but you probably won't get maximum performance out of it. Having said that, your disks might not keep up with the throughput anyway, depending on how they are configured.

I had fun and games where my older PC would drop the speed of the PCIe slot anyway, but it still managed to get full speed, although it was only a single-port card.
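For a rough sense of the numbers involved (the per-lane figures below are general PCIe specs, not anything measured in this thread): a gen 2 lane carries roughly 4 Gbit/s usable and a gen 3 lane roughly 8 Gbit/s, so a 10GbE card really wants an x4 slot to be sure the slot isn't the cap. A quick Python sketch of that arithmetic:

# Rough check of PCIe slot bandwidth against 10GbE line rate.
# Per-lane figures are approximate usable rates after encoding overhead.
PCIE_LANE_GBIT = {"2.0": 4.0, "3.0": 7.9}
TEN_GBE_GBIT = 10.0

def headroom(gen, lanes):
    """Ratio of slot bandwidth to 10GbE line rate (>1.0 means the slot isn't the bottleneck)."""
    return PCIE_LANE_GBIT[gen] * lanes / TEN_GBE_GBIT

for gen, lanes in [("2.0", 1), ("3.0", 1), ("2.0", 4), ("3.0", 4)]:
    print(f"PCIe {gen} x{lanes}: {headroom(gen, lanes):.1f}x of 10GbE line rate")
# An x1 slot gives roughly 0.4-0.8x (about 4-8 Gbit/s at best),
# while an x4 slot has plenty of headroom.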
 
Yeah, I'm totally not expecting the disks to reach 10 Gigabit, so they'll be the bottleneck. However, I was expecting to at least exceed 1Gbit and make full use of the disks' speed. FWIW they're only magnetic disks, with read speeds of ~250 Megabytes/s (according to /sbin/hdparm).
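Just to put that hdparm figure in line-rate terms (plain unit conversion, nothing new measured):

# Convert the ~250 MB/s hdparm read speed into network line-rate terms.
disk_mb_per_s = 250
disk_gbit_per_s = disk_mb_per_s * 8 / 1000      # ~2.0 Gbit/s
print(f"{disk_mb_per_s} MB/s is roughly {disk_gbit_per_s:.1f} Gbit/s")
# Gigabit ethernet carries roughly 0.94 Gbit/s of TCP payload (~118 MB/s),
# so the disks should comfortably outrun 1GbE while staying far below 10GbE.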
 
Yeah, you'll probably get away with it then. Having said that, my new server motherboard had 2 x 10GbE ports built in anyway, so I ended up putting both a RAID card and an SFP+ network card in as well.

Also, small files on spinning rust will give you a lot lower throughput, so it's possibly worth looking at a solid-state cache of some description.
 
Fair enough. Most of my data is large 4K video files, so it should fare better than lots of small files. The other main use case is a Time Machine target for a pair of MacBook Pros, which could be a bit more varied; however, that will be limited by the disks' write speed.
 
I could never get Time Machine to work properly with Windows network drives; it would just give up. I would have thought the Mac does the processing, so the files being written are larger, but I could be wrong.
 
Re: Time Machine - I just used netatalk on CentOS, which creates some AFP shares, including one with a specific Time Machine designation. It happily runs alongside the other SMB Windows shares. Pretty straightforward to set up and works grand.
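In case it helps anyone doing the same, the share definition only needs a few lines. This is a minimal sketch in the netatalk 3.x afp.conf style; the share name, path, and size cap below are made-up examples rather than my actual config:

[Global]
; defaults are fine for a basic setup

[Time Machine]
path = /srv/timemachine
time machine = yes
; optional: cap the space Time Machine can claim, in MiB
vol size limit = 500000

Restart the netatalk service after editing and the share should show up as a Time Machine destination on the Macs.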
 
Yeah, I'm totally not expecting the disks to reach 10 Gigabit, so they'll be the bottleneck. However, I was expecting to at least exceed 1Gbit and make full use of the disks' speed. FWIW they're only magnetic disks, with read speeds of ~250 Megabytes/s (according to /sbin/hdparm).
Even with an x1 slot, your disks are probably going to be the limiting factor.

Which RAID card are you using? I've got 5 x 5900rpm Seagate IronWolf drives and I'm getting 600 to 700 MB/s read and around 500 MB/s write.
 
I have an HP LSI SAS 9212-4i 4-port 6Gb/s RAID card (Broadcom/LSI SAS2008 chip, mpt3sas kernel module). Currently I've got a pair of 1TB WDC WD10EZRX-00A8LB0 and a pair of 3TB 7200rpm TOSHIBA DT01ACA300 disks, each pair mirrored (RAID1). Nothing exceptional TBH, and I do plan to replace the 1TB pair with something bigger in the near future. So, undoubtedly they'll be the limiting factor once I get above Gigabit ethernet.
 
I have an HP LSI SAS 9212-4i 4-port 6Gb/s RAID card (Broadcom/LSI SAS2008 chip, mpt3sas kernel module). Currently I've got a pair of 1TB WDC WD10EZRX-00A8LB0 and a pair of 3TB 7200rpm TOSHIBA DT01ACA300 disks, each pair mirrored (RAID1). Nothing exceptional TBH, and I do plan to replace the 1TB pair with something bigger in the near future. So, undoubtedly they'll be the limiting factor once I get above Gigabit ethernet.
For sure, those drives are going to be a limitation, especially the 1TB ones.
 
Wow - still rocking a 3Com switch! :D It must be about 10 years ago, at the time of writing, that HP bought them out.

Not that there's anything "wrong" with using old kit if it's still working. Solid-state electronics doesn't usually "wear out" (at least not on human timescales) or get slower with age. In my experience it's mostly the things that spin (like fans and HDDs) that pack up, or thermal effects "do" for them - particularly in power supplies - or they get filled up with dust/flour (yes, really - I used to work for a company that owned a few cake factories), which sort of "concretes" the innards and/or restricts airflow, so "heat" again causes issues. I find that when electronic stuff does pack up, it tends to be catastrophic rather than a gradual reduction in capabilities.
 
Hehe, yeah, the 3Com Baseline is still going strong. I picked it up dirt cheap many years ago as the admin password had been lost, but I managed to reset it using a serial-to-RJ45 console cable over an old-school COM connection and some carefully timed key combos! I did swap the fans out for quieter ones, and they recently needed a bit of a blast with GT85 to clear the dust and crud out of them, but otherwise I haven't been able to fault it.

"If it ain't broke, don't fix it!"

Ultimately, I'll probably use the 3Com alongside a small 10GbE switch to cover everything else in the house that's still on 1Gb.
 
Yeah, I had my first switch die on Tuesday - I'd had it maybe 15 years, not sure. Unplugged it as I was having a new desk arrive. Came to plug it back in and it went phut and there was a burning smell - no power, dead. If I hadn't unplugged it I wonder how much longer it would have lasted; I mean, it had been on for 18 months solid and was never powered off (the last time was a power cut).
 
Yeah, I had my first switch die on Tuesday - I'd had it maybe 15 years, not sure. Unplugged it as I was having a new desk arrive. Came to plug it back in and it went phut and there was a burning smell - no power, dead. If I hadn't unplugged it I wonder how much longer it would have lasted; I mean, it had been on for 18 months solid and was never powered off (the last time was a power cut).

Integrated power supply or separate? Some of the Netgear ones have a lifetime warranty; I think I had one replaced after about 7 years once!
 
External PSU, but the smell and puff of smoke was from the switch; I know because it was in my hands at the time. It was a Netgear, actually.
 
If it's one of the metal GS ones then they did have a lifetime warranty. I think I just provided a serial number (I don't think I provided a receipt) and they sent me the equivalent new switch, so if yours is, you might get a new one back.
 
I've only had one switch fail at home, a TP-Link. An annoying intermittent fault where it would drop packets. More annoyingly, I put it aside and forgot about it. Years later I gave it to my son for uni, and of course he had problems with it, causing him much annoyance, lol.

These days I stick with the metal-cased Netgear ones.
 
For your interest, since starting this thread I've made some progress and sorted out some 10GbE kit.

Picked up a Solarflare SFN5161T dual 10GbE network card for my server - £40 second-hand.
A Promise SANLink2 Thunderbolt 2 dual 10GbE external box - £70 second-hand (possibly overpaid a little here).
A Zyxel XS1930-10 switch with 8x 10GbE RJ45 + 2x SFP+ ports - £300 off eBay, but brand new. Not cheap, but a good price for this kit. The only thing close that I saw cheaper was a Netgear XS505M 5-port switch for just over £200.

Now I'm getting some nice speeds: iperf report:
------------------------------------------------------------
Client connecting to boadicea.local, TCP port 5001
TCP window size: 129 KByte (default)
------------------------------------------------------------
[ 5] local 192.168.1.184 port 59922 connected with 192.168.1.136 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 5]  0.0-10.0 sec   10.1 GBytes  8.67 Gbits/sec
Arthurs-MacBook-Pro-2:iperf-2.0.5-macos-x86_64 arthur$
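For reference, that result was just stock iperf 2 defaults (a single TCP stream for 10 seconds), roughly:

iperf -s                     # on the server
iperf -c boadicea.local     # on the MacBook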

My server's main drives are three 8TB Toshiba N300 NAS drives in RAID5.
BlackMagic Disk Speed tests across the network are reporting about 360MB/s read, 170MB/s write. So, well above Gigabit speeds and maxing out at the drives' own speeds.

Either way, not too shabby and should be plenty fast for a good few years.
 
