10GbE networking issues

I watched a Linus Tech Tips video fairly recently which covered this exact issue; from memory their experience wasn't massively different to yours, with the link not getting anywhere near the speed expected.
 
I watched a Linus Tech Tips video fairly recently which covered this exact issue; from memory their experience wasn't massively different to yours, with the link not getting anywhere near the speed expected.

I watched that too, but they were saying that about a simple Windows file transfer. None of my tests above use that; that's the point. A multi-threaded robocopy (which is what LTT used) confirmed the speeds iPerf is reporting. Windows file copy is much slower.
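In case it helps anyone reproduce the comparison, the kind of commands I mean are roughly these (the IP, share name and thread/stream counts are placeholders, not my exact setup):

iperf3 -c 10.10.10.2 -P 8
robocopy D:\testdata \\10.10.10.2\share\testdata /E /MT:16

The first runs 8 parallel TCP streams to the NAS; the second does a multi-threaded copy of a folder with 16 copy threads, which is broadly what LTT were doing.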
 
It'd be a real ball ache, but I suppose you could use a reductionist test plan and eliminate culpability bit by bit. Create the smallest network possible to establish some "known good" endpoints and infrastructure, then once you've established your baselines, progressively build back out until you've found what is culpable. Literally: create the simplest network of your best-performing switch and two endpoints, then build back out until you find the choke point.
 
I'm fairly confident, from the tests I've run, that it's my PC causing the issues. The problem is I don't have any more 10GbE endpoints to test with.
 
A further thought occurs: if your switches allow it, have a quick look at the port stats and see if there are any error rates that are a cause for concern - that might be an indicator of a cable run (or a NIC) that isn't performing well.
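If it's any use, on a Linux-based NAS something along these lines will dump the NIC's error and drop counters (the interface name here is a guess, yours may well differ), and most switch UIs have an equivalent per-port counters page:

ethtool -S eth0 | grep -iE "err|drop"

Anything climbing steadily in those counters while a transfer is running would point at a dodgy link.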
 
Pedant alert - usually when we denote aggregated links on network diagrams, we annotate them with a "loop" symbol around the participants, like this...

Link_Aggregation1.JPG

...not that this is going to solve your problem! [/PEDANT] :)
 
Pedant alert - usually when we denote aggregated links on network diagrams, we annotate them with a "loop" symbol around the participants, like this...

View attachment 1440382
...not that this is going to solve your problem! [/PEDANT] :)

That diagram is taken directly from the UniFi console; I just added the PC, Mikrotik switch and the two NAS boxes for clarity. :)
 
@mickevh just for clarity (this is a question, not a statement): aggregated links don't actually double (triple, etc.) the speed of the links combined, but allow more overall bandwidth when multiple processes are using the links. So on a 2 x 1Gb LAG a single process will only use up to 1Gb on a single channel (not 2Gb) of the LAG, and 2 processes will use 2 x 1Gb channels. Is this correct?
 
@mickevh just for clarity (this is a question, not a statement): aggregated links don't actually double (triple, etc.) the speed of the links combined, but allow more overall bandwidth when multiple processes are using the links. So on a 2 x 1Gb LAG a single process will only use up to 1Gb on a single channel (not 2Gb) of the LAG, and 2 processes will use 2 x 1Gb channels. Is this correct?

That's what I always thought as well.
 
I watched that too, but they were saying that about a simple Windows file transfer. None of my tests above use that; that's the point. A multi-threaded robocopy (which is what LTT used) confirmed the speeds iPerf is reporting. Windows file copy is much slower.

Apologies, I couldn't quite remember the context; I just remembered it was a surprising result!!
 
@mickevh just for clarity (this is a question, not a statement): aggregated links don't actually double (triple, etc.) the speed of the links combined, but allow more overall bandwidth when multiple processes are using the links. So on a 2 x 1Gb LAG a single process will only use up to 1Gb on a single channel (not 2Gb) of the LAG, and 2 processes will use 2 x 1Gb channels. Is this correct?

Correct. The analogy I use is that it's "more lanes on the motorway," not "double the speed." Thusly, the system as a whole can "carry more traffic" with "less chance of congestion," rather than any individual car getting there in half the time (double the speed).

It's illustrative to contemplate how the node at the sending end of an AL determines which physical link to send any given packet over. We don't want to encourage any out-of-order packet delivery (and the IEEE standard for LA mandates that it does not happen), so we don't want to require the receiving node to do any extra work to reassemble packets into order. The simplest way to achieve that is to ensure all packets in any given stream head across the same physical link, thereby guaranteeing they egress the AL in order without any additional processing (or buffering) required.

Of course, simply "A/B'ing" the packets of a single stream over all the physical links would be a disaster for OOPD if one remembers that Ethernet packets are not all the same length, and thusly a short one could "overtake" a long one and then need to be reassembled into the correct order, so we don't do that.

There are a few strategies that could achieve link selection, but for a small 2-link AL, one of the simplest is to use the last bit of either the source MAC address, the destination MAC address, or perhaps some kind of hash of both. Point being, those are trivial to compute. Or we could use some form of "round robin" scheduling to try and load balance a bit. Way back when, the HP NIC Teaming driver for their enterprise servers (before the IEEE standard for AL was codified, IIRC) used to give you some settings to "play" with for choosing the scheduling strategy, but I don't recall ever seeing a switch that offered any such controls (not that I've done any exhaustive research!)
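Purely as a toy illustration (real kit does this in silicon and the exact hash varies by vendor and configuration), a 2-link selection based on the last bit of the source and destination MACs might look something like this; the MAC addresses are made up:

# made-up MACs for a single flow
src_mac="aa:bb:cc:dd:ee:01"
dst_mac="aa:bb:cc:dd:ee:06"
# take the last bit of each and XOR them to pick one of the two member links
src_bit=$(( 0x${src_mac##*:} & 1 ))
dst_bit=$(( 0x${dst_mac##*:} & 1 ))
echo "this flow egresses physical link $(( (src_bit ^ dst_bit) + 1 ))"

Because every frame of a given flow hashes to the same member link, a single stream can never go faster than one link, which is exactly the "more lanes, not more speed" behaviour described above.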
 
Thanks for clearing that up @mickevh. It's great having our own networking guru to explain our tinkering in the Home Networking world. Home networks are starting to get more complex, especially with IoT and people wanting to keep their toys and their business separated.
 
Thanks all for your kind words. Though I don't think of myself as a guru - there are guys out there that can run rings around me - I have been around the block a few times and I'm happy to pass on what I've learned along the way.
 
Just had a play around with mine as I had some issues previously. I now have two fast 10Gbit clients: one PC with a single X520 NIC (in an x8 slot) and a MacBook Pro with a 10Gbit Thunderbolt adapter, connected through a Mikrotik CRS328. Neither is running jumbo frames; the Mac is on default settings and the PC should be too.

With a single connection I get 5.5Gbps PC -> Mac and just over 6Gbps Mac -> PC.

1610974392169.png


If I use the -P 20 option then I get 9.2-9.3Gbps either way round, which is around what you would expect.

1610975065680.png


In fact, just two connections are sufficient to get the full bandwidth.

1610975201218.png
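For reference, the iperf3 invocations behind these screenshots are just the standard ones; the IP is only a placeholder for whichever machine is acting as the server:

iperf3 -s
iperf3 -c 192.168.1.20
iperf3 -c 192.168.1.20 -P 20
iperf3 -c 192.168.1.20 -P 2

(-s on the receiving machine, then a single stream, 20 streams and 2 streams from the sending machine.)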


I'm going to try and see what my server running Hyper-V will do, but that was where the bottleneck was and I would struggle to get past 3-5Gbps, even with multiple connections. Just wondering if the server (like your NAS) is just struggling in some areas? Or is it a config issue local to the server/NAS?
 
So, test number 1. I have connected my PC to my Unraid NAS directly using a CAT6A cable, and set a static IP on both NICs that is outside of my normal LAN range.

1610989487504.png


Results are still a little lower than I would expect, but a lot better than I was getting. Here it is with more connections.

1610989729197.png
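For anyone wanting to try the same direct-connection test, the addressing is just a static IP on each end in a subnet your LAN doesn't use; the interface names and addresses below are only examples:

netsh interface ip set address "Ethernet 2" static 10.10.10.1 255.255.255.0
ip addr add 10.10.10.2/24 dev eth1

(The first is from an admin prompt on Windows; the second is the Linux/Unraid shell equivalent, though Unraid's network settings page does the same job.)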
 
Test 2. I've added the Mikrotik CRS305 between my PC and the NAS, using an RJ45 SFP+ module for my PC and a DAC cable for the NAS. Results below. So very similar to the direct connection, which is good. Next I'm going to take the NAS upstairs, connect it to the UniFi Aggregation switch and test again.

1610990391483.png
 
Test 3. Before connecting to the UniFi switch I tried the other end of the cable that runs from my office to the network cabinet. Seems there is a problem.

1610991655747.png
 
Test 4. I've taken the cable out of the patch panel (it's a keystone one) and plugged it directly into the NAS. Back up to 10GbE.

1610992953107.png
 
Test 5. Connecting the cable directly into the UniFi switch then connecting the NAS via the DAC cable. Back down to 1GbE speed.

1610994492080.png


I have noticed that UniFi reports the port as 10000 FDX but the compliance on the cable seems to be 1000BASE-CX, which is odd, but I already know that the cable works at 10GbE.

1610994426534.png
 
Test 6. I've now connected them both back to the Mikrotik switch but with the NAS still in the network cabinet. Back to 10GbE.

1610996302349.png



So I think I have two issues. One is that the patch panel doesn't seem to want to work with 10GbE, which is easy to fix as I only have a single 10GbE PC, so I'll just connect that cable straight into the switch. The second issue seems to be the UniFi USW-Aggregation; it's just not allowing full speed through it.
 
It suddenly occurred to me that I had two NICs connected on both devices, and it turns out the traffic in the 5th test was going via the 1GbE port on my PC. Disabled that and we're off to the races. So it seems the issue all along was the patch panel. Still unsure why that one would cause an issue; it's a "CAT6" one so should be fine. Maybe it's just cheap junk.

1611002933086.png
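If anyone else gets bitten by the same thing, a quick way to spot it is to watch the per-adapter byte counters while a transfer is running and see which NIC is actually doing the work (PowerShell, adapter names will obviously differ):

Get-NetAdapterStatistics | Format-Table Name, ReceivedBytes, SentBytes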
 
Usual causes of patch panel issues are poor termination, or possibly the spring connector being bent or dislodged. Sometimes you can get a flaky patch cable, as I found out, where the plastic end causes a miscontact.
 
Usual causes of patch panel issues are poor termination, or possibly the spring connector being bent or dislodged. Sometimes you can get a flaky patch cable, as I found out, where the plastic end causes a miscontact.

It can't be poor termination, as the patch panel is a keystone one. The same connector is still on the cable that I now have working :)

It could be something in that port I guess, but I'd expect no connection at all rather than a slow one. I'll have to take it out at some point and check it.
 
Not quite sure what you mean by the keystone reference, but it could still be a termination issue.

If you have a bad connection you can get port flapping, where the connection speed changes between the different levels. The MikroTik ones will tell you if this is the issue.
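On the MikroTik, something like the following will show the negotiated rate and whether the log is full of link up/down events; the port name is just an example and the exact syntax may vary a little by RouterOS version:

/interface ethernet monitor sfp-sfpplus1 once
/log print where message~"link"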
 
