10GbE networking issues

Puntoboy

Hello all. :)

I've had a 10GbE fibre link to my garage for about a year now and, as far as I know, it's always just worked fine. Until recently I wasn't really using it to its full potential, as my Unraid NAS didn't have a 10GbE NIC and all that was connected through the link was a couple of PoE cameras and an AP.

More recently, I've upgraded my core network and added a 10GbE USW-Aggregation. This has given me another 8 SFP+ ports that I can use for 10Gb connectivity, so I have now added a 10GbE NIC to my PC and to my Unraid NAS. I also have a Synology NAS in the house which currently has four 1GbE NICs and doesn't support 10GbE. I have those configured with LACP, but the uplink between that switch and the aggregation switch is only two 1GbE ports in LACP. The map below shows roughly how this is connected; my PC connects to the USW-Aggregation using an Intel X520 PCIe adapter with an SFP+ (RJ-45) copper module and CAT6A cable (yes, I know it's not CAT anything as it's not certified).

[Image: network.jpg — network map]


The issue I'm having is that the speeds I'm seeing are not as fast as I expected. I've run some iPerf tests from various locations, but here are two tests between my PC and the Unraid NAS. The first result is between Windows on my PC and the Unraid NAS; the second is between a Linux VM running on my PC and the Unraid NAS. I tried in both directions.

[Images: iPerf results — Windows to Unraid, Linux VM to Unraid]


As you can see, Windows is getting around 700Mb/s whereas the Linux VM is getting around 3Gb/s. Whilst Linux is faster than Windows, even that is nowhere near the 10Gb/s I would expect. Am I expecting too much? If I'm not, what can I do to fix this?
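(As a cross-check on the iPerf figures, a minimal single-stream test in plain Python takes iPerf itself, and its Windows build, out of the equation. This is just a sketch: the port number is arbitrary and the addresses are placeholders to substitute.)

```python
# throughput_check.py -- rough single-stream TCP throughput test, independent of iPerf.
# Run "python3 throughput_check.py server" on the NAS side (e.g. in a container if
# Python isn't installed natively), then "python3 throughput_check.py client <nas-ip>"
# from Windows and from the Linux VM. Port and addresses are placeholders.
import socket, sys, time

PORT = 5202          # arbitrary free port (not iPerf's default 5201)
CHUNK = 1 << 20      # 1 MiB per send/recv
DURATION = 10        # seconds to transmit for

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        s.listen(1)
        conn, addr = s.accept()
        total, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        secs = time.time() - start
        print(f"received {total/1e9:.2f} GB in {secs:.1f}s = {total*8/secs/1e9:.2f} Gbit/s")

def client(host):
    payload = b"\0" * CHUNK
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, PORT))
        total, start = 0, time.time()
        while time.time() - start < DURATION:
            s.sendall(payload)
            total += len(payload)
    secs = time.time() - start
    print(f"sent {total/1e9:.2f} GB in {secs:.1f}s = {total*8/secs/1e9:.2f} Gbit/s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

If this sketch lands at roughly the same figures as iPerf on each OS, the gap is in the OS/NIC stack rather than the test tool.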
 
What motherboard is your PC using? What type of PCIe slot is the NIC plugged into? I am guessing it's not in a x16 slot (which it would need for full 10GbE). Also, your Unraid server would need an x8 10GbE NIC in an x8 or x16 PCIe slot, otherwise it will also bottleneck. It may not be your network but the devices connected to it. And a stupid question, but are all of your NICs set to jumbo frames?
Have you disabled Memory Mapped I/O above 4GB on your motherboard?
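If the Unraid box exposes standard Linux sysfs, the negotiated PCIe link can be read directly rather than guessed. A rough sketch, with "eth0" as a placeholder for the 10GbE interface name:

```python
# pcie_link_check.py -- report the negotiated PCIe link for a NIC (Linux only).
# "eth0" is a placeholder; use the name of the 10GbE interface on the Unraid box.
from pathlib import Path

IFACE = "eth0"  # placeholder interface name

# /sys/class/net/<iface>/device is a symlink to the underlying PCI device
dev = Path(f"/sys/class/net/{IFACE}/device")
for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    f = dev / attr
    if f.exists():
        print(f"{attr}: {f.read_text().strip()}")
    else:
        print(f"{attr}: not exposed by this kernel/driver")
```

The X520 is a PCIe 2.0 x8 card, so a single 10GbE port only starts to pinch if the link has trained well below that (e.g. x1, or 2.5 GT/s on a couple of lanes).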
 
On my PC I have it in an x16 PCIe slot; on the Unraid server I'm not sure, but it is likely an x8.

I'm mostly concerned with the difference between Windows and the Linux VM. I understand Linux handles this a bit better than Windows, but I didn't expect it to be so different.

Jumbo frames aren't enabled. I did try them, but I'm pretty sure it caused things to stop working entirely.
 
Is your PC NIC a single or dual port? If it's dual, have you tried swapping it over?
 
What motherboard is your PC using? What type of PCIe slot is the NIC plugged into? I am guessing it's not in a x16 slot (which it would need for full 10GbE). Also, your Unraid server would need an x8 10GbE NIC in an x8 or x16 PCIe slot, otherwise it will also bottleneck. It may not be your network but the devices connected to it. And a stupid question, but are all of your NICs set to jumbo frames?
Have you disabled Memory Mapped I/O above 4GB on your motherboard?

The X520 cards are only x8 cards anyway, and x4 has more than enough bandwidth to cope with 10GbE. The way I have my PC set up at the moment, the X520 is in an x16 slot and it drops both the GPU and the X520 to x8, so I need to fix this :(

The Asus cards are only x4, and I know there have been some reported issues with putting an x8 card in an x4 slot, but it didn't make any difference on mine.

I am also not getting full speeds, but it also depends on the clients. My Mac can get to around 6Gbps both ways to my server, but my older PC didn't get past 2.5Gbps each way. That I put down to it either being rubbish or the firewall on it (don't ask what it was ;)), and that was using iPerf. I also tried with jumbo frames configured everywhere, but no noticeable difference. I know there are a number of tweaks you can make to the card in software, so I wonder if this is why Linux is working better, as per its settings.

I need to do some testing as I'm getting OK-ish speeds, but I haven't had the time. It's on the to-do list :D
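On the "tweaks you can make in software" point, one thing that varies between clients is how large a TCP window the OS will allow, since a single 10GbE stream needs roughly bandwidth × RTT of window. A rough Linux-side sketch, assuming the standard sysctl paths; the RTT value is an assumption to replace with a measured one:

```python
# tcp_window_check.py -- sanity-check Linux TCP autotuning limits against 10GbE needs.
# Paths are standard Linux sysctls; the RTT value is an assumption -- measure yours with ping.
from pathlib import Path

RTT_S = 0.0005      # assumed LAN round-trip time (0.5 ms); replace with a measured value
LINK_BPS = 10e9     # 10 Gbit/s

needed_bytes = LINK_BPS * RTT_S / 8   # bandwidth-delay product
print(f"window needed to fill 10GbE at {RTT_S*1000:.2f} ms RTT: ~{needed_bytes/1024:.0f} KiB")

# tcp_rmem / tcp_wmem are min/default/max triples; the max is what autotuning can grow to
for sysctl in ("/proc/sys/net/ipv4/tcp_rmem",
               "/proc/sys/net/ipv4/tcp_wmem",
               "/proc/sys/net/core/rmem_max",
               "/proc/sys/net/core/wmem_max"):
    p = Path(sysctl)
    if p.exists():
        print(f"{sysctl}: {p.read_text().strip()}")
```

On a LAN the required window is small, so if the reported maxima are comfortably above the bandwidth-delay product, window size probably isn't the culprit and the difference is more likely CPU, driver or offload settings.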
 
My motherboard has two x16 slots, so my GPU is using the other one.
 
Was looking for stuff and came across this:


As I'm running Hyper-V on this server, I wonder if I need to tweak it somewhat.
 
You have pretty much exhausted my knowledge; sorry that I can't help any further. Googling, there is a lot out there. It seems some people have no problems with Windows at all and others just have a nightmare.
 
It is inadvisable to use jumbo frames (JF) unless everything on the subnet can support them. Unlikely as it is, if something sent out a broadcast using a jumbo frame, only the JF-compliant clients would see it; everything else will treat it as corrupt, as they won't have buffers big enough to receive it. Likewise, if a JF station sends a jumbo unicast frame to a non-JF station, and/or there are infrastructure components (i.e. switches, routers, APs) that are not JF compliant, they will discard it.
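A quick way to see which stations and paths actually pass jumbo frames is a don't-fragment ping at jumbo size, which may also show why things stopped working when jumbo frames were tried earlier. A rough sketch driving Linux iputils ping from Python; the addresses are placeholders:

```python
# jumbo_path_check.py -- verify a path really carries jumbo frames end to end (Linux ping).
# Uses don't-fragment ICMP at jumbo size: 8972 bytes of data + 28 bytes of headers
# = a 9000-byte IP packet. Hosts below are placeholders for the NAS / other clients.
import subprocess

HOSTS = ["192.168.1.10", "192.168.1.20"]   # placeholder addresses
SIZES = [1472, 8972]                       # standard 1500 MTU, then 9000 MTU

for host in HOSTS:
    for size in SIZES:
        # -M do: set the DF bit, -s: ICMP payload size, -c 3: three probes
        result = subprocess.run(
            ["ping", "-M", "do", "-s", str(size), "-c", "3", host],
            capture_output=True, text=True)
        status = "OK" if result.returncode == 0 else "FAILED (fragmentation needed or dropped)"
        print(f"{host} payload {size}: {status}")
```

If the 1472-byte probe succeeds but the 8972-byte one fails, something in that path (NIC, switch port or VLAN MTU) isn't passing jumbo frames.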

Note the backplane capacity of your 10GbE switches: in the olden days it was not uncommon for gigabit switches not to have the backplane capacity to switch all ports at full packet rate. Such switches used to cite their backplane capacity in the datasheets. You may be running into such factors with 10GbE switches; for instance, I note "760 Gbps switching capacity" is cited.

Whilst testing with Windows hosts, it might be worth testing with the personal firewall disabled and maybe even the AV turned off (temporarily), and knocking out the "metering", the "QoS Packet Scheduler" and anything else you can think of that might be interfering with traffic flow.
 
The switching capacity of the CRS305 I have in my garage: [MikroTik product page]

USW-Aggregation - 160 Gbps switching capacity

I only have two clients that support 10Gbps currently, so plenty of capacity, it seems.

I've enabled jumbo frames on the switches and set the MTU on the CRS305 to 9014, and now I see this in iPerf, which is better but not quite enough.

[Image: iPerf result after enabling jumbo frames]



Running with AV and FW off makes no discernible difference to the above result.
 
Note the backplane capacity of your 10GbE switches: in the olden days it was not uncommon for gigabit switches not to have the backplane capacity to switch all ports at full packet rate. Such switches used to cite their backplane capacity in the datasheets. You may be running into such factors with 10GbE switches; for instance, I note "760 Gbps switching capacity" is cited.

Them were the days :)

I haven't seen a switch these days that hasn't had the capacity to switch all its ports at full duplex, whether it's 1Gb or 10Gb or a combination.

Whilst testing with Windows hosts, it might be worth testing with the personal firewall disabled and maybe even the AV turned off (temporarily), and knocking out the "metering", the "QoS Packet Scheduler" and anything else you can think of that might be interfering with traffic flow.

Yep, agree that's one of the biggest issues, particularly if the Windows drivers and settings don't match the required settings.
 
The 10 1GbE ports on the UDM-Pro share a 1Gb backplane, hence why I don't use it for clients and only have my ATS and Hue hub plugged into it.
 
Yep, agree. Just wondered if you had too many on it now. Is one client a PC and one a NAS?
 
Yep, one client is my PC (from Windows) and the other is the Unraid NAS.

It seems whatever I've changed has reduced the speed on my Linux VM, though.

[Image: iPerf result from the Linux VM after the changes]
 
