2.5gbit networking card (PC to PC)

sykotik

I could be way off here, but having just seen the new vid from LTT where he talks about 2.5gbit networking, it got me thinking.
I download games on one PC and then sometimes copy the file over to an SSD (which can take some time depending on the size), then take that SSD and copy the file over to my other PC (which again can take a good while).

So - if I was to buy two of these 2.5Gbps network cards (Gigabit PCIe Network Card 1 Port from Dynamode) and put one in each PC, would I be able to connect both PCs using, say, a switch (which I already use) and have them transfer the file(s) a lot quicker than using an SSD?
 
I haven't seen the vid yet, although I did see it in my feed. Tbh, why bother when you could grab a 10Gb Mellanox card and create a peer-to-peer connection via fibre? I have an unRAID server and two workstations in my studio all linked via fibre as well as a regular copper network. I can exchange data over fibre anywhere between 500MB/s and 1000MB/s (drive dependent).
 
The 2.5Gbps here refers to the speed of the PCI Express interface; if you look at the detail it's still a 10/100/1000 network card... bit of false advertising there!

For speeds above 1gbps you've got a couple of options - a two-port card using LACP, or something like this: Gigabyte AQC107 10 Gigabit PCIe Network Card.

In both cases your switch would need to support either LACP (also known as link aggregation) or speeds above 1gbps (2.5/5/10gbps)...
 
You can still do a direct connection between two PCs, so you don't necessarily need to upgrade your switch (assuming you have upgraded your network cards).
 
Upgraded my network cards?
The only "connection" I have is just what's built into the mobos.

It's been a good while since I tried just using the built-in network devices (and I remember it being slow) - could just be me not setting it up properly.
 

What speed is your switch? If it's only 10/100 then you might want to look at a gigabit switch instead (assuming the built-in NICs are 10/100/1000)...
 


Netgear GS108 and GS105 (both are in use).
 

What size files and how long do they take to transfer via SSD? Assuming a 10% overhead, a 10GB file would take about 1m 28s to transfer at 1gbps - other factors (disk subsystem, multiple files, network congestion etc.) can slow that down further.
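For anyone who wants to play with those numbers, here's a rough sketch of the arithmetic (the 10% overhead figure is just an assumed ballpark, and the file size and units are only for illustration):

```python
# Rough transfer-time estimate for a file over a network link.
# Assumes ~10% overhead on the link as a ballpark; real transfers vary with
# disks, file counts, congestion, etc.

def transfer_time_seconds(file_size_gb, link_speed_gbps, overhead=0.10):
    file_size_bits = file_size_gb * 8 * 1000**3              # decimal GB -> bits
    effective_speed_bps = link_speed_gbps * 1000**3 * (1 - overhead)
    return file_size_bits / effective_speed_bps

secs = transfer_time_seconds(10, 1)                           # 10GB over 1gbps
print(f"{int(secs // 60)}m {round(secs % 60)}s")              # ~1m 29s
```

That comes out within a second or so of the figure above.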
 
Upgraded my network cards?
The only "connection" I have is just what's built into the mobos.

It's been a good while since I tried just using the built-in network devices (and I remember it being slow) - could just be me not setting it up properly.

Back in the day, network cards used to be PCI cards that plugged into your motherboard. As everybody needs them these days, mobo manufacturers just build them in directly. You cannot normally upgrade the one on the motherboard, so you end up buying a plug-in network card for a PCI Express slot.

1Gbps network cards have been around a long time. Then they brought out 10Gbps, but these currently cost around 10x as much as the slower ones. Then came the new 2.5 and 5Gbps speeds, which some cards support. The latest 10Gbps cards generally support 1, 2.5, 5 and 10Gbps.

However, as @Streamer73 states, you should already get a reasonable copy speed; it shouldn't be taking that long.

Check you haven't got any of the machines connected via both WiFi and Ethernet, as sometimes they can take the slower route.

The LTT 2.5Gbps cards are around the £70 mark from a quick Google; for around the £100 mark you can get an Asus one or the Gigabyte one mentioned earlier.
 
Just had a little play around with a connection (Ethernet (PC) - switch - Ethernet (PC)) and the speeds aren't that good.
One file at 60GB took about 11 mins at roughly 100mbps (which I guess isn't bad),

whereas another file took a good while - 9.97GB file size, with speeds at only 30mbps-ish.
[attachment: PC to PC.jpg]

I guess it depends on the file, but using an SSD was faster (hence why I have been using it).
Will do a little test over the next few days (at work shortly for the weekend) and post a speed using the SSD.
 

It's not just 1 file though - there are 9,018 there, and multiple files will take longer than a single large file. Also, your speed in the image is 29.3 megabytes/sec, which equates to 234.4 megabits/sec. 100MB/s is 800mbps, which is probably as good as it'll get on 1gbps - was that a single file or multiple files, as above?
 
Just one file - it was an old game (Grid 2).
 
In a lightly loaded SOHO network, if the observed transfer rates are significantly less than the capabilities of the network, it usually indicates some bottleneck in the source or sink devices - e.g. the bus, disk drives, OS, etc.

Ethernet has a very high "protocol efficiency" at about 97%, which means very little of the available bandwidth is used "just making it work" (unlike Wi-Fi, which is of the order of 55-75%, and HomePlugs, which are often cited at 45-55%).

For some very, very rough "rule of thumb" numbers - at its absolute best, using any mechanism that runs over TCP/IP (such as file copying, downloading, etc.), 100mbps (AKA "fast") ethernet tops out at about 11.8MB/s (megabytes per second) and 1000mbps (AKA "gigabit") ethernet tops out at about 118MB/s. Don't panic if you're not actually hitting those numbers, but if you are in the right ballpark, you know you're saturating the ethernet links. If you're getting much less than this, then there's a bottleneck somewhere else.
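As a very rough sanity check on those ceilings: they just come from taking the line rate in bits, knocking off the framing/TCP/IP overhead, and converting to bytes. The ~94-95% payload efficiency used below is an assumed ballpark for illustration, not a measured figure:

```python
# Rough "best case" payload throughput for a TCP/IP transfer over Ethernet.
# The efficiency value is an assumed ballpark for Ethernet + IP + TCP framing
# overhead, not a measured number.

def max_payload_mb_per_s(link_mbps, payload_efficiency=0.945):
    return link_mbps * payload_efficiency / 8    # megabits -> megabytes per second

for link in (100, 1000):
    print(f"{link}mbps ethernet -> ~{max_payload_mb_per_s(link):.1f}MB/s")
# 100mbps  -> ~11.8MB/s
# 1000mbps -> ~118.1MB/s
```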

So to take the example in post 11 - a transfer rate of 30MB/s suggests you are using a gigabit link (as it's way faster than 100mbps ethernet can manage), but nowhere near the 118MB/s gigabit ethernet is capable of.

It's also interesting to observe that it "started out fast then slowed down" - that could be an indication of some caching going on somewhere. For example, the sink device might be accepting data as fast as the network will deliver it and caching it in RAM to write back to disk later. However, typically once such a cache is full, it will "throttle" back to (effectively) the speed of the discs, as the (full) cache essentially becomes "one in, one out", so to speak.

If you're not caning your network links, then obviously a faster NIC won't help - the bottlenecks elsewhere need to be identified and eradicated.

To cite an example of my own - my infrastructure at home is all gigabit and my server is a decent enough "micro-server". However, the write performance I observe is pretty dire at about 20-30MB/s. In my case this is because of the filesystem I'm running on my server (ZFS in FUSE), which is known to have performance problems. The "fix" would be to upgrade my server OS to one that runs ZFS in the kernel. However, for my use case I write bulk data so rarely that it's not worth my while (and the risk) to do an upgrade, so I just live with it.
 
Well, this is a vast improvement!!
[attachment: better.jpg]

Checked what cables I was running, and found out two of them were only Cat5e or Cat5 patch cables,
so having just swapped the cables over to Cat6 patch cables (which I think I bought many moons ago to replace the older cables),

the speeds have vastly improved!!
It will peak at 112MB/s!!
 
But you are only copying a few, albeit larger, files. The previous attempt showed thousands, which is generally slower and often all over the place.

Have you tried putting the old cables back to see if it is repeatable? You also shouldn't rely on copying your own files as a test, as some will be read from RAM etc.

Use a program called iperf, which tests the network without your disks getting in the way.

Cat5e is more than capable of running at 1Gbps - I get 10Gbps over a short run in my house :)
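With iperf3 that's simply "iperf3 -s" on one machine and "iperf3 -c <server-ip>" on the other. If you'd rather see exactly what that kind of test is doing, here's a minimal Python sketch of the same idea - push dummy data over a socket between the two PCs and time it, so the disks are out of the path entirely (the port number and transfer size below are arbitrary placeholders):

```python
# Minimal PC-to-PC throughput test that keeps the disks out of the path.
# Run "python nettest.py server" on one PC and "python nettest.py client <server-ip>"
# on the other. iperf/iperf3 does this far more thoroughly; this is only a sketch.

import socket
import sys
import time

PORT = 5201                  # arbitrary port (happens to match iperf3's default)
TOTAL_BYTES = 1 * 1024**3    # send 1 GiB of dummy data
CHUNK = 64 * 1024            # 64 KiB per send/recv

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        received = 0
        start = time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:          # client closed the connection
                    break
                received += len(data)
        elapsed = time.time() - start
        print(f"Received {received / 1024**2:.0f} MiB in {elapsed:.1f}s "
              f"= {received / elapsed / 1024**2:.1f} MiB/s")

def client(host):
    payload = b"\x00" * CHUNK
    sent = 0
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, PORT))
        start = time.time()
        while sent < TOTAL_BYTES:
            cli.sendall(payload)
            sent += CHUNK
        elapsed = time.time() - start
    print(f"Sent {sent / 1024**2:.0f} MiB in {elapsed:.1f}s "
          f"= {sent / elapsed / 1024**2:.1f} MiB/s")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

Don't read too much into a single run - the sender-side figure only measures how fast data was handed to the OS, so the receiver's number is the one to look at.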
 
I suggest that if swapping cables effected a fix, then it's more likely that (for example) one of the cables was broken or the contacts were dirty, rather than that they "weren't a high enough cat." With ethernet, when cables are knackered, it's extremely rare for them to "work but slowly" - generally they just don't work at all (or sometimes fall down to 100mbps).

With gigabit ethernet, crappy cables can cause issues with the clocks at either end of the link synchronising with each other (the transceivers at each end of a cable lobe have to run in lock-step with each other, and as the link is brought up, the transceivers hold a kind of "election" to determine who is going to be the clock master). If that fails, they can fall down to half-duplex operation, which hits throughput, though I wouldn't have thought it would be as dramatic as you observe. On better kit, one could check the port stats and see what they have come up at (speed, full or half duplex) and the error rates, but cheap SOHO gear tends to lack the functionality to check such things.

Of course, what has been demonstrated is that the source and sink devices are up to the job, though I concur with ChuckMountain that to be a fair test one always needs to use the same files each time. (In methodical testing, it's best to change as few things as possible between each test.) Or use iPerf or NetIO and take the files (and OS, and HDDs, etc.) out of the test path if you just want to test the network throughput.

However, if a cable swap has made the difference, then I guess we're all good.
 
