These are my links for January 28th through January 29th:
I’ve been experiencing some slow network connections between a couple of Linux systems (CentOS), Windows systems (Server 2008 R2), and a NAS device (Netgear ReadyNAS), and wanted to confirm that my network connections weren’t the source of the problem. To limit any variability due to different tools, I decided to use iperf, which can run on all of the previously mentioned operating systems.
In addition to iperf, I also used scp to copy a 100MB file from one Linux machine to another just to confirm what iperf was reporting.
Iperf is available in most major distributions’ repos; I was able to install it on my CentOS systems from the EPEL and/or RPMForge repositories.
NOTE: The source can be downloaded from iperf’s SourceForge page here.
For Windows I was able to find a pre-built executable on the ivaturi.org blog. NOTE: I’m also providing the pre-built executable here on my site.
Generating a 100MB Sample File
## generate a 100MB file
% dd if=/dev/zero of=100mb.dat bs=100M count=1
% ls -l 100mb.dat
-rw-rw-r-- 1 saml saml 104857600 Jan 27 21:10 100mb.dat
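The dd command above allocates a single 100MB buffer in memory. On a low-memory box, a sketch using many smaller blocks produces a byte-identical file (same filename assumed):

```shell
# write 100 x 1MB blocks instead of one 100MB block;
# the result is identical: 104857600 bytes of zeros
dd if=/dev/zero of=100mb.dat bs=1M count=100
ls -l 100mb.dat
```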
My testing followed this general format:
- traceroute to server
- scp 100mb file to server
NOTE: I then started iperf on the server, and then on the client. For simplicity I’m only going to show the results after the iperf client has finished sending all its data.
- (on server): iperf -s
- (on client): iperf -c &lt;server&gt;
NOTE: In each test below, the server was the system that received data, and the client was the one sending it. For example, I was logged onto a client system (grinchy), and ran the scp command, copying the 100mb.dat file to the server (skinner).
## on client (grinchy)
% scp 100mb.dat skinner:~
100mb.dat 100% 100MB 2.1MB/s 00:47
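As a quick sanity check on the numbers scp reports, a one-line awk calculation (using the file size and elapsed time from the transfer above) recovers the same rate:

```shell
# 104857600 bytes transferred in 47 seconds, expressed in MB/s (1 MB = 1048576 bytes)
awk 'BEGIN { printf "%.2f MB/s\n", 104857600 / 47 / 1048576 }'
# prints "2.13 MB/s", which scp rounds to the 2.1MB/s shown above
```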
TEST #1: Wireless (Netgear WGT624 108Mbps G)
…. Continue reading → How to Analyze Network Performance using iperf on Fedora, CentOS & Windows »»
These are my links for January 26th through January 27th:
These are my links for January 26th from 00:07 to 02:00:
Recently I came across this really cool UNIX tool called dupx. I was looking for a way to connect to a program’s STDOUT and STDERR after I had already started it. This was a long-running job (think 30+ hours), and I didn’t want to have to stop and restart it. As is often the case with UNIX/Linux, there’s an app for that.
Dupx isn’t an actual program per se; it’s a shell script that eases the task of using the real workhorse, GDB, which allows you to connect to an already running program. GDB is the command-line GNU Debugger for C & C++ applications, but it can do a lot more. In our case, GDB is being used by dupx to attach to our program/script’s process id (PID) and then manipulate the already running program’s environment, repointing STDOUT, STDERR, and even STDIN to new locations.
Here’s an example where I’ve started up a program without redirecting its output (STDOUT) anywhere specific:
# ex. 1: sleep for 10 seconds, then echo a msg.
# get the current time
% date
Wed Jan 25 21:25:07 EST 2012
# run a job in the background
% bash -c 'sleep 10 && echo "rise and shine"' &
[1] 14992
# NOTE: pid 14992; after ~10 secs, output from job
rise and shine
# wait 10 secs, hit return a couple of times
[1]+  Done       bash -c 'sleep 10 && echo "rise and shine"'
# note that program ran for > 10 secs.
% date
Wed Jan 25 21:25:29 EST 2012
Here’s an example where I’ve started up a program without redirecting its output, and then ran dupx to redirect this program’s STDOUT to my shell’s STDOUT:
…. Continue reading → [one-liner]: Redirecting an Already Running Program’s STDOUT to a File using dupx »»
These are my links for January 25th from 09:48 to 17:23: