Test 1 sends one small datagram per second for 10 minutes. It initially tests simple connectivity, and after several minutes verifies that multicast streams are properly maintained over several IGMP query timeouts. Start the mdump first, then after a second or two start the msend.
Receiving Host:
mdump -omdump1.log 224.9.10.11 12965 10.1.2.3
Sending Host:
msend -1 224.9.10.11 12965 15 10.1.2.4
(Note: host interface addresses 10.1.2.* should be changed to reflect your hosts.) This test will run for 10 minutes and then report the percentage of dropped messages (datagrams). During that time, mdump will display the received messages in hex dump form. At the end of the test run, msend will tell mdump to display its statistics. Use ctrl-C to exit mdump when test 1 finishes.
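For reference, test 1 sends 600 datagrams in total (one per second for 10 minutes), so each 1% of reported loss corresponds to 6 missing datagrams.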
Be sure to run the test a second time, switching the roles of sender and receiver, and remember to use a different multicast address for that second run. Also note that the other tests use the "-q" option on the mdump command; test 1 does not use "-q".
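For example, the role-swapped run of test 1 might look like the following sketch, assuming the same two hosts as above (the multicast group 224.9.10.12 and the log file name mdump1b.log are illustrative placeholders):
Receiving Host (the former sender):
mdump -omdump1b.log 224.9.10.12 12965 10.1.2.4
Sending Host (the former receiver):
msend -1 224.9.10.12 12965 15 10.1.2.3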
Test 2 sends one large datagram per second for 5 seconds. It tests the ability of the network hardware to establish a multicast stream from a fragmented datagram. Start the mdump first, then after a second or two start the msend.
Receiving Host:
mdump -q -omdump2.log 224.10.10.10 14400 10.1.2.3
Sending Host:
msend -2 224.10.10.10 14400 15 10.1.2.4
(Note: host interface addresses 10.1.2.* should be changed to reflect your hosts.) Notice that this and all subsequent mdump commands include the "-q" option to suppress the hex dump of each datagram. In this test it is merely a convenience; in the later tests it is necessary to keep the receiver from falling behind.
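To see why fragmentation matters here, consider a worked example (assuming a standard 1500-byte Ethernet MTU): an 8 KB UDP datagram carries 8192 + 8 = 8200 bytes of IP payload, and each 1500-byte IP packet holds at most 1480 bytes of that payload, so the datagram is sent as six IP fragments. If any one fragment is lost, the receiving host's IP stack discards the entire datagram.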
Be sure to run the test a second time, switching the roles of sender and receiver, and remember to use a different multicast address for that second run.
Test 3 sends 50 bursts of 100 datagrams (8K each). Each burst of 100 is sent at the maximum possible send rate for the machine (usually fully saturating the wire), and the bursts are separated by a tenth of a second. This is a pretty heavy load that tests the ability of the network hardware to establish a wire-speed multicast stream from fragmented datagrams. Start the mdump first, then after a second or two start the msend.
Receiving Host:
mdump -q -omdump3.log 224.10.10.14 14400 10.1.2.3
Sending Host:
msend -3 224.10.10.14 14400 14 10.1.2.4
(Note: host interface addresses 10.1.2.* should be changed to reflect your hosts.) Depending on the speed of the machine, this test should not run much longer than 7 seconds, usually much shorter.
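The timing follows from the test's structure: the tenth-of-a-second pauses between the 50 bursts account for roughly 5 seconds by themselves, and the bursts move 50 bursts x 100 datagrams x 8 KB = 40 MB in total, so the balance of the run time is simply the time needed to send that data at the machine's maximum rate.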
Be sure to run the test a second time, switching the roles of sender and receiver, and remember to use a different multicast address for that second run.
Test 4 sends a single burst of 5000 datagrams (20 bytes each). The burst is sent at the maximum possible send rate for the machine. It may not fully saturate the wire, but it does produce a very high message rate during the burst. This is another heavy load that tests the ability of the network hardware to sustain a high message rate multicast stream. Start the mdump first, then after a second or two start the msend.
Receiving Host:
mdump -q -omdump4.log 224.10.10.18 14400 10.1.2.3
Sending Host:
msend -4 224.10.10.18 14400 15 10.1.2.4
(Note: host interface addresses 10.1.2.* should be changed to reflect your hosts.) Depending on the speed of the sending machine, this test should not run much more than 5 seconds, often much less.
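Note that 5000 datagrams of 20 bytes each amounts to only about 100 KB of payload, so the stress in this test comes from the packet rate rather than the bandwidth: per-packet costs (header processing, interrupts, and forwarding decisions) dominate, which is exactly what a high message rate stream exercises.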
Be sure to run the test a second time, switching the roles of sender and receiver, and remember to use a different multicast address for that second run.
Test 5 sends a single burst of 50,000 datagrams (800 bytes each). The burst is sent at the maximum possible send rate for the machine. This test generates the heaviest load of the 5 tests, and should saturate a 1-gig link. Start the mdump first, then after a second or two start the msend.
Receiving Host:
mdump -q -omdump5.log 224.10.10.18 14400 10.1.2.3
Sending Host:
msend -5 224.10.10.18 14400 15 10.1.2.4
(Note: host interface addresses 10.1.2.* should be changed to reflect your hosts.) Depending on the speed of the sending machine, this test should not run much more than 5 seconds, often much less.
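For scale: 50,000 datagrams x 800 bytes is 40 MB of payload, or about 320 Mbits; with UDP, IP, and Ethernet overhead included, the burst amounts to roughly 350 Mbits on the wire, so a sender that can drive a gigabit link at line rate completes it in roughly a third of a second.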
If this test experiences loss, re-run the msend command with the option "-S65536", which limits the UDP send buffer to 64 KB. If that removes the loss, then your system's default UDP send buffer size is too large; many Linux systems suffer from this when the UDP send buffer is larger than a few hundred KB. We recommend setting the default to either three times your maximum datagram size or 64 KB, whichever is larger. If changing the system default is not desirable, we recommend configuring your multicast applications to override it.
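On Linux, the system default can be inspected and adjusted with sysctl. A minimal sketch, using the 64 KB value recommended above (net.core.wmem_default is the kernel's default send buffer size for all socket types, including UDP):
sysctl net.core.wmem_default
sysctl -w net.core.wmem_default=65536
The first command displays the current default in bytes; the second sets it until the next reboot. To make the change permanent, add the line net.core.wmem_default=65536 to /etc/sysctl.conf.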
It is a good idea for the network hardware administration team to monitor switch CPU usage during this test. We have seen cases where switches that handle multicast in hardware still overload the switch CPU when high-rate multicast is used. For example, one user of Cisco hardware enabled an ACL, with the result that the CPU had to examine each multicast packet. This left the Cisco switch at 90% CPU utilization even though only about half of the gigabit bandwidth was in use. It is always better to discover this kind of CPU loading early, rather than on the "go-live" day.
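As one way to watch for this while the test runs (assuming Cisco IOS; other platforms have their own equivalents), the following command, run on the switch, shows CPU utilization broken down by process:
show processes cpu sorted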
Be sure to run the test a second time, switching the roles of sender and receiver, and remember to use a different multicast address for that second run.
It is beyond the scope of this short document to fully diagnose and describe the treatment of the various multicast networking maladies; network routers and switches are too diverse. However, you can find a wealth of general information in our THPM document. If you suspect that your network infrastructure is not able to handle high-speed multicast traffic, there is a very good chance that it is simply a matter of switch and router configuration. We have found that network administrators, working with the network hardware vendor's support team, are usually successful at enabling the proper hardware multicast routing parameters. It sometimes requires a bit of patience and digging, but the scaling advantages of multicast are well worth the effort.
Copyright 2005 - 2011 Informatica, Inc.